The synthesis of carbon nanotubes (CNTs) by chemical vapour deposition (CVD) requires a hydrocarbon as the carbon precursor. Among the commonly used hydrocarbons are methane and acetylene, both light gas-phase substances. Other carbon-rich sources, such as carbon monoxide and coal, have also been reported. More recently, research has also been conducted into utilising heavier hydrocarbons and petrochemical products, such as kerosene and diesel oil, for the production of CNTs. This article therefore reviews the different kinds of hydrocarbon sources for CNT production by CVD. The method is favoured because it allows the decomposition of the carbon-rich source with the aid of a catalyst at temperatures in the range of 600-1200°C. This synthesis technique is advantageous because high-yield, high-quality CNTs can be produced by a relatively low-cost process. Other processes for CNT production, such as arc discharge and laser ablation, may also produce high-quality CNTs, but they are disadvantageous as large-scale synthesis routes.
Hydrocarbon source, carbon nanotube, catalyst, chemical vapour deposition
Parts of aircraft and of gas turbines used for power production are subjected to severe erosion damage, since aircraft frequently operate in sandy environments and low-cost fuels such as poor-quality coal used in gas turbines produce suspended hard particles in the exhaust. In the past, researchers have worked on minimising erosion by applying protective coatings. Development of new coatings is necessary to further improve erosion resistance at the high operating temperatures of gas turbines, aero engines and other components. In the present work, the elevated-temperature erosion behaviour of CoCrAlY/Al2O3/YSZ coatings synthesised by plasma spraying on two different base metals, namely Hastelloy X (Superni 76) and AISI 321 (MDN 321), was investigated. The coated samples were subjected to erosion testing at 600°C at impact angles of 30° and 90° under steady-state conditions. Alumina powder of irregular angular shape and 50 µm particle size was used as the erodent. The morphology of and phases formed on the eroded surfaces were characterised using SEM and X-ray diffraction to determine the erosion mechanism. The erosion rate was determined by the weight-loss method, and the CoCrAlY/Al2O3/YSZ coating showed up to about 25% lower erosion rate than the substrate alloy. The coating gave almost the same erosion resistance on both MDN 321 and Superni 76, which shows that the erosion behaviour of the coating is not influenced by the substrate unless the spray parameters or substrate roughness are changed.
Ceramics, erosion mechanism, high temperature erosion, plasma spray, substrate
The objective of this study is to investigate the effects of infiltration wells on flood peak reduction and flood frequency. The analysis was carried out using a stochastic rainfall model incorporating within-storm rainfall distributions of low, medium and high variability. This study was motivated by flooding in a catchment in Indonesia which is potentially affected by land use changes. The parameterisation of the climate and landscape was derived from a specific catchment in Indonesia. An analysis of land use changes underlines the importance of green land in reducing flood peaks. With city development, however, land is converted for settlements, industries and trade areas, which will increase flood peaks significantly. An analysis of the use of infiltration wells shows flood peak reductions of up to 50% compared to the case without wells. The results also demonstrate that the within-storm rainfall distribution affects flood peaks: the effects of infiltration wells in reducing flood peaks are more observable when low within-storm variability is incorporated.
Infiltration well, within-storm rainfall distribution, land use, flood peak
Medical diagnosis is the process of determining which disease or medical condition explains a person's observable signs and symptoms. Diagnosis of most diseases is very expensive, as many tests are required for prediction. This paper introduces an improved hybrid approach for training the adaptive network-based fuzzy inference system (ANFIS). It combines least-squares estimation with the Levenberg-Marquardt algorithm, using analytic derivation for computation of the Jacobian matrix, together with a code optimisation technique that indexes the membership functions. The goal is to investigate how certain diseases are related to patient characteristics and measurements, such as abnormalities, or to a decision about the presence or absence of a disease. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system that classifies and predicts the patient's condition using an ANFIS pre-processed by grid partitioning. The proposed hybridised intelligent technique was tested with the Statlog heart disease and hepatitis disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The robustness of the performance was examined in terms of total accuracy, sensitivity and specificity. The proposed method was found to achieve superior performance compared to related existing methods.
Adaptive neuro fuzzy inference system, classification, Levenberg-Marquardt algorithm, diagnosis of medical diseases
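The hybrid training idea above, a least-squares step combined with Levenberg-Marquardt (LM) updates, can be illustrated on a small curve-fitting problem. The sketch below is not the paper's ANFIS implementation; it only demonstrates the LM update (JᵀJ + λ·diag(JᵀJ))δ = Jᵀr with adaptive damping, assuming a simple exponential model chosen for illustration.

```python
import math

def model(a, b, x):
    return a * math.exp(b * x)

def lm_fit(xs, ys, a, b, n_iter=200, lam=1e-3):
    """Levenberg-Marquardt fit of y = a*exp(b*x); illustrative two-parameter sketch."""
    for _ in range(n_iter):
        r = [y - model(a, b, x) for x, y in zip(xs, ys)]
        # Jacobian rows: (df/da, df/db), derived analytically.
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        # Normal equations with Marquardt damping applied to the diagonal.
        A11 = sum(j0 * j0 for j0, _ in J) * (1.0 + lam)
        A22 = sum(j1 * j1 for _, j1 in J) * (1.0 + lam)
        A12 = sum(j0 * j1 for j0, j1 in J)
        g1 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        g2 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = A11 * A22 - A12 * A12
        da = (A22 * g1 - A12 * g2) / det
        db = (A11 * g2 - A12 * g1) / det
        old_sse = sum(ri * ri for ri in r)
        new_sse = sum((y - model(a + da, b + db, x)) ** 2 for x, y in zip(xs, ys))
        if new_sse < old_sse:            # accept the step, relax the damping
            a, b, lam = a + da, b + db, lam * 0.5
        else:                            # reject the step, increase the damping
            lam *= 2.0
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # noiseless data with a = 2, b = 0.5
a_hat, b_hat = lm_fit(xs, ys, a=1.0, b=0.1)
```

The accept/reject rule makes LM interpolate between Gauss-Newton (small λ) and damped gradient descent (large λ), which is what makes it attractive for training the consequent and premise parameters of a fuzzy system.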
Despite wide application in industry, phenol pollution leads to many health effects, and one of the technologies used to clean up phenol pollution is phytoremediation. The aim of this research was to assess the remediation ability of Ipomoea aquatica Forssk., which is easy to handle and has a fast growth rate. Plantlets were grown in water spiked with 0.05, 0.10, 0.20, 0.30 and 0.40 g/L phenol, followed by daily observation of the plantlets' morphology and tracking of the phenol concentration in the water and plantlet extracts via the 4-aminoantipyrine (4-AAP) assay. The roots of plantlets in 0.10 g/L phenol (57.42 ± 1.41 mm) were significantly longer (p < 0.05) than those of the control plantlets (43.57 ± 3.87 mm), in contrast to the other phenol concentrations, which produced stunted root growth. I. aquatica Forssk. was able to survive in 0.30 g/L phenol despite exhibiting yellowing of the leaves and increased susceptibility to scarring on the stems. The plantlets were able to completely remove the phenol from water spiked at 0.05 g/L after 12 days of growth. However, the highest average rate of phenol removal was 0.021 g/L/day, from water spiked with 0.30 g/L phenol. Phenol analysis of the plantlet extracts revealed that I. aquatica Forssk. had degraded the absorbed phenol. This observation is of significant interest as it highlights the potential of I. aquatica Forssk. as a phytoremediator for cleaning up phenol-contaminated water.
Swarm intelligence is a research area that models populations of swarms able to self-organise effectively. Honey bees gathering around their hive with a distinctive behaviour are one example of swarm intelligence. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic introduced by Karaboga to optimise numerical problems. 2SAT can be treated as a constrained optimisation problem that represents a problem by using clauses containing two literals each, and many current researchers represent their problems in 2SAT form. Meanwhile, the Hopfield neural network incorporating the ABC algorithm has been utilised to perform randomised 2SAT. Hence, the aim of this study is to investigate the quality of the solutions produced by HNN2SAT-ABC and compare it with the traditional HNN2SAT-ES. Both algorithms were implemented in Microsoft Visual Studio 2013 C++ Express. The performance of ABC and ES in performing 2SAT is compared in detail in terms of global minima ratio, Hamming distance, CPU time and fitness landscape. The results obtained from the computer simulation depict the beneficial features of ABC compared to ES. Moreover, the findings have significant implications for the choice of an alternative method to perform 2SAT.
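As context for the fitness that both the ABC and ES variants optimise, the 2SAT objective is easy to state: an assignment's fitness is the number of satisfied two-literal clauses, and a global minimum of the Hopfield energy corresponds to all clauses being satisfied. The sketch below shows only this fitness computation, with a plain random search standing in for a bee's scout phase; the clause encoding and search loop are illustrative choices, not the paper's HNN2SAT-ABC.

```python
import random

def satisfied(clauses, assign):
    """Count satisfied 2SAT clauses. Literal +i (-i) means variable i is True (False)."""
    def lit(l):
        return assign[abs(l)] if l > 0 else not assign[abs(l)]
    return sum(1 for l1, l2 in clauses if lit(l1) or lit(l2))

def random_search(clauses, n_vars, trials=1000, seed=0):
    """Illustrative stand-in for a scout bee: sample random candidate assignments."""
    rng = random.Random(seed)
    best, best_fit = None, -1
    for _ in range(trials):
        cand = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        f = satisfied(clauses, cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

# Toy formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [(1, 2), (-1, 3), (-2, -3)]
best, fit = random_search(clauses, n_vars=3)
```

In the full algorithm, employed and onlooker bees would refine promising assignments rather than sampling blindly, but the fitness they share is the clause count above.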
Information on the causes of death obtained from death certificates in Thailand is incomplete and inaccurate. Therefore, mortality statistics from death registrations (DR) remain unreliable. Accurate mortality statistics are essential for national policies on intervention, care and resource allocation. Verbal autopsy (VA) is a more reliable source for causes of death than the DR. In this study, the classification of lung cancer deaths in Thailand from 1996 to 2009 was investigated based on a logistic regression model of lung cancer deaths with demographic and medical factors from the 2005 VA data. The estimated proportions of lung cancer deaths from the model were applied to the DR data. The goodness of fit of the model was assessed using the ROC curve. The resulting estimates of lung cancer deaths were higher than those reported, with inflation factors of 1.54 for males and 1.44 for females. Meanwhile, misclassified cases were reported mainly as other cancer types. There was no evidence of regional variation for lung cancer. The method enables health professionals to estimate specific causes of death in countries where the quality of cause-of-death information in the DR database is low and reliable data, such as VA data, are available. The findings provide useful information on death statistics for policy interventions related to lung cancer prevention and treatment.
Adjusted percentage, lung cancer deaths, logistic regression model, ROC
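The ROC-based goodness-of-fit assessment mentioned above reduces to the area under the curve (AUC); by the Mann-Whitney identity, the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch with made-up scores and labels (not the VA study's data):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg), ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical fitted probabilities of lung-cancer death vs. the actual outcome.
scores = [0.92, 0.80, 0.55, 0.45, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]
auc = roc_auc(scores, labels)   # 8 of the 9 positive-negative pairs are correctly ordered
```

An AUC of 0.5 means the model scores are no better than chance at ranking true lung cancer deaths above other deaths, while 1.0 means perfect separation.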
Understanding rainfall trends can be a first step in the planning and management of water resources, especially at the basin scale. In this study, standard tests are used to examine rainfall trends based on monthly, seasonal and mean annual series for the Niger-South Basin, Nigeria, between 1948 and 2008. The rainfall variability index showed that the 2000s were the driest decade (-2.1) and the 1950s the wettest (+0.8), with the 1980s being the driest decade of the second half of the last century and 1983 the driest year of the whole series. Over the entire basin, rainfall variability was generally low, but higher intra-monthly than inter-annually. Annual rainfall was dominated by August, which contributed about 15%, while December contributed the least (0.7%). On a seasonal scale, July-August-September (JAS) contributed over 40% of the annual rainfall, while rainfall was lowest during December-January-February (DJF) (4.5%). The entire basin displayed negative trends, but only 15% indicated significant changes (α < 0.1), with magnitudes of change between -3.75 and -0.25 mm/yr. Similarly, only JAS exhibited an (insignificant) upward trend, while the other seasons showed negative trends. About eight months of the year showed decreasing trends, but only the January trend was significant. A downward annual trend was generally observed in the series: the trend during 1948-1977 was negative, but it was positive for the 1978-2008 period. Hence, water resources management planning may require the construction of water storage facilities to reduce summer flooding and prevent possible future water scarcity in the basin.
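The abstract refers only to "standard tests"; the Mann-Kendall test is the usual non-parametric choice for rainfall trend series, so it is assumed here purely for illustration. A minimal sketch without the tie correction:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test, sketch without tie correction: returns (S, Z).
    |Z| > 1.645 indicates a significant two-sided trend at alpha = 0.1."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A monotonically increasing toy series attains the maximum S = n(n-1)/2.
s, z = mann_kendall(list(range(10)))   # S = 45
```

The sign of S gives the trend direction, and the magnitude of a fitted slope (e.g. Sen's slope, in mm/yr) would quantify the change rate reported in the abstract.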
The nozzle of the abrasive water jet (AWJ) machine is a critical component with a direct influence on the jet force developed. Commercially available nozzles have a conical section followed by a focus section. The critical section, where the cross-section of the nozzle changes from conical to straight, suffers severe wall shear stress leading to flow loss. Considering this, a computational analysis was carried out of the jet flow through AWJ nozzles with different geometries. The geometric variation of the nozzle profiles shows that reducing the radius of curvature of the nozzle geometry (to a radius of 20 mm) produced higher jet velocity and force, as well as a lower pressure drop, compared to the other geometric dimensions.
Image forgery is a common problem which has a negative impact on society. In earlier times it did not affect the general public, because sophisticated image processing software and editing tools were not easily available; the rapid growth of image processing software has since made the task quite easy. If a forgery is done with care, it is very difficult for humans to recognise visually whether an image is original or forged. Therefore, verifying the authenticity of an image is a necessity of today's digital era. Copy-move image forgery is the most common type of image forgery, in which an area or object is copied and pasted elsewhere within the same image in order to hide some important feature of the image. In this paper, we propose a copy-move image forgery detection technique based on image projection profiling. First, we convert the input image into a binary image. The horizontal and vertical projection profiles, which are used in estimating the rectangular regions of copy-move forgery, are then calculated. The experimental results show that the proposed approach detects copy-move regions successfully and offers a significant improvement in computational time over other reported algorithms. The performance of the proposed approach is demonstrated on various forged images.
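The projection-profile step described above can be sketched directly: the horizontal profile sums each row of the binary image, the vertical profile sums each column, and the nonzero spans of the two profiles bound a rectangular region of interest. The surrounding pipeline (binarisation, matching the duplicated regions) follows the paper; the toy mask below is only illustrative.

```python
def projection_profiles(binary):
    """Row sums (horizontal profile) and column sums (vertical profile) of a 0/1 image."""
    horizontal = [sum(row) for row in binary]
    vertical = [sum(col) for col in zip(*binary)]
    return horizontal, vertical

def nonzero_span(profile):
    """First and last index with nonzero mass, i.e. one side of the bounding rectangle."""
    idx = [i for i, v in enumerate(profile) if v > 0]
    return (idx[0], idx[-1]) if idx else None

# Toy 5x6 binary mask with a suspicious 2x2 region at rows 1-2, columns 3-4.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
h, v = projection_profiles(mask)
rows = nonzero_span(h)   # (1, 2)
cols = nonzero_span(v)   # (3, 4)
```

Because each profile is a one-dimensional array, locating the rectangle costs only O(width + height) once the mask is built, which is where the computational advantage over block-matching methods comes from.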
Soluble solids content (SSC) is one of the important traits that indicate the ripeness of banana fruits. Determination of SSC for banana often requires destructive laboratory analysis of the fruit. An impedance measurement technique was investigated as a non-destructive approach to SSC determination of bananas. A pair of electrocardiogram (ECG) electrodes connected to an impedance analyser board was used to measure the impedance of bananas over the frequency range of 19.5 to 20.5 kHz. The SSC measurement was conducted using a pocket refractometer, and the data were analysed to correlate SSC with impedance values. It was found that the mean impedance, Z, decreased from 10.01 to 99.93 kΩ at a frequency of 20 kHz, while the mean SSC increased from 0.58 to 4.93 % Brix from day 1 to day 8. The best correlation between impedance and SSC was found at 20 kHz, with a coefficient of determination, R², of 0.87. This result indicates the potential of impedance measurement for predicting the SSC of banana fruits.
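The correlation reported above (R² of 0.87 at 20 kHz) comes from a simple least-squares fit; the coefficient of determination is R² = 1 − SS_res/SS_tot. A self-contained sketch with made-up impedance and SSC values (not the paper's measurements):

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a + b*x and its coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical impedance (kOhm) vs. SSC (% Brix) pairs over successive ripening days.
impedance = [108.0, 95.0, 80.0, 61.0, 40.0, 22.0]
ssc = [0.6, 1.2, 2.1, 3.0, 4.1, 4.9]
a, b, r2 = linear_fit_r2(impedance, ssc)   # b < 0: SSC rises as impedance falls
```

A negative slope is expected here: as the fruit ripens, cell membranes break down, impedance drops, and sugar content rises.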
Samples of 0.9Pb(Fe1/2Nb1/2)O3-0.1PbTiO3 (0.9PFN-0.1PT) were mixed with ZnO at 0, 1, 2 and 3 wt.% and synthesised by the mixed-oxide route using a two-step sintering method. Phase transitions of the samples were analysed using an X-ray diffractometer (XRD) and Fourier transform infrared spectroscopy (FTIR). The dielectric properties were determined with an LCR meter at 1 kHz, 10 kHz and 100 kHz. The results showed that the morphotropic phase boundary of 0.9PFN-0.1PT shifted towards the tetragonal phase and the pyrochlore phase was suppressed when ZnO was added. In addition, the FTIR spectra showed zinc-oxygen bond vibration peaks in the frequency range of 3,452 cm-1 to 3,792 cm-1 at a ZnO doping level of 3 wt.%. The samples exhibited the maximum dielectric constant at a temperature of 144°C. The dispersion of the dielectric constant decreased with increasing ZnO content. The relaxor ferroelectric behaviour of the 0.9PFN-0.1PT ceramic shifted towards normal ferroelectric behaviour with increasing ZnO content.
This paper introduces new forms of bivariate generalized Poisson (BGP) and bivariate negative binomial (BNB) regression models which can be fitted to bivariate and correlated count data with covariates. The BGP and BNB regression models can be fitted to bivariate count data with positive, zero or negative correlations. Applications of new BGP and BNB regression models are illustrated on Australian health survey data.
The transformation method (TM) of fuzzy arithmetic is aimed at the simulation and analysis of systems with uncertain model parameters. The aim of this paper is to apply fuzzy arithmetic based on the TM to a state-space model of a steam turbine system. The model is then used to identify the degree of influence of each parameter on the system. Simulation and analysis of the system are presented in this paper.
Fuzzy arithmetic, uncertain model parameter, steam turbine system
Fuzzy sets combined with similarity measure approaches are known to be effective in handling imprecise and subjective information in decision-making problems, and many methods have been introduced based on these two concepts. However, most methods do not take into account the reliability of the imprecise information in the evaluation process. In 2010, Zadeh coined the idea of the Z-number, which can represent the reliability, or level of confidence, of humanly expressed information. Since then, some decision-making methods have included this concept. In this paper, we present a new fuzzy decision-making procedure that integrates the Jaccard similarity measure with Z-numbers to solve a multi-criteria decision-making problem. A conversion method from Z-number-based linguistic values to trapezoidal fuzzy numbers is used, and the Jaccard similarity measure of the expected intervals of the trapezoidal fuzzy numbers is applied to obtain the final decision. The feasibility of the methodology is demonstrated by investigating the preference factors that could influence customers to buy their preferred choice of car. The proposed methodology is applicable to decision making in fuzzy environments to achieve a reliable and optimal decision.
Decision making, Jaccard similarity measure, Z-number, multi-criteria group decision making, expected interval of fuzzy numbers
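The core computation, comparing trapezoidal fuzzy numbers by the Jaccard similarity of their expected intervals, can be sketched. Common definitions are assumed here: the expected interval of a trapezoid (a, b, c, d) is EI = [(a+b)/2, (c+d)/2], and the Jaccard measure of two intervals is the length of their intersection over the length of their union; the paper's exact formulation may differ.

```python
def expected_interval(a, b, c, d):
    """Expected interval of a trapezoidal fuzzy number with a <= b <= c <= d."""
    return ((a + b) / 2.0, (c + d) / 2.0)

def jaccard(i1, i2):
    """Jaccard similarity of two closed intervals: |intersection| / |union|."""
    inter = max(0.0, min(i1[1], i2[1]) - max(i1[0], i2[0]))
    union = max(i1[1], i2[1]) - min(i1[0], i2[0])
    return inter / union if union > 0 else 1.0

# Two hypothetical linguistic ratings converted to trapezoidal fuzzy numbers.
ei1 = expected_interval(0.2, 0.4, 0.6, 0.8)   # EI = (0.3, 0.7)
ei2 = expected_interval(0.4, 0.6, 0.8, 1.0)   # EI = (0.5, 0.9)
sim = jaccard(ei1, ei2)                        # 0.2 / 0.6 = 1/3
```

In a full multi-criteria procedure, such pairwise similarities against an ideal alternative would be weighted per criterion and aggregated to rank the alternatives.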
A structural equation model (SEM) is often used to test whether a hypothesised theoretical model agrees with data by examining the model fit. This study investigates the effect of sample size and of the distribution of the data (normal and non-normal) on goodness-of-fit (GoF) measures in structural equation models. Simulation results confirm that the GoF measures are affected by sample size, whereas they are quite robust when the data are non-normal. Absolute measures (GFI, AGFI, RMSEA) are more affected by sample size, while incremental fit measures such as TLI and CFI are less affected by sample size and non-normality.
This paper offers a technique to construct a prediction interval for the future value of the last variable in a vector r of m variables when the number of observed values of r is small. Denoting r(t) as the time-t value of r, we model the time-(t+1) value of the m-th variable as dependent on the present and l−1 previous values r(t), r(t−1), …, r(t−l+1) via a conditional distribution derived from an (ml+1)-dimensional power-normal distribution. The 100(α/2)% and 100(1−α/2)% points of the conditional distribution may then be used to form a prediction interval for the future value of the m-th variable. A method is introduced to estimate the above (ml+1)-dimensional power-normal distribution such that the coverage probability of the resulting prediction interval is nearer to the target value 1−α.
Multivariate power-normal distribution, prediction interval, coverage probability
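The construction above specialises nicely in the Gaussian case (a power-normal distribution with the power parameter equal to 1): for a bivariate normal pair, the conditional distribution of the second variable given the first is again normal, and its α/2 and 1−α/2 quantiles give the prediction interval. The sketch below shows only this simplified two-variable special case, not the paper's (ml+1)-dimensional estimator.

```python
import math

def conditional_normal_pi(mu1, mu2, sd1, sd2, rho, x_obs, z=1.96):
    """95% prediction interval for Y given X = x_obs under a bivariate normal model."""
    cond_mean = mu2 + rho * (sd2 / sd1) * (x_obs - mu1)   # conditional mean of Y | X
    cond_sd = sd2 * math.sqrt(1.0 - rho ** 2)              # conditional std. deviation
    return cond_mean - z * cond_sd, cond_mean + z * cond_sd

lo, hi = conditional_normal_pi(mu1=0.0, mu2=0.0, sd1=1.0, sd2=1.0, rho=0.8, x_obs=1.0)
# Conditional mean 0.8, conditional sd 0.6 -> interval (0.8 - 1.176, 0.8 + 1.176)
```

The paper's contribution is to estimate the underlying distribution so that the realised coverage of such an interval stays close to the nominal 1−α even with few observations, where a naive plug-in estimate tends to under-cover.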
Portfolio optimisation is one of the most crucial issues in investment decision-making and has received considerable attention from researchers and practitioners. Traditionally, portfolio optimisation models are formulated on the assumption that investors have complete information on the distribution of random returns. In real life, however, this is not possible, since decisions have to be made under uncertainty. This paper deals with a fuzzy portfolio optimisation problem in which the returns and turnover rates of securities are represented by fuzzy variables. A goal programming model is proposed to optimise three objectives: maximisation of the portfolio return, maximisation of liquidity and minimisation of the portfolio risk. Cardinality constraints and floor and ceiling constraints are also taken into consideration. Finally, a numerical experiment using real data is conducted to demonstrate the applicability of the model.
Parameter estimation in the generalized autoregressive conditional heteroscedastic (GARCH) model has received much attention in the literature. The commonly used quasi maximum likelihood estimator (QMLE) may not be suitable if the model is misspecified. Alternatively, the variance targeting estimator (VTE) can be considered, as it seems to be a better fit for misspecified initial parameters. This paper extends the application to see how both the QMLE and VTE perform under error distribution misspecification. Data are simulated under two conditions: a true normal error distribution, and a true Student-t error distribution with 3 degrees of freedom. The error distribution assumptions selected for this study are the normal distribution, Student-t distribution, skewed normal distribution and skewed Student-t distribution. In addition, this study also includes the effect of initial parameter specification. The analyses are divided into two case designs: Case 1, with ω_0 = 0.1, α_0 = 0.05, β_0 = 0.85, represents well-specified initial parameters, while Case 2, with ω_0 = 1, α_0 = 0, β_0 = 0, represents misspecified initial parameters. The results show that the performance of both the QMLE and VTE for misspecified initial parameters may not improve under well-specified error distribution assumptions. Nevertheless, the VTE shows favourable performance compared to the QMLE when the assumed error distribution differs from the true underlying error distribution.
GARCH, variance targeting, parameter estimation, error distribution
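The variance-targeting idea compared above can be made concrete: in a GARCH(1,1) model h_t = ω + α·r²_{t−1} + β·h_{t−1}, the VTE fixes ω at σ̂²(1 − α − β), where σ̂² is the sample variance of the returns, so only α and β remain to be estimated by likelihood maximisation. A minimal sketch of the targeting step and the variance recursion (generic formulation with made-up returns, not the paper's simulation code):

```python
def vt_omega(returns, alpha, beta):
    """Variance-targeting omega: sample variance times (1 - alpha - beta)."""
    n = len(returns)
    mean = sum(returns) / n
    sample_var = sum((r - mean) ** 2 for r in returns) / n
    return sample_var * (1.0 - alpha - beta)

def garch_variances(returns, omega, alpha, beta):
    """Conditional variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    initialised at the unconditional variance omega / (1 - alpha - beta)."""
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

returns = [0.5, -1.2, 0.3, 0.8, -0.4, 1.1, -0.7, 0.2]
omega = vt_omega(returns, alpha=0.05, beta=0.85)
h = garch_variances(returns, omega, alpha=0.05, beta=0.85)
```

By construction, the targeted model's unconditional variance matches the sample variance, which anchors ω even when the starting values of α and β are badly misspecified — the property the comparison above exploits.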
Data mining processes such as clustering, classification, regression and outlier detection are built on the similarity between two objects, and mining categorical data is found to be the most challenging. Earlier similarity measures were context-free; in recent years, researchers have developed context-sensitive similarity measures based on the relationships between objects. This paper provides an in-depth review of context-based similarity measures. Algorithms for four context-based similarity measures, namely the association-based similarity measure, DILCA, CBDL and the hybrid context-based similarity measure, are described. The advantages and limitations of each context-based similarity measure are identified and explained. Context-based similarity measures are highly recommended for data mining tasks on categorical data. The findings of this paper will help data miners choose appropriate similarity measures to achieve more accurate classification or clustering results.
Categorical data, context-based, data mining, similarity measure
The study of stock market volatility has been the focus of market participants primarily because most of the applications in financial economics are concerned with volatility. The economic structure in Malaysia is divided into three sectors: primary, secondary and tertiary. As the stability of the stock market is important for businesses, this paper carefully reviews the concept of volatility and analyses how different business sectors in Malaysia are affected by stock market volatility.
Historical volatility, stock market volatility, business sector, Malaysia
This paper presents a mathematical approach to solving railway rescheduling problems. The approach assumes that trains are able to resume their journeys after a given time frame of disruption, whereby the train that experiences the disruption and the trains affected by the incident are rescheduled. The approach employs a mathematical model to prioritise certain types of train according to the railway operator's requirements. A pre-emptive goal programming model was adapted to find an optimal solution that satisfies the operational constraints and the company's stated goals. The model first minimises the total service delay of all trains while adhering to the minimum headway requirement and track capacity. Subsequently, it maximises train service reliability by considering only trains with a delay time window of five minutes or less. The model is implemented in MATLAB R2014a, which automatically generates the optimal solution of the problem from the input matrix of constraints. An experiment with three incident scenarios on a double-track line of a local railway network was conducted to evaluate the performance of the proposed model. A new provisional timetable was produced in a short computing time, and the model was able to prioritise the desired train schedules.
Mathematical optimisation model, mixed integer programming, service delays, railway rescheduling