This paper seeks to clarify Building Information Modelling (BIM) and its implementation in Malaysia. Most developed countries that have implemented BIM in the construction industry have found it effective. This paper reviews existing literature on the implementation of BIM and examines the implementation strategies that have been developed. The review highlights numerous advantages of BIM in construction, including, among others, reductions in cost, time, carbon burden and capital cost. BIM can also increase broader efficiencies and improve coordination and communication among the parties involved. However, implementing BIM is complicated and requires effort from both the government and the private sector. While the implementation of BIM may reduce costs in developed countries, it may not do so in developing countries; in Malaysia, for instance, cost acts as an initial barrier. Other obstacles to implementing BIM in Malaysia include application system requirements and a lack of knowledge and readiness to change. To facilitate its implementation in the construction industry, the Malaysian government needs to hold seminars to promote a better understanding of BIM. It may also introduce a properly structured BIM course by preparing a standard code of practice and guidelines for BIM in the education sector.
Building Information Modelling, cost reduction, time reduction, construction industry
This study sought to examine the reliability and validity of height measurements using a portable stadiometer as compared to a mechanical scale. Measurements were taken from 142 adults aged 22 to 57 during data collection in November 2014. There was a high degree of reliability in the inter-examiner, intra-examiner and inter-instrument aspects with regard to the mean difference, the intraclass correlation coefficient (ICC) and the Bland-Altman plot. For the inter-examiner aspect, the height measurement taken by the first examiner was 0.01 cm higher than that taken by the second examiner, with an ICC of 0.999. For the intra-examiner aspect, the difference was 0.1 cm, the first measurement being higher than the second; the ICC was also 0.999. For the inter-instrument aspect, the measurement taken by the stadiometer was 0.61 cm higher than that taken by the mechanical scale, and the ICC was 0.997. The Bland-Altman plot showed that the distribution of differences between measurements in the inter-examiner, intra-examiner and inter-instrument aspects was close to zero within the narrow range of ±1.96 SD. The technical error of measurement (TEM), coefficient of reliability (R) and coefficient of variation (CV) for the inter-examiner, intra-examiner and inter-instrument aspects were within acceptable limits. This study suggests that the portable stadiometer is reliable and valid for use in community surveys.
Stadiometer, reliability, technical error of measurement, validity of height
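The TEM and coefficient of reliability reported in the abstract above follow standard anthropometry formulas; a minimal sketch (the function names and example values below are illustrative, not the study's data):

```python
import numpy as np

def tem(x1, x2):
    """Technical error of measurement for paired readings:
    sqrt(sum(d^2) / 2n), where d are the paired differences."""
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

def coeff_reliability(x1, x2):
    """R = 1 - TEM^2 / s^2, where s^2 is the variance of all readings."""
    all_x = np.concatenate([np.asarray(x1, float), np.asarray(x2, float)])
    return 1 - tem(x1, x2) ** 2 / np.var(all_x, ddof=1)

# Example: two examiners measuring the same four subjects (cm)
x1 = [170.0, 160.0, 180.0, 150.0]
x2 = [170.2, 159.8, 180.2, 149.8]
```

An R close to 1 (as reported for all three aspects) means the measurement error is negligible relative to the between-subject variation.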
This study sought to prospectively evaluate the influence of contrasted fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) on the staging of, and the management plan for, patients with nasopharyngeal carcinoma (NPC). A total of 14 histologically proven NPC patients (mean age: 44.64 ± 4.01 years) were included in the study. These patients underwent contrasted Computed Tomography (CT) as well as 18F-FDG PET/CT imaging. Staging was based on the 7th edition of the American Joint Committee on Cancer Tumor Node Metastases (AJCC-TNM) recommendations. The oncologist was asked to prospectively assign a treatment plan for all patients evaluated by CT and 18F-FDG PET/CT. The treatment plans were compared with the incremental information supplied by the FDG-PET/CT. The maximum standardised uptake value (SUVmax) and the widest dimension of the primary tumour, the cervical lymph node size and the distant metastatic lesions were quantified on the co-registered PET/CT images by two experienced nuclear radiologists. The contrasted 18F-FDG PET/CT changed the management intent in nine patients (64.7%). A univariate analysis showed significant correlations between SUVmax and the size of the metastatic lymph nodes (R2 = 0.0761, p < 0.01), the lymph node volume (R2 = 0.695, p < 0.01) and the T-stage (R2 = 0.647, p < 0.01). Multiple linear regression analysis revealed the tumour SUVmax to be an independent predictor of the T-stage (adjusted R2 = 0.889, p < 0.05). The SUVmax may potentially be a surrogate marker for the T-stage in NPC patients. The use of the combined imaging modality, 18F-FDG PET/CT, substantially impacted the management strategy for treatment of NPC patients.
The objective of this research was to design a transportable swine-roasting machine. The methodology was in two parts. The first was to survey the local knowledge of roast swine and to examine the process. The second was to design a movable swine-roasting machine. The survey showed that most manufacturers and distributors produced roast swine. Their income was about USD170 per day. The cost of the roast swine was USD5.5 per kilogram. The process involved slaughtering a swine of approximately 25 kg, removing the offal and inserting lemongrass stalks into the carcass. The pig carcass was then roasted on hot charcoal for about 3 hours and turned by hand. The average chest diameters of 47 roast swine were 0.332, 0.307 and 0.244 m (large, medium and small, respectively). The design concept of the movable swine roaster involved equipment that allowed roasting on only two sides of the swine, which was supported by stainless steel pipes (the swine holder) rotated by a 0.5 hp electric motor. The amount of charcoal for the transportable swine-roasting machine was between 14 and 18 kg depending on the weight of the swine to be roasted. The average temperature of the roaster was 260°C. The average weight ratio of fresh pork to charcoal was around 1.46:1. The roaster was easy to use and maintain.
The effects of four selected variables, namely ferrous sulphate (FeSO4), initial pH, sodium bicarbonate (NaHCO3) and nutrient solution, on Chemical Oxygen Demand (COD) removal and H2 production by anaerobic mixed cultures from tapioca wastewater were investigated in batch mode with two inoculum contents (3,750 mgVSS/L and 7,500 mgVSS/L). Identification and screening of significant variables were conducted using the Plackett-Burman Design. An independent-sample t-test was applied over 12 trials to evaluate the inoculum content and to determine the optimum levels of the main variables and inoculum content along the path of steepest ascent. FeSO4 and initial pH both had a statistically significant (P < 0.05) influence on COD removal and H2 production. COD removal and H2 production were greater at an inoculum content of 7,500 mgVSS/L than at 3,750 mgVSS/L (P < 0.05). An initial pH of 10 and FeSO4 at 2.5 g/L yielded the maximum H2 production potential (443.37 mL H2/L) and COD removal (61.54%).
Score-based structure learning algorithms are commonly used in learning Bayesian networks. Besides the search strategy, the scoring function plays a vital role in these algorithms. Many studies have proposed various types of scoring functions with different characteristics. In this study, we compare the performance of five scoring functions: the Bayesian Dirichlet equivalent-likelihood (BDe) score (with equivalent sample sizes, ESS, of 4 and 10), the Akaike Information Criterion (AIC) score, the Bayesian Information Criterion (BIC) score and the K2 score. Instead of just comparing networks with different scores, we included different learning algorithms to study the relationship between scoring functions and greedy search learning algorithms. The Structural Hamming Distance is used to measure the difference between the networks obtained and the true network. The results are divided into two sections: the first studies the differences between data sets with different numbers of variables, and the second studies the differences between data sets with different sample sizes. In general, the BIC score performs well and consistently for most data, while the BDe score with an equivalent sample size of 4 performs better for data with bigger sample sizes.
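The Structural Hamming Distance used for evaluation counts the edge insertions, deletions and orientation changes needed to turn the learned DAG into the true one. A minimal sketch, under the common convention that a reversed edge costs 1 (the edge sets below are illustrative):

```python
def shd(true_edges, learned_edges):
    """Structural Hamming Distance between two DAGs given as sets of
    directed (u, v) edges: each missing/extra skeleton edge and each
    wrongly oriented edge adds 1."""
    t, l = set(true_edges), set(learned_edges)
    dist = 0
    seen = set()
    for (u, v) in t | l:
        if (u, v) in seen or (v, u) in seen:
            continue                          # skeleton edge already handled
        seen.add((u, v))
        in_t = (u, v) in t or (v, u) in t
        in_l = (u, v) in l or (v, u) in l
        if in_t != in_l:
            dist += 1                         # edge missing from one graph
        elif ((u, v) in t) != ((u, v) in l):
            dist += 1                         # same edge, wrong direction
    return dist

true = {("A", "B"), ("B", "C")}
```

A learned network identical to the truth scores 0; lower SHD means a better recovered structure.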
Single Stock Futures (SSFs) were introduced on Bursa Malaysia on 28th April 2006. There have been many studies on derivative instruments in Malaysia; however, none is on SSFs. Various statistical methods have been used to analyse SSFs and their spot returns, namely descriptive statistics, the Unit Root test, VAR, the Johansen and Juselius Co-integration test, the Granger Causality test, the Variance Decomposition test, VECM and the GARCH model. This study analyses the SSFs and spot returns of eight companies listed on Bursa Malaysia. It found that Berjaya Sports Toto Bhd shows no long-run or short-run causality, while Genting Bhd shows bi-directional causality. The spot-return volatility of AirAsia Bhd and AMMB Holdings Bhd decreased after the introduction of SSFs, while it increased for the other companies. In addition, only the futures return of AMMB Holdings Bhd did not affect its spot return. Bursa Malaysia Bhd and RHB Capital Bhd spot returns lead their futures returns.
Single Stock Futures, SSF, VAR, Granger Causality, GARCH
Input-Output analysis provides important information about the structure of a country's economy. The construction of input-output tables based on detailed censuses or surveys is a complex procedure requiring substantial financial outlay, human capital and time. This is the main reason why the Malaysia Input-Output (MIO) Table is produced and published on average only once every five years. For policy makers, such dated tables are not suitable for planning economic policies. The aim of this study is to compare the RAS and Euro methods for projecting input-output tables for Malaysia. The data for the study are the MIO tables and Gross Domestic Product for the years 2000, 2005 and 2010. The RAS and Euro methods were used to project the MIO table for 2005 from the MIO table for 2000, and the MIO table for 2010 from the MIO table for 2005. The projection of I-O tables involved an intensive iterative procedure implemented in Excel Visual Basic. The projection performance of the RAS and Euro methods was assessed based on the Mean Absolute Deviation (MAD), Root Mean Squared Error (RMSE) and Dissimilarity Index (DI). The results show that the Euro method performed better than the RAS method in projecting the MIO table.
Euro method, projecting input-output table, RAS method
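The RAS method referred to above is biproportional scaling: the rows and columns of the base-year matrix are alternately rescaled until they reproduce the target-year margins. A minimal sketch (the matrix and target totals are made-up illustrations, not MIO data):

```python
import numpy as np

def ras(A0, row_targets, col_targets, tol=1e-10, max_iter=500):
    """Biproportional (RAS) update of base matrix A0 so that its row
    and column sums converge to the target-year margins. Assumes a
    positive matrix and consistent totals (sum of rows == sum of cols)."""
    A = np.asarray(A0, dtype=float).copy()
    r_t = np.asarray(row_targets, dtype=float)
    c_t = np.asarray(col_targets, dtype=float)
    for _ in range(max_iter):
        A *= (r_t / A.sum(axis=1))[:, None]   # R step: row scaling
        A *= (c_t / A.sum(axis=0))[None, :]   # S step: column scaling
        if np.allclose(A.sum(axis=1), r_t, atol=tol):
            break
    return A

A = ras([[1.0, 2.0], [3.0, 4.0]], row_targets=[4.0, 8.0], col_targets=[5.0, 7.0])
```

The projected matrix keeps the base-year structure while matching the new margins, which is exactly the property evaluated against MAD, RMSE and DI in the study.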
In the study of disease mapping, relative risk estimation is the focus of analysis. Many methods have been introduced to estimate relative risk. In this paper, one of the common spatial models, the Besag, York and Mollié (BYM) model, is discussed, and its application to dengue data for epidemiology weeks 1 to 52 of 2013 for 16 states in Malaysia is studied. Findings show that Selangor has the highest relative risk of dengue in comparison with the other states. The estimated relative risks are presented in the form of risk maps, which can be used as a tool for the prevention and control of dengue.
Relative risk estimation, disease mapping, dengue disease, BYM model
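The raw input that the BYM model then smooths spatially is the area-level standardised morbidity ratio, i.e. observed over expected cases per state. A minimal sketch of that first step (the counts below are invented, not the 2013 dengue data):

```python
def expected_cases(cases, population):
    """Expected counts per area under a common overall rate."""
    rate = sum(cases) / sum(population)
    return [p * rate for p in population]

def raw_smr(cases, population):
    """Raw relative risk per area: observed / expected. The BYM model
    shrinks these ratios towards their spatial neighbours to stabilise
    estimates in low-population areas."""
    return [o / e for o, e in zip(cases, expected_cases(cases, population))]

smr = raw_smr(cases=[10, 20], population=[1000, 1000])
```

An SMR above 1 flags an area with more cases than expected; a risk map colours each state by this (smoothed) value.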
Spectral data often require pre-processing prior to applying a multivariate modelling technique. Baseline correction is one of the most important and frequently applied pre-processing procedures. This preliminary study investigates the impact of six baseline correction algorithms on the classification of 150 infrared spectra of three varieties of paper. The algorithms investigated were Iterative Restricted Least Squares, Asymmetric Least Squares (ALS), Low-pass FFT Filter, Median Window (MW), Fill Peaks (FP) and Modified Polynomial Fitting. The processed spectra were then analysed using Principal Component Analysis (PCA) to visually examine the clustering among the three varieties of paper. Results show that separation among the three varieties is greatly improved after baseline correction via the ALS, FP and MW algorithms.
Forensic science, paper, baseline correction, principal component analysis (PCA), IR spectroscopy
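Of the algorithms compared, Asymmetric Least Squares is a well-known Eilers-style baseline corrector: a smooth curve is fitted with asymmetric weights so that it hugs the underside of the spectrum. A minimal sketch with illustrative parameter values (not necessarily those used in the study):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, niter=10):
    """Asymmetric Least Squares baseline: lam controls smoothness;
    points above the current fit get weight p, points below get 1-p,
    so peaks are largely ignored while the baseline is tracked."""
    n = len(y)
    # Second-difference operator for the smoothness penalty
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    z = np.asarray(y, dtype=float)
    for _ in range(niter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve((W + lam * (D @ D.T)).tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y <= z)
    return z

# Synthetic spectrum: sloping baseline plus one peak
x = np.arange(200.0)
base = 0.01 * x
y = base + 5.0 * np.exp(-((x - 100.0) ** 2) / 20.0)
z = als_baseline(y)
```

Subtracting `z` from `y` leaves the peaks on a flat baseline, which is what improves the PCA clustering reported above.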
There are many research papers on implementing the salam structure in the financial system. This study introduces a mathematical model of the salam contract with credit risk that can be used as an Islamic financial derivative. It explores the properties of the salam contract and the credit model that represents it, namely a structural model in which the default event occurs at the maturity of the salam contract.
Combining forecast values from simple univariate models may produce more favourable results than complex models. In this study, the combined forecasts of the Naïve model, the Single Exponential Smoothing model, the Autoregressive Integrated Moving Average (ARIMA) model and the Holt method are shown to be superior to those of the Error Correction Model (ECM). Malaysia's unemployment rate data are used in this study. The independent variable used in the ECM formulation is the industrial production index. Both data sets were collected for the months of January 2004 to December 2010. The selection criteria used to determine the best model are the Mean Square Error (MSE), Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). Initial findings showed that both time series were not influenced by seasonality.
Combination forecast, unemployment rate, error correction model
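Equal-weight averaging is the simplest way to combine the individual model forecasts, and MAPE is one of the selection criteria listed above; a minimal sketch with invented numbers (not the unemployment data):

```python
import numpy as np

def combine(*forecasts):
    """Equal-weight combination of individual model forecasts."""
    return np.mean(np.vstack(forecasts), axis=0)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

actual = [100.0, 100.0]
f1, f2 = [90.0, 110.0], [110.0, 90.0]   # two biased individual forecasts
c = combine(f1, f2)                      # their errors cancel on average
```

The example shows the mechanism behind the study's finding: individually biased forecasts can offset each other, so the combination scores a lower MAPE than either component.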
Complexity has been discussed in many contexts, including decision making, computation, task complexity, activity networks, supply chains, imaging, project management and mechanical systems. This paper reviews the definition of complexity and the preliminary related definitions of a complexity index in decision making. It proposes a complexity index for decision making, together with its properties and implementation.
Prolonged mechanical ventilation (PMV) is associated with increased mortality and resource utilisation as well as hospitalisation costs. This study evaluates the risk factors for PMV. A retrospective study was conducted involving 890 paediatric patients comprising 237 neonates, 306 infants, 223 of pre-school age and 124 of school-going age. Data mining decision tree algorithms and logistic regression were employed to develop predictive models for each age category. The independent variables were classified into four categories, namely demographic data, admission factors, medical factors and score factors. The dependent variable is the duration of ventilation, categorised as 0 for non-PMV and 1 for PMV. The performance of three decision tree models (CHAID, CART and C5.0) and logistic regression was compared to determine the best model. The results indicated that the decision trees outperformed the logistic regression model for all age categories, given their good accuracy rates on the testing dataset. The decision tree results identified length of stay and inotropes as significant risk factors in all age categories. PRISM 12 hours and principal diagnosis were identified as significant risk factors for infants.
Mechanical ventilation, prolonged mechanical ventilation, paediatric, logistic regression, decision tree
State estimation plays a vital role in the security analysis of a power system. The weighted least squares method is one of the conventional techniques used to estimate the unknown state vector of the power system. The existence of bad data can distort the reliability of the estimated state vector. A new algorithm based on quality control chart techniques is developed in this paper for the detection of bad data. The IEEE 6-bus power system data are utilised for the implementation of the proposed algorithm. The results show that the method is practically applicable for the detection of bad data in the power system state estimation problem.
Nonlinear estimation, weighted least squares method, bad data, Chi-square test, normalised residual test, Gauss-Newton algorithm
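For a linearised measurement model z = Hx + e, the weighted least squares estimate and the Chi-square test on the residual cost J(x̂) give the classical bad data detector that the proposed chart-based method is compared against. A minimal sketch (the 3-measurement system is illustrative, not the IEEE 6-bus case):

```python
import numpy as np
from scipy.stats import chi2

def wls_estimate(H, z, W):
    """WLS state estimate: x_hat = (H' W H)^-1 H' W z."""
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def bad_data_test(H, z, W, alpha=0.01):
    """Chi-square test: flag bad data if J(x_hat) = r' W r exceeds the
    (1 - alpha) quantile with m - n degrees of freedom."""
    x_hat = wls_estimate(H, z, W)
    r = z - H @ x_hat                      # measurement residuals
    J = float(r @ W @ r)
    dof = H.shape[0] - H.shape[1]
    return J, J > chi2.ppf(1 - alpha, dof)

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 measurements, 2 states
W = np.eye(3)
J_clean, flag_clean = bad_data_test(H, np.array([1.0, 2.0, 3.0]), W)   # consistent
J_bad, flag_bad = bad_data_test(H, np.array([1.0, 2.0, 10.0]), W)      # gross error
```

With consistent measurements the residual cost is essentially zero; a single gross error inflates J well past the Chi-square threshold and is flagged.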
The Internet of Things (IoT) is the biggest ICT revolution the world is witnessing, with the potential to be the next big technology disruptor that will improve productivity and efficiency across different industries and the services sector. The purpose of this paper is to study the adoption of IoT-enabled technologies in the corporate sector in India and to identify the factors influencing the adoption rate. It prescribes a suitable model serving as a blueprint for enterprises. The methodology used is exploratory research, as the significance of the Technology Adoption Model (TAM) in IoT projects has not yet been studied, and this work lays the groundwork for future studies. The literature review covered different models based on TAM or abridged versions of it. In this study, a team of five experts in IoT project adoption proposed factors crucial to successful IoT project implementation. Based on these, questionnaires were developed and sent to respondents who are senior officers at selected companies. The data obtained were used to validate the existing and proven TAM research model. On this basis, the study proposes a new model (IoT-TAM). Variables, namely perceived utility, perceived ease of use, intrinsic variables and external organisation, were developed. The first-generation multivariate method of multiple regression was used to assess the reliability and validity of the model measures.
Indian enterprises, internet of things, technology adoption, technology adoption model
Micro Electro Discharge Machining (micro-EDM) is widely used for producing different types of micro features and micro components. The tool wear rate (TWR) is an important factor that affects the accuracy of machining as well as the productivity of the micro-EDM process. This study examines the effects of process parameters and of a Maghemite (γ-Fe2O3) nano-powder-mixed dielectric medium on the tool wear rate when micro-EDM machining Co-Cr-Mo alloy. A copper electrode with a 300 µm diameter and positive polarity was used to evaluate the machining process, focusing on TWR. Two concentrations of nano-powder (2 g/l and 4 g/l) were added to the dielectric. Results showed that increasing the discharge current and voltage leads to a corresponding increase in TWR, while the presence of γ-Fe2O3 nano-powder in the dielectric liquid decreases TWR. Powder-mixed micro-EDM with 2 g/l of nano-powder achieved the lower TWR.
Atmospheric adversities impact the performance of free space optical (FSO) links, with turbulence-induced fading being the most prominent among them. Since FSO links involve the transmission of an optically modulated signal through the atmosphere, it is crucial to have a well-defined mathematical model to understand and map the association of atmospheric turbulence with channel link characteristics. To model a reliable optical wireless communication link, it is important to have an accurate probability density function (PDF) of the received intensity, as it allows us to understand the atmospheric factors, and the magnitude of their impact, that may lead to impairment of the link. It was observed that variation in turbulence has a direct impact on channel behaviour and in turn affects the PDF of the received intensity. This paper also analyses the performance of different channel models by contrasting their PDFs for varying degrees of turbulence.
Atmospheric turbulence, channel fading, channel modelling, Cumulative Distribution Function (CDF), Irradiance Probability Density Function (PDF), variance
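One widely used irradiance PDF of the kind discussed above is the Gamma-Gamma distribution for unit-mean intensity, whose shape parameters alpha and beta are set by the turbulence strength. A minimal sketch (the parameter values below are illustrative):

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-Gamma PDF of normalised received irradiance I (mean 1):
    f(I) = 2 (ab)^((a+b)/2) / (G(a) G(b)) * I^((a+b)/2 - 1)
           * K_{a-b}(2 sqrt(a b I)),
    where K is the modified Bessel function of the second kind."""
    I = np.asarray(I, dtype=float)
    coef = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (gamma(alpha) * gamma(beta))
    return coef * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta,
                                                     2 * np.sqrt(alpha * beta * I))

# Moderate-turbulence example values for the shape parameters
I = np.linspace(1e-6, 60.0, 400_000)
p = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
integral = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(I)))  # should be ~1
```

Stronger turbulence (smaller alpha, beta) spreads this density out, which is the PDF change the paper contrasts across channel models.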
Non-Functional Requirements (NFRs) determine the utility and effectiveness of a framework. Due to the subjective nature and complexity of NFRs, it is quite unrealistic to concentrate on each NFR. Consequently, agreement between groups of cross-utilitarian and cross-functional decision makers is important. This paper models NFRs in the form of a Soft Goal Interdependency Digraph (SID). The SID is based on the Interpretive Structural Modelling (ISM) method, which in turn utilises the MICMAC (Matrice d'Impacts Croisés Multiplication Appliquée à un Classement) and AHP (Analytic Hierarchy Process) approaches for the identification of critical NFRs. These objectives allow analysts and developers to accept the best possible trade-off choices among NFRs. This is discussed using the general case of a cafeteria ordering framework. The proposed model compares well with other ranking methodologies.
Analytic Hierarchy Process, Interpretive Structural Modelling, Matrice d'Impacts Croisés Multiplication Appliquée à un Classement, non-functional requirements, sensitivity analysis
Security is a major concern for the communication sector. The technique presented in this paper provides a novel security key generation mechanism. The proposed technique aims to generate a security key using the biological characteristics of the human body and mathematically generated pseudo-random sequences, thus producing different keys for different individuals. The final key is produced through the fusion of a deoxyribonucleic acid (DNA) sequence of 1024 characters and a Bernoulli Random Number Generator sequence of 256 bits. The performance of the produced keys is evaluated using National Institute of Standards and Technology (NIST) tests, and uniqueness is verified through an avalanche test.
Authentication, Bernoulli random number generator, biometrics, communication, confidentiality, DNA, integrity, security
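The avalanche test mentioned above measures the fraction of output bits that differ between two keys (for example, keys generated from inputs differing in a single bit); a ratio near 0.5 indicates good diffusion. A minimal generic sketch (the bit strings are placeholders, not the paper's keys):

```python
def avalanche_ratio(bits_a, bits_b):
    """Fraction of differing bits between two equal-length bit strings
    (normalised Hamming distance)."""
    if len(bits_a) != len(bits_b):
        raise ValueError("keys must be the same length")
    return sum(a != b for a, b in zip(bits_a, bits_b)) / len(bits_a)
```

In practice this would be applied to the 256-bit fused keys of different individuals, or to keys produced from slightly perturbed inputs, to verify uniqueness.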
Atmospheric turbulence is the main impairment in free space optical communication links. To mitigate the effect of turbulence, spatial diversity techniques are used. In this paper, we analyse the performance of the Gamma-Gamma channel model with spatial diversity and compare it with the K-distribution. The modulation techniques assumed here are on-off keying, binary PPM and binary phase shift keying; the bit error rate and gain performance with single input single output (SISO), single input multiple output (SIMO), multiple input single output (MISO) and multiple input multiple output (MIMO) configurations are presented.
Data transfer in wireless communication systems requires higher data rates, transmission capability, high bandwidth and robustness. Orthogonal frequency division multiplexing (OFDM) is mainly used to deal with multipath fading and delay. To increase system performance, various pilot-assisted methods have been developed and studied. We use compressed sensing to estimate the coefficients of the fading channel, then apply a compressive sensing (CS) recovery algorithm to estimate the channel and nullify the fading effect; the simulation results thus obtained show better system performance than the traditional method.
There are many sophisticated models available for estimating the effort of a software project. However, estimation of projects developed with agile methods using existing models is questionable, making it necessary to develop a distinct model for web applications. This paper proposes a model to estimate the cost of web applications developed through agile methodology and discusses the difference between conventional software development and web application development.
Agile software, cost estimation, function point, Kalman Filter, web objects
The Internet of Things (IoT) has so far been considered a complex network of objects speaking to each other across a digital network, transmitting information and broadcasts about themselves and their surroundings through an assembly of sensors, actuators and motors. In this paper we present a use case for IoT to understand and record consumer behaviour. In essence, the paper attempts to classify how humans and IoT devices can learn from each other. Retail smart devices can help salespersons understand a consumer's or shopper's preferences more accurately and suggest more suitable options.
Human-IoT bridge, Indian enterprises, intelligent gesture recognition, Internet of Things, knowledge recording, SmartMirrors
The present study deals with the tapping of Al6061/SiC metal matrix composite. The stir casting technique was used for the fabrication of the composite. Castings were produced by varying the weight percentage of SiC (5, 7.5 and 10 wt.%) of 23 µm particle size in Al6061. The fabricated specimens were characterised for their hardness and tensile strength. It was found that hardness increases with the addition of SiC. Images from a Scanning Electron Microscope (SEM) and a metallurgical microscope showed fair distribution of the reinforcement. The highly abrasive nature of the SiC reinforcement makes machining difficult and produces a high rate of tool wear. After drilling, tapping experiments were conducted for the machinability study of the Al6061/SiC metal matrix composite using an M8 HSS machine tap. The tapping operation was performed under dry conditions at different cutting speeds (12, 14 and 16 m/min) and a constant feed rate equal to the pitch of the thread. The torque required for tapping was measured using a strain-gauge-based two-component cutting tool dynamometer. The microstructure and surface morphology of the thread surfaces were analysed using a metallurgical microscope and SEM, respectively. The progressive flank wear of the machine taps was estimated using a profile projector. The performance of the HSS machine taps was evaluated in terms of tapping torque, tool flank wear and the surface characteristics of the thread surfaces.
Dynamic stiffness and damping coefficients of a finite journal bearing operating on TiO2 based nanolubricant are obtained using the linear perturbation approach. Time dependent version of governing Reynolds equation is modified to consider the couple stress effect of TiO2 nanoparticle lubricant additives. The viscosity variation of lubricant with varying concentrations of nanoparticle additives is simulated using a modified Krieger-Dougherty model. The modified Reynolds equation is solved using linear perturbation approach to obtain the dynamic pressures and dynamic coefficients. Threshold stability maps are plotted depicting stable operating regions of journal bearing operating on TiO2 nanolubricants. Results reveal an increase in stiffness and damping coefficients, and a corresponding improvement in whirl instability characteristics of journal bearings, with increase in TiO2 nanoparticle concentration.
Pre-stressing is a concept used in many engineering structures. In this study, prestressing in the form of axial compression stress is proposed in the blade structure of an H-Darrieus wind turbine. The study draws a structural comparison between the reference and prestressed configurations of the turbine rotor with respect to their dynamic vibrational response. Rotordynamics calculations provided by ANSYS Mechanical are used to investigate the effects of turbine rotation on the dynamic response of the system. Rotation speeds ranging from 0 to 150 rad/s were examined to cover the whole operating range of commercial instances. The modal analysis yields the first six mode shapes of both rotor configurations. As a result, the displacement of the proposed configuration is effectively reduced. Apparent variations in the Campbell diagrams of the two cases indicate that the prestressed configuration has its resonant frequencies far away from the turbine operating speeds, and thus a remarkably higher safety factor against whirling and probable ensuing failures.
Diesel engines produce high emissions of nitrogen oxides, smoke and particulate matter. The challenge is to reduce exhaust emissions without changing the engine's mechanical configuration. This paper is an overview of the effect of natural gas on diesel engine emissions. The literature suggests that engine load, air-fuel ratio and engine speed play a key role in reducing the pollutants in diesel engine emissions with natural gas enrichment. It is found that increasing the percentage of natural gas (CNG) affects emissions: nitrogen oxide (NOx) decreases at part loads and increases at high loads when CNG is added. Reductions in carbon dioxide (CO2), particulate matter (PM) and smoke are observed when adding CNG. However, carbon monoxide (CO) and unburned hydrocarbons (HC) increase when CNG is added.
CO, CO2, Diesel, Engine, Emissions, HC, Natural gas (CNG), NOx
The stability of a bearing is influenced by the turbulent conditions encountered during its operation. In this paper, a linearised perturbation method is used to theoretically investigate the stability of a three-axial-groove water-lubricated bearing. The stiffness and damping coefficients are plotted for different eccentricity ratios of the bearing, for groove angles of 36° and 18°. The mass parameter and the whirl ratio, which are measures of stability, are also plotted against the bearing number for different values of the Reynolds number. The bearing shows very good stability at higher eccentricity ratios. The stiffness and damping coefficients as well as the mass parameter increase as the Reynolds number increases. The whirl ratios are unaffected by changes in the Reynolds number.
This study simulates the nearshore current characteristics at Carey Island using MIKE 21 Hydrodynamic FM. The model simulations are calibrated and validated against measured conditions by adjusting the values of bed resistance over the stipulated computational domain. To evaluate the accuracy of the simulation results, three statistical parameters, namely the RMSE, R-squared and Theil's inequality coefficient, are calculated to compare the observed and simulated results. The results indicate that current speeds during the spring tide are approximately between 0 m/s and 0.64 m/s, flowing from northwest to southeast. Good agreement between observed and simulated values of current speed, current direction and water level, with R-squared values of approximately 0.92 to 0.95, is obtained. The results suggest that bed resistance is an important parameter in hydrodynamic simulation using MIKE 21 Hydrodynamic FM.
Bed resistance, current characteristics, estimation, validation, MIKE 21 Hydrodynamic FM
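The three agreement statistics used for the calibration above can be computed directly; a minimal sketch (the observed/simulated series below are illustrative, not the Carey Island data):

```python
import numpy as np

def rmse(obs, sim):
    """Root Mean Squared Error between observed and simulated series."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def r_squared(obs, sim):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

def theil_u(obs, sim):
    """Theil's inequality coefficient: 0 means perfect agreement,
    values near 1 mean no agreement."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return rmse(obs, sim) / (np.sqrt(np.mean(obs ** 2)) + np.sqrt(np.mean(sim ** 2)))

obs = np.array([0.1, 0.2, 0.3, 0.4])   # e.g. observed current speeds (m/s)
sim2 = np.array([0.1, 0.2, 0.3, 0.0])  # simulated series with one deviation
```

A calibration run is accepted when RMSE and Theil's coefficient are small and R-squared is high, as in the 0.92 to 0.95 range reported above.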
This paper presents a Computational Fluid Dynamics (CFD) study of the performance of a jet engine annular combustor subjected to various loading conditions. The aim is to understand the effect of realistic operating conditions on combustion and emission performance. The numerical models utilised for fuel combustion are the realizable k-ε model for turbulent flow, species transport (aviation fuel and air) with eddy-dissipation reaction modelling, and a pollution model for nitrogen oxide (NOx) emissions. The results obtained confirm the findings described in the literature.
Annular combustor, CFD, combustor loading, jet engine, gas emission
Kenaf natural fibre is used as a sustainable reinforcement for polymeric composites. However, natural fibres usually do not perform as well as synthetic fibres. Silica nanoparticles have a high surface area, and their strong interfacial interaction with the matrix improves composite properties. In this research, silica nanoparticles were introduced into epoxy resin as a filler material to improve the mechanical properties of kenaf-reinforced epoxy. They were dispersed into the epoxy using a homogeniser at 3000 rpm for 10 minutes. The composites were fabricated by spreading the silica-filled epoxy evenly onto the kenaf mat before hot pressing the resin-wet kenaf mat. The results show that, for flexural properties, composites with higher fibre and silica volume content generally performed better, with specimen 601 (60 vol% kenaf and 1 vol% silica) having the highest strength at 68.9 MPa. Compressive properties were erratic, with specimen 201 (20 vol% kenaf and 1 vol% silica) having the highest strength at 53.6 MPa.