
Showing papers in "International journal of engineering and technology in 2010"



Journal Article
TL;DR: In this paper, a case study of the Sungai Kayu Ara river basin, located in the western part of Kuala Lumpur, Malaysia, was used to perform river flood hazard mapping using HEC-HMS and HEC-RAS as the hydrologic and hydraulic models, respectively.
Abstract: In the past decades, thousands of lives have been lost, directly or indirectly, to flooding. In fact, of all natural hazards, floods pose the most widely distributed threat to life today. The Sungai Kayu Ara river basin, located in the western part of Kuala Lumpur, Malaysia, was the case study of this research. In order to perform river flood hazard mapping, HEC-HMS and HEC-RAS were utilized as the hydrologic and hydraulic models, respectively. The river flood hazard maps were generated from water depth and flow velocity maps, which were prepared from the hydraulic model results in a GIS environment. The results show that the magnitude of the rainfall event (ARI) and the land-use development condition of the river basin have significant influences on the pattern of the river flood hazard maps. Moreover, for the Sungai Kayu Ara river basin, the magnitude of the rainfall event influenced the river flood hazard map more strongly than the land-use development condition.
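The abstract does not give the exact hazard classification rule, so the following is only a rough sketch of a common approach in flood hazard mapping: combine the water depth and flow velocity rasters (here via their product) and bin the result into hazard classes. The thresholds and class labels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def flood_hazard_map(depth, velocity, thresholds=(0.5, 1.5, 2.5)):
    """Classify flood hazard from water depth (m) and flow velocity (m/s).

    Uses the depth-velocity product as a generic hazard indicator and bins it
    into classes 0 (low) .. 3 (extreme). Thresholds are illustrative only.
    """
    depth = np.asarray(depth, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    indicator = depth * velocity               # m^2/s, a common hazard proxy
    return np.digitize(indicator, thresholds)  # 0=low, 1=medium, 2=high, 3=extreme

# Example with small synthetic rasters (rows x cols), e.g. exported from a hydraulic model
depth = np.array([[0.2, 1.0], [2.0, 3.5]])
velocity = np.array([[0.5, 1.0], [1.5, 2.0]])
print(flood_hazard_map(depth, velocity))
```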

58 citations


Journal ArticleDOI
TL;DR: In this paper, the formability characteristics of aluminium-based composites are discussed using FEM and statistical tools, and a detailed review is carried out on workability studies of metal matrix composites and forming limit diagrams.
Abstract: In this review, the formability characteristics of aluminium-based composites studied using FEM and statistical tools are discussed. The formability characteristics of Metal Matrix Composites (MMCs) are influenced by many factors, viz. temperature, pressure, volume fraction of reinforcement, and the size and shape of the particles. Investigations of wall friction and of friction between the particles and the matrix powder are also presented. A detailed review has been carried out on workability studies on the upsetting of MMCs and on forming limit diagrams. The state of the art is discussed with the help of the literature.

55 citations


Journal ArticleDOI
TL;DR: The proposed multipath routing scheme provides better performance and scalability by computing multiple routes in a single route discovery and reduces the routing overhead by using secondary paths.
Abstract: Mobile ad hoc networks (MANETs) consist of a collection of wireless mobile nodes which dynamically exchange data among themselves without the need for a fixed infrastructure or a wired backbone network. Due to the limited transmission range of wireless network nodes, multiple hops are usually needed for a node to exchange information with any other node in the network. Thus, routing is a crucial issue in the design of a MANET. On-demand routing protocols for mobile ad hoc networks discover and maintain only the needed routes to reduce routing overheads. They use a flood-based discovery mechanism to find routes when required. Since each route discovery incurs high overhead and latency, the frequency of route discoveries must be kept low for on-demand protocols to be effective. The wide availability of wireless devices requires that the routing protocol be scalable. However, as the size of the network increases, on-demand routing protocols produce poor performance due to the large routing overhead generated while repairing route breaks. The proposed multipath routing scheme provides better performance and scalability by computing multiple routes in a single route discovery. Also, it reduces the routing overhead by using secondary paths. This scheme computes a combination of node-disjoint and fail-safe paths for multiple routes and provides all the intermediate nodes of the primary path with multiple routes to the destination.
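The abstract describes computing node-disjoint and fail-safe paths from a single route discovery. As a rough illustration of the node-disjoint idea only (not the authors' routing protocol), the sketch below finds a primary path by BFS and then searches for a second path that avoids the primary path's intermediate nodes; the topology is a made-up example.

```python
from collections import deque

def bfs_path(graph, src, dst, banned=frozenset()):
    """Shortest path by BFS, avoiding 'banned' intermediate nodes; None if unreachable."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited and nxt not in banned:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def node_disjoint_paths(graph, src, dst):
    """Primary path plus one node-disjoint secondary path (if any exists)."""
    primary = bfs_path(graph, src, dst)
    if primary is None:
        return []
    banned = frozenset(primary[1:-1])          # exclude intermediate nodes of the primary
    secondary = bfs_path(graph, src, dst, banned)
    return [p for p in (primary, secondary) if p]

# Toy topology: adjacency list of a small MANET snapshot
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['F'], 'D': ['E'], 'F': ['E'], 'E': []}
print(node_disjoint_paths(graph, 'A', 'E'))    # [['A','B','D','E'], ['A','C','F','E']]
```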

44 citations


Journal ArticleDOI
TL;DR: In this article, the optimal machining parameters (i.e., spindle speed, depth of cut and feed rate) for face milling operations were investigated in order to minimize the surface roughness and to maximize the material removal rate.
Abstract: Nowadays, numerical and Artificial Neural Network (ANN) methods are widely used for both modeling and optimizing the performance of manufacturing technologies. Optimum machining parameters are of great concern in manufacturing environments, where the economy of the machining operation plays a key role in market competitiveness. In this paper, the selection of optimal machining parameters (i.e., spindle speed, depth of cut and feed rate) for face milling operations was investigated in order to minimize the surface roughness and to maximize the material removal rate. The effects of the selected parameters on the process variables (i.e., surface roughness and material removal rate) were investigated using Response Surface Methodology (RSM) and artificial neural networks. Optimum machining parameters were determined using RSM and compared to the experimental results. The obtained results indicate the ability of the RSM and ANN methods to model and optimize the milling process.
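RSM typically fits a second-order polynomial in the process parameters to each response. The hypothetical sketch below fits such a model for surface roughness by least squares and then searches a parameter grid for the minimizer; the design points and response values are synthetic placeholders, not the paper's experimental data.

```python
import numpy as np
from itertools import product

def quad_features(X):
    """Second-order RSM design matrix: 1, x_i, x_i^2, x_i*x_j."""
    s, d, f = X[:, 0], X[:, 1], X[:, 2]            # spindle speed, depth of cut, feed rate (coded)
    return np.column_stack([np.ones(len(X)), s, d, f,
                            s**2, d**2, f**2, s*d, s*f, d*f])

# Hypothetical (coded) experimental design and measured surface roughness Ra
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(0)
Ra = 1.5 - 0.3*X[:, 0] + 0.4*X[:, 2] + 0.2*X[:, 2]**2 + rng.normal(0, 0.05, len(X))

beta, *_ = np.linalg.lstsq(quad_features(X), Ra, rcond=None)

# Search a finer grid for parameters minimising the predicted roughness
grid = np.array(list(product(np.linspace(-1, 1, 11), repeat=3)))
pred = quad_features(grid) @ beta
best = grid[np.argmin(pred)]
print("best coded parameters (speed, depth, feed):", best, "predicted Ra:", pred.min())
```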

38 citations



Journal ArticleDOI
TL;DR: In this paper, an artificial neural network (ANN) model was used to predict the exhaust emissions of a diesel engine, and the performance of the ANN predictions was measured by comparing them with experimental results that were not used in the training process.
Abstract: This study deals with artificial neural network (ANN) modeling of a diesel engine to predict its exhaust emissions. To acquire data for training and testing the proposed ANN, a single-cylinder, four-stroke test engine was fuelled with biodiesel blended with diesel and operated at different loads. Using some of the experimental data for training, an ANN model based on a feed-forward neural network was developed for the engine. Then, the performance of the ANN predictions was measured by comparing the predictions with the experimental results that were not used in the training process. It was observed that the ANN model can predict the engine exhaust emissions quite well, with high correlation coefficients and very low root mean square errors. This study shows that, as an alternative to classical modeling techniques, the ANN approach can be used to accurately predict the performance and emissions of internal combustion engines.
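As a hedged sketch of the general workflow described here (a feed-forward network trained on part of the engine data and evaluated on held-out points with the root mean square error and correlation coefficient), using scikit-learn on synthetic placeholder data rather than the paper's measurements:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: inputs = [engine load, biodiesel blend %], target = one emission component
rng = np.random.default_rng(1)
X = rng.uniform([0, 0], [100, 40], size=(120, 2))
y = 200 + 3.0 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(0, 10, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
r = np.corrcoef(pred, y_te)[0, 1]              # correlation coefficient on unseen data
print(f"RMSE = {rmse:.2f}, R = {r:.3f}")
```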

35 citations


Journal ArticleDOI
TL;DR: The obtained results demonstrate that use of the proposed nonlinear optimal control technique improves the tradeoff between ride quality and suspension travel compared to the passive suspension system and the proportional integral sliding mode method.
Abstract: In this paper, a nonlinear optimal control law based on a quadratic cost function is developed and applied to a half-car model for the control of active suspension systems. The nonlinear model of the half-car is constructed using the nonlinear dynamics of the electrohydraulic actuator and the dynamic characteristics of the dampers and springs. The states of the half-car model are first estimated by an Extended Kalman Filter (EKF), the estimated states are then predicted by Taylor series expansion, and finally a control law is derived by minimizing the local differences between the predicted and desired states. The derived control law has an analytical form that is easy to apply and does not require online numerical computation for optimization. The performance of the nonlinear optimal controller is compared to the existing passive suspension system and to a proportional integral sliding mode controller. The obtained results demonstrate that use of the proposed nonlinear optimal control technique improves the tradeoff between ride quality and suspension travel compared to the passive suspension system and the proportional integral sliding mode method.
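The control law itself is not specified in enough detail in the abstract to reproduce, so the sketch below only illustrates the EKF predict/update step used for state estimation, written for a generic discrete-time nonlinear model; the functions f, h and their Jacobians are placeholders standing in for the half-car and actuator dynamics.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One Extended Kalman Filter iteration for x_{k+1} = f(x,u) + w, z = h(x) + v."""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny demo with a linear stand-in model (real use: nonlinear half-car + actuator dynamics)
f = lambda x, u: np.array([x[0] + 0.01 * x[1], x[1] + 0.01 * u])
F_jac = lambda x, u: np.array([[1.0, 0.01], [0.0, 1.0]])
h = lambda x: x[:1]                          # measure the first state only
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, u=1.0, z=np.array([0.02]), f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                Q=1e-4 * np.eye(2), R=np.array([[1e-3]]))
print(x)
```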

33 citations


Journal ArticleDOI
TL;DR: The aim of the proposed system is to hide information (a data file) within the image pages of an executable file (EXE file) so that changes made to the file are not detected by antivirus software and the EXE file still functions after the hiding process.
Abstract: Previously, traditional methods were sufficient to protect information; their simplicity was adequate in the past and complicated methods were not needed. With the progress of information technology, however, it has become easy to attack systems, and the detection of encryption methods has made it necessary to find approaches that keep pace with the differing methods used by hackers, since embedding methods may be under surveillance by system managers in organizations that require a high level of security. This fact motivates research on new hiding methods and on the cover objects in which hidden information is embedded. One outcome of this research is embedding information in executable files, but when an executable file is used as the cover, several challenges must be taken into consideration: firstly, any change made to the file may be detected by antivirus software; secondly, the file may no longer function. In this paper, a new information hiding system is presented. The aim of the proposed system is to hide information (a data file) within the image pages of an executable file (EXE file) while ensuring that the changes made to the file are not detected by antivirus software and that the EXE file still functions after the hiding process. Meanwhile, since the cover file might be used to identify the hidden information, the proposed system overcomes this dilemma by using the executable file itself as the cover file.

32 citations



Journal ArticleDOI
TL;DR: A new model for the classification of Alzheimer's disease, vascular dementia and Parkinson's disease is proposed by considering the most influential risk factors, selected using various attribute evaluation schemes with the ranker search method.
Abstract: Medical data mining has great potential for exploring the hidden patterns in data sets of the medical domain. These patterns can be utilized for the classification of various diseases. Data mining technology provides a user-oriented approach to novel and hidden patterns in the data. The present study consisted of records of 746 patients collected from the ADRC, ISTAART, USA. Around eight hundred and ninety patients were recruited to the ADRC and diagnosed with Alzheimer's disease (65%), vascular dementia (38%) and Parkinson's disease (40%), according to the established criteria. In our study we concentrated particularly on the major risk factors which are responsible for Alzheimer's disease, vascular dementia and Parkinson's disease. This paper proposes a new model for the classification of Alzheimer's disease, vascular dementia and Parkinson's disease by considering the most influential risk factors. The main focus was on the selection of the most influential risk factors for both AD and PD using various attribute evaluation schemes with the ranker search method. Different models for the classification of AD, VD and PD using various classification techniques such as Neural Network (NN) and Machine Learning (ML) methods were also developed. It is observed that an increase in vascular risk factors increases the risk of Alzheimer's disease. It was found that some specific genetic factors, diabetes, age and smoking were the strongest risk factors for Alzheimer's disease. Similarly, for the classification of Parkinson's disease, risk factors such as stroke, diabetes, genes and age were the vital factors.
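The abstract mentions ranking risk factors with attribute evaluation schemes plus a ranker search and then training classifiers. A rough scikit-learn analogue of that pipeline (not the authors' exact tooling, and on placeholder data rather than the ADRC records) is sketched below.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder data: rows = patients, columns = candidate risk factors, y = diagnosis label
rng = np.random.default_rng(2)
feature_names = ["age", "diabetes", "smoking", "stroke", "gene_marker", "bmi"]
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] + X[:, 1] + X[:, 4] + rng.normal(0, 1, 200) > 0).astype(int)

# "Attribute evaluation with ranker search": score each attribute, then sort by score
scores = mutual_info_classif(X, y, random_state=0)
ranking = sorted(zip(feature_names, scores), key=lambda t: -t[1])
print("ranked risk factors:", ranking)

# Train a neural-network classifier on the top-ranked attributes
top_idx = np.argsort(scores)[::-1][:3]
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
print("CV accuracy:", cross_val_score(clf, X[:, top_idx], y, cv=5).mean())
```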

Journal ArticleDOI
TL;DR: A cumulative speckle reduction (CSR) algorithm was developed in the MATLAB environment, which performs all despeckle filtering functions as well as performance metric calculation in a single trial; it is found that the SRAD and wavelet despeckling filters perform fairly well compared with the other standard spatial filters.
Abstract: This paper presents a comparative study among one multiscale filter (wavelet) and nine single-scale spatial adaptive filters (including Anisotropic Diffusion (PMAD)) that are widely used for speckle noise reduction in biomedical ultrasound B-scan images. The main objective of this study is to identify the efficient and optimum speckle filter in terms of preserving the edge details of the images while effectively denoising them. The performance of the filters is estimated by calculating twenty-one established performance metrics along with the execution time, in order to determine the effective and optimum despeckling algorithm for real-time implementation. To do this, we have developed a cumulative speckle reduction (CSR) algorithm in the MATLAB environment, which performs all despeckle filtering functions as well as performance metric calculation in a single trial. In the case of the diffusion filter implementation, provision is given to execute the diffusion filter for several trials to identify the best iteration in terms of denoising the speckle while preserving the diagnostic information found in the B-scan images. The algorithm has been tested with more than 200 digital ultrasound B-scan images of the kidney, abdomen, liver and choroid. Based on visual inspection of the despeckled images and the calculated performance metrics, it is found that the SRAD and wavelet despeckling filters perform fairly well compared with the other standard spatial filters.
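SRAD and the wavelet filter are too involved to reproduce here; as a minimal illustration of the comparison workflow only, the sketch below applies a classic Lee filter (a standard single-scale spatial despeckling filter, used here purely for illustration) to a synthetic speckled image and reports PSNR against the clean reference.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, noise_var=0.05):
    """Classic Lee despeckling filter: blend local mean and pixel value by local statistics."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = np.maximum(sq_mean - mean**2, 0.0)
    weight = var / (var + noise_var)
    return mean + weight * (img - mean)

def psnr(ref, test):
    """Peak signal-to-noise ratio (peak assumed 1.0), a common despeckling metric."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(1.0 / mse) if mse > 0 else np.inf

# Synthetic stand-in for a B-scan: clean phantom corrupted by multiplicative speckle
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))
speckled = clean * rng.gamma(shape=10.0, scale=0.1, size=clean.shape)

filtered = lee_filter(speckled)
print(f"PSNR noisy: {psnr(clean, speckled):.2f} dB, after Lee: {psnr(clean, filtered):.2f} dB")
```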

Journal ArticleDOI
TL;DR: This work proposes a practical framework for digital forensics on flash drives: a unique way of generating, storing and analyzing data retrieved from digital devices that may serve as evidence in forensic analysis.
Abstract: With the rapid advancements in information and communication technology around the world, crimes are becoming technically intensive. When crimes are committed using digital devices, forensic examiners have to adopt practical frameworks and methods to recover data for analysis that can serve as evidence. Data generation, data warehousing and data mining are the three essential features involved in the investigation process. This paper proposes a unique way of generating, storing and analyzing data retrieved from digital devices that may serve as evidence in forensic analysis. A statistical approach is used to validate the reliability of the pre-processed data. This work proposes a practical framework for digital forensics on flash drives.

Journal ArticleDOI
TL;DR: This paper employs individual fuzzy decision making to capture the subjectivity encapsulated in individual concerns of stakeholders with respect to goals and constraints associated with conflicting requirements and obtains an integrated set of requirements using Fuzzy Decision-Making that would satisfy all the stakeholders.
Abstract: The success of a system depends upon how well it accomplishes its intended purpose by meeting all stakeholders' concerns pertaining to conflicting requirements such as cost, schedule and performance. Various stakeholders may have individual and consolidated concerns over conflicting requirements. Individual concerns enable a stakeholder to obtain a preference ordering of the conflicting requirements, while consolidated concerns assist the developer in obtaining a consensual preference ordering that would satisfy all stakeholders. As the concerns over the conflicting requirements are vague, uncertain and subjective in nature, this paper employs Fuzzy Decision Making for modeling the vagueness, haziness and non-specificity associated with the requirements. Finally, a case study using an agent-oriented system is presented to illustrate the application of the methodology. Existing methods, moreover, do not take into account the goals and constraints associated with the conflicting requirements, which may lead to stakeholders' dissatisfaction. This paper employs individual fuzzy decision making to capture the subjectivity encapsulated in the individual concerns of stakeholders with respect to the goals and constraints of conflicting requirements, and hence enables them to obtain preference orderings of the conflicting requirements that reflect their individual concerns. It also utilizes multi-person decision making to resolve the diverse concerns of the various stakeholders. The resulting integrated set of requirements would satisfy all stakeholders and also assist the developer in ascertaining the essential requirements of stakeholders within limited resources. The application of the methodology is illustrated using the Agent-oriented Paradigm (AOP), a recent way of representing the requirements of a system in terms of agents. An Agent-oriented System (AoS) typically involves a large number of agents playing different roles and interacting with each other to achieve individual and common goals (11). Software agents are computer programs that act autonomously on behalf of their users across open and distributed environments to solve a growing number of complex problems. In an agent-oriented system, various stakeholders may differ over the implementation issues of the agents. In addition, they may have their own individual and consolidated concerns associated with the goals and constraints of conflicting requirements; for example, the cost of accomplishing a system may be a conflicting requirement for various stakeholders, and they may have their own priorities over cost in terms of its associated goals and constraints. A goal associated with cost may be to enhance the return of the organization, but not at the expense of quality of service. This paper takes into account the goals and constraints associated with the conflicting requirements and obtains an integrated set of requirements, using Fuzzy Decision-Making, that would satisfy all the stakeholders. The organization of the paper is as follows: Section II introduces Fuzzy Decision-Making, Section III utilizes Fuzzy Decision-Making to deal with the individual and consolidated concerns of stakeholders in prioritizing requirements, and Section IV illustrates the proposed methodology using an agent-oriented system.
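One standard formulation of fuzzy decision making, due to Bellman and Zadeh, takes a decision's membership as the minimum of its goal and constraint memberships and aggregates individual decisions across stakeholders by a further minimum; whether the paper uses exactly this form is not stated, so the sketch below, with made-up membership values, is illustrative only.

```python
import numpy as np

requirements = ["low cost", "tight schedule", "high performance"]

# Hypothetical membership degrees in [0, 1]: how well each requirement satisfies
# a stakeholder's goal and constraint (rows = stakeholders, columns = requirements).
goal = np.array([[0.8, 0.4, 0.9],
                 [0.6, 0.7, 0.5]])
constraint = np.array([[0.7, 0.9, 0.4],
                       [0.9, 0.5, 0.8]])

# Individual fuzzy decision: minimum of goal and constraint memberships
individual = np.minimum(goal, constraint)

# Consolidated (multi-person) decision: intersect the individual decisions as well
consolidated = individual.min(axis=0)

order = np.argsort(consolidated)[::-1]
print("consensual preference ordering:", [requirements[i] for i in order])
```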


Journal ArticleDOI
TL;DR: This article presents a review of the computing models applied to the problem of midterm load forecasting; the forecasting results can be used in electricity generation tasks such as energy reservation and maintenance scheduling.
Abstract: This article presents a review of the computing models applied to the problem of midterm load forecasting. The load forecasting results can be used in electricity generation tasks such as energy reservation and maintenance scheduling. The principles, strategies and results of short-term, midterm and long-term load forecasting using statistical methods and artificial intelligence (AI) techniques are summarized, and the methods are compared across articles that use different input features and strategies. Finally, conclusions are drawn from the reviewed literature for solving the problem of midterm load forecasting (MTLF).

Journal ArticleDOI
TL;DR: This paper discusses dynamic creation of replicas, replica placement and replica selection, implemented using OptorSim, a data grid simulator developed by the European DataGrid project.
Abstract: Data grids provide distributed resources for dealing with large-scale applications that generate huge volumes of data sets. Data replication, a technique much discussed by data grid researchers in recent years, creates multiple copies of a file and stores them in convenient locations to shorten file access times. Key challenges in data replication are the creation of replicas, replica placement and replica selection. Dynamic creation of replicas at a suitable site by the data replication strategy can increase the system's performance. When creating replicas, a decision has to be made on when to create a replica and which one to create; this decision is based on the popularity of the file. Replica placement selects the best site where replicas should be placed; placing the replicas at the appropriate site reduces bandwidth consumption and the job execution time. Replica selection decides which replica to access among the many replicas. This paper discusses dynamic creation of replicas, replica placement and replica selection. The approach is implemented using OptorSim, a data grid simulator developed by the European DataGrid project.
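The abstract states that the replica-creation decision is based on file popularity. The toy sketch below (not OptorSim's actual algorithm) counts accesses per file and site, triggers replication to the busiest requesting site once a popularity threshold is crossed, and selects a local replica when one exists; the threshold and site names are assumptions.

```python
from collections import Counter, defaultdict

class PopularityReplicator:
    """Toy dynamic replication: replicate a popular file to its hottest requesting site."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.accesses = defaultdict(Counter)   # file -> Counter of requesting sites
        self.replicas = defaultdict(set)       # file -> sites holding a replica

    def record_access(self, file_id, site):
        self.accesses[file_id][site] += 1
        total = sum(self.accesses[file_id].values())
        if total >= self.threshold:            # replica creation: file is popular enough
            hot_site, _ = self.accesses[file_id].most_common(1)[0]
            if hot_site not in self.replicas[file_id]:
                self.replicas[file_id].add(hot_site)      # placement decision
                print(f"replicate {file_id} to {hot_site}")

    def select_replica(self, file_id, site):
        """Replica selection: prefer a local replica, else any existing one."""
        sites = self.replicas[file_id]
        return site if site in sites else next(iter(sites), "origin")

rep = PopularityReplicator()
for s in ["siteA", "siteA", "siteB", "siteA"]:
    rep.record_access("dataset1", s)
print(rep.select_replica("dataset1", "siteA"))
```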



Journal ArticleDOI
TL;DR: The implementation of this work successfully achieved significant compression ratios, and the sample images chosen showed different degrees of contrast and fine detail to demonstrate how the compression affected the high-frequency components within the images.
Abstract: Image and video compression is one of the major components used in video telephony, videoconferencing and multimedia-related applications, where digital pixel information can comprise considerably large amounts of data. Management of such data can involve significant overhead in computational complexity and data processing. Compression allows efficient utilization of channel bandwidth and storage size. Typical access speeds for storage media are inversely proportional to capacity. Through data compression, such tasks can be optimized. One of the commonly used methods for image and video compression is JPEG (an image compression standard). Image and video compressors and decompressors (codecs) are implemented mainly in software on digital signal processors. Hardware-specific codecs can be integrated into digital systems fairly easily, requiring work only in the areas of interface and overall integration. Using an FPGA (Field Programmable Gate Array) to implement a codec combines the best of both worlds. The implementation in this work combines the JPEG algorithm with Artificial Neural Networks (ANNs). The core compression design was created using the Verilog hardware description language. The supporting software was written in MATLAB and developed for a DSP and the PC. The implementation of this work successfully achieved significant compression ratios. The sample images chosen showed different degrees of contrast and fine detail to demonstrate how the compression affected the high-frequency components within the images.
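As a minimal software sketch of the JPEG core stage mentioned above (8x8 block DCT followed by quantization), written in NumPy rather than the Verilog/MATLAB/ANN implementation described in the paper; a single uniform quantization step is used here only to keep the example short, whereas real JPEG uses standard quantization tables and entropy coding.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (the transform applied to 8x8 blocks in JPEG)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def jpeg_like_block(block, q_step=16):
    """Forward 2-D DCT + uniform quantization of one 8x8 block (level-shifted by 128)."""
    C = dct_matrix(8)
    coeffs = C @ (block - 128.0) @ C.T        # 2-D DCT
    quantized = np.round(coeffs / q_step)     # most high-frequency terms become zero
    reconstructed = C.T @ (quantized * q_step) @ C + 128.0
    return quantized, reconstructed

block = np.tile(np.linspace(100, 160, 8), (8, 1))   # smooth toy 8x8 block
q, rec = jpeg_like_block(block)
print("nonzero quantized coefficients:", int(np.count_nonzero(q)))
```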

Journal ArticleDOI
TL;DR: An investigation of different routing protocols and their performance on 802.16 WiMAX networks shows that DSDV in general outperforms the other routing protocols.
Abstract: The selection of an appropriate routing protocol is a key issue when designing scalable and efficient wireless networks. Various routing protocols have been used in wireless networks. In this paper, we investigate different routing protocols and evaluate their performance on 802.16 WiMAX networks. Using simulation, the routing protocols have been tested with various network parameters. The results show that DSDV in general outperforms the other routing protocols.

Journal ArticleDOI
TL;DR: This paper proposes three methods for an architecture for a load balancing system with security: an architecture for a mobile agent to roam all the nodes in a distributed network, an architecture to rearrange the loads among the peers for better performance of the distributed system, and the use of the mobile agent to provide security in the network.
Abstract: Load balancing is commonly used for highly efficient utilization of physical or logical resources and for enhancing the performance of distributed systems and the scalability of the Internet. Numerous proposals exist for load balancing in peer-to-peer networks, but they do not adequately address security issues. Load balancing among the peers is critical to providing a solution for the distribution of resources with security. This paper proposes three methods for an architecture for a load balancing system with security. First, it addresses an architecture for a mobile agent to roam all the nodes in a distributed network. Second, it addresses an architecture to rearrange the loads among the peers for better performance of the distributed system. Third, and perhaps most significantly, it addresses the use of the mobile agent to provide security in the network.

Journal ArticleDOI
TL;DR: In this article, the authors propose new designs for optical signal processing elements based solely on the combination of single-mode waveguides and 2x2 multimode interference couplers on a silicon-on-insulator (SOI) platform.
Abstract: This paper proposes new designs for optical signal processing elements based solely on the combination of single-mode waveguides and 2x2 multimode interference (MMI) couplers on a silicon-on-insulator (SOI) platform. For the first time, it is shown how optical Hadamard and Haar wavelet transforms may be implemented on an SOI platform using these passive planar devices. The designs for these devices are optimized using the three-dimensional beam propagation method (3D BPM).
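To make the transforms concrete: a Hadamard matrix of order 2^n is built recursively from a 2x2 kernel, and one Haar wavelet level splits a signal into pairwise averages and details. The NumPy sketch below shows only these matrices and operations, not the photonic MMI-coupler designs themselves.

```python
import numpy as np

def hadamard(order):
    """Orthonormal Hadamard matrix of size 2**order, built by the Sylvester recursion."""
    H = np.array([[1.0]])
    kernel = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    for _ in range(order):
        H = np.kron(kernel, H)
    return H

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform: pairwise averages and details."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

x = np.array([1.0, 2.0, 3.0, 4.0])
print("Hadamard transform:", hadamard(2) @ x)
print("Haar averages/details:", haar_step(x))
```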

Journal ArticleDOI
TL;DR: A fuzzification operation is applied to extract the pixel-wise association of face images with different classes, using a membership function to obtain the degree of belonging of a particular pixel to each class.
Abstract: This paper brings out a new approach to information extraction based on fuzzy logic, which can be used for a robust face recognition system. We have applied a fuzzification operation to extract the pixel-wise association of face images with different classes. The fuzzification operation uses a membership function to obtain the degree of belonging of a particular pixel to each class. Further, nearest neighbor classification using the correlation coefficient and principal component analysis is used to obtain the classification error over the AT&T face database. The results clearly confirm the superiority of the proposed approach.
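The fuzzification step is not fully specified in the abstract, so the sketch below only illustrates the downstream nearest-neighbour classification by correlation coefficient on flattened (or membership-transformed) face vectors; random placeholder data stand in for the AT&T images.

```python
import numpy as np

def correlation_nn(train_X, train_y, test_x):
    """Nearest-neighbour classification using the Pearson correlation coefficient."""
    scores = [np.corrcoef(test_x, t)[0, 1] for t in train_X]
    return train_y[int(np.argmax(scores))]

# Placeholder 'face' vectors (e.g. flattened images or pixel-wise membership degrees)
rng = np.random.default_rng(4)
train_X = rng.random((20, 64))          # 20 training faces, 64 features each
train_y = np.repeat(np.arange(5), 4)    # 5 subjects, 4 images per subject
test_x = train_X[7] + 0.05 * rng.standard_normal(64)   # noisy copy of a known face

print("predicted class:", correlation_nn(train_X, train_y, test_x),
      "true class:", train_y[7])
```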

Journal Article
TL;DR: The duration of each activity is estimated by experts as linguistic variables, and these variables are represented as fuzzy numbers using fuzzy theory to estimate the project accomplishment duration and determine the project critical path.
Abstract: Correct scheduling of a project is a necessary condition for its success. In traditional models, the activity durations are deterministic and known. In the real world, however, accurate calculation of the time needed to perform each activity is not possible and is always subject to uncertainty. In this paper, the duration of each activity is estimated by experts as linguistic variables, and these variables are represented as fuzzy numbers using fuzzy theory. Estimating the project accomplishment duration and determining the project critical path then become possible by solving a fuzzy linear programming model. To solve the model, a Fuzzy Critical Path Method Algorithm (FCPMA) that uses fuzzy number ranking is introduced. Defuzzification of the fuzzy numbers does not occur in any step of the method, and the project accomplishment duration is obtained as a trapezoidal fuzzy number. Finally, the performance of the introduced algorithm is demonstrated using an application example.
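As a rough illustration of the fuzzy arithmetic involved (not the FCPMA itself): trapezoidal fuzzy durations add component-wise along a path, and paths can be compared with a simple ranking score while the result stays a trapezoidal fuzzy number. The activity durations and the averaging-based ranking below are illustrative assumptions.

```python
from functools import reduce

def add_trap(a, b):
    """Sum of two trapezoidal fuzzy numbers (a1, a2, a3, a4), added component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def rank(trap):
    """Simple ranking score (mean of the four defining points); other rankings exist."""
    return sum(trap) / 4.0

# Hypothetical activity durations (days) as trapezoidal fuzzy numbers, grouped by path
paths = {
    "A-B-D": [(2, 3, 4, 5), (4, 5, 6, 8), (1, 2, 2, 3)],
    "A-C-D": [(3, 4, 5, 6), (2, 3, 4, 5), (1, 2, 2, 3)],
}

path_durations = {name: reduce(add_trap, acts) for name, acts in paths.items()}
critical = max(path_durations, key=lambda n: rank(path_durations[n]))
print("fuzzy path durations:", path_durations)
print("critical path:", critical, "with fuzzy duration", path_durations[critical])
```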


Journal ArticleDOI
TL;DR: In this paper, the authors assimilate the voltage capacity of the DC storage unit of a DVR with the transmission voltage so that the information is available in a convenient manner and can be used as a basis for continued research in this area.
Abstract: Voltage sags and momentary power interruptions are probably the most significant power quality problems affecting industrial and large commercial customers. The installation of mitigation devices can be seen as a short-term solution. The DVR is a very effective series-compensation device for mitigating voltage sags, and the mitigation capability of such devices is mainly limited by their energy storage capacity. This paper is intended to assimilate the voltage capacity of the DC storage unit of a DVR with the transmission voltage so that the information is available in a convenient manner and can be used as a basis for continued research in this area.

Journal ArticleDOI
TL;DR: The design of a nonlinear feedback controller is analyzed for temperature control of continuous stirred tank reactors (CSTRs), which have strong nonlinearities, and a method for adaptive control of a continuous stirred tank reactor with an output temperature constraint is developed.
Abstract: This paper presents and analyzes the design of a nonlinear feedback controller for temperature control of continuous stirred tank reactors (CSTRs), which have strong nonlinearities. Consequently, a control mechanism is needed that makes the proper changes to the process to cancel the negative impact that such nonlinearities may have on the desired operation of the chemical plant. A method for adaptive control of a continuous stirred tank reactor with an output temperature constraint is developed. The controller is robust to modeling errors and random disturbances occurring in the system. For this situation, two controllers, adaptive and PID, are applied and analyzed to determine which provides the most linear response; basic PID controllers have difficulty in dealing with problems that appear in complex nonlinear processes. Simulation studies give satisfactory results.
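The paper's adaptive controller is not specified in the abstract, so as a baseline illustration only, the sketch below simulates a simple PID loop with anti-windup around a toy nonlinear temperature model; the plant equations and gains are placeholders, not the actual CSTR dynamics.

```python
import numpy as np

def simulate_pid(setpoint=350.0, dt=0.1, steps=600, kp=2.0, ki=0.5, kd=0.1, u_max=10.0):
    """PID temperature control of a toy nonlinear plant (placeholder for a CSTR)."""
    T, T_amb = 300.0, 300.0      # current and ambient temperature (K)
    a, b = 0.05, 0.8             # cooling rate and heater gain (illustrative values)
    integral, prev_err = 0.0, 0.0
    history = []
    for _ in range(steps):
        err = setpoint - T
        deriv = (err - prev_err) / dt
        u_raw = kp * err + ki * integral + kd * deriv
        u = min(max(u_raw, 0.0), u_max)       # bounded heater input
        if u == u_raw:                        # simple anti-windup: integrate only when unsaturated
            integral += err * dt
        prev_err = err
        # Toy nonlinear plant: Newtonian cooling + heater input + mild quadratic term
        dT = -a * (T - T_amb) + b * u + 1e-4 * (T - T_amb) ** 2
        T += dT * dt
        history.append(T)
    return np.array(history)

traj = simulate_pid()
print(f"final temperature: {traj[-1]:.1f} K (setpoint 350 K)")
```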

Journal ArticleDOI
TL;DR: In this paper, the effects of process parameters, including laser power, welding speed and focal point position, on butt weld geometries were investigated using an Artificial Neural Network (ANN).
Abstract: Nowadays, Artificial Neural Networks (ANNs) are widely used for modeling and investigating the effects of process parameters. In the presented study, laser butt welding of Ti-6Al-4V material with a 2.2 kW CO2 laser is investigated. The experiments were designed using a five-level Response Surface Method (RSM). The effects of process parameters, including laser power, welding speed and focal point position, on the butt weld geometries were investigated using an Artificial Neural Network. The results indicate that the welding speed and laser power have significant effects, whereas the focal point position shows little effect on the process. The welding speed has an opposite effect on all responses, while the laser power has a positive effect.

Journal ArticleDOI
TL;DR: In this article, acid-activated sawdust was used to remove dyes from aqueous solution in a column filtration reactor, and the experimental data were fitted to the Langmuir and Freundlich isotherms.
Abstract: The use of a low-cost, recycled-waste and eco-friendly adsorbent has been investigated as an alternative to currently expensive processes for removing dyes from wastewater. In this study, acid-activated sawdust was used to remove dyes from aqueous solution in a column filtration reactor. Sawdust is an excellent low-cost adsorbent of colored organic anions and may have significant potential for color removal from tannery wastewater. The effectiveness of acid-activated sawdust in adsorbing Lurazol Brown pH (LBP) dye from aqueous solutions was studied as a function of agitation time, adsorbent dosage and initial dye concentration. The experimental data were fitted to the Langmuir and Freundlich isotherms, and the adsorption process was found to follow both isotherms. The values of the Langmuir and Freundlich constants indicate favorable and beneficial adsorption. A two-stage treatment system was developed and its performance assessed for a variety of initial dye concentrations. This was backed by a series of laboratory experiments, the results of which provide a better scientific understanding of biodegradable materials like acid-activated sawdust and help realize their potential as commercial products.
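As a worked illustration of the isotherm fitting mentioned above, using scipy.optimize.curve_fit for the Langmuir and Freundlich models on made-up equilibrium data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce = residual dye conc. (mg/L), qe = uptake (mg/g)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
qe = np.array([8.2, 13.5, 19.8, 25.1, 28.9, 30.2])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])

# Separation factor RL = 1/(1 + KL*C0): values between 0 and 1 indicate favourable adsorption
RL = 1.0 / (1.0 + KL * Ce.max())
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg, RL={RL:.2f}")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f} (n > 1 suggests favourable adsorption)")
```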