
Showing papers in "Journal of Computers in 2011"


Journal ArticleDOI
TL;DR: This paper presents a simple and efficient algorithm for multi-focus image fusion based on a multi-resolution signal decomposition scheme, the Laplacian pyramid; experiments show that the method performs well and that the quality of the fused image is better than the results of other methods.
Abstract: This paper presents a simple and efficient algorithm for multi-focus image fusion that uses a multi-resolution signal decomposition scheme, the Laplacian pyramid. The principle of the Laplacian pyramid transform is introduced, and the fusion strategy based on it is described in detail. The method consists of three steps. First, the Laplacian pyramid of each source image is constructed separately; then each level of the new Laplacian pyramid is fused with a different rule: the maximum-region-information rule for the top level and the maximum-region-energy rule for the remaining levels. Finally, the fused image is obtained by the inverse Laplacian pyramid transform. Two sets of images are used to verify the proposed fusion approach and to compare it with other fusion approaches. Analysis of the experimental results shows that the method performs well and that the quality of the fused image is better than the results of other methods.
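The three steps above (pyramid construction, per-level fusion, inverse transform) can be sketched in NumPy as follows. This is a minimal illustration, not the paper's method: it substitutes a simple maximum-magnitude rule for the region-information and region-energy rules, and 2x2 block averaging for the usual Gaussian filtering.

```python
import numpy as np

def down(img):
    # 2x2 block average (stand-in for Gaussian blur + subsampling)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    # nearest-neighbour upsampling back to the finer grid
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))   # band-pass residual at this level
        cur = small
    pyr.append(cur)                   # low-pass top level
    return pyr

def fuse(pyr_a, pyr_b):
    # toy rule: keep the coefficient with larger magnitude at every level
    return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pyr_a, pyr_b)]

def collapse(pyr):
    # inverse transform: upsample and add the residual at each level
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = up(cur) + lap
    return cur
```

Because the residuals are computed against the same upsampling operator, collapsing a pyramid reconstructs the source image exactly, which is what makes the per-level fusion well-defined.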

181 citations


Journal ArticleDOI
TL;DR: Experimental results suggest that the proposed classifier is an effective algorithm for classification tasks in many practical situations, owing to its satisfactory classification performance and robustness over a wide range of k.
Abstract: The K-nearest neighbor rule (KNN) is a well-known non-parametric technique in statistical pattern classification, owing to its simplicity, intuitiveness and effectiveness. In this paper, we first briefly review related work and analyze in detail the sensitivity of the KNN rule to the choice of the neighborhood size k. Motivated by this problem, a novel dual weighted voting scheme for KNN is developed. With the goal of overcoming the sensitivity to the choice of k and improving classification performance, the proposed classifier employs a dual weighted voting function to reduce the effect of outliers among the k nearest neighbors of each query object. To verify the superiority of the proposed classifier, experiments are conducted on one artificial data set and twelve real data sets, in comparison with other classifiers. Experimental results suggest that the proposed classifier is an effective algorithm for classification tasks in many practical situations, owing to its satisfactory classification performance and robustness over a wide range of k.
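The idea of weighting votes to damp outliers among the k neighbors can be sketched as below. This is a plain inverse-distance-weighted vote, a simpler stand-in for the paper's dual weighting function (which additionally weights by neighbor rank and is not specified here):

```python
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, query, k=5):
    # Distance-weighted vote: closer neighbours count more, which reduces
    # the influence of outliers among the k nearest neighbours and makes
    # the prediction less sensitive to the exact choice of k.
    d = np.linalg.norm(X_train - query, axis=1)
    idx = np.argsort(d)[:k]
    votes = defaultdict(float)
    for i in idx:
        votes[y_train[i]] += 1.0 / (d[i] + 1e-12)   # inverse-distance weight
    return max(votes, key=votes.get)
```

With uniform voting, a single mislabeled point among the k neighbors shifts the tally by a full vote; with distance weighting its influence shrinks with its distance from the query.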

80 citations


Journal ArticleDOI
TL;DR: A self-adaptive mutation operation based on the degree of a path blocked by obstacles is designed to improve the feasibility of a new path in a global path planning approach based on multi-objective particle swarm optimization.
Abstract: Aiming at robot path planning in an environment with danger sources, a global path planning approach based on multi-objective particle swarm optimization is presented in this paper. First, based on the environment map of a mobile robot described with a series of horizontal and vertical lines, an optimization model of the above problem is established with two indices, i.e. the length and the danger degree of a path. Then, an improved multi-objective particle swarm optimization algorithm for solving this model is developed. In this algorithm, a self-adaptive mutation operation based on the degree to which a path is blocked by obstacles is designed to improve the feasibility of a new path. To improve the algorithm's exploration performance, a second archive is adopted to save infeasible solutions alongside the feasible-solutions archive, and the global leader of the particles is selected from either archive. Moreover, a constrained Pareto domination based on the degree to which a path is blocked by obstacles is employed to update the local leaders of the particles and the two archives. Finally, simulation results confirm the effectiveness of the algorithm.
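A constrained Pareto domination of the kind described can be sketched as follows; this is the standard Deb-style rule used as an illustration, with the "degree of a path blocked by obstacles" playing the role of the constraint violation:

```python
def dominates(obj_a, obj_b, viol_a=0.0, viol_b=0.0):
    # Constrained Pareto domination: a feasible path beats an infeasible
    # one; between two infeasible paths the smaller blocked degree wins;
    # between two feasible paths ordinary Pareto domination applies.
    if viol_a == 0.0 and viol_b > 0.0:
        return True
    if viol_a > 0.0 and viol_b == 0.0:
        return False
    if viol_a > 0.0 and viol_b > 0.0:
        return viol_a < viol_b
    # both feasible: minimise every objective (path length, danger degree)
    not_worse = all(a <= b for a, b in zip(obj_a, obj_b))
    strictly_better = any(a < b for a, b in zip(obj_a, obj_b))
    return not_worse and strictly_better
```

Ranking infeasible paths by their blocked degree, rather than discarding them, is what lets the infeasible archive contribute useful leaders during exploration.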

80 citations


Journal ArticleDOI
TL;DR: This paper proposes a new version of the global K-means algorithm that takes less run time, efficiently handles high-dimensional gene expression data, and resists the influence of noisy data on clustering results.
Abstract: K-means clustering is a popular clustering algorithm based on partitioning the data. However, it suffers from several shortcomings: it requires the user to specify the number of clusters in advance, it is sensitive to initial conditions, and it is easily trapped in a local solution. The global K-means algorithm proposed by Likas et al. is an incremental approach that dynamically adds one cluster center at a time through a deterministic global search consisting of N runs of the K-means algorithm (with N the size of the data set) from suitable initial positions. It does not depend on any initial conditions or parameters and considerably outperforms K-means, but it carries a heavy computational load. In this paper, we propose a new, efficient version of the global K-means algorithm whose outstanding feature is its execution time: it takes less run time than the available global K-means algorithms. We modify the way the optimal initial center of the next new cluster is found by defining a new criterion function for selecting the candidate center. Our idea was inspired by Park and Jun's K-medoids clustering algorithm. We choose the best candidate initial center for the next cluster by evaluating the new function, which uses information about the natural distribution of the data, so that the chosen center is a point that not only has high density but also lies apart from the existing cluster centers. Experiments on fourteen well-known data sets from the UCI machine learning repository show that the new algorithm significantly reduces computational time without affecting the performance of the global K-means algorithm.
Further experiments demonstrate that the improved algorithm greatly outperforms the global K-means algorithm and is suitable for clustering large data sets. Experiments on a colon cancer tissue data set show that the new algorithm can efficiently handle high-dimensional gene expression data, and results on synthetic data sets with different proportions of noisy points show that it resists the influence of noisy data on the clustering results.
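The incremental add-one-center-at-a-time scheme can be sketched as below. This uses the error-reduction bound from Likas et al.'s *fast* global K-means to pick the next candidate center, not the density-plus-separation criterion this paper proposes (which is not fully specified in the abstract):

```python
import numpy as np

def kmeans(X, centers, iters=50):
    # Standard Lloyd iterations from the given initial centres.
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(len(centers))])
    return centers, labels

def fast_global_kmeans(X, K):
    # Start from the overall mean, then deterministically add one centre
    # at a time instead of depending on a random initialisation.
    centers = X.mean(0, keepdims=True)
    for _ in range(2, K + 1):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1).min(1)  # current sq. dists
        # bound on how much placing the new centre at each data point
        # would reduce the total clustering error
        pairwise = ((X[:, None] - X[None]) ** 2).sum(-1)
        gain = np.maximum(d[None, :] - pairwise, 0).sum(1)
        best = X[np.argmax(gain)]
        centers, _ = kmeans(X, np.vstack([centers, best]))
    return kmeans(X, centers)
```

The expensive part the paper targets is exactly this candidate-selection step: a cheaper criterion for the next initial center avoids the N restarts of plain global K-means.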

78 citations


Journal ArticleDOI
TL;DR: Direct and indirect similarities between users are proposed, and the similarity matrix is computed from the relative distance between user ratings, yielding a new collaborative filtering approach that alleviates the sparsity problem and improves the quality of the recommendation.
Abstract: Recommender systems are widely applied in many fields, such as e-commerce, to provide products, services and information to potential customers. Collaborative filtering, the most successful approach, recommends content to the current customer mainly based on the past transactions and feedback of similar customers. However, it is difficult to distinguish similar interests between customers when the sparsity problem arises from an insufficient number of transactions and feedback data, which limits the usability of collaborative filtering. This paper proposes a direct similarity and an indirect similarity between users, computes the similarity matrix from the relative distance between users' ratings, and uses association retrieval technology to explore transitive associations based on users' feedback data, realizing a new collaborative filtering approach that alleviates the sparsity problem and improves the quality of the recommendation. Finally, we conducted experiments on the MovieLens data set; the results indicate that the proposed approach can effectively alleviate the sparsity problem and achieves good coverage and recommendation quality.
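The direct/indirect distinction can be sketched as follows. This is an illustrative simplification: cosine similarity stands in for the paper's relative-distance similarity, and one matrix product stands in for its association-retrieval step over transitive associations:

```python
import numpy as np

def direct_similarity(R):
    # Cosine similarity between users' rating vectors (0 = unrated).
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    U = R / norms
    S = U @ U.T
    np.fill_diagonal(S, 0.0)
    return S

def indirect_similarity(S):
    # One step of transitive association: users who are similar to the
    # same neighbours become similar to each other, which densifies the
    # similarity matrix when co-rated items are scarce (the sparsity problem).
    T = S @ S
    m = T.max()
    return T / m if m > 0 else T
```

Two users with no co-rated items have zero direct similarity, but if both are similar to a common third user, the indirect similarity is positive, which is exactly the relief the sparsity problem needs.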

60 citations


Journal ArticleDOI
TL;DR: Results on inter-subject registration of 42 slices indicate that the proposed method can produce accurate and optimized tensor registration parameters faster than the Genetic Algorithm and Particle Swarm Optimization.
Abstract: Registration, or spatial normalization, of diffusion tensor images plays an important role in many areas of human brain white matter research, such as analysis of Fractional Anisotropy (FA) or white matter tracts. More difficult than registration of scalar images, spatial normalization of tensor images requires two important parts: tensor interpolation and tensor reorientation. Current tensor reorientation strategies have many defects during tensor registration. To overcome these shortcomings, we first present a multi-channel model with one FA channel and six log-Euclidean tensor channels, and then propose an adaptive chaotic particle swarm optimization to find the global minimum of the objective function of the multi-channel model. Results on inter-subject registration of 42 slices indicate that our method produces accurate and optimized tensor registration parameters faster than the Genetic Algorithm and Particle Swarm Optimization.

46 citations


Journal ArticleDOI
TL;DR: This paper looks deeper into some key issues of smart grid, such as distributed cooperation and control, data and application integration, and knowledge-based comprehensive decision, and gives some solutions to resolve these challenges.
Abstract: Since its emergence, smart grid has been given increasingly widespread attention. Basically, smart grid combines various modern technologies such as network communication, information processing and distributed control to provide a more secure, reliable and intelligent grid, thus meeting the requirements of future social and economic development. As a new paradigm in the power grid, smart grid undoubtedly represents the mainstream trend of the future electric industry. As a result, it also brings new technical challenges to researchers and engineering practitioners. To support researchers and practitioners in constructing a modern and intelligent grid, research in the field of smart grid has proliferated. In this paper, we look deeper into some key issues of smart grid, such as distributed cooperation and control, data and application integration, and knowledge-based comprehensive decision making, and we give some solutions to these challenges. In addition, we introduce the concept of smart grid and define its key characteristics. Finally, we outline future directions of research and conclude the paper.

46 citations


Journal ArticleDOI
TL;DR: A novel image denoising algorithm named FIIDA, based on the Riemann-Liouville definition of fractional calculus, is proposed; its performance is superior to the Gaussian smoothing filter, especially when the noise standard deviation is less than 30.
Abstract: In this paper, a novel image denoising algorithm named the fractional integral image denoising algorithm (FIIDA) is proposed, which is based on the Riemann-Liouville definition of the fractional calculus. The structures of the n*n fractional integral masks of this algorithm in the directions of 0, 45, 90, 135, 180, 225, 270 and 315 degrees are constructed and discussed. The denoising performance of FIIDA is measured experimentally according to subjective and objective standards of visual perception and PSNR values. The simulation results show that FIIDA's performance is superior to the Gaussian smoothing filter, especially when the noise standard deviation is less than 30.
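How such mask coefficients arise can be sketched as follows. This sketch uses the Grünwald-Letnikov discretization of the fractional integral of order v (a common discrete approximation; the paper works from the Riemann-Liouville definition), and normalizes the weights so a flat region passes through unchanged — a denoising convention assumed here, not taken from the paper:

```python
def fractional_integral_weights(v, n):
    # Grunwald-Letnikov weights for a fractional integral of order v > 0:
    # w_k = Gamma(k + v) / (Gamma(v) * k!), computed by the stable
    # recurrence w_k = w_{k-1} * (k - 1 + v) / k (all weights positive).
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 + v) / k)
    s = sum(w)
    return [x / s for x in w]   # normalise so a constant region is unchanged

def denoise_1d(signal, v=0.5, n=5):
    # Apply the causal mask along one direction; the paper builds eight
    # such masks (0, 45, ..., 315 degrees) over an n*n window.
    w = fractional_integral_weights(v, n)
    out = []
    for i in range(len(signal)):
        acc = sum(w[k] * signal[max(i - k, 0)] for k in range(n))
        out.append(acc)
    return out
```

Because the weights decay slowly compared with a Gaussian, the fractional mask smooths noise while retaining more low-contrast texture, which is the motivation for comparing against Gaussian smoothing.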

44 citations


Journal ArticleDOI
Guoming Hu, Zhenyu Hu, Bin Jian, Liping Liu, Hui Wan 
TL;DR: Neglecting adhesion and plasticity, an improved model based on Hertzian theory is proposed, in which an approximate expression relating the damping constant to the restitution coefficient is given.
Abstract: The Discrete Element Method (DEM) is widely used in the simulation of particle systems. Viscoelastic contact models are the most common ones in DEM simulation. However, these models still have a few unrealistic behaviors, which influence the reasonableness and accuracy of the simulation results. A general form of the damping coefficient is proposed through dimensional analysis, and the condition under which the unrealistic behaviors can be eliminated is found. Neglecting adhesion and plasticity, an improved model based on Hertzian theory is proposed, in which an approximate expression relating the damping constant to the restitution coefficient is given. The impact of a single ball against a wall is simulated using three different models, and the resulting contact forces are presented and discussed.

43 citations


Journal ArticleDOI
TL;DR: The results from a 2 * 2 experiment design indicate that the impact of online reviews valence is moderated by consumer expertise: the impact difference between negative reviews and positive reviews is greater for consumers with low expertise than for those with high expertise.
Abstract: Previous studies have shown an inconsistent relationship between the valence (positive or negative) of online consumer reviews and consumer decision making. Using accessibility/diagnosticity theory, this study attempts to explain this discrepancy by exploring consumer expertise as a moderator. Our results from a 2 * 2 experiment design indicate that the impact of online review valence is moderated by consumer expertise: the impact difference between negative reviews and positive reviews is greater for consumers with low expertise than for those with high expertise. Our study adds to the literature on the e-WOM effect, and we provide managerial implications for e-marketers.
Index Terms—online consumer reviews, valence, consumer expertise, accessibility/diagnosticity theory

41 citations


Journal ArticleDOI
TL;DR: The results show that this method can recognize characters precisely and improve the ability of license plate character recognition effectively.
Abstract: License plate localization and character segmentation and recognition are research hotspots of vehicle license plate recognition (VLPR) technology. A new method for VLPR is presented in this paper. In the license plate localization stage, Otsu binarization is applied to obtain plate-candidate regions, and a text-line is constructed from the candidate regions. According to the text-line construction result and the characteristics of license plate character arrangement, the license plate location is determined, and locally optimal adaptive binarization is then used to refine the localization. After localization, a vertical-projection segmentation method with prior knowledge is used to split the characters, and statistical features are extracted. A multilevel-classification RBF neural network then recognizes the characters using the feature vector as input. The results show that this method can recognize characters precisely and effectively improves license plate character recognition.
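The vertical-projection segmentation step can be sketched as below; this is a minimal version that marks characters as runs of non-empty columns, without the prior knowledge (expected character count and widths) the paper uses to refine the cuts:

```python
import numpy as np

def segment_characters(binary_plate, min_width=2):
    # Column-wise projection of a binarised plate image (1 = ink).
    # Characters correspond to runs of columns whose projection is
    # non-zero; gaps between characters give zero-valued columns.
    proj = binary_plate.sum(axis=0)
    segments, start = [], None
    for col, v in enumerate(proj):
        if v > 0 and start is None:
            start = col
        elif v == 0 and start is not None:
            if col - start >= min_width:   # drop thin noise runs
                segments.append((start, col))
            start = None
    if start is not None and len(proj) - start >= min_width:
        segments.append((start, len(proj)))
    return segments
```

Each returned `(start, end)` column range crops one character image, which is then passed to feature extraction and the RBF classifier.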

Journal ArticleDOI
TL;DR: In this paper, a new method using CP-Nets for the analysis of security protocols is presented that provides an open-ended base for the integration of multiple attack tactics and is a viable approach to overcome the state space explosion problem.
Abstract: Security protocols are the basis of security in networks. Therefore, it is essential to ensure that these protocols function correctly. However, it is difficult to design security protocols that are immune to malicious attack, since good analysis techniques are lacking. In this paper, the main current techniques using Colored Petri Nets (CP-Nets) for the analysis of security protocols are introduced. Based on these techniques, a new CP-Net-based method for the analysis of security protocols is presented. Specifically, an intruder CP-Net model is presented that provides an open-ended base for the integration of multiple attack tactics; this is a viable approach to overcoming the state-space explosion problem. Furthermore, the automated analysis tool CPN Tools is used. The Andrew secure RPC protocol is chosen to illustrate how a security protocol is analyzed using the new method. After model checking, an attack is found that is the same as the one found by Gavin Lowe, strongly confirming the validity of the new method for analyzing security protocols.

Journal ArticleDOI
TL;DR: Simulation results and comparisons with the standard ABC and several meta-heuristics show that the proposed differential ABC (DABC) can effectively enhance the searching efficiency and greatly improve the searching quality.
Abstract: Artificial bee colony (ABC) is one of the newest nature-inspired heuristics for optimization problems. In order to improve the convergence characteristics and to prevent the ABC from getting stuck on local solutions, a differential ABC (DABC) is proposed. The differential operator obeys a uniform distribution and creates candidate food positions that can fully represent the solution space, so the diversity of the population and the capability of global search are enhanced. To show the performance of the proposed DABC, a number of experiments are carried out on a set of well-known benchmark continuous optimization problems. Simulation results and comparisons with the standard ABC and several meta-heuristics show that the DABC can effectively enhance the searching efficiency and greatly improve the searching quality.
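A minimal version of the differential candidate-generation idea can be sketched as below. This compresses ABC to its employed-bee phase with greedy selection and is only a sketch of the general scheme: the exact differential operator, scout phase and onlooker phase of the paper's DABC are omitted.

```python
import numpy as np

def dabc_minimize(f, dim=5, n_food=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n_food, dim))        # food positions
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(n_food):
            r1, r2 = rng.choice([j for j in range(n_food) if j != i],
                                2, replace=False)
            # differential operator with a uniformly distributed scale
            # factor: the step is built from the difference of two other
            # food positions, so it adapts to the population spread
            trial = pop[i] + rng.uniform(-1, 1, dim) * (pop[r1] - pop[r2])
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft < fit[i]:          # greedy selection, as in standard ABC
                pop[i], fit[i] = trial, ft
    best = int(fit.argmin())
    return pop[best], fit[best]
```

Because the step is a scaled difference of population members, it shrinks automatically as the colony converges, which is the intended convergence improvement over the fixed random perturbation of standard ABC.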

Journal ArticleDOI
TL;DR: Test results show that the efficiency and accuracy of service selection are improved with the service selection method based on the OWL-SQ model and its dimensionless treatment of non-numerical and numerical QoS properties.
Abstract: In order to select the proper service from the web service candidate sets provided by semantic matching, a method for web service selection with QoS (quality of service) constraints is proposed. First, a four-level matching model for semantic web services based on a QoS ontology is presented, comprising application domain matching, service description matching, service function matching and QoS matching. Then, the QoS ontology for web services is constructed and the QoS description information in the OWL-S (Web Ontology Language for Services) model is extended with the QoS concept. Third, the extensible OWL-SQ model for service description with service semantics and QoS description is presented, which supports and constrains QoS. Then, based on decision theory and a normalization algorithm, a dimensionless method for the non-numerical and numerical QoS properties in the OWL-SQ model is proposed. Finally, an algorithm for constructing the QoS matching matrix and a service selection method are presented, to comprehensively evaluate and select the optimal candidate web service. Test results show that the efficiency and accuracy of service selection are improved with the selection method based on the OWL-SQ model.
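The dimensionless (normalization) step for numerical QoS properties can be sketched as below. This is standard min-max scaling used as an illustration; the paper's treatment of non-numerical QoS values (mapping them onto a numeric scale first) is not shown:

```python
def normalize_qos(values, benefit=True):
    # Min-max scaling to [0, 1] makes QoS properties with different units
    # comparable. Benefit attributes (e.g. reliability) scale upward;
    # cost attributes (e.g. price, response time) are inverted so that a
    # larger score is always better.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]   # all candidates tie on this property
    if benefit:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]
```

After normalization, each candidate's scores across QoS properties form one row of the matching matrix, and selection reduces to comparing weighted sums of rows.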

Journal ArticleDOI
TL;DR: Experiments prove that the network obtained by the GA-based optimization method has a better architecture and stronger classification ability, and the time for constructing the network manually is saved.
Abstract: Artificial Neural Networks (ANNs) are nonlinear, adaptive information processing systems composed of numerous processing units. With their characteristics of self-adaptation, self-organization and real-time learning, they play an important role in pattern recognition, machine learning and data mining. But many problems are encountered in practice, such as the selection of the structure and parameters of the network, the selection of the learning samples and initial values, and the convergence of the learning algorithms. The genetic algorithm (GA) is a random search algorithm: on one hand, it simulates natural selection and evolution; on the other, it has good global search ability and can learn an approximately optimal solution without the gradient information of the error functions. In this paper, some GA-based optimization algorithms for ANNs are studied. First, an optimized BP neural network is set up, using GA to optimize the connection weights of the network, and then to optimize both the connection weights and the architecture. Second, an optimized RBF neural network is proposed. It uses a hybrid encoding method, encoding the network by binary encoding and the weights by real encoding; the network architecture is self-adaptively adjusted, the weights are learned, and the network is further tuned by the pseudo-inverse method or the LMS method. These networks are then applied to real-world classification tasks and compared with the modified BP algorithm with an adaptive learning rate. Experiments prove that the network obtained by this method has a better architecture and stronger classification ability, and the time for constructing the network manually is saved. The algorithm is a self-adaptive and intelligent learning algorithm.

Journal ArticleDOI
TL;DR: The job is to extend and enrich the CloudSim simulator with auction algorithms inherited from the GridSim simulator; since those algorithms do not support virtualization, an important part of cloud computing, several parameters and functions are introduced to adapt them to the cloud computing environment and to users' requirements.
Abstract: In cloud computing, the availability and performance of services are two important aspects to improve, because users require a certain level of service quality, in terms of timely completion of their tasks, at a lower cost. Several studies have addressed this problem with algorithms borrowed from economic models of the real-world economy to ensure that quality of service. Our job is to extend and enrich the CloudSim simulator with auction algorithms inherited from the GridSim simulator. However, those algorithms do not support virtualization, which is an important part of cloud computing, so we introduced several parameters and functions adapted to the cloud computing environment, as well as to users, to meet their requirements.

Journal ArticleDOI
TL;DR: A novel method to impute missing values in microarray time-series data combining k-nearest neighbor (KNN) and dynamic time warping (DTW); results show that this method is more accurate than existing missing value imputation methods on real microarray time-series datasets.
Abstract: Microarray technology provides an opportunity for scientists to analyze thousands of gene expression profiles simultaneously. However, microarray gene expression data often contain multiple missing expression values due to many reasons. Effective methods for missing value imputation in gene expression data are needed since many algorithms for gene analysis require a complete matrix of gene array values. Several algorithms are proposed to handle this problem, but they have various limitations. In this paper, we develop a novel method to impute missing values in microarray time-series data combining k-nearest neighbor (KNN) and dynamic time warping (DTW). We also analyze and implement several variants of DTW to further improve the efficiency and accuracy of our method. Experimental results show that our method is more accurate compared with existing missing value imputation methods on real microarray time series datasets.
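The KNN-plus-DTW combination can be sketched as below. This is a simplified illustration, not the paper's algorithm: similarity is measured by plain DTW on the target gene's observed columns against complete rows, and the missing entry is filled with the unweighted mean over the k nearest rows.

```python
import numpy as np

def dtw(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming DTW distance,
    # which tolerates small temporal shifts between expression profiles.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def impute(target, candidates, miss_col, k=3):
    # Rank complete rows by DTW distance to the target's observed part,
    # then fill the missing entry with the mean over the k nearest rows.
    obs = [j for j in range(len(target)) if j != miss_col]
    t = [target[j] for j in obs]
    d = [dtw(t, [row[j] for j in obs]) for row in candidates]
    nearest = np.argsort(d)[:k]
    return float(np.mean([candidates[i][miss_col] for i in nearest]))
```

Using DTW instead of Euclidean distance is the key point: two genes with the same temporal pattern but a small phase lag still rank as close neighbors.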

Journal ArticleDOI
TL;DR: A novel multimodal biometric system using a face-iris fusion feature is proposed; the feature-level fusion scheme normalizes the original iris and face features with a z-score model and concatenates the normalized feature vectors by the serial rule.
Abstract: With their widening application, unimodal biometric systems have to contend with a variety of problems such as background noise, signal noise and distortion, and environment or device variations. Multimodal biometric systems are therefore proposed to solve these problems. This paper proposes a novel multimodal biometric system using a face-iris fusion feature. Face and iris features are first extracted separately and then fused at the feature level. However, existing feature-level schemes such as the sum rule and the weighted sum rule are inefficient in complicated conditions. In this paper, we adopt an efficient feature-level fusion scheme for iris and face in series. The algorithm normalizes the original iris and face features with a z-score model to eliminate the imbalance in order of magnitude and distribution between the two kinds of feature vectors, and then concatenates the normalized feature vectors by the serial rule. The proposed algorithm is tested on the CASIA iris database and two face databases (ORL and Yale). Experimental results show the effectiveness of the proposed algorithm.
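The z-score-then-concatenate scheme described above can be sketched directly (a minimal illustration; the upstream face and iris feature extractors are assumed to exist and are not shown):

```python
import numpy as np

def zscore(v):
    # Z-score normalisation removes the scale imbalance between the two
    # modalities before concatenation (mean 0, standard deviation 1).
    s = v.std()
    return (v - v.mean()) / s if s > 0 else v - v.mean()

def serial_fusion(face_feat, iris_feat):
    # Serial (concatenation) rule: the fused vector keeps every component
    # of both normalised feature vectors, unlike sum-rule fusion which
    # requires equal dimensions.
    return np.concatenate([zscore(face_feat), zscore(iris_feat)])
```

Without the z-score step, the modality with the larger numeric range would dominate any distance computed on the fused vector.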

Journal ArticleDOI
TL;DR: A method for robot indoor automatic positioning and orientation based on a two-dimensional (2D) barcode landmark is proposed; experiments show that it can meet the accuracy requirements of indoor positioning and orientation.
Abstract: A method for robot indoor automatic positioning and orientation based on a two-dimensional (2D) barcode landmark is proposed. Borrowing the scheme of the 2D barcode, a special landmark is designed that is convenient to operate and easy to recognize, contains the coordinates of its absolute position, and has some ability to automatically correct errors. Landmarks are placed on the ceiling and photographed by a camera mounted on the robot with its optical axis perpendicular to the ceiling plane. The coordinates and angle of the landmark are acquired through image segmentation, contour extraction, characteristic curve matching and landmark property identification, and the robot's current absolute position and heading angle are then computed. The experiments prove the effectiveness of the method and show that it can meet the accuracy requirements of indoor positioning and orientation.

Journal ArticleDOI
TL;DR: A new method of bearing fault diagnosis using the wavelet packet transform with a support vector machine (SVM) is presented that can also effectively diagnose compound faults.
Abstract: After briefly reviewing past research, a new method of bearing fault diagnosis based on the wavelet packet transform (WPT) and the support vector machine (SVM) is presented. Wavelet packets have greater decorrelation properties than standard wavelets in that they induce a finer partitioning of the frequency domain of the process generating the data. We analyze the vibration features of test signals from a bearing system under different running conditions by wavelet de-noising with thresholds, decompose the feature signals into different frequency bands with the WPT, and calculate the energy percentage of every frequency-band component. Energy eigenvectors of the frequency domain are extracted from the wavelet packet coefficients using the Shannon entropy principle, and are then used as inputs to the SVM to detect internal bearing faults; the fault state of the bearing is identified with a radial-basis-function genetic SVM. Notably, the method can also effectively diagnose compound faults.
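The band-energy feature extraction can be sketched as below. This uses the Haar filter pair for compactness (the paper does not name its wavelet, so this is an assumption), building the full wavelet-packet tree and reporting each terminal band's share of the signal energy:

```python
import numpy as np

def haar_step(x):
    # One level of the Haar analysis filter bank; the filters are
    # orthonormal, so the two sub-bands together preserve the energy.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def wpt_energy_features(signal, levels=2):
    # Full wavelet-packet tree: unlike the plain wavelet transform, every
    # band (not only the approximation) is split at every level, giving a
    # uniform partition of the frequency axis into 2**levels bands.
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_step(b)]
    energies = np.array([np.sum(b ** 2) for b in bands])
    return energies / energies.sum()   # energy percentage per band
```

The resulting percentage vector is the kind of fixed-length, frequency-localized feature that can feed an SVM: a fault resonance shifts energy into particular bands, and compound faults show up as energy in several bands at once.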

Journal ArticleDOI
TL;DR: This work establishes a sunspot prediction model with a NARX network and shows that, compared with a BP network and an ARIMA model, the NARX network better predicts the chaotic series.
Abstract: A chaotic time series is produced by a dynamic nonlinear system whose features cannot be fully captured by a linear regression model or a static neural network. A Nonlinear Autoregressive with eXogenous input (NARX) network includes feedback of the network output and can therefore better reflect the system's dynamic features. Taking the annual sunspot number as an example, after verifying the chaotic nature of the sunspot time series and calculating its embedding dimension and delay, we establish a sunspot prediction model with a NARX network. The results show that, compared with a BP network and an ARIMA model, the NARX network better predicts the chaotic series.

Journal ArticleDOI
TL;DR: The Google GFS massive data storage system and the popular open-source Hadoop HDFS are introduced in detail to explain the principles of cloud storage technology.
Abstract: Cloud storage is a new concept that came into being simultaneously with cloud computing, and it can be divided into public, private and hybrid cloud storage. This article gives a quick introduction to cloud storage. It covers the key technologies in cloud computing and cloud storage. The Google GFS massive data storage system and the popular open-source Hadoop HDFS are introduced in detail to explain the principles of cloud storage technology. As an important technology area and research direction, cloud storage is becoming a hot research topic in both academia and industry. Valuable future research directions are summarized at the end.

Journal ArticleDOI
Jiyi Wu, Qianli Shen, Tong Wang1, Ji Zhu, Jianlin Zhang 
TL;DR: A quick introduction to cloud security is given, covering the key technologies of cloud computing, security challenges and problems in cloud computing, and recent advances in cloud security.
Abstract: Cloud computing emerges as a new computing paradigm that aims to provide reliable, customized and QoS-guaranteed dynamic computing environments for end users. Although the cloud computing industry promises tremendous market growth, cloud services expose their users to a wide range of potential risks and safety issues. This article gives a quick introduction to cloud security. It covers the key technologies of cloud computing, security challenges and problems in cloud computing, and recent advances in cloud security.

Journal ArticleDOI
TL;DR: Experimental results show that ARAS-M can approximately achieve the equilibrium state, that is, demand and supply are nearly balanced, which validates that ARAS-M is effective and practicable and is capable of achieving resource balance in cloud computing.
Abstract: Resource management is one of the main issues in cloud computing. In order to improve the resource utilization of large data centers while delivering services with higher QoS to cloud clients, an automatic resource allocation strategy based on a market mechanism (ARAS-M) is proposed. First, the architecture and the market model of ARAS-M are constructed, in which a QoS-reflective utility function is designed according to the different resource requirements of cloud clients; the equilibrium state of ARAS-M is defined and a proof of its optimality is given. Second, a Genetic Algorithm (GA)-based automatic price adjustment algorithm is introduced to reach the equilibrium state of ARAS-M. Finally, ARAS-M is implemented on Xen. Experimental results show that ARAS-M can approximately achieve the equilibrium state, that is, demand and supply are nearly balanced, which validates that ARAS-M is effective and practicable and is capable of achieving resource balance in cloud computing.

Journal ArticleDOI
TL;DR: Based on the distance between the sample data under test and the attribute model of each species, a new method to obtain the basic probability assignment (BPA) is proposed.
Abstract: The Dempster-Shafer theory of evidence has been widely used in many data fusion application systems. However, how to determine the basic probability assignment (BPA), which is the main and first step in evidence theory, is still an open issue. In this paper, based on the distance between the sample data under test and the attribute model of each species, a new method to obtain the BPA is proposed. A numerical example is used to illustrate the efficiency of the proposed method.
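One common way to turn distances into a BPA (an assumed reconstruction for illustration, not necessarily the paper's exact formula) is to map each class distance to a similarity, scale the singleton masses by the best similarity, and assign the remaining mass to the whole frame of discernment Theta as ignorance:

```python
# Assumed construction: similarity s_c = 1 / (1 + distance to class c's model).
# Singleton masses are the similarities scaled so they sum to the best class's
# similarity; the remainder goes to the whole frame Theta (ignorance).
def bpa_from_distances(sample, class_models):
    """class_models maps each class label to its attribute model (a number)."""
    sims = {c: 1.0 / (1.0 + abs(sample - m)) for c, m in class_models.items()}
    norm = sum(sims.values())
    confidence = max(sims.values())           # how well the best class fits
    bpa = {c: confidence * s / norm for c, s in sims.items()}
    bpa["Theta"] = 1.0 - confidence           # unassigned belief
    return bpa

# Example: one sepal-length-like attribute with two hypothetical class models.
bpa = bpa_from_distances(1.4, {"setosa": 1.5, "versicolor": 4.3})
# Masses sum to 1; most of the mass lands on the nearby "setosa" model.
```

By construction the masses always sum to one, so the output is a valid BPA whatever the distances are.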

Journal ArticleDOI
TL;DR: Investigation of how RFID technology was implemented and adopted in Taiwan's logistics industry revealed that the attributes of innovation examined were significantly and positively associated with the adoption of RFID.
Abstract: Logistics tasks depend heavily on reliable shipment and accurate tracking information. For this reason, logistics today has evolved into a high-technology industry. Distribution is no longer simply about moving cargo on the road or via air from location A to B, but is a complex process based on intelligent systems for sorting, planning, routing, and consolidation that support faster transportation [1]. The purpose of this study was to investigate how RFID technology was implemented and adopted in Taiwan's logistics industry. Specifically, this study focused on the positive influence of RFID technology on the industry and the strategic benefits the RFID system had provided to companies that had accepted and utilized this technology. This study also aimed to determine the factors of concern in adopting the RFID system into current company management systems. An integral part of this research was to develop and empirically test a model of the adoption of RFID in the context of the logistics industry in Taiwan. Based on Rogers' theory of technology diffusion [2], this research used a questionnaire to assess Taiwanese logistics companies' cognition and perspective of the relative advantage, compatibility, complexity, trialability, and observability of the RFID system, as well as to assess their attitudes toward the RFID system and intentions of using it. Research findings revealed that the attributes of innovation mentioned above were significantly and positively associated with the adoption of RFID. Based on the research results, managerial implications and opportunities for future research are discussed in the final section.

Journal ArticleDOI
TL;DR: The simulation results show that the controller has faster dynamic response, suppresses stick-slip in the oil-well drill string, and achieves global stability of the rotary drilling system.
Abstract: This paper presents a time-varying sliding mode adaptive controller to handle the stick-slip oscillation of a nonlinear rotary drilling system. The strongly robust time-varying sliding mode controller has two time-varying sliding surfaces; one of them induces a time-varying integral sliding mode control that governs the transient stage of the rotary drilling system and ensures that the system remains in the sliding condition, whether under nominal conditions or in the presence of parameter changes and disturbances, yielding a controller capable of global stability. The developed controller, a time-varying sliding mode adaptive controller, provides tracking performance and identification of drilling parameters. Lyapunov principles are applied to verify the stability and robustness of the system. The simulation results show that the controller has faster dynamic response, suppresses stick-slip in the oil-well drill string, and achieves global stability of the rotary drilling system.
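A drastically simplified sketch of the underlying idea, assuming a one-mass drill-string model with Coulomb friction and a basic boundary-layer sliding-mode law (the paper's two time-varying surfaces and parameter adaptation are not reproduced here):

```python
def sat(x):
    """Saturation replaces sign() to soften chattering (boundary layer)."""
    return max(-1.0, min(1.0, x))

def simulate(k=5.0, phi=0.05, w_ref=2.0, dt=1e-3, steps=5000):
    """One-mass model J*dw/dt = u - Coulomb friction; drive speed w to w_ref."""
    J, mu, w = 1.0, 1.0, 0.0            # inertia, friction torque, bit speed
    for _ in range(steps):
        s = w - w_ref                   # sliding surface: speed error
        u = -k * sat(s / phi)           # sliding-mode control law
        friction = mu * (1.0 if w > 0 else -1.0 if w < 0 else 0.0)
        w += dt * (u - friction) / J    # explicit Euler step
    return w

# The bit speed settles just below w_ref, with a small steady-state
# offset of roughly phi * mu / k caused by the boundary layer.
```

Provided the gain k exceeds the friction torque mu, the speed error is driven into the boundary layer and the stick-slip limit cycle of the uncontrolled model cannot persist.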

Journal ArticleDOI
TL;DR: Based on a unity-feedback closed-loop control structure, a simple analytical design method for the decoupling controller matrix is proposed, built on the idea of a coupling matrix, for two-input-two-output (TITO) processes with time delays in chemical and industrial practice.
Abstract: Based on a unity-feedback closed-loop control structure, a simple analytical design method for the decoupling controller matrix is proposed, built on the idea of a coupling matrix, for two-input-two-output (TITO) processes with time delays in chemical and industrial practice. By means of the strong robustness of the two-degree-of-freedom PID Desired Dynamic Equation (DDE) method, a PID decoupling controller is analytically designed, and a Monte Carlo stochastic experiment is introduced to analyze the performance robustness of the controller. The most important merit of the proposed method is that, for the nominal system, the output of each channel can be decoupled entirely. Moreover, the decoupling matrix is simple and easily realized. Finally, illustrative simulation examples are included to demonstrate the remarkable superiority of the proposed method.
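The Monte Carlo robustness experiment can be sketched as follows: randomly perturb the plant parameters, simulate the closed loop, and collect a performance index across the samples. The first-order plant, PI settings, and ±20% perturbation ranges below are illustrative assumptions, not the TITO process or DDE tuning from the paper:

```python
import random

def step_iae(K, T, kp, ki, dt=1e-3, t_end=10.0):
    """Unity-feedback PI control of K/(T*s + 1); IAE of a unit step response."""
    y, integ, iae = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                   # tracking error against the unit step
        integ += e * dt
        u = kp * e + ki * integ       # PI control law
        y += dt * (K * u - y) / T     # first-order plant, explicit Euler
        iae += abs(e) * dt            # integral of absolute error
    return iae

def monte_carlo(n=50, seed=7):
    """Sample +/-20% plant perturbations; collect the IAE distribution."""
    rng = random.Random(seed)
    iaes = [step_iae(K=1.0 * rng.uniform(0.8, 1.2),
                     T=2.0 * rng.uniform(0.8, 1.2),
                     kp=2.0, ki=1.0) for _ in range(n)]
    return min(iaes), max(iaes)
```

A narrow spread between the minimum and maximum IAE over the perturbed plants is what "performance robustness" means in this kind of experiment.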

Journal ArticleDOI
TL;DR: An improved artificial fish swarm algorithm (IAFSA) is proposed and used to solve the problem of optimal operation of cascade reservoirs; the vision and the step of the artificial fish are adjusted dynamically in IAFSA to increase the convergence speed.
Abstract: Based on the traditional artificial fish swarm algorithm (AFSA), an improved artificial fish swarm algorithm (IAFSA) is proposed and used to solve the problem of optimal operation of cascade reservoirs. To improve the ability to search for the global and local extrema, the vision and the step of the artificial fish are adjusted dynamically in IAFSA. Moreover, to increase the convergence speed, a threshold selection strategy is employed in the local update operation to limit the large gap between an individual's position before and after the update. The validity of IAFSA is demonstrated by a case study, and the threshold parameters of IAFSA are calibrated.
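The dynamic adjustment of vision and step can be sketched as follows. This is a minimal fish swarm with only prey/follow behaviours on a toy objective; the linear shrink schedule and all parameters are assumptions, not the paper's IAFSA or its reservoir model:

```python
import random

def sphere(x):
    """Toy objective standing in for the reservoir-operation benefit."""
    return sum(xi * xi for xi in x)

def iafsa(dim=2, n_fish=20, iters=150, lo=-5.0, hi=5.0, seed=3):
    rng = random.Random(seed)
    fish = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fish)]
    best = min(fish, key=sphere)
    for t in range(iters):
        shrink = 1.0 - t / iters              # dynamic adjustment: wide vision
        vision = (hi - lo) * shrink + 0.01    # and big steps early (global
        frac = 0.6 * shrink + 0.05            # search), small late (refinement)
        for i, x in enumerate(fish):
            # prey behaviour: probe a random point inside the current vision
            probe = [xi + rng.uniform(-vision, vision) for xi in x]
            # move toward the probe if it is better, otherwise follow the best
            target = probe if sphere(probe) < sphere(x) else best
            fish[i] = [xi + frac * (ti - xi) for xi, ti in zip(x, target)]
            if sphere(fish[i]) < sphere(best):
                best = fish[i]
    return best
```

Shrinking the vision and step trades exploration for exploitation over time, which is the intuition behind the dynamic adjustment described in the abstract.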

Journal ArticleDOI
TL;DR: It is shown how semantic web technologies can be used to build a flexible access control system for web services by following the Role-Based Access Control model and extending it with credential attributes.
Abstract: Due to the open and distributed characteristics of web services, their access control becomes a challenging problem that has not been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web services. We follow the Role-Based Access Control model and extend it with credential attributes. The access control model is represented by a semantic ontology, and specific semantic rules are constructed to implement features such as dynamic role assignment, separation-of-duty constraints, and role hierarchy reasoning. These semantic rules can be verified and executed automatically by the reasoning engine, which simplifies the definition and enhances the interoperability of the access control policies. The basic access control architecture based on the semantic proposal for web services is presented. Finally, a prototype of the system is implemented to validate the proposal.
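Setting the ontology machinery aside, the access-control logic itself (dynamic role assignment from credential attributes plus role-hierarchy reasoning) can be sketched in plain code. The roles, attributes, and permissions below are hypothetical and stand in for what the paper would express as ontology classes and semantic rules:

```python
# Hypothetical policy data: none of these names come from the paper.
ROLE_RULES = [
    ({"department": "finance", "level": "senior"}, "auditor"),
    ({"department": "finance"}, "clerk"),
]
ROLE_HIERARCHY = {"auditor": ["clerk"]}        # auditor inherits clerk's rights
PERMISSIONS = {"clerk": {"invoice:read"}, "auditor": {"invoice:approve"}}

def assign_roles(credentials):
    """Dynamic role assignment: every rule whose attributes all match fires."""
    return [role for attrs, role in ROLE_RULES
            if all(credentials.get(k) == v for k, v in attrs.items())]

def effective_permissions(roles):
    """Role-hierarchy reasoning: walk the hierarchy, collecting permissions."""
    perms, stack = set(), list(roles)
    while stack:
        role = stack.pop()
        perms |= PERMISSIONS.get(role, set())
        stack.extend(ROLE_HIERARCHY.get(role, []))
    return perms

def can_invoke(credentials, operation):
    return operation in effective_permissions(assign_roles(credentials))
```

In the paper's setting, a reasoning engine would derive the same conclusions from the ontology and rules, gaining verifiability and policy interoperability over hard-coded logic like this.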