
Showing papers in "Mathematical Problems in Engineering in 2015"


Journal ArticleDOI
TL;DR: This survey presents a comprehensive investigation of PSO, including its modifications, extensions, and applications in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.

836 citations
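
The canonical PSO update underlying all of the variants surveyed above is compact enough to show directly. Below is a minimal sketch of inertia-weight PSO with a global-best topology; the function name, parameter values, and the sphere-function example are illustrative choices, not taken from the survey.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `objective` with canonical inertia-weight PSO."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```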


Journal ArticleDOI
TL;DR: This paper provides a concise review of mainstream methods in major aspects of the PHM framework, including the updated research from both statistical science and engineering, with a focus on data-driven approaches.
Abstract: Prognostics and health management (PHM) is a framework that offers comprehensive yet individualized solutions for managing system health. In recent years, PHM has emerged as an essential approach for achieving competitive advantages in the global market by improving reliability, maintainability, safety, and affordability. Concepts and components in PHM have been developed separately in many areas such as mechanical engineering, electrical engineering, and statistical science, under varied names. In this paper, we provide a concise review of mainstream methods in major aspects of the PHM framework, including the updated research from both statistical science and engineering, with a focus on data-driven approaches. Real world examples have been provided to illustrate the implementation of PHM in practice.

267 citations


Journal ArticleDOI
TL;DR: A new initial population strategy has been developed to improve the genetic algorithm for solving the well-known combinatorial optimization problem, traveling salesman problem, by reconnecting each cluster based on the k-means algorithm.
Abstract: A new initial population strategy has been developed to improve the genetic algorithm for solving the well-known combinatorial optimization problem, the traveling salesman problem. Based on the k-means algorithm, we propose a strategy to restructure the traveling route by reconnecting each cluster. The clusters, which randomly disconnect a link to connect their neighbors, have been ranked in advance according to the distance among cluster centers, so that the initial population can be composed of the random traveling routes. This process is the k-means initial population strategy (KIP). To test the performance of our strategy, a series of experiments on 14 different TSP examples selected from TSPLIB have been carried out. The results show that KIP can decrease the best error value of the random initial population strategy and the greedy initial population strategy by approximately 29.15% to 37.87%, and the average error value by 25.16% to 34.39%, in the same running time.

177 citations
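
For readers who want a concrete picture, here is a rough sketch of the KIP idea under stated assumptions: cities are clustered with scikit-learn's KMeans, clusters are ordered greedily by center-to-center distance, and each initial tour visits clusters in that order with a random internal ordering. Details such as the paper's exact link-disconnection step are simplified away.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_initial_population(cities, n_clusters, pop_size, seed=0):
    """Build GA initial tours by clustering cities and chaining clusters
    ordered by nearest cluster centers (a sketch of the KIP idea)."""
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(cities)
    centers, labels = km.cluster_centers_, km.labels_
    # Greedy ordering of clusters by center-to-center distance.
    order, remaining = [0], set(range(1, n_clusters))
    while remaining:
        last = centers[order[-1]]
        nxt = min(remaining, key=lambda c: np.linalg.norm(centers[c] - last))
        order.append(nxt)
        remaining.remove(nxt)
    population = []
    for _ in range(pop_size):
        tour = []
        for c in order:
            members = np.flatnonzero(labels == c)
            rng.shuffle(members)          # random route inside each cluster
            tour.extend(members.tolist())
        population.append(tour)
    return population

cities = np.random.default_rng(1).random((100, 2))
pop = kmeans_initial_population(cities, n_clusters=8, pop_size=50)
```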


Journal ArticleDOI
TL;DR: This work was supported in part by the National Natural Science Foundation of China, the Hujiang Foundation of China, the EPSRC and the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
Abstract: This work was supported in part by the National Natural Science Foundation of China under Grants 61329301, 61374039, 61473163, and 61374127, the Hujiang Foundation of China under Grants C14002 and D15009, the Engineering and Physical Sciences Research Council (EPSRC) of the UK, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.

156 citations


Journal ArticleDOI
TL;DR: The effectiveness of DELM in EEG classification is confirmed, and MLELM is shown not only to approximate complicated functions but also to require no iteration during the training process.
Abstract: Recently, deep learning has aroused wide interest in machine learning fields. Deep learning is a multilayer perceptron artificial neural network algorithm. Deep learning has the advantage of approximating complicated functions and alleviating the optimization difficulty associated with deep models. The multilayer extreme learning machine (MLELM) is an artificial neural network learning algorithm that takes advantage of both deep learning and the extreme learning machine. Not only does MLELM approximate complicated functions, but it also does not need to iterate during the training process. Combining MLELM with the kernel extreme learning machine (KELM), we put forward the deep extreme learning machine (DELM) and apply it to EEG classification in this paper. This paper focuses on the application of DELM to the classification of a visual feedback experiment, using MATLAB and the second brain-computer interface (BCI) competition datasets. By simulating and analyzing the results of the experiments, the effectiveness of DELM in EEG classification is confirmed.

152 citations


Journal ArticleDOI
TL;DR: In this paper, the second generation of the Unified Theory of Acceptance and Use of Technology (UTAUT2) is introduced as a theoretic basis to explore and predict the intentions to use and use behaviors of Phablets.
Abstract: Smart mobile devices have emerged during the past decade and have become one of the most dominant consumer electronic products. Therefore, exploring and understanding the factors which can influence the acceptance of novel mobile technology have become an essential task for the vendors and distributors of mobile devices. Phablets, integrated smart devices combining the functionality and characteristics of both tablet PCs and smart phones, have gradually become possible alternatives to smart phones. Therefore, predicting the factors which can influence the acceptance of Phablets has become indispensable for the design, manufacturing, and marketing of such mobile devices. However, such predictions are not easy, and very little research has studied related issues. Consequently, the authors aim to explore and predict the intentions to use and use behaviors of Phablets. The second generation of the Unified Theory of Acceptance and Use of Technology (UTAUT2) is introduced as a theoretical basis. The Decision Making Trial and Evaluation Laboratory (DEMATEL) based Network Process (DNP) is used to construct the analytic framework. In light of the analytic results, the causal relationships derived by the DEMATEL demonstrate the direct influence of habit on the other dimensions. Also, based on the derived influence weights, use intention, hedonic motivation, and performance expectancy are the most important dimensions. The analytic results can serve as a basis for concept development, marketing strategy definition, and new product design of future Phablets. The proposed analytic framework can also be used for predicting and analyzing consumers' preferences toward future mobile devices.

136 citations
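
As context for the DNP framework, the core DEMATEL computation (the step that yields the causal relationships mentioned above) is a small matrix calculation: normalize a direct-influence matrix and compute the total-relation matrix T = N(I - N)^(-1). The sketch below uses a hypothetical 4x4 influence matrix, not data from the study.

```python
import numpy as np

# Hypothetical direct-influence matrix among 4 dimensions
# (0 = no influence ... 3 = high influence); values are invented.
A = np.array([[0, 3, 2, 1],
              [1, 0, 2, 1],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
N = A / s                                    # normalized influence matrix
T = N @ np.linalg.inv(np.eye(len(A)) - N)    # total-relation matrix
D, R = T.sum(axis=1), T.sum(axis=0)          # dispatched / received influence
prominence, relation = D + R, D - R          # importance and cause/effect role
print(np.round(T, 3))
print("prominence:", np.round(prominence, 3), "relation:", np.round(relation, 3))
```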


Journal ArticleDOI
Qian Leng, Honggang Qi, Jun Miao, Wentao Zhu, Guiping Su
TL;DR: The experimental evaluation shows that the ELM based one-class classifier can learn hundreds of times faster than the autoencoder and is competitive with a variety of one-class classification methods.
Abstract: One-class classification problem has been investigated thoroughly for past decades. Among one of the most effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM). The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed. The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.

127 citations
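
A minimal sketch of an ELM-style one-class classifier, assuming the common construction: a random, untuned hidden layer, output weights solved analytically by least squares against a constant target, and an acceptance threshold on the deviation of new samples. This illustrates the general idea, not the authors' exact formulation.

```python
import numpy as np

class OneClassELM:
    """One-class ELM sketch: random hidden layer, analytic output weights,
    threshold on deviation from the constant class target."""
    def __init__(self, n_hidden=100, quantile=0.95, seed=0):
        self.n_hidden, self.quantile = n_hidden, quantile
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))  # never tuned
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        t = np.ones(len(X))                 # constant target for the class
        self.beta = np.linalg.pinv(H) @ t   # analytic solution, no iteration
        self.thresh = np.quantile(np.abs(H @ self.beta - 1.0), self.quantile)
        return self

    def predict(self, X):
        err = np.abs(self._hidden(X) @ self.beta - 1.0)
        return err <= self.thresh           # True = accepted as in-class

model = OneClassELM().fit(np.random.default_rng(1).normal(size=(500, 10)))
```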


Journal ArticleDOI
TL;DR: A feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets, improving both the classification accuracy and its runtime when dealing with big data problems.
Abstract: Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack enough scalability to cope with datasets of millions of instances and extract successful results in a delimited time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset into blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated by using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets of up to 67 million instances and up to 2000 attributes have been managed, showing that this is a suitable framework to perform evolutionary feature selection, improving both the classification accuracy and the runtime when dealing with big data problems.

126 citations
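
The map/reduce split described above can be sketched in a few lines: each block evolves a binary feature mask with a tiny GA (map), and the masks are averaged into feature weights that a threshold turns into the final subset (reduce). The fitness function (Naive Bayes hold-out accuracy) and all parameter values are assumptions for illustration; the paper's Spark implementation is far more elaborate.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

def evolve_block(Xb, yb, generations=20, pop=10, seed=0):
    """Map phase: evolve a binary feature mask on one block of instances."""
    rng = np.random.default_rng(seed)
    Xtr, Xte, ytr, yte = train_test_split(Xb, yb, test_size=0.3,
                                          random_state=seed)
    d = Xb.shape[1]
    masks = rng.random((pop, d)) < 0.5
    def fitness(m):
        if not m.any():
            return 0.0
        return GaussianNB().fit(Xtr[:, m], ytr).score(Xte[:, m], yte)
    for _ in range(generations):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[-(pop // 2):]]           # selection
        children = parents ^ (rng.random(parents.shape) < 1.0 / d)  # mutation
        masks = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in masks])
    return masks[scores.argmax()].astype(float)

def mapreduce_feature_selection(X, y, n_blocks=4, threshold=0.5):
    """Reduce phase: average per-block masks into weights, then threshold."""
    blocks = np.array_split(np.arange(len(X)), n_blocks)
    weights = [evolve_block(X[idx], y[idx], seed=k)
               for k, idx in enumerate(blocks)]                     # map
    return np.mean(weights, axis=0) >= threshold                    # reduce
```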


Journal ArticleDOI
TL;DR: The effectiveness of the proposed methodology is demonstrated through two typical numerical examples of the nonlinear performance functions with nonconvex and disconnected acceptability regions and high-dimensional input parameters and a real-world application in the parameter design of a track circuit for Chinese high-speed railway.
Abstract: A belief rule-based (BRB) system provides a generic nonlinear modeling and inference mechanism. It is capable of modeling complex causal relationships by utilizing both quantitative information and qualitative knowledge. In this paper, a BRB system is firstly developed to model the highly nonlinear relationship between circuit component parameters and the performance of the circuit by utilizing available knowledge from circuit simulations and circuit designers. By using rule inference in the BRB system and clustering analysis, the acceptability regions of the component parameters can be separated from the value domains of the component parameters. Using the established nonlinear relationship represented by the BRB system, an optimization method is then proposed to seek the optimal feasibility region in the acceptability regions so that the volume of the tolerance region of the component parameters can be maximized. The effectiveness of the proposed methodology is demonstrated through two typical numerical examples of the nonlinear performance functions with nonconvex and disconnected acceptability regions and high-dimensional input parameters and a real-world application in the parameter design of a track circuit for Chinese high-speed railway.

125 citations


Journal ArticleDOI
TL;DR: This paper provides an up-to-date survey on the recent developments of ELM and its applications in high dimensional and large data.
Abstract: Extreme learning machine (ELM) has been developed for single hidden layer feedforward neural networks (SLFNs). In the ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process. The output connections are then tuned by minimizing the cost function through a linear system. The computational burden of ELM is significantly reduced, as the only cost is solving a linear system. The low computational complexity has attracted a great deal of attention from the research community, especially for high dimensional and large data applications. This paper provides an up-to-date survey on the recent developments of ELM and its applications in high dimensional and large data. Comprehensive reviews of image processing, video processing, medical signal processing, and other popular large data applications with ELM are presented in the paper.

122 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an interval-valued intuitionistic fuzzy MULTIMOORA (IVIF-MULTIMOORA) method for group decision making in the uncertain environment.
Abstract: Multiple criteria decision making methods have received different extensions under the uncertain environment in recent years. The aim of the current research is to extend the application of the MULTIMOORA method (Multiobjective Optimization by Ratio Analysis plus Full Multiplicative Form) for group decision making in the uncertain environment. Taking into account the advantages of IVIFS (interval-valued intuitionistic fuzzy sets) in handling the problem of uncertainty, the development of the interval-valued intuitionistic fuzzy MULTIMOORA (IVIF-MULTIMOORA) method for group decision making is considered in the paper. Two numerical examples of real-world civil engineering problems are presented, and ranking of the alternatives based on the suggested method is described. The results are then compared to the rankings yielded by some other methods of decision making with IVIF information. The comparison has shown the conformity of the proposed IVIF-MULTIMOORA method with other approaches. The proposed algorithm is favorable because of the abilities of IVIFS to be used for the representation of uncertainty and of the MULTIMOORA method to consider three different viewpoints in analyzing engineering decision alternatives.
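
As a point of reference for readers new to MULTIMOORA, the crisp ratio system (one of the method's three viewpoints) is shown below on a hypothetical decision matrix; the paper's contribution is to carry out such steps with interval-valued intuitionistic fuzzy numbers rather than the point values used here.

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 criteria (invented values).
X = np.array([[3.0, 200.0, 7.0],
              [4.5, 180.0, 6.0],
              [4.0, 220.0, 8.0],
              [3.5, 150.0, 5.5]])
benefit = np.array([True, False, True])   # criterion 2 is a cost criterion

N = X / np.sqrt((X ** 2).sum(axis=0))     # vector normalization
# Ratio system: sum of normalized benefit ratings minus sum of cost ratings.
y = N[:, benefit].sum(axis=1) - N[:, ~benefit].sum(axis=1)
ranking = np.argsort(-y)                  # best alternative first
print(np.round(y, 4), ranking)
```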

Journal ArticleDOI
TL;DR: In this article, an Iterative Learning Control (ILC) method with Extended State Observer (ESO) is proposed to enhance the tracking precision of the telescope, which can find an ideal control signal by the process of iterative learning.
Abstract: An Iterative Learning Control (ILC) method with an Extended State Observer (ESO) is proposed to enhance the tracking precision of a telescope. Telescope systems usually suffer from uncertain nonlinear disturbances, such as nonlinear friction and unknown disturbances. Therefore, to ensure the tracking precision, the ESO, which can estimate system states (including parts of the uncertain nonlinear disturbances), is introduced. The nonlinear system is converted to an approximately linear system by making use of the ESO. Besides, to further improve the tracking precision, we make use of the ILC method, which can find an ideal control signal through the process of iterative learning. Furthermore, this control method theoretically guarantees a prescribed tracking performance and final tracking accuracy. Finally, a few comparative experimental results show that the proposed control method has excellent performance for reducing the tracking error of the telescope system.
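
The iterative-learning part of the scheme can be illustrated with the textbook P-type update u_{k+1}(t) = u_k(t) + L e_k(t+1) on an assumed first-order plant; the ESO and the telescope dynamics are omitted. Plant parameters and the learning gain are invented for the sketch (convergence here requires |1 - Lb| < 1).

```python
import numpy as np

a, b, L = 0.9, 0.5, 1.2      # assumed plant x[t+1] = a*x[t] + b*u[t], gain L
T = 100
ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))    # desired trajectory

def run_trial(u):
    """Simulate one trial of the plant from rest under input u."""
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    return x

u = np.zeros(T)
for trial in range(30):
    e = ref - run_trial(u)    # tracking error of this trial
    u = u + L * e[1:]         # P-type ILC: learn from error one step ahead
print("max error after learning:", np.abs(ref - run_trial(u)).max())
```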

Journal ArticleDOI
TL;DR: The results are promising, and the information extracted using the proposed pothole detection method can be used, not only in determining the preliminary maintenance for a road management system and in taking immediate action for their repair and maintenance, but also in providing alert information of potholes to drivers as one of ITS services.
Abstract: Potholes can generate damage such as flat tire and wheel damage, impact and damage of lower vehicle, vehicle collision, and major accidents. Thus, accurately and quickly detecting potholes is one of the important tasks for determining proper strategies in ITS (Intelligent Transportation System) service and road management system. Several efforts have been made for developing a technology which can automatically detect and recognize potholes. In this study, a pothole detection method based on two-dimensional (2D) images is proposed for improving the existing method and designing a pothole detection system to be applied to ITS service and road management system. For experiments, 2D road images that were collected by a survey vehicle in Korea were used and the performance of the proposed method was compared with that of the existing method for several conditions such as road, recording, and brightness. The results are promising, and the information extracted using the proposed method can be used, not only in determining the preliminary maintenance for a road management system and in taking immediate action for their repair and maintenance, but also in providing alert information of potholes to drivers as one of ITS services.

Journal ArticleDOI
TL;DR: This paper surveys some of the most interesting, widely used, and advanced state-of-the-art methodologies for intrinsic dimensionality estimation and suggests a benchmark framework that can be applied to comparatively evaluate relevant state-of-the-art estimators.
Abstract: When dealing with datasets comprising high-dimensional points, it is usually advantageous to discover some data structure. A fundamental piece of information needed for this aim is the minimum number of parameters required to describe the data while minimizing the information loss. This number, usually called the intrinsic dimension, can be interpreted as the dimension of the manifold from which the input data are supposed to be drawn. Due to its usefulness in many theoretical and practical problems, in the last decades the concept of intrinsic dimension has gained considerable attention in the scientific community, motivating the large number of intrinsic dimensionality estimators proposed in the literature. However, the problem is still open, since most techniques cannot efficiently deal with datasets drawn from manifolds of high intrinsic dimension that are nonlinearly embedded in higher dimensional spaces. This paper surveys some of the most interesting, widely used, and advanced state-of-the-art methodologies. Unfortunately, since no benchmark database exists in this research field, an objective comparison among different techniques is not possible. Consequently, we suggest a benchmark framework and apply it to comparatively evaluate relevant state-of-the-art estimators.
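
One widely cited estimator of the kind surveyed here, the Levina–Bickel maximum-likelihood estimator, fits in a few lines and gives the flavor of the field; the choice of k and the averaging over points follow one common variant of the method, and the test manifold is synthetic.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dimension(X, k=10):
    """Levina-Bickel maximum-likelihood intrinsic dimension estimate."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)    # column 0 is the point itself (distance 0)
    dist = dist[:, 1:]            # the k nearest-neighbor distances T_1..T_k
    # m_k(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    logs = np.log(dist[:, -1][:, None] / dist[:, :-1])
    m = (k - 1) / logs.sum(axis=1)
    return m.mean()               # average the local estimates over points

# A 5-dimensional Gaussian linearly embedded in 20 ambient dimensions.
rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 20))
print(mle_intrinsic_dimension(Z))   # should come out close to 5
```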

Journal ArticleDOI
TL;DR: The Non-Centralized Model Predictive Control framework is proposed, together with suitable on-line methods to decide which information is shared and how this information is used between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way.
Abstract: The Non-Centralized Model Predictive Control (NC-MPC) framework refers in this paper to any distributed, hierarchical, or decentralized model predictive controller (or a combination of them) the structure of which can change over time and the control actions of which are not obtained based on a centralized computation. Within this framework, we propose suitable on-line methods to decide which information is shared and how this information is used between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way. Evaluating all the possible structures of the NC-MPC controller leads into a combinatorial optimization problem. Therefore, we also propose heuristic reduction methods, to keep tractable the number of NC-MPC problems to be solved. To show the benefits of the proposed framework, a case study of a set of coupled water tanks is presented.

Journal ArticleDOI
TL;DR: It is stated that before long decision-making in civil engineering may face several methodological problems: the need to combine fuzzy and probabilistic representations of uncertainties in one decision-making matrix, the necessity to extend a global sensitivity analysis to all input elements of an MCDM problem with uncertainties, and the application of MCDM methods in areas of civil engineering where decision-making under uncertainty is presently not common.
Abstract: The present review examines decision-making methods developed for dealing with uncertainties and applied to solve problems of civil engineering. Several methodological difficulties emerging from uncertainty quantification in decision-making are identified. The review is focused on formal methods of multiple criteria decision-making (MCDM). Handling of uncertainty by means of fuzzy logic and probabilistic modelling is analysed in light of MCDM. A sensitivity analysis of MCDM problems with uncertainties is discussed. An application of stochastic MCDM methods to the design of safety-critical objects of civil engineering is considered. Prospects of using MCDM under uncertainty in developing areas of civil engineering are discussed in brief. These areas are the design of sustainable and energy-efficient buildings, building information modelling, and assurance of security and safety of built property. It is stated that before long decision-making in civil engineering may face several methodological problems: the need to combine fuzzy and probabilistic representations of uncertainties in one decision-making matrix, the necessity to extend a global sensitivity analysis to all input elements of an MCDM problem with uncertainties, and an application of MCDM methods in areas of civil engineering where decision-making under uncertainty is presently not common.

Journal ArticleDOI
TL;DR: An improved SIRS model considering the communication radius and distributed density of nodes is proposed, and the reproductive number, which determines the global dynamics of worm propagation in WSNs, is obtained.
Abstract: An improved SIRS model considering the communication radius and distributed density of nodes is proposed. The proposed model captures both the spatial and temporal dynamics of the worm spread process. Using differential dynamical theories, we investigate the dynamics of worm propagation over time in wireless sensor networks (WSNs). The reproductive number, which determines the global dynamics of worm propagation in WSNs, is obtained. Equilibria and their stabilities are also found. If the reproductive number is less than one, the infected fraction of the sensor nodes disappears; if the reproductive number is greater than one, the infected fraction asymptotically stabilizes at the endemic equilibrium. Based on the reproductive number, we discuss the threshold of worm propagation with respect to the communication radius and distributed density of nodes in WSNs. Finally, numerical simulations verify the correctness of the theoretical analysis.
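
A minimal numerical companion to the model: the classical SIRS equations integrated with SciPy, with the contact rate tied (illustratively) to node density and communication radius, and the reproductive-number threshold visible in the outcome. All parameter values are assumptions, not the paper's.

```python
import numpy as np
from scipy.integrate import odeint

sigma, r = 0.01, 1.0                    # node density, comm. radius (assumed)
beta = 10.0 * sigma * np.pi * r ** 2    # contact rate grows with sigma and r
gamma, xi = 0.05, 0.01                  # recovery rate, immunity-loss rate

def sirs(y, t):
    S, I, R = y
    return [-beta * S * I + xi * R,     # susceptible nodes
            beta * S * I - gamma * I,   # infected nodes
            gamma * I - xi * R]         # recovered nodes losing immunity

R0 = beta / gamma                       # worm dies out if R0 < 1, endemic if > 1
t = np.linspace(0, 400, 2000)
S, I, R = odeint(sirs, [0.99, 0.01, 0.0], t).T
print(f"R0 = {R0:.2f}, final infected fraction = {I[-1]:.4f}")
```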

Journal ArticleDOI
TL;DR: In this article, the use of nonlinear least squares optimization is proposed to approximate the passband ripple characteristics of traditional Chebyshev lowpass filters with fractional order steps in the stopband.
Abstract: We propose the use of nonlinear least squares optimization to approximate the passband ripple characteristics of traditional Chebyshev lowpass filters with fractional order steps in the stopband. MATLAB simulations of (1+α), (2+α), and (3+α) order lowpass filters with fractional steps from α = 0.1 to α = 0.9 are given as examples. SPICE simulations of 1.2, 1.5, and 1.8 order lowpass filters using approximated fractional order capacitors in a Tow-Thomas biquad circuit validate the implementation of these filter circuits.
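
To make the optimization step concrete, the sketch below uses scipy.optimize.least_squares to fit an integer-order biquad magnitude to a fractional (1+α)-order lowpass target. The Butterworth-like target and all numeric values are stand-ins chosen to keep the example self-contained; the paper fits Chebyshev passband ripple.

```python
import numpy as np
from scipy.optimize import least_squares

# Fractional (1+alpha)-order lowpass magnitude target (assumed form).
alpha = 0.5
w = np.logspace(-2, 2, 400)
target = 1.0 / np.sqrt(1.0 + w ** (2 * (1 + alpha)))

def magnitude(p, w):
    """Magnitude of the biquad k / (s^2 + a1*s + a0) along s = jw."""
    k, a1, a0 = p
    return np.abs(k / ((1j * w) ** 2 + a1 * (1j * w) + a0))

res = least_squares(lambda p: magnitude(p, w) - target,
                    x0=[1.0, 1.0, 1.0], bounds=(0, np.inf))
print("fitted [k, a1, a0]:", np.round(res.x, 4))
```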

Journal ArticleDOI
TL;DR: This survey provides a comprehensive review of some of the above-mentioned new areas including both theoretical and applied results.
Abstract: A great number of new applied problems in the area of energy networks have recently arisen that can be efficiently solved only as mixed-integer bilevel programs. Among them are the natural gas cash-out problem, the deregulated electricity market equilibrium problem, biofuel problems, and a problem of designing coupled energy carrier networks, to mention only part of such applications. Bilevel models describing migration processes are also on the list of the most popular new themes of bilevel programming, as are allocation, information protection, and cybersecurity problems. This survey provides a comprehensive review of some of the above-mentioned new areas, including both theoretical and applied results.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm can effectively reduce data transmissions by filtering the unnecessary data and greatly prolong the lifetime of WSNs.
Abstract: As uncertainty is an inherent characteristic of sensing data, the processing and optimization techniques for Probabilistic Skyline (PS) queries in wireless sensor networks (WSNs) are investigated. After analyzing its properties, it can be proved that PS is not decomposable, so in-network aggregation techniques cannot be used directly to improve performance. In this paper, an efficient algorithm, called Distributed Processing of Probabilistic Skyline (DPPS) query in WSNs, is proposed. The algorithm divides the sensing data into candidate data (CD), irrelevant data (ID), and relevant data (RD). The ID in each sensor node can be filtered directly to reduce the data transmission cost, since the PS result can be correctly obtained at the base station using only the CD and RD. Experimental results show that the proposed algorithm can effectively reduce data transmissions by filtering out the unnecessary data and greatly prolong the lifetime of WSNs.
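
The dominance filtering at the heart of any skyline method looks as follows; this is the deterministic core only, under the assumption that smaller readings are preferred, and it leaves out the probabilistic machinery and the CD/ID/RD split that the DPPS algorithm adds on top.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is <= in every dimension and < in at least one
    (assuming smaller sensed values are preferred)."""
    return np.all(a <= b) and np.any(a < b)

def skyline(points):
    """Block-nested-loop skyline: the filter a node can use to discard
    locally dominated ('irrelevant') readings before transmission."""
    result = []
    for p in points:
        if any(dominates(q, p) for q in result):
            continue                      # p is dominated; filter it out
        result = [q for q in result if not dominates(p, q)] + [p]
    return np.array(result)

readings = np.random.default_rng(0).random((200, 2))
print(len(skyline(readings)), "of", len(readings), "readings survive")
```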

Journal ArticleDOI
TL;DR: In this article, the authors address the problem of rate-dependent hysteretic nonlinearity in piezoelectric actuators (PEAs) and propose a novel inversion-based feedforward mechanism in combination with a feedback compensator to achieve high-precision tracking.
Abstract: Piezoelectric-stack actuated platforms are very popular in the parlance of nanopositioning, with myriad applications like micro/nanofactories, atomic force microscopy, scanning probe microscopy, wafer design, biological cell manipulation, and so forth. Motivated by the necessity to improve trajectory tracking in such applications, this paper addresses the problem of rate-dependent hysteretic nonlinearity in piezoelectric actuators (PEAs). The classical second-order Dahl model for hysteresis encapsulation is introduced first, followed by the identification of parameters through particle swarm optimization. A novel inversion-based feedforward mechanism in combination with a feedback compensator is proposed to achieve high-precision tracking, wherein the paradoxical concept of noise as a performance enhancer is introduced in the realm of PEAs. Having observed that dither-induced stochastic resonance in the presence of periodic forcing reduces tracking error, dither capability is further explored in conjunction with a novel output-harmonics-based adaptive control scheme. The proposed adaptive controller is then augmented with an internal model control based approach to impart robustness against parametric variations and external disturbances. The proposed control law has been employed to track multifrequency signals with consistent compensation of the rate-dependent hysteresis of the PEA. The results indicate greatly improved positioning accuracy along with considerable robustness achieved with the proposed integrated approach, even for dual-axis tracking applications.

Journal ArticleDOI
TL;DR: The proposed real-time pothole detection approach can be used to improve the safety of traffic for ITS and can detect potholes with lower cost in a comprehensive environment.
Abstract: In recent years, fast economic growth and rapid technology advances have had significant impacts on the quality of the traditional transport system. The intelligent transportation system (ITS), which aims to improve the transport system, has become more and more popular. Improving traffic safety is an important goal of ITS, and potholes on the road cause serious harm to drivers' safety. Therefore, drivers' safety may be improved by establishing a real-time pothole detection system for sharing pothole information. Moreover, using mobile devices to detect potholes has become more popular in recent years, as this approach can detect potholes at lower cost in a comprehensive environment. This study proposes a pothole detection method based on mobile sensing. The accelerometer data is normalized by Euler angle computation and is fed to the pothole detection algorithm to obtain the pothole information. Moreover, a spatial interpolation method is used to reduce the location errors from global positioning system (GPS) data. In experiments, the results show that the proposed approach can precisely detect potholes without false positives and achieves higher accuracy. Therefore, the proposed real-time pothole detection approach can be used to improve traffic safety for ITS.
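
A rough sketch of the sensing pipeline described above, under stated assumptions: device-frame accelerations are rotated into the world frame with roll/pitch Euler angles, and samples whose vertical component deviates from gravity beyond a tunable threshold are flagged. Function names and the threshold value are illustrative, not the paper's.

```python
import numpy as np

def normalize_to_world(acc, roll, pitch):
    """Rotate device-frame accelerations (n x 3) into the world frame
    using roll/pitch Euler angles; yaw does not affect the vertical axis."""
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    return acc @ (Ry @ Rx).T

def detect_pothole_events(z, g=9.81, threshold=3.0):
    """Flag samples whose vertical acceleration deviates from gravity by
    more than `threshold` m/s^2 (an assumed, tunable constant)."""
    return np.flatnonzero(np.abs(z - g) > threshold)

acc = np.random.default_rng(0).normal([0, 0, 9.81], 0.5, size=(1000, 3))
world = normalize_to_world(acc, roll=0.05, pitch=-0.02)
print(detect_pothole_events(world[:, 2]))
```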

Journal ArticleDOI
TL;DR: A new threat assessment model based on interval numbers is proposed to deal with the intrinsic uncertainty and imprecision in the combat environment, and both objective and subjective factors are taken into consideration.
Abstract: Threat evaluation is extremely important to decision makers in many situations, such as military applications and physical protection systems. In this paper, a new threat assessment model based on interval numbers is proposed to deal with the intrinsic uncertainty and imprecision of the combat environment. Both objective and subjective factors are taken into consideration in the proposed model. For the objective factors, a genetic algorithm (GA) is used to search out an optimal interval number representing all the attribute values of each object. In addition, for the subjective factors, the interval Analytic Hierarchy Process (AHP) is adopted to determine each object's threat weight according to the experience of commanders/experts. Then a discounting method is proposed to integrate the objective and subjective factors. Finally, the idea of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is applied to obtain the threat ranking of all the objects. A real application is used to illustrate the effectiveness of the proposed model.
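
The final TOPSIS ranking step is easy to show on point values; the sketch below uses a hypothetical targets-by-attributes matrix and crisp numbers, whereas the paper works with interval numbers and GA/AHP-derived inputs.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Crisp TOPSIS sketch: rank alternatives by closeness to the ideal
    solution (the paper's interval-number variant is not shown)."""
    N = X / np.sqrt((X ** 2).sum(axis=0))         # vector normalization
    V = N * weights                               # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)                # higher closeness = higher rank

# Hypothetical targets x attributes (speed, distance, heading error).
X = np.array([[300.0, 12.0, 0.2],
              [250.0, 30.0, 0.5],
              [420.0,  8.0, 0.1]])
scores = topsis(X, np.array([0.5, 0.3, 0.2]), np.array([True, False, False]))
print(np.argsort(-scores))                        # threat ranking
```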

Journal ArticleDOI
TL;DR: This article investigated the impact of an aging agricultural labor population on agricultural production and found that the adverse effects of changes in the agricultural population age result more from the agricultural output of older farmers who intend to give up farming.
Abstract: Chinese agriculture is facing an aging workforce, which could negatively impact the industry. In this context, research is needed on how the work preferences and age of farmers affect agricultural output. This paper attempts to investigate these factors to more fully understand the impact of an aging agricultural labor population on agricultural production. The results show that, in this context of aging, changes in the working-age households have a significant impact on agricultural output. Even though the impacts of the intention to abandon land management were not significant, we cannot ignore this preference in the workforce. The combination of changes in the composition of the working-age households indicates that 58.53 percent of the agricultural producers will likely quit. This is a potential threat to the future of agricultural development. We also found that elderly farmers who do not intend to abandon farming had higher agricultural output compared to other farmers. This indicates that the adverse effects of changes in the agricultural population age result more from the agricultural output of older farmers who intend to give up farming. This intention adversely affected other elements and reduced investment. Therefore, various forms of training should increase efforts to cultivate modern professional farmers, and policies should be simultaneously developed to increase agricultural production levels.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an Excel spreadsheet-based framework that allows simultaneous implementation of LCA and LCCA in a real building case to evaluate three possible alternatives for an external skin system.
Abstract: Advancements in building materials and technology have led to the rapid development of various design solutions. At the same time, life cycle assessment (LCA) and life cycle cost analysis (LCCA) of such solutions have become a great burden to engineers and project managers. To help conduct LCA and LCCA conveniently, this study (i) analyzed the information needed to conduct LCA and LCCA, (ii) evaluated a way to obtain such information in an easy and accurate manner using a building information modeling tool, and (iii) developed an Excel spreadsheet-based framework that allowed for the simultaneous implementation of LCA and LCCA. The framework developed for LCA and LCCA was applied to a real building case to evaluate three possible alternatives for an external skin system. The framework could easily and accurately determine which skin system had good properties in terms of the LCA and LCCA performance. Therefore, these results are expected to assist in decision making based on the perspectives of economic and environmental performances in the early phases of a project, where various alternatives can be created and evaluated.

Journal ArticleDOI
TL;DR: A method that uses a coarse-to-fine scheme and has superior performance in the accuracy and stability of its reading identification is introduced and is applicable for reading gauges whose scale marks are either evenly or unevenly distributed.
Abstract: This study proposes an automatic reading approach for a pointer gauge based on computer vision. Moreover, the study aims to highlight the defects of the current automatic-recognition method of the pointer gauge and introduces a method that uses a coarse-to-fine scheme and has superior performance in the accuracy and stability of its reading identification. First, it uses the region growing method to locate the dial region and its center. Second, it uses an improved central projection method to determine the circular scale region under the polar coordinate system and detect the scale marks. Then, the border detection is implemented in the dial image, and the Hough transform method is used to obtain the pointer direction by means of pointer contour fitting. Finally, the reading of the gauge is obtained by comparing the location of the pointer with the scale marks. The experimental results demonstrate the effectiveness of the proposed approach. This approach is applicable for reading gauges whose scale marks are either evenly or unevenly distributed.
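
A sketch of the last two stages (pointer detection and reading interpolation) using OpenCV's probabilistic Hough transform. It assumes the dial center and the angles/values of the scale endpoints are already known from the earlier stages; all parameter values are illustrative.

```python
import cv2
import numpy as np

def read_gauge(image, center, angle_min, angle_max, val_min, val_max):
    """Find the pointer with a Hough transform and interpolate its angle
    between the known end-scale angles (dial/scale detection omitted)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None
    cx, cy = center
    def center_dist(l):
        # Distance from the dial center to the (infinite) line through l.
        x1, y1, x2, y2 = l[0]
        return abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1) / \
               (np.hypot(x2 - x1, y2 - y1) + 1e-9)
    x1, y1, x2, y2 = min(lines, key=center_dist)[0]   # likely the pointer
    # The pointer tip is the endpoint farther from the center.
    if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy):
        tip = (x1, y1)
    else:
        tip = (x2, y2)
    angle = np.arctan2(tip[1] - cy, tip[0] - cx)
    frac = (angle - angle_min) / (angle_max - angle_min)
    return val_min + frac * (val_max - val_min)
```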

Journal ArticleDOI
TL;DR: In this article, the authors proposed a single-setup multidelivery (SSMD) policy in supply chain management, where the buyer inspects all received products and returns defective items to vendor for reworking process.
Abstract: Because the single-setup multidelivery (SSMD) policy in supply chain management entails heavy transportation, this model includes a carbon emission cost to capture realistic environmental behavior. The transportation for buyer and vendor is considered along with setup cost reduction by using an investment function. It is assumed that the shipment lot size of each delivery is unequal and variable. The buyer inspects all received products and returns defective items to the vendor for a reworking process. Because of this policy, end customers will obtain only nondefective items. Analytical optimization is used to obtain the optimum solution of the model. The main goal of this paper is to reduce the total cost by considering carbon emission during transportation. A numerical example, graphical representation, and sensitivity analysis are given to illustrate the model.

Journal ArticleDOI
TL;DR: The proposed intelligent model can be considered as an alternative model to predict the efficiency of chemical flooding in oil reservoir when the required experimental data are not available or accessible.
Abstract: The application of chemical flooding in petroleum reservoirs has become a hot topic of recent research. Development strategies for the aforementioned technique are more robust and precise when we consider both economic (net present value, NPV) and technical (recovery factor, RF) points of view. The current study proposes a predictive model for estimating the efficiency of chemical flooding in oil reservoirs. To this end, a coupling of swarm intelligence and an artificial neural network (ANN) is employed. Also, extensive and highly precise chemical flooding data banks reported in previous studies are utilized to test and validate the proposed intelligent model. According to the mean square error (MSE), correlation coefficient, and average absolute relative deviation, the suggested swarm approach has acceptable reliability, integrity, and robustness. Thus, the proposed intelligent model can be considered as an alternative model to predict the efficiency of chemical flooding in oil reservoirs when the required experimental data are not available or accessible.

Journal ArticleDOI
TL;DR: This paper analyses the error sources of common indoor localization techniques and provides a multilayered conceptual framework of improvement schemes for location estimation, and investigates the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization.
Abstract: Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment impose a great impact on location estimation. The key of location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes on improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experiences. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on the location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present the directions for future research.
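
Of the probabilistic methods listed, the Kalman filter is the simplest to show end to end. Below is a 1D constant-velocity filter fusing noisy position fixes, of the kind a wireless positioning system might produce; all noise parameters and the scenario are invented for the sketch.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])          # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # we only observe position
Q = 0.01 * np.eye(2)                     # process noise (assumed)
R = np.array([[4.0]])                    # measurement noise, e.g., a Wi-Fi fix

x, P = np.zeros(2), np.eye(2)            # initial state estimate and covariance
rng = np.random.default_rng(0)
true_pos = 0.0
for step in range(50):
    true_pos += 1.0                                   # target moves 1 m/step
    z = true_pos + rng.normal(scale=2.0)              # noisy position fix
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P                       # update
print("estimated position:", x[0], "true position:", true_pos)
```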

Journal ArticleDOI
TL;DR: In this article, a fully gradient elasticity model for bending of nanobeams is proposed by using a nonlocal thermodynamic approach, where the proposed constitutive law is assumed to depend on the axial strain gradient.
Abstract: A fully gradient elasticity model for bending of nanobeams is proposed by using a nonlocal thermodynamic approach. As a basic theoretical novelty, the proposed constitutive law is assumed to depend on the axial strain gradient, while existing gradient elasticity formulations for nanobeams contemplate only the derivative of the axial strain with respect to the axis of the structure. Variational equations governing the elastic equilibrium problem of bending of a fully gradient nanobeam and the corresponding differential and boundary conditions are thus provided. Analytical solutions for a nanocantilever are given and the results are compared with those predicted by other theories. As a relevant implication of applicative interest in the research field of nanobeams used in nanoelectromechanical systems (NEMS), it is shown that displacements obtained by the present model are quite different from those predicted by the known gradient elasticity treatments.