
Showing papers in "Mathematical Problems in Engineering in 2018"


Journal ArticleDOI
TL;DR: Decision making trial and evaluation laboratory (DEMATEL) is considered an effective method for identifying the cause-effect chain components of a complex system; it evaluates interdependent relationships among factors and finds the critical ones through a visual structural model.
Abstract: Decision making trial and evaluation laboratory (DEMATEL) is considered an effective method for the identification of cause-effect chain components of a complex system. It deals with evaluating interdependent relationships among factors and finding the critical ones through a visual structural model. Over the past decade, a large number of studies have been devoted to the application of DEMATEL, and many different variants have been put forward in the literature. The objective of this study is to systematically review the methodologies and applications of the DEMATEL technique. We reviewed a total of 346 papers published in international journals from 2006 to 2016. According to the approaches used, these publications are grouped into five categories: classical DEMATEL, fuzzy DEMATEL, grey DEMATEL, analytical network process- (ANP-) DEMATEL, and other DEMATEL. All papers in each category are summarized and analyzed, pointing out their implementing procedures, real applications, and crucial findings. This systematic and comprehensive review offers researchers and practitioners valuable insights into using DEMATEL, indicating current research trends and potential directions for further research.
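
To make the reviewed procedure concrete, a minimal numpy sketch of the classical DEMATEL computation is given below; the direct-relation matrix and factor count are purely hypothetical and only illustrate the matrix operations (normalization, total-relation matrix, prominence, and relation).

```python
import numpy as np

# Hypothetical direct-relation matrix A: A[i, j] is the rated influence
# of factor i on factor j (0 = none ... 4 = very high).
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

# Normalize by the largest row sum (a common normalization in classical DEMATEL).
N = A / A.sum(axis=1).max()

# Total-relation matrix T = N (I - N)^(-1).
T = N @ np.linalg.inv(np.eye(len(A)) - N)

D = T.sum(axis=1)      # total influence dispatched by each factor
R = T.sum(axis=0)      # total influence received by each factor
prominence = D + R     # how important a factor is in the system
relation = D - R       # > 0: net cause factor, < 0: net effect factor
print(prominence, relation)
```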

429 citations


Journal ArticleDOI
TL;DR: In this article, a deep network architecture using residual bidirectional long short-term memory (LSTM) is proposed, which concatenates the positive time direction (forward state) and the negative time direction (backward state).
Abstract: Human activity recognition (HAR) has become a popular research topic because of its wide application. With the development of deep learning, new ideas have appeared to address HAR problems. Here, a deep network architecture using residual bidirectional long short-term memory (LSTM) cells is proposed. The advantages of the new network are twofold: first, a bidirectional connection concatenates the positive time direction (forward state) and the negative time direction (backward state); second, residual connections between stacked cells act as shortcuts for gradients, effectively avoiding the vanishing gradient problem. Overall, the proposed network improves on both the temporal dimension (bidirectional cells) and the spatial dimension (stacked residual connections), aiming to enhance the recognition rate. When tested on the Opportunity dataset and the public domain UCI dataset, the accuracy is significantly improved compared with previous results.
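
As an illustration of the idea rather than the authors' exact architecture, a residual bidirectional LSTM layer can be sketched in PyTorch as follows; the layer sizes, window length, and channel count are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBiLSTM(nn.Module):
    """One bidirectional LSTM layer with a residual (shortcut) connection."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        # Project the concatenated forward/backward states back to `dim`
        # so the shortcut addition is dimensionally consistent.
        self.proj = nn.Linear(2 * hidden, dim)

    def forward(self, x):              # x: (batch, time, dim)
        out, _ = self.lstm(x)          # (batch, time, 2 * hidden)
        return x + self.proj(out)      # residual connection over time

# Example: stack two residual BiLSTM layers for a HAR-style sensor window.
net = nn.Sequential(ResidualBiLSTM(113, 64), ResidualBiLSTM(113, 64))
y = net(torch.randn(8, 24, 113))       # (batch=8, window=24, channels=113), all assumed
```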

192 citations


Journal ArticleDOI
Xingyu Zhou, Zhisong Pan, Guyu Hu, Siqi Tang, Cheng Zhao
TL;DR: A generic framework employing Long Short-Term Memory (LSTM) and a convolutional neural network for adversarial training to forecast the high-frequency stock market, which can effectively improve stock price direction prediction accuracy and reduce forecast error.
Abstract: Stock price prediction is an important issue in the financial world, as it contributes to the development of effective strategies for stock exchange transactions. In this paper, we propose a generic framework employing Long Short-Term Memory (LSTM) and a convolutional neural network (CNN) for adversarial training to forecast the high-frequency stock market. This model takes the publicly available index provided by trading software as input to avoid complex financial theory research and difficult technical analysis, which is convenient for ordinary traders without a financial background. Our study simulates the trading mode of an actual trader and uses a rolling partition of the training and testing sets to analyze the effect of the model update cycle on prediction performance. Extensive experiments show that our proposed approach can effectively improve stock price direction prediction accuracy and reduce forecast error.

155 citations


Journal ArticleDOI
TL;DR: An effective deep learning method known as stacked autoencoders (SAEs) is proposed to solve gearbox fault diagnosis; it can directly extract salient features from frequency-domain signals and eliminate the exhausting use of handcrafted features.
Abstract: Machinery fault diagnosis is vital in modern manufacturing industry since early detection can avoid dangerous situations. Among various diagnosis methods, data-driven approaches are gaining popularity with the widespread development of data analysis techniques. In this research, an effective deep learning method known as stacked autoencoders (SAEs) is proposed to solve gearbox fault diagnosis. The proposed method can directly extract salient features from frequency-domain signals and eliminates the exhausting use of handcrafted features. Furthermore, to reduce overfitting in the training process and improve performance on small training sets, the dropout technique and the ReLU activation function are introduced into the SAEs. Two gearbox datasets are employed to confirm the effectiveness of the proposed method; the results indicate that the proposed method not only achieves significant improvement but is also superior to the raw SAEs and some other traditional methods.
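
A minimal sketch of an SAE-style classifier with ReLU activations and dropout is shown below; the layer sizes, input length (a frequency-domain vector of 1024 bins), and class count are assumptions, and in practice each encoder layer would first be pretrained as an autoencoder before fine-tuning.

```python
import torch.nn as nn

# Encoder layers of a stacked autoencoder reused as a fault classifier.
# Input: a frequency-domain (e.g., FFT magnitude) vector per vibration sample.
model = nn.Sequential(
    nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(0.5),   # dropout curbs overfitting
    nn.Linear(512, 128),  nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 5),    # 5 = assumed number of gearbox health conditions
)
```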

146 citations


Journal ArticleDOI
TL;DR: This paper proposes an algebraic substitution method and its structure, which can convert a noncascaded integral system under PID control into a cascaded integral form, and shows that the converted system can achieve a better control effect under ADRC than under PID.
Abstract: The Active Disturbance Rejection Control (ADRC) prefers the cascaded integral system for convenient design or better control effect and takes it as its typical form. However, the state variables of a practical system do not necessarily have a cascaded integral relationship. Therefore, this paper proposes an algebraic substitution method and its structure, which can convert a noncascaded integral system under PID control into a cascaded integral form. The adjustment of the ADRC controller parameters is also demonstrated. Meanwhile, a numerical example and the oscillation control of a flexible arm are presented to show the conversion, controller design, and control effect. The converted system is proved to be more suitable for direct ADRC control. In addition, for the numerical example, the control effect of the converted system is compared with a PID controller under different disturbances. The result shows that the converted system can achieve a better control effect under ADRC than under PID. Theory is a guide for practice: this converting method not only solves the ADRC control problem of some noncascaded integral systems in theory and simulation but also expands the application scope of the ADRC method.
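
For reference, the cascaded integral (canonical ADRC) form that the conversion targets is, for an nth-order plant,

```latex
\dot{x}_1 = x_2,\quad \dot{x}_2 = x_3,\ \dots,\quad
\dot{x}_n = f(x_1,\dots,x_n,\,w,\,t) + b\,u,\qquad y = x_1,
```

where f lumps the internal dynamics and the external disturbance w that the extended state observer estimates and compensates.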

133 citations


Journal ArticleDOI
TL;DR: This article presents a comprehensive literature review of FJSSPs solved using the GA and further covers the hybrid GA (hGA) techniques used in the solution of the problem.
Abstract: The Flexible Job Shop Scheduling Problem (FJSSP) is an extension of the classical Job Shop Scheduling Problem (JSSP). The FJSSP is known to be an NP-hard optimization problem, and it is very difficult to find reasonably accurate solutions to problem instances in a reasonable time. Extensive research has been carried out in this area, especially over the last 20 years, in which hybrid approaches involving the Genetic Algorithm (GA) have gained the most popularity. With this in mind, this article presents a comprehensive literature review of the FJSSPs solved using the GA. The survey is further extended by the inclusion of the hybrid GA (hGA) techniques used in the solution of the problem. This review will give readers insight into the use of certain parameters in their future research along with future research directions.

101 citations


Journal ArticleDOI
TL;DR: A novel LSTM ensemble forecasting algorithm that effectively combines multiple forecast (prediction) results from a set of individual LSTM networks and achieves state-of-the-art forecasting performance on four publicly available real-life time series datasets.
Abstract: Time series forecasting is essential for various engineering applications in finance, geology, information technology, etc. Long Short-Term Memory (LSTM) networks are nowadays gaining renewed interest and are replacing many practical implementations of time series forecasting systems. This paper presents a novel LSTM ensemble forecasting algorithm that effectively combines multiple forecast (prediction) results from a set of individual LSTM networks. The main advantages of our LSTM ensemble method over other state-of-the-art ensemble techniques are as follows: (1) we develop a novel way of dynamically adjusting the combining weights used to merge multiple LSTM models into the composite prediction output; the weights are updated at each time step in an adaptive and recursive way by using both past prediction errors and a forgetting weight factor; (2) our method captures nonlinear statistical properties in the time series well, which considerably improves forecasting accuracy; (3) our method is straightforward to implement and computationally efficient at runtime because it does not require complex optimization to find the combining weights. Comparative experiments demonstrate that our proposed LSTM ensemble method achieves state-of-the-art forecasting performance on four publicly available real-life time series datasets.
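
The adaptive weighting idea can be illustrated with a small numpy sketch; the particular update rule used below (inverse of exponentially discounted squared errors) is an assumption for illustration and is not necessarily the authors' exact formula.

```python
import numpy as np

def update_weights(disc_err, errors, lam=0.9, eps=1e-12):
    """Recursively update combining weights from the latest per-model errors.

    disc_err : running exponentially discounted squared error of each LSTM model
    errors   : prediction errors of each model at the current time step
    lam      : forgetting factor (older errors count less)
    """
    disc_err = lam * disc_err + (1 - lam) * np.asarray(errors) ** 2
    inv = 1.0 / (disc_err + eps)
    weights = inv / inv.sum()      # models with smaller recent error get more weight
    return disc_err, weights

# Combine three hypothetical LSTM forecasts with the current weights.
disc_err = np.ones(3)
disc_err, w = update_weights(disc_err, errors=[0.2, 0.05, 0.4])
composite_forecast = np.dot(w, [101.3, 100.9, 102.0])
```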

78 citations


Journal ArticleDOI
TL;DR: The morphology and rhythm of heartbeats are fused into a two-dimensional information vector for subsequent processing by CNNs that include adaptive learning rate and biased dropout methods; the results demonstrate that the proposed CNN model is effective for detecting irregular heartbeats or arrhythmias via automatic feature extraction.
Abstract: Although convolutional neural networks (CNNs) can be used to classify electrocardiogram (ECG) beats in the diagnosis of cardiovascular disease, ECG signals are typically processed as one-dimensional signals while CNNs are better suited to multidimensional pattern or image recognition applications. In this study, the morphology and rhythm of heartbeats are fused into a two-dimensional information vector for subsequent processing by CNNs that include adaptive learning rate and biased dropout methods. The results demonstrate that the proposed CNN model is effective for detecting irregular heartbeats or arrhythmias via automatic feature extraction. When the proposed model was tested on the MIT-BIH arrhythmia database, the model achieved higher performance than other state-of-the-art methods for five and eight heartbeat categories (the average accuracy was 99.1% and 97%). In particular, the proposed system had better performance in terms of the sensitivity and positive predictive rate for V beats by more than 4.3% and 5.4%, respectively, and also for S beats by more than 22.6% and 25.9%, respectively, when compared to existing algorithms. It is anticipated that the proposed method will be suitable for implementation on portable devices for the e-home health monitoring of cardiovascular disease.

78 citations


Journal ArticleDOI
TL;DR: In this article, a turning test of stainless steel was carried out using the central composite surface design of the response surface method (RSM) and the Taguchi design method with a central composite design.
Abstract: A turning test of stainless steel was carried out using the central composite surface design of the response surface method (RSM) and the Taguchi design method with a central composite design. The influence of the cutting parameters (cutting speed, feed rate, and cutting depth) on the surface roughness was analyzed. A surface roughness prediction model was established based on the second-order RSM. According to the test results, the regression coefficients were estimated by the least squares method, and the regression equation was curve fitted. Meanwhile, a significance analysis was conducted to test the goodness of fit, and the response surface design and analysis were carried out, including a response surface map and a three-dimensional surface map. The life of the machining tool was analyzed based on the optimized parameters. The results show that feed rate has the most significant influence on surface roughness, cutting depth is second, and cutting speed has the least influence. Therefore, the cutting parameters are optimized and tool life is analyzed to realize efficient and economical cutting of difficult-to-process materials while ensuring processing quality.
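
The second-order response surface model referred to above has the standard form (with cutting speed, feed rate, and cutting depth as the factors x_i):

```latex
\hat{y} = \beta_0 + \sum_{i=1}^{k}\beta_i x_i
        + \sum_{i=1}^{k}\beta_{ii} x_i^{2}
        + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon .
```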

74 citations


Journal ArticleDOI
TL;DR: In this article, a simple method is proposed to predict the ground displacements caused by installing horizontal jet-grouting columns in soft ground, which is simplified as the expansion of a cylindrical cavity with a uniform radial stress applied at plastic-elastic interface in a half plane.
Abstract: During the horizontal jet grouting in soft ground, injection of large volumes of water and grout into the soil can lead to significant ground displacements. A simple method is proposed in this paper to predict the ground displacements caused by installing horizontal jet-grouting columns. The process of installing a horizontal column is simplified as the expansion of a cylindrical cavity with a uniform radial stress applied at plastic-elastic interface in a half plane. In this study, the analytical solution is adopted to calculate the deformation induced by the expansion of a cylindrical cavity. Considering the main jetting parameters (jetting pressure of the fluid, flow rate of the fluid, and withdrawal rate of the rod) and the soil properties (stiffness of the surrounding soil), an empirical equation to estimate the radius of plastic zone is developed. Two field tests are carried out in Shanghai, China, to verify the correctness and applicability of the proposed method. Comparisons between the predicted and measured values indicate that the proposed method can provide a reasonable prediction. The proposed simple method can be recommended as a useful tool for the design of ground improvement by means of horizontal jet grouting.

73 citations


Journal ArticleDOI
TL;DR: Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.
Abstract: Modern object detectors always include two major parts, a feature extractor and a feature classifier, the same as traditional object detectors. Deeper and wider convolutional architectures are currently adopted as the feature extractor. However, many notable object detection systems such as Fast/Faster RCNN only consider simple fully connected layers as the feature classifier. In this paper, we argue that it is beneficial for detection performance to elaborately design deep convolutional networks (ConvNets) of various depths for feature classification, especially using fully convolutional architectures. In addition, this paper also demonstrates how to employ fully convolutional architectures in Fast/Faster RCNN. Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.

Journal ArticleDOI
TL;DR: A Pythagorean fuzzy VIKOR (PF-VIKOR) approach is developed for solving EVCS site selection problems, in which the evaluations of alternatives are given as linguistic terms characterized by Pythagorean fuzzy values (PFVs).
Abstract: Site selection for electric vehicle charging stations (EVCSs) is the process of determining the most suitable location among alternatives for the construction of charging facilities for electric vehicles. It can be regarded as a complex multicriteria decision-making (MCDM) problem requiring consideration of multiple conflicting criteria. In the real world, it is often hard or impossible for decision makers to estimate their preferences with exact numerical values. Therefore, Pythagorean fuzzy set theory has been frequently used to handle imprecise data and vague expressions in practical decision-making problems. In this paper, a Pythagorean fuzzy VIKOR (PF-VIKOR) approach is developed for solving EVCS site selection problems, in which the evaluations of alternatives are given as linguistic terms characterized by Pythagorean fuzzy values (PFVs). In particular, the generalized Pythagorean fuzzy ordered weighted standardized distance (GPFOWSD) operator is proposed to calculate the utility and regret measures for ranking alternative sites. Finally, a practical example in Shanghai, China, is included to demonstrate the proposed EVCS siting model, and its advantages are highlighted by comparing the results with other relevant methods.
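
For reference, a Pythagorean fuzzy value is a pair of membership and nonmembership degrees (mu, nu) satisfying

```latex
0 \le \mu^{2} + \nu^{2} \le 1,\qquad
\pi = \sqrt{1 - \mu^{2} - \nu^{2}},
```

where pi is the hesitancy degree; this relaxes the intuitionistic fuzzy condition mu + nu <= 1 and gives decision makers a larger expression space.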

Journal ArticleDOI
TL;DR: An effective novel technique is introduced to improve the performance of CBIR on the basis of visual words fusion of scale-invariant feature transform (SIFT) and local intensity order pattern (LIOP) descriptors, which overcomes the weaknesses of the individual descriptors and significantly improves retrieval performance.
Abstract: Content-based image retrieval (CBIR) is a mechanism used to retrieve similar images from an image collection. In this paper, an effective novel technique is introduced to improve the performance of CBIR on the basis of visual words fusion of scale-invariant feature transform (SIFT) and local intensity order pattern (LIOP) descriptors. SIFT performs well under scale changes and rotations, even large ones, but does not perform well in the case of low contrast and illumination changes within an image, whereas LIOP performs better in such circumstances but does not handle large rotation and scale changes well. Moreover, SIFT features are more invariant to slight distortion than LIOP. The proposed technique is based on the visual words fusion of SIFT and LIOP descriptors, which overcomes the aforementioned issues and significantly improves the performance of CBIR. The experimental results of the proposed technique are compared with another proposed novel features fusion technique based on SIFT-LIOP descriptors as well as with state-of-the-art CBIR techniques. The qualitative and quantitative analysis carried out on three image collections, namely, Corel-A, Corel-B, and Caltech-256, demonstrates the robustness of the proposed technique based on visual words fusion as compared to features fusion and the state-of-the-art CBIR techniques.

Journal ArticleDOI
TL;DR: In this article, an inventory model for deteriorating items with controllable deterioration rate (by using preservation technology) under trade credit policy is developed, where the main objective of the inventory model is to determine jointly the optimal ordering, pricing, and preservation technology investment policies for retailer so that the total profit is maximized.
Abstract: This article develops an inventory model for deteriorating items with a controllable deterioration rate (by using preservation technology) under a trade credit policy. Since in practical scenarios the demand for an item is directly associated with its selling price, demand is assumed to be price dependent. The main objective of the inventory model is to jointly determine the optimal ordering, pricing, and preservation technology investment policies for the retailer so that the total profit is maximized. The effects of key parameters on the optimal solution are studied through a sensitivity analysis with the aim of examining the behavior of the inventory model with controllable deterioration under the permissible delay in payments.

Journal ArticleDOI
TL;DR: In this paper, a short-term power load forecasting method based on the improved exponential smoothing grey model was proposed, which can take the effects of the influencing factors on the power load into consideration.
Abstract: In order to improve prediction accuracy, this paper proposes a short-term power load forecasting method based on an improved exponential smoothing grey model. It first determines the main factors affecting the power load using grey correlation analysis. It then conducts power load forecasting using the improved multivariable grey model. The improved prediction model first smooths the original power load data using the first exponential smoothing method. Second, the grey prediction model with an optimized background value is established using the smoothed sequence, which follows an exponential trend. Finally, the inverse exponential smoothing method is employed to restore the predicted value. The first exponential smoothing model uses the 0.618 (golden section) method to search for the optimal smoothing coefficient. The prediction model can take the effects of the influencing factors on the power load into consideration. The simulation results show that the proposed prediction algorithm has a satisfactory prediction effect and meets the requirements of short-term power load forecasting. This research not only further improves the accuracy and reliability of short-term power load forecasting but also extends the application scope of the grey prediction model and shortens the search interval.
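
For context, the classical single-variable GM(1,1) grey model underlying such schemes is as follows (the paper uses a multivariable variant and replaces the usual mean background value z below with an optimized one):

```latex
x^{(1)}(k)=\sum_{i=1}^{k}x^{(0)}(i),\qquad
x^{(0)}(k)+a\,z^{(1)}(k)=b,\qquad
z^{(1)}(k)=\tfrac{1}{2}\bigl(x^{(1)}(k)+x^{(1)}(k-1)\bigr),
```
```latex
\hat{x}^{(1)}(k+1)=\Bigl(x^{(0)}(1)-\frac{b}{a}\Bigr)e^{-ak}+\frac{b}{a},\qquad
\hat{x}^{(0)}(k+1)=\hat{x}^{(1)}(k+1)-\hat{x}^{(1)}(k).
```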

Journal ArticleDOI
TL;DR: A deep convolutional neural network (dCNN) is trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations, competing with the state of the art.
Abstract: Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components such as eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) using an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images to improve recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.

Journal ArticleDOI
TL;DR: The fuzzy Kano model combined with the fuzzy analytic hierarchy process (AHP) is developed to determine the priority of the development of attractive factors and the results are expected to help designers to increase design efficiency and improve consumer satisfaction of new products.
Abstract: The success of a new product is usually determined not by whether it includes high-end technology, but by whether it meets consumer expectations, especially key Kansei demands. This article aims to evaluate attractive factors (Kansei words) and convert them to design elements to make products stand out in the global competition. The evaluation grid method (EGM) is an important research method of Miryoku engineering. The method can build qualitative relations among consumers’ attractive factors and design elements. The quality function deployment (QFD) is a quantitative method which converts customer requirements into engineering characteristics using the House of Quality Matrix. The QFD together with the concept of fuzziness can objectively measure questionnaires made by experts. Accordingly, this paper proposes a systematic approach that integrates the EGM together with the fuzzy QFD for the development of new products. The fuzzy Kano model combined with the fuzzy analytic hierarchy process (AHP) is developed to determine the priority of the development of attractive factors. This empirical study uses minicars as an example to verify the feasibility and validity of the approach. The results are expected to help designers to increase design efficiency and improve consumer satisfaction of new products.

Journal ArticleDOI
TL;DR: In this paper, the authors optimize the convolutional neural network model by embedding word order characteristics in its convolution layer and pooling layer, which makes the CNN more suitable for short text classification and deceptive opinion detection.
Abstract: The convolutional neural network (CNN) has revolutionized the field of natural language processing; it is considerably efficient at the semantic analysis that underlies difficult natural language processing problems in a variety of domains. Deceptive opinion detection is an important application of existing CNN models. The detection mechanism based on CNN models has better self-adaptability and can effectively identify all kinds of deceptive opinions. Online opinions are quite short, varying in type and content. In order to effectively identify deceptive opinions, we need to comprehensively study the characteristics of deceptive opinions and explore novel characteristics besides the textual semantics and emotional polarity that have been widely used in text analysis. In this paper, we optimize the convolutional neural network model by embedding word order characteristics in its convolution layer and pooling layer, which makes the convolutional neural network more suitable for short text classification and deceptive opinion detection. The TensorFlow-based experiments demonstrate that the proposed detection mechanism achieves more accurate deceptive opinion detection results.

Journal ArticleDOI
TL;DR: To resist pixel difference histogram (PDH) analysis and RS analysis, two hybrid image steganography techniques based on an appropriate combination of LSB substitution, pixel value differencing (PVD), and exploiting modification directions (EMD) are proposed.
Abstract: To resist pixel difference histogram (PDH) analysis and RS analysis, two hybrid image steganography techniques combining LSB substitution, pixel value differencing (PVD), and exploiting modification directions (EMD) are proposed in this paper. The cover image is traversed in raster scan order and partitioned into blocks. The first technique operates on 2 × 2 pixel blocks and the second technique operates on 3 × 3 pixel blocks. For each block, the average pixel value difference is calculated. If this value is greater than 15, the block lies in an edge area, so a combination of LSB substitution and PVD is applied. If it is less than or equal to 15, the block lies in a smooth area, so a combination of LSB substitution and EMD is applied. Each of these two techniques exists in two variants (Type 1 and Type 2) with respect to two different range tables. The hiding capacities and PSNR of both techniques are found to be improved. The experimental results show that PDH analysis and RS analysis cannot detect the proposed techniques.
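
A small numpy sketch of the block classification step is given below; the particular definition of the average pixel value difference (mean absolute deviation from the block mean) is an assumption for illustration, since the paper defines its own difference measure.

```python
import numpy as np

def classify_blocks(cover, size=3, threshold=15):
    """Classify size x size blocks of a grayscale cover image as edge or smooth.

    The 'average pixel value difference' is taken here as the mean absolute
    deviation of the block pixels from the block mean (an assumption).
    """
    h, w = cover.shape
    labels = {}
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            block = cover[r:r + size, c:c + size].astype(float)
            d = np.abs(block - block.mean()).mean()
            # edge block -> LSB + PVD embedding; smooth block -> LSB + EMD embedding
            labels[(r, c)] = "edge (LSB+PVD)" if d > threshold else "smooth (LSB+EMD)"
    return labels
```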

Journal ArticleDOI
TL;DR: This machine learning algorithm used with the feature extraction process proposed in this study can be a very promising tool to assist transportation agencies in the task of pavement condition survey.
Abstract: Periodic surveys of asphalt pavement condition are crucial in road maintenance. This work carries out a comparative study on the performance of machine learning approaches used for automatic pavement crack recognition. Six machine learning approaches, Naive Bayesian Classifier (NBC), Classification Tree (CT), Backpropagation Artificial Neural Network (BPANN), Radial Basis Function Neural Network (RBFNN), Support Vector Machine (SVM), and Least Squares Support Vector Machine (LSSVM), have been employed. Additionally, Median Filter (MF), Steerable Filter (SF), and Projective Integral (PI) have been used to extract useful features from pavement images. In the feature extraction phase, performance comparison shows that an input pattern including the diagonal PIs enhances classification performance significantly by creating more informative features. A simple moving average method is also employed to reduce the size of the feature set, with positive effects on the model classification performance. Experimental results point out that LSSVM has achieved the highest classification accuracy rate. Therefore, this machine learning algorithm, used with the feature extraction process proposed in this study, can be a very promising tool to assist transportation agencies in the task of pavement condition survey.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the role of trade credit and quantity discount in supply chain coordination when the sales effort effect on market demand is considered and developed a hybrid quantitative analytical model for supply chain co-ordination by coherently integrating incentives of trade credits and quantity discounts with sales effort effects.
Abstract: The purpose of this paper is to investigate the role of trade credit and quantity discount in supply chain coordination when the sales effort effect on market demand is considered. In this paper, we consider a two-echelon supply chain consisting of a single retailer ordering a single product from a single manufacturer. Market demand is stochastic and is influenced by retailer sales effort. We formulate an analytical model based on a single trade credit and find that the single trade credit cannot achieve the perfect coordination of the supply chain. Then, we develop a hybrid quantitative analytical model for supply chain coordination by coherently integrating incentives of trade credit and quantity discount with sales effort effects. The results demonstrate that, providing that the discount rate satisfies certain conditions, the proposed hybrid model combining trade credit and quantity discount will be able to effectively coordinate the supply chain by motivating retailers to exert their sales effort and increase product order quantity. Furthermore, the hybrid quantitative analytical model can provide great flexibility in coordinating the supply chain to achieve an optimal situation through the adjustment of relevant parameters to resolve conflict of interests from different supply chain members. Numerical examples are provided to demonstrate the effectiveness of the hybrid model.

Journal ArticleDOI
TL;DR: A novel framework is presented that integrates the Taguchi Method into a deep autoencoder based system without modifying the overall structure of the network; the results are quite encouraging and verify the overall performance of the proposed framework.
Abstract: Deep autoencoder neural networks have been widely used in several image classification and recognition problems, including handwriting recognition, medical imaging, and face recognition. The overall performance of deep autoencoder neural networks mainly depends on the number of parameters used, the structure of the neural network, and the compatibility of the transfer functions. However, an inappropriate structure design can cause a reduction in the performance of deep autoencoder neural networks. A novel framework is presented that integrates the Taguchi Method into a deep autoencoder based system without modifying the overall structure of the network. Several experiments are performed using various data sets from different fields, i.e., network security and medicine. The results show that the proposed method is more robust than some of the well-known methods in the literature, as our method performed better most of the time. Therefore, the results are quite encouraging and verify the overall performance of the proposed framework.

Journal ArticleDOI
TL;DR: This study presents a RAGA-PP-SFA model to measure green technology innovation efficiency in the high-end manufacturing industry, addressing the shortcoming of traditional SFA methods, which cannot handle multi-output efficiency.
Abstract: This study offers a RAGA-PP-SFA model to measure green technology’s innovation efficiency in the high-end manufacturing industry. The study’s aim is to solve the shortcomings of traditional SFA methods that are unable to improve multi-output efficiency. The RAGA-PP-SFA model presented here is based on the multi-emission and multi-output characteristics of high-end manufacturing innovation activities. Using panel data from 2010 to 2015 on China's high-end manufacturing industry and considering factors such as environmental regulation, government subsidy, and market maturity, this paper empirically examines and compares the efficiency of green technology innovation versus traditional technology innovation, as well as regional heterogeneity in China's high-end manufacturing industry. The study ultimately found a low level of green technology innovation efficiency in China’s high-end manufacturing industry. However, an overall rising trend shows that the green development of China's high-end manufacturing industry has achieved remarkable results. Green technology innovation efficiency in high-end manufacturing industries across various regions was generally lower than the efficiency of traditional technology innovation. Both types of efficiency showed a pattern of “high in the east and low in the middle and in the west”. High-high efficiency is primarily found in the east, whereas the west is characterized by low-low efficiency. There are significant differences between regions, pointing to an equal rate of development. Government subsidies and enterprise scale had a significant negative impact on green technology innovation efficiency in regional high-end manufacturing industries, while market maturity and industrial agglomeration had a significant positive impact. Based on the study’s findings, environmental regulation and openness to the outside world play insignificant roles in green technology innovation efficiency.

Journal ArticleDOI
TL;DR: According to the set of robustly reachable states, some necessary and sufficient criteria are obtained for robust synchronization of drive-response BCNs with disturbances under a given state feedback controller.
Abstract: This paper investigates the robust synchronization of drive-response Boolean control networks (BCNs) with disturbances via semi-tensor product of matrices. Firstly, the definition of robust synchronization is presented for the drive-response BCNs with disturbances. Then, based on the algebraic state space representation of drive-response BCNs, the robustly reachable states/sets are presented to investigate robust synchronization of disturbed BCNs. According to the set of robustly reachable states, some necessary and sufficient criteria are obtained for robust synchronization of drive-response BCNs with disturbances under a given state feedback controller. Finally, an illustrative example is presented to demonstrate the obtained theoretical results.
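
For reference, the semi-tensor product used here generalizes ordinary matrix multiplication to matrices of arbitrary dimensions: for A of size m x n and B of size p x q,

```latex
A \ltimes B = \bigl(A \otimes I_{t/n}\bigr)\bigl(B \otimes I_{t/p}\bigr),
\qquad t = \operatorname{lcm}(n,\,p),
```

where the symbol between the identity factors is the Kronecker product; when n = p this reduces to the ordinary matrix product, which is what allows the logical dynamics of a BCN to be written in algebraic state space form.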

Journal ArticleDOI
TL;DR: A new algorithm based on perceptual color difference saliency along with binary morphological analysis for segmentation of melanoma skin lesion in dermoscopic images is presented.
Abstract: The prevalence of melanoma skin cancer disease is rapidly increasing as recorded death cases of its patients continue to annually escalate. Reliable segmentation of skin lesion is one essential requirement of an efficient noninvasive computer aided diagnosis tool for accelerating the identification process of melanoma. This paper presents a new algorithm based on perceptual color difference saliency along with binary morphological analysis for segmentation of melanoma skin lesion in dermoscopic images. The new algorithm is compared with existing image segmentation algorithms on benchmark dermoscopic images acquired from public corpora. Results of both qualitative and quantitative evaluations of the new algorithm are encouraging as the algorithm performs excellently in comparison with the existing image segmentation algorithms.

Journal ArticleDOI
TL;DR: This study proposes a novel hybrid intelligent method to improve existing forecasting models such as random forest (RF) and artificial neural networks, for higher accuracy and scalability.
Abstract: A variety of supervised learning methods using numerical weather prediction (NWP) data have been exploited for short-term wind power forecasting (WPF). However, NWP data may not be sufficiently reliable because of uncertainties in initial atmospheric conditions. Thus, this study proposes a novel hybrid intelligent method to improve existing forecasting models, such as random forest (RF) and artificial neural networks, for higher accuracy. First, the proposed method develops a predictive deep belief network (DBN) to perform short-term wind speed prediction (WSP). The WSP results are then transformed into supplementary input features for the WPF prediction process. Second, owing to its ensemble learning and parallelization, the random forest is used as the supervised forecasting model. In addition, a data-driven dimension reduction procedure and a weighted voting method are utilized to optimize the random forest algorithm in the training process and the prediction process, respectively. Because an increasing number of training samples can cause overfitting, the k-fold cross validation (CV) technique is adopted to address this issue. Numerical experiments are performed at 15-min, 30-min, 45-min, and 24-h horizons to demonstrate the superiority and clear advantages of the proposed method over existing methods in terms of forecasting accuracy and scalability.
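
A minimal scikit-learn sketch of the second stage (random forest WPF with the wind speed prediction as a supplementary feature, validated by k-fold CV) is given below; the DBN wind speed predictor is replaced by a placeholder array, and all array shapes and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
nwp = rng.normal(size=(500, 6))     # NWP features (wind speed, direction, ...), assumed
wsp = rng.normal(size=(500, 1))     # stand-in for the DBN short-term wind speed prediction
power = rng.normal(size=500)        # observed wind power (forecast target)

X = np.hstack([nwp, wsp])           # WSP output used as a supplementary input feature
rf = RandomForestRegressor(n_estimators=200, random_state=0)

# k-fold cross validation to curb overfitting as the training set grows
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(rf, X, power, cv=cv, scoring="neg_mean_absolute_error")
print(-scores.mean())
```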

Journal ArticleDOI
TL;DR: The proposed forest fire detection algorithm applies background subtraction to detect regions containing movement, and temporal variation is employed to differentiate between fire and fire-colored objects.
Abstract: Forest fires represent a real threat to human lives, ecological systems, and infrastructure. Many commercial fire detection sensor systems exist, but all of them are difficult to apply in large open spaces like forests because of their response delay, maintenance requirements, high cost, and other problems. In this paper a forest fire detection algorithm is proposed, consisting of the following stages. Firstly, background subtraction is applied to detect regions containing movement. Secondly, the segmented moving regions are converted from RGB to YCbCr color space, and five fire detection rules are applied to separate candidate fire pixels. Finally, temporal variation is employed to differentiate between fire and fire-colored objects. The proposed method is tested using a data set consisting of 6 videos collected from the Internet. The final results show that the proposed method achieves a true detection rate of up to 96.63%. These results indicate that the proposed method is accurate and can be used in automatic forest fire-alarm systems.
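
A small sketch of the color-space stage is shown below; the RGB-to-YCbCr conversion is the standard full-range BT.601 form, while the two chrominance rules are only illustrative stand-ins for the five specific rules used in the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr conversion (rgb as float array in [0, 255])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def candidate_fire_mask(rgb):
    """Illustrative chrominance rules only; the paper applies five specific rules."""
    y, cb, cr = rgb_to_ycbcr(rgb.astype(float))
    return (y > cb) & (cr > cb)      # flame pixels tend to be bright and red-dominant
```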

Journal ArticleDOI
TL;DR: The general detector YOLOv2, a state-of-the-art method for general detection tasks, is employed for pedestrian detection, and the network parameters and structures are modified according to the characteristics of pedestrians, making the method more suitable for detecting pedestrians.
Abstract: In recent years, techniques based on deep detection models have achieved overwhelming improvements in detection accuracy, which makes them well suited to applications such as pedestrian detection. However, speed and accuracy are a pair of contradictions that always exist and have long puzzled researchers; how to achieve a good trade-off between them is a problem that must be considered when designing detectors. To this end, we employ the general detector YOLOv2, a state-of-the-art method for general detection tasks, for pedestrian detection. We then modify the network parameters and structures according to the characteristics of pedestrians, making this method more suitable for detecting pedestrians. Experimental results on the INRIA pedestrian detection dataset show that it achieves a fairly high detection speed with a small precision gap compared with the state-of-the-art pedestrian detection methods. Furthermore, we add weak semantic segmentation networks after the shared convolution layers to illuminate pedestrians and employ a scale-aware structure in our model according to the wide size range in the Caltech pedestrian detection dataset, which brings further improvements over the original model.

Journal ArticleDOI
TL;DR: The objective of this work is to design an intelligent fuzzy-based fractional-order PID control scheme to ensure a robust performance with respect to load variation and external disturbances.
Abstract: This article presents a fuzzy fractional-order PID (FFOPID) controller scheme for a pneumatic pressure regulating system. Industrial pneumatic pressure systems have strongly dynamic and nonlinear characteristics; furthermore, these systems encounter frequent load variations and external disturbances. Hence, for the smooth and trouble-free operation of an industrial pressure system, an effective control mechanism should be adopted. The objective of this work is to design an intelligent fuzzy-based fractional-order PID control scheme that ensures robust performance with respect to load variations and external disturbances. A novel model of a pilot pressure regulating system is developed to validate the effectiveness of the proposed control scheme. Simulation studies are carried out on a delayed nonlinear pressure regulating system under different operating conditions using a fractional-order PID (FOPID) controller with a fuzzy online gain tuning mechanism. The results demonstrate the usefulness of the proposed strategy and confirm the performance improvement for the pneumatic pressure system. To highlight the advantages of the proposed scheme, a comparative study with conventional PID and FOPID control schemes is made.
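
For reference, the fractional-order PID control law has the standard form

```latex
u(t) = K_p\,e(t) + K_i\,D_t^{-\lambda} e(t) + K_d\,D_t^{\mu} e(t),
\qquad
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\,s^{\mu},
```

where lambda and mu are the fractional integral and derivative orders; in the proposed scheme the controller gains are additionally tuned online by the fuzzy mechanism.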

Journal ArticleDOI
TL;DR: The Social Spider Optimization is a novel swarm algorithm that is based on the cooperative characteristics of the social spider and defines two different search agents: male and female.
Abstract: Swarm intelligence (SI) is a research field which has recently attracted the attention of several scientific communities. An SI approach tries to characterize the collective behavior of animal or insect groups to build a search strategy. These methods consider biological systems, which can be modeled as optimization processes to a certain extent. The Social Spider Optimization (SSO) is a novel swarm algorithm that is based on the cooperative characteristics of the social spider. In SSO, search agents represent a set of spiders which collectively move according to the biological behavior of the colony. In most of SI algorithms, all individuals are modeled considering the same properties and behavior. In contrast, SSO defines two different search agents: male and female. Therefore, according to the gender, each individual is conducted by using a different evolutionary operation which emulates its biological role in the colony. This individual categorization allows reducing critical flaws present in several SI approaches such as incorrect exploration-exploitation balance and premature convergence. After its introduction, SSO has been modified and applied in several engineering domains. In this paper, the state of the art, improvements, and applications of the SSO are reviewed.