
Showing papers in "Algorithms in 2018"


Journal ArticleDOI
TL;DR: The NIRS Brain AnalyzIR toolbox is introduced as an open-source Matlab-based analysis package for fNIRS data management, pre-processing, and first- and second-level statistical analysis, based on the object-oriented programming paradigm.
Abstract: Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low-levels of light (650–900 nm) to measure changes in cerebral blood volume and oxygenation. Over the last several decades, this technique has been utilized in a growing number of functional and resting-state brain studies. The lower operation cost, portability, and versatility of this method make it an alternative to methods such as functional magnetic resonance imaging for studies in pediatric and special populations and for studies without the confining limitations of a supine and motionless acquisition setup. However, the analysis of fNIRS data poses several challenges stemming from the unique physics of the technique, the unique statistical properties of data, and the growing diversity of non-traditional experimental designs being utilized in studies due to the flexibility of this technology. For these reasons, specific analysis methods for this technology must be developed. In this paper, we introduce the NIRS Brain AnalyzIR toolbox as an open-source Matlab-based analysis package for fNIRS data management, pre-processing, and first- and second-level (i.e., single subject and group-level) statistical analysis. Here, we describe the basic architectural format of this toolbox, which is based on the object-oriented programming paradigm. We also detail the algorithms for several of the major components of the toolbox including statistical analysis, probe registration, image reconstruction, and region-of-interest based statistics.

228 citations
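As orientation for readers new to first-level fNIRS statistics: the analysis reduces to fitting a general linear model per channel. The toolbox itself is Matlab-based and uses more robust estimators (e.g., autoregressive iteratively reweighted least squares); the sketch below is only a plain ordinary-least-squares GLM in Python, with illustrative HRF parameters and function names, not the toolbox's implementation.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t):
    """Double-gamma HRF (illustrative parameters, not toolbox defaults)."""
    return gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0

def first_level_glm(y, onsets, fs, duration=5.0):
    """OLS GLM fit of one fNIRS channel (HbO or HbR) against a task regressor.

    y : (n,) signal; onsets : stimulus onset times in seconds; fs : Hz.
    Returns the regression betas (task effect, intercept) and residuals.
    """
    n = len(y)
    t = np.arange(n) / fs
    stim = np.zeros(n)
    for o in onsets:                            # boxcar task design
        stim[(t >= o) & (t < o + duration)] = 1.0
    hrf = canonical_hrf(np.arange(0.0, 32.0, 1.0 / fs))
    x = np.convolve(stim, hrf)[:n]              # predicted hemodynamic response
    X = np.column_stack([x, np.ones(n)])        # task regressor + intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta
```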


Journal ArticleDOI
TL;DR: In this article, a knowledge-base representation learning framework is proposed to embed heterogeneous entities for recommendation, and, based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items.
Abstract: Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)-based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and, based on the embedded knowledge base, a soft matching algorithm to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.

214 citations
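To make the embedding-plus-soft-matching idea concrete, here is a minimal TransE-style sketch in Python. The entity/relation tables, scoring function, and soft_match helper are illustrative assumptions; the paper's actual model, training loss, and explanation templates differ.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 32, 1000, 10

# Randomly initialized embeddings for users/items/attributes and relations.
E = rng.normal(scale=0.1, size=(n_entities, dim))
R = rng.normal(scale=0.1, size=(n_relations, dim))

def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): smaller distance = more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def soft_match(user, relation, candidates):
    """Rank candidate tail entities for (user, relation, ?) as a soft match,
    e.g., to pick the attribute that best explains a recommendation."""
    scores = [transe_score(user, relation, c) for c in candidates]
    order = np.argsort(scores)[::-1]
    return [candidates[i] for i in order]
```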


Journal ArticleDOI
TL;DR: Work in the area encompasses both structural questions (Is the reconfiguration graph connected?) and algorithmic ones (How can one find the shortest sequence of steps between two solutions?).
Abstract: Reconfiguration is concerned with relationships among solutions to a problem instance, where the reconfiguration of one solution to another is a sequence of steps such that each step produces an intermediate feasible solution. The solution space can be represented as a reconfiguration graph, where two vertices representing solutions are adjacent if one can be formed from the other in a single step. Work in the area encompasses both structural questions (Is the reconfiguration graph connected?) and algorithmic ones (How can one find the shortest sequence of steps between two solutions?). This survey discusses techniques, results, and future directions in the area.

174 citations
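The algorithmic question above (finding the shortest sequence of steps) is, for instances small enough to enumerate, a breadth-first search in the implicitly defined reconfiguration graph. A minimal sketch, assuming the caller supplies a neighbors function encoding one reconfiguration step:

```python
from collections import deque

def shortest_reconfiguration(start, target, neighbors):
    """Breadth-first search over the (implicitly given) reconfiguration graph.

    start, target : hashable encodings of feasible solutions
    neighbors(s)  : iterable of solutions reachable from s in one step

    Returns the shortest step sequence from start to target, or None if the
    two solutions lie in different components of the reconfiguration graph.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in neighbors(s):
            if nxt not in parent:
                parent[nxt] = s
                queue.append(nxt)
    return None
```

For token jumping on independent sets, for example, neighbors(s) would yield every feasible solution obtained by moving a single token.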


Journal ArticleDOI
TL;DR: This study used the datasets MNIST, HCL2000, and EnglishHand as the benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set to [0.05, 0.07].
Abstract: This study proposes a modified convolutional neural network (CNN) algorithm that is based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS), after analyzing the problems of CNNs in extracting the convolution features, to improve the feature recognition rate and reduce the time-cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the gradient problem and accelerate convergence. To address the overfitting problem, the algorithm inserts a dropout layer between the fully connected and output layers and uses an SGD optimizer to minimize cross entropy. This study used the datasets MNIST, HCL2000, and EnglishHand as the benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set in [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) For the benchmark MNIST, the MCNN-DS exhibited a high recognition rate of 99.97%, and its time-cost was merely 21.95% that of MLP-CNN and 10.02% that of SVM-ELM; (2) Compared with SVM-ELM, the average improvement in the recognition rate of MCNN-DS was 2.35% for the benchmark HCL2000, and the time-cost of MCNN-DS was only 15.41% that of SVM-ELM; (3) For the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest recognition rate was 95.29%, and the average recognition rate was 89.77%.

81 citations
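The ingredients named in the abstract (ReLU activations, a dropout layer before the output, SGD minimizing cross entropy) can be illustrated with a minimal PyTorch sketch; this is not the MCNN-DS quadratic structure itself, and the architecture below is an assumption for MNIST-shaped input.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN with ReLU + dropout, in the spirit of (not identical to) MCNN-DS."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                 # dropout before the output layer
            nn.Linear(32 * 7 * 7, n_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.05)   # lr in the paper's [0.05, 0.07]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 28, 28)               # dummy MNIST-shaped batch
y = torch.randint(0, 10, (8,))
opt.zero_grad()
loss = loss_fn(model(x), y)                 # SGD minimizing cross entropy
loss.backward()
opt.step()
```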


Journal ArticleDOI
TL;DR: A multiple attribute decision-making (MADM) method based on the NCDWAA and NCDWGA operators is proposed, and two illustrative examples of MADM are provided to demonstrate the application and effectiveness of the established method.
Abstract: The neutrosophic cubic set can describe complex decision-making problems with its single-valued neutrosophic numbers and interval neutrosophic numbers simultaneously. The Dombi operations have the advantage of good flexibility with the operational parameter. In order to solve decision-making problems with flexible operational parameter under neutrosophic cubic environments, the paper extends the Dombi operations to neutrosophic cubic sets and proposes a neutrosophic cubic Dombi weighted arithmetic average (NCDWAA) operator and a neutrosophic cubic Dombi weighted geometric average (NCDWGA) operator. Then, we propose a multiple attribute decision-making (MADM) method based on the NCDWAA and NCDWGA operators. Finally, we provide two illustrative examples of MADM to demonstrate the application and effectiveness of the established method.

77 citations
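For readers unfamiliar with Dombi aggregation: one common scalar form of the Dombi weighted arithmetic average over membership degrees in (0, 1) is sketched below. The NCDWAA operator applies aggregation of this kind component-wise to the interval and single-valued parts of neutrosophic cubic numbers; treat the formula here as a hedged illustration rather than the paper's exact operator.

```python
def dombi_waa(values, weights, rho=1.0):
    """Dombi weighted arithmetic aggregation of membership degrees in (0, 1).

    One standard scalar form of the Dombi-based weighted average; rho > 0 is
    the flexible operational parameter the abstract refers to. Not the exact
    NCDWAA operator, which acts on full neutrosophic cubic numbers.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    s = sum(w * (a / (1 - a)) ** rho for w, a in zip(weights, values))
    return 1 - 1 / (1 + s ** (1 / rho))

# Varying rho changes how strongly large memberships dominate the average.
print(dombi_waa([0.6, 0.8, 0.5], [0.3, 0.4, 0.3], rho=1.0))
print(dombi_waa([0.6, 0.8, 0.5], [0.3, 0.4, 0.3], rho=3.0))
```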


Journal ArticleDOI
TL;DR: The significant impact of pre- and post-processing choices is shown, and it is stressed how important it is to combine data from both hemoglobin species in order to make accurate inferences about the activation site.
Abstract: With the rapid increase in new fNIRS users employing commercial software, there is a concern that many studies are biased by suboptimal processing methods. The purpose of this study is to provide a visual reference showing the effects of different processing methods, to help inform researchers in setting up and evaluating a processing pipeline. We show the significant impact of pre- and post-processing choices and stress again how important it is to combine data from both hemoglobin species in order to make accurate inferences about the activation site.

67 citations


Journal ArticleDOI
TL;DR: The empirical findings suggest that geopolitical events in emerging countries are of little importance to the global economy, since their effect on the assets examined is mainly transitory and only of regional importance, while gold prices seem to be affected by fluctuations in geopolitical risk.
Abstract: An important ingredient in economic policy planning both in the public or the private sector is risk management. In economics and finance, risk manifests through many forms and it is subject to the sector that it entails (financial, fiscal, international, etc.). An under-investigated form is the risk stemming from geopolitical events, such as wars, political tensions, and conflicts. In contrast, the effects of terrorist acts have been thoroughly examined in the relevant literature. In this paper, we examine the potential ability of geopolitical risk of 14 emerging countries to forecast several assets: oil prices, exchange rates, national stock indices, and the price of gold. In doing so, we build forecasting models that are based on machine learning techniques and evaluate the associated out-of-sample forecasting error in various horizons from one to twenty-four months ahead. Our empirical findings suggest that geopolitical events in emerging countries are of little importance to the global economy, since their effect on the assets examined is mainly transitory and only of regional importance. In contrast, gold prices seem to be affected by fluctuations in geopolitical risk. This finding may be justified by the nature of investments in gold, in that they are typically used by economic agents to hedge risk.

57 citations
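The paper's evaluation design, out-of-sample forecasting error at horizons of 1 to 24 months, can be sketched as an expanding-window loop. The model choice (a random forest) and variable names below are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rolling_forecast_errors(X, y, horizon=1, min_train=60):
    """Out-of-sample forecast evaluation with an expanding window.

    X : (n, k) predictor matrix (e.g., lagged geopolitical risk indices)
    y : (n,) target series (e.g., gold price returns)
    horizon : steps ahead; the paper evaluates 1 to 24 months.
    """
    errors = []
    for t in range(min_train, len(y) - horizon):
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[:t], y[horizon:t + horizon])   # align target h steps ahead
        pred = model.predict(X[t:t + 1])[0]
        errors.append(y[t + horizon] - pred)       # out-of-sample error
    return np.asarray(errors)
```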


Journal ArticleDOI
TL;DR: Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers, and it is statistically demonstrated that better errors are obtained when fuzzy systems are implemented to enhance each proposed algorithm.
Abstract: This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of the main parameters is presented. The dynamic adjustment of the main parameters for each algorithm with the implementation of fuzzy systems aims at enhancing the performance of the corresponding algorithms. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially in fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. It is statistically demonstrated that better errors are obtained when the fuzzy systems are implemented to enhance each proposed algorithm.

50 citations


Journal ArticleDOI
TL;DR: The generalized kinetic Monte Carlo framework for the simulation of organic semiconductors and electronic devices such as solar cells and light-emitting diodes is presented and triplet exciton dynamics are included, which allows an enhanced investigation of OSCs and OLEDs.
Abstract: In this paper, we present our generalized kinetic Monte Carlo (kMC) framework for the simulation of organic semiconductors and electronic devices such as organic solar cells (OSCs) and organic light-emitting diodes (OLEDs). Our model generalizes the geometrical representation of the multifaceted properties of the organic material by the use of a non-cubic, generalized Voronoi tessellation and a model that connects sites to polymer chains. Herewith, we obtain a realistic model for both amorphous and crystalline domains of small molecules and polymers. Furthermore, we generalize the excitonic processes and include triplet exciton dynamics, which allows an enhanced investigation of OSCs and OLEDs. We outline the developed methods of our generalized kMC framework and give two exemplary studies of electrical and optical properties inside an organic semiconductor.

44 citations
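At the core of any kMC framework of this kind is the selection of the next event and its waiting time from the current rate list. A minimal Gillespie-type step in Python, independent of the paper's specific rate models:

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates):
    """One kinetic Monte Carlo (Gillespie-type) event selection.

    rates : array of rates of all enabled events (charge hops, exciton
    transfers, recombinations, ...). Returns (event index, waiting time).
    A framework like the paper's builds such event lists on a generalized
    Voronoi site graph; that part is not reproduced here.
    """
    total = float(np.sum(rates))
    u = rng.random()
    event = int(np.searchsorted(np.cumsum(rates), u * total))
    dt = -np.log(rng.random()) / total   # exponentially distributed waiting time
    return event, dt
```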


Journal ArticleDOI
TL;DR: The improved A* algorithm can greatly improve the safety and smoothness of the planned path, and the movement time of the robot in complex terrain is greatly reduced.
Abstract: The A* algorithm has been widely investigated and applied in path planning problems, but it does not fully consider the safety and smoothness of the path. Therefore, an improved A* algorithm is presented in this paper. Firstly, a new environment modeling method is proposed in which the evaluation function of the A* algorithm is improved by taking the safety cost into account. This results in a safer path which can stay farther away from obstacles. Then a new path smoothing method is proposed, which introduces a path evaluation mechanism into the smoothing process. This method is then applied to smooth the path without reducing its safety. Secondly, with respect to path planning problems in complex terrains, a complex terrain environment model is established in which the distance and safety cost of the evaluation function of the A* algorithm are converted into time cost. This results in a unification of units as well as clarity in their physical meanings. The simulation results show that the improved A* algorithm can greatly improve the safety and smoothness of the planned path, and that the movement time of the robot in complex terrain is greatly reduced.

43 citations
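A minimal sketch of the first idea, folding a precomputed safety cost into the A* evaluation function f = g + h, is given below in Python. The grid encoding and weighting are illustrative assumptions, not the paper's exact model:

```python
import heapq
import itertools

def improved_a_star(grid, start, goal, safety_weight=1.0):
    """A* whose step cost adds a safety term (illustrative sketch).

    grid[r][c] is a precomputed safety cost: higher near obstacles,
    float('inf') on obstacles. Step cost = distance (1) + weighted safety,
    so cheaper paths keep farther away from obstacles.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                  # tie-breaker for the heap
    open_heap = [(h(start), 0.0, next(tie), start, None)]
    parents, closed = {}, set()
    while open_heap:
        _, g, _, node, parent = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        parents[node] = parent
        if node == goal:                     # rebuild path from parent links
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in closed:
                step = 1.0 + safety_weight * grid[nxt[0]][nxt[1]]
                heapq.heappush(open_heap,
                               (g + step + h(nxt), g + step, next(tie), nxt, node))
    return None
```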


Journal ArticleDOI
TL;DR: Results show that the proposed method can achieve proportional–integral–derivative automatic tuning and effectively overcome the effects of inertia mutation and torque disturbance.
Abstract: We developed a novel control strategy of speed servo systems based on deep reinforcement learning. The control parameters of speed servo systems are difficult to regulate for practical applications, and problems of moment disturbance and inertia mutation occur during the operation process. A class of reinforcement learning agents for speed servo systems is designed based on the deep deterministic policy gradient algorithm. The agents are trained on a large amount of system data. After learning is complete, they can automatically adjust the control parameters of servo systems and perform current compensation online. Consequently, a servo system can always maintain good control performance. Numerous experiments are conducted to verify the proposed control strategy. Results show that the proposed method can achieve proportional–integral–derivative automatic tuning and effectively overcome the effects of inertia mutation and torque disturbance.

Journal ArticleDOI
TL;DR: The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1,1] or G(0,1) are more likely to avoid falling into local optima and quickly obtain the global optima.
Abstract: The particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting the inertia weight or combining with other evolution algorithms. However, in most modified PSO algorithms, the random values are always generated by uniform distribution in the range of [0, 1]. In this study, the random values, which are generated by uniform distribution in the ranges of [0, 1] and [−1, 1], and by Gauss distribution with mean 0 and variance 1 (U[0,1], U[−1,1], and G(0,1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set as 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1,1] or G(0,1) are more likely to avoid falling into local optima and quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocity, making the particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[−1,1] or G(0,1) are beneficial to improving the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite particles.
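The experimental manipulation is easy to reproduce: only the source of the random factors r1 and r2 in the velocity update changes. A sketch with a selectable random source (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

RANDOM_SOURCES = {
    "U01": lambda size: rng.uniform(0.0, 1.0, size),   # classical choice
    "U11": lambda size: rng.uniform(-1.0, 1.0, size),  # wider-range variant
    "G01": lambda size: rng.normal(0.0, 1.0, size),    # Gauss(0, 1) variant
}

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, source="G01"):
    """One velocity/position update of (LDIW-)PSO with a chosen random source.

    Replacing U[0,1] with U[-1,1] or G(0,1) widens the velocity range, which
    is the mechanism the paper credits for escaping local optima.
    """
    rand = RANDOM_SOURCES[source]
    r1, r2 = rand(x.shape), rand(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```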

Journal ArticleDOI
TL;DR: It is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions.
Abstract: In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with the Bird Swarm Algorithm obtaining the best results.

Journal ArticleDOI
TL;DR: This study aims to assist marine container terminal operators with improving the seaside operations and primarily focuses on the berth scheduling problem, which is formulated as a mixed integer linear programming model and solved by a self-adaptive Evolutionary Algorithm in which the crossover and mutation probabilities are encoded in the chromosomes.
Abstract: Since ancient times, maritime transportation has played a very important role in the global trade and economy of many countries. The volumes of all major types of cargo transported by vessels have substantially increased in recent years. Considering the rapid growth of waterborne trade, marine container terminal operators should focus on upgrading the existing terminal infrastructure and improving operations planning. This study aims to assist marine container terminal operators with improving the seaside operations and primarily focuses on the berth scheduling problem. The problem is formulated as a mixed integer linear programming model, minimizing the total weighted vessel turnaround time and the total weighted vessel late departures. A self-adaptive Evolutionary Algorithm is proposed to solve the problem, where the crossover and mutation probabilities are encoded in the chromosomes. Numerical experiments are conducted to evaluate the performance of the developed solution algorithm against alternative Evolutionary Algorithms, which rely on deterministic parameter control, adaptive parameter control, and parameter tuning strategies, respectively. Results indicate that all the considered solution algorithms demonstrate a relatively low variability in terms of the objective function values at termination from one replication to another and can maintain adequate population diversity. However, application of the self-adaptive parameter control strategy substantially improves the objective function values at termination without a significant impact on the computational time.
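Self-adaptive parameter control means each chromosome carries its own crossover and mutation probabilities, which evolve along with the solution. A hedged sketch of such an encoding; the decoding to berth assignments and the MILP objective are omitted, and all names and ranges are assumptions:

```python
import random

def make_chromosome(n_vessels):
    """A berth-scheduling chromosome that also encodes its own EA parameters.

    Illustrative encoding: a service order of vessels plus crossover and
    mutation probabilities, so the parameters evolve with the solutions
    (self-adaptive parameter control).
    """
    return {
        "order": random.sample(range(n_vessels), n_vessels),
        "p_cross": random.uniform(0.6, 0.9),
        "p_mut": random.uniform(0.01, 0.2),
    }

def mutate(ch):
    """Mutate the solution with the chromosome's own rate, then perturb the rates."""
    ch = {**ch, "order": ch["order"][:]}
    if random.random() < ch["p_mut"]:
        i, j = random.sample(range(len(ch["order"])), 2)
        ch["order"][i], ch["order"][j] = ch["order"][j], ch["order"][i]
    ch["p_mut"] = min(0.5, max(0.005, ch["p_mut"] * random.uniform(0.9, 1.1)))
    ch["p_cross"] = min(0.95, max(0.5, ch["p_cross"] * random.uniform(0.9, 1.1)))
    return ch
```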

Journal ArticleDOI
TL;DR: Conditional inapproximability results with essentially optimal ratios are proved for the following graph problems based on the Small Set Expansion Hypothesis: Maximum Edge Biclique, Maximum Balanced Biclique, Minimum k-Cut, and Densest At-Least-k-Subgraph.
Abstract: The Small Set Expansion Hypothesis is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small subset of vertices whose (edge) expansion is almost zero and one in which all small subsets of vertices have expansion almost one. In this work, we prove conditional inapproximability results with essentially optimal ratios for the following graph problems based on this hypothesis: Maximum Edge Biclique, Maximum Balanced Biclique, Minimum k-Cut and Densest At-Least-k-Subgraph. Our hardness results for the two biclique problems are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani to avoid locality of gadget reductions with a generalization of Bansal and Khot’s long code test whereas our results for Minimum k-Cut and Densest At-Least-k-Subgraph are shown via elementary reductions.

Journal ArticleDOI
TL;DR: An optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed and results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.
Abstract: In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, the GA is applied to optimize the weight coefficients of the SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and simulations are performed and analyzed with the proposed control strategy to verify its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.

Journal ArticleDOI
TL;DR: A modified cuckoo search algorithm with variational parameters and logistic map (VLCS) is proposed to ameliorate the defects of CS, and simulations demonstrate that the VLCS algorithm can overcome the disadvantages of the CS algorithm.
Abstract: Cuckoo Search (CS) is a meta-heuristic method with several advantages, such as ease of application and few tuning parameters. However, it has proven to fall very easily into local optimal solutions and to have a slow rate of convergence. Therefore, we propose a modified cuckoo search algorithm with variational parameters and logistic map (VLCS) to ameliorate these defects. To balance the exploitation and exploration of the VLCS algorithm, we not only use a coefficient function to change the step size α and the probability of detection p_a at the next generation, but also use a logistic map in each dimension to initialize host nest locations and to update the locations of host nests that go beyond the boundary. With fifteen benchmark functions, the simulations demonstrate that the VLCS algorithm can overcome the disadvantages of the CS algorithm. In addition, the VLCS algorithm is good at dealing with both high-dimensional and low-dimensional problems.
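The logistic-map initialization can be sketched compactly; the per-dimension seeding below is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def logistic_map_init(n_nests, dim, lower, upper, mu=4.0):
    """Initialize host nest locations with a logistic map, one chaotic
    stream per dimension, instead of plain uniform sampling.

    Iterates x_{k+1} = mu * x_k * (1 - x_k); for mu = 4 the sequence is
    chaotic on (0, 1) provided x_0 avoids the map's fixed points.
    """
    nests = np.empty((n_nests, dim))
    for d in range(dim):
        x = (0.7 + 0.0137 * d) % 1.0
        if x in (0.0, 0.25, 0.5, 0.75):   # avoid fixed/periodic points
            x = 0.1234
        for i in range(n_nests):
            x = mu * x * (1.0 - x)
            nests[i, d] = lower + (upper - lower) * x
    return nests
```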

Journal ArticleDOI
TL;DR: A metaheuristic based on Ant Colony Optimization is modified and adapted for M-MDVRP, and an additional deterministic optimization process that further enhances the original ACO algorithm is proposed.
Abstract: This article deals with the modified Multi-Depot Vehicle Routing Problem (MDVRP). The modification consists of altering the optimization criterion. The optimization criterion of the standard MDVRP is to minimize the total sum of routes of all vehicles, whereas the criterion of modified MDVRP (M-MDVRP) is to minimize the longest route of all vehicles, i.e., the time to conduct the routing operation is as short as possible. For this problem, a metaheuristic algorithm—based on the Ant Colony Optimization (ACO) theory and developed by the author for solving the classic MDVRP instances—has been modified and adapted for M-MDVRP. In this article, an additional deterministic optimization process which further enhances the original ACO algorithm has been proposed. For evaluation of results, Cordeau’s benchmark instances are used.

Journal ArticleDOI
TL;DR: Investigation of a complex infrastructure composed of data centers located in different geographical areas, in which renewable energy generators are installed co-located with the data centers to reduce the amount of energy that must be purchased from the power grid, shows that renewable energy can be effectively exploited in geographical data centers when a smart load allocation strategy is implemented.
Abstract: The success of cloud computing services has led to big computing infrastructures that are complex to manage and very costly to operate. In particular, power supply dominates the operational costs of big infrastructures, and several solutions have to be put in place to alleviate these operational costs and make the whole infrastructure more sustainable. In this paper, we investigate the case of a complex infrastructure composed of data centers (DCs) located in different geographical areas in which renewable energy generators are installed, co-located with the data centers, to reduce the amount of energy that must be purchased from the power grid. Since renewable energy generators are intermittent, the load management strategies of the infrastructure have to be adapted to the intermittent nature of the sources. In particular, we consider EcoMultiCloud, a multi-objective load management strategy already proposed in the literature, and we adapt it to the presence of renewable energy sources. Hence, cost reduction is achieved in the load allocation process, when virtual machines (VMs) are assigned to a data center of the considered infrastructure, by considering both energy cost variations and the presence of renewable energy production. Performance is analyzed for a specific infrastructure composed of four data centers. Results show that, despite being intermittent and highly variable, renewable energy can be effectively exploited in geographical data centers when a smart load allocation strategy is implemented. In addition, the results confirm that EcoMultiCloud is very flexible and is suited to the considered scenario.

Journal ArticleDOI
TL;DR: Compared with the traditional PID controller, the new controller designed in this paper has better control precision and robustness, which provides the basis for practical application.
Abstract: In order to improve the control precision and robustness of the existing proportion integration differentiation (PID) controller of a 3-Revolute–Revolute–Revolute (3-RRR) parallel robot, a variable-parameter PID controller optimized by a genetic algorithm is proposed in this paper. Firstly, the inverse kinematics model of the 3-RRR parallel robot was established according to the vector method, and the motor conversion matrix was deduced. Then, the integral of the squared error was chosen as the fitness function, and the genetic algorithm controller was designed. Finally, the control precision of the new controller was verified through the simulation model of the 3-RRR planar parallel robot—built in SimMechanics—and the robustness of the new controller was verified by adding interference. The results show that, compared with the traditional PID controller, the new controller designed in this paper has better control precision and robustness, which provides a basis for practical application.

Journal ArticleDOI
TL;DR: The main elements of ship collision are examined, a mathematical model for the risk assessment is developed, and a collision assessment based on AIS information is simulated, thereby providing meaningful recommendations for crew training and a warning system, in conjunction with the AIS on board.
Abstract: The identification of risks associated with collision for vessels is an important element in maritime safety and management. A vessel collision avoidance system is a topic that has been deeply studied, and it is a specialization in navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for the risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and a warning system, in conjunction with the AIS on board.
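Collision-risk models built on AIS data typically start from the standard closest-point-of-approach quantities, DCPA and TCPA, computed from relative position and velocity. A sketch, assuming a local metric coordinate frame (this simplification is the editor's, not the article's):

```python
import numpy as np

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach between own ship and a target from AIS data.

    Positions in a local metric frame (e.g., meters), velocities in m/s.
    Returns (DCPA, TCPA): the standard distance/time at closest approach,
    common inputs to collision-risk models like the one in the article.
    """
    r = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)  # relative position
    v = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)  # relative velocity
    vv = float(v @ v)
    if vv == 0.0:                       # same velocity: range never changes
        return float(np.linalg.norm(r)), 0.0
    tcpa = -float(r @ v) / vv
    dcpa = float(np.linalg.norm(r + v * max(tcpa, 0.0)))
    return dcpa, tcpa
```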

Journal ArticleDOI
TL;DR: A novel technique for the fast tuning of the parameters of the proportional–integral–derivative (PID) controller of a second-order heat, ventilation, and air conditioning (HVAC) system using a fast convergence evolution algorithm, called Big Bang–Big Crunch (BB–BC).
Abstract: This article presents a novel technique for the fast tuning of the parameters of the proportional–integral–derivative (PID) controller of a second-order heat, ventilation, and air conditioning (HVAC) system. HVAC systems vary greatly in size, control functions, and the amount of consumed energy. The optimal design and power efficiency of an HVAC system depend on how fast the integrated controller, e.g., the PID controller, adapts to changes in the environmental conditions. In this paper, to achieve high tuning speed, we rely on a fast-convergence evolutionary algorithm called Big Bang–Big Crunch (BB–BC). The BB–BC algorithm is implemented, along with the PID controller, in an FPGA device, in order to further accelerate the optimization process. The FPGA-in-the-loop (FIL) technique is used to connect the FPGA board (i.e., the PID and BB–BC subsystems) with the plant (i.e., MATLAB/Simulink models of the HVAC) in order to emulate and evaluate the entire system. The experimental results demonstrate the efficiency of the proposed technique, in terms of optimization accuracy and convergence speed, compared with other optimization approaches for the tuning of the PID parameters: a software implementation of the BB–BC, a genetic algorithm (GA), and particle swarm optimization (PSO).
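For orientation, the BB–BC loop alternates a "Big Crunch" (contracting the population to a fitness-weighted center of mass) and a "Big Bang" (re-scattering with a shrinking radius). The Python sketch below follows the textbook form of the algorithm, not the paper's FPGA implementation; the cost function and bounds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def bb_bc_minimize(f, dim, bounds, pop=40, iters=100):
    """Big Bang-Big Crunch search, e.g., for PID-gain tuning (illustrative).

    Big Crunch: contract the population to a fitness-weighted center of mass
    (weights 1/f). Big Bang: re-scatter candidates around that center with a
    radius shrinking as 1/iteration, which yields the fast convergence the
    article exploits for PID tuning.
    """
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))
    best, best_f = None, np.inf
    for k in range(1, iters + 1):
        fit = np.array([f(x) for x in X])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best, best_f = X[i].copy(), float(fit[i])
        w = 1.0 / (fit + 1e-12)                 # Big Crunch: center of mass
        center = (w[:, None] * X).sum(0) / w.sum()
        radius = (hi - lo) / k                  # shrinking Big Bang radius
        X = center + rng.standard_normal((pop, dim)) * radius
        X = np.clip(X, lo, hi)
    return best, best_f

# Example: tune (Kp, Ki, Kd) against a user-supplied closed-loop cost.
# gains, cost = bb_bc_minimize(my_hvac_cost, dim=3, bounds=(0.0, 10.0))
```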

Journal ArticleDOI
TL;DR: This paper proposes a method that takes into account context-sensitivity and gradient problems, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network, which not only takes advantage of the grid architecture, but it also captures information around the current moment.
Abstract: The Recurrent Neural Network (RNN) utilizes dynamically changing time information through time cycles, so it is very suitable for tasks with time sequence characteristics. However, as the number of layers increases, the vanishing gradient problem occurs in the RNN. The Grid Long Short-Term Memory (GridLSTM) recurrent neural network can alleviate this problem by computing along two dimensions, time and depth. In addition, time sequence tasks depend on information both before and after the current moment. In this paper, we propose a method that takes both context sensitivity and the gradient problem into account, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network. This model not only takes advantage of the grid architecture, but also captures information around the current moment. A large number of experiments on the LibriSpeech dataset show that BiGridLSTM is superior to other deep LSTM models and unidirectional LSTM models and, when compared with GridLSTM, achieves about a 26 percent improvement.
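The grid (time-depth) cells are the novel part of BiGridLSTM and are not reproduced here; as a baseline orientation, the bidirectional half of the idea looks as follows with a standard PyTorch nn.LSTM (feature and class sizes are assumptions):

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM baseline: captures context before and after each
    frame, the property BiGridLSTM adds on top of GridLSTM's grid cells."""
    def __init__(self, n_feats=40, hidden=128, n_classes=30):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)  # forward + backward states
    def forward(self, x):            # x: (batch, time, n_feats)
        h, _ = self.lstm(x)
        return self.out(h)           # per-frame class scores

model = BiLSTMClassifier()
scores = model(torch.randn(4, 100, 40))   # dummy 100-frame feature batch
```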

Journal ArticleDOI
TL;DR: Experimental results show that in the proposed model, the classification accuracy increases from 0.916 to 0.944, compared to a traditional convolutional neural network model; furthermore, the number of training runs is reduced, and the number of labelled samples can be reduced by more than half, all while ensuring a classification accuracy of no less than 0.9.
Abstract: Variation in the format and classification requirements for remote sensing data makes establishing a standard remote sensing sample dataset difficult. As a result, few remote sensing deep neural network models have been widely accepted. We propose a hybrid deep neural network model based on a convolutional auto-encoder and a complementary convolutional neural network to solve this problem. The convolutional auto-encoder supports feature extraction and data dimension reduction of remote sensing data. The extracted features are input into the convolutional neural network and subsequently classified. Experimental results show that in the proposed model, the classification accuracy increases from 0.916 to 0.944, compared to a traditional convolutional neural network model; furthermore, the number of training runs is reduced from 40,000 to 22,000, and the number of labelled samples can be reduced by more than half, all while ensuring a classification accuracy of no less than 0.9, which suggests the effectiveness and feasibility of the proposed model.

Journal ArticleDOI
TL;DR: A tuning method based on the desired dynamics equation (DDE) and the generalized frequency method (GFM) is proposed for a two-degree-of-freedom proportional-integral-derivative (PID) controller, which guarantees not only the desired dynamics but also the stability margin.
Abstract: In this paper, a new tuning method is proposed, based on the desired dynamics equation (DDE) and the generalized frequency method (GFM), for a two-degree-of-freedom proportional-integral-derivative (PID) controller. The DDE method builds a quantitative relationship between the performance and the two-degree-of-freedom PID controller parameters and guarantees the desired dynamics, but it cannot guarantee the stability margin. So, we have developed the proposed tuning method, which guarantees not only the desired dynamics but also the stability margin. Based on the DDE and the GFM, several simple formulas are deduced to directly calculate the controller parameters. In addition, the method yields a setpoint response with almost no overshoot. Compared with Panagopoulos’ method, the proposed methodology is proven to be effective.

Journal ArticleDOI
TL;DR: A novel two-level multi-objective genetic algorithm (GA) is developed to optimize time series forecasting data for fans used in road tunnels by the Swedish Transport Administration, and the drawbacks of forecasting using a multi-objective GA based on the dynamic regression model are shown.
Abstract: The aim of this study is to develop a novel two-level multi-objective genetic algorithm (GA) to optimize time series forecasting data for fans used in road tunnels by the Swedish Transport Administration ...

Journal ArticleDOI
TL;DR: Three methods that use local convolutional feature aggregation to implement document classification are proposed; the third is inspired by the recurrent attention model (RAM), in which a reinforcement learning module is introduced to act as a controller for selecting the next block position based on the recurrent state.
Abstract: The exponential increase in online reviews and recommendations makes document classification and sentiment analysis a hot topic in academic and industrial research. Traditional deep-learning-based document classification methods require the use of the full textual information to extract features. In this paper, in order to tackle long documents, we propose three methods that use local convolutional feature aggregation to implement document classification. The first proposed method randomly draws blocks of continuous words from the full document. Each block is then fed into a convolutional neural network to extract features, which are concatenated together to output the classification probability through a classifier. The second model improves on the first by capturing the contextual order information of the sampled blocks with a recurrent neural network. The third model is inspired by the recurrent attention model (RAM), in which a reinforcement learning module is introduced to act as a controller for selecting the next block position based on the recurrent state. Experiments on our collected four-class arXiv paper dataset show that the three proposed models all perform well, and the RAM model achieves the best test accuracy with the least information.
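The first model's sampling step is simple to sketch; the CNN encoder and classifier are omitted, and the block length and count below are illustrative assumptions:

```python
import random

def sample_blocks(tokens, block_len=100, n_blocks=8):
    """Randomly draw blocks of continuous words from a long document (model 1).

    Each block would be fed to a shared CNN encoder; the per-block features
    are then concatenated (model 1), fed to an RNN in order (model 2), or
    chosen sequentially by a learned controller (the RAM-style model 3).
    """
    if len(tokens) <= block_len:
        return [tokens]
    starts = [random.randrange(0, len(tokens) - block_len) for _ in range(n_blocks)]
    return [tokens[s:s + block_len] for s in sorted(starts)]
```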

Journal ArticleDOI
TL;DR: The experimental results show the feasibility of deep belief networks in the field of weather forecasting, and a new method based on deep learning is proposed for precipitation prediction in the era of climate big data.
Abstract: Due to the impact of weather forecasting on global human life, and to better reflect the current trend of weather changes, it is necessary to conduct research on the prediction of precipitation and to provide timely and complete precipitation information for climate prediction and early-warning decisions, so as to avoid serious meteorological disasters. For the precipitation prediction problem in the era of climate big data, we propose a new method based on deep learning. In this paper, we apply deep belief networks to weather precipitation forecasting. Deep belief networks transform the feature representation of the data in the original space into a new feature space, with semantic features that improve the predictive performance. The experimental results show the feasibility of deep belief networks in the field of weather forecasting compared with other forecasting methods.
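A DBN is classically pretrained as a stack of restricted Boltzmann machines. The sklearn sketch below approximates that layer-wise scheme with BernoulliRBM transforms plus a logistic readout; it is a stand-in for illustration, since the exact architecture is not given here (inputs are assumed scaled to [0, 1]):

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Unsupervised layer-wise feature learning (stacked RBMs approximate a DBN's
# pretraining), followed by a supervised readout.
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# dbn_like.fit(X_train, y_train)   # X: meteorological predictors, y: rain/no-rain
```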

Journal ArticleDOI
TL;DR: This work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan, which means these traveling times need to be dynamically re-evaluated as the solution is being constructed.
Abstract: Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
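The defining trait of the learnheuristic is that travel times are re-evaluated against the partial solution while it is being built. A hedged constructive sketch, where correction stands for the statistical-learning model fitted during the search (its interface is an assumption):

```python
def dynamic_travel_time(i, j, base_time, route_so_far, correction):
    """Travel time between nodes i and j, corrected as a function of the
    emerging routing plan (the learnheuristic ingredient)."""
    return base_time[i][j] * correction(i, j, route_so_far)

def build_route(depot, customers, base_time, correction):
    """Greedy constructive sketch: each travel time is re-evaluated against
    the partial route at the moment the next customer is chosen."""
    route, current, remaining = [depot], depot, set(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: dynamic_travel_time(
            current, c, base_time, route, correction))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route + [depot]

# Example correction: congestion grows mildly with the route length so far.
# build_route(0, [1, 2, 3], base_time, lambda i, j, r: 1.0 + 0.05 * len(r))
```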

Journal ArticleDOI
TL;DR: A new MAB model, named measure-use-MAB (muMAB), aiming at providing a higher flexibility, and thus a better accuracy in describing the network selection problem, is proposed, with muUCB1 emerging as the best candidate when the arms are characterized by similar mean rewards, and MLI prevailing when an arm is significantly more rewarding than others.
Abstract: Multi-armed bandit (MAB) models are a viable approach to describe the problem of best wireless network selection by a multi-Radio Access Technology (multi-RAT) device, with the goal of maximizing the quality perceived by the final user. The classical MAB model does not allow, however, to properly describe the problem of wireless network selection by a multi-RAT device, in which a device typically performs a set of measurements in order to collect information on available networks before a selection takes place. The MAB model foresees in fact only one possible action for the player, which is the selection of one among different arms at each time step; existing arm selection algorithms thus mainly differ in the rule according to which a specific arm is selected. This work proposes a new MAB model, named measure-use-MAB (muMAB), aiming at providing a higher flexibility, and thus a better accuracy, in describing the network selection problem. The muMAB model extends the classical MAB model in a twofold manner: first, it foresees two different actions, to measure and to use; second, it allows actions to span over multiple time steps. Two new algorithms designed to take advantage of the higher flexibility provided by the muMAB model are also introduced. The first one, referred to as measure-use-UCB1 (muUCB1), is derived from the well-known UCB1 algorithm, while the second one, referred to as Measure with Logarithmic Interval (MLI), is specifically designed for the new model so as to take advantage of the new measure action while aggressively using the best arm. The new algorithms are compared against existing ones from the literature in the context of the muMAB model, by means of computer simulations using both synthetic and captured data. Results show that the performance of the algorithms heavily depends on the Probability Density Function (PDF) of the reward received on each arm, with different algorithms leading to the best performance depending on the PDF. Results highlight, however, that as the ratio between the time required for using an arm and the time required to measure increases, the proposed algorithms guarantee the best performance, with muUCB1 emerging as the best candidate when the arms are characterized by similar mean rewards, and MLI prevailing when an arm is significantly more rewarding than others. This thus calls for the introduction of an adaptive approach capable of adjusting the behavior of the algorithm, or of switching algorithms altogether, depending on the acquired knowledge of the PDF of the reward on each arm.
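To illustrate the measure/use distinction, here is a hedged bandit sketch in which the two actions span different numbers of time steps and only "use" accrues reward. The details (interval lengths, confidence term) are assumptions and differ from the paper's muUCB1 and MLI:

```python
import math
import random

def mu_ucb1(arms, horizon, t_measure=1, t_use=10):
    """Sketch of a measure-use bandit in the spirit of muUCB1.

    arms : list of no-arg callables returning a reward sample.
    A 'measure' costs t_measure steps and only gathers information; a 'use'
    exploits the UCB-best arm for t_use steps and accrues reward.
    """
    n = [0] * len(arms)
    mean = [0.0] * len(arms)
    t, total = 0, 0.0

    def update(i, r):
        n[i] += 1
        mean[i] += (r - mean[i]) / n[i]

    for i in range(len(arms)):            # measure every arm once
        update(i, arms[i]())
        t += t_measure
    while t < horizon:
        ucb = [mean[i] + math.sqrt(2 * math.log(max(t, 2)) / n[i])
               for i in range(len(arms))]
        best = ucb.index(max(ucb))
        for _ in range(t_use):            # use the chosen arm for a whole interval
            r = arms[best]()
            update(best, r)
            total += r
        t += t_use
    return total

# total = mu_ucb1([lambda: random.gauss(0.5, 0.1),
#                  lambda: random.gauss(0.7, 0.1)], horizon=500)
```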