
Showing papers in "International Journal of Intelligent Computing and Cybernetics in 2017"


Journal ArticleDOI
TL;DR: The results of the UAV system tests prove that segregating the autonomy of a system into multi-dimensional and adjustable layers enables humans and/or agents to perform actions at convenient autonomy levels.
Abstract: Purpose The purpose of this paper is to propose a layered adjustable autonomy (LAA) as a dynamically adjustable autonomy model for a multi-agent system. It is mainly used to efficiently manage humans’ and agents’ shared control of autonomous systems and maintain humans’ global control over the agents. Design/methodology/approach The authors apply the LAA model in an agent-based autonomous unmanned aerial vehicle (UAV) system. The UAV system implementation consists of two parts: software and hardware. The software part comprises the controller and the cognitive components, and the hardware comprises the computing machinery and the actuator of the UAV system. The UAV system performs three experimental scenarios of dance, surveillance and search missions. The selected scenarios demonstrate different behaviors in order to create a suitable test plan and ensure significant results. Findings The results of the UAV system tests prove that segregating the autonomy of a system into multi-dimensional and adjustable layers enables humans and/or agents to perform actions at convenient autonomy levels. This reduces the adjustable-autonomy drawbacks of constraining the agents’ autonomy, increasing humans’ workload and exposing the system to disturbances. Originality/value The application of the LAA model in a UAV manifests the significance of implementing dynamic adjustable autonomy. Assessing the autonomy within three phases of the agents’ run cycle (task selection, action selection and action execution) is an original idea that aims to direct agents’ autonomy toward performance competency. The agents’ abilities are well exploited when an incompetent agent is switched with a more competent one.
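As an illustrative sketch of the idea (not the authors' implementation), autonomy can be assessed per run-cycle phase and control handed to the most competent actor; all names, scores and the threshold below are hypothetical:

```python
# Sketch: assess autonomy at the three run-cycle phases named in the
# abstract (task selection, action selection, action execution) and
# switch an incompetent agent for a more competent one, with the human
# as fallback. Names, scores and the threshold are hypothetical.

PHASES = ("task_selection", "action_selection", "action_execution")

def assign_controllers(agents, threshold=0.6):
    """For each phase, pick the most competent agent; fall back to the
    human operator when no agent meets the competency threshold."""
    assignment = {}
    for phase in PHASES:
        best = max(agents, key=lambda a: a["competency"][phase])
        if best["competency"][phase] >= threshold:
            assignment[phase] = best["name"]
        else:
            assignment[phase] = "human"   # human keeps global control
    return assignment

agents = [
    {"name": "uav_1", "competency": {"task_selection": 0.9,
                                     "action_selection": 0.4,
                                     "action_execution": 0.8}},
    {"name": "uav_2", "competency": {"task_selection": 0.5,
                                     "action_selection": 0.3,
                                     "action_execution": 0.95}},
]
print(assign_controllers(agents))
# task selection stays with uav_1, action selection reverts to the
# human, action execution goes to uav_2
```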

28 citations


Journal ArticleDOI
TL;DR: The proposed fuzzy inference system (FIS) based on sensory information for solving the navigation challenge of UGV in cluttered and dynamic environments shows an efficient navigation strategy that overcomes the current navigation challenges in dynamic environments.
Abstract: Purpose The motion control of unmanned ground vehicles (UGV) is a challenge in the industry of automation. The purpose of this paper is to propose a fuzzy inference system (FIS) based on sensory information for solving the navigation challenge of UGV in cluttered and dynamic environments. Design/methodology/approach The representation of the dynamic environment is a key element for the operational field and for the testing of the robotic navigation system. When dynamic obstacles move randomly in the operation field, the navigation problem becomes more complicated, since accurate navigation and a collision-free path must be coordinated within the environmental representation. This paper considers the construction of the FIS, which consists of two controllers. The first controller uses three sensors based on the obstacle distances from the front, right and left. The second controller employs the angle difference between the heading of the vehicle and the targeted angle to obtain the optimal route based on the environment and reach the desired destination with minimal running power and delay. The proposed design shows an efficient navigation strategy that overcomes the current navigation challenges in dynamic environments. Findings Experimental analyses are conducted for three different scenarios to investigate the validation and effectiveness of the introduced controllers based on the FIS. The reported simulation results are obtained using the MATLAB software package. The results show that the controllers of the FIS consistently perform the manoeuvring task and manage the route plan efficiently, even in a complex environment that is populated with dynamic obstacles. The paper demonstrates that the destination was reached optimally using the shortest free route. Research limitations/implications The paper represents efforts toward building a dynamic environment filled with dynamic obstacles that move at various speeds and directions. 
The methodology of designing the FIS is accomplished to guide the UGV to the desired destination while avoiding collisions with obstacles. However, the methodology is approached using two-dimensional analyses. Hence, the paper suggests several extensions and variations to develop a three-dimensional strategy for further improvement. Originality/value This paper presents the design of a FIS and its characterizations in dynamic environments, specifically for obstacles that move at different velocities. This facilitates an improved functionality of the operation of UGV.
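A minimal sketch of the first controller's idea (front/right/left distances mapped to a steering command); the membership functions, rule base and defuzzification below are illustrative assumptions, not the paper's design:

```python
# Toy fuzzy controller: obstacle distances in, steering angle out.
# Membership functions, rules and defuzzification are invented.

def near(d, lo=0.5, hi=2.0):
    """Shoulder membership: 1 when closer than lo, 0 beyond hi."""
    if d <= lo: return 1.0
    if d >= hi: return 0.0
    return (hi - d) / (hi - lo)

def steer(front, right, left):
    """Weighted-average defuzzification of two rules:
    IF front is near THEN turn toward the clearer side;
    IF front is far  THEN go straight."""
    w_turn = near(front)
    w_straight = 1.0 - w_turn
    turn_dir = 1.0 if left > right else -1.0   # +1 = turn left
    return w_turn * turn_dir * 45.0 + w_straight * 0.0  # degrees

print(steer(front=0.4, right=1.0, left=3.0))  # obstacle ahead, left clearer -> 45.0
print(steer(front=5.0, right=1.0, left=1.0))  # path clear -> 0.0
```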

20 citations


Journal ArticleDOI
TL;DR: The design and characterization of a novel and intelligent database system to process and manage the imperfection inherent to both temporal relations and intervals is presented.
Abstract: Purpose Time modeling is a crucial feature in many application domains. However, temporal information often is not crisp, but is subjective and fuzzy. The purpose of this paper is to address the issue related to the modeling and handling of imperfection inherent to both temporal relations and intervals. Design/methodology/approach On the one hand, fuzzy extensions of Allen temporal relations are investigated and, on the other hand, extended temporal relations to define the positions of two fuzzy time intervals are introduced. Then, a database system, called Fuzzy Temporal Information Management and Exploitation (Fuzz-TIME), is developed for the purpose of processing fuzzy temporal queries. Findings To evaluate the proposal, the authors have implemented a Fuzz-TIME system and created a fuzzy historical database for querying purposes. Some demonstrative scenarios from the history domain are proposed and discussed. Research limitations/implications The authors have conducted some experiments on archaeological data to show the effectiveness of the Fuzz-TIME system. However, thorough experiments on large-scale databases are highly desirable to show the behavior of the tool with respect to performance and execution-time criteria. Practical implications The tool developed (Fuzz-TIME) can have many practical applications where time information has to be dealt with, in particular in several real-world domains like history, medicine, criminal investigation and finance, where time is often perceived or expressed in an imprecise/fuzzy manner. Social implications The social implications of this work can be expected particularly in two domains: in museums, to manage, exploit and analyze information related to archives and historical data; and in hospitals/medical organizations, to deal with time information inherent to data about patients and diseases. 
Originality/value This paper presents the design and characterization of a novel and intelligent database system to process and manage the imperfection inherent to both temporal relations and intervals.
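To make the notion concrete, here is a toy fuzzified version of Allen's "before" relation; the linear tolerance is an invented assumption, and the paper's actual extensions are richer:

```python
# Illustrative fuzzy Allen "before" relation between two time intervals.
# The linear tolerance is an assumption for demonstration only.

def fuzzy_before(a_end, b_start, tolerance=10.0):
    """Degree to which interval A ends before interval B starts:
    1.0 when the gap is >= tolerance, decreasing linearly to 0.0
    when A ends at or after B's start."""
    gap = b_start - a_end
    if gap <= 0:
        return 0.0
    return min(1.0, gap / tolerance)

# A = [100, 120], B starts at 125: a small gap gives a partial degree
print(fuzzy_before(a_end=120, b_start=125))  # 0.5
print(fuzzy_before(a_end=120, b_start=135))  # 1.0
print(fuzzy_before(a_end=120, b_start=118))  # 0.0
```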

20 citations


Journal ArticleDOI
TL;DR: Closed-loop stability is proved using the Lyapunov stability theory and sliding mode control based on a fuzzy supervisor system can sufficiently ensure perfect tracking and controlling in the presence of uncertainties.
Abstract: Purpose The purpose of this paper is to address the problem of control in a typical chaotic power system. Chaotic oscillations not only severely endanger the stability of the power system but also cannot be suppressed by traditional controllers. So, sliding mode control based on a fuzzy supervisor can sufficiently ensure perfect tracking and controlling in the presence of uncertainties. Closed-loop stability is proved using the Lyapunov stability theory. The simulation results show the effectiveness of the proposed method in damping chaotic oscillations of the power system, eliminating control signal chattering and requiring less control effort in comparison with the methods considered in the previous literature. Design/methodology/approach The sliding mode control based on a fuzzy supervisor can sufficiently ensure perfect tracking and controlling in the presence of uncertainties. Closed-loop stability is proved using the Lyapunov stability theory. Findings Closed-loop stability is proved using the Lyapunov stability theory. The simulation results show the effectiveness of the proposed method in damping chaotic oscillations of the power system, eliminating control signal chattering and requiring less control effort in comparison with the methods considered in the previous literature. 
Originality/value The main contributions of the paper are as follows: the chaotic behavior of power systems with two uncertainty parameters, the tracking reference signal for the control of the generator angle and the controller signal are discussed; a sliding mode control based on a fuzzy supervisor is designed for practical implementation for the first time; while the generator speed is constant, the proposed controller enables the power system, for the first time, to follow any desired generator-angle trajectory; stability of the closed-loop sliding mode control based on the fuzzy supervisor system is proved using the Lyapunov stability theory; simulation of the proposed controller shows that control signal chattering is low.
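A minimal sliding-mode sketch for a scalar tracking error, with a boundary-layer saturation standing in for the fuzzy supervisor's chattering suppression; the toy double-integrator plant and all gains are assumptions, not the paper's power-system model:

```python
# Toy sliding-mode control on a scalar tracking error. The saturation
# inside a boundary layer replaces sign() and suppresses chattering,
# loosely playing the role the fuzzy supervisor plays in the paper.

def sat(s, phi=0.1):
    """Saturated switching term: sign(s) outside the boundary layer
    of width phi, linear inside it (this is what limits chattering)."""
    return max(-1.0, min(1.0, s / phi))

def simulate(e0=1.0, de0=0.0, lam=2.0, k=5.0, dt=0.01, steps=1000):
    """Drive a toy double-integrator error model e'' = u onto the
    sliding surface s = e' + lam*e, then along it toward e = 0."""
    e, de = e0, de0
    for _ in range(steps):
        s = de + lam * e        # sliding surface
        u = -k * sat(s)         # control pushes the state onto s = 0
        de += u * dt            # semi-implicit Euler integration
        e += de * dt
    return abs(e)

print(simulate())  # small residual error, far below the initial 1.0
```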

17 citations


Journal ArticleDOI
TL;DR: This research provides an excellent design protocol for UAV loss detection and replacement scheme and presents the framework of the multi-agent and protocol design for monitoring the network of a group of UAVs.
Abstract: Purpose Cooperative control of a group of unmanned aerial vehicles (UAVs) is an important area of research. The purpose of this paper is to explore multi-UAV control in the framework of providing surveillance of areas of interest with automatic loss detection and replacement capabilities. Design/methodology/approach The research is based on the concept of the multi-agent system. The authors present the framework of the multi-agent and protocol design for monitoring the network of a group of UAVs. Findings If one or more UAVs conducting a high-priority surveillance task are lost, the system can self-arrange for another UAV to replace the lost UAV and continue to execute its task. This research provides an excellent design protocol for a UAV loss detection and replacement scheme. Research limitations/implications One of the major limitations of this research is that there are only two priority levels, high and low. Replacement with more than two priority levels, for example, high priority 1, high priority 2 or high priority 3, has not yet been implemented. Originality/value This paper contributes to the following two aspects of scientific knowledge. The first contribution is the design of an agent model which jointly considers system architecture, communication, control logic and target monitoring. The second contribution is the decentralized and automatic UAV loss detection and replacement algorithm.
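The loss-detection-and-replacement idea can be sketched as a heartbeat timeout plus task handover; the field names and timeout value below are illustrative assumptions:

```python
# Sketch of decentralized loss detection and replacement: a UAV whose
# heartbeat is stale is declared lost, and if it held a high-priority
# task, a live low-priority UAV takes it over. Names are invented.

TIMEOUT = 5.0  # seconds without a heartbeat before a UAV is lost

def replace_lost(uavs, now):
    for uav in uavs:
        if now - uav["last_heartbeat"] > TIMEOUT and uav["priority"] == "high":
            # find a live UAV on a low-priority task to take over
            for candidate in uavs:
                if (candidate is not uav
                        and now - candidate["last_heartbeat"] <= TIMEOUT
                        and candidate["priority"] == "low"):
                    candidate["task"] = uav["task"]
                    candidate["priority"] = "high"
                    return candidate["name"]
    return None

uavs = [
    {"name": "uav_a", "last_heartbeat": 0.0, "priority": "high", "task": "survey_zone_1"},
    {"name": "uav_b", "last_heartbeat": 9.5, "priority": "low",  "task": "patrol_zone_2"},
]
print(replace_lost(uavs, now=10.0))  # uav_a is lost -> 'uav_b' takes over
```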

17 citations


Journal ArticleDOI
TL;DR: By using Dijkstra's algorithm with a Fibonacci heap (Fib-Dij) to replace Floyd's algorithm, an improved Isomap method based on Fib-Dij is proposed, showing consistency with C-Isomap and marked improvements in speed.
Abstract: Purpose Isometric feature mapping (Isomap) is a very popular manifold learning method and is widely used in dimensionality reduction and data visualization. The most time-consuming step in Isomap is to compute the shortest paths between all pairs of data points based on a neighbourhood graph. The classical Isomap (C-Isomap) is very slow, due to the use of Floyd’s algorithm to compute the shortest paths. The purpose of this paper is to speed up Isomap. Design/methodology/approach Through theoretical analysis, it is found that the neighbourhood graph in Isomap is sparse. In this case, Dijkstra’s algorithm with a Fibonacci heap (Fib-Dij) is faster than Floyd’s algorithm. In this paper, an improved Isomap method is proposed by using Fib-Dij to replace Floyd’s algorithm. Findings Using the S-curve, the Swiss-roll, the Frey face database, the Mixed National Institute of Standards and Technology (MNIST) database of handwritten digits and a face image database, the performance of the proposed method is compared with C-Isomap, showing consistency with C-Isomap and marked improvements in speed. Simulations also demonstrate that Fib-Dij reduces the computation time of the shortest paths from O(N^3) to O(N^2 log N). Research limitations/implications Due to the limitations of the computer, the sizes of the data sets in this paper are all smaller than 3,000. Therefore, researchers are encouraged to test the proposed algorithm on larger data sets. Originality/value The new method based on Fib-Dij can greatly improve the speed of Isomap.
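The speed-up comes from running a heap-based Dijkstra from every source over the sparse neighbourhood graph instead of Floyd's O(N^3) all-pairs algorithm. Python's heapq is a binary heap rather than the Fibonacci heap used in the paper, but the sparse-graph advantage is the same in spirit:

```python
# Single-source Dijkstra with a binary heap; run once per node to get
# all-pairs shortest paths on a sparse k-nearest-neighbour graph.
import heapq

def dijkstra(adj, src):
    """Shortest paths on an adjacency list {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A tiny neighbourhood graph: all-pairs = one Dijkstra per node
adj = {0: [(1, 1.0), (2, 4.0)], 1: [(0, 1.0), (2, 1.5)], 2: [(0, 4.0), (1, 1.5)]}
all_pairs = {s: dijkstra(adj, s) for s in adj}
print(all_pairs[0][2])  # 2.5 via node 1, shorter than the direct 4.0 edge
```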

9 citations


Journal ArticleDOI
TL;DR: An on-line modeling and controlling scheme based on the dynamic recurrent neural network for wastewater treatment system and results show that the proposed control method achieves better performance compared to other methods.
Abstract: Purpose The purpose of this paper is to present an on-line modeling and controlling scheme based on a dynamic recurrent neural network for a wastewater treatment system. Design/methodology/approach A control strategy based on a rule adaptive recurrent fuzzy neural network (RARFNN) is proposed in this paper to control the dissolved oxygen (DO) concentration and nitrate nitrogen (SNo) concentration. The structure of the RARFNN is self-organized by a rule adaptive algorithm, and the rule adaptive algorithm considers the overall information-processing ability of the neural network. Furthermore, a stability analysis method is given to prove the convergence of the proposed RARFNN. Findings By application to the control problem of the wastewater treatment process (WWTP), results show that the proposed control method achieves better performance compared to other methods. Originality/value The proposed on-line modeling and controlling method uses the RARFNN to model and control the dynamic WWTP. The RARFNN can adjust its structure and parameters according to changes in biochemical reactions and pollutant concentrations. The rule adaptive mechanism considers the overall information-processing ability of the neural network, which ensures that the neural network captures the information of the biochemical reactions.

8 citations


Journal ArticleDOI
TL;DR: The experiments proved that the TML language is easy to use and expressive enough to formulate adaptive missions in dynamic environments, and showed thatThe TML interpreter is efficient to execute multi-robot aerial missions and reusable for different platforms.
Abstract: Purpose The purpose of this paper is to describe the specification language TML for adaptive mission plans that the authors designed and implemented for the open-source framework Aerostack for aerial robotics. Design/methodology/approach The TML language combines a task-based hierarchical approach with a more flexible representation, rule-based reactive planning, to facilitate adaptability. This approach includes additional notions that abstract programming details. The authors built an interpreter integrated in the software framework Aerostack. The interpreter was validated with flight experiments for multi-robot missions in dynamic environments. Findings The experiments proved that the TML language is easy to use and expressive enough to formulate adaptive missions in dynamic environments. The experiments also showed that the TML interpreter is efficient in executing multi-robot aerial missions and reusable across different platforms. The TML interpreter is able to verify the mission plan before its execution, which increases robustness and safety, avoiding the execution of certain plans that are not feasible. Originality/value One of the main contributions of this work is the availability of a reliable solution to specify aerial mission plans, integrated in an active open-source project with periodic releases. To the best of the authors’ knowledge, there are no similar solutions in other active open-source projects. As additional contributions, TML uses an original combination of representations for adaptive mission plans (i.e. task trees with original abstract notions and rule-based reactive planning) together with the demonstration of its adequacy for aerial robotics.
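A toy interpreter in the flavour of the approach (hierarchical task list plus rule-based reactive interrupts); the mission format and rule form are invented for illustration, and TML's actual syntax differs:

```python
# Toy mission interpreter: scheduled tasks run in order, while reactive
# rules are checked before each task and fire at most once each.

def run_mission(tasks, rules, state):
    """tasks: ordered task names; rules: (condition, reactive_task)
    pairs checked before each scheduled task."""
    log, fired = [], set()
    for task in tasks:
        for i, (condition, reactive_task) in enumerate(rules):
            if i not in fired and condition(state):
                log.append(reactive_task)  # reactive task preempts
                fired.add(i)
        log.append(task)
    return log

state = {"battery": 0.15}
rules = [(lambda s: s["battery"] < 0.2, "return_to_base_warning")]
print(run_mission(["take_off", "go_to_waypoint"], rules, state))
# ['return_to_base_warning', 'take_off', 'go_to_waypoint']
```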

8 citations


Journal ArticleDOI
TL;DR: The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performances and encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.
Abstract: Purpose Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identification of desired information from huge image databases has faced increased complexity in designing an efficient feature extraction process. Conventional approaches to image classification with text-based image annotation have faced assorted limitations due to erroneous interpretation of vocabulary and the huge time consumption of manual annotation. Content-based image recognition has emerged as an alternative to combat the aforesaid limitations. However, exploring rich feature content in an image with a single technique is less likely to extract meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore the possibilities of enhanced content-based image recognition by fusion of classification decisions obtained using diverse feature extraction techniques. Design/methodology/approach Three novel techniques of feature extraction have been introduced in this paper and have been tested with four different classifiers individually. The four classifiers used for performance testing were the K nearest neighbor (KNN) classifier, the RIDOR classifier, the artificial neural network classifier and the support vector machine classifier. Thereafter, classification decisions obtained using the KNN classifier for different feature extraction techniques have been integrated by Z-score normalization and feature scaling to create a fusion-based framework of image recognition. It has been followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with a classified query. Earlier works on content-based image identification have adopted a fusion-based approach. However, to the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work. 
Findings The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely, the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in total, are used for the evaluation purpose. Originality/value To the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work. The novel idea of exploring rich image features by fusion of multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.
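The fusion step can be sketched as Z-score normalizing per-technique scores before summation, so no single feature-extraction technique dominates the fused decision; the scores below are made-up numbers:

```python
# Z-score normalization of per-technique classification scores,
# followed by summation, as a sketch of the fusion framework above.

def zscore(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def fuse(score_lists):
    """score_lists: one list of per-class scores per feature-extraction
    technique. Returns the winning class index after normalized sum."""
    normed = [zscore(s) for s in score_lists]
    fused = [sum(col) for col in zip(*normed)]
    return fused.index(max(fused))

technique_a = [0.9, 0.2, 0.1]    # scores from feature technique A
technique_b = [10.0, 40.0, 5.0]  # raw scores on a very different scale
print(fuse([technique_a, technique_b]))
# normalization puts both techniques on an equal footing before voting
```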

8 citations


Journal ArticleDOI
TL;DR: The outcomes of the paper show that the synthetic attributes have positively improved the performance of the classification algorithms, and they have also been highly ranked according to their influence on the target variable.
Abstract: Purpose The purpose of this paper is to present an empirical study on the effect of two synthetic attributes on popular classification algorithms on data originating from student transcripts. The attributes represent past performance achievements in a course, which are defined as global performance (GP) and local performance (LP). GP of a course is an aggregated performance achieved by all students who have taken this course, and LP of a course is an aggregated performance achieved in the prerequisite courses by the student taking the course. Design/methodology/approach The paper uses Educational Data Mining techniques to predict student performance in courses, where it identifies the relevant attributes that are the key influencers for predicting the final grade (performance) and reports the effect of the two suggested attributes on the classification algorithms. As a research paradigm, the paper follows the Cross-Industry Standard Process for Data Mining using the RapidMiner Studio software tool. Six classification algorithms are experimented with: C4.5 and CART decision trees, Naive Bayes, k-nearest neighbor, rule-based induction and support vector machines. Findings The outcomes of the paper show that the synthetic attributes have positively improved the performance of the classification algorithms, and they have also been highly ranked according to their influence on the target variable. Originality/value This paper proposes two synthetic attributes that are integrated into a real data set. The key motivation is to improve the quality of the data and make classification algorithms perform better. The paper also presents empirical results showing the effect of these attributes on selected classification algorithms.
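Computing the two synthetic attributes from a toy transcript table might look like this; the records and prerequisite map are invented:

```python
# GP(course): aggregate of all students' grades in the course.
# LP(student, course): aggregate of that student's grades in the
# course's prerequisites. Data below are invented for illustration.

transcripts = [  # (student, course, grade on a 4.0 scale)
    ("s1", "CS101", 3.0), ("s2", "CS101", 4.0),
    ("s1", "MATH1", 2.0), ("s2", "MATH1", 3.0),
]
prereqs = {"CS201": ["CS101", "MATH1"]}

def global_performance(course):
    grades = [g for _, c, g in transcripts if c == course]
    return sum(grades) / len(grades)

def local_performance(student, course):
    grades = [g for s, c, g in transcripts
              if s == student and c in prereqs.get(course, [])]
    return sum(grades) / len(grades)

print(global_performance("CS101"))       # 3.5
print(local_performance("s1", "CS201"))  # 2.5
```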

6 citations


Journal ArticleDOI
TL;DR: The proposed protocol can successfully achieve the predefined time-varying formation in finite time under jointly connected topologies while tracking the trajectory generated by the leader.
Abstract: Purpose The purpose of this paper is to investigate the time-varying finite-time formation tracking control problem for multiple unmanned aerial vehicle systems under switching topologies, where the states of the unmanned aerial vehicles need to form desired time-varying formations while tracking the trajectory of the virtual leader in finite time under jointly connected topologies. Design/methodology/approach A consensus-based formation control protocol is constructed to achieve the desired formation. In this paper, the time-varying formation is specified by a piecewise continuously differentiable vector, while the finite-time convergence is guaranteed by utilizing a non-linear function. Based on the graph theory, the finite-time stability of the closed-loop system with the proposed control protocol under jointly connected topologies is proven by applying LaSalle’s invariance principle and the theory of homogeneity with dilation. Findings The effectiveness of the proposed protocol is verified by numerical simulations. Consequently, the proposed protocol can successfully achieve the predefined time-varying formation in finite time under jointly connected topologies while tracking the trajectory generated by the leader. Originality/value This paper proposes a solution to simultaneously solve the control problems of time-varying formation tracking, finite-time convergence, and switching topologies.
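A discrete-time sketch of a consensus-plus-leader term with the signed-power non-linearity sig(x)^a = sign(x)|x|^a that finite-time protocols typically use; the gains, fixed topology and single-integrator model are simplifying assumptions (the paper handles switching topologies and time-varying formations):

```python
# Consensus-based formation tracking sketch: each agent steers its
# formation-shifted state toward its neighbours' and a virtual leader,
# through the signed-power function used for finite-time convergence.

def sig(x, a=0.5):
    return (1 if x >= 0 else -1) * abs(x) ** a

def step(pos, offsets, leader, neighbors, k=0.5, dt=0.05):
    """One Euler update of single-integrator agents."""
    new = []
    for i, p in enumerate(pos):
        err = sum((pos[j] - offsets[j]) - (p - offsets[i])
                  for j in neighbors[i])
        err += leader - (p - offsets[i])   # virtual-leader tracking term
        new.append(p + dt * k * sig(err))
    return new

pos = [0.0, 5.0, -2.0]
offsets = [0.0, 1.0, -1.0]          # desired formation around the leader
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(2000):
    pos = step(pos, offsets, leader=2.0, neighbors=neighbors)
print([round(p - o, 2) for p, o in zip(pos, offsets)])
# every formation-shifted state settles near the leader value 2.0
```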

Journal ArticleDOI
TL;DR: A Bayesian theoretical model for the resource allocation problem under uncertainty is designed and the rational learning method is introduced for optimizing the decision-making process of agents for achieving Bayesian Nash equilibrium point.
Abstract: Purpose The paper aims to build the connections between game theory and the resource allocation problem with general uncertainty. It proposes modeling the distributed resource allocation problem by a Bayesian game, and three basic kinds of uncertainty are discussed. Design/methodology/approach In this paper, the Bayesian game is proposed for modeling the resource allocation problem with uncertainty. The basic game theoretical model contains three parts: agents, utility function, and decision-making process. The probabilistic weighted Shapley value (WSV) is applied to design the utility function of the agents. For achieving the Bayesian Nash equilibrium point, the rational learning method is introduced for optimizing the decision-making process of the agents. Findings The paper provides empirical insights into how the game theoretical model deals with uncertainty in the resource allocation problem. A probabilistic WSV function was proposed to design the utility function of agents. Moreover, rational learning was used to optimize the decision-making process of agents for achieving the Bayesian Nash equilibrium point. By comparing with models with full information, the simulation results illustrated the effectiveness of the Bayesian game theoretical methods for the resource allocation problem under uncertainty. Originality/value This paper designs a Bayesian theoretical model for the resource allocation problem under uncertainty. The relationships between the Bayesian game and the resource allocation problem are discussed.
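As a building block, the classical (uniform-weight) Shapley value that the probabilistic WSV generalizes can be computed by averaging marginal contributions over all player orderings; the three-player characteristic function below is invented:

```python
# Uniform-weight Shapley value by enumerating player orderings; the
# probabilistic WSV in the paper replaces the uniform average with
# weighted probabilities. The toy game v is invented.
from itertools import permutations

def shapley(players, v):
    """Average marginal contribution of each player over all orderings."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: value[p] / len(orders) for p in players}

# superadditive toy game: agents a and b together unlock extra value
worth = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1,
         frozenset("c"): 0, frozenset("ab"): 4, frozenset("ac"): 1,
         frozenset("bc"): 1, frozenset("abc"): 5}
vals = shapley("abc", lambda s: worth[frozenset(s)])
print(vals)  # efficiency: the three values sum to v(abc) = 5
```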

Journal ArticleDOI
TL;DR: A set of novel features to determine the gist of a given scene based on dominant color, dominant direction, openness and roughness features is proposed, capable of narrowing the semantic gap between low-level image representation and high-level human perception.
Abstract: Purpose The purpose of this paper is to build a classification system which mimics the perceptual ability of human vision in gathering knowledge about the structure, content and surrounding environment of a real-world natural scene accurately at a quick glance. This paper proposes a set of novel features to determine the gist of a given scene based on dominant color, dominant direction, openness and roughness features. Design/methodology/approach The classification system is designed at two different levels. At the first level, a set of low-level features is extracted for each semantic feature. At the second level, the extracted features are subjected to the process of feature evaluation, based on inter-class and intra-class distances. The most discriminating features are retained and used for training the support vector machine (SVM) classifier for two different data sets. Findings Accuracy of the proposed system has been evaluated on two data sets: the well-known Oliva-Torralba data set and a customized image data set comprising high-resolution images of natural landscapes. The experimentation on these two data sets with the proposed novel feature set and the SVM classifier has provided 92.68 percent average classification accuracy, using a ten-fold cross-validation approach. The set of proposed features efficiently represents visual information and is therefore capable of narrowing the semantic gap between low-level image representation and high-level human perception. Originality/value The method presented in this paper represents a new approach for extracting low-level features of reduced dimensionality that is able to model human perception for the task of scene classification. The methods of mapping primitive features to high-level features are intuitive to the user and are capable of reducing the semantic gap. The proposed feature evaluation technique is general and can be applied across any domain.
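The feature-evaluation step based on inter-class and intra-class distances can be sketched as a Fisher-style ratio per feature; the two-class toy data are invented:

```python
# Score each candidate feature by inter-class separation over
# intra-class spread (a Fisher-style ratio); keep the highest scorers.

def fisher_ratio(class_a, class_b):
    """(difference of class means)^2 / (sum of class variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# feature 1 separates the two classes well; feature 2 barely does
f1_a, f1_b = [1.0, 1.2, 0.9], [3.0, 3.1, 2.9]
f2_a, f2_b = [5.0, 7.0, 6.0], [5.5, 6.5, 6.2]
print(fisher_ratio(f1_a, f1_b) > fisher_ratio(f2_a, f2_b))  # True
```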

Journal ArticleDOI
TL;DR: A threat profiling and ECC-based mutual and multi-level authentication for the security of IoT is presented, and attack analysis is carried out to prove the robustness of the proposed protocol against the password guessing attack, impersonation attack, server spoofing attack, stolen verifier attack and replay attack.
Abstract: Purpose Due to the connectivity of multiple devices and systems on the same network, rapid development has become possible in the Internet of Things (IoT) over the last decade. However, IoT is severely affected by security challenges arising from potential vulnerabilities introduced through the connectivity of multiple sensors, devices and systems. To handle these security challenges, the literature presents a handful of security protocols for IoT. The purpose of this paper is to present a threat profiling and elliptic curve cryptography (ECC)-based mutual and multi-level authentication for the security of IoT. This work uses two security attributes, memory-related and machine-related attributes, for maintaining the profile table. Also, the profile table stores values after encrypting them with ECC to achieve storage resilience using the proposed protocol. Furthermore, three entities, the IoT device, the server and the authorization centre (AC), perform mutual verification at seven levels to provide resilience against most of the widely accepted attacks. Finally, DPWSim is utilized for simulation of IoT and verification of the proposed protocol to show that the protocol is secure against passive and active attacks. Design/methodology/approach In this work, the authors present a threat profiling and ECC-based mutual and multi-level authentication for the security of IoT. This work uses two security attributes, memory-related and machine-related attributes, for maintaining the profile table. Also, the profile table stores values after encrypting them with ECC to achieve storage resilience using the proposed protocol. Furthermore, three entities, the IoT device, the server and the AC, perform mutual verification at seven levels to provide resilience against most of the widely accepted attacks. 
Findings DPWSim is utilized for simulation of IoT and verification of the proposed protocol to show that this protocol is secure against passive and active attacks. Also, attack analysis is carried out to prove the robustness of the proposed protocol against the password guessing attack, impersonation attack, server spoofing attack, stolen verifier attack and replay attack. Originality/value This paper presents a threat profiling and ECC-based mutual and multi-level authentication for the security of IoT.
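A highly simplified challenge-response sketch of one verification level, with a shared-secret hash standing in for the protocol's ECC operations; it illustrates the message flow only, not the paper's cryptography:

```python
# Challenge-response message flow: the server issues a fresh challenge,
# the device answers with a keyed digest, the server checks it. A SHA-256
# over a shared secret stands in for the real ECC-based computation.
import hashlib
import os

def respond(secret, challenge):
    return hashlib.sha256(secret + challenge).hexdigest()

secret = b"device-key"      # illustrative; the real protocol derives keys via ECC
challenge = os.urandom(16)  # fresh nonce prevents replay of old responses
device_response = respond(secret, challenge)
server_expected = respond(secret, challenge)
print(device_response == server_expected)  # True: device is verified
```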

Journal ArticleDOI
TL;DR: This is the first attempt at implementing the mobile agent technology with the semantic web service technology and the integration of user profile in the service discovery process facilitates the expression of the user needs and makes intelligible the selected service.
Abstract: Purpose The success of web services has involved the adoption of this technology by different service providers through the web, which has increased the number of web services and made their discovery a tedious task. The UDDI standard has been proposed for web service publication and discovery. However, it lacks sufficient semantic description in the content of web services, which makes it difficult to find and compose suitable web services during the analysis, search, and matching processes. In addition, few works on semantic web services discovery take into account the user’s profile. The purpose of this paper is to optimize web services discovery by reducing the search space and increasing the number of relevant services. Design/methodology/approach The authors propose a new approach for semantic web services discovery based on the mobile agent, user profile and metadata catalog. In the approach, each user is described by a profile represented in two dimensions: a personal dimension and a preferences dimension. The description of a web service is based on two levels: the metadata catalog and WSDL. Findings First, the semantic web services discovery reduces the number of relevant services through the application of the matching algorithm “semantic match”. The result of this first matching restricts the search space at the level of the UDDI registry, which allows the users to obtain good results for the “functional match”. Second, the use of mobile agents as communication entities reduces the traffic on the network and the quantity of exchanged information. Finally, the integration of the user profile in the service discovery process facilitates the expression of the user’s needs and makes the selected service intelligible. Originality/value To the best of the authors’ knowledge, this is the first attempt at implementing the mobile agent technology with the semantic web service technology.

Journal ArticleDOI
TL;DR: A distributed real-time data prediction framework for large-scale time-series data is provided, which meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.
Abstract: Purpose The purpose of this paper is to propose a data prediction framework for scenarios which require forecasting demand for large-scale data sources, e.g. sensor networks, securities exchanges and electric power secondary systems. Concretely, the proposed framework should handle several difficult requirements including the management of gigantic data sources, the need for a fast self-adaptive algorithm, the relatively accurate prediction of multiple time series and the real-time demand. Design/methodology/approach First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework. Findings Several tests based on Power Grid monitoring data are provided for the proposed framework. The experimental results indicate that the prediction data are basically consistent with the actual data, the processing efficiency is relatively high and the resource consumption is reasonable. Originality/value This paper provides a distributed real-time data prediction framework for large-scale time-series data, which meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.
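The framework's forecasting core is ARIMA-based; as a much-simplified illustration of the idea (an AR(1) model fitted by least squares rather than full ARIMA, with hypothetical function names), a multi-step predictor for one series can be sketched as:

```python
def fit_ar1(series):
    # Least-squares estimate of phi in x[t] = phi * x[t-1] (zero-mean AR(1))
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast(series, phi, steps):
    # Iterate the fitted model forward to produce multi-step predictions
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out
```

In the paper's setting, each Storm bolt would run such a model per time series, reading sliding windows from HBase; that wiring is not shown here.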

Journal ArticleDOI
TL;DR: This method provides a practical candidate for the fault diagnosis of rolling bearings in industrial applications; the results show that the SVM classifier with the db4 wavelet base function from the db wavelet family achieves the best fault diagnosis accuracy.
Abstract: Purpose The purpose of this paper is to provide a fault diagnosis method for rolling bearings. Rolling bearings are widely used in industrial machinery, and their fault diagnosis is of great importance and has drawn increasing attention. Based on the common failure mechanisms of the failure modes of rolling bearings, this paper proposes a novel compound data classification method based on the discrete wavelet transform and the support vector machine (SVM) and applies it to the fault diagnosis of rolling bearings. Design/methodology/approach The vibration signal contains a large amount of information about the bearing status. This paper uses various types of wavelet base functions to perform the discrete wavelet transform of the vibration signal and denoise it. Feature vectors are constructed from several time-domain indices of the denoised signal. An SVM is then used to perform classification and fault diagnosis. The optimal wavelet base function is determined based on the diagnosis accuracy. Findings Fault diagnosis experiments on rolling bearings were carried out, and wavelet functions from several wavelet families were tested. The results show that the SVM classifier with the db4 wavelet base function from the db wavelet family has the best fault diagnosis accuracy. Originality/value This method provides a practical candidate for the fault diagnosis of rolling bearings in industrial applications.
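The paper's pipeline uses db4 decomposition and an SVM; as a minimal sketch of the feature-extraction stage only (a single-level Haar decomposition, the simplest wavelet, plus an RMS time-domain feature — both stand-ins, not the paper's choices):

```python
import math

def haar_dwt(signal):
    # One decomposition level: pairwise sums (approximation) and
    # differences (detail), scaled by 1/sqrt(2) to preserve energy
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return a, d

def rms(xs):
    # Root-mean-square: a typical time-domain index for the SVM feature vector
    return math.sqrt(sum(x * x for x in xs) / len(xs))
```

In practice the denoised reconstruction, several such indices and a trained SVM would follow; those steps are omitted here.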

Journal ArticleDOI
TL;DR: The author’s distributed emitter parameter refinement method is able to infer the underlying true parameter values from the huge measurement data efficiently in a distributed working mode when compared against the benchmark clustering methods.
Abstract: Purpose Emitter parameter estimation via signal sorting is crucial for communication, electronic reconnaissance and radar intelligence analysis. However, due to transmitter circuit problems, environmental noise and certain unknown interference sources, the estimated emitter parameter measurements are still inaccurate and biased. As a result, it is indispensable to further refine the parameter values. Though the benchmark clustering algorithms are assumed to be capable of inferring the true parameter values by discovering cluster centers, their high computational and communication cost makes them difficult to adapt for distributed learning on massive measurement data. The paper aims to discuss these issues. Design/methodology/approach In this work, the author brings forward a distributed emitter parameter refinement method based on maximum likelihood. The method is able to infer the underlying true parameter values from huge measurement data efficiently in a distributed working mode. Findings Experimental results on a series of synthetic data sets indicate the effectiveness and efficiency of the method when compared against the benchmark clustering methods. Originality/value With the refined parameter values, complex stochastic parameter patterns can be discovered, and emitters can be identified by merging observations with consistent parameter values. The author is currently applying the distributed parameter refinement method to PRI parameter pattern discovery and emitter identification. The superior performance ensures its wide application in both civil and military fields.
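The distributed maximum-likelihood idea can be illustrated in a deliberately reduced form (assuming Gaussian measurement noise, so the ML estimate of a parameter's true value is the sample mean) with map-reduce-style partial statistics; function names here are illustrative, not the author's:

```python
def local_stats(chunk):
    # Each worker reduces its share of measurements to (sum, count),
    # so only two numbers per worker cross the network
    return (sum(chunk), len(chunk))

def combine(stats):
    # The coordinator merges partial statistics into the global ML estimate
    total = sum(s for s, _ in stats)
    count = sum(n for _, n in stats)
    return total / count
```

The author's actual method handles multiple parameter clusters; this sketch shows only why the communication cost stays low.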

Journal ArticleDOI
TL;DR: A new algorithm, chaotic pigeon-inspired optimization (CPIO), is proposed, which can effectively improve the computing efficiency of the basic Itti's model for saliency-based detection and can be extensively applied for fast, accurate and multi-target detection in aerial images.
Abstract: Purpose The purpose of this paper is to propose a new algorithm, chaotic pigeon-inspired optimization (CPIO), which can effectively improve the computing efficiency of the basic Itti's model for saliency-based detection. The CPIO algorithm and the relevant applications are aimed at air surveillance for target detection. Design/methodology/approach To compare the improvements in performance on Itti's model, three bio-inspired algorithms including particle swarm optimization (PSO), brain storm optimization (BSO) and CPIO are applied to optimize the weight coefficients of each feature map in the saliency computation. Findings According to the experimental results on the optimized Itti's model, CPIO outperforms PSO in terms of computing efficiency and is superior to BSO in terms of searching ability. Therefore, CPIO provides the best overall properties among the three algorithms. Practical implications The algorithm proposed in this paper can be extensively applied for fast, accurate and multi-target detection in aerial images. Originality/value The CPIO algorithm is originally proposed and is very promising for solving complicated optimization problems.
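Chaotic variants of swarm optimizers typically replace uniform pseudo-random draws with a chaotic sequence; the abstract does not name the map CPIO uses, so the widely used logistic map is shown here purely as an assumed example:

```python
def logistic_sequence(x0, n, r=4.0):
    # Logistic map x_{k+1} = r * x_k * (1 - x_k); fully chaotic at r = 4
    # for x0 in (0, 1) away from the map's fixed points
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs
```

In a CPIO-style loop, such values would perturb the pigeons' velocity and map/compass updates in place of ordinary random numbers.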

Journal ArticleDOI
TL;DR: A novel wavelet transform-based steganographic method is proposed for secure data communication using an OFDM system; it achieves a PSNR of 71.07 dB, which supports the confidentiality of the message.
Abstract: Purpose Fueled by the rapid growth of the internet, steganography has emerged as one of the promising techniques in communication systems for obscuring data. Steganography is defined as the process of concealing data or a message within media files without affecting the perception of the image. Media files, such as audio, video and images, are utilized to embed the message. Nowadays, steganography is also used to transmit medical information or diagnostic reports. The paper aims to discuss these issues. Design/methodology/approach In this paper, a novel wavelet transform-based steganographic method is proposed for secure data communication using an OFDM system. The embedding and extraction processes in the proposed steganography method exploit the wavelet transform. Initially, the cost matrix is estimated from three aspects: pixel intensity, edge transformation and the wavelet transform. The cost estimation matrix provides the locations in the cover image where the message is to be embedded. Then, the wavelet transform is utilized to embed the message into the cover image according to the cost values. Subsequently, in the extraction process, the wavelet transform is applied to the embedded image to retrieve the message efficiently. Finally, in order to transfer the secret information over the channel, the newly developed wavelet-based steganographic method is employed in the OFDM system. Findings The experimental results are evaluated, and the performance is analyzed using the PSNR and MSE parameters and then compared with existing systems. The proposed wavelet transform steganographic method achieves a PSNR of 71.5 dB, which ensures the high imperceptibility of the image. The OFDM-based steganographic method then attains a PSNR of 71.07 dB, which supports the confidentiality of the message.
Originality/value In the authors' previous work, the embedding and extraction processes were based on the cost estimation matrix. To enhance security throughout the communication system, the novel wavelet-based embedding and extraction process is applied to the OFDM system in this paper. The idea behind this method is to attain higher imperceptibility and robustness of the image.
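The authors' method operates in the wavelet domain; as a minimal stand-in for the embed/extract round trip it describes, a spatial-domain least-significant-bit scheme (a much weaker technique, shown only to fix the interface, and not the paper's method) looks like:

```python
def embed(pixels, bits):
    # Overwrite the least significant bit of the first len(bits) pixels
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]

def extract(pixels, n):
    # Read back the first n least significant bits
    return [p & 1 for p in pixels[:n]]
```

The paper instead modifies wavelet coefficients at cost-matrix-selected locations, which is what yields the reported imperceptibility.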

Journal ArticleDOI
Lie Yu, Jia Chen, Yukang Tian, Yunzhou Sun, Lei Ding 
TL;DR: This is the first study to use two independent PID controllers to realize stable hovering control for a UAS, and also the first to use the velocity of the UAS to calculate the desired position.
Abstract: Purpose The purpose of this paper is to present a control strategy which uses two independent PID controllers to realize hovering control for unmanned aerial systems (UASs). The aim of using two PID controllers is to achieve position control and velocity control simultaneously. Design/methodology/approach The dynamics of the UAS are mathematically modeled. One PID controller is used for position tracking control, while the other is selected for tracking control of the vertical component of velocity. Meanwhile, a fuzzy logic algorithm is presented which uses the actual horizontal component of velocity to compute the desired position. Findings Based on this fuzzy logic algorithm, the control error of the horizontal velocity tracking is gradually narrowed to zero. The results show that the fuzzy logic algorithm can make the UAS hover stationary in the air, vertical to the ground. Social implications The acquired results are based on simulation, not experiment. Originality/value This is the first study to use two independent PID controllers to realize stable hovering control for a UAS. It is also the first to use the velocity of the UAS to calculate the desired position.
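A discrete PID loop of the kind the abstract pairs up (one instance for position, one for the vertical velocity component) can be sketched as follows; gains and time step are illustrative, not taken from the paper:

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        # Positional PID: proportional term + accumulated integral term
        # + backward-difference derivative term
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Two independent instances, e.g. `PID(1.2, 0.1, 0.05, 0.01)` for position and another for velocity, would be stepped once per control cycle.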

Journal ArticleDOI
TL;DR: The regularization method accompanying each of the two iterative methods proved its efficiency in handling many problems, especially ill-posed problems such as the Fredholm integral equation of the first kind.
Abstract: Purpose In this paper, the exact solutions of the Schlomilch's integral equation and its linear and non-linear generalized forms with applications are obtained using two efficient iterative methods. The Schlomilch's integral equations have many applications in atmospheric and terrestrial physics and in ionospheric problems. They describe the density profile of electrons in the ionosphere for oblique incidence of the quasi-transverse approximations. The paper aims to discuss these issues. Design/methodology/approach First, the authors apply a regularization method combined with the standard homotopy analysis method to find the exact solutions for all forms of the Schlomilch's integral equation. Second, the authors implement the regularization method with the variational iteration method for the same purpose. The effectiveness of the regularization-homotopy method and the regularization-variational method is shown by applying them to several illustrative examples, which have been solved by other authors using the so-called regularization-Adomian method. Findings The implementation of the two methods demonstrates their usefulness in finding exact solutions. Practical implications The authors have applied the developed methodology to the solution of the Rayleigh equation, which is an important equation in fluid dynamics and has a variety of applications in different fields of science and engineering. These include the analysis of batch distillation in chemistry, scattering of electromagnetic waves in physics, isotopic data in contaminant hydrogeology and others. Originality/value In this paper, two reliable methods have been implemented to solve several examples, where those examples represent the main types of the Schlomilch's integral models. Each method has been accompanied by the use of the regularization method, which yields an efficient way to obtain the exact solutions of the linear and non-linear Schlomilch's integral equations and is easy to implement. In addition, the regularization method accompanying each of the two methods proved its efficiency in handling many problems, especially ill-posed problems such as the Fredholm integral equation of the first kind.
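The regularization step referred to throughout can be sketched, as an assumption about the authors' setup, in the standard Lavrentiev form commonly paired with Adomian-type methods: the ill-posed first-kind equation is replaced by a well-posed second-kind one with a small parameter.

```latex
% Ill-posed first-kind integral equation
f(x) = \int_a^b K(x,t)\, u(t)\, dt
% Lavrentiev-regularized second-kind equation, small parameter \mu > 0
\mu\, u_\mu(x) = f(x) - \int_a^b K(x,t)\, u_\mu(t)\, dt,
\qquad u(x) = \lim_{\mu \to 0^+} u_\mu(x)
```

The homotopy analysis method or the variational iteration method is then applied to the regularized equation, and the exact solution is recovered in the limit.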

Journal ArticleDOI
TL;DR: The purpose of this paper is to improve the control precision of the station-keeping control for a stratosphere airship through the feedforward-feedback PID controller which is designed by the wind speed prediction based on the incremental extreme learning machine (I-ELM).
Abstract: Purpose The purpose of this paper is to improve the control precision of the station-keeping control for a stratosphere airship through the feedforward-feedback PID controller which is designed by the wind speed prediction based on the incremental extreme learning machine (I-ELM). Design/methodology/approach First of all, the online prediction of wind speed is implemented by the I-ELM with rolling time. Second, the feedforward-feedback PID controller is designed through the position information of the airship and the predicted wind speed. In the end, the one-dimensional dynamic model of the stratosphere airship is built, and the controller is applied in the numerical simulation. Findings Based on the conducted numerical simulations, some valuable conclusions are obtained. First, through the comparison between the predicted value and the true value of the wind speed, the wind speed prediction based on I-ELM is very accurate. Second, the feedforward-feedback PID controller designed in this paper is very effective. Originality/value This paper is very valuable to the research of high-accuracy station-keeping control of stratosphere airships.

Journal ArticleDOI
TL;DR: The change of variables and the method of successive approximations are introduced for the solution of a hyperbolic PDE and for the boundary control of a reaction-diffusion system.
Abstract: Purpose The purpose of this paper is to investigate the analytical solution of a hyperbolic partial differential equation (PDE) and its application. Design/methodology/approach The change of variables and the method of successive approximations are introduced. The Volterra transformation and a boundary control scheme are adopted in the analysis of the reaction-diffusion system. Findings A detailed and complete calculation process for the analytical solution of the hyperbolic PDE (1)-(3) is given. Based on the Volterra transformation, a reaction-diffusion system is controlled by boundary control. Originality/value The introduced approach is of interest for the solution of hyperbolic PDEs and the boundary control of reaction-diffusion systems.

Journal ArticleDOI
TL;DR: Experimental results and a comparison between the constructed models conclude that the ANFIS model with fuzzy C-means (FCM) partitioning provides better accuracy in predicting the class, with the lowest mean square error (MSE) value.
Abstract: Purpose As far as the treatment of the most complex issues in design is concerned, approaches based on classical artificial intelligence are inferior to those based on computational intelligence, particularly when dealing with vagueness, multi-objectivity and a large number of possible solutions. In practical applications, computational techniques have given the best results, and research in this field is continuously growing. The purpose of this paper is to search for a general and effective intelligent tool for the prediction of patient survival after surgery. The present study involves the construction of such intelligent computational models using different configurations, including data partitioning techniques, which have been experimentally evaluated by applying them to a realistic medical data set for the prediction of survival in pancreatic cancer patients. Design/methodology/approach On the basis of the experiments and research performed on data belonging to various fields using different intelligent tools, the authors infer that combining the qualification aspects of a fuzzy inference system with the quantification aspects of an artificial neural network can yield an efficient and better model for prediction. The authors have constructed three soft computing-based adaptive neuro-fuzzy inference system (ANFIS) models with different configurations and data partitioning techniques, with the aim of finding capable predictive tools that can deal with nonlinear and complex data. After evaluating the models over three shuffles of the data (training set, test set and full set), the performances were compared in order to find the best design for the prediction of patient survival after surgery. The construction and implementation of the models have been performed using the MATLAB simulator.
Findings On applying the hybrid intelligent neuro-fuzzy models with different configurations, the authors were able to demonstrate their advantage in predicting the survival of patients with pancreatic cancer. Experimental results and a comparison between the constructed models conclude that the ANFIS model with fuzzy C-means (FCM) partitioning provides better accuracy in predicting the class, with the lowest mean square error (MSE) value. Apart from the MSE value, the other evaluation measure values for FCM partitioning also prove to be better than those of the rest of the models. Therefore, the results demonstrate that the model can be applied to other biomedicine and engineering fields dealing with complex issues related to imprecision and uncertainty. Originality/value The originality of the paper includes a framework showing the two-way flow of fuzzy system construction, which is further used by the authors in designing the three simulation models with different configurations, including the partitioning methods, for the prediction of patient survival after surgery. Several experiments were carried out using different shuffles of the data to validate the parameters of the model. The performances of the models were compared using various evaluation measures such as the MSE.
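The FCM partitioning that the winning ANFIS configuration relies on can be sketched in one dimension (fuzzifier m = 2 and a naive min/max initialization are assumptions here, not the paper's settings):

```python
def fcm_1d(data, k=2, m=2.0, iters=50):
    # Fuzzy C-means: alternate membership updates and center updates
    centers = [min(data), max(data)]  # naive initialization for k = 2
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - c) for c in centers]
            if any(di == 0.0 for di in d):
                # A point sitting exactly on a center belongs to it crisply
                u.append([1.0 if di == 0.0 else 0.0 for di in d])
            else:
                u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(k)) for i in range(k)])
        centers = [sum(u[n][i] ** m * data[n] for n in range(len(data))) /
                   sum(u[n][i] ** m for n in range(len(data)))
                   for i in range(k)]
    return centers
```

In an ANFIS workflow, the resulting centers (and spreads derived from the memberships) would seed the premise membership functions before hybrid training.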

Journal ArticleDOI
TL;DR: This paper presents the first application of adaptive selection based on the gradient value of mid-IR sensor data, applied to real-time determination of the control state by classification with the SVM algorithm for esterification process control, in order to increase efficiency.
Abstract: Purpose The production of glycerol derivatives by the esterification process is subject to many constraints related to the yield of the production target and the lack of process efficiency. Accurate monitoring and control of the process can improve production yield and efficiency. The purpose of this paper is to propose a real-time optimization (RTO) approach using gradient adaptive selection and classification from infrared sensor measurements to cover various disturbances and uncertainties in the reactor. Design/methodology/approach Optimization of the esterification process using self-optimization (SO) was developed and combined with a classification process, with the necessary condition optimum (NCO) serving as gradient adaptive selection, supported by laboratory-scale medium-wavelength infrared (mid-IR) sensors, and the proposed optimization system indicators were measured in the batch process. Business Process Model and Notation (BPMN 2.0) was used to describe the tasks of the SO workflow in collaboration with NCO as an abstraction for the conceptual phase. Next, Stateflow modeling was deployed to simulate the three states of gradient-based adaptive control combined with support vector machine (SVM) classification, with an Arduino microcontroller for implementation. Findings This new method shows that the responsiveness of real-time optimization control increased product yield by up to 13 percent, lowered measurement error to 1.11 percent and reduced the process duration by up to 22 minutes, with an effective stirrer rotation range between 300 and 400 rpm and a final temperature between 200 and 210°C, which was more efficient as it consumed less energy. Research limitations/implications In this research the authors only experimented with the esterification process using glycerol, but as a development of the RTO concept, it would be possible to apply it to other chemical reactions or systems.
Practical implications This research introduces a new development of an RTO approach to optimal control and as such marks the starting point for more research into its properties. As the methodology is generic, it can be applied to different optimization problems for batch systems in the chemical industries. Originality/value The paper is original in that it presents the first application of adaptive selection based on the gradient value of mid-IR sensor data, applied to real-time determination of the control state by classification with the SVM algorithm for esterification process control to increase efficiency.

Journal ArticleDOI
TL;DR: This study brings a technique to design a coil antenna for a damaged NFC tag to retrieve all the information without losing even a single bit of sensitive information.
Abstract: Purpose The purpose of this paper is to discuss a technique for restoring data from a broken/damaged near-field communication (NFC) tag whose coil is damaged and seems unrecoverable. Design/methodology/approach This paper discusses a method to restore data from damaged NFC tags by designing a coil that matches the NFC technical specification. An NFC tag with a broken antenna coil but an operational NFC chip is used; the data are restored by making an external loop antenna for the same chip. Findings If an NFC tag is damaged, the information stored on it can be lost, causing serious inconvenience. This research provides an effective mechanism for retrieving all the information accurately from a damaged NFC tag, provided the NFC chip itself is not damaged. Research limitations/implications A major limitation of this research is that the NFC chip must remain intact without any damage. Data are only recoverable if just the antenna of the NFC tag is damaged; any damage to the NFC chip would make the data impossible to recover. Practical implications The research was carried out with limited resources in an academic institute and hence cannot be compared to industrial antenna designs. Furthermore, industry vendors use aluminum to design the coil; however, in this study a copper coil is used, since it is far less expensive than an aluminum coil. Originality/value NFC is a rather new short-range wireless technology, and not much work has been done in this field as far as antenna study is concerned. This study brings a technique for designing a coil antenna for a damaged NFC tag to retrieve all the information without losing even a single bit of sensitive information.

Journal ArticleDOI
TL;DR: A method for the identification of fuzzy model parameters ensuring the stability of all local models is introduced and the proposed fuzzy internal model control approaches ensure robustness against parametric uncertainties.
Abstract: Purpose The purpose of this paper is to use internal model control to deal with nonlinear stable systems affected by parametric uncertainties. Design/methodology/approach The dynamics of the considered system are approximated by a Takagi-Sugeno fuzzy model. The parameters of the fuzzy rule premises are determined manually. However, the parameters of the fuzzy rule conclusions are updated using the gradient descent method under inequality constraints in order to ensure the stability of each local model. In fact, without these constraints the training algorithm can produce one or more unstable local models even if the desired accuracy is achieved in the training step. The considered robust control approach is internal model control, synthesized based on the Takagi-Sugeno fuzzy model. Two control strategies are considered. The first is based on the parallel distributed compensation principle and consists of associating an internal model controller with each local model. For the second strategy, the control law is computed based on the global Takagi-Sugeno fuzzy model. Findings According to the simulation results, the stability of all local models is obtained, and the proposed fuzzy internal model control approaches ensure robustness against parametric uncertainties. Originality/value This paper introduces a method for the identification of fuzzy model parameters that ensures the stability of all local models. Using the resulting fuzzy model, two fuzzy internal model control designs are presented.
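A Takagi-Sugeno model of the kind identified here blends local linear conclusions by normalized rule firing strengths; a one-input sketch with Gaussian premises (all parameter values are illustrative, not from the paper):

```python
import math

def gauss(x, center, spread):
    # Gaussian membership function for a rule premise
    return math.exp(-((x - center) ** 2) / (2.0 * spread ** 2))

def ts_output(x, rules):
    # rules: list of (center, spread, a, b); local conclusion y = a*x + b.
    # The global output is the firing-strength-weighted average of the
    # local linear models
    weights = [gauss(x, c, s) for c, s, _, _ in rules]
    locals_ = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, locals_)) / sum(weights)
```

Near a rule's center, that rule's local model dominates, which is why per-local-model stability constraints matter during training.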

Journal ArticleDOI
TL;DR: An approach to increase the integration rate of elements in a three-level inverter is introduced, based on decreasing the dimensions of the inverter's elements by manufacturing them by diffusion or ion implantation in a heterostructure.
Abstract: Purpose The purpose of this paper is to analyze the redistribution of dopant and radiation defects in order to determine the conditions which correspond to a decrease in the dimensions of the elements of the considered inverter and, at the same time, an increase in their density. Design/methodology/approach In this paper, the authors introduce an approach to increase the integration rate of elements in a three-level inverter. The approach is based on decreasing the dimensions of the elements of the inverter (diodes and bipolar transistors) by manufacturing these elements by diffusion or ion implantation in a heterostructure with a specific configuration, and on optimizing the annealing of dopant and radiation defects. Findings The authors formulate recommendations to increase the density of the elements of the inverter while decreasing their dimensions. Practical implications Optimization of the manufacturing of integrated circuits and their elements. Originality/value The results of this paper are based on an original analysis of dopant transport that takes into account the transport and interaction of radiation defects.

Journal ArticleDOI
TL;DR: An evolutionary algorithm, the covariance matrix adaptation evolution strategy (CMA-ES), is used to approximate the IBB based on the skeleton and symmetry of the input character mesh model, so that the optimal position and scale of the IBB can be found.
Abstract: Purpose In the process of robot shell design, it is necessary to match the shape of the input 3D original character mesh model to the robot endoskeleton, in order to make the input model fit the robot and avoid collisions. The purpose of this paper is therefore to find an object of reference which can be used in the shape-matching process. Design/methodology/approach In this work, the authors propose an interior bounded box (IBB) approach that derives from the oriented bounding box (OBB). This kind of box lies inside the closed mesh model; at the same time, it has maximum volume, is aligned with the object axes and is enclosed by all the mesh vertices. Based on the IBB of the input mesh model and the OBB of the robot endoskeleton, the authors can complete the shape-matching process. In this paper, the authors use an evolutionary algorithm, the covariance matrix adaptation evolution strategy (CMA-ES), to approximate the IBB based on the skeleton and symmetry of the input character mesh model. Findings Based on the evolutionary algorithm CMA-ES, the optimal position and scale information of the IBB can be found, and a satisfactory IBB results from this optimization process. The output IBB has maximum volume and is enveloped by the input character mesh model. Originality/value To the best knowledge of the authors, the IBB is proposed here for the first time and used in the field of robot shell design. Taking advantage of the IBB, a shell model that fits the robot can be obtained quickly, while avoiding collision between the shell model and the robot endoskeleton.
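CMA-ES adapts a full covariance matrix and is too long to reproduce here; the underlying evolution-strategy loop it extends can be sketched as a (1+1)-ES with a 1/5-success-style step-size rule, maximizing an arbitrary fitness function (the IBB volume objective with its containment constraint would take its place):

```python
import random

def one_plus_one_es(fitness, x0, sigma=1.0, iters=500, seed=0):
    # (1+1)-evolution strategy: keep the parent unless the mutated
    # offspring is at least as fit; adapt the step size along the way
    rng = random.Random(seed)
    x, fx = x0, fitness(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, sigma)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
            sigma *= 1.22   # success: widen the search
        else:
            sigma *= 0.82   # failure: narrow it
    return x
```

CMA-ES generalizes this by sampling a population from a multivariate Gaussian and adapting its full covariance, which is what lets it optimize the IBB's position and scale jointly.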