
Showing papers in "International Journal of Computers Communications & Control in 2016"


Journal ArticleDOI
TL;DR: The results of this study show that the extended fuzzy EDAS method is efficient and has good stability for solving MCDM problems.
Abstract: In real-world problems, we are likely confronted with some alternatives that need to be evaluated with respect to multiple conflicting criteria. Multi-criteria decision-making (MCDM) refers to making decisions in such a situation. There are many methods and techniques available for solving MCDM problems. The evaluation based on distance from average solution (EDAS) method is an efficient multi-criteria decision-making method. Because uncertainty is usually an inevitable part of MCDM problems, fuzzy MCDM methods can be very useful for dealing with real-world decision-making problems. In this study, we extend the EDAS method to handle MCDM problems in the fuzzy environment. A case study of supplier selection is used to show the procedure of the proposed method and its applicability. Also, we perform a sensitivity analysis by using simulated weights for criteria to examine the stability and validity of the results of the proposed method. The results of this study show that the extended fuzzy EDAS method is efficient and has good stability for solving MCDM problems.
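The crisp EDAS core that the paper extends can be sketched in a few lines; the fuzzy version replaces these crisp operations with fuzzy arithmetic and defuzzification. The decision matrix and weights below are illustrative, and the sketch assumes benefit-type criteria only:

```python
def edas_scores(matrix, weights):
    """Crisp EDAS appraisal scores (alternatives x benefit criteria)."""
    n_crit = len(matrix[0])
    # Average solution per criterion
    avg = [sum(row[j] for row in matrix) / len(matrix) for j in range(n_crit)]
    # Positive/negative distances from the average solution
    pda = [[max(0.0, x - avg[j]) / avg[j] for j, x in enumerate(row)] for row in matrix]
    nda = [[max(0.0, avg[j] - x) / avg[j] for j, x in enumerate(row)] for row in matrix]
    # Weighted sums, then normalized appraisal scores
    sp = [sum(w * p for w, p in zip(weights, row)) for row in pda]
    sn = [sum(w * q for w, q in zip(weights, row)) for row in nda]
    nsp = [s / max(sp) for s in sp]
    nsn = [1.0 - s / max(sn) for s in sn]
    return [(a + b) / 2.0 for a, b in zip(nsp, nsn)]

scores = edas_scores([[7, 5], [9, 3], [6, 8]], [0.6, 0.4])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the highest appraisal score is ranked first; the sensitivity analysis in the paper amounts to re-running this ranking under simulated criterion weights.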

243 citations


Journal ArticleDOI
TL;DR: The efficiency of selected features has been investigated on several machine learning algorithms (feed-forward artificial neural network, k-nearest neighbor and decision tree) with an independent database as the data source, and the results demonstrate that machine learning in combination with feature selection can outperform other classification approaches.
Abstract: Human activity recognition (HAR) is one of those research areas whose importance and popularity have notably increased in recent years. HAR can be seen as a general machine learning problem which requires feature extraction and feature selection. In previous articles, different features were extracted from the time, frequency and wavelet domains for HAR, but it is not clear how to determine the best feature combination that maximizes the performance of a machine learning algorithm. The aim of this paper is to present the most relevant feature extraction methods in HAR and to compare them with widely-used filter and wrapper feature selection algorithms. This work is an extended version of [1], where we tested the efficiency of filter and wrapper feature selection algorithms in combination with artificial neural networks. In this paper, the efficiency of the selected features has been investigated on several machine learning algorithms (feed-forward artificial neural network, k-nearest neighbor and decision tree) with an independent database as the data source. The results demonstrate that machine learning in combination with feature selection can outperform other classification approaches.
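A wrapper feature selection method of the kind compared here scores candidate feature subsets by actually training and evaluating a classifier. The paper's exact setup is not given in the abstract, so the following is a minimal greedy forward-selection sketch wrapped around a leave-one-out 1-nearest-neighbor classifier; the data, helper names and selection budget are all illustrative:

```python
import math

def knn1_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to feats."""
    correct = 0
    for i, xi in enumerate(X):
        best, best_d = None, math.inf
        for j, xj in enumerate(X):
            if i == j:
                continue
            d = sum((xi[f] - xj[f]) ** 2 for f in feats)
            if d < best_d:
                best, best_d = y[j], d
        correct += best == y[i]
    return correct / len(X)

def forward_select(X, y, n_feats):
    """Greedy wrapper: repeatedly add the feature that most improves
    the wrapped classifier's accuracy, up to n_feats features."""
    selected, remaining = [], set(range(len(X[0])))
    while remaining and len(selected) < n_feats:
        f = max(remaining, key=lambda g: knn1_accuracy(X, y, selected + [g]))
        selected.append(f)
        remaining.discard(f)
    return selected
```

A filter method would instead rank features by a classifier-independent statistic (e.g. correlation with the label), which is cheaper but ignores feature interactions; that trade-off is exactly what the comparison in the paper probes.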

43 citations


Journal ArticleDOI
TL;DR: The results prove that the ELM learning model performs better than the other methods; the study compares five models, namely feed forward networks without feedback, feed forward back propagation networks, radial basis function networks, ELMAN networks and the ELM learning model.
Abstract: In recent years, investors have paid major attention to investing in the gold market because of huge profits in the future. Gold is the only commodity which maintains its value even in economic and financial crises. Also, gold prices are closely related with other commodities. Future gold price prediction becomes a warning system for investors due to unforeseen risk in the market. Hence, accurate gold price forecasting is required to foresee business trends. This paper concentrates on forecasting future gold prices from four commodities: historical data of gold prices, silver prices, crude oil prices, the Standard and Poor's 500 stock index (S&P500) and the foreign exchange rate. The period used for the study is from 1st January 2000 to 31st April 2014. In this paper, a learning algorithm for single hidden layered feed forward neural networks called the Extreme Learning Machine (ELM) is used, which has good learning ability. Also, this study compares five models, namely feed forward networks without feedback, feed forward back propagation networks, radial basis function networks, ELMAN networks and the ELM learning model. The results prove that the ELM learning model performs better than the other methods.

40 citations


Journal ArticleDOI
TL;DR: The concept of fuzzy b-metric space is introduced and studied, generalizing both the notion of fuzzy metric space introduced by I. Kramosil and J. Michalek and the concept of b-metric space, and a decomposition theorem for a fuzzy quasi-pseudo-b-metric into an ascending family of quasi-pseudo-b-metrics is established.
Abstract: Metric spaces and their various generalizations occur frequently in computer science applications. This is the reason why, in this paper, we introduced and studied the concept of fuzzy b-metric space, generalizing, in this way, both the notion of fuzzy metric space introduced by I. Kramosil and J. Michalek and the concept of b-metric space. On the other hand, we introduced the concept of fuzzy quasi-b-metric space, extending the notion of fuzzy quasi metric space recently introduced by V. Gregori and S. Romaguera. Finally, a decomposition theorem for a fuzzy quasi-pseudo-b-metric into an ascending family of quasi-pseudo-b-metrics is established. The use of fuzzy b-metric spaces and fuzzy quasi-b-metric spaces in the study of denotational semantics and their applications in control theory will be an important next step.

40 citations


Journal ArticleDOI
TL;DR: A new INVAR Method for multiple criteria analysis (Degree of Project Utility and Investment Value Assessments along with Recommendation Provisions) is recommended, which can be used for sustainable building assessment.
Abstract: This article recommends a new INVAR Method for multiple criteria analysis (Degree of Project Utility and Investment Value Assessments along with Recommendation Provisions). It can be used for sustainable building assessment. The INVAR Method can additionally assist in determining the investment value of a project under deliberation and provide digital recommendations for improving projects. Furthermore, the INVAR Method can optimize the selected criterion so that the project under deliberation becomes equally competitive in the market compared to the other projects under comparison. The INVAR Method is also able to calculate the value that the project under deliberation should reach to become the best among those under deliberation. The case studies presented in this research demonstrate the developed method.

38 citations


Journal ArticleDOI
TL;DR: This chapter aims at briefly discussing biometric related items, including principles, definitions, biometric modalities and technologies along with their advantages, disadvantages or limitations, and biometric standards, targeting unfamiliar readers.
Abstract: In a nutshell, a biometric security system requires a user to provide some biometric features which are then verified against stored biometric templates. Nowadays, the traditional password based authentication method tends to be replaced by advanced biometric technologies. Biometric based authentication is becoming increasingly appealing and common for most human-computer interaction devices. To give only one recent example, Microsoft augmented its brand new Windows 10 OS version with the capability of supporting face recognition when the user logs in. This chapter does not intend to cover a comprehensive and detailed list of biometric techniques. It rather aims at briefly discussing biometric related items, including principles, definitions, biometric modalities and technologies along with their advantages, disadvantages or limitations, and biometric standards, targeting unfamiliar readers. It also mentions the attributes of a biometric system as well as attacks on biometrics. Important reference sources are pointed out so that the interested reader may gain in-depth knowledge by consulting them.

31 citations


Journal ArticleDOI
TL;DR: An ontology based solution to extract and reuse FMEA knowledge from the textual documents is proposed, along with its implementation, reasoning, and data retrieval through it for automotive domain.
Abstract: Risk mitigation has always been a special concern for an organization's strategic management. Various tools and techniques have been developed to manage risk in an effective way. Failure Mode and Effects Analysis (FMEA) is one of the tools used for effective assessment of risk. It analyzes all potential failure modes, their causes, and their effects on a product or process. Moreover, it recommends actions to mitigate failures in order to enhance product reliability. Organizations spend their resources and domain experts make their efforts to complete this analysis. It further helps organizations identify the expected risks and plan strategies in advance to tackle them. Unfortunately, the analysis, produced after spending a lot of organizational assets and expert effort, is not reusable due to its natural language text based description. Information and communication technology experts have proposed some solutions, but these have deficiencies. The authors in [13] proposed an ontology based solution to extract and reuse FMEA knowledge from textual documents, and this article is the first step towards its implementation. In this article we propose our ontology for Process Failure Mode and Effects Analysis (PFMEA) for the automotive domain, along with its implementation, reasoning, and data retrieval through it.

30 citations


Journal ArticleDOI
TL;DR: The results show that the proposed approach is able to effectively update the recommendation model online from a sequence of rating observations and outperforms other baseline methods in terms of RMSE.
Abstract: With the rapid growth of Internet information, our individual processing capacity has become overwhelmed. Thus, we really need recommender systems to provide us with items online in real time. In reality, a user's interest and an item's popularity are always changing over time. Therefore, recommendation approaches should take such changes into consideration. In this paper, we propose two approaches, i.e., First Order Sparse Collaborative Filtering (SOCFI) and Second Order Sparse Online Collaborative Filtering (SOCFII), to deal with user-item ratings for online collaborative filtering. We conduct experiments on such real data sets as MovieLens100K and MovieLens1M to evaluate our proposed methods. The results show that our proposed approach is able to effectively update the recommendation model online from a sequence of rating observations, and that in terms of RMSE it outperforms other baseline methods.
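The abstract does not spell out the SOCFI/SOCFII update rules, but the generic first-order building block behind online collaborative filtering is a single stochastic-gradient step on a latent-factor model each time a rating arrives. The following sketch shows that building block only; the rank, learning rate, regularization and simulated rating stream are all illustrative:

```python
import random

def online_mf_update(P, Q, u, i, r, lr=0.05, reg=0.02):
    """One first-order SGD step on a single observed rating (u, i, r):
    nudge user factors P[u] and item factors Q[i] toward the residual."""
    pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
    err = r - pred
    for k in range(len(P[u])):
        pu, qi = P[u][k], Q[i][k]
        P[u][k] += lr * (err * qi - reg * pu)
        Q[i][k] += lr * (err * pu - reg * qi)
    return err

rng = random.Random(0)
K = 4  # latent rank (illustrative)
P = [[rng.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(3)]  # user factors
Q = [[rng.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(3)]  # item factors
stream = [(0, 0, 4.0), (1, 1, 2.0), (0, 1, 3.0)] * 500  # simulated rating stream
for u, i, r in stream:
    online_mf_update(P, Q, u, i, r)
```

Because each observation is processed once and discarded, the model tracks drifting user interests and item popularity without retraining from scratch, which is the motivation the abstract gives; a second-order variant would additionally maintain curvature information per factor.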

23 citations


Journal ArticleDOI
TL;DR: It is proved that numerical P systems with a lower-threshold, with one membrane and linear production functions, working both in the all-parallel mode and in the one-parallel mode are universal.
Abstract: Numerical P systems are a class of P systems inspired both from the structure of living cells and from economics. In this work, a control of using evolution programs is introduced into numerical P systems: a threshold is considered and a program can be applied only when the values of the variables involved in the production function of the program are greater than/equal to (lower-threshold) or smaller than/equal to (upper-threshold) the threshold. The computational power of numerical P systems with lower-threshold or upper-threshold is investigated. It is proved that numerical P systems with a lower-threshold, with one membrane and linear production functions, working both in the all-parallel mode and in the one-parallel mode are universal. The result is also extended to numerical P systems with an upper-threshold, by proving the equivalence of the numerical P systems with lower- and upper-thresholds.

21 citations


Journal ArticleDOI
TL;DR: The results of this study show that quality of service (QoS) measures of such systems can be evaluated efficiently and accurately using the proposed analytical model; however, the performance results have shown that it is still necessary to explore an effective model for operational spaces.
Abstract: Wireless and mobile communication systems have evolved considerably in recent years. Seamless mobility is one of the main challenges facing mobile users in wireless and mobile systems. However, highly mobile users lead to a high number of handover failures and unnecessary handovers, due to limited resources and coverage limitations at high speeds. Traditional handover models are unable to cope with highly mobile users in such environments. This paper proposes an intelligent handover decision approach to minimize the probability of handover failures and unnecessary handovers whilst maximizing the usage of resources in highly mobile environments. The proposed approach is based on modelling the system using a Markov chain to enhance the system's performance in terms of blocking probability, mean queue length and transmission delay. The results are compared with the traditional handover model. Simulation is also employed to validate the accuracy of the proposed model. Numerical results have shown that the proposed method outperforms the traditional algorithm over a wide range of handover failures and significantly reduces the number of such failures and unnecessary handovers. The results of this study show that quality of service (QoS) measures of such systems can be evaluated efficiently and accurately using the proposed analytical model. However, the performance results have also shown that it is still necessary to explore an effective model for operational spaces. In addition, the proposed model can also be adapted to various types of networks, considering the high speed of the mobile user and the radius of the network.

16 citations


Journal ArticleDOI
TL;DR: A new consistency-based method to integrate multiplicative and fuzzy preference relations, based on a cosine similarity measure to derive a collective priority vector, which achieves the largest cosine values in all three examples.
Abstract: The group decision making (GDM) problem based on different preference relations aims to obtain a collective opinion from the various preference structures provided by a group of decision makers (DMs) or experts, who have varying backgrounds and interests in the real world. The decision process includes three steps: integrating varying preference structures, reaching a consensus opinion, and selecting the best alternative. Two major approaches, preference transformation and optimization methods, have been developed for the first step. However, transformation processes cause information loss, and existing optimization methods are too computationally complex for easy use in management practice. This study proposes a new consistency-based method to integrate multiplicative and fuzzy preference relations, based on a cosine similarity measure, to derive a collective priority vector. The basic idea is that a collective priority vector should be as similar per column as possible to a pairwise comparison matrix (PCM), in order to ensure that the group preference has the highest consistency for each decision maker. The model is computationally simple, because it can be solved using a Lagrangian approach and obtains a collective priority vector in four simple steps. The proposed method can further be used to derive the priority vector in fuzzy AHP. Using three illustrative examples, the effectiveness and simplicity of the proposed model are demonstrated by comparison with other methods. The results show that the proposed model achieves the largest cosine values in all three examples, indicating that the solution is the nearest theoretically perfectly consistent opinion for each decision maker.
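The "similar per column" idea can be illustrated with the cosine-maximization construction for a single multiplicative PCM: normalize each column to unit length, sum the unit columns, and renormalize, which maximizes the total cosine similarity between the priority vector and the columns. This is a sketch of that known construction, not the paper's full integration model for mixed multiplicative and fuzzy relations:

```python
import math

def cosine(u, v):
    """Cosine similarity between two nonnegative vectors."""
    num = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return num / (nu * nv)

def cm_priority_vector(A):
    """Priority vector maximizing total cosine similarity to the
    columns of a multiplicative pairwise comparison matrix A."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    unit = [[x / math.sqrt(sum(c * c for c in col)) for x in col] for col in cols]
    w = [sum(unit[j][i] for j in range(n)) for i in range(n)]
    s = sum(w)
    return [x / s for x in w]
```

For a perfectly consistent PCM (A[i][j] = w_i / w_j), every column is proportional to w, so this recovers w exactly and every column cosine equals 1, which is the "perfectly consistent opinion" benchmark the abstract refers to.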

Journal ArticleDOI
TL;DR: A new model based on big data analysis is proposed, which can avoid the influence brought by adjustment of network traffic distribution, increase detection accuracy and reduce the false negative rate.
Abstract: Network anomaly detection is a very important way to analyze and detect malicious behavior in networks. How to effectively detect anomalous network flows under the pressure of big data is an important area which has attracted more and more researchers' attention. In this paper, we propose a new model based on big data analysis, which can avoid the influence brought by adjustment of the network traffic distribution, increase detection accuracy and reduce the false negative rate. Simulation results reveal that, compared with the k-means, decision tree and random forest algorithms, the proposed model has a much better performance: it can achieve a detection rate of 95.4% on normal data, 98.6% on DoS attacks, 93.9% on Probe attacks, 56.1% on U2R attacks, and 77.2% on R2L attacks.

Journal ArticleDOI
TL;DR: A system for mobile manipulation is proposed, including an IBVS with an eye-in-hand camera configuration, which shows good performance in a search and rescue scenario; challenges and limitations of applying visual servoing in mobile manipulation are also discussed.
Abstract: Mobile robots that integrate visual servo control for facilitating autonomous grasping and manipulation are the focus of this paper. In view of mobility, they have wider application than traditional fixed-base robots with visual servoing. Visual servoing is widely used in mobile robot navigation. However, there are not so many reports on applying it to mobile manipulation. In this paper, challenges and limitations of applying visual servoing in mobile manipulation are discussed. Next, two classical approaches (image-based visual servoing (IBVS) and position-based visual servoing (PBVS)) are introduced along with their advantages and disadvantages. Simulations in Matlab are carried out using the two methods, and their advantages and drawbacks are illustrated and discussed. On this basis, a system for mobile manipulation is proposed, including an IBVS with an eye-in-hand camera configuration. Simulations and experiments are carried out with this robot configuration in a search and rescue scenario, which show good performance.

Journal ArticleDOI
TL;DR: The research described in this paper is aimed to develop a free, fast, and easy to use software tool for the assessment of creativity in the educational context, based on a novel approach of detecting the factors known to block creativity, like stereotypical thinking and social conformity.
Abstract: It is difficult to measure something we cannot clearly define. No wonder that, for the over 100 definitions of creativity proposed in the literature, there are almost as many scales and assessment tools. Most of these instruments have been designed for research purposes, and are difficult to apply and score, especially in the educational environment. Not to mention that they are expensive. The research described in this paper is aimed to develop a free, fast, and easy to use software tool for the assessment of creativity in the educational context. To this purpose, we have designed a new scale with 20 items, based on a novel approach focusing on detecting the factors known to block creativity, like stereotypical thinking and social conformity. The user input is collected through a web based interface, and the actual interpretation of the results is automated by means of a fuzzy logic algorithm. The proposed solution is interesting because it can be easily integrated in almost any e-learning platform, or used as a stand-alone tool for tracing the evolution of the students involved in courses for the development of creative thinking skills, and also for possible other applications.

Journal ArticleDOI
TL;DR: In the work presented in this paper, data-driven control is used to tune an Internal Model Control, and this methodology has been successfully applied to an Activated Sludge Process (ASP) based wastewater treatment.
Abstract: In the work presented in this paper, data-driven control is used to tune an Internal Model Control. Despite the fact that it may be contradictory to apply a model-free method to a model-based controller, this methodology has been successfully applied to an Activated Sludge Process (ASP) based wastewater treatment. In addition, a feedforward controller over the influent substrate concentration was also computed using virtual reference feedback tuning and applied to the same wastewater process to observe the effect on the dissolved oxygen and the substrate concentration at the effluent.

Journal ArticleDOI
TL;DR: It is established that a correlation exists between the energy consumption of the Ball and SAG mills and the East and West winding temperatures, and that there is a difference in energy consumption between mills of the same group.
Abstract: An analysis is proposed of the variables related to energy consumption in the copper concentration process, specifically ball and SAG mills. The methodology considers the analysis of large volumes of data, which allows identifying the variables of interest (tonnage, temperature and power) to arrive at an improvement plan for energy efficiency. The correct processing of the large volume of data from the copper milling process, after imputation of null, unreported and out-of-range values, integrated into a decision support system, provides clear and online information for decision making. As a result, it is established that a correlation exists between the energy consumption of the Ball and SAG mills and the East and West winding temperatures. Nevertheless, no correlation is observed between the energy consumption of the Ball and SAG mills and the feed tonnage of the SAG mill. From the experimental design, a similarity of behavior between two groups of different mills in the process lines was determined. In addition, it was determined that there is a difference in energy consumption between mills of the same group. This approach modifies the method presented in [1].

Journal ArticleDOI
TL;DR: This work introduces with full details such a method, which allows for defining the exact increments or decrements associated with the thresholds before vector migrations take place, and experimentally compares this method with several regression methods.
Abstract: Training a support vector machine (SVM) for regression (function approximation) in an incremental/decremental way consists essentially in migrating the input vectors in and out of the support vector set with specific modification of the associated thresholds. We introduce with full details such a method, which allows for defining the exact increments or decrements associated with the thresholds before vector migrations take place. Two delicate issues are especially addressed: the variation of the regularization parameter (for tuning the model performance) and the extreme situations where the support vector set becomes empty. We experimentally compare our method with several regression methods: the multilayer perceptron, two standard SVM implementations, and two models based on adaptive resonance theory.

Journal ArticleDOI
TL;DR: Wang et al. propose STEHIX (Spatio-TEmporal Hbase IndeX), a two-level index structure that enables HBase to process spatio-temporal queries.
Abstract: Compared to the last decade, technologies to gather spatio-temporal data have become much more developed and easier to use or deploy; thus tens of billions, even trillions, of sensed data records have accumulated, which poses a challenge to spatio-temporal Decision Support Systems (stDSS). Traditional databases hardly support such huge volumes and tend to become a performance bottleneck for the analysis platform. Hence, in this paper, we argue for using a NoSQL database, HBase, to replace the traditional back-end storage system. In this context, the well-studied spatio-temporal querying techniques of traditional databases should be carried over to the HBase system. However, this problem is not solved well in HBase, as many previous works tackle it only by designing the schema, i.e., the row key and column key formation for HBase, which we do not believe is an effective solution. In this paper, we address this problem at the native level of HBase and propose an index structure as a built-in component for HBase. STEHIX (Spatio-TEmporal Hbase IndeX) is adapted to the two-level architecture of HBase and suitable for processing spatio-temporal queries. It is composed of an index in the meta table (the first level) and a region index (the second level) for indexing the inner structure of HBase regions. Based on this structure, three queries (range query, kNN query and GNN query) are solved by proposing respective algorithms. For achieving load balancing and scalable kNN queries, two optimizations are also presented. We implement STEHIX and conduct experiments on a real dataset, and the results show our design outperforms a previous work in many aspects.

Journal ArticleDOI
TL;DR: A deeper analysis of social networks and of processes developing on social networks, including used lexicon, dynamics of messages related to a specific type of topic, and relationships of the processes on SNs with external events is performed.
Abstract: This paper provides a detailed and long-period statistics of the use of synonyms in posts related to specific events on social networks (SNs), an extended analysis of the correlations of the flows of the synonyms in such posts, a study of the applicability of Zipf’s law to posts related to specific events on SNs, and an analysis of the dynamics of the fluxes of synonyms in the posts. The paper also introduces the study of the distances in the phase space for the characterization of the dynamics of the word fluxes on social networks. This article is a partial report on recent research performed for a deeper analysis of social networks and of processes developing on social networks, including used lexicon, dynamics of messages related to a specific type of topic, and relationships of the processes on SNs with external events.

Journal ArticleDOI
TL;DR: This paper first suggests a means to derive implicit rating information from the transaction data of an online shopping mall and then proposes a new user similarity function to mitigate the sparsity problem.
Abstract: Many online shopping malls have implemented personalized recommendation systems to improve customer retention in an age of high competition and information overload. Sellers make use of these recommendation systems to survive high competition, and buyers utilize them to find proper product information for their own needs. However, the transaction data of most online shopping malls prevent us from using the collaborative filtering (CF) technique to recommend products, for the following two reasons: 1) explicit rating information is rarely available in the transaction data; 2) the sparsity problem usually occurs in the data, which makes it difficult to identify reliable neighbors, resulting in less effective recommendations. Therefore, this paper first suggests a means to derive implicit rating information from the transaction data of an online shopping mall and then proposes a new user similarity function to mitigate the sparsity problem. The new user similarity function computes the similarity of two users if they rated similar items, while the user similarity function of the traditional CF technique computes it only if they rated common items. Results from several experiments using an online shopping mall dataset in Korea demonstrate that our approach significantly outperforms the traditional CF technique.
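The two ingredients described above can be sketched as follows. The paper's actual implicit-rating derivation and similarity function are not given in the abstract, so both helpers here are hypothetical stand-ins: implicit ratings are proxied by per-user purchase counts scaled to [1, 5], and the similarity function extends agreement from common items to pairs of items linked by a precomputed item-similarity map:

```python
def implicit_ratings(transactions):
    """Proxy implicit ratings from purchase counts, scaled to [1, 5]
    per user (illustrative scheme, not the paper's)."""
    ratings = {}
    for user, items in transactions.items():
        counts = {}
        for it in items:
            counts[it] = counts.get(it, 0) + 1
        hi = max(counts.values())
        ratings[user] = {it: 1 + 4 * c / hi for it, c in counts.items()}
    return ratings

def similarity(ru, rv, item_sim):
    """User similarity over pairs of *similar* items, not only common
    ones: any (i, j) with item_sim > 0 lets ru[i] be matched to rv[j]."""
    num = den = 0.0
    for i, ri in ru.items():
        for j, rj in rv.items():
            s = 1.0 if i == j else item_sim.get((i, j), item_sim.get((j, i), 0.0))
            if s > 0:
                num += s * (4 - abs(ri - rj))  # agreement weighted by item similarity
                den += 4 * s
    return num / den if den else 0.0
```

With an empty item-similarity map this degenerates to the traditional common-items-only comparison; adding even one similar-item pair broadens the overlap between sparse users, which is exactly the sparsity mitigation the paper targets.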

Journal ArticleDOI
TL;DR: Two side-channel attacks against the McEliece public-key cryptosystem exploit timing differences in the Patterson decoding algorithm in order to reveal one part of the secret key: the support permutation.
Abstract: In this paper, we detail two side-channel attacks against the McEliece public-key cryptosystem. They exploit timing differences in the Patterson decoding algorithm in order to reveal one part of the secret key: the support permutation. The first attack improves two existing timing attacks and uses the correlation between two different steps of the decoding algorithm. This improvement can be deployed on all error vectors with Hamming weight smaller than a quarter of the minimum distance of the code. The second attack targets the evaluation of the error locator polynomial and succeeds against several different decoding algorithms. We also give an appropriate countermeasure.

Journal ArticleDOI
TL;DR: The modified algorithm keeps the population diversity well in the middle stage of the iterative process and improves the mean best of the algorithm and the success rate of the search.
Abstract: Aiming at the two characteristics of premature convergence in particle swarm optimization, namely that particle velocities approach 0 and that the swarm congregates, this paper learns from the annealing function of the simulated annealing algorithm and adaptively and dynamically adjusts inertia weights according to the velocity information of the particles, to avoid velocities approaching 0 prematurely. It uses the good uniformity of the Anderson chaotic mapping and applies chaos perturbation to part of the particles, based on the variance of the population's fitness, to avoid premature aggregation of the swarm. Numerical simulations of five test functions are performed, and the results are compared with several swarm intelligence heuristic algorithms. The results show that the modified algorithm keeps the population diversity well in the middle stage of the iterative process and improves the mean best of the algorithm and the success rate of the search.
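The velocity-adaptive inertia idea can be sketched on a standard benchmark. This is a minimal illustration, not the paper's algorithm: the inertia weight here is raised while the swarm's mean speed is high and annealed toward a floor as it slows (a simple annealing-flavoured rule), and the chaos-perturbation step via the Anderson mapping is omitted; all constants are illustrative:

```python
import random

def pso_sphere(dim=2, n=20, iters=200, seed=1):
    """PSO minimizing the sphere function f(x) = sum(x_i^2), with an
    inertia weight adapted from the swarm's mean velocity magnitude."""
    rng = random.Random(seed)
    f = lambda x: sum(c * c for c in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                  # personal best positions
    g = min(P, key=f)[:]                   # global best position
    for _ in range(iters):
        mean_v = sum(abs(c) for vel in V for c in vel) / (n * dim)
        w = 0.4 + 0.3 * min(1.0, mean_v)   # adaptive inertia: high while moving
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

best, val = pso_sphere()
```

Keeping the inertia weight tied to the measured velocities prevents it from decaying while particles are still exploring, which is the mechanism the paper uses against velocities collapsing to 0 too early.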

Journal ArticleDOI
TL;DR: AudiT is presented, a multi-domain SDN policy verifier that identifies whether an origin policy is enforced by foreign domains; its application is illustrated using an example considering multiple SDN networks.
Abstract: Programmable networks like SDN allow administrators to program the network infrastructure according to service demand and custom-defined policies. Network policies are interpreted by the centralized controller to define actions and rules to process the network traffic on devices that belong to a single domain. However, actual networks are multi-domain, where several domains are interconnected. Because SDN controllers in a domain can neither define nor monitor policies in other domains, network administrators cannot ensure that their own policies (origin policies) are being enforced by the domains not directly managed by them (i.e. foreign domains). We present AudiT, a multi-domain SDN policy verifier that identifies whether an origin policy is enforced by foreign domains. AudiT comprises (1) a model for network topology, policies, and flows, (2) an audit protocol to gather information about the actions performed by network devices to carry the flows of interest, (3) a validation engine that takes that information and detects security policy violations, and (4) an extension to the OpenFlow protocol to enable external auditing. This paper presents our approach and illustrates its application using an example considering multiple SDN networks.

Journal ArticleDOI
TL;DR: This article states and solves the maximum flow in directed (1, n) planar dynamic networks in the stationary case.
Abstract: A nontrivial extension of the maximal static flow problem is the maximal dynamic flow model, where the transit time to traverse an arc is taken into consideration. If the network parameters, such as capacities and arc traversal times, are constant over time, then the dynamic flow problem is said to be stationary. Research on flow in planar static networks is motivated by the fact that more efficient algorithms can be developed by exploiting the planar structure of the graph. This article states and solves the maximum flow problem in directed (1, n) planar dynamic networks in the stationary case.
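For reference, the static maximum-flow problem that the dynamic model extends can be solved with the Edmonds-Karp variant of Ford-Fulkerson. The sketch below ignores both planarity and transit times, which are exactly what the article exploits, and simply illustrates the base problem:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict cap[u][v] -> capacity."""
    # Build a residual graph, adding zero-capacity reverse arcs.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left
        # Collect the path arcs and augment by the bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug
```

For planar networks, specialized algorithms (e.g. uppermost-path approaches) beat this general-purpose method, which is the motivation the abstract mentions.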

Journal ArticleDOI
TL;DR: An accurate system model for the FCBTM is established, in which a novel three-dimensional adaptive fuzzy PID controller is designed; simulation results show that the proposed adaptive fuzzy control method is not only robust to external disturbance but also has better dynamic and steady-state characteristics than traditional ones.
Abstract: The process of tension control for material testing using the Flexible Circuit Board testing machine (FCBTM) is characterized by multiple variables, nonlinearity, time delays and time variation. In order to ensure the tension precision, the stability of the servo motor's speed and the reliability of test results, this paper establishes an accurate system model for the FCBTM, in which a novel three-dimensional adaptive fuzzy PID controller is designed. The simulation results show that the proposed adaptive fuzzy control method is not only robust to external disturbance but also has better dynamic and steady-state characteristics than traditional ones.

Journal ArticleDOI
TL;DR: Results of optimizing the normalized mutual information and normalized cross-correlation similarity metrics validated the efficacy and precision of the proposed method on a freely available medical image database.
Abstract: Image registration (IR) is the process of geometric overlaying or alignment of two or more 2D/3D images of the same scene (unimodal registration), taken at the same or different times, from different angles, and/or by different image acquisition systems (multimodal registration). Technically, image registration implies complex optimization of different parameters, performed at the local and/or global level. Local optimization methods often fail because the functions of the involved metrics with respect to the transformation parameters are generally nonconvex and irregular, so global methods are required, at least at the beginning of the procedure. This paper presents a new evolutionary and bio-inspired robust approach to IR, the Bacterial Foraging Optimization Algorithm (BFOA), adapted for PET-CT multimodal and magnetic resonance image rigid registration. Results of optimizing the normalized mutual information and normalized cross-correlation similarity metrics validated the efficacy and precision of the proposed method on a freely available medical image database.
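Of the two similarity metrics being optimized, normalized cross-correlation is the simpler to state; a minimal sketch for equally sized grayscale images follows (the NMI metric and the BFOA optimizer itself are omitted):

```python
import math

def ncc(img_a, img_b):
    """Normalized cross-correlation of two equally sized grayscale
    images given as 2D lists; returns a value in [-1, 1]."""
    xs = [p for row in img_a for p in row]
    ys = [p for row in img_b for p in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0
```

A registration algorithm treats the transformation parameters as the search variables and this metric (evaluated on the transformed moving image against the fixed image) as the fitness to maximize.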

Journal ArticleDOI
TL;DR: A routing method is presented that is based on information about nodes' social behavior and their social relations in a sparse network structure; it takes advantage of friendship relationships between nodes and uses historic information to create groups of friends for each node, which are used in the buffer-management and forwarding phases of routing.
Abstract: This article presents a routing algorithm for Delay and Disruption Tolerant Networks (DTN). The main idea of this work is a routing method based on information about nodes' social behavior and their social relations in the sparse structure of the network. The algorithm takes advantage of friendship relationships between nodes and uses historic information to create groups of friends for each node, which are used in the buffer-management and forwarding phases of routing. Besides the routing method, mechanisms for collecting and exchanging maintenance information between nodes are described. The algorithm was tested using The ONE simulation tool, designed especially for DTN scenarios, and compared with several popular solutions.
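The idea of deriving per-node friend groups from contact history can be sketched as follows; the accumulated-contact-time threshold and the record shape are illustrative assumptions, not the article's actual friendship metric:

```python
from collections import defaultdict

def friend_groups(contacts, threshold):
    """contacts: (node_a, node_b, contact_duration) encounter records.
    Pairs whose accumulated contact time reaches the threshold are
    considered friends; returns each node's group of friends."""
    total = defaultdict(float)
    for a, b, duration in contacts:
        total[tuple(sorted((a, b)))] += duration  # order-insensitive pair key
    groups = defaultdict(set)
    for (a, b), duration in total.items():
        if duration >= threshold:
            groups[a].add(b)
            groups[b].add(a)
    return groups
```

During forwarding, a carrier would then prefer handing a message to an encountered node whose friend group contains the destination.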

Journal ArticleDOI
TL;DR: This work presents a supervisory control strategy for Networked Control Systems (NCSs) that incorporates the delay dynamics within the fuzzy rules based upon a real-time hierarchical scheduling strategy.
Abstract: This work presents a supervisory control strategy for Networked Control Systems (NCSs). It shows the identification and control of the plant using fuzzy theory. The fuzzy model incorporates the delay dynamics within the fuzzy rules based upon a real-time hierarchical scheduling strategy. A hierarchical scheduling Priority Exchange algorithm is used, based upon a codesign strategy following the mutual correlation among control and network algorithms, in order to bound time delays. A magnetic levitation system is presented as a case study.

Journal ArticleDOI
TL;DR: A framework is proposed that includes overcoming the big data problem of online comments using the efficient online-LDA approach, selecting meaningful topics from the imbalanced data, and summarizing the opinion of comments with high precision and recall; it can gain a significant performance improvement on opinion summarization.
Abstract: Customer reviews and comments on web pages are important information in our daily life. For example, we prefer to choose a hotel with positive comments from previous customers. As the huge amounts of such information demonstrate the characteristics of big data, they place a heavy burden on the assimilation of customer-contributed opinions. To overcome this problem, we study an efficient opinion summarization approach for massive sets of user reviews and comments associated with an online resource, summarizing the opinions into two categories, i.e., positive and negative. In this paper, we propose a framework that includes: (1) overcoming the big data problem of online comments using the efficient online-LDA approach; (2) selecting meaningful topics from the imbalanced data; (3) summarizing the opinion of comments with high precision and recall. This framework differs from much of the previous work in that the topics are pre-defined and then selected for better opinion summarization. To evaluate the proposed framework, we perform experiments on a dataset of hotel reviews chosen for the variety of topics it contains. The results show that our framework can gain a significant performance improvement on opinion summarization.
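As a toy illustration of topic-wise opinion summarization: the paper discovers topics with online-LDA and learns sentiment from data, whereas here both the topics and the sentiment lexicons are hard-coded assumptions purely for demonstration:

```python
# Hypothetical hand-made lexicons; the paper derives topics via online-LDA.
TOPICS = {"room": {"room", "bed", "bathroom"},
          "staff": {"staff", "reception", "service"}}
POSITIVE = {"great", "clean", "friendly", "comfortable"}
NEGATIVE = {"dirty", "rude", "noisy", "broken"}

def summarize(comments):
    """Tally positive/negative opinion words per pre-defined topic."""
    summary = {t: {"positive": 0, "negative": 0} for t in TOPICS}
    for comment in comments:
        words = set(comment.lower().split())
        for topic, keywords in TOPICS.items():
            if words & keywords:  # the comment mentions this topic
                summary[topic]["positive"] += len(words & POSITIVE)
                summary[topic]["negative"] += len(words & NEGATIVE)
    return summary
```

The per-topic positive/negative counts are the two-category summary the abstract describes; replacing the keyword match with LDA topic assignments is what makes the approach scale to big data.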

Journal ArticleDOI
TL;DR: The authors overview the most recent Prostate MR Image Segmentation challenge results and provide insights on the problem of automated prostate segmentation in T2-weighted MRI scans by comparing the best automatic segmentation algorithms and applying them to the 2D prostate segmentation case.
Abstract: Prostate cancer is the second most frequent tumor amongst men. Statistics show that biopsy reveals only 70-80% of clinical cancer cases. The multiparametric magnetic resonance imaging (MRI) technique comes into play and is used to help determine the location at which to perform a biopsy. With the aim of automating the biopsy localization, prostate segmentation has to be performed in magnetic resonance images, and computer image analysis methods play the key role here. The problem of automated prostate magnetic resonance (MR) image segmentation is burdened by the fact that MRI signal intensity is not standardized: the field of view and image appearance are largely determined by the acquisition protocol, field strength, coil profile and scanner type. The authors overview the most recent Prostate MR Image Segmentation challenge results and provide insights on the problem of automated prostate segmentation in T2-weighted MRI scans by comparing the best automatic segmentation algorithms and applying them to the 2D prostate segmentation case. The most important benefit of this research will be for medical doctors involved in the management of this cancer.