
Showing papers in "Studies in computational intelligence in 2019"


Book ChapterDOI
TL;DR: The ARTI architecture, as discussed in this paper, is a reference architecture for holonic manufacturing systems; it is based on the PROSA architecture, whose first version was carried forward by Jo Wyns and Hendrik Van Brussel.
Abstract: This paper discusses the history of PROSA, a reference architecture for Holonic Manufacturing Systems. PROSA’s journey started with a meeting in Leuven during which the ‘spaghetti diagram’ – created by Jan Detand – triggered a discussion that resulted in the first version of PROSA. Jo Wyns and Hendrik Van Brussel recognized its potential, and their efforts resulted in Jo’s doctoral thesis and the widely known paper on PROSA in Computers in Industry. Building on this solid foundation, the PROSA team has added the D-MAS architectural pattern, has differentiated between decision-making Intelligent Agents and reality-reflecting Intelligent Beings, and has expanded the range of applications well beyond manufacturing. Today, the PROSA architecture has been refined and retitled as the ARTI – Activity Resource Type Instance – architecture. This paper discusses this journey, emphasizing to what extent ARTI and PROSA reflect Scientific Laws of the Artificial, as envisioned by Nobel Prize winner Herbert Simon. In view of the Industry 4.0 ambitions, PROSA’s and ARTI’s features and properties become the translation of the inevitable implications of bounded rationality. In other words, deviating from the reference architecture (beyond rephrasing its terminology) will have inescapable drawbacks as regards scalability, scope-ability, viability and longevity. These reference architectures will be inescapable in the same way that a mechanical engineer cannot afford to ignore Newton’s Laws (e.g. gravity) or an energy engineer cannot ignore Carnot’s Principles (e.g. the impossibility of a perpetuum mobile). But perhaps Einstein’s theories are the more accurate metaphor: Newton’s Laws are adequate at low velocities, while relativity theory becomes essential closer to the speed of light. Here, the ambitions of Industry 4.0 bring the operating point closer to the speed of light with their demands for scalability, viability, adaptability, openness and interoperability. Industry 4.0 ambitions render bounded rationality and its implications inevitable.

24 citations


Book ChapterDOI
TL;DR: An inventory management model based on the maximum and minimum quantities of products to be ordered by the company, together with an EOQ model with an adjusted reorder point, which produced a verified increase in business sales of 5% during the first 11 days.
Abstract: The goal of this study is to create an inventory management model able to estimate the control of a business’s perishable products by using probabilistic distributions. The problem arises because store and mini-market owners lack a clear approach to maintaining an inventory in optimal condition, especially for perishable products, which have at most a week to be sold. To solve this problem, we used algorithms suited to handling large amounts of data, such as Monte Carlo simulation, so that we could use probabilistic distributions to determine the economic order quantity (EOQ) of perishable products based on weekly demand. As a result, we obtained an inventory management model based on the maximum and minimum quantities of products to be ordered by the company, together with an EOQ model with an adjusted reorder point, which produced a verified increase in business sales of 5% during the first 11 days.
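
The chapter's fitted demand model and cost figures are not reproduced above, so the following is only a minimal sketch of the idea: Monte Carlo samples of weekly demand feed the classic EOQ formula and a service-level reorder point. The normal daily-demand model and every parameter value below are illustrative assumptions, not the paper's data.

```python
import math
import random

# Hypothetical parameters -- the chapter does not publish its figures.
ORDER_COST = 15.0        # fixed cost per order (S)
HOLDING_COST = 0.8       # holding cost per unit per week (H)
LEAD_TIME_DAYS = 2       # replenishment lead time
SHELF_LIFE_DAYS = 7      # perishables must sell within a week

def simulate_weekly_demand(mean_daily=40, sd_daily=12, n_weeks=10_000):
    """Monte Carlo sample of total weekly demand from a normal daily model."""
    return [
        sum(max(0.0, random.gauss(mean_daily, sd_daily)) for _ in range(7))
        for _ in range(n_weeks)
    ]

def eoq(weekly_demand_mean):
    """Classic economic order quantity for the sampled demand rate."""
    return math.sqrt(2 * weekly_demand_mean * ORDER_COST / HOLDING_COST)

def reorder_point(samples, service_level=0.95):
    """Demand during lead time at the chosen service level (adjusted ROP)."""
    lead_demand = sorted(s / 7 * LEAD_TIME_DAYS for s in samples)
    return lead_demand[int(service_level * len(lead_demand))]

samples = simulate_weekly_demand()
mean_demand = sum(samples) / len(samples)
q = min(eoq(mean_demand), mean_demand)  # never order past one week of demand
print(f"EOQ ~ {q:.0f} units, reorder point ~ {reorder_point(samples):.0f} units")
```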

19 citations


Book ChapterDOI
TL;DR: The main goal of this chapter is to provide an overview of the fundamentals and advances in hyperspectral image enhancement, denoising and restoration, classical classification techniques, and the most recently popular classification algorithms.
Abstract: Hyperspectral remote sensing has received considerable interest in recent years for a variety of industrial applications, including urban mapping, precision agriculture, environmental monitoring and military surveillance, as well as computer vision applications. It can capture hyperspectral images (HSIs) carrying a large amount of land-cover information. With the increasing industrial demand for HSI, there is a pressing need for more efficient and effective methods and data analysis techniques that can deal with the vast data volume of hyperspectral imagery. The main goal of this chapter is to provide an overview of the fundamentals and advances in hyperspectral images. Hyperspectral image enhancement, denoising and restoration, classical classification techniques and the most recently popular classification algorithms are discussed in detail. In addition, the standard hyperspectral datasets used for research purposes are covered in this chapter.

18 citations


Book ChapterDOI
TL;DR: The main objective is to evaluate the number of people who arrive at a public transport service station in order to minimize the monetary losses caused by people defecting from the station's waiting line.
Abstract: This study presents a proposal to determine solutions to queueing theory models through the use of simulation. The main objective is to evaluate the number of people who arrive at a public transport service station in order to minimize the monetary losses caused by people defecting from the station's waiting line. To evaluate the model, we used tools that simulate random values drawn from probability distributions, such as the log-normal and binomial distributions.
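
As a rough illustration of the approach, the sketch below simulates one day at a single-server station with log-normal inter-arrival times and Bernoulli balking, so the daily number of defections behaves binomially. Every parameter (service time, balking probability, fare) is invented for the example; the chapter's fitted values are not given above.

```python
import random

# Hypothetical parameters; the chapter does not publish its fitted values.
MU, SIGMA = 1.0, 0.5      # log-normal inter-arrival times (minutes)
SERVICE_TIME = 2.5        # deterministic service time (minutes)
BALK_PROB = 0.25          # defection probability when someone is in service
FARE = 0.50               # revenue lost per defecting passenger
HORIZON = 480.0           # one working day (minutes)

def simulate_day():
    """Single-server queue: log-normal arrivals, Bernoulli balking."""
    t, server_free_at, lost = 0.0, 0.0, 0
    while True:
        t += random.lognormvariate(MU, SIGMA)        # next passenger arrives
        if t >= HORIZON:
            break
        busy = server_free_at > t                    # someone already in service
        if busy and random.random() < BALK_PROB:     # passenger defects
            lost += 1
        else:                                        # passenger joins and is served
            server_free_at = max(server_free_at, t) + SERVICE_TIME
    return lost

days = [simulate_day() for _ in range(1_000)]
avg_lost = sum(days) / len(days)
print(f"avg defections/day ~ {avg_lost:.1f}, avg loss ~ ${avg_lost * FARE:.2f}")
```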

16 citations


Book ChapterDOI
TL;DR: The study showed that classification accuracy is higher when both verb and noun class features are taken into consideration; the same trend was observed when the training data contained only noun class features, i.e., noun class features dominate the verb class features.
Abstract: Preposition sense disambiguation has huge significance in natural language processing tasks such as machine translation. Transferring the various senses of a simple preposition in a source language to a set of senses in a target language is highly complex due to these many-to-many relationships, particularly in English-Malayalam machine translation. In order to reduce this complexity in the transfer of senses, in this paper we used linguistic information such as the noun class and verb class features of the noun and verb correlated to the target simple preposition. The effect of these linguistic features on the proper classification of the senses (postpositions in Malayalam) is studied with the help of several machine learning algorithms. The study showed that classification accuracy is higher when both verb and noun class features are taken into consideration. In linguistics, the major factor that decides the sense of the preposition is the noun in the prepositional phrase. The same trend was observed in the study when the training data contained only noun class features, i.e., noun class features dominate the verb class features.
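
A minimal sketch of the classification setup the abstract describes, assuming one-hot encoded noun/verb class features and a Naive Bayes classifier (one of several algorithms such a study might compare; the abstract does not fix one). The feature classes and sense labels below are illustrative, not the paper's actual Malayalam sense inventory.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training instances for the preposition 'in': each example carries the
# semantic class of the governed noun and of the correlated verb.
train = [
    ({"noun_class": "place",  "verb_class": "motion"},   "SENSE_LOCATIVE"),
    ({"noun_class": "time",   "verb_class": "state"},    "SENSE_TEMPORAL"),
    ({"noun_class": "place",  "verb_class": "state"},    "SENSE_LOCATIVE"),
    ({"noun_class": "manner", "verb_class": "activity"}, "SENSE_MANNER"),
]
X, y = zip(*train)

# One-hot encode the categorical features and fit a simple classifier.
model = make_pipeline(DictVectorizer(), MultinomialNB())
model.fit(X, y)

test = {"noun_class": "place", "verb_class": "motion"}
print(model.predict([test])[0])   # -> SENSE_LOCATIVE
```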

12 citations


Book ChapterDOI
TL;DR: This chapter proposes an intermediate language, shows how ShEx and SHACL can be converted to it, and enumerates some challenges and trends that the authors foresee with regard to RDF validation.
Abstract: The RDF data model forms a cornerstone of the Semantic Web technology stack. Although there have been different proposals for RDF serialization syntaxes, the underlying simple data model enables great flexibility, which allows it to be successfully employed in many different scenarios and to form the basis on which other technologies are developed. In order to apply an RDF-based approach in practice, it is necessary to communicate the structure of the data that is being stored or represented. Data quality is of paramount importance for the acceptance of RDF as a data representation language, and it must be enabled by tools that can check whether some data conforms to a specific structure. There have been several recent proposals for RDF validation languages, like ShEx and SHACL. In this chapter, we describe both proposals and enumerate some challenges and trends that we foresee with regard to RDF validation. We devote more space to what we consider one of the main challenges: comparing ShEx and SHACL and understanding their underlying foundations. To that end, we propose an intermediate language and show how ShEx and SHACL can be converted to it.
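
For a feel of SHACL-style validation in practice, here is a hedged sketch using the pySHACL library (an existing validator, unrelated to the chapter's proposed intermediate language). The shapes and data are toy examples.

```python
# pip install pyshacl rdflib
from pyshacl import validate

data = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Person ; ex:name "Alice" ; ex:age "thirty" .
"""

shapes = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ; sh:datatype xsd:string ] ;
    sh:property [ sh:path ex:age  ; sh:minCount 1 ; sh:datatype xsd:integer ] .
"""

conforms, _, report = validate(
    data, shacl_graph=shapes,
    data_graph_format="turtle", shacl_graph_format="turtle",
)
print(conforms)   # False: ex:age holds a string where an integer is required
print(report)
```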

11 citations


Book ChapterDOI
TL;DR: This study presents the use of probability distributions and systems simulation theory applied to real-life problems, with the aim of giving researchers and students a guide that facilitates their application in investigative work.
Abstract: This study presents the use of probability distributions and the theory of systems simulation applied to real-life problems, with the aim of giving researchers and students a guide that facilitates their application in investigative work. Project-based learning (PBL) was carried out through a classroom project in order to help students recognize, develop and feasibly apply the different types of probability distributions to real-life problems.

11 citations


Book ChapterDOI
TL;DR: The sequential use of random numbers to sample the values of random variables makes it possible to solve mathematical problems with the Monte Carlo method, which models stochastic or deterministic parameters based on random sampling, as discussed by the authors.
Abstract: The sequential use of random numbers to sample the values of random variables makes it possible to solve mathematical problems with the Monte Carlo method, which models stochastic or deterministic parameters based on random sampling. Justifying the use of this method requires knowledge of concepts such as the weak law of large numbers and the central limit theorem.
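
A minimal sketch of the Monte Carlo method in action: estimating pi as the mean of Bernoulli indicator variables. The weak law of large numbers guarantees convergence of the sample mean, and the central limit theorem gives the roughly 1/sqrt(n) error rate.

```python
import random

def estimate_pi(n):
    """Monte Carlo estimate of pi: fraction of random points inside the
    unit quarter-circle, scaled by 4."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    return 4 * hits / n

# Law of large numbers: the estimate converges to pi as n grows;
# by the central limit theorem the error shrinks roughly like 1/sqrt(n).
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: pi ~ {estimate_pi(n):.4f}")
```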

8 citations


Book ChapterDOI
TL;DR: This chapter proposes a random seeding LFSR-based truly random number generator (TRNG) which is not only of low complexity, like the aforementioned PRNGs, but is also ‘truly random’ in nature.
Abstract: Rapid developments in the field of cryptography and hardware security have increased the need for random number generators which are not only of low complexity but are also secure to the point of being undeterminable. A random number generator is a part of most security systems, so it should be simple and area efficient. Many modern-day pseudorandom number generators (PRNGs) make use of linear feedback shift registers (LFSRs). Though these PRNGs are of low complexity, they fall short when it comes to being secure, since they are not truly random in nature. Thus, in this chapter we propose a random seeding LFSR-based truly random number generator (TRNG) which is not only of low complexity, like the aforementioned PRNGs, but is also ‘truly random’ in nature. Our proposed design generates an n-bit truly random number sequence that can be used for a variety of hardware security based applications. Based on our proposed n-bit TRNG design, we illustrate an example which generates 16-bit truly random sequences, and a detailed analysis is shown based on National Institute of Standards and Technology (NIST) tests to highlight its randomness.
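
The hardware design itself cannot be reconstructed from the abstract, but the LFSR building block is standard. Below is a software sketch of a 16-bit maximal-length Fibonacci LFSR; in the proposed TRNG the seed would come from a true-entropy source, which is exactly the part this sketch omits.

```python
def lfsr16(seed, n_bits):
    """Fibonacci LFSR with maximal-length taps 16, 14, 13, 11
    (x^16 + x^14 + x^13 + x^11 + 1); yields one pseudorandom bit per step."""
    state = seed & 0xFFFF
    assert state != 0, "the all-zero state is a fixed point"
    for _ in range(n_bits):
        # Feedback bit is the XOR of the tapped register positions.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        yield state & 1
        state = (state >> 1) | (bit << 15)

# In the chapter's TRNG the seed would come from a physical entropy source;
# here we use a fixed value, so the output below is merely pseudorandom.
seq = list(lfsr16(seed=0xACE1, n_bits=16))
print("16-bit sequence:", "".join(map(str, seq)))
```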

8 citations


Book ChapterDOI
TL;DR: This chapter presents a transfer learning based approach for scene classification using a pre-trained Convolutional Neural Network as a feature extractor and a dimensionality reduction technique known as principal component analysis (PCA) on the feature vector.
Abstract: Categorization of scene images is considered a challenging prospect due to the fact that different classes of scene images often share similar image statistics. This chapter presents a transfer learning based approach for scene classification. A pre-trained Convolutional Neural Network (CNN) is used as a feature extractor for the images. The pre-trained network, along with classifiers such as Support Vector Machines (SVM) or Multi Layer Perceptron (MLP), is used to classify the images. In addition, the effect of single-plane images, such as RGB2Gray, SVD decolorized and modified SVD decolorized images, is analysed based on classification accuracy, class-wise precision, recall, F1-score and equal error rate (EER). The classification experiment for SVM was also done using a dimensionality reduction technique known as principal component analysis (PCA) on the feature vector. Comparing the results of models trained on RGB images with those trained on grayscale images shows that the difference is very small. These grayscale images were capable of retaining the required shape and texture information from the original RGB images and were sufficient to categorize the classes of the given scene images.
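
A hedged sketch of the pipeline the abstract outlines: a pre-trained CNN as a frozen feature extractor, PCA on the descriptors, then an SVM. The VGG16 backbone, image counts and class count are assumptions for illustration; the chapter does not specify them above.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Pre-trained CNN used purely as a fixed feature extractor (transfer learning);
# VGG16 is an assumption -- the chapter does not fix a specific backbone here.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data; in practice these come from the scene dataset.
X_train = np.random.rand(20, 224, 224, 3) * 255
y_train = np.random.randint(0, 4, size=20)   # 4 hypothetical scene classes

feats = extract_features(X_train)
# PCA reduces the 512-d CNN descriptor before the SVM, as in the chapter.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
clf.fit(feats, y_train)
print("train accuracy:", clf.score(feats, y_train))
```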

7 citations


Book ChapterDOI
TL;DR: This paper investigates a real-world application of the free energy distance between nodes of a graph by proposing an improved extension of the existing fraud detection system named APATE, which relies on a new way of computing the free energy distance based on paths of increasing length, and scales to large, sparse graphs.
Abstract: This paper investigates a real-world application of the free energy distance between nodes of a graph [14, 20] by proposing an improved extension of the existing Fraud Detection System named APATE [36]. It relies on a new way of computing the free energy distance based on paths of increasing length, which scales to large, sparse graphs. This new approach is assessed on a real-world large-scale e-commerce payment transactions dataset obtained from a major Belgian credit card issuer. Our results show that the free-energy based approach reduces the computation time by one half while maintaining state-of-the-art performance in terms of Precision@100 on fraudulent card prediction.

Book ChapterDOI
TL;DR: This paper identifies risk-aware mean-field-type optimal strategies for the decision-makers in blockchain-based distributed power networks with several different entities: investors, consumers, prosumers, producers and miners.
Abstract: In this paper we examine mean-field-type games in blockchain-based distributed power networks with several different entities: investors, consumers, prosumers, producers and miners. Under a simple model of jump-diffusion and regime switching processes, we identify risk-aware mean-field-type optimal strategies for the decision-makers.

Book ChapterDOI
TL;DR: A way of driving theory into practice through the basic processes around systems simulation, as well as their implications for the analysis and implementation of information technology, is suggested.
Abstract: In the 21st century, education is undergoing a series of transformations both inside and outside the classroom. Despite changes in education, knowing and understanding the teaching-learning process is key to creating effective pedagogical action. In this process, the most significant teacher task is to accompany the student’s learning. This accompaniment takes place through a concise and legible exposition of concepts and projects carried out in the classroom; that is to say, a way of driving theory into practice through the basic processes around systems simulation, as well as their implications for the analysis and implementation of information technology.

Book ChapterDOI
TL;DR: The outcome of the proposed approach provides promising results in the identification and verification of spot diseases in plants.
Abstract: Disease identification in plants has proved to be beneficial for agro industries, research, and the environment. Due to the era of industrialization, vegetation is shrinking. Early detection of diseases by processing the image of a leaf can be rewarding and helpful in making our environment healthier and greener. Data clustering is an unsupervised learning technique in which pattern recognition is used extensively to identify diseases in plants and their main causes. The objective is divided into two components: first, the identification of the symptoms on the basis of the primary cause using K-means; second, validating the clusters using Elitist Teaching-Learning-Based Optimization (ETLBO), and finally comparing existing models with the proposed model. Implementation involves relevant data acquisition followed by preprocessing of the images, then a feature extraction stage to obtain the best results in the subsequent classification stage. The K-means and ETLBO algorithms are used for the identification and clustering of diseases in plants. The implementation shows that the suggested technique demonstrates better results on the basis of Histogram of Gradients (HoG) features. The chapter is organized as follows. In the introduction section, we briefly explain the existing and proposed methods. In the proposed approach section, different methods are discussed in the training and testing phases. The next section describes the algorithms used in the proposed approach, followed by the experimental setup section. At the end, we discuss the analysis and comparison of experimental results. The outcome of the proposed approach provides promising results in the identification and verification of spot diseases in plants.
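
A minimal sketch of the HoG-plus-K-means portion of the pipeline, using scikit-image and scikit-learn. The ETLBO cluster-validation step and the chapter's preprocessing are omitted, and all images and parameters below are placeholders.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.cluster import KMeans

def leaf_descriptor(image_rgb):
    """HoG descriptor of a leaf image, standing in for the chapter's
    feature-extraction stage."""
    gray = image_rgb.mean(axis=2)            # simple grayscale conversion
    gray = resize(gray, (128, 128))
    return hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

# Placeholder images; in practice these are the acquired leaf photographs.
images = [np.random.rand(256, 256, 3) for _ in range(12)]
X = np.array([leaf_descriptor(im) for im in images])

# Plain K-means over HoG features; the chapter additionally validates the
# clustering with ETLBO, which this sketch does not implement.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster labels:", km.labels_)
```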

Book ChapterDOI
TL;DR: This is the first implementation of machine comprehension models in code-mixed Hindi using deep neural networks, and a new architecture is proposed in this work by combining two of the best-performing networks.
Abstract: The domain of artificial intelligence revolutionizes the way in which humans interact with machines. Machine comprehension is one of the latest fields under natural language processing that holds the capability for huge improvement in artificial intelligence. The machine comprehension technique gives systems the ability to understand a passage given by a user and answer questions asked about it, an evolved version of the traditional question answering technique. Machine comprehension falls under the category of natural language understanding, and it exposes the amount of understanding required for a model to find the area of interest in a passage. The scope for the implementation of this technique is very high in India due to the availability of different regional languages. This work focuses on the incorporation of the machine comprehension technique in code-mixed Hindi. A detailed comparative study of the dataset's performance is carried out across several deep learning approaches, including End-to-End Memory Networks, Dynamic Memory Networks, Recurrent Neural Networks, Long Short-Term Memory networks and Gated Recurrent Units. The best-suited model for the dataset used is identified from this comparison. A new architecture is proposed in this work by combining two of the best-performing networks. To improve the model with respect to the various ways of answering questions from a passage, the natural language processing technique of distributed word representation was applied to the best model identified; the model was improved by applying pre-trained fastText embeddings for word representations. This is the first implementation of machine comprehension models in code-mixed Hindi using deep neural networks. The work analyses the performance of all five models implemented, which will be helpful for future research on the machine comprehension technique in code-mixed Indian languages.

Book ChapterDOI
TL;DR: In this paper, the authors analyze the use of pseudo-random numbers in mixed (linear) congruential methods, multiplicative congruential methods, and additive congruential methods.
Abstract: Pseudo-random numbers are the essential basis of simulation. Usually, all randomness involved in the model is obtained from a random number generator that produces a succession of values that are supposed to be realizations of a sequence of independent and identically distributed uniform U(0, 1) random variables. To be more explicit about the use of pseudo-random numbers, we analyze concepts such as the mixed (linear) congruential method, the multiplicative congruential method, and the additive congruential method.
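
The three generators named above are each a few lines long. The sketch below shows them with commonly used (but here arbitrary) constants a, c and m; the chapter does not prescribe specific values.

```python
def mixed_congruential(seed, a=1103515245, c=12345, m=2**31):
    """Mixed (linear) congruential method: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                      # pseudo-random U(0, 1) value

def multiplicative_congruential(seed, a=16807, m=2**31 - 1):
    """Multiplicative congruential method: x_{n+1} = (a*x_n) mod m."""
    x = seed
    while True:
        x = (a * x) % m
        yield x / m

def additive_congruential(x0, x1, m=2**31):
    """Additive congruential (Fibonacci-style) method:
    x_{n+1} = (x_n + x_{n-1}) mod m."""
    a, b = x0, x1
    while True:
        a, b = b, (a + b) % m
        yield b / m

gen = mixed_congruential(seed=42)
print([round(next(gen), 4) for _ in range(5)])
```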

Book ChapterDOI
TL;DR: The present study on word sense disambiguation in Malayalam aims to understand the causes of lexical ambiguity and proposes two approaches to resolve it: a cluster approach and a deep learning approach.
Abstract: The present study on word sense disambiguation (WSD) in Malayalam aims to understand the causes of lexical ambiguity and to find ways to resolve it. It has been understood that homonymy and polysemy are the sources of ambiguity; here we are concerned with ambiguity due to homonymy. To resolve the ambiguity we propose two approaches: a cluster approach and a deep learning approach. A certain number of ambiguous words were collected together with their occurrences in sentences. The cluster approach is a supervised approach involving POS tagging, lemmatization and sense annotation. The context words are identified for each sense of the experimental ambiguous words, and a collocational dictionary is prepared on this basis. WSD is implemented using the collocational dictionary. The neural network approach is based on deep learning. It is a corpus-driven approach in which the information necessary for disambiguating homonymous words is extracted from the corpus itself. The quantity of the corpus used for WSD decides the accuracy of this approach.

Book ChapterDOI
TL;DR: The authors contrast English periphrastic causative constructions with their equivalent Hindi causative forms, pointing out where the two languages differ and where they can be easily matched, and develop an MT system for translating the former into the latter.
Abstract: Machine Translation is an application of Computational Linguistics which is primarily concerned with designing software that translates a text from one natural language to another. This is a complex process, since processing natural language requires linguistic knowledge of the source language and the target language from word level to sentence level, and of their contrasting features at all these levels. This is a difficult task because all natural languages are highly expressive: a single word can have many meanings, and many words can have a single meaning. So finding equivalences between the source and the target languages is a challenging task. Linguistic units like prepositions and auxiliary verbs are polysemous. The English auxiliary verbs like ‘be’, ‘have’ and ‘do’ and the other semi-auxiliary verbs like ‘get’, ‘let’, ‘make’ and ‘help’, etc., are expressive. They act as helping verbs, as verbs functioning as tense, aspect and modality markers, as copulative verbs or as causative verbs. The syntax and semantics of a sentence are based on the main verb. The concept of causation is part of the semantics of the verb itself. Causation means that one named entity (NP1) makes somebody else do something or causes another named entity (NP2) to be in a certain state. Semantically, causative verbs refer to a causative situation which has two components: (a) the causing situation or the antecedent, and (b) the caused situation or the consequent. These two combine to make a causative situation. Three types of causatives are identified in natural languages: morphological causatives, lexical causatives and periphrastic causatives. This study mainly focuses on resolving the issues related to the translation of English periphrastic causative sentences with the auxiliary verbs ‘have’, ‘get’, and ‘make’ into Hindi. A contrastive study of the two languages on causative formation has been made as a first step in this direction. The next step is to develop an MT system for the translation of English periphrastic causative constructions into their equivalent Hindi causative forms. In English, causative meaning is realized by the use of auxiliary verbs rather than by inflection, whereas Hindi makes use of inflectional suffixes to realize the causative meaning; so causativization in English is different from that in Hindi. As already noted, English shows periphrastic causation whereas Hindi shows morphological causation. In Hindi, all causative verb forms show inflection, Person, Number and Gender (PNG) marking and specific causative functions. English takes advantage of a set of verbs like ‘have’, ‘make’, ‘get’, ‘need’ and ‘help’ to bring out the causative meaning. At the time of translation from English to Hindi, the selection of the causative verb form is very important, and the selection of the translation equivalent in Hindi depends on many factors. Hindi has two causative inflections: the direct causative and the indirect causative forms. In Hindi, the causative verbs show all the other characteristic features of transitive verbs; they also indicate tense and PNG inflections like any other transitive verb. Transferring the causative information in the source language (English) to the target language (Hindi) is a real challenge. Levin's classification of English verbs comes in handy to solve certain problems which occur while transferring periphrastic causative constructions from English to Hindi.
In this paper we have elaborated on the contrastive nature of causative constructions in English and Hindi, pointing out where they differ and where they can be easily matched. Linguistic rules are written to map causative constructions from English to Hindi, and a system is developed to implement these rules and convert the causative constructions from English to Hindi. After collecting the different types of periphrastic causative sentences from the source language, we found their translation equivalents in Hindi on the basis of the main verb in the source language (SL). We identified 42 different causative verb forms in Hindi. We then prepared a set of separate linguistic rules for the transfer of the causative constructions of the source language into the target language. These transfer rules are utilized to develop a Rule-Based Machine Translation (RBMT) system for translating periphrastic causative sentences from English to Hindi. The output of this newly developed system has been verified by human evaluators. Translation of the different types of causative sentences gives commendable results (more than 80% accuracy).
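
A toy sketch of what such a transfer rule table might look like. The auxiliary/verb pairs and transliterated Hindi forms below are illustrative placeholders, not the 42 causative verb forms or the actual rules developed in the paper.

```python
# Minimal sketch of the rule-based transfer step, with a toy rule table.
# The verb forms below are illustrative placeholders only.
TRANSFER_RULES = {
    # (causative auxiliary, SL main verb) -> transliterated Hindi causative
    ("get",  "repair"): "thiik karvaanaa",   # indirect causative (-vaa-)
    ("have", "build"):  "banvaanaa",
    ("make", "laugh"):  "hansaanaa",         # direct causative (-aa-)
}

def transfer(aux, main_verb):
    """Look up the Hindi causative verb for an English periphrastic pattern."""
    try:
        return TRANSFER_RULES[(aux, main_verb)]
    except KeyError:
        return None   # fall through to a default rule or human post-editing

# 'I got the car repaired' -> main verb 'repair' under causative 'get'
print(transfer("get", "repair"))   # -> thiik karvaanaa
```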

Book ChapterDOI
TL;DR: This paper proposes and implements an architecture for fractal image compression that reduces the encoding time to nanoseconds; it has been implemented on a Xilinx Spartan-6 FPGA board and tested on medical images.
Abstract: Medical images contain voluminous data. Transmission of medical images over the Internet with low bandwidth and fast transmission is a major challenge. Image compression techniques address this issue using cost-effective methods. Fractal image compression is a technique used in digital image compression to increase the transmission rate over low bandwidth. It is based on the fact that some parts of an image always resemble other parts of the same image, a property called self-similarity. Fractal image compression has a long encoding time, on the order of seconds, and a fast decoding time. In this paper, we propose and implement an architecture for fractal image compression. The architecture has been implemented on a Xilinx Spartan-6 FPGA board and tested on medical images. The proposed implementation reduces the encoding time to nanoseconds.
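
A software sketch of the self-similarity search at the heart of fractal encoding: each range block is matched against downsampled domain blocks, the brute-force loop an FPGA architecture would parallelise. Contrast/brightness fitting and block isometries, part of any full encoder, are omitted here.

```python
import numpy as np

def encode_fractal(img, range_size=4):
    """Brute-force fractal encoding: for every range block, find the domain
    block (twice the size, downsampled) that matches it best."""
    h, w = img.shape
    d = range_size * 2
    # Downsample every candidate domain block by 2x averaging.
    domains = []
    for y in range(0, h - d + 1, d):
        for x in range(0, w - d + 1, d):
            block = img[y:y+d, x:x+d]
            small = block.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
            domains.append(((y, x), small))
    code = []
    for y in range(0, h, range_size):
        for x in range(0, w, range_size):
            r = img[y:y+range_size, x:x+range_size]
            # Pick the domain with the smallest squared error against r.
            best = min(domains, key=lambda dm: np.sum((dm[1] - r) ** 2))
            code.append(((y, x), best[0]))   # store domain address per range
    return code

img = np.random.rand(32, 32)        # stand-in for a medical image tile
print(len(encode_fractal(img)), "range-to-domain mappings")
```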

Book ChapterDOI
TL;DR: Simulation is the process of designing and developing a computerized model of a system or process and conducting experiments with it, in order to understand the system's behavior or to evaluate strategies under which the system can be operated, based on probabilistic models that generate random variables and yield significant results.
Abstract: Simulation is the process of designing and developing a computerized model of a system or process and conducting experiments with it, in order to understand the system's behavior or to evaluate strategies under which the system can be operated. It is based on probabilistic models that generate random variables and obtain significant results through methods such as the inverse transform, accept-reject, composition and convolution methods.
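
Two of the named methods in a few lines each: inverse-transform sampling of an exponential variate, and an accept-reject sampler for the half-normal density using the exponential density as envelope. The acceptance probability exp(-(x-1)^2/2) is the standard textbook pairing, not necessarily the chapter's own example.

```python
import math
import random

def exponential_inverse(lam):
    """Inverse transform method: if U ~ U(0,1), then -ln(1-U)/lam is
    exponentially distributed with rate lam."""
    u = random.random()
    return -math.log(1.0 - u) / lam

def half_normal_accept_reject():
    """Accept-reject method: sample the half-normal density using the
    exponential(1) density as envelope (constant c = sqrt(2e/pi))."""
    while True:
        x = exponential_inverse(1.0)                 # candidate from envelope
        if random.random() <= math.exp(-(x - 1.0) ** 2 / 2.0):
            return x                                 # accept

# The sample mean of exponential(2) draws should approach 1/2.
print(sum(exponential_inverse(2.0) for _ in range(100_000)) / 100_000)
```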

Book ChapterDOI
TL;DR: A novel rodent avoidance test is presented, based on tasks of instrumental learning: pedal-pressing in an operant box results in a reward, which is either a piece of food in a feeder or an escape platform (footshock-avoidance behavior).
Abstract: This paper presents a novel rodent avoidance test. We have developed a specialized device and procedures that expand the possibilities for exploring the processes of learning and memory in a psychophysiological experiment. The device consists of a current-stimulating electrode-platform and custom software that controls and records real-time experimental protocols and reconstructs animal movement paths. The device can be used to carry out typical footshock-avoidance tests, such as passive, active, modified active and pedal-press avoidance tasks. It can also be utilized in studies of prosocial behavior, including cooperation, competition, emotional contagion and empathy. This novel footshock-avoidance test procedure allows flexible current-stimulation settings. In our work, we have used a slow-rising current. A test animal can choose between the current rise and time-out intervals as a signal for action in footshock-avoidance tasks; this represents a choice between escape and avoidance. The method can be used to explore individual differences in decision-making and in the choice of avoidance strategies. It has been shown previously that a behavioral act, for example pedal-pressing, is ensured by motivation-dependent brain activity (avoidance or approach). We have created an experimental design based on tasks of instrumental learning: pedal-pressing in an operant box results in a reward, which is either a piece of food in a feeder (food-acquisition behavior) or an escape platform (footshock-avoidance behavior). Data recording and analysis were performed using custom software; the open source Accord.NET Framework was used for real-time object detection and tracking.

Book ChapterDOI
TL;DR: In this paper, it is shown that the dynamics of spectral parameters of heart rate variability (HRV) can be used as an indicator of the system mismatch observed when functional systems with contradictory characteristics are actualized simultaneously.
Abstract: Variability in beat-to-beat heart activity reflects the dynamics of heart-brain interactions. From the standpoint of the system evolutionary theory, any behaviour is based on the simultaneous actualization of functional systems formed at different stages of phylo- and ontogenesis. Each functional system comprises neurons and other body cells whose activity contributes to achieving an adaptive outcome for the whole organism. In this study we hypothesized that the dynamics of spectral parameters of heart rate variability (HRV) can be used as an indicator of the system mismatch observed when functional systems with contradictory characteristics are actualized simultaneously. We presented 4–11-year-old children (N = 34) with a set of moral dilemmas describing situations where an in-group member achieved optional benefits by acting unfairly and endangering the lives of out-group members. The results showed that the LF/HF ratio of HRV was higher in children with developed moral attitudes of fairness toward out-groups, as compared to children who showed preference for in-group members despite the unfair outcome for the out-group. Thus, the system mismatch in situations involving a moral conflict is shown to be reflected in the dynamics of heart activity.
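
For readers unfamiliar with the LF/HF measure, here is a sketch of how such a ratio is commonly computed from RR intervals: even resampling of the tachogram, a Welch PSD, and integration over the standard 0.04-0.15 Hz LF and 0.15-0.40 Hz HF bands. This is a generic recipe, not the authors' exact processing chain.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """LF/HF ratio of an HRV recording: resample the RR tachogram evenly,
    take a Welch PSD, and integrate the standard LF and HF bands."""
    t = np.cumsum(rr_intervals_s)                       # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tachogram = np.interp(grid, t, rr_intervals_s)      # evenly sampled RR
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    return np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[hf_band], f[hf_band])

# Synthetic RR series around 0.8 s with mixed LF (0.1 Hz) and HF (0.25 Hz)
# oscillations, purely to exercise the function.
n = np.arange(600)
beat_time = 0.8 * n
rr = 0.8 + 0.02 * np.sin(2 * np.pi * 0.1 * beat_time) \
         + 0.02 * np.sin(2 * np.pi * 0.25 * beat_time)
print(f"LF/HF ~ {lf_hf_ratio(rr):.2f}")
```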

Book ChapterDOI
TL;DR: In this article, the benefits of incorporating interval-valued fuzzy sets into the Bousi-Prolog system are analysed, and a syntax, declarative semantics and implementation for this extension are presented and formalised.
Abstract: In this paper we analyse the benefits of incorporating interval-valued fuzzy sets into the Bousi-Prolog system. A syntax, declarative semantics and implementation for this extension are presented and formalised. We show, by means of potential applications, that fuzzy logic programming frameworks enhanced with interval-valued fuzzy sets can correctly work together with lexical resources and ontologies in order to improve their capabilities for knowledge representation and reasoning.
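
A minimal sketch of the interval-valued idea itself, outside any Prolog machinery: memberships are intervals rather than single degrees, and connectives act on the interval endpoints. The min/max (Godel) connectives below are one standard choice and not necessarily Bousi-Prolog's.

```python
# Interval-valued fuzzy sets: each element's membership is an interval
# [lower, upper] with 0 <= lower <= upper <= 1, modelling uncertainty
# about the membership degree itself.
Interval = tuple  # (lower, upper)

def iv_and(a: Interval, b: Interval) -> Interval:
    """Interval extension of fuzzy AND with the min connective."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def iv_or(a: Interval, b: Interval) -> Interval:
    """Interval extension of fuzzy OR with the max connective."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def iv_not(a: Interval) -> Interval:
    """Standard negation applied endpoint-wise (endpoints swap)."""
    return (1 - a[1], 1 - a[0])

# 'tall' and 'athletic' as interval-valued memberships for one individual.
tall, athletic = (0.6, 0.8), (0.4, 0.9)
print("tall AND athletic:", iv_and(tall, athletic))   # -> (0.4, 0.8)
print("NOT tall:", iv_not(tall))                      # -> (0.2, 0.4)
```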