
Showing papers in "Intelligent Decision Technologies in 2015"


Proceedings ArticleDOI
TL;DR: This paper proposes methods to transform a camouflaged netlist into its security-equivalent logic-locked netlist and vice versa, enabling the effectiveness of the two techniques to be assessed and compared using the same set of analysis algorithms and tools.
Abstract: The globalization of IC design has resulted in security vulnerabilities and trust issues such as piracy, overbuilding, reverse engineering and Hardware Trojans. Logic locking and IC camouflaging are two techniques that help thwart piracy and reverse engineering attacks by making modifications at the netlist level and the layout level, respectively. In this paper, we analyze the similarities and differences between logic locking and IC camouflaging. We present methods to transform a camouflaged netlist into its security-equivalent logic-locked netlist and vice versa. The proposed transformations enable the switch from one defense technique to the other, and allow the effectiveness of the two techniques to be assessed and compared using the same set of analysis algorithms and tools.
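The camouflaged-to-locked direction can be pictured as replacing each camouflaged cell with a key-controlled selection over the same candidate functions. The sketch below is a minimal illustration of that idea, not the paper's actual transformation tool; the candidate set and the representation are assumptions.

```python
# Minimal sketch: a 2-input camouflaged cell that could be NAND, NOR or
# XOR is modeled as a key-controlled selection over those candidates.
# Under the correct key the locked cell matches the hidden function, so
# the locked netlist is functionally equivalent to the camouflaged one.

CANDIDATES = {
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
}

def locked_cell(a, b, key):
    """Key-programmable replacement for a camouflaged 2-input cell."""
    fns = list(CANDIDATES.values())
    return fns[key % len(fns)](a, b)

# The correct key for a cell that secretly implements NAND:
correct_key = list(CANDIDATES).index("NAND")
for a in (0, 1):
    for b in (0, 1):
        assert locked_cell(a, b, correct_key) == 1 - (a & b)
```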

39 citations


Proceedings ArticleDOI
TL;DR: An automatic test case generation algorithm that can be used to provide ECG records for testing purposes, based on ranges of parameters related to ECG components such as amplitude, duration, slope, and physical characteristics.
Abstract: The use of biomedical sensors, be they attached to or embedded inside a human body, to monitor various physiological parameters is increasing at a significant rate due to continued advances in miniaturization and materials. Testing and verification of the algorithms used in processing the physiological parameters of concern is essential, given the sensitivity of their usage. Simulation is a technique that is widely used to achieve this. However, proper test cases are required in order to carry out the simulation process. The ElectroCardioGram (ECG) is one of the most commonly used and studied physiological signals, yet algorithms that handle recorded ECG data are not being tested and verified thoroughly due to the lack of proper test cases. This paper presents an automatic test case generation algorithm that can be used to provide ECG records for testing purposes. The algorithm is based on ranges of parameters related to ECG components, such as amplitude, duration, slope, and physical characteristics. Hence, the required shape of the ECG can be controlled in order to cover a wide range of scenarios during testing. We provide an implementation of the algorithm and illustrate it on two different ECG sensor algorithms.
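As a rough illustration of parameter-driven ECG synthesis (not the authors' implementation), the sketch below builds each beat as a sum of Gaussian bumps for the P, Q, R, S and T components and draws amplitude, timing and width parameters from configurable ranges; all numeric ranges are illustrative assumptions, not clinical reference values.

```python
import numpy as np

# (center in s, amplitude in mV, width in s) ranges per ECG component.
PARAM_RANGES = {
    "P": ((0.15, 0.20), (0.10, 0.25), (0.02, 0.04)),
    "Q": ((0.28, 0.30), (-0.20, -0.05), (0.01, 0.02)),
    "R": ((0.30, 0.32), (0.90, 1.60), (0.01, 0.02)),
    "S": ((0.33, 0.35), (-0.30, -0.10), (0.01, 0.02)),
    "T": ((0.50, 0.60), (0.15, 0.40), (0.04, 0.07)),
}

def random_beat(rng, fs=500, beat_len=0.8):
    """One synthetic beat: a sum of Gaussian bumps, one per component."""
    t = np.arange(0.0, beat_len, 1.0 / fs)
    sig = np.zeros_like(t)
    for (c_lo, c_hi), (a_lo, a_hi), (w_lo, w_hi) in PARAM_RANGES.values():
        c = rng.uniform(c_lo, c_hi)   # component timing
        a = rng.uniform(a_lo, a_hi)   # component amplitude
        w = rng.uniform(w_lo, w_hi)   # component width (duration/slope)
        sig += a * np.exp(-((t - c) ** 2) / (2 * w ** 2))
    return t, sig

rng = np.random.default_rng(0)
test_cases = [random_beat(rng)[1] for _ in range(100)]  # 100 ECG records
```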

22 citations


Journal ArticleDOI
TL;DR: The findings from the survey indicate that artificial intelligence and signal-processing-based techniques are more efficient than traditional financial forecasting techniques and appear well suited for the task of financial forecasting.
Abstract: Financial forecasting is an area of research which has been attracting a lot of attention recently from practitioners in the field of artificial intelligence. Apart from the economic benefits of accurate financial prediction, the inherent nonlinearities in financial data make analyzing and forecasting an extremely challenging task. This paper presents a survey of more than 100 articles published over eight decades, from 1933 to 2013, in an attempt to identify the developments and trends in the field of financial forecasting, with a focus on applications of artificial intelligence. The findings from the survey indicate that artificial intelligence and signal-processing-based techniques are more efficient than traditional financial forecasting techniques and appear well suited for the task. Some of the issues that need addressing are discussed in brief. A novel technique for selecting the input dataset size to ensure the best possible forecast accuracy is also presented. The results confirm the effectiveness of the proposed technique in improving the accuracy of forecasts.

22 citations


Book ChapterDOI
TL;DR: Experimentation conducted in a controlled simulation environment shows that student-pilots spend too much time looking at inboard instruments inside the cockpit; preliminary results show that different notifications alter the visual gaze pattern.
Abstract: French ab initio military pilots are trained to operate a new generation of aircraft equipped with glass-cockpit avionics (Rafale, A400M). However, gaze-scanning instruction can still be improved and remains a topic of great interest. Eye-tracking devices can record trainee gaze patterns in order to compare them with correct ones. This paper presents experimentation conducted in a controlled simulation environment where trainee behaviors were analyzed and notifications given in real time. In line with other research in civil aviation, this experimentation shows that student-pilots spend too much time looking at inboard instruments (inside the cockpit). In addition, preliminary results show that different notifications alter the visual gaze pattern. Finally, we discuss future strategies to support more efficient pilot training through real-time gaze recording and analysis.

20 citations


Journal ArticleDOI
TL;DR: A GA-optimized technical indicator decision tree-SVM based intelligent recommender system is proposed, which can learn patterns from stock price movements and then recommend an appropriate one-day-ahead trading strategy.
Abstract: Generating consistent profits from stock markets is considered to be a challenging task, especially due to the nonlinear nature of stock price movements. Traders need to have a deep understanding of market behavior patterns in order to trade successfully. In this study, a GA-optimized technical indicator decision tree-SVM based intelligent recommender system is proposed, which can learn patterns from stock price movements and then recommend an appropriate one-day-ahead trading strategy. The recommender system takes the task of identifying stock price patterns on itself, allowing even a lay-user, who is not well versed in stock market behavior, to trade profitably on a consistent basis. The efficacy of the proposed system is validated on four different stocks belonging to two different stock markets (India and UK) over three different time frames for each stock. Performance of the proposed system is validated using fifteen different measures and compared with traditional technical-indicator-based trading and the traditional buy-and-hold strategy. Results indicate that the proposed system is capable of generating profits for all the stocks in both of the stock markets considered.
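A stripped-down sketch of the indicator-to-signal pipeline is given below: it trains an SVM on a few common technical indicators to emit a one-day-ahead buy/sell signal. The GA layer that tunes the decision-tree and indicator parameters in the paper is omitted, and the indicators, window lengths and synthetic price series are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVC

def make_features(close: pd.Series) -> pd.DataFrame:
    return pd.DataFrame({
        "sma_ratio": close / close.rolling(10).mean(),    # trend indicator
        "momentum": close.pct_change(5),                  # 5-day momentum
        "volatility": close.pct_change().rolling(10).std(),
    })

# Synthetic random-walk closing prices stand in for real market data.
close = pd.Series(np.cumsum(np.random.default_rng(1).normal(0, 1, 500)) + 100)
X = make_features(close)
y = (close.shift(-1) > close).astype(int)   # 1 = price rises tomorrow

X, y = X.iloc[:-1], y.iloc[:-1]             # last day has no next-day label
mask = X.notna().all(axis=1)                # drop rolling-window warm-up
X, y = X[mask], y[mask]

clf = SVC(kernel="rbf").fit(X[:-100], y[:-100])   # train on early data
signals = clf.predict(X[-100:])                   # 1 -> buy, 0 -> sell
```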

17 citations


Journal ArticleDOI
TL;DR: A decision-aided system based upon multi-objective programming is proposed for addressing the optimal trade-off among costs, risks and sustainability of a waste management system from a long-term perspective.
Abstract: Nowadays, sustainable development has been drawing increasing public attention due to environmental, economic and social concerns. Pursuing a more competitive and sustainable society requires greater concentration on waste management. However, waste management is a worldwide challenge, because several interacting factors (e.g., costs, risks and equity) and the characteristics of different regions have to be taken into consideration simultaneously. The optimal solution for one factor is usually not a good choice for another. Therefore, it is highly desirable to develop a sophisticated systems analysis tool for managing those interacting factors, as well as the characteristics of different waste management systems, in an efficient and sustainable manner. In this paper, a decision-aided system based upon multi-objective programming is proposed for addressing the optimal trade-off among costs, risks and sustainability of waste management systems from a long-term perspective. A theoretical framework for sustainable waste management is first established, and the mathematical model as well as the computation method is then formulated accordingly. To present the application of this model, an illustrative calculation is also performed; the model computations in this paper are carried out in the professional optimization software Lingo 13.0.
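To make the trade-off concrete, the sketch below solves a toy weighted-sum reduction of such a multi-objective allocation with scipy (the paper itself formulates a richer model and solves it in Lingo 13.0); all coefficients, capacities and the weighting are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([30.0, 80.0, 55.0])   # $/tonne: landfill, incinerate, recycle
risk = np.array([0.9, 0.5, 0.2])      # relative risk score per tonne

w = 0.6                                # weight on cost vs. (1 - w) on risk
c = w * cost / cost.max() + (1 - w) * risk / risk.max()  # normalized blend

A_eq = [[1.0, 1.0, 1.0]]               # all generated waste must be treated
b_eq = [1000.0]                        # tonnes generated
bounds = [(0, 600), (0, 400), (0, 500)]  # per-facility capacity limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # optimal tonnage per treatment option for this weighting
```

Sweeping the weight w between 0 and 1 traces out the cost-risk trade-off curve, which is the kind of decision support the abstract describes.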

17 citations


Journal ArticleDOI
TL;DR: This paper discusses the theory and implementation of the framework of comparators, covering the design of a network of compound-object comparators as well as a practical implementation using the framework.
Abstract: This paper discusses the theory and implementation of the framework of comparators. It provides a detailed description of how to design a network of compound-object comparators, as well as a practical implementation using the framework. Two case studies show the spectrum of possible usages of the framework. The paper also covers the fundamentals of other frameworks and theories used in this work.

12 citations


Proceedings ArticleDOI
TL;DR: A flexible heterogeneous multi-core architecture for an embedded LTE MIMO-OFDM system using a Field Programmable Gate Array (FPGA) is proposed, and the performance of different multi-core configurations is evaluated on the Xilinx Zynq FPGA platform.
Abstract: The fast development of high-speed railway (HSR), as a high-mobility intelligent transportation system (ITS), and the growing demand for broadband services among HSR users introduce new challenges to wireless communication systems. The 4G Long Term Evolution (LTE) standard has been widely used to satisfy the needs of HSR communication systems. The key part of the 4G LTE standard is Orthogonal Frequency Division Multiplexing (OFDM) modulation. In order to achieve reliable communication and meet the high-performance-processing and low-energy-consumption demands of HSR, we propose a flexible heterogeneous multi-core architecture for an embedded LTE MIMO-OFDM system using a Field Programmable Gate Array (FPGA). In this paper, different multi-core configurations of the LTE MIMO-OFDM system are explored and their performance is evaluated on the Xilinx Zynq FPGA platform. The consumed area, power, and execution times of the different configurations are analyzed and compared in order to propose the most efficient architecture for this application.

11 citations


Journal ArticleDOI
TL;DR: The research approach has been found beneficial to both sides: innovation capability has increased remarkably for both partners. The model combines technology expertise and business knowledge through collaboration among the enterprises, research personnel and educational personnel.
Abstract: In this paper, we present our approach and experiences in enhancing innovation and innovation capability with cognitive infocommunications. The experiences were gathered in collaborative applied research projects between the enterprises and the research team. Our model also integrates applied research and education. It combines technology expertise and business knowledge through collaboration involving the enterprises, research personnel and educational personnel. The model emphasizes proactively taking into account the needs of current and future customers. The research approach has been found beneficial to both sides: innovation capability has increased remarkably for both partners. The collaborative applied research projects have produced tens of new business opportunities, and many of them have already been put to use in the enterprises. Cognitive infocommunications plays a remarkable role in many of these innovations. The developed concept is planned to be extended to a wider set of users in the future via gamification, where game design elements and characteristics are used in non-game contexts.

Proceedings ArticleDOI
TL;DR: This work proposes using obfuscation methods to facilitate power- and path-delay-analysis-based HT detection, including increasing the proportion of HT dynamic power in the circuit's total dynamic power while the circuit is being obfuscated.
Abstract: Integrated Circuit (IC) piracy and malicious alteration, known as Hardware Trojans (HTs), are two important threats which may arise in untrusted foundries. Functionality obfuscation has been proposed against IP/IC piracy. Obfuscation can also offer opportunities to defeat HT insertion, because the HT designer cannot understand the functionality of the obfuscated ICs. In addition, various HT detection methods have been proposed based on conventional functional or structural tests and side-channel analysis. Conventional functional or structural tests are inefficient if the HT is not completely activated. HT detection by side-channel analysis must contend with Process Variation (PV) and Environmental Variation (EV); an HT is detectable only if its effect is significant relative to PV and EV. In this work, we propose to use obfuscation methods to facilitate power- and path-delay-analysis-based HT detection. Since shorter paths exhibit less PV than longer paths, the first approach is to generate shorter paths for nets that belong only to long paths while the circuit is being obfuscated. The second approach is to increase the proportion of HT dynamic power in the total dynamic power of the circuit while the circuit is being obfuscated; the success of power-analysis-based HT detection methods increases with this proportion.

Proceedings ArticleDOI
TL;DR: Bias Temperature Instability analysis for the SRAM write driver shows that as technology scales down, BTI impact on write delay increases; the 22nm design can degrade up to 1.9x more than the 45nm design at nominal operation conditions.
Abstract: Bias Temperature Instability (BTI) has become a major reliability challenge for nano-scaled devices. This paper presents a BTI analysis for the SRAM write driver. Its evaluation metric, the write delay (WD), is analyzed for various supply voltages and temperatures for three technology nodes, i.e., 45nm, 32nm, and 22nm. The results show that as technology scales down, the BTI impact on write delay (i.e., its average and +/−3σ variations) increases; the 22nm design can degrade up to 1.9x more than the 45nm design at nominal operating conditions. In addition, the results show that an increment in supply voltage (i.e., from −10% Vdd to +10% Vdd) increases the relative write delay during the operational lifetime. Furthermore, the results show that a temperature increment accelerates the BTI-induced write delay significantly; while at 298K the degradation is up to 4.7%, it increases to 41.4% at 398K for the 22nm technology node.

Proceedings ArticleDOI
TL;DR: A new method of high level test generation based on the concept of test groups to prove the correctness of a part of system functionality is proposed, which can be regarded as a generalization of the logic level test pair approach for identifying fault-free wires in gate-level networks.
Abstract: A new method of high-level test generation based on the concept of test groups to prove the correctness of a part of system functionality is proposed. High-level faults of any multiplicity are assumed to be present in the system; however, there is no need to enumerate them. Unlike known approaches, we do not target the faults as test objectives. The goal of using test groups is to extend, step by step, the fault-free core of the system by exploiting knowledge about already successfully tested parts of the system. In case the proof fails, fault diagnosis follows. To cope with the complexity of multiple fault-masking mechanisms, high-level decision diagrams (HLDDs) are used. The proposed method can be regarded as a generalization of the logic-level test-pair approach for identifying fault-free wires in gate-level networks. Preliminary experimental results and a discussion of the complexity of the method are presented.

Journal ArticleDOI
TL;DR: The paper problematizes engineering design by dividing knowledge into the categories technically constructed (explicit) and socially constructed (tacit) and contributes the assumed effects of a perspective shift that could guide the development of computational tools.
Abstract: A new vision in manufacturing is to develop product-service integrated value solutions. Today, few firms have fully realized this vision because they are not able to support the reasoning in the early stages of design. The purpose of this paper is to discuss the cognitive challenge engineers face when replacing the core product rationale with value logic. The paper problematizes engineering design by dividing knowledge into the categories of technically constructed (explicit) and socially constructed (tacit). In doing so, this study outlines the assumed effects of a perspective shift that could guide the development of computational tools.

Proceedings ArticleDOI
TL;DR: The proposed TPG can reduce switching activity by an average of 84% for test-per-clock and 72% for test-per-scan BIST, and various properties of the proposed technique and the design methodology are presented.
Abstract: This paper presents the use of a switch-tail ring counter as a low-transition test pattern generator (TPG) to reduce power consumption in test-per-clock and test-per-scan built-in self-test (BIST) applications. The proposed TPG is implemented by dividing the register in test mode into many switch-tail ring counters. These counters are fed with a seed in such a way as to produce a single transition between consecutive test patterns. Each ring counter is also triggered by a clock and a control signal such that not all counters are triggered on each clock, in order to reduce the switching activity at the inputs of the circuit-under-test (CUT). The proposed technique can be used for test-per-clock and test-per-scan BIST. Various properties of the proposed technique and the design methodology are presented in this paper. Experimental results for the ISCAS'85 (test-per-clock) and ISCAS'89 (test-per-scan) benchmark circuits show that the proposed design can reduce the switching activity by an average of 84% and 72%, respectively.
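The single-transition property of a switch-tail (Johnson) ring counter can be checked with a few lines of simulation; the behavioral sketch below is illustrative only and models one counter, not the paper's full clock-gated multi-counter TPG.

```python
# A switch-tail (Johnson) ring counter: each clock shifts the register
# and feeds back the inverted last bit, so consecutive patterns differ
# in exactly one bit -- the low-transition property the TPG exploits.
def johnson_counter(n_bits, n_cycles):
    state = [0] * n_bits
    for _ in range(n_cycles):
        yield tuple(state)
        state = [1 - state[-1]] + state[:-1]   # shift in inverted tail

patterns = list(johnson_counter(4, 8))
# 0000 -> 1000 -> 1100 -> 1110 -> 1111 -> 0111 -> 0011 -> 0001
for prev, cur in zip(patterns, patterns[1:]):
    assert sum(p != c for p, c in zip(prev, cur)) == 1  # one transition
```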

Journal ArticleDOI
TL;DR: Novel approaches for handling multi-valued and continuous attributes adequate for the naive Bayes classifier and the decision tree classifier are proposed, and the resulting hybrid approach significantly increases recommendation accuracy compared to collaborative filtering while reducing the risk of over-specialization.
Abstract: Machine learning algorithms are often used in content-based recommender systems since a recommendation task can naturally be reduced to a classification problem: a recommender needs to learn a classifier for a given user where the learning examples are characteristics of items previously liked/bought/seen by the user. However, multi-valued and continuous attributes require special approaches for classifier implementation as they can significantly influence classifier accuracy. In this paper we propose novel approaches for handling multi-valued and continuous attributes adequate for the naive Bayes classifier and the decision tree classifier, and tune them for content-based movie recommendation. We evaluate the performance of the resulting approaches using the MovieLens data set enriched with movie details retrieved from the Internet Movie Database. Our empirical results demonstrate that the naive Bayes classifier is more suitable for content-based movie recommendation than the decision tree algorithm. In addition, the naive Bayes classifier achieves better results with smart discretization of continuous attributes compared to the approach which models continuous attributes with a Gaussian distribution. Finally, we combine our best-performing content-based algorithm with the k-means clustering algorithm typically used for collaborative filtering, and evaluate the performance of the resulting hybrid approach on a movie recommendation task. The experimental results clearly show that the hybrid approach significantly increases recommendation accuracy compared to collaborative filtering while reducing the risk of over-specialization, which is a typical problem of content-based approaches.
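The Gaussian-versus-discretization comparison the abstract describes can be reproduced in miniature with scikit-learn, as in the sketch below; the synthetic skewed features stand in for the MovieLens attributes, and the bin count and dataset are assumptions.

```python
# Contrast two ways of handling a continuous attribute in naive Bayes:
# modeling it with a Gaussian vs. discretizing it into bins first.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB, GaussianNB
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3)) ** 2          # skewed continuous features
y = (X.sum(axis=1) > 3).astype(int)

gauss_acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()

X_binned = KBinsDiscretizer(n_bins=5, encode="ordinal",
                            strategy="quantile").fit_transform(X).astype(int)
binned_acc = cross_val_score(CategoricalNB(), X_binned, y, cv=5).mean()
print(gauss_acc, binned_acc)   # discretization often helps on skewed data
```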

Journal ArticleDOI
TL;DR: The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking.
Abstract: One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed, with a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) networks as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated by means of standard direct-marketing datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted on standard direct-marketing datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard homogeneous and heterogeneous ensemble methods: the standard homogeneous methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on the standard direct-marketing datasets.
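A compact scikit-learn rendering of the homogeneous bagged-SVM ensemble is sketched below, with a synthetic dataset in place of the paper's direct-marketing data; note the constructor argument is `estimator` in scikit-learn 1.2+ (`base_estimator` in older releases).

```python
# Bagging many RBF-kernel SVMs trained on bootstrap subsamples, compared
# against a single SVM via cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

single = SVC(kernel="rbf")
bagged = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=15,
                           max_samples=0.8, random_state=0)

print(cross_val_score(single, X, y, cv=5).mean())
print(cross_val_score(bagged, X, y, cv=5).mean())
```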

Proceedings ArticleDOI
TL;DR: This study first surveys the memory-access optimization profilers, then provides a detailed comparison of data-communication profilers and highlights their strong and weak aspects, and makes recommendations for improving existing data- communication profilers or designing future ones.
Abstract: With the advent of technology, multi-core architectures are prevalent in embedded, general-purpose as well as high-performance computing. Efficient utilization of these platforms in an architecture agnostic way is an extremely challenging task. Hence, profiling tools are essential for programmers to optimize the applications for these architectures and understand the bottlenecks. Typical bottlenecks are irregular memory-access patterns and data-communication among cores which may reduce anticipated performance improvement. In this study, we first survey the memory-access optimization profilers. Thereafter, we provide a detailed comparison of data-communication profilers and highlight their strong and weak aspects. Finally, recommendations for improving existing data-communication profilers and/or designing future ones are thoroughly discussed.

Journal ArticleDOI
TL;DR: Gini-index-based image complementing, which utilises the relative frequency of occurrence of pixel intensities in digital images, is proposed; it lends support to decision-making in image analysis and image understanding.
Abstract: The focus of this article is on Gini-index-based image complementing, which utilises the relative frequency of occurrence of pixel intensities in digital images. The outcome of the proposed approach is that the transformation function mapping an image to its complement is usually not a straight line, since it is based on the pixels and image information. The proposed approach lends support to decision-making in image analysis and image understanding. A practical application of the proposed approach to image complementing is given in terms of the analysis of medical images.
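The paper's exact complementing transform is not reproduced here; the sketch below only computes the ingredient it builds on, the Gini index of the relative frequency of pixel intensities, alongside the straight-line complement (255 − p) that the proposed non-linear mapping departs from.

```python
import numpy as np

def gini_index(image: np.ndarray) -> float:
    """Gini impurity of the intensity distribution: 1 - sum(p_i^2)."""
    hist = np.bincount(image.ravel(), minlength=256)
    p = hist / hist.sum()
    return 1.0 - np.sum(p ** 2)

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(gini_index(img))          # near 1 - 1/256 for uniform intensities
linear_complement = 255 - img   # the straight-line baseline transform
```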

Proceedings ArticleDOI
TL;DR: This paper surveys the different solution methodologies built so far and presents the authors' SoC verification platform using HW emulation and co-modeling testbench technologies.
Abstract: Hardware-assisted verification, or emulation, delivers the capacity and performance for extremely fast, full System-on-Chip (SoC) testing. Emulation enables longer test cases and more tests to be run in less time. In doing so, it allows more design requirements to be covered while more bugs are uncovered. However, emulation is no longer only about performance and capacity. The landscape is shifting beyond these two fundamental benefits in terms of all that can be accomplished virtually with an emulator. As a result, leading electronic-design companies want to take advantage of the benefits of both megahertz verification and a fully virtual, block-to-SoC-level accelerated verification flow. In this paper, we survey the different solution methodologies built so far and present our SoC verification platform using HW emulation and co-modeling testbench technologies. High-performance, high-capacity hardware-assisted emulators and co-modeling testbench technology can speed up the verification of any System-on-Chip by up to 10,000x.

Proceedings ArticleDOI
Yervant Zorian1
TL;DR: This tutorial covers hierarchical test trends and solutions based on IEEE test standards, such as IEEE 1500, 1687 and 1149.1, along with intelligent infrastructure IP to help achieve the above advantages.
Abstract: Today's IoT design teams use heterogeneous IP blocks from numerous sources at several levels of hierarchy. To ensure manufacturing quality and field reliability for such IoTs, DFT designers adopt new test solutions across heterogeneous IP, which are meant to enable concurrent test, power reduction during test, DFT closure, isolated debug and diagnosis, pattern porting, calibration, and uniform access. This tutorial covers hierarchical test trends and solutions based on IEEE test standards, such as IEEE 1500, 1687 and 1149.1, along with intelligent infrastructure IP to help achieve the above advantages. This keynote, besides discussing the key trends and challenges of IoT, will cover solutions to handle the wide range of requirements for robustness. It will also address post-silicon analysis and yield-optimization trade-offs using volume diagnostics and failure-coordinate calculation. With the proliferation of IoT, this keynote will cover the infrastructure IP needed to address the above challenges.

Proceedings ArticleDOI
TL;DR: This paper defines a methodology for creating test scenarios that uses object-oriented principles to build composite layered scenario sequences with a generic parallel-stimuli synchronization process, built as generic library code to be reused in many designs.
Abstract: Verification architects need to make use of the randomness supported by SystemVerilog and be able to define a generic path for the test to follow. This path represents a subset of features and allows the test to randomly explore the design space and probe corners in depth. Setting up a test case for such designs requires a well-defined stimulus generation methodology. Off-the-shelf scenario libraries and a synchronization and scheduling methodology for the parallel stimuli need to be reused across several test cases. In this paper, we define a methodology for creating test scenarios that uses object-oriented principles to build composite layered scenario sequences with a generic parallel-stimuli synchronization process. We built our methodology as generic library code to be reused in many designs. A recent memory controller design is used to demonstrate our methodology. The results of applying this methodology to test cases show improvements in coverage closure and performance.

Journal ArticleDOI
TL;DR: This study proposes a new concept of granular rule-based models whose rules assume the format "if G(Ai) then G(fi)", where the G(·)s are granular generalizations of the numeric conditions and conclusions of the rules.
Abstract: In this study, we propose a new concept of granular rule-based models whose rules assume the format "if G(Ai) then G(fi)", where the G(·)s are granular generalizations of the numeric conditions and conclusions of the rules. Those generalizations can be expressed, e.g., in terms of interval-valued, type-2 or probabilistic fuzzy sets. We discuss several classes of fuzzy models depending upon the available information granules and offer a motivation behind their emergence. The design of these granular architectures exploits the essentials of Granular Computing, such as the principle of justifiable granularity and an optimal allocation of information granularity. Detailed investigations of the performance indexes (objective functions), along with the related optimization schemes, are covered as well.
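As a numeric aside, the principle of justifiable granularity invoked above can be illustrated by choosing an interval granule that trades coverage of the data against specificity (narrowness); the product score in the sketch below is one common variant of that criterion, not necessarily the paper's exact formulation.

```python
import numpy as np

data = np.random.default_rng(0).normal(5.0, 1.0, 200)
med = np.median(data)
span = data.max() - data.min()

def quality(lo, hi):
    coverage = np.mean((data >= lo) & (data <= hi))   # points inside
    specificity = 1.0 - (hi - lo) / span              # narrower = better
    return coverage * specificity

# Search symmetric intervals around the median for the best trade-off.
grid = np.linspace(0.0, span / 2, 100)
best = max(((med - d, med + d) for d in grid), key=lambda iv: quality(*iv))
print(best, quality(*best))   # the justifiable granule and its score
```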

Journal ArticleDOI
TL;DR: Simulation outcomes show the potential of the multi-agent system approach for managing distributed and moving energy storage systems in the smart grid and the effectiveness of the proposed control and management strategy.
Abstract: This paper proposes an infrastructure for managing distributed power systems with distributed and moving energy storage elements. A flexible and extendible control strategy is applied for smart charging and discharging of energy storage elements to lower their operational cost and reduce the peak load of the smart grid. A Multi-Agent System (MAS) was developed according to industrial standards to provide a plug-and-play platform for managing distributed and moving energy storage elements. Short-term management of energy storage elements is mathematically formulated as a nonlinear mixed-integer optimization problem. An intelligent energy management strategy was implemented on the multi-agent system to meet short-term energy storage requirements in the smart grid. Simulation outcomes show the potential of the multi-agent system approach for managing distributed and moving energy storage systems in the smart grid and the effectiveness of the proposed control and management strategy.
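A toy linear-programming relaxation of the peak-shaving idea is sketched below (the paper's formulation is nonlinear mixed-integer and sits under a multi-agent layer): one storage element shifts energy across six periods to minimize the peak grid load, and every number is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

load = np.array([3.0, 4.0, 8.0, 9.0, 5.0, 3.0])   # base demand per period
T, cap, rate = len(load), 6.0, 3.0                # energy cap, power rate

# Variables: x[0:T] = storage power (+charge / -discharge), x[T] = peak z.
# Minimize z subject to load[t] + x[t] <= z and 0 <= stored energy <= cap.
c = np.zeros(T + 1); c[T] = 1.0
A_ub, b_ub = [], []
for t in range(T):
    row = np.zeros(T + 1); row[t] = 1.0; row[T] = -1.0
    A_ub.append(row); b_ub.append(-load[t])       # x[t] - z <= -load[t]
cum = np.tril(np.ones((T, T)))                    # cumulative-sum matrix
A_ub += list(np.hstack([cum, np.zeros((T, 1))]))  # energy(t) <= cap
b_ub += [cap] * T
A_ub += list(np.hstack([-cum, np.zeros((T, 1))])) # energy(t) >= 0
b_ub += [0.0] * T
bounds = [(-rate, rate)] * T + [(None, None)]

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print(res.x[:T], res.x[T])   # storage schedule and the resulting peak
```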

Proceedings ArticleDOI
TL;DR: The emergence of mobile apps from simple to complex applications, failures (software & hardware) in mobile apps, design constraints, and challenges associated with fault tolerance systems are discussed.
Abstract: Mobile applications are a part of human life, ranging from simple tasks such as e-mail to critical operations such as security surveillance. Owing to the variety of software and hardware used in mobile devices, failures of mobile applications are unavoidable. Failure of mobile applications poses a serious threat to the success of mobile software. Moreover, those failures can result in great losses to end users who rely on mission-critical applications (such as banking apps). In this paper, we discuss the emergence of mobile apps from simple to complex applications, failures (software and hardware) in mobile apps, design constraints, and challenges associated with fault tolerance systems.

Proceedings ArticleDOI
TL;DR: This paper addresses the problem of high test power consumption in system-on-a-chip (SOC) designs and proposes a genetic-algorithm-based approach for power-aware test planning that takes into account minimum power during testing, with the goal of optimizing total test time under a power limit.
Abstract: A system-on-a-chip (SOC) uses embedded cores that require a test access architecture, called a Test Access Mechanism (TAM), to access the cores for the purpose of testing. Optimization of the TAM and test time at the SOC level is an important area of research. However, the interconnect between the cores of an SOC contributes to circuit delay and power consumption. Power and thermal issues are a major concern, especially during testing, when the design under test (DUT) consumes significantly more power in test mode than in normal operation. To reduce interconnect, 3D ICs, in which multiple device layers are stacked together, are a solution, and the problem of high test power consumption can be addressed by power-aware test planning in 3D ICs. In this paper, we address this problem and propose a genetic-algorithm-based approach for power-aware test planning. Given a TAM width available to test an SOC, our algorithm partitions this width into different groups and places the cores of these groups in different layers of core-based SOCs built on 3D-IC technology, with the goal of optimizing the total test time under a given power limit. In addition, our technique also takes into account minimum power during testing, where the goal is to optimize the total test time with minimum power. The experimental results establish the effectiveness of our algorithm.
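A heavily simplified GA sketch for the partitioning step alone is given below: cores with given test times are split into TAM groups, and the GA minimizes the longest group time as a proxy for total test time. Layer assignment and the power model from the paper are omitted, and all figures are illustrative.

```python
import random

test_time = [30, 70, 20, 90, 40, 60, 25, 80]   # per-core test times
n_groups, pop_size, gens = 3, 40, 200
rng = random.Random(0)

def fitness(assign):
    """Makespan of the schedule: the load of the busiest TAM group."""
    loads = [0] * n_groups
    for core, g in enumerate(assign):
        loads[g] += test_time[core]
    return max(loads)

pop = [[rng.randrange(n_groups) for _ in test_time] for _ in range(pop_size)]
for _ in range(gens):
    pop.sort(key=fitness)                      # lower makespan = fitter
    survivors = pop[: pop_size // 2]           # truncation selection
    children = []
    while len(children) < pop_size - len(survivors):
        a, b = rng.sample(survivors, 2)
        cut = rng.randrange(1, len(test_time))
        child = a[:cut] + b[cut:]              # one-point crossover
        if rng.random() < 0.3:                 # mutation: move one core
            child[rng.randrange(len(child))] = rng.randrange(n_groups)
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
print(best, fitness(best))   # group per core, longest-group test time
```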


Journal ArticleDOI
TL;DR: This paper presents the state of the art and experiences in developing easier human-robot interaction to help even inexperienced operators use robots, and shows a variety of ways that inter-cognitive communication between human and artificial cognitive systems can be utilized in robotics.
Abstract: Cognitive infocommunication aspects are obvious when developing collaboration between humans and co-worker robots. Typically, new technology for production operations is not available as a plug-and-play solution but requires customization in every application. That is especially the case with industrial robots. Industrial robot programming is currently often carried out using a tedious and time-consuming teaching method. In this paper, our main goal is to present the state of the art and our experiences in developing easier human-robot interaction to help even inexperienced operators use robots. These examples show a variety of ways that inter-cognitive communication between human and artificial cognitive systems can be utilized in robotics. Many of these examples have already been taken into everyday use. We also present the system and software architecture used in our development of generic industrial robot programming for easy-to-use applications, as well as some examples of our service robot development.

Journal ArticleDOI
TL;DR: A computational model of stress is developed based on objective human responses collected from observers of environments, and extended with a genetic algorithm that selects features better suited to stress recognition and reduces the use of redundant features.
Abstract: Stress is a major problem in our society today and poses major concerns for the future. It is important to gain an objective understanding of how average individuals respond to events they observe in the typical environments they encounter. We developed a computational model of stress based on objective human responses collected from observers of environments. In the process, we investigated whether a computational model can be developed to recognize observer stress in abstract virtual environments (text), virtual environments (films) and real environments (real-life settings) using physiological and physical response sensor signals. Our work proposes an architecture for a computational observer stress model. The architecture was used to implement models for the different types of environments. Sensors appropriate to the different types of environment were investigated, with the aims of achieving unobtrusive methods for stress response signal collection, reducing encumbrance and, hence, enhancing methods to capture natural observer behaviors and producing stress models that recognize stress more robustly. We discuss the motivations for each investigation and detail the experiments we conducted to collect stress data sets for observers of the different types of environments. We describe individual-independent artificial neural network and support vector machine based classifiers that were developed to recognize stress patterns from observer response signals. The classifiers were extended with a genetic algorithm used to select features that were better for stress recognition and to reduce the use of redundant features. The outcomes of this research point to a possible future extension on managing stress objectively.
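The genetic-algorithm feature-selection step can be rendered in a few dozen lines, as in the sketch below: binary masks over candidate features evolve under a fitness given by the cross-validated accuracy of an SVM classifier. The synthetic data stands in for the observer response signals, and the population size and rates are arbitrary assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # 12 candidate features
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # only two are informative

def score(mask):
    """Fitness of a feature mask: cross-validated SVM accuracy."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((20, 12)) < 0.5               # random binary masks
for _ in range(30):
    ranked = sorted(pop, key=score, reverse=True)
    parents = ranked[:10]                      # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(12) < 0.5, a, b)   # uniform crossover
        child ^= rng.random(12) < 0.05                 # bit-flip mutation
        children.append(child)
    pop = np.array(parents + children)

best = max(pop, key=score)
print(best.astype(int), score(best))   # selected features, CV accuracy
```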