
Showing papers on "Situation awareness published in 2017"


Journal ArticleDOI
TL;DR: Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human–automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches.
Abstract: As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: as more autonomy is added to a system and its reliability and robustness increase, the situation awareness of human operators becomes lower and they are less likely to be able to take over manual control when needed. The human-autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human-automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human-autonomy interfaces are presented and directions for future research are discussed.

393 citations


Journal ArticleDOI
TL;DR: A highly interactive and immersive Virtual Reality Training System (VRTS) (“beWare of the Robot”) in terms of a serious game that simulates in real-time the cooperation between industrial robotic manipulators and humans, executing simple manufacturing tasks is presented.
Abstract: This paper presents a highly interactive and immersive Virtual Reality Training System (VRTS) (“beWare of the Robot”) in terms of a serious game that simulates in real-time the cooperation between industrial robotic manipulators and humans, executing simple manufacturing tasks. The scenario presented refers to collaborative handling in tape-laying for building aerospace composite parts. The tools, models and techniques developed and used to build the “beWare of the Robot” application are described. System setup and configuration are presented in detail, as well as user tracking and navigation issues. Special emphasis is given to the interaction techniques used to facilitate implementation of virtual human–robot (HR) collaboration. Safety issues, such as contacts and collisions are mainly tackled through “emergencies”, i.e. warning signals in terms of visual stimuli and sound alarms. Mental safety is of utmost priority and the user is provided augmented situational awareness and enhanced perception of the robot’s motion due to immersion and real-time interaction offered by the VRTS as well as by special warning stimuli. The short-term goal of the research was to investigate users’ enhanced experience and behaviour inside the virtual world while cooperating with the robot and positive pertinent preliminary findings are presented and briefly discussed. In the longer term, the system can be used to investigate acceptability of H–R collaboration and, ultimately, serve as a platform for programming collaborative H–R manufacturing cells.

156 citations


Journal ArticleDOI
TL;DR: Two underlying models are explored for the task of real-time identification of dynamic events, a layer of situational awareness that is becoming feasible due to the increased penetration of phasor measurement units in transmission systems.
Abstract: This paper explores the task of real-time identification of dynamic events leading to a layer of situational awareness that can become a reality due to increased penetration of phasor measurement units in transmission systems. Two underlying models for this task, data-driven and physics-based, are explored with examples. Challenges, advantages, and drawbacks of each model are discussed based on the availability of data, attributes of such data, and processing options. Potential applications of the task to improve security of power system protection and anomaly detection in the case of a cyberattack are conceptualized. Some known issues in data communications are discussed vis-à-vis the requirements imposed by the proposed task.
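As a minimal illustration of the data-driven side of such a scheme (not the implementation described in the paper), the sketch below flags anomalous frequency excursions in a stream of PMU samples using a rolling-window z-score; the window length and threshold are assumed values.

```python
# Minimal data-driven event flagging on PMU frequency samples (illustrative only).
# Assumed parameters: 30-sample rolling window, 4-sigma threshold.
from collections import deque
import math

class PmuEventDetector:
    def __init__(self, window=30, threshold=4.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, freq_hz):
        """Return True if the new sample deviates anomalously from the recent window."""
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            if abs(freq_hz - mean) / std > self.threshold:
                self.window.append(freq_hz)
                return True
        self.window.append(freq_hz)
        return False

# Example: a sudden frequency dip is flagged as a dynamic event.
detector = PmuEventDetector()
samples = [60.00] * 40 + [59.80]
events = [i for i, f in enumerate(samples) if detector.update(f)]
print(events)  # -> [40]
```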

150 citations


Journal ArticleDOI
TL;DR: This survey identifies the specific requirements of an IPS for emergency responders and provides a tutorial coverage of the localization techniques and methods, highlighting the pros and cons of their use.
Abstract: The availability of a reliable and accurate indoor positioning system (IPS) for emergency responders during on-duty missions is regarded as an essential tool to improve situational awareness of both the emergency responders and the incident commander. This tool would facilitate mission planning, coordination, and accomplishment, as well as decrease the number of on-duty deaths. Due to the absence of a global positioning system signal in indoor environments, many other signals and sensors have been proposed for indoor usage. However, the challenging scenarios faced by emergency responders imply explicit restrictions and requirements on the design of an IPS, making the use of some technologies, techniques, and methods inadequate in these scenarios. This survey identifies the specific requirements of an IPS for emergency responders and provides a tutorial coverage of the localization techniques and methods, highlighting the pros and cons of their use. Then, the existing IPSs specifically developed for emergency scenarios are reviewed and compared with a focus on the design choices, requirements, and additional features. By doing so, an overview of current IPS schemes as well as their performance is given. Finally, we discuss the main issues of the existing IPSs and some future directions.

133 citations


Journal ArticleDOI
TL;DR: This work conducted a series of human subject experiments to investigate the ways in which human factors influence the design of computational techniques, and provides design guidelines for the development of intelligent collaborative robots based on the results.
Abstract: Advancements in robotic technology are making it increasingly possible to integrate robots into the human workspace in order to improve productivity and decrease worker strain resulting from the pe...

129 citations


Proceedings ArticleDOI
02 May 2017
TL;DR: A novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures is presented, suggesting that embodied social robots could provide for an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.
Abstract: With domestic technology on the rise, the quantity and complexity of smart-home devices are becoming an important interaction design challenge. We present a novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures. We experimentally compare the robot to three common smart-home interfaces: a voice-control loudspeaker; a wall-mounted touch-screen; and a mobile application. Our findings suggest that interfaces that rate higher on flow rate lower on usability, and vice versa. Participants' sense of control is highest using familiar interfaces, and lowest using voice control. Situation awareness is highest using the robot, and also lowest using voice control. These findings raise questions about voice control as a smart-home interface, and suggest that embodied social robots could provide for an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.

105 citations


Journal ArticleDOI
TL;DR: This survey aims to reconcile the view of the security community and the perspective of aviation professionals concerning the safety of air traffic communication technologies and comprehensively analyze vulnerabilities and existing attacks.
Abstract: More than a dozen wireless technologies are used by air traffic communication systems during different flight phases. From a conceptual perspective, all of them are insecure, as security was never part of their design. Recent contributions from academic and hacking communities have exploited this inherent vulnerability to demonstrate attacks on some of these technologies. However, not all of these contributions have resonated widely within aviation circles. At the same time, the security community lacks certain aviation domain knowledge, preventing aviation authorities from giving credence to their findings. In this survey, we aim to reconcile the view of the security community and the perspective of aviation professionals concerning the safety of air traffic communication technologies. To achieve this, we first provide a systematization of the applications of wireless technologies upon which civil aviation relies. Based on these applications, we comprehensively analyze vulnerabilities and existing attacks. We further survey the existing research on countermeasures and categorize it into approaches that are applicable in the short term and research of secure new technologies deployable in the long term. Since not all of the required aviation knowledge is codified in academic publications, we additionally examine the existing aviation standards and survey 242 international aviation experts. Besides their domain knowledge, we also analyze the awareness of members of the aviation community concerning the security of wireless systems and collect their expert opinions on the potential impact of concrete attack scenarios using these technologies.

96 citations


Journal ArticleDOI
TL;DR: In this paper, a driving simulator study was carried out to investigate the effect of anticipatory information and non-driving-related task involvement on drivers' monitoring behavior and transition of control while driving with a Traffic Jam Assist.
Abstract: Vehicle automation is expected to improve traffic safety. However, previous research indicates that high levels of automation may bring about unintended consequences, specifically in relation to being out of the control loop of the vehicle, such as reduced monitoring of the task, situation awareness, and attention. These changes in driver behavior become especially important in the case of a manual takeover request by the vehicle. A driving simulator study was carried out to investigate the effect of anticipatory information and non-driving-related task involvement on drivers’ monitoring behavior and transition of control while driving with a Traffic Jam Assist. The Traffic Jam Assist handled lateral and longitudinal control at speeds below 50 km/h on a highway and required drivers to resume control beyond this system boundary. Anticipation was tested by sending a takeover request to the driver at 50 km/h (i.e. [anticipated] system boundary) or at 30 km/h (i.e. [unanticipated] system failure) and by varying the traffic density preceding the takeover request, depending on the experimental condition. The results showed that the anticipatory information from the automated system influenced the monitoring behavior of the drivers preceding the transition of control, but not their performance during the takeover of control from the vehicle. Furthermore, despite relatively short reaction times to take over control of the vehicle, drivers needed a prolonged period to gain vehicle lateral control, regardless of the presence of anticipatory information or of a non-driving-related task. Performing a non-driving-related task resulted in a longer reaction time. Nevertheless, non-driving-related task involvement did not have an effect on the vehicle lateral control or monitoring of the traffic environment. The results of this study highlight the importance of a transition period rather than pure reaction time during a takeover process. We discuss the theoretical and practical implications of these findings.

88 citations


Book ChapterDOI
01 Jan 2017
TL;DR: It is suggested that cooperative interfaces be implemented to enable automated driving even with imperfect automation, and that four basic requirements for driver–vehicle cooperation be considered: mutual predictability, directability, shared situation representation, and calibrated trust in automation.
Abstract: As long as automated vehicles are not able to handle driving in every possible situation, drivers will still have to take part in the driving task from time to time. Recent research focused on handing over control entirely when automated systems reach their boundaries. Our overview of research in this domain shows that handovers are feasible; however, they are not a satisfactory solution, since human factor issues such as reduced situation awareness arise in automated driving. In consequence, we suggest implementing cooperative interfaces to enable automated driving even with imperfect automation. We recommend considering four basic requirements for driver–vehicle cooperation: mutual predictability, directability, shared situation representation, and calibrated trust in automation. We present research that can be seen as a step towards cooperative interfaces in regard to these requirements. Nevertheless, these systems are only solutions for parts of future cooperative interfaces and interaction concepts. Future design of interaction concepts in automated driving should integrate the cooperative approach as a whole in order to achieve safe and comfortable automated mobility.

84 citations


Journal ArticleDOI
TL;DR: This article presents an integrated data combination and data management architecture that is able to accommodate real-time data gathered by a fleet of robotic vehicles on a crisis site, and that allows recorded exercises with real robots and rescue teams to be reused for training purposes and for teaching search-and-rescue personnel how to handle the different robotic tools.
Abstract: Search-and-rescue operations have recently been confronted with the introduction of robotic tools that assist the human search-and-rescue workers in their dangerous but life-saving job of searching for human survivors after major catastrophes. However, the world of search and rescue is highly reliant on strict procedures for the transfer of messages, alarms, data, and command and control over the deployed assets. The introduction of robotic tools into this world causes an important structural change in this procedural toolchain. Moreover, the introduction of search-and-rescue robots acting as data gatherers could potentially lead to an information overload toward the human search-and-rescue workers, if the data acquired by these robotic tools are not managed in an intelligent way. With that in mind, we present in this paper an integrated data combination and data management architecture that is able to accommodate real-time data gathered by a fleet of robotic vehicles on a crisis site, and we present and publish these data in a way that is easy to understand by end-users. In the scope of this paper, a fleet of unmanned ground and aerial search-and-rescue vehicles is considered, developed within the scope of the European ICARUS project. As a first step toward the integrated data-management methodology, the different robotic systems require an interoperable framework in order to pass data from one to another and toward the unified command and control station. As a second step, a data fusion methodology is presented, combining the data acquired by the different heterogeneous robotic systems. The computation needed for this process is done in a novel mobile data center and then (as a third step) published in a software as a service (SaaS) model. The SaaS model helps in providing access to robotic data over ubiquitous Ethernet connections. As a final step, we show how the presented data-management architecture allows for reusing recorded exercises with real robots and rescue teams for training purposes and teaching search-and-rescue personnel how to handle the different robotic tools. The system was validated in two experiments. First, in the controlled environment of a military testing base, a fleet of unmanned ground and aerial vehicles was deployed in an earthquake-response scenario. The data gathered by the different interoperable robotic systems were combined by a novel mobile data center and presented to the end-user public. Second, an unmanned aerial system was deployed on an actual mission with an international relief team to help with the relief operations after major flooding in Bosnia in the spring of 2014. Due to the nature of the event (floods), no ground vehicles were deployed here, but all data acquired by the aerial system (mainly three-dimensional maps) were stored in the ICARUS data center, where they were securely published for authorized personnel all over the world. This mission (which is, to our knowledge, the first recorded deployment of an unmanned aerial system by an official governmental international search-and-rescue team in another country) also proved the concept of the procedural integration of the ICARUS data management system into the existing procedural toolchain of the search-and-rescue workers, in an international context (deployment from Belgium to Bosnia).
The feedback received from the search-and-rescue personnel on both validation exercises was highly positive, proving that the ICARUS data management system can efficiently increase the situational awareness of the search-and-rescue personnel.

77 citations


Journal ArticleDOI
25 May 2017
TL;DR: This work utilizes the recently introduced concept of model-based communication and designs a new strategy based on small- and large-scale modeling of vehicle movement dynamics, which is compared with the conventional design of cooperative adaptive cruise control.
Abstract: Connected vehicle applications rely on wireless communication for achieving real-time situational awareness and enabling automated actions or driver warnings. Given the constraints of the communication systems, it is critical to employ communication strategies and approaches that allow robust situational awareness. In this work, we utilize the recently introduced concept of model-based communication and design a new strategy based on small- and large-scale modeling of vehicle movement dynamics. Our approach is to describe the small-scale structure of the remote vehicle movement (e.g., braking, accelerating) by a set of dynamic models (ARX models) and represent the large-scale structure (e.g., free following, turning) by coupling these ARX models together into a Markov chain. The effect of this design is investigated for the case of the cooperative adaptive cruise control application. Assuming the model-based communication approach is used with the coupled model, a novel stochastic model predictive method is proposed to achieve cruise control goals. We use actual highway driving maneuvers to compare the proposed methodology with the conventional design of cooperative adaptive cruise control.
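The abstract's coupling of per-mode dynamic models through a driving-mode chain can be sketched roughly as follows; the two modes, ARX orders, coefficients, and transition probabilities below are illustrative assumptions, not the models identified in the paper.

```python
import random

# Illustrative model-based communication sketch: each driving mode ("follow", "brake")
# carries a small ARX model of the remote vehicle's speed, and a Markov chain
# switches between modes. Coefficients and transition probabilities are made up.

ARX = {                       # v[k] = a1*v[k-1] + a2*v[k-2] + b*u[k-1]
    "follow": (1.6, -0.62, 0.02),
    "brake":  (1.2, -0.35, -0.30),
}
TRANSITIONS = {               # P(next mode | current mode)
    "follow": {"follow": 0.95, "brake": 0.05},
    "brake":  {"brake": 0.80, "follow": 0.20},
}

def step(mode, v_hist, u_prev):
    """Advance one time step: roll the current mode's ARX model, then sample the next mode."""
    a1, a2, b = ARX[mode]
    v_next = a1 * v_hist[-1] + a2 * v_hist[-2] + b * u_prev
    modes, probs = zip(*TRANSITIONS[mode].items())
    next_mode = random.choices(modes, probs)[0]
    return next_mode, v_hist[1:] + [v_next]

# The receiver reconstructs a speed trajectory from the transmitted model rather than raw samples.
mode, hist = "follow", [20.0, 20.0]
for _ in range(5):
    mode, hist = step(mode, hist, u_prev=0.5)
print(mode, round(hist[-1], 2))
```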

Patent
13 Apr 2017
TL;DR: Real-time security, integrity, and reliability postures of operational (OT), information (IT), and security (ST) systems, together with the slower-changing security and operational blueprints, policies, processes, and rules governing the enterprise security and business risk management process, dynamically evolve and adapt to domain, context, and situational awareness, as well as to the controls implemented across the operational and information systems being controlled.
Abstract: Real time security, integrity, and reliability postures of operational (OT), information (IT), and security (ST) systems, as well as slower changing security and operational blueprint, policies, processes, and rules governing the enterprise security and business risk management process, dynamically evolve and adapt to domain, context, and situational awareness, as well as the controls implemented across the operational and information systems that are controlled. Embodiments of the invention are systematized and pervasively applied across interconnected, interdependent, and diverse operational, information, and security systems to mitigate system-wide business risk, to improve efficiency and effectiveness of business processes and to enhance security control which conventional perimeter, network, or host based control and protection schemes cannot successfully perform.

Journal ArticleDOI
TL;DR: A meta-analysis of 27 UAV swarm management papers focused on human-system interface and human factors concerns is presented in this article, providing an overview of the advantages, challenges and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated.

Proceedings ArticleDOI
27 Mar 2017
TL;DR: This work presents a remote monitoring and diagnostic system that provides a holistic perspective of patients and their health conditions and discusses how the concept of self-awareness can be used in various parts of the system such as information collection through wearable sensors, confidence assessment of the sensory data, the knowledge base of the patient's health situation, and automation of reasoning about the health situation.
Abstract: In healthcare, effective monitoring of patients plays a key role in detecting health deterioration early enough. Many signs of deterioration exist as early as 24 hours prior to a serious impact on the health of a person. As hospitalization times have to be minimized, in-home or remote early warning systems can fill the gap by allowing in-home care while keeping the potentially problematic conditions and their signs under surveillance and control. This work presents a remote monitoring and diagnostic system that provides a holistic perspective of patients and their health conditions. We discuss how the concept of self-awareness can be used in various parts of the system, such as information collection through wearable sensors, confidence assessment of the sensory data, the knowledge base of the patient's health situation, and automation of reasoning about the health situation. Our approach to self-awareness provides (i) situation awareness to consider the impact of variations such as sleeping, walking, running, and resting, (ii) system personalization by reflecting parameters such as age, body mass index, and gender, and (iii) the attention property of self-awareness to improve the energy efficiency and dependability of the system via adjusting the priorities of the sensory data collection. We evaluate the proposed method using a full system demonstration.
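A rough sketch of the "attention" idea described above, raising sampling priorities for situation-critical or low-confidence signals, is given below; the sensor names, rates, and thresholds are assumptions for illustration, not values from the paper.

```python
# Illustrative "attention" logic for a self-aware monitoring loop: sampling rates
# are raised for vital signs that matter most in the current situation and when
# confidence in recent readings is low. All labels and numbers are assumed.

BASE_RATE_HZ = {"heart_rate": 1.0, "spo2": 0.2, "activity": 0.1}
CRITICAL_FOR = {"running": ["heart_rate"], "sleeping": ["spo2"], "resting": []}

def sampling_plan(situation, confidence):
    """Return per-sensor sampling rates given the detected situation and
    per-sensor confidence scores in [0, 1]."""
    plan = {}
    for sensor, base in BASE_RATE_HZ.items():
        rate = base
        if sensor in CRITICAL_FOR.get(situation, []):
            rate *= 4              # pay more attention to situation-critical signals
        if confidence.get(sensor, 1.0) < 0.6:
            rate *= 2              # resample noisy/low-confidence channels more often
        plan[sensor] = rate
    return plan

print(sampling_plan("running", {"heart_rate": 0.5, "spo2": 0.9}))
# -> {'heart_rate': 8.0, 'spo2': 0.2, 'activity': 0.1}
```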

01 Jan 2017
TL;DR: Field testing and validation of two different architectures of event-based imaging sensors, inspired by the biological retina, show that they are ideally suited to meeting the demanding challenges of space-based SSA systems.

Journal ArticleDOI
TL;DR: The use of Director, the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC), resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly trained operators.
Abstract: Operating a high degree of freedom mobile manipulator, such as a humanoid, in a field scenario requires constant situational awareness, capable perception modules, and effective mechanisms for interactive motion planning and control. A well-designed operator interface presents the operator with enough context to quickly carry out a mission and the flexibility to handle unforeseen operating scenarios robustly. By contrast, an unintuitive user interface can increase the risk of catastrophic operator error by overwhelming the user with unnecessary information. With these principles in mind, we present the philosophy and design decisions behind Director, the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC). At the heart of Director is an integrated task execution system that specifies sequences of actions needed to achieve a substantive task, such as drilling a wall or climbing a staircase. These task sequences, developed a priori, make online queries to automated perception and planning algorithms with outputs that can be reviewed by the operator and executed by our whole-body controller. Our use of Director at the DRC resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly trained operators. We discuss the primary interface elements that comprise Director, and we provide an analysis of its successful use at the DRC.

Proceedings ArticleDOI
01 Apr 2017
TL;DR: This work illustrates the CAPS modeling languages used to describe the software architecture, hardware configuration, and physical space views for a situational aware CPS.
Abstract: This paper proposes CAPS, an architecture-driven modeling framework for the development of Situational Aware Cyber-Physical Systems. Situational awareness involves being aware of what is happening in the surroundings and using this information to decide and act. It has been recognized as a critical, yet often elusive, foundation for successful decision-making in complex systems. With the advent of cyber-physical systems (CPS), situational awareness is playing an increasingly important role, especially in crowd and fleet management, infrastructure monitoring, and smart city applications. While specializing cyber-physical systems, Situational Aware CPS requires the continuous monitoring of environmental conditions and events with respect to time and space. New architectural concerns arise, especially related to the sense, compute & communication paradigm, the use of domain-specific hardware components, and the cyber-physical space dimension. This work illustrates the CAPS modeling languages used to describe the software architecture, hardware configuration, and physical space views for a situational aware CPS.

Proceedings ArticleDOI
02 May 2017
TL;DR: This work develops a technique, named Daze, to measure situation awareness through real-time, in-situ event alerts and performs simulator-based and on-road test deployments to check that Daze can characterize drivers' awareness of their immediate environment and to understand practical aspects of the technique's use.
Abstract: Until vehicles are fully autonomous, safety, legal and ethical obligations require that drivers remain aware of the driving situation. Key decisions about whether a driver can take over when the vehicle is confused, or its capabilities are degraded, depend on understanding whether he or she is responsive and aware of external conditions. The leading techniques for measuring situation awareness in simulated environments are ill-suited to autonomous driving scenarios, and particularly to on-road testing. We have developed a technique, named Daze, to measure situation awareness through real-time, in-situ event alerts. The technique is ecologically valid: it resembles applications people use in actual driving. It is also flexible: it can be used in both simulator and on-road research settings. We performed simulator-based and on-road test deployments to (a) check that Daze could characterize drivers' awareness of their immediate environment and (b) understand practical aspects of the technique's use. Our contributions include the Daze technique, examples of collected data, and ways to analyze such data.
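The paper does not publish code; as a rough sketch of the general idea, the snippet below issues an in-situ event alert and logs the driver's acknowledgment latency, which could serve as a simple awareness signal. The event label and timeout are assumptions.

```python
import time

# Rough sketch of a Daze-style probe: issue an in-situ alert about a nearby event
# and log how quickly (and whether) the driver acknowledges it. The event type
# and 5-second timeout are illustrative assumptions, not the authors' protocol.

class AwarenessProbe:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.log = []          # list of (event, latency in seconds or None if missed)

    def alert(self, event):
        """Record the time an alert was shown; returns a token used when responding."""
        return (event, time.monotonic())

    def respond(self, token, acknowledged):
        event, t0 = token
        latency = time.monotonic() - t0
        if not acknowledged or latency > self.timeout_s:
            self.log.append((event, None))      # missed -> low awareness signal
        else:
            self.log.append((event, latency))

probe = AwarenessProbe()
token = probe.alert("pedestrian_ahead")
time.sleep(0.2)                                  # simulated driver reaction
probe.respond(token, acknowledged=True)
print(probe.log)
```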

Book ChapterDOI
05 Jul 2017
TL;DR: In this paper, the concept of shared mental models is used to explain team situational awareness, which is critical to performance in complex team environments, and potential training strategies for enhancing team situational awareness are derived from this model.
Abstract: This paper examines a skill that is critical to performance in complex team environments—team situational awareness. Specifically, it describes how the concept of shared mental models can be used to explain team situational awareness. A model is presented which explains how shared mental models are transformed into team situational awareness to enable effective team performance in dynamic task situations. Potential training strategies for enhancing team situational awareness are derived from this model and are discussed.

Proceedings ArticleDOI
29 Jun 2017
TL;DR: A taxonomy of handover and handback (i.e., from manual to automatic control and vice versa) is proposed to be used by practitioners and researchers to help assure the duration of those periods are clearly defined, and accordingly, studies examining them are comparable and have repeatable results.
Abstract: In this paper, a taxonomy of handover and handback (i.e., from manual to automatic control and vice versa) is proposed to be used by practitioners and researchers to help assure the duration of those periods are clearly defined, and accordingly, studies examining them are comparable and have repeatable results. Furthermore, use of this framework will help assure that those implementing automation will do so in a comprehensive manner. The taxonomy is more detailed than that in SAE Standard J3114. Handover includes the phases preparation, perception (of the handover signal), suspension (of in-vehicle tasks) and the actual process of taking over, which can be subdivided into sufficient (to steer and control speed) and full (where situation awareness is complete) control. Furthermore, handover can be imminent, scheduled, or user-initiated. For handback, the phases are initialization, the actual handback, and re-engagement (of the driver). Handback may be optional or mandatory and user- or system initiated. For both handover and handback processes, the duration and change of the control transfer (as a function of time) needs to be precisely described/specified.
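One way to make such a taxonomy operational is to encode the phases as an explicit sequence so that each phase boundary can be timestamped and compared across studies; the sketch below is a simplified reading of the handover phases listed above, not the full taxonomy.

```python
from enum import Enum, auto

# Simplified encoding of the handover phases listed above so that each phase
# boundary can be timestamped in a study log. A sketch, not the full taxonomy.

class HandoverPhase(Enum):
    PREPARATION = auto()
    PERCEPTION = auto()          # of the handover signal
    SUSPENSION = auto()          # of in-vehicle tasks
    SUFFICIENT_CONTROL = auto()  # steering and speed control regained
    FULL_CONTROL = auto()        # situation awareness complete

ORDER = list(HandoverPhase)

def advance(current):
    """Return the next phase, or None once full control is reached."""
    i = ORDER.index(current)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None

phase, timeline = HandoverPhase.PREPARATION, []
while phase is not None:
    timeline.append(phase.name)
    phase = advance(phase)
print(" -> ".join(timeline))
```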

Journal ArticleDOI
TL;DR: This paper develops a cognitive-engineering-based approach, proposes novel quantitative measures of operators’ situation awareness based on eye gaze dynamics, and demonstrates that the proposed measures reliably identify the situation awareness of the participants during various phases of abnormal situation management.
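The paper's specific gaze-based measures are not reproduced here; as an illustration of the kind of gaze-dynamics quantity such work builds on, the snippet below computes a simple transition entropy over areas of interest, treating low entropy as rigid scanning and high entropy as broader monitoring. The AOI labels and fixation sequence are fabricated.

```python
import math
from collections import Counter

# Illustrative gaze-dynamics measure: transition entropy over areas of interest (AOIs).
# The AOI fixation sequence below is fabricated for the example.

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of the distribution of AOI-to-AOI transitions."""
    transitions = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    total = sum(transitions.values())
    return -sum((n / total) * math.log2(n / total) for n in transitions.values())

fixations = ["alarms", "trend_plot", "alarms", "procedure", "trend_plot", "alarms"]
print(round(transition_entropy(fixations), 2))
```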

Journal ArticleDOI
27 Jul 2017-Sensors
TL;DR: The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation.
Abstract: Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation.

Journal ArticleDOI
TL;DR: It is observed that transparency has a significant effect on operators’ situation awareness, trust, and cognitive processing in human-robot teams.

Journal ArticleDOI
TL;DR: In this work, a training methodology built on the concept of briefing/debriefing is adopted from previous literature, and the efficiency of the proposed framework is validated in a conceptual case study.

Journal ArticleDOI
TL;DR: This book falls short of its stated intentions as a reference for engineering managers; as educational material it could be valuable, but I would be inclined to recommend other materials to my professional staff.
Abstract: (2017). Designing for Situation Awareness: An Approach to User-Centered Design, Second Edition. Quality Management Journal: Vol. 24, No. 2, pp. 56-56.

Journal ArticleDOI
TL;DR: This research paper aims to provide a framework for efficient project management that reduces decision-making time using IoT technologies by dynamically establishing situational awareness on top of existing manufacturing processes.
Abstract: The Factories of the Future roadmap identifies that the major challenge manufacturing companies face today is the growing complexity of their processes, which affects the overall process of decision making. Thus, integrating the evolving technologies in the domain of the Internet of Things (IoT) into the applications used for project management is an important research area. At the same time, capturing events on the shop floor and determining the meaning of information about those perceived events is an important aspect of making decisions in heterogeneous, highly dynamic environments. This research paper aims to provide a framework for efficient project management by reducing the time for decision-making based on IoT technologies. The goal is pursued by dynamically establishing situational awareness on top of existing manufacturing processes. The proposed framework is validated in a real industrial scenario by implementing a platform for efficient project management within the construction industry.

Journal ArticleDOI
TL;DR: An overview of existing research results in multimodal data analysis in the AAL environment to improve the living environment of seniors is given, and an attempt is made to improve the efficiency of complex event processing for real-time situational awareness.
Abstract: The success of providing smart healthcare services in ambient assisted living (AAL) largely depends on effective prediction of situations in the environment. Situation awareness in AAL determines the smartness of the environment by perceiving information related to the surroundings and human behavioral changes. In the AAL environment, there are plenty of ways to collect data about its inhabitants, such as through cameras, microphones, and other sensors. The collected data are complex enough to require efficient processing in order to perceive the situation. This paper gives an overview of the existing research results in multimodal data analysis in the AAL environment to improve the living environment of seniors, and it attempts to improve the efficiency of complex event processing for real-time situational awareness. The paper thus considers multimodal sensing both to detect current situations and to predict future situations using decision-tree and association analysis algorithms. To illustrate the proposed approach, we consider elderly activity recognition in the AAL environment.
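As a minimal sketch of the decision-tree element (using scikit-learn rather than the authors' pipeline), the snippet below classifies coarse activities from two fabricated sensor features; the features, labels, and training data are assumptions.

```python
# Minimal decision-tree sketch for activity recognition from multimodal features.
# Uses scikit-learn; the two features (motion level, ambient sound level) and the
# tiny training set are fabricated for illustration only.
from sklearn.tree import DecisionTreeClassifier

X = [  # [motion_level, sound_level]
    [0.9, 0.4], [0.8, 0.5],      # walking
    [0.1, 0.1], [0.05, 0.2],     # resting
    [0.2, 0.8], [0.3, 0.9],      # watching TV
]
y = ["walking", "walking", "resting", "resting", "watching_tv", "watching_tv"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[0.15, 0.85]]))   # low motion, high sound -> likely 'watching_tv'
```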

Journal ArticleDOI
TL;DR: In order to investigate and analyze operators’ ESA in the digitized main control room of a nuclear power plant, a model of ESA is established, a classification system is developed based on the built ESA model, and an analysis method is constructed on the basis of simulator observations and operator surveys.

Journal ArticleDOI
01 Jun 2017
TL;DR: Results indicate that, despite the suitability of the conventional approaches in remote applications, the proposed interface approach provides comparable task performance and user experiences in shared spaces without the need to install operator stations or vision systems on or around the robot.
Abstract: Although user interfaces with gesture-based input and augmented graphics have promoted intuitive human-robot interactions (HRI), they are often implemented in remote applications on research-grade platforms requiring significant training and limiting operator mobility. This paper proposes a mobile mixed-reality interface approach to enhance HRI in shared spaces. As a user points a mobile device at the robot's workspace, a mixed-reality environment is rendered providing a common frame of reference for the user and robot to effectively communicate spatial information for performing object manipulation tasks, improving the user's situational awareness while interacting with augmented graphics to intuitively command the robot. An evaluation with participants is conducted to examine task performance and user experience associated with the proposed interface strategy in comparison to conventional approaches that utilize egocentric or exocentric views from cameras mounted on the robot or in the environment, respectively. Results indicate that, despite the suitability of the conventional approaches in remote applications, the proposed interface approach provides comparable task performance and user experiences in shared spaces without the need to install operator stations or vision systems on or around the robot. Moreover, the proposed interface approach provides users the flexibility to direct robots from their own visual perspective (at the expense of some physical workload) and leverages the sensing capabilities of the tablet to expand the robot's perceptual range.

Proceedings ArticleDOI
12 Jun 2017
TL;DR: This work explores how convergent trends in video sensing, crowd sourcing and edge computing can be harnessed to create a shared real-time information system for situational awareness in vehicular systems that span driverless and drivered vehicles.
Abstract: Situational awareness involves the timely acquisition of knowledge about real-world events, distillation of those events into higher-level conceptual constructs, and their synthesis into a coherent context-sensitive view. We explore how convergent trends in video sensing, crowd sourcing and edge computing can be harnessed to create a shared real-time information system for situational awareness in vehicular systems that span driverless and drivered vehicles.