Author

Xin-an Fan

Bio: Xin-an Fan is an academic researcher from Beijing Institute of Technology. The author has contributed to research in topics: Brain–computer interface & Mobile robot. The author has an h-index of 9 and has co-authored 16 publications receiving 561 citations.

Papers
Journal ArticleDOI
TL;DR: A comprehensive review of the complete systems, key techniques, and evaluation issues of brain-controlled mobile robots along with some insights into related future research and development issues is provided.
Abstract: EEG-based brain-controlled mobile robots can serve as powerful aids for severely disabled people in their daily life, especially to help them move voluntarily. In this paper, we provide a comprehensive review of the complete systems, key techniques, and evaluation issues of brain-controlled mobile robots along with some insights into related future research and development issues. We first review and classify various complete systems of brain-controlled mobile robots into two categories from the perspective of their operational modes. We then describe key techniques that are used in these brain-controlled mobile robots including the brain-computer interface techniques and shared control techniques. This description is followed by an analysis of the evaluation issues of brain-controlled mobile robots including participants, tasks and environments, and evaluation metrics. We conclude this paper with a discussion of the current challenges and future research directions.

324 citations

Journal ArticleDOI
TL;DR: A new steady-state visually evoked potential brain-computer interface with visual stimuli presented on a windshield via a head-up display is proposed, and this BCI is applied in conjunction with an alpha rhythm to control a simulated vehicle with a 14-DOF vehicle dynamics model.
Abstract: In this paper, we propose a new steady-state visually evoked potential (SSVEP) brain–computer interface (BCI) with visual stimuli presented on a windshield via a head-up display, and we apply this BCI in conjunction with an alpha rhythm to control a simulated vehicle with a 14-DOF vehicle dynamics model. A linear discriminant analysis classifier is applied to detect the alpha rhythm, which is used to control the starting and stopping of the vehicle. The classification models of the SSVEP BCI with three commands (i.e., turning left, turning right, and going forward) are built by using a support vector machine with frequency domain features. A real-time brain-controlled simulated vehicle is developed and tested by using four participants to perform a driving task online, including vehicle starting and stopping, lane keeping, avoiding obstacles, and curve negotiation. Experimental results show the feasibility of using the human “mind” alone to control a vehicle, at least for some users.
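The classification pipeline described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's actual setup: the sampling rate, stimulus frequencies, and synthetic epochs are assumptions, and an alpha-rhythm detector could be built analogously with scikit-learn's LinearDiscriminantAnalysis.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 250  # sampling rate in Hz (an assumption)

def band_power(epoch, band, fs=fs):
    """Mean FFT magnitude of a 1-D EEG epoch within a frequency band."""
    spectrum = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

# Synthetic 1-second epochs: SSVEP responses at three hypothetical
# stimulus frequencies, one per command (left, forward, right).
stim_freqs = [10.0, 12.0, 15.0]
t = np.arange(fs) / fs
X, y = [], []
for label, f in enumerate(stim_freqs):
    for _ in range(40):
        epoch = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
        # Frequency-domain features: power in a narrow band around each target
        feats = [band_power(epoch, (f0 - 1, f0 + 1)) for f0 in stim_freqs]
        X.append(feats)
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # high training accuracy on this easy synthetic set
```

In practice the features would come from multi-channel EEG epochs time-locked to the flicker stimuli, and the classifier would be evaluated on held-out data rather than the training set.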

79 citations

Journal ArticleDOI
TL;DR: The proposed interface not only provides individuals with severe motor disabilities with a means to use intelligent vehicles, thus improving their mobility, but also facilitates research on driver–vehicle interfaces, multimodal interaction, and intelligent vehicles.
Abstract: In this paper, we propose a novel driver–vehicle interface for individuals with severe neuromuscular disabilities to use intelligent vehicles by using P300 and steady-state visual evoked potential (SSVEP) brain–computer interfaces (BCIs) to select a destination and test its performance in the laboratory and real driving conditions. The proposed interface consists of two components: the selection component based on a P300 BCI and the confirmation component based on an SSVEP BCI. Furthermore, the accuracy and selection time models of the interface are built to help analyze the performance of the entire system. Experimental results from 16 participants collected in the laboratory and real driving scenarios show that the average accuracy of the system in the real driving conditions is about 99% with an average selection time of about 26 s. More importantly, the proposed system improves the accuracy of destination selection compared with a single P300 BCI-based selection system, particularly for those participants with relatively low level of accuracy in using the P300 BCI. This study not only provides individuals with severe motor disabilities with an interface to use intelligent vehicles and thus improve their mobility, but also facilitates the research on driver–vehicle interface, multimodal interaction, and intelligent vehicles. Furthermore, it opens an avenue on how cognitive neuroscience may be applied to intelligent vehicles.
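The reported jump from single-P300 accuracy to about 99% is consistent with a simple retry-until-confirmed model: a wrong selection only slips through when the confirmation stage also errs. The calculation below is illustrative, using assumed stage accuracies rather than the accuracy model fitted in the paper.

```python
def two_stage_accuracy(p, q):
    """Illustrative accuracy of a select-then-confirm interface.
    p: accuracy of the P300 selection stage (assumed value);
    q: accuracy of the SSVEP confirmation stage (assumed value).
    A trial terminates only when the confirmation stage accepts a
    selection; otherwise the user retries. Over repeated attempts the
    accuracy of the confirmed output is p*q / (p*q + (1-p)*(1-q))."""
    return p * q / (p * q + (1 - p) * (1 - q))

# A 90%-accurate selector plus a 95%-accurate confirmer yields ~99.4%
print(round(two_stage_accuracy(0.90, 0.95), 3))  # prints 0.994
```

The cost of this error suppression is selection time, since rejected attempts must be repeated, which matches the paper's trade-off of about 26 s per destination.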

74 citations

Journal ArticleDOI
TL;DR: Experimental results from eight participants show that the BCI-based model of destination selection can be built with EEG data from eight channels, and participants can use this BCI to select a desired destination with an accuracy value of 93.6% within about 12 s of selection time.
Abstract: In this paper, we propose a P300 brain-computer interface (BCI) with visual stimuli presented on a head-up display and we apply this BCI for selecting destinations of a simulated vehicle in a virtual scene. To improve the usability of the selection system, we analyze the effects of the number of electroencephalogram (EEG) rounds on system performance. Experimental results from eight participants show that the BCI-based model of destination selection can be built with EEG data from eight channels, and participants can use this BCI to select a desired destination with an accuracy value of 93.6% ± 1.6% (mean value with standard error) in about 12 s of selection time. This paper lays a foundation for developing vehicles that use a BCI to select a desired destination from a list of predefined destinations and then use an autonomous navigation system to reach the desired destination.
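The trade-off analyzed above (more EEG rounds yield higher accuracy but longer selection time) can be illustrated with a toy Monte Carlo simulation. The signal amplitude, menu size, and noise model below are assumptions for demonstration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_selection(n_rounds, n_targets=8, amp=0.8):
    """One simulated P300 selection over a menu of n_targets items.
    Each round, every item receives a noisy score; the attended target
    gets an extra 'amp' (a hypothetical P300 amplitude). Scores are
    summed across rounds and the highest-scoring item is chosen."""
    target = rng.integers(n_targets)
    scores = np.zeros(n_targets)
    for _ in range(n_rounds):
        round_scores = rng.standard_normal(n_targets)
        round_scores[target] += amp
        scores += round_scores
    return scores.argmax() == target

# Accuracy climbs with the number of averaged rounds, at the cost of a
# proportionally longer selection time.
accs = {}
for n_rounds in (1, 4, 8, 16):
    accs[n_rounds] = np.mean([simulate_selection(n_rounds) for _ in range(2000)])
    print(f"{n_rounds:2d} rounds: accuracy ~ {accs[n_rounds]:.2f}")
```

Averaging raises the effective signal-to-noise ratio roughly with the square root of the number of rounds, which is why the usability analysis in the paper must balance accuracy against selection time.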

53 citations

Journal ArticleDOI
01 Aug 2017
TL;DR: This study provides insights into simulating and predicting the performance of using BCI systems to control other external devices (e.g., mobile robots) and has potential value in helping develop a brain-controlled assistive vehicle.
Abstract: Directly using brain signals rather than limbs to steer a vehicle may not only help disabled people to control an assistive vehicle, but also provide a complementary means of control for a wider driving community. In this paper, to simulate and predict driver performance in steering a vehicle with brain signals, we propose a driver brain-controlled steering model by combining an extended queuing network-based driver model with a brain–computer interface (BCI) performance model. Experimental results suggest that the proposed driver brain-controlled steering model has performance close to that of real drivers with good performance in brain-controlled driving. The brain-controlled steering model has potential value in helping develop a brain-controlled assistive vehicle. Furthermore, this study provides some insights into the simulation and prediction of the performance of using BCI systems to control other external devices (e.g., mobile robots).

36 citations


Cited by
01 Jan 1979
TL;DR: This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis and addressing interesting real-world computer Vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. Therefore, how to use frequent classes to help learn rare classes, for which training data are harder to collect, is an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: The current review evaluates EEG-based BCI paradigms regarding their advantages and disadvantages from a variety of perspectives and, for each paradigm, assesses various EEG decoding algorithms and classification methods.
Abstract: Advances in brain science and computer technology in the past decade have led to exciting developments in brain-computer interface (BCI), thereby making BCI a top research area in applied science. The renaissance of BCI opens new methods of neurorehabilitation for physically disabled people (e.g. paralyzed patients and amputees) and patients with brain injuries (e.g. stroke patients). Recent technological advances such as wireless recording, machine learning analysis, and real-time temporal resolution have increased interest in electroencephalographic (EEG) based BCI approaches. Many BCI studies have focused on decoding EEG signals associated with whole-body kinematics/kinetics, motor imagery, and various senses. Thus, there is a need to understand the various experimental paradigms used in EEG-based BCI systems. Moreover, given that there are many available options, it is essential to choose the most appropriate BCI application to properly manipulate a neuroprosthetic or neurorehabilitation device. The current review evaluates EEG-based BCI paradigms regarding their advantages and disadvantages from a variety of perspectives. For each paradigm, various EEG decoding algorithms and classification methods are evaluated. The applications of these paradigms with targeted patients are summarized. Finally, potential problems with EEG-based BCI systems are discussed, and possible solutions are proposed.

475 citations

Journal ArticleDOI
TL;DR: The present research suggests that the VisAWI appears to be a sound measure of visual aesthetics of websites comprising facets of both practical and theoretical interest.
Abstract: Visual aesthetics has been shown to critically affect a variety of constructs such as perceived usability, satisfaction, and pleasure. Given the importance of visual aesthetics in human-computer interaction, it is vital that it is adequately assessed. The present research aimed to provide a precise operational definition and to develop a new measure of perceived visual aesthetics of websites. Construction of the Visual Aesthetics of Website Inventory (VisAWI) was based on a comprehensive and broad definition of visual aesthetics so that the resulting instrument would completely describe the domain of interest. Four interrelated facets of perceived visual aesthetics of websites were identified and validated in a series of seven studies. Simplicity and Diversity have repeatedly been treated as formal parameters of aesthetic objects throughout the history of empirical aesthetics, Colors are a critical property of aesthetic objects, and Craftsmanship addresses the skillful and coherent integration of the relevant design dimensions. These four facets jointly represent perceived visual aesthetics, but are still distinguishable from each other and carry unique meaning. The subscales contained in the VisAWI demonstrate good internal consistencies. Evidence for the convergent, divergent, discriminative, and concurrent validity of the VisAWI is provided. Overall, the present research suggests that the VisAWI is a sound measure of visual aesthetics of websites, comprising facets of both practical and theoretical interest.

449 citations

Journal ArticleDOI
TL;DR: The application areas that could benefit from brain waves in facilitating or achieving their goals are shown, and major usability and technical challenges that face the utilization of brain signals in various components of a BCI system are discussed.

397 citations
