Author

Dae-Jin Kim

Bio: Dae-Jin Kim is an academic researcher from KAIST. The author has contributed to research in the topics of soft computing and robotic arms. The author has an h-index of 9 and has co-authored 21 publications receiving 351 citations.

Papers
Journal ArticleDOI
TL;DR: This paper reports key results from the design and evaluation of a wheelchair-based robotic arm system for the disabled, named KARES II (KAIST Rehabilitation Engineering Service System II).
Abstract: In this paper, we report key results from the design and evaluation of KARES II (KAIST Rehabilitation Engineering Service System II), a newly developed wheelchair-based robotic arm system for the disabled. KARES II is designed around tasks identified as necessary for the target users, that is, people with spinal cord injury. First, we predefined twelve important tasks based on extensive interviews and questionnaires. Next, all subsystems were designed, simulated, and developed around these tasks. A robotic arm with active compliance and intelligent visual-servoing capability was developed using a cable-driven mechanism. Various human-robot interfaces were developed to provide a broad range of services according to the level of disability; eye-mouse, shoulder/head interface, and EMG signal-based control subsystems serve this purpose. We also describe the integration of the KARES II rehabilitation robotic system and discuss user trials. A mobile platform and a wheelchair platform are the two main platforms on which the various subsystems are installed. As a real-world evaluation of the KARES II system, we performed user trials with six selected potential end users with spinal cord injury.

98 citations

Proceedings ArticleDOI
Jeong-Su Han1, Z. Zenn Bien1, Dae-Jin Kim1, Hyong-Euk Lee1, Jong-Sung Kim 
17 Sep 2003
TL;DR: This paper classifies predefined motions (rest, forward movement, left movement, and right movement) with fuzzy min-max neural networks (FMMNN) and shows the feasibility of EMG as an input interface for a powered wheelchair.
Abstract: The objective of this paper is to develop an EMG-based powered-wheelchair controller for users with high-level spinal cord injury. EMG is measured very naturally when the user indicates a certain direction, and the force information used to set the wheelchair's speed is easily extracted from the EMG. Furthermore, emergency situations can be detected from EMG relatively easily. We classified the predefined motions (rest, forward movement, left movement, and right movement) with fuzzy min-max neural networks (FMMNN). The classification results and evaluation results with real users show the feasibility of EMG as an input interface for a powered wheelchair.
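The FMMNN classifier mentioned in the abstract represents each class by hyperboxes in feature space and classifies by fuzzy membership. The sketch below is a minimal, illustrative simplification (expansion-only training, no overlap test or contraction step); the parameter values and motion labels are assumptions, not taken from the paper:

```python
import numpy as np

class FuzzyMinMaxNN:
    """Minimal fuzzy min-max neural network sketch (Simpson-style hyperboxes)."""

    def __init__(self, gamma=4.0, theta=0.3):
        self.gamma = gamma  # membership steepness
        self.theta = theta  # maximum hyperbox size per dimension
        self.V, self.W, self.labels = [], [], []  # box mins, maxes, classes

    def _membership(self, x, v, w):
        # Degree to which x falls inside (or near) the hyperbox [v, w]
        left = np.maximum(0, 1 - np.maximum(0, self.gamma * np.minimum(1, v - x)))
        right = np.maximum(0, 1 - np.maximum(0, self.gamma * np.minimum(1, x - w)))
        return (left.sum() + right.sum()) / (2 * len(x))

    def fit(self, X, y):
        for x, label in zip(X, y):
            # Try to expand an existing hyperbox of the same class
            for j, (v, w, c) in enumerate(zip(self.V, self.W, self.labels)):
                if c == label:
                    nv, nw = np.minimum(v, x), np.maximum(w, x)
                    if np.all(nw - nv <= self.theta):
                        self.V[j], self.W[j] = nv, nw
                        break
            else:
                # No expandable box: create a new point hyperbox
                self.V.append(x.copy())
                self.W.append(x.copy())
                self.labels.append(label)
        return self

    def predict(self, X):
        # Assign each sample the class of its best-matching hyperbox
        return [self.labels[int(np.argmax(
                    [self._membership(x, v, w)
                     for v, w in zip(self.V, self.W)]))]
                for x in X]
```

In use, normalized EMG feature vectors would be fed to `fit` with labels such as "rest", "forward", "left", and "right", and `predict` would drive the wheelchair command.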

97 citations

Proceedings ArticleDOI
Zeungnam Bien1, Dae-Jin Kim1, Myung Jin Chung1, Dong-Soo Kwon1, Pyung-Hun Chang1 
20 Jul 2003
TL;DR: Experimental results show that all subsystems can perform the defined tasks through the robotic arm in an integrated way.
Abstract: This paper describes our ongoing project on a new wheelchair-based rehabilitation robotic system for the disabled, called KARES II (KAIST Rehabilitation Engineering Service System II). We concentrate on the design and visual servoing of the robotic arm together with three human-robot interaction subsystems: an eye-mouse, an EMG interface, and a haptic suit interface. First, the specific required tasks of the robotic arm system are defined according to extensive surveys and interviews with the potential users, i.e., people with spinal cord injury. To design the robotic arm effectively for the predefined tasks, a target-oriented design procedure is adopted. Next, a visual servoing subsystem for the robotic arm is designed and integrated to perform the predefined tasks in an uncertain, time-varying environment. Finally, various human-robot interaction devices are proposed as interfaces for users with diverse physical disabilities. One or more of these interfaces may be selected on the basis of each user's need, and these input devices can be used in a complementary way according to the user's preference and degree of disability. Experimental results show that all subsystems can perform the defined tasks through the robotic arm in an integrated way.

51 citations

Proceedings ArticleDOI
25 May 2003
TL;DR: A method to construct a personalized classifier based on a novel feature selection method in the framework of fuzzy neural networks (FNN); experiments and simulations show that the proposed method is effective both for facial expression recognition and as a pattern classifier in itself.
Abstract: Facial expression recognition is very important in many human-robot and human-computer interaction systems. Although much research has been done, practical real-world applications are hard to find because individual differences among people are underestimated. As a solution to this problem, we introduce a 'personalized' facial expression recognition system. Much previous work on facial expression recognition focuses on the well-known six universal facial expressions (happiness, sadness, fear, anger, surprise, and disgust) using a unified (non-separated) classification approach. However, for ordinary people it is very difficult to make such facial expressions without considerable effort and training. Instead of the universal facial expressions, many people typically show 'personalized' or 'individualized' facial expressions. To deal with such personal variation, we propose a method to construct a personalized classifier based on a novel feature selection method. Specifically, feature selection is performed by a histogram-based approach in the framework of fuzzy neural networks (FNN). We also use an integrated approach for facial expression recognition. Experiments and simulations show that the proposed method is effective both for facial expression recognition and as a pattern classifier in itself.
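One way to read "histogram-based feature selection" is: for a given user, score each feature by how little its per-expression histograms overlap, then keep the most discriminative features for that user's personalized classifier. The sketch below is a hypothetical illustration of that idea; the function names and the overlap score are assumptions, not the paper's exact method:

```python
import numpy as np

def histogram_feature_scores(X, y, bins=8):
    """Score each feature by how well its class-conditional histograms separate.

    A score near 1 means the per-class histograms barely overlap (a good
    feature for this user); a score near 0 means they are nearly identical.
    """
    y = np.asarray(y)
    classes = sorted(set(y))
    scores = np.zeros(X.shape[1])
    for f in range(X.shape[1]):
        lo, hi = X[:, f].min(), X[:, f].max()
        hists = []
        for c in classes:
            h, _ = np.histogram(X[y == c, f], bins=bins, range=(lo, hi))
            hists.append(h / max(h.sum(), 1))  # normalize to a distribution
        # Pairwise overlap: sum of per-bin minima (1 = identical, 0 = disjoint)
        overlap = np.mean([np.minimum(hists[i], hists[j]).sum()
                           for i in range(len(classes))
                           for j in range(i + 1, len(classes))])
        scores[f] = 1.0 - overlap
    return scores

def select_features(X, y, k=2, bins=8):
    """Return the indices of the k most discriminative features for this user."""
    return np.argsort(histogram_feature_scores(X, y, bins))[::-1][:k]
```

The selected feature subset would then feed the per-user FNN classifier, so each person's classifier sees only the features that separate that person's expressions well.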

27 citations

Proceedings ArticleDOI
25 May 2005
TL;DR: This paper surveys state-of-the-art reports on FER from the viewpoint of SCT and briefly discusses a fuzzy observer-based approach, a personalized FER system based on fuzzy neural networks, and Gabor wavelet neural networks.
Abstract: Facial expression recognition (FER) is one of the biosignal-based recognition techniques that have attracted much attention recently. To deal with its complex characteristics effectively, we adopt soft computing techniques (SCT) such as fuzzy logic, neural networks, genetic algorithms, and rough set theory. In this paper, we survey state-of-the-art reports on FER from the viewpoint of SCT and introduce some interesting work by our group on SCT-based facial emotional expression recognition. Specifically, we briefly discuss 1) a fuzzy observer-based approach, 2) a personalized FER system based on fuzzy neural networks, and 3) Gabor wavelet neural networks.

21 citations


Cited by
Journal ArticleDOI
TL;DR: This paper reviews recent research and development in pattern recognition- and non-pattern recognition-based myoelectric control, and presents state-of-the-art achievements in terms of their type, structure, and potential application.

1,111 citations

Journal ArticleDOI
TL;DR: This work presents a method to adjust SVM parameters before classification, and examines overlapped segmentation and majority voting as two techniques to improve controller performance.
Abstract: This paper proposes and evaluates the application of support vector machines (SVM) to classify upper limb motions using myoelectric signals. It explores the optimum configuration of SVM-based myoelectric control by suggesting an advantageous data segmentation technique, feature set, model selection approach for SVM, and postprocessing methods. This work presents a method to adjust SVM parameters before classification, and examines overlapped segmentation and majority voting as two techniques to improve controller performance. An SVM, as the core of classification in myoelectric control, is compared with two commonly used classifiers: linear discriminant analysis (LDA) and multilayer perceptron (MLP) neural networks. It demonstrates exceptional accuracy, robust performance, and low computational load. The entropy of the classifier's output is also examined as an online index to evaluate the correctness of classification; this can be used for online training in long-term myoelectric control operations.
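The overlapped-segmentation and majority-voting ideas from the abstract can be sketched as follows. The window length, step size, and time-domain features (mean absolute value and waveform length) are illustrative choices, not necessarily the paper's configuration; scikit-learn's SVC stands in for the SVM:

```python
import numpy as np
from sklearn.svm import SVC

def overlapped_windows(signal, win=200, step=50):
    """Slice a 1-D EMG channel into overlapping analysis windows."""
    return [signal[s:s + win] for s in range(0, len(signal) - win + 1, step)]

def features(window):
    # Two classic time-domain EMG features: mean absolute value (MAV)
    # and waveform length. An illustrative set, not the paper's exact one.
    return [np.mean(np.abs(window)), np.sum(np.abs(np.diff(window)))]

def majority_vote(labels):
    # Postprocessing: collapse the stream of per-window decisions
    # into one stable motion decision.
    vals, counts = np.unique(labels, return_counts=True)
    return vals[np.argmax(counts)]

def classify_stream(clf, signal, win=200, step=50):
    # Classify each overlapped window, then vote over the decisions.
    per_window = clf.predict([features(w)
                              for w in overlapped_windows(signal, win, step)])
    return majority_vote(per_window)
```

Overlapping the windows yields more (and more frequent) decisions from the same signal, and the vote suppresses isolated misclassifications, which is why the combination improves controller performance.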

730 citations

Journal ArticleDOI
TL;DR: The experimental validation shows the effectiveness of the proposed driver fatigue recognition model and indicates that the contact physiological features are significant factors for inferring the fatigue state of a driver.

288 citations

Journal ArticleDOI
TL;DR: The major benefits and challenges of myoelectric interfaces are evaluated and recommendations are given, for example, for electrode placement, sampling rate, segmentation, and classifiers.

253 citations

Journal ArticleDOI
TL;DR: The implementation and validation of a hidden Markov model (HMM) for estimating human affective state in real time, using robot motions as the stimulus, and the results of the HMM affective estimation are compared to a previously implemented fuzzy inference engine.
Abstract: In order for humans and robots to interact in an effective and intuitive manner, robots must obtain information about the human affective state in response to the robot's actions. This secondary mode of interactive communication is hypothesized to permit a more natural collaboration, similar to the "body language" interaction between two cooperating humans. This paper describes the implementation and validation of a hidden Markov model (HMM) for estimating human affective state in real time, using robot motions as the stimulus. Inputs to the system are physiological signals such as heart rate, perspiration rate, and facial muscle contraction. Affective state was estimated using a two-dimensional valence-arousal representation. A robot manipulator was used to generate motions expected during human-robot interaction, and human subjects were asked to report their responses to these motions. The human physiological response was also measured. Robot motions were generated using both a nominal potential field planner and a recently reported safe motion planner that minimizes the potential collision forces along the path. The robot motions were tested with 36 subjects. These data were used to train and validate the HMM. The results of the HMM affective estimation are also compared to a previously implemented fuzzy inference engine.

216 citations