Author

Jong-Hwan Kim

Bio: Jong-Hwan Kim is an academic researcher from KAIST. The author has contributed to research in the topics Robot and Mobile robot. The author has an h-index of 42 and has co-authored 424 publications receiving 9,020 citations. Previous affiliations of Jong-Hwan Kim include the University of Louisiana at Lafayette and Samsung.


Papers
Journal Article
Kuk-Hyun Han, Jong-Hwan Kim
TL;DR: A Q-gate is introduced as a variation operator to drive the individuals toward better solutions, and the results show that QEA performs well, even with a small population and without premature convergence, compared to the conventional genetic algorithm.
Abstract: This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, evaluation function, and population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments were carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm.
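To make the representation described above concrete, the following is a minimal sketch (not the paper's implementation) of a Q-bit individual, the observation step, and a rotation-style Q-gate update; the function names, the rotation angle `delta`, and the update rule are illustrative assumptions.

```python
import numpy as np

def init_qbits(n):
    # Each Q-bit (alpha, beta) starts in equal superposition: |alpha|^2 + |beta|^2 = 1.
    return np.full((n, 2), 1.0 / np.sqrt(2.0))

def observe(q):
    # Collapse each Q-bit to 0/1, taking 1 with probability |beta|^2.
    return (np.random.rand(len(q)) < q[:, 1] ** 2).astype(int)

def rotate(q, x, best, delta=0.01 * np.pi):
    # Rotate each Q-bit toward the corresponding bit of the best solution seen so far.
    for i in range(len(q)):
        if x[i] != best[i]:
            theta = delta if best[i] == 1 else -delta
            c, s = np.cos(theta), np.sin(theta)
            q[i] = np.array([[c, -s], [s, c]]) @ q[i]
    return q
```

In a full QEA loop, observe() would be called once per Q-bit individual per generation, and rotate() would gradually pull the population toward the best observed solution.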

1,335 citations

Proceedings Article
Kuk-Hyun Han, Jong-Hwan Kim
16 Jul 2000
TL;DR: The results show that GQA is superior to other genetic algorithms using penalty functions, repair methods, and decoders, and that GQA can represent a linear superposition of solutions due to its probabilistic representation.
Abstract: This paper proposes a novel evolutionary computing method called a genetic quantum algorithm (GQA). GQA is based on the concept and principles of quantum computing such as qubits and superposition of states. Instead of binary, numeric, or symbolic representation, by adopting qubit chromosome as a representation GQA can represent a linear superposition of solutions due to its probabilistic representation. As genetic operators, quantum gates are employed for the search of the best solution. Rapid convergence and good global search capability characterize the performance of GQA. The effectiveness and the applicability of GQA are demonstrated by experimental results on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that GQA is superior to other genetic algorithms using penalty functions, repair methods and decoders.
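As a companion to the abstract, here is a minimal sketch of evaluating a binary solution observed from a qubit chromosome on a knapsack instance; the greedy repair of overweight solutions is an illustrative assumption, not necessarily the constraint handling used in the paper.

```python
import numpy as np

def knapsack_fitness(x, profits, weights, capacity):
    # x: 0/1 vector observed from a qubit chromosome.
    x = x.copy()
    # Assumed repair: drop the least profitable selected items until the capacity holds.
    while weights[x == 1].sum() > capacity:
        selected = np.flatnonzero(x)
        x[selected[np.argmin(profits[selected])]] = 0
    return profits[x == 1].sum(), x
```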

622 citations

Journal Article
Jong-Min Yang, Jong-Hwan Kim
01 Jun 1999
TL;DR: A novel sliding mode control law is proposed for asymptotically stabilizing the mobile robot to a desired trajectory and it is shown that the proposed scheme is robust to bounded external disturbances.
Abstract: Nonholonomic mobile robots have constraints imposed on the motion that are not integrable, i.e., the constraints cannot be written as time derivatives of some function of the generalized coordinates. The position control of nonholonomic mobile robots has been an important class of control problems. In this paper, we propose a robust tracking control of nonholonomic wheeled mobile robots using sliding mode. The posture of a mobile robot is represented by polar coordinates and the dynamic equation of the robot is feedback-linearized by the computed-torque method. A novel sliding mode control law is proposed for asymptotically stabilizing the mobile robot to a desired trajectory. It is shown that the proposed scheme is robust to bounded external disturbances. Experimental results demonstrate the effectiveness of accurate tracking capability and the robust performance of the proposed scheme.
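The kinematic sketch below only illustrates the flavor of a sliding-mode-style heading law on a unicycle model; the paper itself works in polar coordinates with a computed-torque (dynamic) formulation, so the surface, gains, and boundary layer here are assumptions for illustration.

```python
import numpy as np

def unicycle_step(state, v, w, dt=0.01):
    # Nonholonomic unicycle kinematics: state = (x, y, heading).
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def heading_control(state, target, k=2.0, lam=1.0, phi=0.1):
    # Sliding-mode-flavored steering toward a target point (illustrative gains).
    x, y, th = state
    e = np.arctan2(target[1] - y, target[0] - x) - th
    e = np.arctan2(np.sin(e), np.cos(e))   # wrap heading error to (-pi, pi]
    s = lam * e                            # first-order sliding surface on heading error
    return k * np.tanh(s / phi)            # switching term with a boundary layer to limit chattering
```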

607 citations

Journal Article
Kuk-Hyun Han, Jong-Hwan Kim
TL;DR: The results show that the updated QEA is more powerful than the previous QEA in terms of convergence speed, fitness, and robustness.
Abstract: From recent research on combinatorial optimization of the knapsack problem, the quantum-inspired evolutionary algorithm (QEA) was proved to be better than conventional genetic algorithms. To improve the performance of QEA, this paper proposes research issues on QEA such as a termination criterion, a Q-gate, and a two-phase scheme, for a class of numerical and combinatorial optimization problems. A new termination criterion is proposed which gives a clearer meaning to the convergence of Q-bit individuals. A novel variation operator, the Hε gate, which is a modified version of the rotation gate, is proposed along with a two-phase QEA scheme based on an analysis of the effect of changing the initial conditions of the Q-bits of a Q-bit individual in the first phase. To demonstrate the effectiveness and applicability of the updated QEA, several experiments are carried out on a class of numerical and combinatorial optimization problems. The results show that the updated QEA is more powerful than the previous QEA in terms of convergence speed, fitness, and robustness.
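A rough sketch of what an Hε-style gate could look like, assuming it clips the observation probabilities away from 0 and 1 after the usual rotation so that Q-bit individuals never fully converge; the exact rule in the paper may differ.

```python
import numpy as np

def h_epsilon(qbit, eps=0.01):
    # qbit = (alpha, beta); assumed clipping keeps the probability of observing 1
    # within [eps, 1 - eps] to preserve some exploration.
    a, b = qbit
    p1 = np.clip(b ** 2, eps, 1.0 - eps)
    # Preserve the signs of the amplitudes while re-normalising.
    return np.array([np.copysign(np.sqrt(1.0 - p1), a),
                     np.copysign(np.sqrt(p1), b)])
```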

446 citations

Journal Article
Jong-Hwan Kim, Hyun Myung
TL;DR: Simulations indicate that the TPEP achieves an exact global solution without gradient information, with less computation time than the other optimization methods studied here, for general constrained optimization problems.
Abstract: Two evolutionary programming (EP) methods are proposed for handling nonlinear constrained optimization problems. The first, a hybrid EP, is useful when addressing heavily constrained optimization problems both in terms of computational efficiency and solution accuracy. But this method offers an exact solution only if both the mathematical form of the objective function to be minimized/maximized and its gradient are known. The second method, a two-phase EP (TPEP) removes these restrictions. The first phase uses the standard EP, while an EP formulation of the augmented Lagrangian method is employed in the second phase. Through the use of Lagrange multipliers and by gradually placing emphasis on violated constraints in the objective function whenever the best solution does not fulfill the constraints, the trial solutions are driven to the optimal point where all constraints are satisfied. Simulations indicate that the TPEP achieves an exact global solution without gradient information, with less computation time than the other optimization methods studied here, for general constrained optimization problems.
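The augmented-Lagrangian objective used in the second phase can be sketched as follows; the function names, penalty weight, and multiplier update schedule are assumptions rather than the paper's exact settings.

```python
def augmented_lagrangian(f, g_list, lam, rho):
    # f: objective to minimise; g_list: inequality constraints g(x) <= 0;
    # lam: Lagrange multipliers; rho: penalty weight.
    def L(x):
        val = f(x)
        for g, l in zip(g_list, lam):
            psi = max(g(x), -l / rho)      # standard form for inequality constraints
            val += l * psi + 0.5 * rho * psi ** 2
        return val
    return L

def update_multipliers(x, g_list, lam, rho):
    # Increase emphasis on constraints still violated at the current best x.
    return [max(0.0, l + rho * g(x)) for g, l in zip(g_list, lam)]
```

In a two-phase scheme of this kind, the EP population would minimise L(x) between multiplier updates, so constraint emphasis grows only where the best solution remains infeasible.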

224 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models, in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal Article
TL;DR: The GA's population-based approach and its ability to make pairwise comparisons in the tournament selection operator are exploited to devise a penalty function approach that requires no penalty parameter to guide the search towards the constrained optimum.
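A minimal sketch of the penalty-parameter-free comparison this TL;DR describes, assuming the standard feasibility-first tournament rules; the paper's specific constraint-violation measure is not reproduced here.

```python
def better(a, b):
    # Each candidate is (objective_value, total_constraint_violation); minimisation.
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:
        return a if fa <= fb else b        # both feasible: lower objective wins
    if va == 0 or vb == 0:
        return a if va == 0 else b         # feasible beats infeasible
    return a if va <= vb else b            # both infeasible: smaller violation wins
```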

3,495 citations

Journal Article
TL;DR: NeuroEvolution of Augmenting Topologies (NEAT) employs a principled method of crossover of different topologies, protects structural innovation using speciation, and incrementally grows networks from minimal structure.
Abstract: An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
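To illustrate the speciation idea, here is a sketch of a NEAT-style compatibility distance computed from innovation-numbered connection genes; the genome encoding and coefficients are illustrative assumptions, not the paper's tuned values.

```python
def compatibility(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    # genome_*: dict mapping innovation number -> connection weight.
    if not genome_a or not genome_b:
        return float("inf")
    innov_a, innov_b = set(genome_a), set(genome_b)
    matching = innov_a & innov_b
    non_matching = innov_a ^ innov_b
    max_innov = min(max(innov_a), max(innov_b))
    excess = sum(1 for i in non_matching if i > max_innov)   # beyond the other genome's range
    disjoint = len(non_matching) - excess                    # within range but unmatched
    n = max(len(genome_a), len(genome_b))
    w_bar = (sum(abs(genome_a[i] - genome_b[i]) for i in matching) / len(matching)
             if matching else 0.0)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```

Genomes whose distance falls below a threshold would share a species, which is how structural innovations are protected long enough to be optimised.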

3,265 citations