Author

Charles Lesire

Bio: Charles Lesire is an academic researcher from the University of Toulouse. The author has contributed to research on topics including robotics and computer science. The author has an h-index of 14 and has co-authored 53 publications receiving 506 citations. Previous affiliations of Charles Lesire include ENSAE ParisTech and the Centre national de la recherche scientifique.


Papers
Book Chapter
05 Nov 2012
TL;DR: The current state of MORSE is presented, highlighting its unique features in use cases, including software-in-the-loop connectivity, multiple middleware support, configurable components, varying levels of simulation abstraction, distributed implementation for large scale multi-robot simulations and a human avatar that can interact with robots in virtual environments.
Abstract: MORSE is a robotic simulation software developed by roboticists from several research laboratories. It is a framework to evaluate robotic algorithms and their integration in complex environments, modeled with the Blender 3D real-time engine which brings realistic rendering and physics simulation. The simulations can be specified at various levels of abstraction. This enables researchers to focus on their field of interest, that can range from processing low-level sensor data to the integration of a complete team of robots. After nearly three years of development, MORSE is a mature tool with a large collection of components, that provides many innovative features: software-in-the-loop connectivity, multiple middleware support, configurable components, varying levels of simulation abstraction, distributed implementation for large scale multi-robot simulations and a human avatar that can interact with robots in virtual environments. This paper presents the current state of MORSE, highlighting its unique features in use cases.
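As a concrete illustration, MORSE scenes are typically declared through its Python Builder API: robots, sensors, actuators and middleware bindings are composed in a short script. The sketch below follows the documented Builder style; the exact component and environment names (ATRV, MotionVW, 'outdoors') vary between MORSE versions and are given here only as an indicative example, not as the configuration used in the paper.

# Minimal MORSE Builder sketch: one robot, one sensor, one actuator,
# exposed over the socket middleware (names indicative, version-dependent).
from morse.builder import *

robot = ATRV()                       # four-wheeled outdoor robot model

pose = Pose()                        # ground-truth pose sensor
pose.translate(z=0.75)
robot.append(pose)

motion = MotionVW()                  # linear/angular velocity actuator
robot.append(motion)

# Expose the components through one of the supported middlewares.
pose.add_stream('socket')
motion.add_stream('socket')

env = Environment('outdoors')
env.set_camera_location([10.0, -10.0, 10.0])
env.set_camera_rotation([1.0, 0.0, 0.8])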

70 citations

Proceedings Article
27 Jun 2008
TL;DR: This study presents the componentization of the functional level of a robot, the synthesis of an execution controller as well as validation techniques for checking essential “safety” properties.
Abstract: Autonomous robots are complex systems that require the interaction and cooperation of numerous heterogeneous software components. Robots are increasingly critical systems that must meet safety properties, in particular temporal and real-time constraints. We present a methodology for modeling and analyzing a robotic system using the BIP component framework, integrated with an existing framework and architecture, the LAAS Architecture for Autonomous Systems, based on GenoM. The BIP componentization approach has been successfully used in other domains. In this study, we show how it can be seamlessly integrated into the preexisting methodology. We present the componentization of the functional level of a robot, the synthesis of an execution controller, and validation techniques for checking essential "safety" properties.
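To make the behaviour/interaction/priority layering concrete, the toy sketch below mimics it in plain Python: components are port-labelled automata, an interaction synchronises one port per participating component, and priorities discard an interaction when a higher-priority competitor is enabled. This is an illustrative sketch only, not the BIP toolset, GenoM, or the models used in the paper.

# Toy rendering of BIP's three layers; all names here are hypothetical.
class Component:
    """Behaviour layer: an automaton whose transitions are labelled by ports."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        self.transitions = transitions            # {(state, port): next_state}

    def enabled(self, port):
        return (self.state, port) in self.transitions

    def fire(self, port):
        self.state = self.transitions[(self.state, port)]


def step(components, interactions, priorities):
    """Interaction layer: an interaction is a tuple of (component, port) pairs
    that must all be enabled to fire together. Priority layer: an interaction
    is discarded when one of its higher-priority competitors is also enabled."""
    enabled = [ia for ia in interactions
               if all(components[c].enabled(p) for c, p in ia)]
    maximal = [ia for ia in enabled
               if not any(hi in enabled for hi in priorities.get(ia, ()))]
    if not maximal:
        return None
    chosen = maximal[0]               # a real engine would choose fairly
    for c, p in chosen:
        components[c].fire(p)
    return chosen


# Example: a camera and a mission controller that must start synchronously.
camera = Component('camera', 'idle', {('idle', 'start'): 'running'})
ctrl   = Component('ctrl',   'idle', {('idle', 'trigger'): 'busy'})
comps  = {'camera': camera, 'ctrl': ctrl}
sync_start = (('camera', 'start'), ('ctrl', 'trigger'))
print(step(comps, [sync_start], {}))  # fires the joint start interaction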

46 citations

Book Chapter
20 Jun 2005
TL;DR: This model is based on a particle filtering-like representation of the probabilistic uncertainty on the continuous part of the procedure, and a possibilistic Petri net-inspired approach to deal with the uncertainty on events.
Abstract: In the framework of the study and analysis of new flight procedures, we propose a new Petri net-based formalism to represent both continuous and discrete evolutions and uncertainties: the particle Petri net. This model is based on a particle filtering-like representation of the probabilistic uncertainty on the continuous part of the procedure, and a possibilistic Petri net-inspired approach to deal with the uncertainty on events. After introducing this formalism, we propose an analysis of an approach procedure, and a further application to the on-line tracking of pilots' activities.
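The sketch below conveys the flavour of the model in plain Python: tokens in a place carry a particle cloud over the continuous state, prediction resembles a particle filter, and transition firing moves only the particles that satisfy a guard, so the discrete marking itself remains uncertain. The code and the approach-procedure numbers are purely illustrative, not the paper's formal definition.

# Illustrative sketch of the particle Petri net idea (hypothetical code).
import random

class Place:
    def __init__(self, name, particles=None):
        self.name = name
        self.particles = particles or []          # samples of the continuous state

def propagate(place, dynamics, noise_std, dt):
    """Particle-filter-like prediction of the continuous state inside a place."""
    place.particles = [dynamics(x, dt) + random.gauss(0.0, noise_std)
                       for x in place.particles]

def fire(src, dst, guard):
    """Move the particles that satisfy the transition guard to the output place;
    the others stay put, so the marking itself carries the event uncertainty."""
    moved = [x for x in src.particles if guard(x)]
    src.particles = [x for x in src.particles if not guard(x)]
    dst.particles.extend(moved)

# Toy example: an aircraft descending towards a 1000 ft decision altitude.
approach = Place('approach', [random.gauss(1500.0, 50.0) for _ in range(200)])
final    = Place('final')
propagate(approach, lambda alt, dt: alt - 600.0 * dt, noise_std=10.0, dt=1.0)
fire(approach, final, guard=lambda alt: alt < 1000.0)
print(len(final.particles), "of 200 particles are past the decision altitude")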

43 citations

Proceedings Article
22 Sep 2007
TL;DR: A comparison of two "planning" approaches dealing with temporal and/or discrete uncertainties and with a strong emphasis on robust execution, based on timed game automata and reachability analysis, and uses the UPPAAL-TIGA system.
Abstract: Planning for real world applications, with explicit temporal representation and robust execution, is a very challenging problem. To tackle it, the planning community has proposed a number of original and successful approaches. However, there are other paradigms "outside" the Automated Planning field which may prove successful with respect to this objective. This paper presents a comparison of two "planning" approaches that deal with temporal and/or discrete uncertainties and place a strong emphasis on robust execution. The first approach is based on chronicles and constraint satisfaction techniques; it relies on a causal link partial order temporal planner, in our case IXTET. The second approach is based on timed game automata and reachability analysis, and uses the UPPAAL-TIGA system. The comparison is both qualitative (the kind of problems modeled and the properties of plans obtained) and quantitative (experimental results on a real example). To make this comparison possible, we propose a general scheme to translate a subset of IXTET planning problems into UPPAAL-TIGA game-reachability problems. A direct consequence of this automated process would be the possibility of applying validation and verification techniques available in the timed automata community.
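The sketch below gives a deliberately simplified picture of such a translation: each durative planning action becomes a small timed automaton with one clock, a controllable "start" edge and an uncontrollable "end" edge whose guard encodes the duration interval. It is not the paper's actual encoding, nor UPPAAL-TIGA's input syntax; all structures are hypothetical.

# Hypothetical sketch: a durative action as a tiny timed (game) automaton.
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: str
    dst: str
    guard: str
    controllable: bool
    resets: tuple = ()

@dataclass
class TimedAutomaton:
    name: str
    locations: list = field(default_factory=list)
    invariants: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

def action_to_automaton(name, dmin, dmax):
    """An action with duration in [dmin, dmax] becomes a 3-location automaton:
    starting it is controllable, its end is up to the environment."""
    ta = TimedAutomaton(name, locations=['idle', 'running', 'done'])
    ta.invariants['running'] = f'x <= {dmax}'          # cannot overrun dmax
    ta.edges.append(Edge('idle', 'running', 'true', controllable=True, resets=('x',)))
    ta.edges.append(Edge('running', 'done', f'x >= {dmin}', controllable=False))
    return ta

# Example: a goto_waypoint action lasting between 20 and 35 time units.
print(action_to_automaton('goto_waypoint', 20, 35))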

29 citations

Proceedings Article
14 Jul 2013
TL;DR: This paper tackles high-level decision-making techniques for robotic missions, which involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints, with a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV.
Abstract: This paper tackles high-level decision-making techniques for robotic missions, which involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints. Our case study is a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV. The POMDP model of the multi-target detection and recognition problem is generated online from a list of areas of interest, which are automatically extracted at the beginning of the flight from a coarse-grained high altitude observation of the scene. The POMDP observation model relies on a statistical abstraction of an image processing algorithm's output used to detect targets. As the POMDP problem cannot be known and thus optimized before the beginning of the flight, our main contribution is an "optimize-while-execute" algorithmic framework: it drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints. We present new results from real outdoor flights and SAIL simulations, which highlight both the benefits of using POMDPs in multi-target detection and recognition missions, and of our "optimize-while-execute" paradigm.
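The skeleton below sketches what an "optimize-while-execute" loop can look like: while the robot executes the current best action, a background thread keeps improving the policy for the beliefs that may be reached next, so the action's duration becomes the optimisation budget. The planner interface (best_action, predict_beliefs, improve, update) is assumed here for illustration and is not the authors' implementation.

# Schematic sketch only; the planner interface below is hypothetical.
import threading

def optimize_while_execute(planner, belief, execute_action, n_steps):
    """Assumed interface:
    planner.best_action(b): current best action for belief b (anytime policy);
    planner.predict_beliefs(b, a): beliefs possibly reached after executing a;
    planner.improve(b, stop): anytime optimisation from b until stop is set;
    planner.update(b, a, o): belief update after action a and observation o;
    execute_action(a): runs a on the robot and returns the observation."""
    for _ in range(n_steps):
        action = planner.best_action(belief)

        # Keep optimising the policy for the next reachable beliefs while the
        # robot is busy executing the chosen action.
        stop = threading.Event()
        futures = planner.predict_beliefs(belief, action)
        worker = threading.Thread(
            target=lambda: [planner.improve(b, stop) for b in futures])
        worker.start()

        observation = execute_action(action)   # action duration = planning budget
        stop.set()
        worker.join()

        belief = planner.update(belief, action, observation)
    return belief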

28 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
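The fourth category can be made concrete with a toy example: instead of hand-coding filtering rules, a program scores new mail against the words seen in messages the user previously kept or rejected. The data and the simple word-count score below are purely illustrative.

# Toy illustration of learning a personal mail filter from examples.
from collections import Counter

kept = ["project meeting moved to friday", "your build finished successfully"]
rejected = ["win a free prize now", "free offer click now"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

kept_c, rej_c = word_counts(kept), word_counts(rejected)

def looks_unwanted(message):
    """Reject when the message's words are, on balance, more typical of
    previously rejected mail than of kept mail."""
    score = sum(rej_c[w] - kept_c[w] for w in message.split())
    return score > 0

print(looks_unwanted("free prize offer"))     # True
print(looks_unwanted("meeting on friday"))    # False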

13,246 citations

01 Jan 2007
TL;DR: A translation apparatus is provided which comprises an inputting section for inputting a source document in a natural language and a layout analyzing section for analyzing layout information.
Abstract: A translation apparatus is provided which comprises: an inputting section for inputting a source document in a natural language; a layout analyzing section for analyzing layout information including cascade information, itemization information, numbered itemization information, labeled itemization information and separator line information in the source document inputted by the inputting section and specifying a translation range on the basis of the layout information; a translation processing section for translating a source document text in the specified translation range into a second language; and an outputting section for outputting a translated text provided by the translation processing section.
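A rough sketch of such a pipeline is given below: a layout-analysis step recognises separator lines and itemization markers so that only the translatable text is passed to the translation step, while markers and separators are preserved in the output. All names, the regular expressions and the stand-in translate function are hypothetical, not the patented apparatus.

# Hypothetical pipeline: layout analysis restricts what gets translated.
import re

SEPARATOR = re.compile(r'^[-=_*]{3,}\s*$')         # separator-line information
ITEM = re.compile(r'^(\d+[.)]|[-*•])\s+')          # (numbered) itemization markers

def analyze_layout(lines):
    """Yield (line, is_translatable): separators and blank lines pass through."""
    for line in lines:
        if SEPARATOR.match(line) or not line.strip():
            yield line, False
        else:
            yield line, True

def translate(text, target='en'):
    return f'<{target}:{text}>'                    # stand-in for a real MT engine

def process(document):
    out = []
    for line, translatable in analyze_layout(document.splitlines()):
        if not translatable:
            out.append(line)
            continue
        m = ITEM.match(line)
        marker, body = (m.group(0), line[m.end():]) if m else ('', line)
        out.append(marker + translate(body))       # keep the layout marker as-is
    return '\n'.join(out)

print(process("1. First item\n---\nPlain paragraph text"))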

740 citations

Journal Article
TL;DR: In this paper, the authors present algorithms for the automatic synthesis of real-time controllers by finding a winning strategy for certain games defined by the timed-automata of Alur and Dill.
Abstract: This paper presents algorithms for the automatic synthesis of real-time controllers by finding a winning strategy for certain games defined by the timed-automata of Alur and Dill. In such games, the outcome depends on the players' actions as well as on their timing. We believe that these results will pave the way for the application of program synthesis techniques to the construction of real-time embedded systems from their specifications.
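The idea can be illustrated on a finite, untimed game: a backward fixed point computes the states from which the controller can force the play into a goal set, and a strategy is read off by picking, in each winning controller state, one move that stays inside the winning region. The paper's algorithms do this on timed automata, where states also carry clock valuations; the sketch below is only the simplified finite-graph version.

# Simplified controller synthesis on a finite turn-based game graph.
def winning_region(ctrl_states, env_states, succ, goal):
    """Controller wins from s if s is a goal, s is a controller state with some
    successor in the winning region, or an environment state all of whose
    successors lie in the winning region."""
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in list(ctrl_states) + list(env_states):
            if s in win:
                continue
            nxt = succ.get(s, ())
            ok = (any(t in win for t in nxt) if s in ctrl_states
                  else bool(nxt) and all(t in win for t in nxt))
            if ok:
                win.add(s)
                changed = True
    return win

def strategy(ctrl_states, succ, win):
    """For each winning controller state, pick one move that stays winning."""
    return {s: next(t for t in succ[s] if t in win)
            for s in ctrl_states if s in win and s in succ}

# Example: c0 --controller--> e0, e0 --environment--> goal.
succ = {'c0': ['e0'], 'e0': ['goal']}
win = winning_region({'c0'}, {'e0'}, succ, {'goal'})
print(win, strategy({'c0'}, succ, win))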

524 citations

Journal Article
TL;DR: An autonomous robot case study illustrates the use of the behavior, interaction, priority (BIP) component framework as a unifying semantic model to ensure correctness of essential system design properties.
Abstract: An autonomous robot case study illustrates the use of the behavior, interaction, priority (BIP) component framework as a unifying semantic model to ensure correctness of essential system design properties.

280 citations

Journal Article
TL;DR: A global overview of deliberation functions in robotics is presented and the main characteristics, design choices and constraints of these functions are discussed.

229 citations