
Showing papers on "Mobile robot navigation published in 1983"


Proceedings Article
08 Aug 1983
TL;DR: The mathematical structuring tool used is the decomposition of a graph into its k-connected components, which allows the robot to improve navigation procedures and to recognize some concepts, such as a door, a room, or a corridor.
Abstract: We present here a method for providing a mobile robot with learning capabilities. The method is based on a model of the environment with several hierarchical levels organized by degree of abstraction. The mathematical structuring tool used is the decomposition of a graph into its k-connected components (k=2 and k=3). This structure allows the robot to improve navigation procedures and to recognize some concepts, such as a door, a room, or a corridor.

46 citations
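As a purely illustrative aside to the abstract above, the sketch below shows how decomposing a toy environment graph into its 2-connected (biconnected) components separates densely connected, room-like blocks, and how the remaining articulation points become candidates for concepts such as doors. The graph, the node names, and the use of the networkx library are assumptions made for this example, not the authors' implementation.

```python
# Toy topological map: nodes are free-space cells, edges join adjacent cells.
# The 2-connected decomposition groups cells that remain connected even if any
# single cell is removed (room-like blocks); the cut vertices linking such
# blocks are candidates for "door" or "corridor" concepts.
import networkx as nx  # assumed available: pip install networkx

G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a3", "a1")])   # room A (triangle)
G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b3", "b1")])   # room B (triangle)
G.add_edges_from([("c1", "c2"), ("c2", "c3")])                 # corridor cells
G.add_edges_from([("a1", "door1"), ("door1", "c1"),            # door1: room A <-> corridor
                  ("c3", "door2"), ("door2", "b1")])           # door2: corridor <-> room B

blocks = [sorted(block) for block in nx.biconnected_components(G)]
cuts = sorted(nx.articulation_points(G))

print("2-connected blocks:", blocks)
print("cut vertices (door/corridor candidates):", cuts)
```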



Proceedings Article
08 Aug 1983
TL;DR: The concurrent process monitor is a part of RCS, and some of the SVCs serve these facilities; the monitor adopts a "message sending" method to synchronize the execution of two processes and to exchange information between processes and pu's.
Abstract: Real-time intelligent robots usually consist of more than one processing unit (pu) to ensure parallel operation of several functions. Each pu in a robot executes repetitive monitoring and controlling operations as well as information exchange to and from other pu's. Since the timing of each operation is independent of the others, concurrent process facilities in the robot operating software are helpful for robot programming. A self-contained robot, "Yamabico 9", has been constructed as a tool for investigating how a mobile robot understands the outer world. To support software production on the robot, a Robot Control System (RCS) has been implemented, including simple job commands and a supervisor call (SVC) system. The concurrent process monitor is a part of RCS, and some of the SVCs serve these facilities. The monitor adopts a "message sending" method to synchronize the execution of two processes and to exchange information between processes and pu's. An example of a concurrent process program, "walk along left walls", is given to demonstrate the descriptive power of our system.

32 citations
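The abstract's "message sending" style of synchronization can be illustrated with a small sketch. This is not the RCS monitor itself: the mailbox scheme, the process names, and the "walk along left walls" behaviour below are simplified assumptions, with Python threads standing in for the robot's processing units.

```python
# Each concurrent process owns a mailbox (queue); send() deposits a message and
# receive() blocks until one arrives, which also serves as synchronization.
# Illustrative only; not the Yamabico 9 / RCS monitor described above.
import queue
import threading

mailboxes = {"sonar": queue.Queue(), "pilot": queue.Queue()}

def send(dest, msg):
    mailboxes[dest].put(msg)

def receive(name):
    return mailboxes[name].get()             # blocks until a message is available

def sonar_process():
    # Pretend to measure the clearance to the left wall and report it.
    for d in (0.52, 0.48, 0.55):
        send("pilot", {"left_wall_m": d})
    send("pilot", {"left_wall_m": None})      # sentinel: no more readings

def pilot_process():
    # "Walk along left walls": steer to keep a fixed clearance from the wall.
    while True:
        msg = receive("pilot")
        if msg["left_wall_m"] is None:
            break
        error = msg["left_wall_m"] - 0.50
        print(f"clearance {msg['left_wall_m']:.2f} m -> steering correction {-error:+.2f}")

threads = [threading.Thread(target=sonar_process), threading.Thread(target=pilot_process)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```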


01 Jan 1983

29 citations


Patent
04 Apr 1983
TL;DR: In this patent, a method is presented for entering the coordinates of starting and destination points into a navigation system for vehicles operating on the principle of dead reckoning: the coordinates are represented as a bar-code pattern and read into the navigation system with an optical or magnetic code reader. A device for carrying out the method is also described.
Abstract: A method for entering the coordinates of starting and destination points into a navigation system for vehicles operating according to the principle of dead reckoning navigation, which includes representing the coordinates in a bar-code pattern and setting the coordinates into the navigation system with an optical or magnetic code reader, together with a device for carrying out the method.

18 citations
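The patent summary above does not disclose its encoding format, so the following is only a toy illustration of the general idea: pack a pair of coordinates into a fixed-width bit pattern (one bit per bar), add a checksum, and decode the pattern back when it is read into the navigation system. The field widths, resolution, and checksum below are invented for this sketch.

```python
# Toy bar-code-style encoding of waypoint coordinates (non-negative values only).
# 24 bits per coordinate at 0.001-degree resolution plus an 8-bit checksum;
# "1" stands for a dark bar, "0" for a light bar. Invented format, for illustration.

def encode(lat_deg, lon_deg):
    fields = [round(lat_deg * 1000), round(lon_deg * 1000)]
    checksum = sum(fields) & 0xFF
    return "".join(f"{f:024b}" for f in fields) + f"{checksum:08b}"

def decode(bits):
    lat_milli = int(bits[0:24], 2)
    lon_milli = int(bits[24:48], 2)
    assert int(bits[48:56], 2) == (lat_milli + lon_milli) & 0xFF, "checksum mismatch"
    return lat_milli / 1000.0, lon_milli / 1000.0

pattern = encode(48.775, 9.182)     # a hypothetical starting point
print(pattern)                      # 56-character bar pattern
print(decode(pattern))              # -> (48.775, 9.182)
```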


Patent
10 Jun 1983
TL;DR: In this article, a robot work instructing system built around a display and a graphic processor is described: an operator writes the working procedure and working environment of the robot, animations of the resulting three-dimensional movement are shown on the display, and control calculations based on this information drive the robot.
Abstract: PURPOSE: To perform work safely and in a short time by displaying a macro-level check of the working procedure and the like graphically, using a three-dimensional robot simulator. CONSTITUTION: A robot work instructing system is constituted of a display 19, a graphic processor, etc. An operator 20 writes the working procedure and working environment of the robot 25. On the basis of these descriptions, the system executes preprocessing 12, simulates the movement of the robot 25 from the robot operation commands, and displays animations of the three-dimensional movement on the display 19. If no fault is found, the robot operation data 17 are converted 22 into control commands to actuate the robot 25. Fine adjustment when the robot 25 catches a part or the like is performed by processing picture information 29 from a television camera 26 to obtain sensor information 27 that compensates the catching position. Control calculation 28 is performed on the basis of this information to control the robot 25.

14 citations
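A much simplified sketch of the workflow the abstract outlines: operation data are first checked against a coarse model of the workspace (standing in for the three-dimensional animation step), and only a fault-free plan is converted into control commands. The data structures, workspace limits, and obstacle below are hypothetical.

```python
# Check-by-simulation, then execute: flag a plan whose targets leave the
# workspace or collide with a modelled fixture. All structures are hypothetical.
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    target: tuple                                      # (x, y, z) in metres

WORKSPACE = ((0.0, 1.0), (0.0, 1.0), (0.0, 0.8))       # reachable (lo, hi) per axis
OBSTACLES = [((0.4, 0.6), (0.4, 0.6), (0.0, 0.5))]     # a fixture to keep clear of

def inside(point, box):
    return all(lo <= c <= hi for c, (lo, hi) in zip(point, box))

def simulate(plan):
    """Coarse stand-in for the animated check: return a list of faults."""
    faults = []
    for step in plan:
        if not inside(step.target, WORKSPACE):
            faults.append(f"{step.name}: target outside workspace")
        elif any(inside(step.target, obs) for obs in OBSTACLES):
            faults.append(f"{step.name}: target collides with an obstacle")
    return faults

def to_control_commands(plan):
    # Placeholder conversion of checked operation data into controller commands.
    return [("MOVE_TO",) + step.target for step in plan]

plan = [Move("approach", (0.2, 0.2, 0.3)), Move("grasp", (0.5, 0.5, 0.2))]
faults = simulate(plan)
print(faults if faults else to_control_commands(plan))   # here the grasp step is flagged
```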


Proceedings ArticleDOI
01 Jun 1983

13 citations




Journal ArticleDOI
TL;DR: The hardware and programming model of Watson, a robot designed to act as a messenger carrying mail and parcels between the rooms of an office building, are described; the system provides researchers with a tool for investigating the potential of inexpensive sonar range-finding systems for the navigation of mobile robots.

4 citations
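Only the summary of this paper appears above; as a small illustrative aside, the arithmetic behind inexpensive sonar range finders of the kind Watson reportedly used is plain time-of-flight ranging. The speed of sound and the echo delay below are assumed values.

```python
# Time-of-flight ranging: a sonar pulse travels out and back, so the range is
# half the echo delay times the speed of sound. Numbers are illustrative.
SPEED_OF_SOUND_M_S = 343.0               # in air at roughly 20 degrees C (assumed)

def sonar_range_m(echo_delay_s):
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

print(f"{sonar_range_m(0.012):.2f} m")   # a 12 ms echo corresponds to about 2.06 m
```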


Proceedings ArticleDOI
13 Dec 1983
TL;DR: In this paper, the authors survey the major techniques in the literature for three-dimensional scene analysis, discuss the autonomous vehicle as a robot form, and consider the requirements for these applications.
Abstract: Robots are attracting increased attention amid the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser radar type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed with some discussion of the major techniques in the literature for three-dimensional scene analysis. © 1983 SPIE, The International Society for Optical Engineering.
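As an illustrative aside, a laser radar of the kind the abstract emphasizes delivers range samples at known scan angles, which are converted to Cartesian points before scene analysis. The conversion below is standard spherical-to-Cartesian geometry; the scan values are made up.

```python
# Convert laser-radar samples (range, azimuth, elevation) into 3-D points,
# the raw material for three-dimensional scene analysis. Scan data are invented.
import math

def spherical_to_xyz(r, az_deg, el_deg):
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

scan = [(5.2, -10.0, 0.0), (5.1, 0.0, 0.0), (5.3, 10.0, 0.0), (2.4, 0.0, -15.0)]
for sample in scan:
    x, y, z = spherical_to_xyz(*sample)
    print(f"({x:6.2f}, {y:6.2f}, {z:6.2f})")
```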


Journal ArticleDOI
01 Oct 1983-Robotica
TL;DR: This paper presents an intelligent robot vision system using TOSPIX, which has been newly developed to realize frequently used, time-consuming image processing functions at low cost and high speed.
Abstract: This paper presents an intelligent robot vision system using TOSPIX, which has been newly developed to realize frequently used, time-consuming image processing functions at low cost and high speed. The vision system has been studied for use in observing surface information about electric parts (dry batteries), inspecting them, and then placing good ones into a given box. Three major robot vision functions are implemented here: object recognition, inspection, and position determination by binary and gray-scale image processing techniques. While binary image techniques are used in battery terminal inspection and box position determination, gray-scale image processing functions are performed in a label pattern check on a battery surface, front or rear surface determination, and surface defect inspection.
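A minimal sketch of the binary-image side of such an inspection: threshold a gray-level image and decide pass/fail from the foreground area in a region of interest. The tiny image, the threshold, and the acceptance limits are invented; the actual TOSPIX functions are not reproduced here.

```python
# Binary-image inspection sketch: threshold a tiny gray-level "image" and judge
# a part from the foreground pixel count inside a region of interest.
image = [
    [10,  12, 200, 210,  11],
    [ 9, 198, 205, 202,  10],
    [11, 201, 199, 197,  12],
    [10,  11,  13, 195,   9],
]
THRESHOLD = 128                          # gray level separating part from background

binary = [[1 if px >= THRESHOLD else 0 for px in row] for row in image]
roi = [row[1:4] for row in binary]       # columns 1..3: where the terminal should appear
area = sum(sum(row) for row in roi)

print("foreground pixels in ROI:", area)
print("PASS" if 6 <= area <= 10 else "FAIL")
```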

Journal ArticleDOI
TL;DR: The production of a position, velocity and attitude reference system for trials of inertial and other navigation systems has formed a major objective for RAE work on integrated navigation since that time.
Abstract: Work on integrated navigation for aircraft started at RAE Farnborough at least as early as the mid 1950s. At that time the Inertial Navigator (IN) for a guided bomb was being developed, and the navigation of the aircraft to the target used a combination of the weapon IN, Doppler radar and position fixing. An output of the navigation process was a calibration of the weapon IN. The operational concept was interesting in that after weapon release the aircraft had to return home using Doppler and compass. Development of this system included flight trials in RAF and A&AEE aircraft using the Decca Navigator chains in Southern England for position reference. The same method was used in the development of the FSP 100 IN intended for the TSR 2, which took place in 1960/63. On these trials an early airborne digital computer on board the trials aircraft enabled the Decca, Doppler and IN information to be recorded and processed in flight in a form suitable for post-flight analysis using a ground digital computer. This technique formed the basis for producing the reference navigation system which has been used for all our subsequent trials. The production of a position, velocity and attitude reference system for trials of inertial and other navigation systems has formed a major objective for RAE work on integrated navigation since that time.
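The blending idea behind such integrated navigation can be caricatured in a few lines: dead-reckon from a velocity sensor (Doppler) and pull the estimate toward an external position fix (Decca or similar) whenever one arrives. The constant gain, the units, and the measurements below are invented; the RAE systems described used far more elaborate processing.

```python
# Toy integrated navigation in a flat local frame: dead reckoning from a
# (slightly biased) Doppler velocity, corrected by occasional position fixes.
def dead_reckon(pos, vel, dt):
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def apply_fix(pos, fix, gain=0.8):
    # Move a fixed fraction of the way toward the external fix.
    return (pos[0] + gain * (fix[0] - pos[0]), pos[1] + gain * (fix[1] - pos[1]))

pos = (0.0, 0.0)                   # start position (km)
vel = (0.20, 0.05)                 # Doppler-derived velocity (km/s), assumed biased
for t in range(1, 11):             # ten one-second steps
    pos = dead_reckon(pos, vel, dt=1.0)
    if t % 5 == 0:                 # a position fix arrives every five seconds
        pos = apply_fix(pos, fix=(0.19 * t, 0.04 * t))
    print(f"t={t:2d} s  estimate = ({pos[0]:.2f}, {pos[1]:.2f}) km")
```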

Book ChapterDOI
01 Jan 1983


Journal ArticleDOI
TL;DR: An account is given of the four main stages in a development and implementation programme: defining the requirements of the application; matching these to the vision system; developing the application software; integrating and installing the system.


Journal Article
TL;DR: In this article, a robot sensory system developed for industrial robotics is described, in which a hierarchically organized group of microprocessors is used to generate visual expectancies for each frame and to guide interpretative and modeling processes.
Abstract: A robot sensory system developed for industrial robotics is described. Television frames and inputs from other sensors are interpreted by a hierarchically organized group of microprocessors. The system uses knowledge of object prototypes, and of robot action, to generate visual expectancies for each frame. At each level of the hierarchy, interpretative processes are guided by expectancy-generating modeling processes. The modeling processes are driven by a priori knowledge, by knowledge of the robot's movements, and by feedback from the interpretative processes. At the lowest level, other senses (proximity, tactile, force) are handled separately; above this level, they are integrated with vision into a multi-modal world model. At successively higher levels, the interpretative and modeling processes describe the world with successively higher order constructs, and over longer time periods. All levels of the hierarchy provide output, in parallel, to guide corresponding levels of a hierarchical robot control system.
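The expectancy mechanism the abstract describes can be sketched schematically: a modeling level predicts where a known object should appear after the robot's commanded motion, and the interpretive level compares that prediction with the observation and feeds the residual back to refine the world model. The single object, the image-plane coordinates, and the update gain below are invented for this sketch.

```python
# Schematic expectancy-driven interpretation (one object, one frame).
world_model = {"part_A": {"x": 100.0, "y": 50.0}}     # modelled image position (pixels)

def predict(model, dx, dy):
    # Expectancy: shift each modelled position by the robot-induced image motion.
    return {name: {"x": o["x"] + dx, "y": o["y"] + dy} for name, o in model.items()}

def interpret(expected, observed, gain=0.5):
    # Compare expectancy with observation; return the updated model and residuals.
    updated, residuals = {}, {}
    for name, exp in expected.items():
        obs = observed[name]
        residuals[name] = (obs["x"] - exp["x"], obs["y"] - exp["y"])
        updated[name] = {"x": exp["x"] + gain * residuals[name][0],
                         "y": exp["y"] + gain * residuals[name][1]}
    return updated, residuals

expected = predict(world_model, dx=12.0, dy=-3.0)      # from the commanded motion
observed = {"part_A": {"x": 114.5, "y": 46.0}}         # from this frame's vision
world_model, residuals = interpret(expected, observed)
print("residuals:", residuals)
print("updated model:", world_model)
```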