
Showing papers on "Smart camera published in 1991"


Proceedings ArticleDOI
09 Apr 1991
TL;DR: A method is presented to determine viewpoints for a robotic vision system for which object features of interest will simultaneously be visible, inside the field of view, in focus, and magnified as required.
Abstract: A method is presented to determine viewpoints for a robotic vision system for which object features of interest will simultaneously be visible, inside the field of view, in focus, and magnified as required. The problem is posed in an optimization setting so that viewpoints satisfying all of these requirements simultaneously, and with a margin, can be determined. The formulation and results of the optimization are shown, as well as experimental results in which a robot vision system is positioned and its lens is set according to this method. Camera views are taken from the computed viewpoints in order to verify that all feature detectability requirements are satisfied.

104 citations
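The viewpoint-determination idea in the abstract above lends itself to a small sketch: score each candidate viewpoint by its worst-case constraint margin (field of view, focus, magnification) and keep the best, so that all requirements are met "with a margin". All names, thresholds, and the grid search below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def viewpoint_margin(viewpoint, feature, cam):
    """Score a candidate viewpoint by its worst-case constraint margin.

    Each constraint is normalized so that positive values mean
    'satisfied with margin'. Thresholds are illustrative assumptions.
    """
    dx = [f - v for f, v in zip(feature, viewpoint["position"])]
    dist = math.sqrt(sum(d * d for d in dx))

    # 1. Visibility: the feature must lie at a nonzero distance.
    if dist == 0:
        return float("-inf")

    # 2. Field of view: angular offset must stay inside the half-FOV.
    angle = math.degrees(math.atan2(math.hypot(dx[0], dx[1]), dx[2]))
    fov_margin = (cam["half_fov_deg"] - angle) / cam["half_fov_deg"]

    # 3. Focus: distance must fall inside the depth of field.
    near, far = cam["dof"]
    focus_margin = min(dist - near, far - dist) / (far - near)

    # 4. Magnification: projected scale must exceed the requirement.
    mag_margin = (cam["focal_mm"] / dist) / cam["min_magnification"] - 1.0

    # Maximizing the minimum margin satisfies all constraints at once.
    return min(fov_margin, focus_margin, mag_margin)

# Pick the best of a few candidate viewpoints (a grid search stands in
# for the paper's optimization procedure).
cam = {"half_fov_deg": 25.0, "dof": (0.4, 1.2), "focal_mm": 0.05,
       "min_magnification": 0.03}
feature = (0.0, 0.0, 0.0)
candidates = [{"position": (0.0, 0.0, -z)} for z in (0.3, 0.6, 0.9, 1.5)]
best = max(candidates, key=lambda v: viewpoint_margin(v, feature, cam))
print(best["position"])
```

Maximizing the minimum normalized margin is one simple way to express "satisfy all requirements simultaneously and with a margin" as a single objective.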


Proceedings ArticleDOI
02 Jun 1991
TL;DR: New techniques are presented that determine the regions where a camera and light source should be placed so that specified object edges are guaranteed to be detected.
Abstract: The paper describes model-based methods to determine the region where a camera and light source should be placed. It presents new techniques that determine the region of light placements that guarantee specified object edges will be detected. These methods use geometric models of the objects and the characteristics of the camera, lens, digitiser, and edge detector to determine the region of acceptable camera and light-source locations. Other recent work in automatic camera and light-source placement is also described.

39 citations


Patent
06 Jun 1991
TL;DR: In this article, a camera system is provided with a communication circuit which effects communication of the control information and which is capable of changing the number of communication words associated with the communication.
Abstract: A camera system, which controls the function of a lens assembly on the basis of control information serially transmitted from a camera assembly, is provided with a communication circuit which effects communication of the control information and which is capable of changing the number of communication words associated with the communication, and a circuit for transmitting, if the number of communication words is changed, the changed number of communication words from the camera assembly to the lens assembly.

31 citations
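The patent's variable-word-count communication can be sketched as a length-prefixed framing scheme: the camera first tells the lens how many words the next control frame contains, then sends that many words, so the count can change between frames. The frame layout and word size below are illustrative assumptions, not the patent's actual circuit design.

```python
def encode_frame(words):
    """Prefix a control frame with its word count so the receiver can
    adapt when the number of communication words changes."""
    assert all(0 <= w < 256 for w in words)   # one byte per word (assumed)
    return bytes([len(words)] + list(words))

def decode_frame(stream):
    """Read one frame: the first byte is the (possibly changed) word
    count; return the words and the unconsumed remainder."""
    n = stream[0]
    return list(stream[1:1 + n]), stream[1 + n:]

# Two consecutive frames with different word counts on one "wire".
buf = encode_frame([0x10, 0x22]) + encode_frame([0x01, 0x02, 0x03])
first, rest = decode_frame(buf)
second, rest = decode_frame(rest)
print(first, second)   # [16, 34] [1, 2, 3]
```

Transmitting the changed count ahead of the payload is what lets the lens-side decoder stay in sync when the camera alters the frame length.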


Proceedings ArticleDOI
09 Apr 1991
TL;DR: It is shown that robot operation and camera motion are guided by visual feedback, and the strategies to control this active guidance are studied.
Abstract: A scheme that uses an active camera to guide object manipulation in an environment that has not been modeled is introduced. By actively moving the camera to a desired position, with the proper distance and viewing angle from the objects of interest, it is possible to acquire good data for robot visual feedback control when locating objects. The environment is described in an object-centered coordinate system and is measured relatively in consecutive 2-D image spaces. This approach is flexible and efficient in manipulation, since it avoids complicated 3-D modeling, and the image processing is driven by tasks. It is shown that robot operation and camera motion are guided by visual feedback. The strategies to control this active guidance are studied.

30 citations


Proceedings ArticleDOI
03 Jun 1991
TL;DR: A method for object manipulation using an active camera is introduced, in which the camera is actively moved to proper positions, with the desired viewing angle relative to objects of interest, for robot guidance and image understanding.
Abstract: A method for object manipulation using an active camera is introduced. By actively moving the camera to proper positions, with the desired viewing angle relative to objects of interest, it is possible to acquire data to realize robust visual feedback control of a robot. This approach avoids complicated 3-D modeling, and the image processing carried out is driven by tasks. Active camera control for robot guidance and image understanding is addressed.

13 citations


Proceedings ArticleDOI
01 Mar 1991
TL;DR: The model has been developed with the intention of using it to investigate algorithms for recovering depth from image blur, but the model is general and can be used to address other problems in machine vision.
Abstract: A mathematical model for a typical CCD camera system used in machine vision applications is presented. This model is useful in research and development of machine vision systems and in the computer simulation of camera systems. The model has been developed with the intention of using it to investigate algorithms for recovering depth from image blur. However, the model is general and can be used to address other problems in machine vision. The model is based on a precise definition of input to the camera system. This definition decouples the photometric properties of a scene from its geometric properties in the input to the camera system. An ordered sequence of about 20 operations is defined which transforms the camera system's input to its output, i.e., digital image data. Each operation in the sequence usually defines the effect of one component of the camera system on the input. This model underscores the complexity of the actual imaging process, which is routinely underestimated and oversimplified in machine vision research. © (1991) COPYRIGHT SPIE--The International Society for Optical Engineering.

10 citations
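The model's central idea, an ordered sequence of operations transforming the camera input into digital image data, can be sketched as a composed pipeline. The two stages below (a box blur standing in for lens defocus, and sampling plus quantization standing in for the sensor and ADC) are illustrative assumptions, not the roughly 20 operations of the actual model.

```python
import numpy as np

def defocus_blur(img, k=3):
    """Box blur standing in for lens defocus (an assumed stage)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sample_and_quantize(img, levels=256):
    """Sensor sampling + ADC quantization to digital counts."""
    return np.clip(np.round(img * (levels - 1)), 0, levels - 1).astype(np.uint8)

# Compose the stages in order, as the model prescribes: each stage
# captures the effect of one component of the camera system.
pipeline = [defocus_blur, sample_and_quantize]
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0          # bright square on a dark field
digital_image = scene
for stage in pipeline:
    digital_image = stage(digital_image)
print(digital_image.dtype, digital_image.max())
```

Keeping each component's effect in its own stage is what makes such a model useful for simulation: individual stages (e.g. the blur kernel) can be swapped or inspected in isolation, which is exactly what a depth-from-blur study needs.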


Proceedings ArticleDOI
09 Apr 1991
TL;DR: It is shown that the proposed method for obtaining precise range information by swivelling a conventional TV camera can be used to locate objects with a precision depending on the angular resolution of the camera controller, which is 0 to 20 times better than the conventional imaging method.
Abstract: A method is proposed for obtaining precise range information by swivelling a conventional TV camera, and the experimental verification of the method is described. To realize the method, the blurring that occurs in the image as a result of aberrations of the TV camera lens is used. The direction to the object is measured with a subpixel-level precision which depends on the angular resolution of the camera controller. The uncertainties of the object locations measured by conventional stereo methods depend on the resolution of the image sensor. It is shown that the method can be used to locate objects with a precision depending on the angular resolution of the camera controller, which is 0 to 20 times better than the conventional imaging method.

10 citations


Proceedings ArticleDOI
13 Aug 1991
TL;DR: The problem of estimating the fixed rotation and translation between the camera's and the mobile robot's coordinate systems, given a sequence of monocular images and robot movements, is addressed; the proposed algorithm decomposes the calibration task and is able to calibrate all three rotational degrees of freedom and two of the translational degrees of freedom.
Abstract: The problem of estimating the fixed rotation and translation between the camera's and the mobile robot's coordinate systems given a sequence of monocular images and robot movements is addressed. Existing hand/eye calibration algorithms are not directly applicable because they require the robot hand to have at least two rotational degrees of freedom, while a mobile robot can usually execute only planar motion. By using the proper representation for camera rotation, the proposed algorithm decomposes the calibration task, and thus is able to calibrate all three rotational degrees of freedom and two of the translational degrees of freedom. The remaining translational degree of freedom is not needed for the purpose of camera-centered robot vision applications. To recover the camera's rotation between two images, the algorithm uses inverse perspective geometry constraints on a rectangular corner. Complicated calibration patterns are not needed, and thus the algorithm can be easily implemented and used in structured environments.

8 citations


Proceedings ArticleDOI
01 Jun 1991
TL;DR: A camera system composed of a 35 mm Nikon F3 camera body, a camera back containing a CCD-imager, and a portable hard drive for storing digitized images was constructed and employed to acquire images distributed over photospace.
Abstract: Portable electronic still cameras have been available for some years although, in general, the image quality has fallen short of the 35 mm film quality benchmark. To obtain improved image capture and reproduction, higher-resolution CCD imagers with wider dynamic ranges must be employed in these cameras. A camera system composed of a 35 mm Nikon F3 camera body, a camera back containing a CCD imager, and a portable hard drive for storing digitized images was constructed and employed to acquire images distributed over photospace. This paper describes the camera's hardware and capabilities, the software post-processing, the camera's characteristics, and a method for evaluating the camera's performance. 1. INTRODUCTION Portable electronic still cameras possess numerous advantages as well as disadvantages compared to film-based cameras. Among the advantages are rapid image availability without the chemical processing step required by film, and direct input capability to computers for applications requiring image data. However, while the image quality obtainable with these cameras may be acceptable for display on a video monitor, their overall quality has proven to be too low for many applications.

8 citations



Journal ArticleDOI
TL;DR: The author describes an image processing/machine vision system that was developed specifically for educational use, which consists of an adapter card for the PC, a standard black-and-white video camera and monitor, and a collection of image processing algorithms.
Abstract: The author describes an image processing/machine vision system that was developed specifically for educational use. Most systems used in educational settings were not designed specifically for such use. In many cases, most of a student's laboratory time is spent learning the particular system in use, rather than the complexities of the assigned algorithm. The IBM PC based system described here, which consists of an adapter card for the PC, a standard black-and-white video camera and monitor, and a collection of image processing algorithms, eliminates this problem. The major advantages of the system described are its low cost and extremely simple programming interface. Both the hardware design of the adapter card and the software interface are described.

Proceedings ArticleDOI
01 Feb 1991
TL;DR: The purpose is the design and development of a 3-D vision system which can evaluate the space environment and correlate complete or incomplete object views to CAD-based models.
Abstract: Various approaches to three-dimensional vision in space are reviewed with emphasis on the redundant 3D vision system designed for the Center for Intelligent Robotic Systems for Space Exploration. The system uses a controllable subset of five cameras, programmable structured light patterns, and sophisticated calibration routines. The design emphasizes real-time operation, human supervisory intervention, and the use of 3D vision to enhance the performance of cooperating robotic arms. Two methods of estimating the location of a point using 3D vision are discussed.
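One standard way to estimate the location of a point from two calibrated cameras, of the kind such a multi-camera system needs, is the midpoint method: each camera defines a ray, and the point is taken as the midpoint of the shortest segment between the two rays. This is a common textbook estimator assumed here for illustration; the paper discusses two estimators without giving their details.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between rays o + t*d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Solve the 2x2 normal equations for the closest point on each ray.
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    det = a11 * a22 - a12 * a12          # zero only for parallel rays
    t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / det
    t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / det
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two cameras 1 m apart, both looking at the point (0.5, 0, 2).
p = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 2.0]),
                         np.array([1.0, 0.0, 0.0]), np.array([-0.5, 0.0, 2.0]))
print(p)   # ≈ [0.5, 0.0, 2.0]
```

With noisy image measurements the two rays do not intersect, which is why the estimator returns the midpoint of their closest approach rather than looking for an exact intersection.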

Proceedings ArticleDOI
07 Oct 1991
TL;DR: Experimental results show visual feedback can control the camera motion within small deviations from the planned path, and the deviations measured in the image are used to compensate for the errors in the 3-D position and the cameramotion.
Abstract: Most active vision systems use a camera mounted on a manipulator, because the control of the camera motion is then easy and accurate. The limited range of camera motion imposed by the fixed manipulator, however, often prevents the method from being used in real applications. The paper explores a more general method in which a camera on a mobile robot moves freely in the environment and acquires spatial information based on the active vision paradigm. The camera motion is controlled by visual feedback so as to fix its gaze upon a feature point and to keep the distance to the fixation point constant. The camera motion is determined from its rotation, which is estimated from the motions of the vanishing points of horizontal lines in the environment. Non-horizontal lines are detected by the motion parallax caused by the linear motion. Experimental results show this feedback can control the camera motion within small deviations from the planned path. The deviations measured in the image are used to compensate for the errors in the 3-D position and the camera motion.
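The fixation behavior described above, keeping the gaze on a feature point and holding the distance constant, can be sketched as a simple proportional feedback law. The gains, the pixel-space error model, and the command names below are illustrative assumptions, not the paper's controller.

```python
def fixation_step(feature_px, distance_m, state, k_rot=0.1, k_z=0.5,
                  center=(320, 240), target_dist=1.0):
    """One visual-feedback update of (pan, tilt, forward) commands.

    feature_px : tracked feature location in the image (pixels)
    distance_m : current estimated distance to the fixation point
    """
    ex = feature_px[0] - center[0]    # horizontal image error (px)
    ey = feature_px[1] - center[1]    # vertical image error (px)
    ez = distance_m - target_dist     # range error (m)
    state["pan"] -= k_rot * ex        # rotate to re-center the feature
    state["tilt"] -= k_rot * ey
    state["forward"] = -k_z * ez      # translate to restore the distance
    return state

# Feature drifted right/up in the image and the robot is 0.2 m too far.
state = {"pan": 0.0, "tilt": 0.0, "forward": 0.0}
state = fixation_step((330, 235), 1.2, state)
print(state)
```

Driving both the image error and the range error to zero is what keeps the fixation point fixed in the view while the robot moves along its path.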

Proceedings ArticleDOI
03 Jun 1991
TL;DR: The authors show how stereo-based range data, obtained over time by a moving vehicle, can be integrated without explicit knowledge or computation of camera motion, by transforming range information into relative distances and encoding these distances in image-registered maps.
Abstract: The authors show how stereo-based range data, obtained over time by a moving vehicle, can be integrated without the explicit knowledge or computation of camera motion. A unique aspect of the approach is the transformation of range information into relative distances and the encoding of these distances in image-registered maps. Results of experiments on dynamic stereo images are presented.


Proceedings ArticleDOI
03 Nov 1991
TL;DR: A technique to vary the degree of focusing by moving the camera with respect to the object position is proposed, which ensures that the focused areas of the image are always subjected to the same magnification.
Abstract: In practice, focusing can be obtained by displacing the sensor plate with respect to the image plane, by moving the lens, or by moving the object with respect to the optical system. Moving the lens or the sensor plate with respect to each other causes changes in the magnification and corresponding changes in the object coordinates. In order to overcome these problems, the authors propose a technique to vary the degree of focusing by moving the camera with respect to the object position. The camera is attached to the tool of a manipulator in a hand-eye configuration, with the position of the camera always known. This approach ensures that the focused areas of the image are always subjected to the same magnification. To measure the focus quality, operators are used to evaluate the quantity of high-frequency components in the image. Different types of these operators were tested and the results compared.
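Focus-quality operators of the kind compared in this abstract quantify high-frequency image content, so a well-focused image scores higher than a blurred one. The two operators below (Tenengrad and variance of Laplacian) are standard choices assumed for illustration; the abstract does not name the operators actually tested.

```python
import numpy as np

def tenengrad(img):
    """Sum of squared central-difference gradients (high-pass energy)."""
    gx = img[:, 2:] - img[:, :-2]
    gy = img[2:, :] - img[:-2, :]
    return float((gx ** 2).sum() + (gy ** 2).sum())

def variance_of_laplacian(img):
    """Variance of a 4-neighbor discrete Laplacian response."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    return float(lap.var())

# Sanity check: a sharp step edge scores higher than a blurred version.
sharp = np.zeros((16, 16))
sharp[:, 8:] = 1.0
blurred = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 3
print(tenengrad(sharp) > tenengrad(blurred))          # True
```

In the camera-motion scheme above, such an operator would be evaluated at each camera position, and the position maximizing the score taken as best focus for the region of interest.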

Patent
29 Mar 1991
TL;DR: In this paper, a video camera is fixed on a universal head installed on the upper part of a tripod and the output of the video camera 4 is displayed on a window 7 of the screen 6.
Abstract: PURPOSE: To improve operability and to allow a video camera to be controlled from a considerably distant place by controlling the video camera or a universal head, using a display part, according to inputted control information. CONSTITUTION: A video camera 4 is fixed on a universal head 3 installed on the upper part of a tripod 2. A control part 1, composed for example of personal computers, is connected with the universal head 3 and the video camera 4 through a communication interface. The prescribed coordinate information can be inputted to the control part 1 from a mouse 8 equipped with buttons 8a to 8c, and the control part 1 can display the prescribed picture on a screen 6 of a display part 5 composed, for example, of CRTs. The output of the video camera 4 is displayed in a window 7 of the screen 6. Thus, when the button 8c is depressed, it is checked whether or not the pressure has been released, and zoom-out control is executed while the pressure is not released. COPYRIGHT: (C)1992,JPO&Japio

Proceedings ArticleDOI
28 Oct 1991
TL;DR: The authors propose to vary the degree of focusing by moving the camera with respect to the object position by using operators to measure the focus quality and quantify the high-frequency content of the image.
Abstract: The authors explore focus to obtain depth or structure perception of the world. They propose to vary the degree of focusing by moving the camera with respect to the object position. In this case the camera is attached to the tool of a manipulator in a hand-eye configuration with the position of the camera always known. This approach ensures that the focused areas of the images are always subjected to the same magnification. To measure the focus quality, operators are used to quantify the high-frequency content of the image. Different types of operators were tested and the results compared.

Proceedings ArticleDOI
26 Jun 1991
TL;DR: This paper describes the ongoing development and testing of a two-level "intelligent" controller which applies this approach to elevation and azimuth position control of a camera pointing gimbal.
Abstract: Control system designs based on single, classical algorithms can involve severe compromises among system response characteristics such as speed of response, relative stability, and steady-state error. Additionally, they often are not very robust, and lack the capability to respond to changes in performance requirements. In practice, more desirable combinations of these characteristics can be obtained by using a mixture of control algorithms, based on both modern and classical theory, and carefully combining them so that the particular benefits of each are used to best advantage. This is one approach to intelligent control. This paper describes the ongoing development and testing of a two-level "intelligent" controller which applies this approach to elevation and azimuth position control of a camera pointing gimbal. The controller was designed using a detailed, non-linear computer simulation model and is implemented in a single Intel 80386/80387 based microcomputer. As predicted by the computer simulation model, and verified by the remarkably similar laboratory test data, the intelligent controller provides, when compared with a more classical controller designed earlier for the same hardware, a factor of 2 to 3 increase in bandwidth and a similar decrease in large-angle rise time, an order of magnitude decrease in overshoot for large-angle steps, and an order of magnitude decrease in steady-state error.

Proceedings ArticleDOI
01 Feb 1991
TL;DR: The vertical integration of related vision hardware, image analysis software, and analytical techniques, together with novel algorithms for robot eye-brain-hand coordination, constitutes a unique robot vision system.
Abstract: A new approach is introduced in this paper to deal with the problems of real-time machine vision and pattern recognition for robotic manipulations. This approach emphasizes three directions: (1) the developed algorithm has to be compact enough for embedded intelligent control implementation; (2) the computational scheme should be highly efficient for on-line robot reasoning and manipulations; and (3) the resulting system has to be sufficiently flexible to accommodate various working environments and to cope with some system shortcomings. The vertical integration of related vision hardware, image analysis software, and analytical techniques (e.g., fuzzy logic and neural networks), together with the novel algorithms for robot eye-brain-hand coordination, constitutes a unique robot vision system. The potential of more extensive hardware implementation is discussed, and a wider spectrum of applications of the proposed robot vision system is envisioned.


Journal ArticleDOI
TL;DR: In this paper, the authors describe the sensor of the camera (by connecting the Thomson C.C.D. and the RTC image intensifier) and its three working modes.
Abstract: This paper deals with the study and the fabrication of an intensified C.C.D. camera for guiding telescopes or for searching fields containing faint stars. It describes the sensor of the camera (obtained by connecting the Thomson C.C.D. and the RTC image intensifier) and its three working modes. The first is the direct exploitation of the video signal sent to the TV monitor, in conformity with the C.C.I.R. standard (the N.T.S.C. standard is also available). However, the study of the stability and sensitivity of the intensifier led us to create a second working mode necessary for the correct detection of faint objects (by an "averaging filter" improving the signal-to-noise ratio). The last mode is "integration" by the C.C.D. itself: it allows a maximum integration time of three seconds (and therefore the detection of fainter objects), with the help of a thermoelectric cooling system. Moreover, the tests made on the 1.52 meter telescope of the O.A.N. (Calar-Alto, Spain) have been successful and confirm the capabilities of this camera to replace intensified TV tube cameras such as the super-isocon camera.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: This paper presents methods to determine useful camera and laser scanner calibrations in spite of imprecise calibration points.
Abstract: This paper addresses the calibration of a robotic 3-D vision system. The vision system consists of two cameras and a laser scanner, each affixed with repositionable mounts on the ceiling above a robotic testbed. The purpose is to gather global information about the robot workspace. Consequently, the laser scanner and each camera have a large angle of view and must provide accurate 3-D information over a range from 0.5 m to 2.5 m. In this application, the manufacturing of a high-precision calibration target is impractical, and so the calibration data points lack the precision common in many calibration processes. This paper presents methods to determine useful camera and laser scanner calibrations in spite of imprecise calibration points.
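With imprecise calibration points, one standard remedy is to use many more point correspondences than strictly necessary and solve the camera model in a least-squares sense, so that random errors in individual points average out. The direct linear transformation (DLT) sketch below illustrates this general idea under assumed names and a synthetic camera; it is not the paper's own calibration method.

```python
import numpy as np

def dlt_projection_matrix(world, image):
    """Linear least-squares estimate of a 3x4 projection matrix from
    world-to-image point correspondences (six or more points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.array(rows)
    # Least-squares null-space solution: smallest right singular vector.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic test: a simple camera and a dozen noisy calibration points.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0, 320, 0], [0, 800.0, 240, 0], [0, 0, 1, 2.0]])
world = rng.uniform(-1, 1, (12, 3))
image = np.array([project(P_true, X) for X in world])
image += rng.normal(0, 0.2, image.shape)          # imprecise measurements
P_est = dlt_projection_matrix(world, image)
err = np.mean([np.linalg.norm(project(P_est, X) - project(P_true, X))
               for X in world])
print(err)   # small mean reprojection error despite the noisy points
```

The over-determined system (24 equations for 11 effective unknowns here) is what buys robustness: each imprecise point contributes only a small share of the final estimate.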

Book ChapterDOI
01 Jan 1991
TL;DR: In this paper, the authors discuss studio cameras and camera mountings; the characteristics of an electronic camera that determine its quality are signal/noise ratio, sensitivity, static and dynamic resolution, color reproduction, and color registration of the three scanning rasters.
Abstract: Publisher Summary This chapter discusses studio cameras and camera mountings. The electronic color camera is one of two important signal sources for the creation of television signals. The other is the telecine. It seems likely that eventually even the classic 35 mm film camera will be superseded by the electronic camera. The first sign of this is in the use of electronic cameras for high definition television. Every studio camera system includes a camera head with zoom lens and viewfinder, which is connected through a camera cable to the camera control unit (CCU). The operational control panel and, in the case of multi-camera operations, also a master control panel, are connected to this CCU as well. After optical color separation in the prism, the signals for the three color channels R, G, and B are created by line scanning in the three camera tubes. After pre-amplification, the acquired signal current is linearly and nonlinearly processed in the head signal processor. A microprocessor-controlled correction system creates analog correction voltages, which are mixed into the scanning beam deflection unit. The characteristics of an electronic camera that determine its quality are signal/noise ratio, sensitivity, static and dynamic resolution, color reproduction, and color registration of the three scanning rasters.