
Showing papers on "Smart camera published in 1992"


Patent
31 Aug 1992
TL;DR: In this article, a portable camera is connected to a computer for capturing an image and providing the captured image to the computer for storage, with the camera ergonomically acting like an independent, self-functioning peripheral device while in actuality depending on instructions from the computer.
Abstract: A portable, electronic camera is connectable to a computer for capturing an image and providing the captured image to the computer for storage therewith. The camera ergonomically acts like an independent, self-functioning peripheral device while in actuality depending on instructions from the computer. The camera is remotely linked to the computer, e.g., by a cable, thereby allowing mobility of the camera independent of the computer. The camera includes an electronic image sensor and a circuit for driving the sensor to generate an image signal that is applied to the computer through the remote link. The readiness of the computer to accept an image signal is manifested by an operative device in the camera, which conditions the camera for image capture in response to a status signal from the computer transmitted through the remote link. In one embodiment, a capture switch is positioned on or with the camera for user engagement, whereby the operative device inhibits actuation of the capture switch until receipt of the status signal. In another embodiment, the operative device energizes an exposure readiness indicator when the status signal is received.

150 citations


Patent
10 Dec 1992
TL;DR: In this article, an electronic camera is provided as a module that attaches to the signal bus of a PC-compatible computer, and the camera includes a minimum of components, particularly an image sensor and an A/D converter, and a PC-compatible interface connector for mating with a bus extension connector on the computer.
Abstract: An electronic camera is provided as a module that attaches to the signal bus of a PC-compatible computer. The camera includes a minimum of components, particularly an image sensor and an A/D converter, and a PC-compatible interface connector for mating with a bus extension connector on the computer. By directly presenting digitized data from the camera to the signal bus of the computer through the bus connector, the camera can be kept relatively simple and the computer can be relied upon to perform image processing, storage, and display.

90 citations


Journal ArticleDOI
TL;DR: The integration of a single camera into a robotic system to control the relative position and orientation between the robot's end-effector and a moving part in real time is discussed.
Abstract: The integration of a single camera into a robotic system to control the relative position and orientation between the robot's end-effector and a moving part in real time is discussed. Only monocular vision techniques are considered because of current limitations in the speed of computer vision analysis. The approach uses geometric models of both the part and the camera, as well as the extracted image features, to generate the appropriate robot control signals for tracking. Part and camera models are also used during the teaching stage to predict important image features that appear during task completion.

41 citations
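
As an aside for readers who want the standard machinery behind this kind of single-camera tracking, a minimal image-based visual servoing step is sketched below. The interaction-matrix control law v = -lambda * L^+ (s - s*) is the textbook formulation, not necessarily the exact scheme of this paper; the focal-normalized feature positions and depths are made up.

```python
# Hedged sketch of one image-based visual servoing step (not the paper's
# exact method): drive image features s toward desired s* with the
# classic control law v = -lambda * pinv(L) @ (s - s_star).
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a normalized point feature at (x, y) with depth Z,
    relating feature velocity to the 6-DOF camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def servo_step(features, desired, depths, gain=0.5):
    """One control step: returns a 6-vector camera velocity (v, omega)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four tracked points, slightly off their goal positions.
feats = [(0.11, 0.10), (-0.10, 0.09), (-0.09, -0.11), (0.10, -0.10)]
goal  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(servo_step(feats, goal, depths=[1.0] * 4))
```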


Patent
27 May 1992
TL;DR: A user-actuated modular photography system for use in a photobooth or like enclosure to provide user self-photography with realistic, user-selective image previewing includes a camera module having a housing with a photographic camera whose lens is oriented for viewing the user when the user is within the camera's lens view.
Abstract: A user-actuated modular photography system for use in a photobooth or like enclosure to provide user self-photography with realistic user-selective user image previewing includes a camera module having a housing including a photographic camera having a lens oriented for viewing the user when within camera lens view of the photographic camera, a video camera and a gimballed camera mount assembly interconnecting the video camera with the photographic camera for video imaging by the video camera of the photographic camera lens view of the user, the video camera providing a user video image signal. There is also a gimbal drive mechanism for providing driven gimballed movement of the camera mount assembly for selective aiming of the camera lens axis within at least one aiming plane and for selective rotation of the camera lens view within an image resolution plane between vertical and horizontal orientations. Also included is at least one modular display device for colocation with the camera module within a photobooth or like enclosure. The modular display device includes a video image display mechanism for receiving the user video image signals for vertical and horizontal modes of display of the user video image for user image previewing prior to user image exposure by the photographic camera. Circuit means are included for causing the video display mechanism to switch automatically between the vertical and horizontal display modes in response to the lens view rotation. Also, a user control device provides for user-selective remote control of the aiming movement of the camera assembly, selection of the image rotation between the vertical and horizontal orientations, and user-selective remote actuation of the photographic camera, thereby capturing user photographic images in accordance with the user-previewed realistic display thereof by the video display mechanism.

35 citations


Proceedings ArticleDOI
12 May 1992
TL;DR: Real-time stereo vision using a multiple arrayed camera system, named MAC vision, is proposed, which processes the correspondence problem very rapidly by making use of the simple geometric relationship of images on the same-height scanning lines.
Abstract: Real-time stereo vision using a multiple arrayed camera system, named MAC vision, is proposed. MAC vision processes the correspondence problem, one of the basic problems in stereo vision, very rapidly by making use of the simple geometric relationship of images on the same-height scanning lines. Several problems involved in MAC vision are discussed. The results of experiments performed on a newly constructed five-camera prototype are presented. The experimental results showed that a speed of 16 frames/s was achieved, with each frame including 1800 points.

19 citations
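
A rough software analogue of the scanline-matching idea (the real MAC system does this with dedicated hardware): for horizontally aligned, rectified cameras, corresponding points lie on the same-height scanline, so matching reduces to a 1-D disparity search per pixel. Image sizes and the disparity range below are arbitrary.

```python
# Illustrative same-scanline block matching (SAD) for a rectified pair.
import numpy as np

def scanline_disparity(left, right, max_disp=16, win=2):
    """Brute-force SAD matching along same-height scanlines."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Toy example: a texture shifted by 3 pixels should recover disparity 3.
rng = np.random.default_rng(0)
right_img = rng.random((32, 64))
left_img = np.roll(right_img, 3, axis=1)   # left image shifted right by 3
interior = scanline_disparity(left_img, right_img)[2:-2, 18:-2]
print(np.bincount(interior.ravel()).argmax())   # -> 3
```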


Patent
04 Feb 1992
TL;DR: In this paper, the authors propose a system supporting both remote control, in which the control variable of a camera 10 is adjusted using a mouse 37, and automatic control, in which the control variable is adjusted automatically by the camera 10.
Abstract: PURPOSE: To allow a host computer to set camera control parameters, such as exposure and various balances, in an image pickup system configured by combining an electronic camera and the host computer. CONSTITUTION: The system provides remote control, in which the control variable of a camera 10 is adjusted using a mouse 37, or automatic control, in which the control variable is adjusted automatically by the camera 10. When remote control is selected, the control variable is input at the host computer 30 and sent to the camera 10, where it is applied. When an image pickup command is input to the host computer 30, the camera 10 picks up the image, and image data representing the object image are sent to the host computer 30. COPYRIGHT: (C)1993,JPO&Japio

18 citations
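
A hypothetical host-side sketch of the flow the abstract describes: select remote or automatic control, adjust the control variable, then issue a pickup command and receive the image data. The Camera class and its methods are invented here purely for illustration; the patent does not specify an API.

```python
# Invented stand-in for the camera 10 and host 30 interaction.
class Camera:
    """Stand-in for the camera, reachable over some remote link."""
    def __init__(self):
        self.params = {"exposure": 0.5, "white_balance": "auto"}
    def set_param(self, name, value):    # remote-control path
        self.params[name] = value
    def auto_adjust(self):               # automatic-control path
        self.params["exposure"] = 0.42   # camera picks its own value
    def capture(self):
        return b"image data"             # placeholder for pixel data

def host_session(camera, remote=True):
    if remote:
        camera.set_param("exposure", 0.8)   # value chosen at the host
    else:
        camera.auto_adjust()                # camera adjusts itself
    return camera.capture()                 # pickup command -> image data

print(len(host_session(Camera())))
```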


Proceedings ArticleDOI
01 Mar 1992
TL;DR: This paper describes the camera systems developed for Carnegie Mellon University's Calibrated Imaging Lab, together with mathematical models describing the relationships between the control parameters and the parameters of the resulting images.
Abstract: In a perfect world we would be able to use the many possible degrees of freedom in a camera system to do many useful things, such as accommodating for changes or differences in the scenes being imaged, correcting for camera behaviour that isn't quite ideal, or measuring properties of the scene by noting how the scene's image changes as the camera's parameters are varied. Unfortunately the parameters that control the formation of the camera's images often interact in complex and subtle ways that can cause unforeseen problems for machine vision tasks. To be able to effectively use multi degree of freedom camera systems we need to know how variations in the camera's control parameters are going to cause changes in the produced images. For this we need to have good mathematical models describing the relationships between the control parameters and the parameters of the resulting images. Ideally we would like to base the form of the models on an understanding of the underlying physical processes involved, but in many cases these are either unknown or are just too complex to model. In these situations experimentation and generalized modeling techniques are necessary. To perform the experiments needed to develop and validate models and to obtain calibration data for the models we need precise automated imaging systems. In this report we describe the camera systems developed for Carnegie Mellon University's Calibrated Imaging Lab and show how these systems have been used to develop methods for using computer-controlled cameras and lenses.

18 citations


Patent
24 Aug 1992
TL;DR: In this article, a remote endoscopic video camera system is provided which is capable of providing both left and right eye pre-video signals for generating a three-dimensional image, which is provided by first and second imagers which are capable of being enclosed in a single camera head, which, in turn, is coupled to the remaining camera circuitry through a single cable, without causing substantial interference between the two imagers.
Abstract: A remote endoscopic video camera system is provided which is capable of providing both left and right eye pre-video signals for generating a three-dimensional image. The left and right eye pre-video signals are provided by first and second imagers which are capable of being enclosed in a single camera head, which, in turn, is capable of being coupled to the remaining camera circuitry through a single cable, without causing substantial interference between the two imagers.

15 citations


Patent
Katsunori Nakamura1
15 Jan 1992
TL;DR: In this article, an information processing device for writing data into a computer incorporated in a camera body, or reading data from that computer, is presented.
Abstract: The present invention relates to an information processing device for writing data into a computer incorporated in a camera body or reading data from the computer. According to the present invention, there is provided an information processing device which is inserted between the camera body and an accessory device to execute writing of data into the computer of the camera body or reading of data from the computer and to execute data communication between the computer of the camera body and the accessory device.

15 citations


Proceedings ArticleDOI
03 Jan 1992
TL;DR: In this article, the authors examine what makes a camera suitable for machine vision use and discuss how measurements of camera characteristics can be useful in designing or selecting the components of a machine vision system: the video capture systems, the cameras, and the image processing algorithms.
Abstract: Solid state (CCD, CID, or multiplexed photosensor) television cameras are the most widely used input devices in machine vision, because they are relatively inexpensive, rugged, and reliable. However, the design, specification, and testing of these cameras typically are geared to their primary use in producing images that will ultimately be observed by humans; the intended applications for these cameras are as diverse as parking lot security and home entertainment. Because the video information produced by the camera is not used in the same ways by people and machine vision systems, there is no a priori reason to expect that a camera designed for one use will be optimal for another. In our work we have examined what makes a camera suitable for machine vision use. This paper describes which characteristics are important to the camera's performance in machine vision applications and why. We show how these characteristics can be measured and standardized using simple tests suitable for production screening or more extensive tests suitable for use in the laboratory. Tests for important camera characteristics, including transfer function, noise, and resolution, are described and test results for representative solid state cameras are presented. Finally, we discuss how such measurements can be useful in designing or selecting the components of a machine vision system: the video capture systems, the cameras, and the image processing algorithms.

14 citations
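
Two of the named characteristics can be screened with very simple procedures. The sketch below estimates temporal noise from repeated frames and fits a linear transfer function (mean output versus exposure); the synthetic frames and numbers are stand-ins for real captures, and the paper's own test procedures are more extensive.

```python
# Simple camera screening tests: temporal noise and linear response fit.
import numpy as np

rng = np.random.default_rng(1)

def temporal_noise(frames):
    """Per-pixel std over repeated captures, averaged over the image."""
    return np.std(np.stack(frames), axis=0).mean()

def transfer_function(exposures, mean_levels):
    """Least-squares gain/offset of the camera response."""
    gain, offset = np.polyfit(exposures, mean_levels, 1)
    return gain, offset

# Simulated camera: gain 200 DN/exposure unit, offset 10 DN, noise 2 DN.
frames = [10 + 200 * 0.3 + rng.normal(0, 2, (64, 64)) for _ in range(16)]
print("noise ~", round(temporal_noise(frames), 2), "DN")
exposures = np.linspace(0.1, 0.9, 9)
means = [np.mean(10 + 200 * e + rng.normal(0, 2, (64, 64))) for e in exposures]
print("gain, offset ~", transfer_function(exposures, means))
```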


Proceedings ArticleDOI
12 Aug 1992
TL;DR: Visual comparison of color matches on a characterized color monitor indicates that the five band camera is capable of color measurements that produce no significant visual error on the display.
Abstract: In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to that of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five band camera is 2.5 times smaller than that obtained from the three band camera. Visual comparison of color matches on a characterized color monitor indicates that the five band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise it is possible to construct a five band camera using only three sensors as in conventional cameras. The principal drawback of the five band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
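
The core of such a colorimetric camera is an optimized color matrix. The sketch below shows the basic structure, fitting an (N bands x 3) matrix by plain least squares over training patches; the paper's procedure instead minimizes a perceptual color error, and all data here are synthetic.

```python
# Least-squares fit of a color matrix mapping N-band camera responses
# to 3-channel tristimulus values (structure only; not the paper's
# perceptual-error optimization).
import numpy as np

rng = np.random.default_rng(2)

def fit_color_matrix(responses, tristimulus):
    """responses: (patches, bands); tristimulus: (patches, 3).
    Returns the (bands, 3) matrix M minimizing ||responses @ M - tri||."""
    M, *_ = np.linalg.lstsq(responses, tristimulus, rcond=None)
    return M

# Synthetic data: 24 patches seen through 5 bands vs. 3 reference channels.
true_mix = rng.random((5, 3))
cam = rng.random((24, 5))
ref = cam @ true_mix + rng.normal(0, 0.01, (24, 3))
M = fit_color_matrix(cam, ref)
print("max abs error:", np.abs(cam @ M - ref).max())
```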

Journal ArticleDOI
TL;DR: In this article, a 3D camera is developed, capable of measuring 50 complete depth images per second of arbitrary, diffusely reflecting objects, in which the depth is linearly coded as the local gray-value of the video image.
Abstract: A 3-D camera has been developed, capable of measuring 50 complete depth images per second of arbitrary, diffusely reflecting objects. The output of this real-time 3-D camera is a CCIR video signal, in which the depth is linearly coded as the local gray-value of the video image. The measurement principle is active triangulation using a solid-state laser and two galvanometer mirrors for beam and scene scanning. No complicated electronics or computers are required because the essential ‘calculations’ (centre-of-gravity of light profiles) are carried out purely optically with a special sensor consisting of wedge pixels. The use of this smart sensor makes the 3-D video camera relatively inexpensive, and the output in video-format allows the connection with any standard video framestore for direct 3-D image processing. It is concluded that this development can be of appreciable practical interest for many applications in industry.
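
Because depth is linearly coded as gray value, turning a frame from this camera into a metric depth map is a single affine mapping. The working range in the sketch below is an assumed example, not a figure from the paper.

```python
# Decode a gray-coded depth frame with one affine map (assumed range).
import numpy as np

def gray_to_depth(gray, z_near=0.5, z_far=2.5, levels=255):
    """Map 8-bit gray values to metric depth over an assumed range [m]."""
    return z_near + (np.asarray(gray, dtype=float) / levels) * (z_far - z_near)

frame = np.array([[0, 128, 255]])    # one toy scanline of gray values
print(gray_to_depth(frame))          # -> [[0.5, ~1.5, 2.5]] metres
```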

01 Jan 1992
TL;DR: Techniques to automatically synthesize desirable camera views of a known scene by posing the problem in a constrained optimization setting and obtaining viewpoints that are globally admissible and locally optimal are presented.
Abstract: In this thesis, we present techniques to automatically synthesize desirable camera views of a known scene. Desirability of a camera view of a scene is represented in terms of a set of constraints. These constraints express whether certain scene features of interest are detectable or not in the resulting image. The feature detectability constraints that are chosen are fairly generic to vision tasks and require that the features are: (1) not occluded--the visibility constraint; (2) resolvable to a given specification--the resolution constraint; (3) in-focus--the focus constraint; (4) within the field-of-view of the camera--the field-of-view constraint. An in-depth analysis of each of the above constraints results in the locus of viewpoints that satisfy each constraint separately. In this work, a viewpoint is an eight-dimensional quantity that consists of the three positional and two orientational degrees of freedom of camera placement, and three optical parameters of camera and lens setting. In order to determine globally admissible viewpoints, the loci of the individual constraints are combined by posing the problem in a constrained optimization setting. Using existing optimization schemes, viewpoints that are globally admissible and locally optimal are obtained. In order to realize such a computed viewpoint in an actual sensor setup, the relationships mapping the planned parameters to the parameters that can be controlled, are obtained. This mapping is determined in the case of a sensor setup that consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus and aperture control. The lens is modeled by a general thick lens model with non-coinciding pupils and principal points. The sensor planning and sensor modeling techniques that have been developed compose the MVP system. MVP is used in a robotic vision system that consists of a camera with a controllable lens mounted on a robot manipulator. The camera is positioned and its lens is set according to the results generated by MVP. Camera views taken from the computed viewpoints verify that the feature detectability constraints are indeed satisfied.
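
A toy version of the constrained-optimization framing, using scipy: choose a camera distance to a feature so that the feature is resolved to at least a required pixel size while staying inside the field of view, then optimize within the admissible set. The constraint forms and numbers are simplified stand-ins for the thesis's eight-dimensional loci.

```python
# One-dimensional caricature of viewpoint planning as constrained
# optimization: stand as far back as the resolution constraint allows.
import numpy as np
from scipy.optimize import minimize

FOCAL_PX = 800.0      # assumed focal length in pixels
FEATURE_SIZE = 0.02   # feature diameter in metres
HALF_IMAGE = 320.0    # half image width in pixels

def objective(v):                 # prefer standing back (e.g. safety)
    return -v[0]

def resolution_c(v):              # projected size - required size >= 0
    return FOCAL_PX * FEATURE_SIZE / v[0] - 10.0

def fov_c(v):                     # projected size must fit in the image
    return HALF_IMAGE - FOCAL_PX * FEATURE_SIZE / v[0]

res = minimize(objective, x0=[1.0],
               constraints=[{"type": "ineq", "fun": resolution_c},
                            {"type": "ineq", "fun": fov_c}],
               bounds=[(0.1, 10.0)])
print("admissible, locally optimal distance:", res.x[0])   # ~1.6 m
```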

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The paper deals with geometric modelling, the control system architecture, the practical system design, and some accuracy considerations, with a focus on navigation of autonomous vehicles and obstacle detection in such an environment.
Abstract: Basically the Multivision System consists of two different sensor systems combined into a multisensor system via the data processing. The first is a 2D picture processing system, whereas the second is the 3D Laser Range Finder module. In view of the X/Y-scanner, this module feeds digital picture data--information on the object's position in terms of its Z axis and its tilt and turn angles--to the interface. The Laser Range Finder provides absolute range values at the interface, and the laser spot on the surface of the measured object is detected by the camera system. Any information of a scene provided by the camera system (e.g. edge detection, edge description) can be used to control the laser spot in view of the 3D measurement. Thereby the location of a scan point, which is measured with the laser scanner, can be transformed into the camera system, so that the position in the camera image is calculable. An easy way to describe the geometric information of such points is the use of coordinate systems. Therefore the multisensor system is modeled by a set of different coordinate systems: the scanner, the camera, and the Cartesian transfer coordinate system. The paper deals with geometric modelling, the control system architecture, the practical system design, and some accuracy considerations. The first applications addressed in this paper are navigation of autonomous vehicles and obstacle detection in such an environment.
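
The coordinate chain the paper describes reduces, for a single point, to a rigid transform followed by a perspective projection: a point measured in the scanner frame is mapped into the camera frame and then to a pixel. R, t, and K below are made-up calibration values.

```python
# Transform a laser-scanner point into the camera frame and project it,
# so its position in the camera image is calculable (assumed calibration).
import numpy as np

R = np.eye(3)                      # scanner-to-camera rotation (assumed)
t = np.array([0.10, 0.0, 0.0])     # scanner-to-camera translation [m]
K = np.array([[800.0, 0.0, 320.0], # pinhole intrinsics: f = 800 px,
              [0.0, 800.0, 240.0], # principal point (320, 240)
              [0.0, 0.0, 1.0]])

def scan_point_to_pixel(p_scan):
    p_cam = R @ np.asarray(p_scan) + t    # rigid transform
    uvw = K @ p_cam                       # perspective projection
    return uvw[:2] / uvw[2]               # normalize homogeneous coords

print(scan_point_to_pixel([0.5, 0.2, 3.0]))   # pixel location of laser spot
```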

Proceedings ArticleDOI
02 Jun 1992
TL;DR: The authors compare the performance of a spatial domain and a spatial frequency domain control method used to assess image content information from charge coupled device (CCD) camera imagery to illustrate comparative levels of control for different water qualities, object contrasts, and ranges.
Abstract: The authors compare the performance of a spatial domain and a spatial frequency domain control method used to assess image content information from charge coupled device (CCD) camera imagery. Sample underwater images were processed using the developed techniques and are used to illustrate comparative levels of control for different water qualities, object contrasts, and ranges. The implementation algorithm required to use these techniques to control an autonomous underwater vehicle's camera light level and range-to-target distance is also presented.
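
Generic versions of the two kinds of measures being compared can be written in a few lines: a spatial-domain score from gradient magnitudes and a frequency-domain score from high-frequency FFT energy. These are illustrative stand-ins, not the authors' exact metrics; the "turbid" image is simulated by a simple contrast reduction.

```python
# Spatial vs. frequency-domain image-content measures (generic stand-ins).
import numpy as np

def spatial_content(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

def frequency_content(img, cutoff=0.25):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))
    return spec[radius > cutoff].sum() / spec.sum()

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
hazy = 0.5 + 0.1 * sharp          # low-contrast "turbid water" image
for name, img in [("sharp", sharp), ("hazy", hazy)]:
    print(name, round(spatial_content(img), 3),
          round(frequency_content(img), 3))
```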

Proceedings ArticleDOI
01 Nov 1992
TL;DR: An active vision system which employs two high-resolution cameras for image acquisition and is capable of automatically directing movements of the cameras so that camera positioning and image acquisition are tightly coupled with visual processing.
Abstract: This paper describes an active vision system which employs two high-resolution cameras for image acquisition. The system is capable of automatically directing movements of the cameras so that camera positioning and image acquisition are tightly coupled with visual processing. The system was developed as a research tool and is largely based on off-the-shelf components. A central workstation controls imaging parameters, which include five degrees of freedom for camera positioning (tilt, pan, translation, and independent vergence) and six degrees of freedom for the control of two motorized lenses (focus, aperture, and zoom). This paper is primarily concerned with describing the hardware of the system, the imaging model, and the calibration method employed. A brief description of system software is also given.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: A vision system is proposed that has two differentiated visual fields, i.e. peripheral and central, and whose fixation point on the scene is actively actuated, aiming at an active vision system that recognizes the environment by gazing at each object of interest.
Abstract: A vision system is proposed that has two differentiated visual fields, i.e. peripheral and central, and whose fixation point on the scene is actively actuated. The proposed vision system aims at an active vision system that recognizes the environment actively by gazing at each object of interest. To build this system, the authors developed a vision system with two CCD cameras whose fixation points are controlled by a computer. A fundamental vision model is proposed that consists of five independent neural network modules. Each module is trained individually and processes visual information independently. To demonstrate the feasibility of the proposed vision model, some fundamental experiments were carried out.

Journal ArticleDOI
01 Aug 1992
TL;DR: A wide-screen color video camera with a 1/3 in 9:16 charge coupled device (CCD) imager was successfully developed; to get a higher picture quality, exposure and white balance were adaptively controlled through automatically understanding when and where the camera was used.
Abstract: As one application of digital camera signal processing, a wide-screen color video camera with a 1/3 in 9:16 charge coupled device (CCD) imager was successfully developed. To get a higher picture quality, exposure and white-balance were adaptively controlled through automatically understanding when and where the camera was used. The camera has full 500-line resolution and can be applied to a wide-screen TV system.
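
The camera's adaptive control is proprietary, but the kind of adaptation involved can be illustrated with a minimal gray-world white balance and a mean-level exposure correction, as below; all constants are arbitrary.

```python
# Gray-world white balance plus mean-level exposure gain (illustrative).
import numpy as np

def gray_world_balance(rgb):
    """Scale each channel so the channel means match the overall mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(rgb * (means.mean() / means), 0.0, 1.0)

def exposure_gain(rgb, target=0.45):
    """Multiplicative gain steering mean luminance toward a target."""
    return target / max(rgb.mean(), 1e-6)

rng = np.random.default_rng(4)
frame = rng.random((48, 48, 3)) * np.array([1.2, 1.0, 0.7]) * 0.4  # warm, dark
balanced = gray_world_balance(frame)
print("channel means:", balanced.reshape(-1, 3).mean(axis=0))
print("suggested gain:", round(exposure_gain(frame), 2))
```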

Book ChapterDOI
03 Jan 1992
TL;DR: An approach to automating the control of virtual cameras in computer animation determines the best view direction based on the view-direction unsuitability functions of actors and the actors' weights; a BSP-based rule is also proposed to keep the order of actors on the screen.
Abstract: This paper presents an approach to automating the control of virtual cameras in computer animation. The best view direction is determined based on the view direction unsuitability functions of actors and actors' weights, and the camera is then positioned. To keep the order of actors on the screen, a BSP-based rule is also proposed.
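
The selection rule can be sketched directly from the abstract: sample candidate view directions and pick the one minimizing the weighted sum of per-actor unsuitability. The particular unsuitability function below (penalizing views of an actor's back) is an invented example, not the paper's.

```python
# Best view direction as a weighted minimum over sampled directions.
import numpy as np

def unsuitability(view_dir, actor_facing):
    """0 when viewing an actor head-on, 1 when viewing from behind."""
    return 0.5 * (1.0 + np.dot(view_dir, actor_facing))

def best_view_direction(actors, weights, samples=360):
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    costs = [sum(w * unsuitability(d, a) for a, w in zip(actors, weights))
             for d in dirs]
    return dirs[int(np.argmin(costs))]

facings = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # two actors
print(best_view_direction(facings, weights=[2.0, 1.0]))
```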

Proceedings ArticleDOI
30 Apr 1992
TL;DR: This work explores focus to obtain depth or structure perception of the world, varying the degree of focusing by moving the camera with respect to the object position.
Abstract: Vision systems are a possible choice to obtain sensorial data about the world in robotic systems. To obtain three-dimensional information using vision we can use different computer vision techniques such as stereo, motion, or focus. In particular, this work explores focus to obtain depth or structure perception of the world. In practice, focusing can be obtained by displacing the sensor plate with respect to the image plane, by moving the lens, or by moving the object with respect to the optical system. Moving the lens or sensor plate with respect to each other causes changes of the magnification and corresponding changes on the object coordinates. In order to overcome these problems, we propose varying the degree of focusing by moving the camera with respect to the object position. In our case, the camera is attached to the tool of a manipulator in a hand-eye configuration, with the position of the camera always known. This approach ensures that the focused areas of the image are always subjected to the same magnification. To measure the focus quality we use operators to evaluate the quantity of high-frequency components on the image. Different types of these operators were tested and the results compared.
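
A minimal depth-from-focus loop in the spirit of the paper: score each image of a focus stack by the quantity of high-frequency components (here a Laplacian energy operator, one of the family the authors test) and take the camera position giving the sharpest image. The blur model and distances are synthetic.

```python
# Depth from focus: pick the stack index maximizing a focus measure.
import numpy as np

def laplacian_energy(img):
    """Sum of squared 4-neighbour Laplacian responses: high when in focus."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float((lap ** 2).sum())

def blur(img, k):
    """Crude box blur repeated k times, standing in for defocus."""
    for _ in range(k):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

rng = np.random.default_rng(5)
scene = rng.random((64, 64))
distances = [0.30, 0.35, 0.40, 0.45, 0.50]           # camera positions [m]
stack = [blur(scene, abs(i - 2)) for i in range(5)]  # sharpest at index 2
scores = [laplacian_energy(img) for img in stack]
print("in-focus distance:", distances[int(np.argmax(scores))])
```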

01 Jan 1992
TL;DR: This thesis examines the control of the motion of a visual sensor and presents an efficient method for determining the set of viewpoints which satisfy an optimality criterion.
Abstract: This thesis examines the control of the motion of a visual sensor. Control of the position of a camera falls into the area of computer vision called active vision. Active vision is concerned with controlling camera parameters, such as the camera trajectory, in ways that serve to make the vision processing more robust. We examine three particular active vision tasks which exemplify the advantage of controlling camera position. Building a three dimensional model of an object requires three dimensional data, such as depth to the object surface. Small controlled motion of the camera induces object dependent optical flows that are to be used to estimate depths to points in the scene. We use knowledge of the motion to estimate depth to visible surfaces. Our approach models the individual sources of error in the estimation process to recover a robust estimate of the depth and a measure of the uncertainty in the estimate. Second, we look for motions of the camera which move the camera to a new viewpoint from which a more accurate solution to the computation of depth from motion can be obtained. We present an efficient method for determining the set of viewpoints which satisfy an optimality criterion. Using the depth measured at the current viewpoint, we compute the direction of motion to move the camera towards an optimal viewpoint. This is an example of directed vision in that the vision system carrying out the vision task specifies a trajectory for the camera based on the visual information in the scene. From a particular viewpoint, the "attention" of the visual system may be directed to portions of the scene for further study. We present a method to obtain visual servo control of a visual sensor based on a model of visual attention. Control of the direction of gaze of the camera is achieved through the specification of gains in parallel visual feedback loops.
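
For the depth-from-controlled-motion step, a purely lateral translation gives the familiar relation Z = f * T / dx, and a first-order error propagation attaches an uncertainty to each estimate, mirroring the thesis's emphasis on modeling the sources of error. The numbers below are made up.

```python
# Depth and uncertainty from a small known lateral camera translation.
def depth_from_motion(flow_px, baseline_m, focal_px):
    """Depth for a purely lateral translation of the camera."""
    return focal_px * baseline_m / flow_px

def depth_sigma(flow_px, baseline_m, focal_px, flow_sigma_px):
    """First-order uncertainty: |dZ/d(flow)| * sigma_flow."""
    return focal_px * baseline_m / flow_px ** 2 * flow_sigma_px

Z = depth_from_motion(flow_px=8.0, baseline_m=0.01, focal_px=800.0)
s = depth_sigma(flow_px=8.0, baseline_m=0.01, focal_px=800.0,
                flow_sigma_px=0.2)
print(f"depth = {Z:.3f} m +/- {s:.3f} m")   # -> 1.000 m +/- 0.025 m
```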

Proceedings ArticleDOI
01 Jan 1992
TL;DR: A method is proposed which involves moving a single camera to prolong the baseline, then measuring the motion parameters of the camera by image correspondence; primitives are used as units of correspondence to make the matching stable.
Abstract: It is difficult to obtain sufficient accuracy for 3D stereo measurement in an outdoor environment. The authors propose a method which involves moving a single camera to prolong the baseline, then measuring the motion parameters of the camera by image correspondence. Primitives are used as units of correspondence to make the matching stable. This paper presents the method and some experimental results.

Proceedings ArticleDOI
09 Nov 1992
TL;DR: A top-down scheme is proposed for target acquisition, recognition, and localization based on a top-down process of human vision subsumed within the scanpath theory.
Abstract: A top-down scheme is proposed for target acquisition, recognition, and localization. This scheme is based on a top-down process of human vision subsumed within the scanpath theory. An important problem is to arrange for camera control when an obstacle intervenes between the camera and the robot being monitored. Two possible procedures to resolve this problem have been evaluated in simulation experiments utilizing computer graphics.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: The next phase of this project will bring this type of analysis into a machine environment more conducive to interactivity: a backhoe simulator with levers to control the vehicle and bucket positions, viewed through a virtual reality environment.
Abstract: A major criterion in the design of backhoes (and other heavy machinery) is the ability of the operator to see all critical portions of the vehicle and the surrounding environment. Computer graphics provides a method for analyzing this ability prior to the building of full-scale wooden models. By placing the computer graphic camera at the operator's eyepoint, designers can detect poor placement of supports, blind spots, etc. In this type of analysis, the camera becomes an active, yet somewhat imperfect, participant in our understanding of what an operator of the backhoe 'sees'. In order to simulate a backhoe operator's vision from within a cab, one needs to expand the angle of view of the camera to mimic unfocused, peripheral vision. A traditional wide-angle lens creates extreme distortions that are not present in 'natural' vision, and is therefore hardly an adequate representation. The solution we arrived at uses seven cameras fanned out horizontally in order to capture a relatively undistorted 155 degree angle of view. In addition, another camera displays and numerically analyzes the percentage of the loader bucket visible and blocked. These two views are presented simultaneously in order to address both the 'naturalistic' and quantitative needs of the designers, as well as to point to the incompleteness of any one representation of a scene. In the next phase of this project we will bring this type of analysis into a machine environment more conducive to interactivity: a backhoe simulator with levers to control the vehicle and bucket positions, viewed through a virtual reality environment.
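
The camera-fan geometry is easy to check with back-of-envelope arithmetic: splitting a 155 degree total field across seven cameras gives roughly 22.1 degrees per camera, with pan angles spaced accordingly (any deliberate overlap between adjacent cameras is ignored here).

```python
# Per-camera field of view and pan angles for the seven-camera fan.
total_fov = 155.0
n_cameras = 7
per_camera = total_fov / n_cameras
pans = [(i - (n_cameras - 1) / 2) * per_camera for i in range(n_cameras)]
print(f"per-camera FOV ~ {per_camera:.1f} deg")
print("pan angles:", [f"{p:+.1f}" for p in pans])
```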

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A qualitative vision system suitable for intelligent robots perceives depth information qualitatively using monocular 2-D images, by establishing a set of propositions relating depth information to the changes of image regions caused by camera motion.
Abstract: There are two kinds of depth perception for robot vision systems: quantitative and qualitative. The first can be used to reconstruct the visible surfaces numerically, while the second describes the visible surfaces qualitatively. In this paper, we present a qualitative vision system suitable for intelligent robots. The goal of such a system is to perceive depth information qualitatively using monocular 2-D images. We first establish a set of propositions relating depth information, such as 3-D orientation and distance, to the changes of an image region caused by camera motion. We then introduce an approximation-based visual tracking system. Given an object, the tracking system tracks its image while moving the camera in a way dependent upon the particular depth property to be perceived. Checking the data generated by the tracking system against our propositions provides us the depth information about the object. The visual tracking system can track image regions in real time even as implemented on a PC AT clone machine, and mobile robots can naturally provide the inputs to our visual tracking system; therefore, we are able to construct a real-time, cost-effective, monocular, qualitative, 3-dimensional robot vision system. To verify our idea, we present examples of perception of planar surface orientation, distance, size, dimensionality, and convexity/concavity.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes has been installed in a plant and has proven to be extremely effective.
Abstract: This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which was previously performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates up to 30 wheels per minute regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked off from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
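
A skeleton of the described pipeline, with invented features: extract a few rotation-invariant measurements from a segmented wheel mask and classify with a nearest-class-mean rule, so that a new wheel style is added simply by storing its mean feature vector, matching the paper's point about easy extension.

```python
# Rotation-invariant features + nearest-class-mean wheel classifier.
import numpy as np

def wheel_features(mask):
    """Rotation-invariant features of a binary wheel mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    return np.array([mask.sum(), r.mean(), r.std()])

class NearestMeanClassifier:
    def __init__(self):
        self.means = {}                      # style name -> mean features
    def add_style(self, name, samples):
        self.means[name] = np.mean([wheel_features(s) for s in samples],
                                   axis=0)
    def classify(self, mask):
        f = wheel_features(mask)
        return min(self.means,
                   key=lambda n: np.linalg.norm(f - self.means[n]))

def disc(size, radius):
    yy, xx = np.mgrid[:size, :size] - size // 2
    return (np.hypot(yy, xx) < radius).astype(np.uint8)

clf = NearestMeanClassifier()
clf.add_style("14-inch", [disc(64, 20)])
clf.add_style("15-inch", [disc(64, 26)])
print(clf.classify(disc(64, 25)))            # -> "15-inch"
```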

Proceedings ArticleDOI
Q. Chen1, J.Y.S. Luh1
12 May 1992
TL;DR: The method of relaxation labeling, which is a parallel distributed processing mechanism, is presented and the relaxation labeling algorithm was applied to perform an object identification task in an intelligent robotic workstation with two cameras.
Abstract: The method of relaxation labeling, which is a parallel distributed processing mechanism, is presented. Relaxation labeling is able to integrate the sensor data with some known knowledge. Such an integration is implemented by forcing constraint satisfaction. One type of constraint satisfaction in intelligent robotic systems is described. A relaxation labeling algorithm is presented. The local convergence properties of a labeling process are established. Its global behavior was examined numerically. To illustrate its potential applications in robotic workstations, the relaxation labeling algorithm was applied to perform an object identification task in an intelligent robotic workstation with two cameras.
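
The classic relaxation-labeling update (in the Rosenfeld, Hummel, and Zucker style) is sketched below as a stand-in for the paper's algorithm, whose exact compatibility model may differ: label probabilities at each object are boosted by support from compatible labels on other objects, then renormalized.

```python
# Classic relaxation-labeling iteration with a toy compatibility model.
import numpy as np

def relax(p, compat, iters=50):
    """p: (objects, labels) probabilities; compat[i, l, j, m]:
    compatibility of label l on object i with label m on object j."""
    for _ in range(iters):
        support = np.einsum('iljm,jm->il', compat, p)   # q_i(l)
        p = p * (1.0 + support)                         # multiplicative update
        p /= p.sum(axis=1, keepdims=True)               # renormalize
    return p

# Two objects, two labels; compatibilities favour the objects agreeing.
compat = np.zeros((2, 2, 2, 2))
for i in range(2):
    j = 1 - i
    compat[i, 0, j, 0] = compat[i, 1, j, 1] = 0.5    # same label: support
    compat[i, 0, j, 1] = compat[i, 1, j, 0] = -0.5   # different: inhibit
p0 = np.array([[0.6, 0.4], [0.55, 0.45]])            # noisy initial evidence
print(relax(p0, compat).round(3))                    # converges to label 0
```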

Proceedings ArticleDOI
30 Apr 1992
TL;DR: The MVP sensor planning and modeling system as mentioned in this paper automatically determines camera viewpoints and settings so that object features of interest are simultaneously visible, inside the field-of-view, in-focus and magnified as required.
Abstract: In this paper we present an overview of the MVP sensor planning and modeling system that we have developed. MVP automatically determines camera viewpoints and settings so that object features of interest are simultaneously visible, inside the field-of-view, in-focus and magnified as required. We have analytically characterized the domain of admissible camera locations, orientations and optical settings for each of the above feature detectability requirements. In addition, we have posed the problem in an optimization setting in order to determine viewpoints that simultaneously satisfy all previous requirements. The location, orientation and optical settings of the computed viewpoint are achieved in the employed sensor setup by using sensor calibration models. For this purpose, calibration techniques have been developed that determine the mapping between the parameters that are planned and the parameters that can be controlled in a sensor setup which consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus and aperture control. Experimental results are shown of these techniques when all the above feature detectability constraints are included. In order to verify satisfaction of these constraints, camera views are taken from the computed viewpoints by a robot vision system that is positioned, and whose lens is set, according to the results of this method.

Proceedings ArticleDOI
Peter Cencik1
01 Mar 1992
TL;DR: This paper examines the effect of the camera's analog output voltage being sampled asynchronously by common industrial machine vision systems, which can result in worse edge deformation and mislocation for a camera with higher resolution and better signal-to-noise ratio than for a camera of lower performance.
Abstract: It is important to select the right solid state camera in every machine vision application. The solid state camera technical specifications, such as spectral response, signal to noise ratio, dynamic range, sensitivity, sensor type and size, and horizontal and vertical resolution, are the leading criteria for sensor selection. In general, it is expected that a camera with better specifications will improve the gaging accuracy of a vision system. Yet, many times the result does not meet the expectation. In some cases the system performance even decreases. The reason is that the analog output voltage from the camera is sampled asynchronously by the common industrial machine vision systems, and this can result in worse edge deformation and mislocation for a camera with higher resolution and better signal to noise ratio than for a camera of lower performance. In this paper, we examine this effect with particular emphasis on edge detection performance. Video sampling timing charts and achieved subpixel accuracy and repeatability, when using several common solid state cameras with the same frame grabber, are presented. Guidelines for selecting a solid state camera according to the frame grabber are provided.
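
The subpixel mislocation effect is easy to reproduce in simulation: locate an edge by a three-point parabola fit around the gradient peak, then shift the sampling phase by a fraction of a pixel and watch the recovered position move. This illustrates the effect generically and is not the paper's test setup.

```python
# Subpixel edge location vs. sampling phase (generic illustration).
import numpy as np

def edge_subpixel(profile):
    """Subpixel edge position from a parabola fit on the gradient peak."""
    g = np.abs(np.diff(profile))
    k = int(np.argmax(g))
    a, b, c = g[k - 1], g[k], g[k + 1]
    return k + 0.5 + 0.5 * (a - c) / (a - 2 * b + c)

def smooth_edge(x, pos, width=1.2):
    """Analog edge profile: a smoothed step at `pos`."""
    return 1.0 / (1.0 + np.exp(-(x - pos) / width))

x = np.arange(64.0)
for phase in (0.0, 0.25, 0.5):                 # sampling phase in pixels
    prof = smooth_edge(x + phase, pos=30.3)
    print(f"phase {phase:.2f}: edge at {edge_subpixel(prof) - phase:.3f}")
```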