
Showing papers on "Image sensor published in 1995"


Journal ArticleDOI
01 Feb 1995
TL;DR: A survey of research in the area of vision sensor planning is presented, along with a brief description of representative sensing strategies for the tasks of object recognition and scene reconstruction.
Abstract: A survey of research in the area of vision sensor planning is presented. The problem can be summarized as follows: given information about the environment as well as information about the task that the vision system is to accomplish, develop strategies to automatically determine sensor parameter values that achieve this task with a certain degree of satisfaction. With such strategies, sensor parameter values can be selected and can be purposefully changed in order to effectively perform the task at hand. The focus here is on vision sensor planning for the task of robustly detecting object features. For this task, camera and illumination parameters such as position, orientation, and optical settings are determined so that object features are, for example, visible, in focus, within the sensor field of view, magnified as required, and imaged with sufficient contrast. References to, and a brief description of, representative sensing strategies for the tasks of object recognition and scene reconstruction are also presented. For these tasks, sensor configurations are sought that will prove most useful when trying to identify an object or reconstruct a scene.

493 citations


Patent
20 Dec 1995
TL;DR: In this article, an electronic camera consists of an image sensor for capturing an image, a converter stage for converting the image into digital image data, and a memory for storing a plurality of categories providing classification of the images by subject.
Abstract: An electronic camera captures images representing a variety of subjects and categorizes the image according to subject matter. The camera comprises an image sensor for capturing an image, a converter stage for converting the image into digital image data, and a memory for storing a plurality of categories providing classification of the images by subject. A processor in the camera has the capability of assigning the plurality of categories to the images captured by the camera, with each category providing a subject classification for the images. A user selects one or more categories for a plurality of images prior to capture, and an output image signal is then generated including the digital image data corresponding to a captured image and the particular category selected by the user. The categories can be default identifiers stored in the memory, or can be names, text (i.e., account number), and/or graphics overlays (i.e., company logo) entered via a host computer and uploaded to the camera memory before the pictures are taken.

444 citations


Journal ArticleDOI
TL;DR: The results showed that the flat-panel detector for digital radiology can potentially satisfy the detector design requirements for radiography (e.g., chest radiography and mammography), but it is not quantum noise limited below the mean exposure rate typically used in fluoroscopy.
Abstract: We investigate a concept for making a large area, flat-panel detector for digital radiology. It employs an x-ray sensitive photoconductor to convert incident x-radiation to a charge image which is then electronically read out with a large area integrated circuit. The large area integrated circuit, also called an active matrix, consists of a two-dimensional array of thin film transistors (TFTs). The potential advantages of the flat-panel detector for digital radiography include: instantaneous digital radiographs without operator intervention; compact size approaching that of a screen-film cassette and thus compatibility with existing x-ray equipment; high quantum efficiency combined with high resolution. Its potential advantages over the x-ray image intensifier (XRII)/video systems for fluoroscopy include: compactness, geometric accuracy, high resolution, and absence of veiling glare. The feasibility of the detector for digital radiology was investigated using the properties of a particular photoconductor (amorphous selenium) and active matrix array (with cadmium selenide TFTs). The results showed that it can potentially satisfy the detector design requirements for radiography (e.g., chest radiography and mammography). For fluoroscopy, the images can be obtained in real-time but the detector is not quantum noise limited below the mean exposure rate typically used in fluoroscopy. Possible improvements in x-ray sensitivity and noise performance for the application in fluoroscopy are discussed.

297 citations


Patent
Kazuaki Takano, Tatsuhiko Monzi, Tanaka Yasunari, Eiryoh Ondoh, Makoto Shioya
01 Jun 1995
TL;DR: In this paper, an environment recognition device is described in which an image sensor is installed so that a mounting-location recognition mark falls within its image pickup area; a comparator compares the mark locations in images picked up sequentially by the image sensor with the initial location of the mark, and when a change is detected, an indication signal is output to indicate that the mounting location of the image sensor needs correction, thereby preventing erroneous decisions in image processing when the mounting position is changed by vibration, contact, or the like.
Abstract: An environment recognition device is formed in such a manner that an image sensor is installed so that an image sensor mounting-location recognition mark falls within the image pickup area. A comparator is provided for comparing the mark locations in images picked up sequentially by the image sensor with the initial location of the recognition mark in the image pickup area. When it is detected that the sequentially recognized mark location has changed with respect to the initial location, an indication signal is output to indicate that correction of the image sensor mounting location is required, thereby preventing erroneous decisions in image processing when the mounting position is changed by vibration, contact, or the like.

228 citations
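
The core check in the patent above, whether the mounting-reference mark has drifted from its stored initial position in the pickup area, can be illustrated with a minimal Python sketch. The brightness-threshold mark locator, the centroid computation, and the pixel tolerance are assumptions for illustration; the patent does not specify how the mark is detected or compared.

import numpy as np

def mark_centroid(frame, threshold=200):
    # Hypothetical mark detector: take the centroid of pixels brighter than
    # a threshold, assuming the reference mark is the brightest feature.
    ys, xs = np.nonzero(frame >= threshold)
    return np.array([xs.mean(), ys.mean()])

def mounting_needs_correction(frame, initial_xy, tol_px=2.0):
    # Compare the currently observed mark location with the stored initial
    # location; flag a required mounting correction if it has moved.
    return np.linalg.norm(mark_centroid(frame) - initial_xy) > tol_px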


Patent
10 May 1995
TL;DR: In this article, an image sensor array system is arranged to enable oblique access for readout of image data from a stepped pixel pattern of sensor cells, which represents an oblique line component of an image portion containing a 2-D bar code or other dataform.
Abstract: An image sensor array system is arranged to enable oblique access for readout of image data from a stepped pixel pattern of sensor cells. The stepped pixel pattern represents an oblique line component of an image portion containing a 2-D bar code or other dataform. An obliquely aligned bar code image can thus be read out along oblique lines that follow the rows of bar code elements, traversing the elements of each row. The sensor array (16) is accessed by horizontal and vertical readout circuits (22 and 24) under the control of address signals from an address unit (20). Location signals, from a source (12), indicative of a selected image portion (39) may be used by the address unit (20) to provide address signals representative of the stepped pixel pattern for a particular oblique line component. Under the control of the address signals, image data from cells at the intersection of array lines and columns are sampled by sampling devices (26-32) and provided as output signals representative of the selected oblique line component. The output signals are then usable for decoding the bar code or other dataform.

202 citations
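
The stepped pixel pattern that approximates an oblique line through the sensor array can be generated with a Bresenham-style walk over row and column addresses. This is a hedged illustration of the addressing idea only; the patent's address unit (20) is hardware, and the function name and interface here are invented for the sketch.

def stepped_pixel_addresses(x0, y0, x1, y1):
    # Yield (row, column) addresses of the stepped pixel pattern that
    # approximates the oblique line from (x0, y0) to (x1, y1).
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx + dy
    x, y = x0, y0
    while True:
        yield y, x                      # cell to strobe for readout
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += sx
        if e2 <= dx:
            err += dx
            y += sy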


Proceedings ArticleDOI
01 Sep 1995
TL;DR: An approach to image fusion using the wavelet transform is described, which allows experimentation with various wavelet array combination and manipulation methods for image fusion, using a set of basic operations on wavelet frequency blocks.
Abstract: As new remote sensing systems are deployed, we will see an increase in the amount of image data available at different wavelengths. Also, images from a single sensor over the same area often exhibit clouds, forcing analysts to switch among several images or to mosaic the images by manually defining cutlines to eliminate clouds. The ability to fuse multiple images over the same area, and to have the fused product exhibit, in a single image, the important details visible in individual bands has become crucial in dealing with the large volume of data available. We describe an approach to image fusion using the wavelet transform. When images are merged in wavelet space, we can process different frequency ranges differently. For example, high frequency information from one image can be combined with lower frequency information from another, for performing edge enhancement. We have built a prototype system that allows experimentation with various wavelet array combination and manipulation methods for image fusion, using a set of basic operations on wavelet frequency blocks. Problems caused by image misregistration and processing artifacts are described. Examples of wavelet fusion results are shown, along with test images that clarify behavior of the wavelet fusion methods used.

177 citations
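
A minimal software version of wavelet-domain fusion, combining the decompositions of two registered, same-size images and reconstructing a single product, might look like the sketch below. The choice of PyWavelets, the db2 basis, and the choose-max rule for detail coefficients are assumptions; the paper's prototype supports a range of combination rules on wavelet frequency blocks.

import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    # Decompose both images, keep the stronger detail coefficient at each
    # position, average the coarse approximation bands, then reconstruct.
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]              # approximation band
    for da, db in zip(ca[1:], cb[1:]):           # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)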


Patent
08 Dec 1995
TL;DR: In this article, the scene illuminant is determined, and an optimum color-correction transformation is then selected to minimize color errors between an original scene and a reproduced image by adjusting three or more parameters.
Abstract: Multi-channel color image signals from a digital camera having multi-channel image sensors are corrected to account for variations in scene illuminant. This is accomplished by determining the scene illuminant and determining an optimum color-correction transformation in response to the scene illuminant which transform minimizes color errors between an original scene and a reproduced image by adjusting three or more parameters.

168 citations
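
The patent's central idea, picking a color-correction transformation according to the detected scene illuminant, reduces in its simplest form to selecting a 3×3 matrix per illuminant and applying it to every pixel. The illuminant names and matrix values below are placeholders, not the patent's optimized parameters.

import numpy as np

# Hypothetical per-illuminant 3x3 correction matrices (placeholder values).
CORRECTION = {
    "daylight":    np.eye(3),
    "tungsten":    np.diag([1.40, 1.00, 0.65]),
    "fluorescent": np.diag([1.15, 1.00, 0.90]),
}

def correct_colors(rgb, illuminant):
    # rgb: (..., 3) array of camera signals; apply the matrix chosen for the
    # detected illuminant to every pixel.
    return rgb @ CORRECTION[illuminant].T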


Journal ArticleDOI
01 Feb 1995
TL;DR: This paper presents a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all of the feature detectability requirements simultaneously and with a margin, and presents experimental results of this technique applied to a robotic vision system consisting of a camera mounted on a robot manipulator in a hand-eye configuration.
Abstract: The MVP (machine vision planner) model-based sensor planning system for robotic vision is presented. MVP automatically synthesizes desirable camera views of a scene based on geometric models of the environment, optical models of the vision sensors, and models of the task to be achieved. The generic task of feature detectability has been chosen since it is applicable to many robot-controlled vision systems. For such a task, features of interest in the environment are required to simultaneously be visible, inside the field of view, in focus, and magnified as required. In this paper, we present a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all previous requirements simultaneously and with a margin. In addition, we present experimental results of this technique when applied to a robotic vision system that consists of a camera mounted on a robot manipulator in a hand-eye configuration.

167 citations
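
The optimization setting described above can be sketched schematically: each requirement (visibility, field of view, focus, magnification) is expressed as a margin function of the camera pose, and the planner maximizes the smallest margin. The use of SciPy's Nelder-Mead and the margin-function interface are assumptions for illustration, not the MVP formulation itself.

from scipy.optimize import minimize

def plan_viewpoint(margin_fns, x0):
    # margin_fns: callables g_i(x) that are >= 0 when requirement i is met,
    # with larger values meaning more slack; x is the camera pose/optics vector.
    # Maximize the worst-case margin so all requirements hold simultaneously.
    objective = lambda x: -min(g(x) for g in margin_fns)
    result = minimize(objective, x0, method="Nelder-Mead")
    return result.x, -result.fun        # planned viewpoint, worst-case margin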


Journal ArticleDOI
TL;DR: In this article, a new type of CCD image sensor for two-dimensional synchronous detection ("lock-in imager") is presented. The measurement principle allows each pixel to measure the size of the amplitude modulation, the relative phase, and the mean brightness level (background) of an oscillating optical wave field.
Abstract: A new type of CCD image sensor for two-dimensional synchronous detection ("lock-in imager") is presented. The measurement principle allows each pixel to measure the size of the amplitude modulation, the relative phase and the mean brightness level (background) of an oscillating optical wave field. Design, operation and measurement results are presented. A typical application for this sensor would be in 3-D imaging (range measurement) using heterodyne interferometry.

166 citations
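
Per-pixel synchronous detection recovers the background, modulation amplitude, and relative phase from samples taken at known points of the modulation period. The four-sample (quarter-period) scheme below is a common choice assumed here for illustration; the paper's pixel realizes the demodulation in the charge domain.

import numpy as np

def demodulate(i0, i1, i2, i3):
    # i0..i3: per-pixel samples taken a quarter of the modulation period apart.
    background = (i0 + i1 + i2 + i3) / 4.0
    amplitude = 0.5 * np.sqrt((i0 - i2) ** 2 + (i3 - i1) ** 2)
    phase = np.arctan2(i3 - i1, i0 - i2)    # relative phase in radians
    return amplitude, phase, background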


Patent
04 Oct 1995
TL;DR: In this article, an image sensor array system is addressable to enable readout of randomly selected image data from any one or more individual sensor cells, for any selected image area, or for the entire image area.
Abstract: An image sensor array system is addressable to enable readout of randomly selected image data from any one or more individual sensor cells, for any selected image area (13), or for the entire image area (15). The sensor array (16) is accessed by horizontal and vertical readout circuits (22 and 24) under the control of address signals from an address unit (20). Location signals, from a source (12), indicative of a selected image area (13) may be used by the address unit (20) to provide address signals representative of the location of a specific sensor cell or area (13) of the array including image data of interest. Under the control of the address signals, image data from cells at the intersection of array lines and columns are sampled by sampling devices (26-32) and provided as output signals representative of the selected portion of the image area at an output port (34). Simplified readout can be provided for oblique line components of images. Differing input bus widths enable different levels of cell or line selection.

163 citations


Journal ArticleDOI
01 Oct 1995
TL;DR: A method is reported for navigating a robot by detecting the azimuth of each object in the omnidirectional image acquired in real time (at the frame rate of a TV camera) using a conic mirror.
Abstract: We designed a new omnidirectional image sensor COPIS (Conic Projection Image Sensor) to guide the navigation of a mobile robot. The feature of COPIS is passive sensing of the omnidirectional image of the environment, in real-time (at the frame rate of a TV camera), using a conic mirror. COPIS is a suitable sensor for visual navigation in a real world environment. We report here a method for navigating a robot by detecting the azimuth of each object in the omnidirectional image. The azimuth is matched with the given environmental map. The robot can precisely estimate its own location and motion (the velocity of the robot) because COPIS observes a 360° view around the robot, even when not all edges are extracted correctly from the omnidirectional image. The robot can avoid colliding with unknown obstacles and estimate locations by detecting azimuth changes while moving about in the environment. Under the assumption of known motion of the robot, an environmental map of an indoor scene is generated by monitoring azimuth change in the image.
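
With a conic mirror centred on the robot's vertical axis, the azimuth of an object is simply the angle of its image point around the image centre, which is what makes the map matching possible. A toy sketch, assuming the image centre coincides with the mirror axis; the map-matching and motion-estimation steps of COPIS are not reproduced here.

import numpy as np

def azimuth(px, py, cx, cy):
    # Angle of an image point (px, py) around the image centre (cx, cy);
    # under conic projection this is the object's azimuth around the robot.
    return np.arctan2(py - cy, px - cx)

def azimuth_residuals(observed, expected):
    # Wrapped differences between observed azimuths and those predicted from
    # the environmental map at the current location estimate.
    d = np.asarray(observed) - np.asarray(expected)
    return (d + np.pi) % (2 * np.pi) - np.pi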

Proceedings ArticleDOI
20 Jun 1995
TL;DR: A prototype focus range sensor has been developed that produces up to 512×480 depth estimates at 30 Hz with an accuracy better than 0.3%.
Abstract: Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth-from-defocus offers a direct solution to fast and dense range estimation. It is computationally efficient as it circumvents the correspondence problem faced by stereo and feature tracking in structure-from-motion. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems including the recovery of textureless surfaces, precise blur estimation, and magnification variations caused by defocusing. Both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to ensure maximum accuracy and spatial resolution in the computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing and computational elements of the depth-from-defocus system. Defocus-invariant magnification is achieved by the use of an additional aperture in the imaging optics. A prototype focus range sensor has been developed that produces up to 512×480 depth estimates at 30 Hz with an accuracy better than 0.3%. Several experimental results are included to demonstrate the performance of the sensor.
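
The geometric core of depth from defocus is the thin-lens relation between blur-circle size and object distance. The simplified sketch below assumes a known focal length, aperture, and sensor position and inverts the forward model numerically; the paper's calibrated narrow-band blur operator and projected illumination pattern are not reproduced.

from scipy.optimize import brentq

def blur_diameter(u, f, aperture, v_sensor):
    # Thin-lens forward model: blur-circle diameter on a sensor at distance
    # v_sensor behind the lens, for a point at object distance u (u > f).
    v = u * f / (u - f)                  # distance at which the point focuses
    return aperture * abs(v_sensor - v) / v

def depth_from_blur(b_measured, f, aperture, v_sensor, u_lo, u_hi):
    # Invert the forward model on an interval [u_lo, u_hi] that brackets the
    # answer (assumes the object lies on one known side of the focused plane).
    return brentq(lambda u: blur_diameter(u, f, aperture, v_sensor) - b_measured,
                  u_lo, u_hi)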

Proceedings ArticleDOI
10 Dec 1995
TL;DR: Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors that permit realization of an electronic camera-on-a-chip.
Abstract: Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip.

Patent
02 Mar 1995
TL;DR: In this article, a linear array of sensor elements, with a two-dimensional navigation sensor array at each end, is used to scan an image and then manipulate the image signal from the imaging sensor to reduce distortion artifacts caused by curvilinear scanning.
Abstract: A scanning device and method of forming a scanned electronic image include an imaging sensor and at least one navigation sensor. In the preferred embodiment, the imaging sensor is a linear array of sensor elements, with a two-dimensional navigation sensor array at each end. The scanning device has three degrees of freedom, since position information from the navigation sensors allows manipulation of an image signal from the imaging sensor to reduce distortion artifacts caused by curvilinear scanning. Acceptable sources of the position information include printed matter and contrast variations dictated by variations in the inherent structure-related properties of the medium on which the scanned image is formed. Illumination for optimal operation of the navigation system may be introduced at a grazing angle in some applications or along the normal to the plane of the original in other applications, but this is not essential.

Patent
04 Aug 1995
TL;DR: In this paper, an active pixel image sensor in accordance with the present invention utilizes guard rings, protective diffusions, and/or a combination of these two techniques to prevent electrons generated at the periphery of the active area from impacting upon the image sensor array.
Abstract: An active pixel image sensor in accordance with the present invention utilizes guard rings, protective diffusions, and/or a combination of these two techniques to prevent electrons generated at the periphery of the active area from impacting upon the image sensor array. For example, an n+ guard ring connected to V cc can be imposed in the p-epi layer between the active area edge and the array, making it difficult for edge-generated electrons to penetrate the p+ epi in the array; this approach requires the use of annular MOS devices in the array. Alternatively, the gates of the n-channel devices in the array can be built to overlap heavily doped p+ bands, forcing current flow between the source/drain regions. As stated above, combinations of these two techniques are also contemplated. Elimination of the active area edge leakage component from the array can increase the dynamic range of the image sensor by 6 bits.

Proceedings ArticleDOI
21 May 1995
TL;DR: An image sensor with a hyperboloidal mirror for vision-based navigation of a mobile robot is shown, along with a method for estimating the motion of the robot and finding unknown obstacles.
Abstract: Described here is an image sensor with a hyperboloidal mirror for vision-based navigation of a mobile robot. Its name is HyperOmni Vision. This sensing system can acquire an omnidirectional view around the robot, in real-time, with the use of a hyperboloidal mirror. The authors show a prototype of a mobile robot system with HyperOmni Vision and a method for estimating the motion of the robot and finding unknown obstacles.

Journal ArticleDOI
TL;DR: In this article, a large-aperture x-ray TV-type detector was developed for x-ray diffraction with synchrotron radiation, which consists of a beryllium-windowed x-ray image intensifier, an optical lens, a charge coupled device (CCD) image sensor, and a data acquisition system.
Abstract: A large-aperture (150 mm and 230 mm in diameter) x-ray TV-type detector has been developed for x-ray diffraction with synchrotron radiation. The detector consists of a beryllium-windowed x-ray image intensifier, an optical lens, a charge coupled device (CCD) image sensor, and a data acquisition system. The spatial resolution is 270 μm (FWHM), and the dynamic range is 6000:1. The noise level is quantum limited. The nonuniformity of response and the image distortion are corrected by software. When a TV-rate (NTSC-mode) CCD is used as the image sensor, time-resolved measurements at a rate of 30 frames/s can be achieved while remaining quantum noise limited.
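
The software correction of response nonuniformity mentioned above is, in its most common form, a dark-frame subtraction followed by a flat-field gain division. The sketch below shows that generic procedure; the paper's actual calibration and distortion-correction steps are not detailed in the abstract and are not reproduced here.

import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    # Remove the fixed-pattern offset (dark frame) and the pixel-to-pixel gain
    # variation (flat field recorded under uniform illumination).
    gain = flat.astype(float) - dark
    gain /= gain.mean()
    return (raw.astype(float) - dark) / np.maximum(gain, eps)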

Patent
28 Nov 1995
TL;DR: In this article, an image sensor assembly is mounted in an optical system having a plurality of reference locators, and the externally accessible reference features are used to exactly constrain the image sensor array relative to the locators.
Abstract: An image sensor assembly is mounted in an optical system having a plurality of reference locators. The image sensor assembly includes an image sensing device having photolithographically generated elements, such as image sensing sites, and a carrier package for enclosing the image sensing device. The carrier package has externally accessible reference features that are optically aligned with respect to the photolithographically generated elements on the image sensing device. Moreover, the externally accessible reference features are used to exactly constrain the image sensor assembly relative to the reference locators. Referencing the image sensing device to the same features that are used for exact constraint removes the effect of material variations that may cause dimensional changes and eliminates the need to activate the sensor for alignment of the sensor assembly in the optical system.

Journal ArticleDOI
TL;DR: By analyzing the signal-to-noise ratios and visual aesthetics of the fused images, contrast-sensitivity-based fusion is shown to provide excellent fusion results and to outperform previous fusion methods.
Abstract: A perceptual-based multiresolution image fusion technique is demonstrated using the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) hyperspectral sensor data. The AVIRIS sensor, which simultaneously collects information in 224 spectral bands that range from 0.4 to 2.5 μm in approximately 10-nm increments, produces 224 images, each representing a single spectral band. The fusion algorithm consists of three stages. First, a Daubechies orthogonal wavelet basis set is used to perform a multiresolution decomposition of each spectral image. Next, the coefficients from each image are combined using a perceptual-based weighting. The weighting of each coefficient, from a given spectral band image, is determined by the spatial-frequency response (contrast sensitivity) of the human visual system. The spectral image with the higher saliency value, where saliency is based on a perceptual energy, will receive the larger weight. Finally, the fused coefficients are used for reconstruction to obtain the fused image. The image fusion algorithm is analyzed using test images with known image characteristics and image data from the AVIRIS hyperspectral sensor. By analyzing the signal-to-noise ratios and visual aesthetics of the fused images, contrast-sensitivity-based fusion is shown to provide excellent fusion results and to outperform previous fusion methods.
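
The perceptual weighting stage can be sketched as follows: within one wavelet subband, each spectral image contributes a coefficient, and the fused coefficient is the average weighted by a contrast-sensitivity-scaled energy (saliency). The CSF curve below is a placeholder band-pass shape, not the parameters used in the paper.

import numpy as np

def csf_weight(cycles_per_degree):
    # Placeholder contrast-sensitivity curve (band-pass in spatial frequency).
    f = np.asarray(cycles_per_degree, dtype=float)
    return f * np.exp(-0.25 * f)

def fuse_subband(coeff_stack, cycles_per_degree):
    # coeff_stack: (n_bands, H, W) coefficients of one subband, one slice per
    # spectral image. Saliency = CSF weight times perceptual (squared) energy;
    # the fused coefficient is the saliency-weighted average across bands.
    saliency = csf_weight(cycles_per_degree) * coeff_stack ** 2
    weights = saliency / (saliency.sum(axis=0, keepdims=True) + 1e-12)
    return (weights * coeff_stack).sum(axis=0)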

Patent
21 Nov 1995
TL;DR: In this article, an improved CCD-based x-ray image sensor system enables the use of an uncooled or only slightly cooled CCD array within a standard size xray film cassette.
Abstract: An improved CCD-based x-ray image sensor system enables the use of an uncooled or only slightly cooled CCD array (18a) within a standard size x-ray film cassette (1). The sensor system provides a number of advanced functions such as remote diagnostic capability, variable image resolution, real-time exposure control, automatic x-ray detection, a low-power "sleep" mode, and automatic, closed loop optimization of image quality.

Patent
17 Jan 1995
TL;DR: In this article, a method for capturing and directly scanning a rectilinear imaging element using a non-linear scan is incorporated into a single chip comprising at least a sensor array and an MSD.
Abstract: A method for capturing and directly scanning a rectilinear imaging element using a non-linear scan is incorporated into a single chip comprising at least a sensor array and an MSD. The method directly addresses each picture element of an analog image captured with an imaging device having either a partial spherical field of view or a conventional two-dimensional field of view. An image transform processor is used to process the captured image depending upon the particular portion of interest of the image. In the case of a non-linear scan, the image transform processor is provided with the capability of geometrically filtering the portion of interest of the captured image such that a two-dimensional, undistorted image is displayed at the monitor. A CMOS active pixel image sensor (APS) or charge injection device (CID) camera array is used to capture the image to be scanned. The image transform processor of the present invention is a Mixed-signal Semiconductor Device (MSD). The image transform processor corrects any predetermined distortion introduced by the image sensor array.

Journal ArticleDOI
TL;DR: A variety of designs is presented for polarization camera sensors that have been built to automatically sense partially linearly polarized light and to computationally process this sensed polarization information at pixel resolution, producing a visualization of the reflected polarization from a scene and/or a visualization of physical information in the scene directly related to the sensed polarization.

Patent
24 Oct 1995
TL;DR: In this article, a photosensor is described that has a photoelectric converter for converting incident light into a photoelectric current and a function of removing noise light from imaging light that includes noise light and is reflected from an object to be photographed.
Abstract: There is provided a photosensor having a photoelectric converter for converting incident light into a photoelectric current, and a function of removing noise light from imaging light that includes noise light and is reflected from an object to be photographed. A plurality of photosensors each having this arrangement are used as an image sensor. A single photosensor having this arrangement, or a plurality of such photosensors, is used as a distance sensor. There is also provided a photosensor in which a storage unit stores an electric quantity corresponding to fixed light; an electric quantity corresponding to reflected light is introduced while the stored quantity is reproduced by a reproduction unit, and the difference between them is output as an electric signal. A single photosensor having this arrangement or a plurality of such photosensors are used as a distance sensor. The plurality of photosensors are used as an image or distance-image sensor.

Journal ArticleDOI
TL;DR: In this article, two piezoelastic polarization modulators are used in combination with charge-coupled-device (CCD) image sensors to simultaneously record all four Stokes parameters.
Abstract: A new type of 2-D polarimeter is developed for use in high-resolution observations of solar magnetic fields. Two piezoelastic polarization modulators are used in combination with charge-coupled-device (CCD) image sensors to simultaneously record all four Stokes parameters. Demodulation of the fast 50 and 100 kHz intensity modulations produced by the piezoelastic modulators is achieved by CCD sensors used as synchronous integrators sensitive to a single frequency. The temporary buffer storage needed to separate the charges generated during the two modulation half periods is obtained by covering every second row of the CCD sensors with an opaque mask. The charges are shifted back and forth between the photosensitive uncovered rows and the adjacent storage rows in phase with the modulation. The polarization signal is calculated from the difference between the charges accumulated in two adjacent rows. A separate CCD sensor is needed for each normalized Stokes parameter Q/I, U/I, or V/I. Because the high modulation frequency lies well above the seeing frequencies occurring in solar observations, precision polarimetry becomes possible. We have demonstrated the capability of this new type of instrument to achieve a polarimetric sensitivity below 10⁻³ in a single frame. By frame averaging, the noise level in the fractional polarization can be reduced to the order of 10⁻⁵.
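
After readout, adjacent rows of each CCD carry the charges integrated during the two modulation half-periods, so the fractional polarization reduces to a per-pixel difference over sum. A minimal sketch of that arithmetic, assuming even and odd rows hold the two half-period charges and the frame has an even number of rows.

import numpy as np

def normalized_stokes(frame):
    # Even rows: charge from one modulation half-period; odd rows: the other.
    a = frame[0::2, :].astype(float)
    b = frame[1::2, :].astype(float)
    return (a - b) / (a + b + 1e-12)    # e.g. Q/I for the Q-channel sensor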

Patent
Yuuji Toyomura, Toshihumi Abe
15 Feb 1995
TL;DR: In this paper, an image reading apparatus is described that consists of a scanner portion having a first line image sensor, a carriage system carrying the first line image sensor, and an ADF unit having a transport path for feeding a copy from a stack of copies.
Abstract: An image reading apparatus comprises: a scanner portion having a first line image sensor and a carriage system carrying the first line image sensor; and an ADF unit having a transport path for feeding a copy from a stack of copies, carrying the copy from a first end of a copy glass of the scanner, and discharging the copy at the second end of the scanner. The ADF unit further has an attribute detection portion, including a second line image sensor, for detecting an attribute of the image on the copy, such as whether it is a color or monochrome image, or a multi-value or binary image. According to the attribute, a reading condition such as the reading speed or the reading interval of the first line image sensor is changed. In the absence of a command signal indicative of the attribute, the first line image sensor is positioned at the second end; in the presence of the command signal, it is positioned at the first end. If high resolution is required, a copy fed to the copy glass is scanned by the carriage system with a scanning speed and resolution determined by the request and by the detection result of the attribute detection portion.

Proceedings ArticleDOI
15 Feb 1995
TL;DR: This 256×256 active pixel sensor (APS) is designed for consumer multimedia applications requiring low-cost, high-functionality, compact cameras capable of acquiring high-quality images at video frame rates.
Abstract: This 256×256 active pixel sensor (APS) is designed for consumer multimedia applications requiring low-cost, high-functionality, compact cameras capable of acquiring high-quality images at video frame rates. This sensor allows random access of the image data, permitting a simple implementation of electronic pan and zoom. Use in portable equipment is simplified by standard operating voltages and low power (80 mW @ 5 V, 20 mW @ 3.3 V). Fabrication in a standard CMOS process allows the integration of a variety of new and existing digital circuits with the image sensor. In addition, by making use of the implicit dynamic frame buffer provided by the active pixel structure, the sensor can generate a signal that represents the difference between sequential frames. This may be used for motion detection, image stabilization, and compression purposes.

Journal ArticleDOI
TL;DR: In this paper, a 64×64-pixel image sensor with full-frame analog memory and an on-chip motion processor is presented, which uses the switched-capacitor technique and calculates the difference between the values of the signal on each pixel in successive frames.
Abstract: A 64×64-pixel image sensor with full-frame analog memory and an on-chip motion processor is presented. The processor consists of a charge amplifier and an analog subtractor. It uses the switched-capacitor technique and calculates the difference between the values of the signal on each pixel in successive frames. The frame rate can reach up to 60 frames/s with limited area and power overhead. The analog memory required for the storage of the previous frame is implemented using implanted capacitors placed within the sensor array. Fabricated in a 1.2-μm standard CMOS process with an added metal-3 light-shielding layer, the circuit is fully functional and requires a total core area of 13 mm².
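
The on-chip motion processor forms the per-pixel difference between the current frame and the one held in the analog memory. The same operation in software, shown here only as a point of reference for what the switched-capacitor subtractor computes, is a single array subtraction; the threshold is an arbitrary illustration value.

import numpy as np

def frame_difference(current, previous, threshold=12):
    # Per-pixel difference between successive frames plus a motion mask.
    diff = current.astype(np.int16) - previous.astype(np.int16)
    return diff, np.abs(diff) > threshold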

Patent
28 Oct 1995
TL;DR: In this paper, the authors proposed an imaging sensor that simultaneously detects and demodulates intensity-modulated radiation as a function of position, thus ensuring that the object is accurately recorded for rangefinding purposes.
Abstract: An imaging sensor (13) has a multiplicity of sensor elements (16). Each sensor element (16) has a light-sensitive zone (17) in which radiation is detected as a function of position. A multiplicity of storage cells (21) successively store charges detected in the light-sensitive zone (17) of each sensor element (16) in synchronism with a modulation signal which is produced by the radiation source. The imaging sensor (13) simultaneously detects and demodulates intensity-modulated radiation as a function of position. The invention makes it possible to determine a range of parameters for the object (11) being examined, thus ensuring that the object is accurately recorded for rangefinding purposes.

Patent
09 Feb 1995
TL;DR: An image sensing device comprises an image sensor for photoelectrically converting an image sensing light coming from an object into an image sensing signal and for storing signal charges and providing a readout signal.
Abstract: An image sensing device comprises an image sensor for photoelectrically converting an image sensing light coming from an object into an image sensing signal and for storing signal charges and providing a readout signal. Control circuitry variably sets storage times of the image sensor, and an image composer composes an image signal for one picture from a plurality of readout signals from the image sensor having different charge storage times.
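
Composing one picture from readouts taken with different charge-storage times is, in its simplest software form, an exposure-normalized merge that ignores saturated samples. The rule below is a hedged sketch; the patent claims the general composition, not this particular weighting.

import numpy as np

def compose_exposures(frames, storage_times, saturation=4095):
    # frames: raw images acquired with different storage (integration) times.
    # Normalize each by its storage time and average only unsaturated pixels.
    acc = np.zeros(frames[0].shape, dtype=float)
    count = np.zeros_like(acc)
    for frame, t in zip(frames, storage_times):
        valid = frame < saturation
        acc += np.where(valid, frame / t, 0.0)
        count += valid
    return acc / np.maximum(count, 1)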

Proceedings ArticleDOI
01 May 1995
TL;DR: A biologically motivated CMOS foveated image sensor is described for use in mobile robotic and machine vision applications; it benefits from a high degree of integration, minimal power consumption, and ease of manufacture due to the use of a standard 1.2-μm ASIC CMOS process.
Abstract: We describe the design and implementation of a CMOS foveated image sensor for use in mobile robotic and machine vision applications. The sensor is biologically motivated and performs a spatial image transformation from Cartesian to log-polar coordinates. As opposed to traditional approaches, the sensor benefits from a high degree of integration, minimal power consumption and ease of manufacture due to the use of a standard 1.2-μm ASIC CMOS process. The prototype imager operates at 28 frames/sec when interfaced to a PC.
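
The Cartesian-to-log-polar mapping that the foveated chip realizes in its pixel geometry can be emulated in software by sampling the image at exponentially spaced radii and uniformly spaced angles. A simple nearest-neighbour sketch; the ring/wedge counts and the minimum radius are arbitrary choices, not the sensor's actual layout.

import numpy as np

def log_polar(image, n_rings=64, n_wedges=128, r_min=2.0):
    # Sample a 2-D image on a log-polar grid centred on the fovea: ring index
    # maps to an exponentially spaced radius, wedge index to an angle.
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    rho = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    theta = 2 * np.pi * np.arange(n_wedges) / n_wedges
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]                # shape (n_rings, n_wedges)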