
Showing papers on "Image sensor published in 1998"


Journal ArticleDOI
TL;DR: The edge method provides a convenient measurement of the presampled MTF for digital radiographic systems with good response at low frequencies.
Abstract: The modulation transfer function (MTF) of radiographic systems is frequently evaluated by measuring the system's line spread function (LSF) using narrow slits. The slit method requires precise fabrication and alignment of a slit and high radiation exposure. An alternative method for determining the MTF uses a sharp, attenuating edge device. We have constructed an edge device from a 250-µm-thick lead foil laminated between two thin slabs of acrylic. The device is placed near the detector and aligned with the aid of a laser beam and a holder such that a polished edge is parallel to the x-ray beam. A digital image of the edge is processed to obtain the presampled MTF. The image processing includes automated determination of the edge angle, reprojection, sub-binning, smoothing of the edge spread function (ESF), and spectral estimation. This edge method has been compared to the slit method using measurements on standard and high-resolution imaging plates of a digital storage phosphor (DSP) radiography system. The experimental results for both methods agree with a mean MTF difference of 0.008. The edge method provides a convenient measurement of the presampled MTF for digital radiographic systems with good response at low frequencies.
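The edge-to-MTF pipeline described above (ESF, then its derivative as the LSF, then a Fourier transform) can be sketched in a few lines. This is a simplified illustration, not the authors' exact processing chain: the edge-angle estimation, reprojection, and sub-binning steps are assumed to have already produced an oversampled ESF, and the Gaussian test edge and 0.01 mm sub-bin spacing are invented values.

```python
import numpy as np

def presampled_mtf_from_esf(esf, dx):
    """Estimate the presampled MTF from an oversampled edge spread function.
    esf: 1-D ESF samples (already reprojected, sub-binned, and smoothed).
    dx:  sub-bin spacing in mm.
    Returns (spatial frequencies in cycles/mm, MTF normalized to 1 at f=0)."""
    lsf = np.gradient(esf, dx)           # LSF is the derivative of the ESF
    lsf = lsf * np.hanning(lsf.size)     # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                        # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    return freqs, mtf

# Synthetic check: an edge blurred by a Gaussian of sigma = 0.1 mm, whose
# analytic MTF is exp(-2 * pi**2 * sigma**2 * f**2).
x = np.arange(-5.0, 5.0, 0.01)           # 0.01 mm sub-bins
esf = np.cumsum(np.exp(-x**2 / (2 * 0.1**2)))
esf /= esf[-1]
freqs, mtf = presampled_mtf_from_esf(esf, 0.01)
```

For the Gaussian test edge, the recovered curve tracks the analytic MTF closely (about 0.49 at 1.9 cycles/mm), which is the kind of agreement the edge/slit comparison in the paper reports.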

701 citations


Proceedings ArticleDOI
04 Jan 1998
TL;DR: In this article, the authors derived the complete class of single-lens single-mirror catadioptric sensors which have a single viewpoint, and an expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the camera used to construct it.
Abstract: Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. When designing a catadioptric sensor, the shape of the mirror(s) should ideally be selected to ensure that the complete catadioptric system has a single effective viewpoint. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors which have a single viewpoint and an expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the camera used to construct it. We also include a preliminary analysis of the defocus blur caused by the use of a curved mirror.

415 citations


Patent
15 Jul 1998
TL;DR: In this article, a camera system comprising at least one area image sensor for imaging a scene, a camera processor for processing the image scene in accordance with a predetermined scene transformation requirement, and a printer for printing out the processed image scene on print media, utilizing printing ink stored in a single detachable module inside the camera system.
Abstract: A camera system comprising at least one area image sensor for imaging a scene; a camera processor means for processing the image scene in accordance with a predetermined scene transformation requirement; and a printer for printing out the processed image scene on print media, utilizing printing ink stored in a single detachable module inside the camera system; the camera system comprising a portable hand held unit for the imaging of scenes by the area image sensor and printing the scenes directly out of the camera system via the printer.

286 citations


Proceedings ArticleDOI
04 Jan 1998
TL;DR: It is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images.
Abstract: This paper presents a method for alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) It identifies an appropriate image representation, for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing. (ii) It presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations when the multi-sensor signals are globally characterized by low statistical correlation.
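The local similarity measure at the heart of this scheme, zero-mean normalized correlation, is invariant to local affine brightness changes, which is what lets a globally complex brightness transformation between modalities be handled patch by patch. A minimal sketch (the patch size and values are illustrative, not from the paper):

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized correlation of two equal-size patches; invariant
    to local affine brightness changes b = alpha * a + beta with alpha > 0."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8))
# A globally complex brightness transformation is often locally ~affine:
score_same = normalized_correlation(patch, 3.0 * patch + 7.0)       # ~1.0
score_diff = normalized_correlation(patch, rng.standard_normal((8, 8)))
```

The gain/offset-transformed patch scores a perfect 1.0 while an unrelated patch scores near zero, which is why this measure survives modality-dependent brightness changes where plain SSD does not.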

272 citations


Patent
17 Sep 1998
TL;DR: In this paper, a reduced area imaging device is provided for use in medical or dental instruments such as an endoscope, where the image sensor is placed remote from the remaining circuitry.
Abstract: A reduced area imaging device is provided for use in medical or dental instruments such as an endoscope. In one configuration of the imaging device, the image sensor is placed remote from the remaining circuitry. In another configuration, all of the circuitry to include the image sensor is placed in a stacked fashion at the same location. In a first embodiment of the invention, the entire imaging device can be placed at the distal tip of an endoscope. In a second embodiment, the image sensor is remote from the remaining circuitry according to the first configuration, and wherein a control box can be provided which communicates with the image sensor and is placed remotely from the endoscope. In yet another embodiment, the imaging device can be incorporated in the housing of a standard medical camera which is adapted for use with traditional rod lens endoscopes. In any of the embodiments, the image sensor may be placed alone on a first circuit board, or timing and control circuits may be included on the first circuit board containing the image sensor. One or more video processing boards can be stacked in a longitudinal fashion with respect to the first board, or the video processing boards may be placed in the control box.

245 citations


Proceedings ArticleDOI
04 Jan 1998
TL;DR: This paper proposes mapping the image velocity vectors to a sphere, using the Jacobian of the transformation between the projection model of the camera and spherical projection, and demonstrates the ability to compute ego-motion with omnidirectional cameras.
Abstract: Recent research in image sensors has produced cameras with very large fields of view. An area of computer vision research which will benefit from this technology is the computation of camera motion (ego-motion) from a sequence of images. Traditional cameras suffer from the problem that the direction of translation may lie outside of the field of view, making the computation of camera motion sensitive to noise. In this paper, we present a method for the recovery of ego-motion using omnidirectional cameras. Noting the relationship between spherical projection and wide-angle imaging devices, we propose mapping the image velocity vectors to a sphere, using the Jacobian of the transformation between the projection model of the camera and spherical projection. Once the velocity vectors are mapped to a sphere, we show how existing ego-motion algorithms can be applied and present some experimental results. These results demonstrate the ability to compute ego-motion with omnidirectional cameras.
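The core mapping, carrying an image-plane velocity into the tangent space of the viewing sphere via the Jacobian of the projection, can be sketched for a simple perspective model (the focal length and point values below are arbitrary; the paper treats general wide-angle projection models):

```python
import numpy as np

def to_sphere(p, f):
    """Map image point (x, y) at focal length f to a unit viewing-sphere vector."""
    v = np.array([p[0], p[1], f], dtype=float)
    return v / np.linalg.norm(v)

def velocity_to_sphere(p, u, f):
    """Map an image-plane velocity u = (dx/dt, dy/dt) at point p to the
    velocity of the corresponding point on the unit sphere, using the
    Jacobian of the plane-to-sphere mapping q = v/|v| with v = (x, y, f):
    dq/dt = (I/|v| - v v^T / |v|^3) dv/dt."""
    v = np.array([p[0], p[1], f], dtype=float)
    n = np.linalg.norm(v)
    J = np.eye(3) / n - np.outer(v, v) / n**3
    return J @ np.array([u[0], u[1], 0.0])

p, u, f = (10.0, -5.0), (2.0, 1.0), 100.0
q = to_sphere(p, f)
qdot = velocity_to_sphere(p, u, f)   # tangent to the sphere at q
# Finite-difference check of the Jacobian mapping:
fd = (to_sphere((p[0] + 1e-6 * u[0], p[1] + 1e-6 * u[1]), f) - q) / 1e-6
```

The mapped velocity is orthogonal to the viewing direction by construction (q·q̇ = 0), i.e., it genuinely lives on the sphere, after which sphere-based ego-motion algorithms can be applied unchanged.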

243 citations


Journal ArticleDOI
TL;DR: It is concluded that the linear interpolation method, which takes correlation into consideration, is the most suitable for consumer product applications such as digital still cameras.
Abstract: This paper discusses the interpolation techniques applied to the Bayer primary color pattern, used frequently as the pixel structure of CCD image sensors for digital still cameras. Eight typical types of interpolation methods are discussed from three viewpoints: the characteristics of the interpolated images, the processing time required to realize the methods on a 32-bit MCU for embedded applications, and the quality of the resultant images. In terms of reducing the occurrence of pseudocolor and achieving good color restoration, the linear interpolation method that takes into consideration the correlation of G, determined using the R/B pixels, was found to be excellent. The measured cost of the interpolation methods was approximately 46 machine cycles per pixel; therefore, every method was able to interpolate a VGA-size image in approximately 0.2 seconds with the MCU operating at 60 MHz. In terms of the S/N ratio, good image quality was obtained with the linear interpolation methods, even at the shorter processing times. Based on these results, it is concluded that the linear interpolation method which takes correlation into consideration is the most suitable for consumer product applications such as digital still cameras.
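The idea of the preferred method, choosing the G interpolation direction from the correlation observed in the same-color neighbors, can be sketched as follows. This is a generic correlation-directed bilinear interpolator, not the exact algorithm benchmarked in the paper:

```python
import numpy as np

def interpolate_g(cfa, i, j):
    """Estimate the missing G value at an R or B site (i, j) of a Bayer CFA.
    The direction is chosen from the variation of the same-color (R/B)
    neighbors two pixels away: interpolate G along the direction in which
    the scene varies least (i.e., is most correlated)."""
    dh = abs(cfa[i, j - 2] - cfa[i, j + 2])   # horizontal variation
    dv = abs(cfa[i - 2, j] - cfa[i + 2, j])   # vertical variation
    if dh < dv:   # smoother horizontally: average the left/right G pixels
        return (cfa[i, j - 1] + cfa[i, j + 1]) / 2.0
    if dv < dh:   # smoother vertically: average the up/down G pixels
        return (cfa[i - 1, j] + cfa[i + 1, j]) / 2.0
    return (cfa[i, j - 1] + cfa[i, j + 1] + cfa[i - 1, j] + cfa[i + 1, j]) / 4.0

# A vertical edge through an R site at (2, 2): horizontal variation is large,
# vertical variation is zero, so G is taken from the vertical neighbors.
cfa = np.zeros((5, 5))
cfa[2, 0], cfa[2, 4] = 0.0, 100.0     # R neighbors across the edge
cfa[0, 2], cfa[4, 2] = 50.0, 50.0     # R neighbors along the edge
cfa[1, 2], cfa[3, 2] = 40.0, 60.0     # vertical G neighbors
cfa[2, 1], cfa[2, 3] = 10.0, 90.0     # horizontal G neighbors
g = interpolate_g(cfa, 2, 2)          # averages 40 and 60, not 10 and 90
```

Interpolating along rather than across the edge is exactly what suppresses the pseudocolor fringing that plain bilinear interpolation produces.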

214 citations


Patent
20 Oct 1998
TL;DR: In this paper, a reduced area imaging device is provided for use in medical or dental instruments such as an endoscope, where the image sensor is placed remote from the remaining circuitry.
Abstract: A reduced area imaging device is provided for use in medical or dental instruments such as an endoscope. In one configuration of the imaging device, the image sensor is placed remote from the remaining circuitry. In another configuration, all of the circuitry to include the image sensor is placed in a stacked fashion at the same location. In a first embodiment of the invention, the entire imaging device can be placed at the distal tip of an endoscope. In a second embodiment, the image sensor is remote from the remaining circuitry according to the first configuration, and wherein a control box can be provided which communicates with the image sensor and is placed remotely from the endoscope. In yet another embodiment, the imaging device can be incorporated in the housing of a standard medical camera which is adapted for use with traditional rod lens endoscopes. In any of the embodiments, the image sensor may be placed alone on a first circuit board, or timing and control circuits may be included on the first circuit board containing the image sensor. One or more video processing boards can be stacked in a longitudinal fashion with respect to the first board, or the video processing boards may be placed in the control box.

202 citations


Patent
30 Apr 1998
TL;DR: In this article, a scanning mouse has two optical navigation sensors to allow measurement of both translation and rotation of the scanning mouse, and an image sensor, and the ability to digitize the sensed image data.
Abstract: A scanning mouse has two optical navigation sensors to allow measurement of both translation and rotation of the scanning mouse. It also has an image sensor, and the ability to digitize the sensed image data. There may be a second image sensor perpendicular to the first. A high-speed digital data path connects the scanning mouse to the computer. Software in the computer processes image line data that includes the locations of the navigation sensors and the digitized image data to produce partial scanned images that are further assembled into a complete image within the environment of the computer. Each optical navigation sensor images, as an array of pixels, the spatial features of generally any micro-textured or micro-detailed work surface below the mouse. The photodetector responses are digitized and stored as a frame into memory. Motion produces successive frames of translated patterns of pixel information, which are compared by autocorrelation to ascertain the direction and amount of movement. A hold feature suspends the production of movement signals to the computer.

198 citations


Journal ArticleDOI
01 Aug 1998
TL;DR: A median-based motion correction scheme is proposed which is robust to various irregular conditions such as moving objects and intentional panning and can be realized using only Boolean functions which have significantly reduced computational complexity.
Abstract: In this paper, we present a new digital image stabilization (DIS) scheme based on bit-plane matching (BPM). The proposed DIS system performs motion estimation using 1-bit planes extracted from a video sequence. This motion estimation technique can be realized using only Boolean functions, which significantly reduces computational complexity while maintaining the accuracy of motion estimation. In the second part of this paper, a median-based motion correction scheme is proposed which is robust to various irregular conditions such as moving objects and intentional panning. Simulation results show that the proposed DIS algorithm exhibits better performance than other existing algorithms when applied to real video signals.
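A minimal version of bit-plane matching: extract one bit plane from each frame and find the global shift that minimizes the number of mismatched bits, which reduces block matching to an XOR and a bit count. The bit-plane index and search range below are arbitrary choices, not the paper's:

```python
import numpy as np

def bit_plane(img, k):
    """Bit plane k of an 8-bit image, as a 0/1-valued array."""
    return (img >> k) & 1

def bpm_motion(prev_bp, curr_bp, search=4):
    """Global (dy, dx) estimate minimizing mismatched bits (XOR count)
    between the current bit plane and shifted windows of the previous one;
    only Boolean operations and a bit count are required."""
    h, w = curr_bp.shape
    best_cost, best_mv = None, (0, 0)
    a = curr_bp[search:h - search, search:w - search]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = prev_bp[search + dy:h - search + dy, search + dx:w - search + dx]
            cost = int(np.count_nonzero(a ^ b))
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (32, 32), dtype=np.uint8)
curr = np.roll(prev, (2, -1), axis=(0, 1))   # frame shifted down 2, left 1
mv = bpm_motion(bit_plane(prev, 4), bit_plane(curr, 4))
```

Because every comparison is a 1-bit XOR rather than an 8-bit absolute difference, the search is cheap enough for the embedded stabilizers the paper targets.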

191 citations


Journal ArticleDOI
TL;DR: Limitations need to be overcome before these devices can be used clinically, including the development of larger flat-panel light sensors, the elimination of "noisy" pixels with high dark signal, and improvements in the uniformity of the sensors' sensitivity.
Abstract: We have measured the linearity, spatial resolution (MTF), noise (NPS), and signal-to-noise characteristics (DQE) of an electronic portal imaging device (EPID) based on an amorphous silicon flat-panel array. The array has a 128×128-pixel matrix and each pixel is 0.75×0.75 mm² in dimension, so the array covers an area of 96×96 mm². The array acts like a large-area light sensor and records the optical signals generated in a metal plate/phosphor screen x-ray detector when the detector is irradiated by a megavoltage x-ray beam. In addition, approximately 0.5% of the total signal is generated by nonoptical processes. The noise measurements show that the device is quantum noise limited, with the noise power generated by the x-ray quanta being up to 100 times greater than the noise added by the external readout electronics and the flat-panel light sensor itself. However, the flat-panel light sensor does reduce the spatial resolution (compared to a perfect optical sensor with infinitesimal pixel size) because of its moderate pixel size and because optical spread can occur in the transparent glues used to attach the phosphor screen to the flat-panel light sensor. The response of the sensor is very linear and does not suffer from the glare phenomenon associated with TV camera-based EPIDs, characteristics which suggest that the amorphous silicon EPID will be well suited to transit dosimetry. Nevertheless, some limitations need to be overcome before these devices can be used clinically. These include developing larger flat-panel light sensors, eliminating "noisy" pixels with high dark signal, and improving the uniformity of the sensors' sensitivity. This last requirement is only needed for transit dosimetry applications, where it would greatly simplify calibration of the device. In addition, an image acquisition scheme must be developed to eliminate artifacts created by the pulsed x-ray beam generated by linear accelerators. Despite these limitations, our studies suggest that amorphous silicon EPIDs are very well suited to portal imaging.

Patent
10 Jul 1998
TL;DR: In this paper, an improved opto-electronic imaging system was proposed for use with incoherently illuminated objects, and which produces final images having a reduced imaging error content, in part by including a phase mask for causing the OTF of the optical assembly to be relatively invariant over a range of working distances, and an amplitude mask having a transmittance that decreases continuously as a function of distance from the center thereof.
Abstract: An improved opto-electronic imaging system which is adapted for use with incoherently illuminated objects, and which produces final images having a reduced imaging error content. The imaging system includes an optical assembly for forming an intermediate image of the object to be imaged, an image sensor for receiving the intermediate image and producing an intermediate image signal, and processing means for processing the intermediate image signal to produce a final image signal having a reduced imaging error content. A reduction in imaging error content is achieved, in part, by including in the optical assembly a phase mask for causing the OTF of the optical assembly to be relatively invariant over a range of working distances, and an amplitude mask having a transmittance that decreases continuously as a function of distance from the center thereof. The reduction in imaging error content is also achieved, in part, by including in the processing means an improved generalized recovery function that varies in accordance with at least the non-ideal calculated IOTF of the optical assembly under a condition of approximately optimum focus.

Patent
04 Mar 1998
TL;DR: In this paper, a pan-tilt-zoom camera that can move from a wide-angle view to a direct view of a target or region of interest, using pan, tilt and zoom controls, under either remote or automatic control, is described.
Abstract: Optical systems are disclosed that enable an image sensor to generate an image of either a wide-angle view of an area of interest, or a direct view of an area of interest. In an embodiment, a wide-angle optical system is mounted to reflect radiation reflected from the area of interest along an optical axis. Radiation on the optical axis is directed to a zoom lens of the image sensor. Optionally, a planar mirror redirects the radiation, so that the optical axis is angled with respect to a central axis of the image sensor. In the preferred embodiment, the image sensor is a pan-tilt-zoom camera that can move from the wide-angle view to a direct view of a target or region of interest, using pan, tilt, and zoom controls, under either remote or automatic control. The disclosure also provides a method for maintaining registration of the image sensor when it is moved from a wide-angle view of a target to a direct view of a target.

Patent
17 Mar 1998
TL;DR: An image acquisition system that uses multiple cameras or image sensors in a redundant camera array is described in this article, where the cameras or sensors are arrayed in rows and columns so that each camera overlaps a viewing area of an adjacent camera.
Abstract: An image acquisition system that uses multiple cameras or image sensors in a redundant camera array. The cameras or sensors are arrayed in rows and columns so that a viewing area of each camera overlaps a viewing area of an adjacent camera. At least one camera is positioned in the array so that all edges of its viewing area abut the viewing area of an adjacent camera. The image is displayed or stored in seamless and continuous form in high resolution. The system may also be used in low light conditions for image acquisition. Multiple cameras or sensors may be arrayed on modular panels, each of which mates with an adjacent modular panel. The system is adaptable to image acquisition for X-rays, scanning, photocopying, security systems and the like.

Patent
26 Mar 1998
TL;DR: In this paper, an electronic still imaging system employs an image sensor comprised of discrete light sensitive picture elements overlaid with a color filter array (CFA) pattern to produce color image data corresponding to the CFA pattern.
Abstract: An electronic still imaging system employs an image sensor comprised of discrete light sensitive picture elements overlaid with a color filter array (CFA) pattern to produce color image data corresponding to the CFA pattern, an A/D converter for producing digital CFA image data from the color image data, and a memory for storing the digital CFA image data from the picture elements. A processor enables the processing of the digital CFA image data to produce finished image data, and the digital CFA image data and the finished image data are both stored together in an image file. This enables image processing from raw camera data to final output data to be completed in a single, integrated process to provide improved image quality when printing.

Proceedings ArticleDOI
TL;DR: A model for CMOS FPN is presented as the sum of two components: a column and a pixel component, modeled by a first order isotropic autoregressive random process, and each component is assumed to be uncorrelated with the other.
Abstract: Fixed pattern noise (FPN) for a CCD sensor is modeled as a sample of a spatial white noise process. This model is, however, not adequate for characterizing FPN in CMOS sensors, since the readout circuitry of CMOS sensors and CCDs are very different. The paper presents a model for CMOS FPN as the sum of two components: a column and a pixel component. Each component is modeled by a first order isotropic autoregressive random process, and each component is assumed to be uncorrelated with the other. The parameters of the processes characterize each component of the FPN and the correlations between neighboring pixels and neighboring columns for a batch of sensors. We show how to estimate the model parameters from a set of measurements, and report estimates for 64×64 passive pixel sensor (PPS) and active pixel sensor (APS) test structures implemented in a 0.35 micron CMOS process. High spatial correlations between pixel components were measured for the PPS structures, and between the column components in both PPS and APS. The APS pixel components were uncorrelated.
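The two-component model can be simulated directly: one AR(1) sequence across columns that is shared by every row, plus a per-pixel component. The sketch below uses a 1-D AR(1) along each row for the pixel component, a simplification of the paper's isotropic 2-D process, and the parameter values are invented:

```python
import numpy as np

def ar1(n, rho, sigma, rng):
    """Stationary first-order autoregressive sequence of length n with
    lag-1 correlation rho and marginal standard deviation sigma."""
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    innov = sigma * np.sqrt(1.0 - rho**2)
    for k in range(1, n):
        x[k] = rho * x[k - 1] + innov * rng.standard_normal()
    return x

def simulate_fpn(rows, cols, rho_col, sig_col, rho_pix, sig_pix, seed=0):
    """FPN frame = column component (one AR(1) across columns, added to
    every row) + pixel component (an AR(1) along each row; a 1-D stand-in
    for the paper's isotropic process)."""
    rng = np.random.default_rng(seed)
    col = ar1(cols, rho_col, sig_col, rng)
    pix = np.stack([ar1(cols, rho_pix, sig_pix, rng) for _ in range(rows)])
    return col[None, :] + pix

fpn = simulate_fpn(64, 64, rho_col=0.9, sig_col=1.0, rho_pix=0.3, sig_pix=0.5)
col_means = fpn.mean(axis=0)   # averaging rows isolates the column component
lag1 = float(np.corrcoef(col_means[:-1], col_means[1:])[0, 1])
```

Averaging down each column suppresses the pixel component and exposes the strongly correlated column stripes, which is essentially how the column/pixel parameters can be separated from measurements.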

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The multi-focus camera, a new image sensor used for depth from defocus (DFD) range measurement, is introduced and two different depth measurement methods using the camera are proposed, using a coded aperture with four pinholes and convolution based model matching.
Abstract: In this paper, we first introduce the multi-focus camera, a new image sensor used for depth from defocus (DFD) range measurement. It can capture three images with different focus values simultaneously. We then propose two different depth measurement methods using the camera. The first method, an augmented version of the one proposed by N. Asada et al. (1998), employs a noniterative optimization process to compute depth values at edge points. The second incorporates a coded aperture into the camera and applies model-based pattern matching to estimate depth values of textured surfaces. Here we propose two types of coded apertures and corresponding analysis algorithms: 1D Fourier analysis to acquire a depth map and a blur-free image from three defocused images taken with a pair of pinholes, and 2D convolution-based model matching for fast and precise depth measurement using a coded aperture with four pinholes. Experimental results showed that the multi-focus camera works well as a practical DFD range sensor and that the coded apertures greatly improve its range estimation capability for real-world scenes.
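Depth from defocus rests on the thin-lens relation between object distance and blur-circle size: images taken at different focus settings see the same object with different blur, and the set of blur values encodes depth. A minimal geometric sketch (the focal length, aperture, and distances are arbitrary, not the paper's calibration):

```python
def blur_diameter(F, D, s_focus, s):
    """Blur-circle diameter on the sensor (thin-lens model) for an object at
    distance s when a lens of focal length F and aperture diameter D is
    focused at distance s_focus. All quantities in meters."""
    v_sensor = 1.0 / (1.0 / F - 1.0 / s_focus)  # lens-to-sensor distance
    v_object = 1.0 / (1.0 / F - 1.0 / s)        # image distance for the object
    return D * abs(v_sensor - v_object) / v_object

# Two focus settings of a multi-focus camera observe the same object at 2 m:
b_near = blur_diameter(0.05, 0.02, s_focus=1.0, s=2.0)
b_far = blur_diameter(0.05, 0.02, s_focus=3.0, s=2.0)
```

An in-focus object yields zero blur, and blur grows monotonically with defocus, so comparing the blur measured in the three simultaneously captured images pins down the object distance without iterative search.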

Proceedings ArticleDOI
16 Aug 1998
TL;DR: A method of generating stereo panoramic images using a high-resolution omnidirectional stereo imaging sensor, with Tsai's method used to correct the radial distortion of each camera image.
Abstract: We have developed a high-resolution omnidirectional stereo imaging sensor that can take images at video rate. The sensor system takes an omnidirectional view with a component constructed of six cameras and a hexagonal pyramidal mirror, and acquires stereo views by symmetrically connecting two sensor components. The paper describes a method of generating stereo panoramic images using our sensor. First, the sensor system is calibrated; that is, the twelve cameras are correctly aligned with the pyramidal mirrors and Tsai's method corrects the radial distortion of each camera image. Stereo panoramic images are then computed by registering the camera images captured at the same time.

Patent
Walter J. Mack1
16 Jul 1998
TL;DR: A focal plane processor, located on the focal plane of an imaging array, allows on-chip imaging and scaling as mentioned in this paper, which can result in an imager with advanced functionality.
Abstract: A focal plane processor, located on the focal plane of an imaging array, allows on-chip imaging and scaling. Computational functions normally achieved by a separate computer may be achieved through the imaging chip itself. This can result in an imager with advanced functionality. Also, additional processing bandwidth provided by the focal plane processor may assist a computer which may receive different image segments from different pixel arrays each having associated focal plane processors.

Patent
10 Jul 1998
TL;DR: In this article, a camera system consisting of an image sensor and processing device for sensing and processing an image, a print head for printing the sensed image on print media stored internally to the camera system, and a series of motor drive units each including motor drive transistors for the driving of external mechanical system of camera system.
Abstract: A camera system is disclosed comprising an image sensor and processing device for sensing and processing an image; a print media supply means provided for the storage of print media; a print head for printing the sensed image on print media stored internally to the camera system; the image sensor and processing device comprising a single integrated circuit chip including the following interconnected components: a processing unit for controlling the operation of the camera system; a program ROM utilized by the processing unit; a CMOS active pixel image sensor for sensing the image; a memory store for storing images and associated program data; a series of motor drive units each including motor drive transistors for driving the external mechanical systems of the camera system; and a print head interface unit for driving the print head for printing of the sensed image. Preferably, the motor drive transistors are located along one peripheral edge of the integrated circuit and the CMOS pixel image sensor is located along an opposite edge of the integrated circuit.

Patent
02 Sep 1998
TL;DR: A high frequency component detector detects the high frequency component of a target pixel using a Laplacian filter whose inputs are the target pixel and each of the pixels immediately adjacent to it.
Abstract: A noise elimination apparatus and method enable effective elimination of noise on each line in an image captured by a CCD provided with a Bayer-type color filter. A gradation device obtains the amount of gradation for a target pixel from the difference between the value of the target pixel and the mean value of the target pixel and the pixels around it. A high frequency component detector detects the high frequency component of the target pixel using a Laplacian filter whose inputs are the target pixel and each of the pixels immediately adjacent to it. Because the filter draws on all immediately adjacent pixels, the high frequency component can be detected without being influenced by noise on each line. Noise on each line can then be effectively eliminated by adding to the value of the target pixel an optimum amount of gradation based upon the absolute value of the high frequency component detected by the high frequency component detector.
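A sketch of the detector: a five-point Laplacian over the target pixel and its four immediate neighbors responds strongly to real edges and weakly in flat regions, so its magnitude can gate how much gradation (smoothing) is applied. This is a generic illustration, not the patented circuit:

```python
import numpy as np

def laplacian_response(img, i, j):
    """Five-point Laplacian at (i, j): the target pixel weighed against its
    four immediately adjacent pixels. A large |response| suggests a genuine
    high-frequency feature (an edge) rather than per-line noise."""
    return (4.0 * img[i, j]
            - img[i - 1, j] - img[i + 1, j]
            - img[i, j - 1] - img[i, j + 1])

flat = np.full((5, 5), 7.0)
edge = np.zeros((5, 5))
edge[:, 3:] = 10.0                    # vertical step edge right of column 2
r_flat = laplacian_response(flat, 2, 2)
r_edge = laplacian_response(edge, 2, 2)
```

A correction stage would then add more gradation where |response| is small (likely noise) and less where it is large (a real edge to preserve).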

Patent
Kia Silverbrook1, Paul Lapstun1
10 Jul 1998
TL;DR: In this article, a camera system consisting of an image sensor, a velocity detection means such as a MEMS accelerometer for determining any motion of the image relative to an external environment, and a processor is designed to process the sensed image so as to deblurr the image and to output the deblurred image to a printer.
Abstract: A camera system is disclosed having the ability to overcome the effects of motion blur. The camera system includes an image sensor; a velocity detection means such as a MEMS accelerometer for determining any motion of the image relative to an external environment; a processor means interconnected to the image sensor and the velocity detection means and adapted to process the sensed image so as to deblurr the image and to output the deblurred image to a printer means.

Patent
20 Mar 1998
TL;DR: In this paper, the authors proposed a sensing module that uses a number of readout passages in parallel to produce several segmented outputs from the image sensor and subsequently combine the outputs to produce an interleaved scanning signal under a sequence of control signals derived from a sensor clock signal.
Abstract: The present invention has been made in consideration of accommodating a higher sensor clock signal to increase the pixel readout rate from a regular image sensor and has particular applications to generating high-resolution and high-speed images from scanning objects. The sensing module in the present invention uses a number of readout passages in parallel to produce several segmented outputs from the image sensor and subsequently combine the outputs to produce an interleaved scanning signal under a sequence of control signals derived from a sensor clock signal.
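One way to picture the parallel readout: if each of n passages reads every n-th pixel of a line (an assumption for illustration; the patent covers segmented outputs generally), the controller recombines them by taking one sample from each passage per sensor clock:

```python
def split_readout(line, n):
    """Model an n-way parallel readout in which passage k sees every n-th
    pixel starting at offset k (one plausible segmentation scheme)."""
    return [line[k::n] for k in range(n)]

def recombine(segments):
    """Interleave the parallel outputs back into a single scan-order stream:
    one sample from each passage per sensor clock."""
    out = []
    for samples in zip(*segments):
        out.extend(samples)
    return out

line = list(range(12))
segments = split_readout(line, 3)   # three readout passages in parallel
stream = recombine(segments)        # interleaved scanning signal
```

Each passage runs at one third of the pixel rate while the recombined stream preserves scan order, which is how a modest sensor clock can support a high effective readout rate.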

Patent
01 Oct 1998
TL;DR: In this paper, a planar microtitre plate for biological objects is used to reduce the size of the test objects to be measured in such a way that all the objects are imaged completely on a two-dimensional, photosensitive image sensor.
Abstract: In the measurement system for detecting optical signals of microassays, the signal-generating test objects 5 are arranged on an investigation surface of a planar carrier 4. The planar carrier 4 is, in particular, a microtitre plate for biological objects. In principle, the measurement system comprises an optical imaging arrangement which reduces the size of the test objects 5 to be measured in such a way that all the objects are imaged completely on a two-dimensional, photosensitive image sensor 6. For imaging, a high-resolution glass-fibre taper element 1 having a large-area end 2 and a small-area end 3 is used in this case, whose end surfaces 2, 3 are selected such that the large-area end surface 2 corresponds at least to the investigation surface of the carrier 4 and the small-area end surface 3 corresponds to the size of the image sensor 6, the ratio of the end surfaces 2, 3 producing the scale of reduction of the optical imaging arrangement in order to image the investigation surface of the carrier 4 completely onto the image sensor 6.

Patent
10 Jul 1998
TL;DR: In this article, a digital camera system is disclosed including an image sensor for sensing an image; storage means for storing the sensed image and associated system structures; data input means for the insertion of an image modification data module for modification of the sensed images; processor means interconnected to the image sensor, the storage means and the data input mean for the control of the camera system in addition to the manipulation of sensed image.
Abstract: A digital camera system is disclosed including an image sensor for sensing an image; storage means for storing the sensed image and associated system structures; data input means for the insertion of an image modification data module for modification of the sensed image; processor means interconnected to the image sensor, the storage means and the data input means for the control of the camera system in addition to the manipulation of the sensed image; printer means for printing out the sensed image on demand on print media supplied to the printer means; including providing an image modification data module adapted to cause the processor means to perform a series of diagnostic tests on the digital camera system and to print out the results via the printer means. Preferably, the image modification module can comprise a card having instruction data encoded on one surface thereof, and the processor means includes means for interpreting the instruction data encoded on the card. The diagnostic tests can include a cleaning cycle for the printer means so as to improve the operation of the printer means, such as by printing a continuous all-black strip. Alternatively, the diagnostic tests can include modulating the operation of the nozzles so as to improve the operation of the ink jet printer. Additionally, various internal operational parameters of the camera system can be printed out. Where the camera system is equipped with a gravitational shock sensor, the diagnostic tests can include printing out an extreme value of the sensor.

Journal ArticleDOI
TL;DR: Originally designed for NASA, CMOS-technology digital camera systems on a chip hold great potential for commercial as well as military applications.
Abstract: Originally designed for NASA, CMOS-technology digital camera systems on a chip hold great commercial application potential.

Journal ArticleDOI
TL;DR: This paper uses multivariate image analysis (MIA) methods based on multiway principal component analysis to decompose the highly correlated data present in multispectral images.
Abstract: Information from on-line imaging sensors has great potential for the monitoring and control of spatially distributed systems. The major difficulty lies in the efficient extraction of information from the images in real-time, information such as the frequencies of occurrence of specific features and their locations in the process or product space. This paper uses multivariate image analysis (MIA) methods based on multiway principal component analysis to decompose the highly correlated data present in multispectral images. The frequencies of occurrence of certain features in the image, regardless of their spatial locations, can be easily monitored in the space of the principal components (PC). The spatial locations of these features in the original image space can then be obtained by transposing highlighted pixels from the PC space into the original image space. In this manner it is possible to easily detect and locate (even very subtle) features from real-time imaging sensors for the purpose of performing ...
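The decomposition described in the abstract can be sketched in plain NumPy: the three-way image array (rows x columns x spectral bands) is unfolded into a pixels-by-bands matrix, principal components are computed via SVD, and pixels highlighted in PC score space are transposed back to image coordinates to locate the feature. The synthetic image, the embedded feature, and the score threshold below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Synthetic 64x64 multispectral image with 5 bands and low sensor noise.
rng = np.random.default_rng(0)
img = 0.1 * rng.normal(size=(64, 64, 5))
# Embed a feature with a distinct spectral signature in a 10x10 patch.
img[20:30, 20:30, :] += np.array([3.0, 0.0, 1.0, 0.0, 2.0])

# Multiway PCA: unfold the 3-way array into a (pixels x bands) matrix
# and mean-center each band.
X = img.reshape(-1, img.shape[-1])
Xc = X - X.mean(axis=0)

# Principal components of the unfolded data via SVD.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project every pixel into the PC1-PC2 score plane

# Highlight pixels far from the origin of score space (PC sign is
# arbitrary, so threshold the magnitude), then transpose the selection
# back into the original image space to locate the feature spatially.
mask = (np.abs(scores[:, 0]) > 2.0).reshape(img.shape[:2])
rows, cols = np.nonzero(mask)
print(mask.sum(), rows.min(), rows.max(), cols.min(), cols.max())
```

Monitoring the frequency of a feature is then just counting pixels inside a score-space region, independent of where the feature sits in the image, which is the property the paper exploits for real-time use.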

Journal ArticleDOI
05 Feb 1998
TL;DR: A single-chip digital color camera in a standard CMOS process is described that offers system designers a fully digital interface, reduced part count, and low power dissipation.
Abstract: A digital color camera has been monolithically realized in a standard 0.8-/spl mu/m CMOS technology. The chip integrates a 353/spl times/292 photogate sensor array with a unity-gain column circuit, a hierarchical column multiplexer, a switched-capacitor programmable-gain amplifier, and an 8-b flash analog/digital converter together with digital circuits performing color interpolation, color correction, computation of image statistics, and control functions. The 105-mm/sup 2/ chip produces 24-b RGB video at 30 frames/s. The sensor array achieves a conversion gain of 40 /spl mu/V/electron and a monochrome sensitivity of 7 V/lux/spl middot/s. For a 33-ms exposure time, the camera chip achieves a dynamic range of 65 dB and peak-to-peak fixed pattern noise that is 0.3% of saturation. Digital switching noise coupling into the analog circuits is shown to be data independent and therefore has no effect on image quality. Total power dissipation is less than 200 mW from a 3.3 V supply.
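As a back-of-envelope check on the figures quoted above, the 65 dB dynamic range and the 40 uV/electron conversion gain can be combined to estimate the sensor's full-well capacity and input-referred noise floor. The 1 V saturation swing assumed below is illustrative only; it is not stated in the abstract.

```python
# Figures from the abstract.
conversion_gain = 40e-6   # V per electron
dynamic_range_db = 65.0   # at 33 ms exposure

# Assumed, not from the abstract: saturation output swing of the pixel.
v_sat = 1.0               # volts

full_well = v_sat / conversion_gain        # electrons at saturation (~25000)
ratio = 10 ** (dynamic_range_db / 20)      # linear ratio implied by 65 dB
noise_floor = full_well / ratio            # input-referred noise, electrons

print(round(full_well), round(ratio), round(noise_floor))
```

Under this assumption the chip would hold roughly 25,000 electrons at full well with a noise floor on the order of 14 electrons, which is a plausible regime for a 0.8-um CMOS photogate pixel of that era.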

Patent
10 Jul 1998
TL;DR: A camera has at least one area image sensor for imaging a scene; a processor for processing said imaged scene in accordance with a predetermined requirement; a color printer for printing out the processed imaged scene on print media via said color printer; and a detachable module for storing print media and printing inks for said color printer.
Abstract: A camera has at least one area image sensor for imaging a scene; a processor for processing said imaged scene in accordance with a predetermined requirement; a color printer for printing out the processed imaged scene on print media via said color printer; and a detachable module for storing print media and printing inks for said color printer.

Patent
10 Jul 1998
TL;DR: In this article, a camera system comprising an image sensor device for sensing and storing an image; processing means for processing the sensed image; a print media supply means provided for the storage of print media; and a print head for printing the sensed image on print media stored internally to the camera system; a first button and a second button each interconnected to the processing means.
Abstract: In a camera system comprising an image sensor device for sensing and storing an image; a processing means for processing the sensed image; a print media supply means provided for the storage of print media; a print head for printing the sensed image on print media stored internally to the camera system; a first button and a second button each interconnected to the processing means; a method is disclosed of operation of the camera system comprising utilizing the first button to activate the image sensor device to sense an image; and utilizing the second button to activate the print head to print out a copy of the image on the print media. Preferably, the utilization of the first button also results in the printing out of the sensed image on the print media using the print head.