
Showing papers on "Image sensor published in 1988"


Journal ArticleDOI
TL;DR: The decision threshold can be theoretically determined for a given probability of false alarm as a function of the number of looks of the image under study and the size of the processing neighborhood.
Abstract: A constant-false-alarm-rate (CFAR) edge detector based on the ratio between pixel values is described. The probability distribution of the image obtained by applying the edge detector is derived. Hence, the decision threshold can be theoretically determined for a given probability of false alarm as a function of the number of looks of the image under study and the size of the processing neighborhood. For a better and finer detection, the edge detector operates along the four usual directions over windows of increasing sizes. A test performed, for a given direction, on a radar image of an agricultural scene shows good agreement with the theoretical study. The operator is compared with the CFAR edge detectors suitable for radar images.
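The ratio test above is simple enough to sketch directly. Below is a minimal, illustrative implementation of the ratio-of-means edge strength for a single (horizontal) direction and a fixed window size; the real detector applies it along four directions over windows of increasing size, and the threshold would come from the theoretical false-alarm analysis rather than the ad-hoc value used here. The gamma-distributed test image, window size, and threshold are assumptions for the example only.

```python
# Minimal sketch of a ratio-based edge detector for speckled (radar-like) imagery.
# One direction and one window size only; values are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_edge(img, half=4):
    """Edge strength from the ratio of mean values in the two half-windows to the
    left and right of each pixel; values near 1 mean 'no edge'."""
    img = img.astype(float)
    box = uniform_filter(img, size=(2 * half + 1, half))   # local means over a half-window
    left = np.roll(box, half // 2 + 1, axis=1)             # mean of the window left of the pixel
    right = np.roll(box, -(half // 2 + 1), axis=1)         # mean of the window right of the pixel
    return np.minimum(left / right, right / left)          # ratio in (0, 1]; small -> strong edge

# Threshold chosen ad hoc here; the paper derives it from the desired false-alarm
# probability, the number of looks, and the window size.
speckled = np.random.gamma(4.0, 25.0, (128, 128))
edges = ratio_edge(speckled) < 0.5
print(edges.sum(), "pixels flagged")
```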

674 citations


Patent
22 Dec 1988
TL;DR: In this article, a first optical system, a projection device, a second optical system and an image sensor are used to obtain three-dimensional information about an object, and the distances from the detected positions of the optical images to a plurality of positions on the object are measured.
Abstract: Apparatus and method for obtaining three-dimensional information about an object, includes a first optical system, a projection device, a second optical system, and an image sensor. A plurality of pattern beams are radiated onto the object through the first optical system. Optical images formed by the pattern beams on the object are received by the image sensor through the second optical system to detect the positions of the received optical images. The distances from the detected positions of the optical images to a plurality of positions on the object are measured, thereby obtaining three-dimensional information about the object.
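The distance measurement in the last step reduces to classical triangulation between the projector and the camera. A minimal sketch of that step follows, assuming parallel optical axes, a known baseline, and a known projection angle for one pattern beam; none of these values come from the patent.

```python
# Minimal sketch of triangulating one projected pattern spot from its detected
# image position. Baseline, focal length, pixel pitch, and projection angle are
# illustrative assumptions.
import math

def spot_depth(u_pixel, pixel_pitch_mm, focal_mm, baseline_mm, proj_angle_rad):
    """Depth (mm) of a spot projected at proj_angle_rad from the projector axis and
    imaged at horizontal offset u_pixel from the camera's optical axis (parallel axes)."""
    cam_angle = math.atan2(u_pixel * pixel_pitch_mm, focal_mm)       # camera ray angle
    # Two-ray triangulation: baseline = depth * (tan(proj_angle) + tan(cam_angle)).
    return baseline_mm / (math.tan(proj_angle_rad) + math.tan(cam_angle))

print(spot_depth(u_pixel=120, pixel_pitch_mm=0.011, focal_mm=16.0,
                 baseline_mm=100.0, proj_angle_rad=math.radians(20)))
```

Repeating this for every detected pattern beam yields the plurality of object distances, i.e. the three-dimensional information.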

85 citations


Patent
30 Jun 1988
TL;DR: In this paper, an intelligent scan image sensor is described that includes a two-dimensional solid-state array of addressable imaging cells arranged for exposure to an image, where each cell includes a photosensitive diode and a sample and hold unit.
Abstract: An intelligent scan image sensor including a two-dimensional solid-state array of addressable imaging cells arranged for exposure to an image, where each cell includes a photosensitive diode and a sample and hold unit. The diode accumulates an electrical quantity having a value in relation to the image light intensity falling thereupon during successive integration periods and the sample and hold unit repeatedly samples and stores the accumulated quantities as analog video data values at the end of each of the successive integration periods.

81 citations


Patent
10 Jun 1988
TL;DR: In this paper, a solid-state imaging device is provided which includes an optical low-pass filter, and a solid state imaging element chip for receiving optical signals through the low pass filter.
Abstract: A solid-state imaging device is provided which includes an optical low-pass filter, and a solid-state imaging element chip for receiving optical signals through the low-pass filter. In addition, a shielding member having an optical transmissivity and an electric conductivity is interposed between the low-pass filter and the solid-state imaging element chip to improve image quality.

75 citations


Patent
19 Apr 1988
TL;DR: In this paper, a sensor for creating images containing range depth information is disclosed, which comprises a gated energy transmitter used to "illuminate" the object to be imaged, a gated integrating imaging energy receiver to produce "raw" images, a processor system to combine the raw images from the receiver into an output image, and a timing system to control the gate timing of the transmitter and of the receiver.
Abstract: A sensor for creating images containing range depth information is disclosed. The sensor comprises a gated energy transmitter used to "illuminate" the object to be imaged, a gated integrating imaging energy receiver to produce "raw" images, a processor system to combine the raw images from the receiver to produce an output image, and a timing system to control the gate timing of the transmitter and gate timing of the receiver. In operation, two raw images are created with differing time relationships between the transmitter gating, receiver gating and integrated energy readout. One raw image is a reference image containing no range information while the second image contains range information along with unwanted information such as reflectivity variations. These raw images are processed to produce a final output image in which the unwanted information is mainly cancelled, producing an output "range" image. This output image is a one- or two-dimensional array of data. The position of data in the array is proportional to the angular displacement from the sensor's boresight. The data at each position in the array is proportional to the range from the sensor (or a reference distance from the sensor) to the point on the object being imaged as determined from the data's position in the array.
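How the two raw images cancel the unwanted reflectivity can be made concrete with a small numerical sketch. The linear ramp relating the gated signal fraction to range is an assumed gate-timing model, not the patent's specific timing scheme.

```python
# Minimal sketch of combining a range-dependent gated image with a reference image
# so that reflectivity cancels. The linear gate-ramp model is an assumption.
import numpy as np

def range_image(gated, reference, gate_start_m, gate_len_m, eps=1e-6):
    """gated: raw image integrated during a partial (ramped) gate, range-dependent.
       reference: raw image integrated over the full return, range-independent."""
    frac = np.clip(gated / (reference + eps), 0.0, 1.0)   # reflectivity cancels in the ratio
    return gate_start_m + frac * gate_len_m               # invert the assumed linear ramp

rng = np.random.default_rng(0)
reflectivity = rng.uniform(0.2, 1.0, (64, 64))
true_range = rng.uniform(10.0, 20.0, (64, 64))
reference = reflectivity                                   # full-gate raw image
gated = reflectivity * (true_range - 10.0) / 10.0          # partial-gate raw image
print(np.allclose(range_image(gated, reference, 10.0, 10.0), true_range, atol=1e-3))
```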

74 citations


Patent
26 May 1988
TL;DR: An improved helmet line of sight measuring system was proposed in this article, where a plurality of assemblies of light sources were distributed on the helmet, each comprising three light sources positioned at the vertices of a triangle and another light source outside the plane of the triangle.
Abstract: An improved helmet line of sight measuring system for determining the spatial location of a helmet and the line of sight of an observer wearing the helmet, both relative to a coordinate reference frame. A plurality of assemblies of light sources are distributed on the helmet each comprising three light sources positioned at the vertices of a triangle and a fourth light source outside the plane of the triangle. Optical means fixed in space relative to the coordinate reference frame image the light emitted by the light sources in at least one of the assemblies onto an area image sensor, thereby producing two-dimensional image data of the light sources on the plane of the image sensor. Computing means coupled to the area image sensor is thereby able to determine the spatial coordinates of the helmet from the image data.
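Recovering the helmet's spatial coordinates from the imaged light sources is, in modern terms, a perspective-n-point (PnP) problem: three coplanar points plus one out-of-plane point form the minimal non-degenerate configuration. The sketch below illustrates that computation with OpenCV; the LED layout, camera intrinsics, and pose are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: helmet pose from four non-coplanar light sources imaged on an
# area sensor, posed as a PnP problem. All numeric values are assumptions.
import numpy as np
import cv2

# Three LEDs at the vertices of a triangle plus one out of that plane,
# in the helmet's own coordinate frame (mm).
led_points = np.array([[0, 0, 0], [60, 0, 0], [30, 50, 0], [30, 20, 40]], dtype=np.float64)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)

# Synthesize detections from a known helmet pose, then recover that pose.
true_rvec = np.array([0.10, -0.20, 0.05])
true_tvec = np.array([20.0, -10.0, 600.0])
image_points, _ = cv2.projectPoints(led_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

ok, rvec, tvec = cv2.solvePnP(led_points, image_points, camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)   # EPnP accepts four points
print(ok, tvec.ravel())   # should be close to true_tvec (helmet position in camera frame)
```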

71 citations


Patent
26 Feb 1988
TL;DR: An apparatus for monitoring a bloodstream in a skin surface including a laser light source for emitting a laser beam, a cylindrical lens for expanding the laser beam and an objective lens for collecting light rays reflected by the skin surface and scattered by blood cells is described in this article.
Abstract: An apparatus for monitoring a bloodstream in a skin surface including a laser light source for emitting a laser beam, a cylindrical lens for expanding the laser beam, an objective lens for collecting light rays reflected by the skin surface and scattered by blood cells, a linear image sensor for receiving the reflected light rays via the objective lens, an A/D converter for converting output signals read out of light receiving elements of the linear image sensor into digital signals, a memory for storing the digital signals, a calculating circuit for calculating a bloodstream velocity or a distribution of bloodstream, and a display device for displaying the bloodstream velocity or the distribution of bloodstream.

67 citations


Journal ArticleDOI
TL;DR: In phantom studies implemented on a digital fluoroscopy system, scatter-corrected selective material cancellations in human phantoms show improved contrast and field uniformity; these results facilitate the implementation of efficient large-area detectors for dual-energy imaging.
Abstract: In addition to the familiar problems of reduced contrast and signal-to-noise ratio (SNR) in the single energy case, dual-energy subtractions in the presence of scattered radiation suffer further degradations from: (1) artifacts due to nonuniform subtraction of scatter, and (2) a serious deterioration of the signal of interest. To determine the expected performance of scatter correcting schemes, we simulated energy subtractions performed in the presence of scatter. We discuss scatter's detrimental effects on contrast and SNR in these simulations and the expected improvements from scatter corrections to within 5% to 10%. We introduce two sampling schemes for the correction of scatter. Each scheme requires two measurements, and each involves placing an x-ray opaque sampling grid between the source and the object. In the first method, the grid is an array of lead disks present only during one measurement. Using these samples we generate an estimate of the scatter field and then subtract it from the second measurement yielding a scatter corrected image. In the second method, the grid is an array of lead strips present during both measurements but displaced between measurements by one-half of a strip spacing to completely sample the image. From the two measurements we generate an image to be corrected, an estimate of the scatter field, and a scatter corrected image. In phantom studies implemented on a digital fluoroscopy system, we observed for single energy images of blood vessel phantoms improved contrast and field uniformity. For scatter corrected selective material cancellations in human phantoms we observed improved contrast and significant reduction in artifacts. In both cases we observed no significant loss in SNR. These results facilitate the implementation of efficient large area detectors for dual-energy imaging.
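The first sampling scheme can be illustrated with a short simulation: the signal measured behind each lead disk is (ideally) pure scatter, so those sparse samples are interpolated over the whole field and subtracted from the grid-free exposure. The disk spacing and the smooth synthetic scatter field below are assumptions for the sketch, not the authors' experimental values.

```python
# Minimal sketch of scatter correction with a lead-disk sampling grid.
import numpy as np
from scipy.interpolate import griddata

h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
primary = np.where((xx - 64) ** 2 + (yy - 64) ** 2 < 900, 50.0, 200.0)         # object signal
scatter = 80.0 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 60.0 ** 2))  # slowly varying

measurement = primary + scatter                        # exposure without the disk grid
disk_rows, disk_cols = np.mgrid[8:h:16, 8:w:16]        # disk positions in the gridded exposure
samples = scatter[disk_rows, disk_cols]                # behind a disk the detector sees scatter only

scatter_estimate = griddata(np.column_stack([disk_rows.ravel(), disk_cols.ravel()]),
                            samples.ravel(), (yy, xx), method='cubic',
                            fill_value=float(samples.mean()))
corrected = measurement - scatter_estimate
print(float(np.abs(corrected - primary).mean()))       # mean residual scatter error
```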

67 citations


Journal ArticleDOI
TL;DR: A new method of digitally imaging vocal fold vibration using a solid-state image sensor attached to a conventional camera system is reported, which has proved useful for clinical examination of pathological vocal fold vibrations.
Abstract: A new method of digitally imaging vocal fold vibration using a solid-state image sensor attached to a conventional camera system is reported. The obtained image signals are stored in an image memory in combination with a personal computer through a high-speed A/D converter. The maximum frame rate for analysis is 4000 fps. After storage of data, images can be reproduced and displayed on a monitor screen as a form of slow-motion display. The system has proved useful for clinical examination of pathological vocal fold vibration.

64 citations


Patent
11 Oct 1988
TL;DR: In this paper, an optomechanical system for illuminating and imaging selected portions of a three dimensional object having specularly reflective surfaces is presented, including a two dimensional image sensor for receiving light reflected from the selected portions and producing an analog output signal representing a two-dimensional image of the selected parts over a predetermined time period.
Abstract: An optomechanical system for illuminating and imaging selected portions of a three dimensional object having specularly reflective surfaces, including a two dimensional image sensor for receiving light reflected from the selected portions and producing an analog output signal representing a two dimensional image of the selected portions over a predetermined time period, a video digitizer for receiving and converting the analog output signal to a digital signal, and an image computer for receiving the digital signal output by the video digitizer, comparing the information obtained from the output digital signal to the stored specifications of a master of the imaged portion of the object, indicating whether the imaged object meets the specifications of the master, controlling the movement of the imaged object, and controlling the operation of the system.

59 citations


Proceedings ArticleDOI
24 Apr 1988
TL;DR: A tactile sensor has been developed which provides a 256-element tactile image to a host computer which contains all required circuitry to scan the sensor array and provide eight-bit digitized data as well as synchronization signals.
Abstract: A tactile sensor has been developed which provides a 256-element tactile image to a host computer. The sensor contains all required circuitry to scan the sensor array and provide eight-bit digitized data as well as synchronization signals. The sensing area is 0.5 in.*0.5 in., providing spatial resolution of 0.031 in. The overall package measures 0.8 in.*0.8 in.*0.25 in., making it small enough to be located on a fingertip. The circuitry is located inside the package on a hybrid microcircuit which measures 0.5 in.*0.5 in. The sensing elements are force-sensing resistors with a wide usable force range.

Journal ArticleDOI
TL;DR: For geometries commonly found in radiotherapy, the loss in spatial resolution due to the x‐ray source was at least equal to that caused by electron and photon scatter within the metal plate/film detectors.
Abstract: We have developed a novel method, which employs large lead collimators and computed tomography reconstruction techniques, to measure the intensity distributions of x-ray sources of radiotherapy devices. Using this method, we have measured the intensity distributions of x-ray sources from 60Co, 6-, 18-, and 25-MV radiotherapy devices. The x-ray sources of the accelerators were all elliptical in shape, but varied in eccentricity, and the sizes of the accelerator sources varied from 0.7 to 3.3 mm full width at half-maximum. The 60Co source was circular in shape and 20 mm in diameter; however, the output from this source was not uniform across its face. The modulation transfer functions (MTF's) (at the image plane) calculated for the accelerator sources, assuming an image magnification of 1.2, had magnitudes at low spatial frequencies similar to those of the MTF's of the metal plate/film detectors commonly used for therapy imaging. However, the source MTF's declined much more rapidly at high spatial frequencies. Therefore, for geometries commonly found in radiotherapy, the loss in spatial resolution due to the x-ray source was at least equal to that caused by electron and photon scatter within the metal plate/film detectors.

Journal ArticleDOI
TL;DR: In this paper, the scientific and technical aspects of high-resolution γ-ray and X-ray imaging of solar flares are discussed, and several planned future high-energy imagers are described with a description of the options for detectors and grid fabrication.
Abstract: We discuss the scientific and technical aspects of high-resolution γ-ray and X-ray imaging of solar flares. The scientific necessity for imaging observations of solar flares and the implications of future observations for the study of solar flare electrons and ions are considered. Performance parameters for a future hard X-ray and γ-ray imager are then summarized. We briefly survey techniques for high-energy photon imaging including direct collimation imaging, coded apertures, and modulation collimators. We then discuss in detail the technique of Fourier-transform imaging. The basic formalism is presented, followed by a discussion of several practical aspects of the technique. We conclude our discussion of imaging techniques with a description of the options for detectors and grid fabrication. Several planned future high-energy imagers are described including the Solar-A hard X-ray imager, the balloon-borne GRID γ-ray imager, and the Pinhole/Occulter Facility.

Journal ArticleDOI
J. Hynecek
TL;DR: In this article, a device architecture for building high-performance and high-resolution image sensors suitable for consumer TV camera applications is introduced, where the sensor elements are junction field-effect transistors that are organized in an array with their gates floating and capacitively coupled to a common horizontal address line.
Abstract: A device architecture for building high-performance and high-resolution image sensors suitable for consumer TV camera applications is introduced. The sensor elements are junction field-effect transistors that are organized in an array with their gates floating and capacitively coupled to a common horizontal address line. The photogenerated signal is sampled one line at a time, processed to remove the element-to-element nonuniformities, and stored in a buffer for subsequent readout. The concept, which includes an intrinsic exposure control, is demonstrated on a test image sensor that has an 8-mm sensing area diagonal and 580 (H)*488 (V) pixels. The key performance parameters, in addition to a high packing density of sensing elements with a unique hexagonal shape, include high signal uniformity, low dark current, good light sensitivity, high blooming overload protection, and no image smear. The discussion covers the design and operation of the basic image-sensing element, the architecture of the array, and the operation of the on-chip circuits needed for addressing and processing of generated signals. The overall device performance is demonstrated by typical device characterization results.

Proceedings ArticleDOI
24 Apr 1988
TL;DR: The authors address the problem of robot multisensor fusion and integration with special emphasis on optimal estimation of fused sensor data, based on a Unimation PUMA 560 robot and various external sensors.
Abstract: The authors address the problem of robot multisensor fusion and integration with special emphasis on optimal estimation of fused sensor data. The investigation is based on a Unimation PUMA 560 robot and various external sensors. These include overhead vision, eye-in-hand vision, proximity, tactile array, position, force/torque, crossfire, overload, and slip sensing devices. The efficient fusion of data from different sources will enable the machine to respond promptly in dealing with the real world. Towards this goal, the general paradigm of a sensor data fusion system has been developed, and some simulation results as well as results from the actual implementation of certain concepts of sensor data fusion have been demonstrated.
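One standard building block of such optimal estimation is inverse-variance (minimum-variance) fusion of redundant readings, the static special case of a Kalman update. The sketch below shows only that generic building block, with made-up sensor noise levels; it is not the authors' full fusion paradigm.

```python
# Minimal sketch of minimum-variance fusion of independent estimates of one quantity.
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion; returns the fused value and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

# Overhead vision, eye-in-hand vision, and a proximity sensor all estimating the
# same object coordinate (mm), with assumed noise variances.
print(fuse([101.2, 99.8, 100.5], [4.0, 1.0, 0.25]))
```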

Patent
12 Dec 1988
TL;DR: In this article, a light intensity detecting circuit using a charge storage type of optical sensor having a parallel capacitor eliminates the influence of the dark current of the optical sensor and the offset of the comparator included in the detecting circuit so that the accuracy of light intensity detection is improved over that of the prior art.
Abstract: A light intensity detecting circuit using a charge storage type of optical sensor having a parallel capacitor eliminates the influence of the dark current of the optical sensor and/or the influence of the offset of the comparator included in the detecting circuit so that the accuracy of the light intensity detection is improved over that of the prior art. To eliminate the influence of the dark current of the optical sensor in an image sensor, there is provided an optically shielded mimic sensor having the same structure as that of the optical sensor including a corresponding parallel capacitor. In order to eliminate the influence of the offset of the comparator, there is provided a time constant circuit having a capacitor that may be composed partially of the parallel capacitor of the mimic sensor corresponding to the parallel capacitor of the optical sensor; with potential applying means for operating the time constant circuit.

Patent
18 Mar 1988
TL;DR: In this paper, a video scope system including a video processor and a light source unit is presented, where the video processor is connected to the light source by a cable to transmit a signal between them.
Abstract: A video scope system including a video scope in which a solid-state image sensor is disposed in a distal end of an insertion section to be inserted into an object under inspection and picks up an image of the inside of the object under inspection illuminated by light fed by a light guide extending in the insertion section, a light source unit having a light source feeding light into the light guide, and a video processor unit processing a signal supplied from the solid-state image sensor to output a picture signal to be displayed on a monitor. The light source unit and the video processor unit are disposed in separate housings, respectively, and these units are connected to each other by a cable to transmit a signal therebetween.

Proceedings ArticleDOI
H. Takahashi, F. Tomita
05 Dec 1988
TL;DR: A self-calibration method based on boundary representations of a pair of stereo images, which autonomously computes its current camera parameters from unknown observed data, is proposed.
Abstract: All stereo systems assume that the correct camera parameters are obtained. In fact, those parameters can be calibrated in advance using known test patterns. However, not only are there calibration errors, but some parameters also tend to change after the calibration. Even if the errors are small, their effects on identifying corresponding points and computing distances are very large. It is infeasible to compute camera parameters every time before observation, especially for stereo cameras which change their focus of attention by convergence or those on mobile robots which observe their environment and acquire depth information. The self-calibration of stereo cameras, which autonomously computes the current camera parameters from unknown observed data, is really necessary for real stereo systems. In this paper, we propose a self-calibration method based on boundary representations of a pair of stereo images.

Journal ArticleDOI
TL;DR: To arrive at an estimate of the TM imaging system MTF, the TM point spread function (PSF) was measured using a two-dimensional array of black squares constructed at the White Sands Missile Range in New Mexico.
Abstract: This paper presents a method for measuring the Thematic Mapper (TM) imaging system point spread function (PSF) using TM imagery or a specially constructed target consisting of a two-dimensional array of approximate point sources of known dimensions and radiometric qualities. The target allows 16 separate point sources to be imaged simultaneously by the TM. The point sources were carefully placed on the ground so that their relative positions were known. Owing to sample-scene phasing, each imaged point source exhibits a different amount of blur in the digital image. The target pixels may then be recombined according to their known relative positions to form a single, sampled, nonaliased imaging system PSF. The modulation transfer function is then obtained as the modulus of the discrete Fourier transform of the PSF.
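The final step is worth writing out: once the sampled, non-aliased PSF is assembled, the MTF is simply the modulus of its discrete Fourier transform. The Gaussian below is an illustrative stand-in for the PSF recombined from the target pixels, not TM data.

```python
# Minimal sketch: MTF as the modulus of the DFT of a (stand-in) point spread function.
import numpy as np

x = np.arange(-16, 17)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 1.5 ** 2))    # stand-in PSF (assumed Gaussian)
psf /= psf.sum()                                        # normalize so MTF(0) = 1

mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))         # modulus of the DFT
freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0]))   # cycles per sample
centre = psf.shape[0] // 2
print(freqs[centre:], mtf[centre, centre:])             # 1-D cut along one axis
```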

Proceedings ArticleDOI
12 Feb 1988
TL;DR: A new type of high speed range finder system that is based on the principle of triangulation range-finding, with a novel custom range sensor consisting of a 2D array of discrete photo-detectors attached to an individual memory element.
Abstract: We present a new type of high speed range finder system that is based on the principle of triangulation range-finding. One of the unique elements of this system is a novel custom range sensor. This sensor consists of a 2D array of discrete photo-detectors. Each photo-detector is attached to an individual memory element. A slit-ray is used to illuminate the object which is then imaged by the sensor. The slit-ray is scanned at a constant angular velocity, so elapsed time is a direct function of the direction of the slit source. This elapsed time is latched into each individual memory element when the corresponding detector is triggered. The system can acquire the basic data required for range computation without repeatedly scanning the sensor many times. The slit-ray scans the entire object once at high speed. The resulting reflected energy strip sweeps across the sensor, triggering the photo-detectors in succession. The expected time to acquire the data is approximately 1 millisecond for 100x100 pixels of range data. The sensor is scanned only once at the end of data acquisition for transferring the stored data to a host processing computer. The range information for each pixel is obtained from the location of the pixel and the value of time (direction of the slit source) stored in the attached memory element. We have implemented this system in an abbreviated manner to verify the method. The implementation uses a 47 x 47 array of photo-transistors. Because of the practical difficulty of hooking up the entire array to individual memories and the magnitude of the hardware involved, the implementation uses only 47 memories corresponding to a row at a time. The sensor is energized a row at a time and the laser scanned. This yields one row of data at a time as we described before. In order to obtain the whole image, we repeat this procedure as many times as we have rows, i.e., 47 times. This is not due to any inherent limitation of the method, but due to implementational difficulties in the experimental system. This can be rectified when the sensor is committed to custom VLSI hardware. The time to completely obtain a frame of data (47 x 47) is approximately 80 milliseconds. The depth measurement error is less than 1.0%.
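The range computation in the last paragraph is ordinary light-stripe triangulation, with the slit direction recovered from the latched time. A minimal sketch follows; the baseline, focal length, sweep rate, and sweep start angle are illustrative assumptions rather than the authors' parameters.

```python
# Minimal sketch: range for one detector cell from its latched time and pixel position.
import math

def cell_range(t_latched_s, sweep_rate_rad_s, sweep_start_rad,
               col, cols, focal_px, baseline_m):
    """Range (m) for the cell at column `col` whose memory latched time t_latched_s."""
    slit_angle = sweep_start_rad + sweep_rate_rad_s * t_latched_s   # constant angular velocity
    cam_angle = math.atan2(col - cols / 2, focal_px)                # cell's viewing direction
    # Two-ray triangulation between the slit source and the camera.
    return baseline_m / (math.tan(slit_angle) + math.tan(cam_angle))

print(cell_range(t_latched_s=4e-4, sweep_rate_rad_s=math.radians(30) / 1e-3,
                 sweep_start_rad=math.radians(10), col=70, cols=100,
                 focal_px=120.0, baseline_m=0.3))
```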

Patent
11 Jan 1988
TL;DR: In this article, an adjustable mount for positioning an image sensor in a device for generating video signals is presented. All movement modes are controlled by five screws which are all accessible and operable from the same direction.
Abstract: Adjustable mount for positioning an image sensor in a device for generating video signals. All movement modes are controlled by five screws which are all accessible and operable from the same direction.

Journal ArticleDOI
TL;DR: A light-stripe vision system is used to measure the location of polyhedral features of parts from a single frame of video camera output and issues such as accuracy in locating the line segments of intersection in the image and combining redundant information from multiple measurements and multiple sources are addressed.
Abstract: A light-stripe vision system is used to measure the location of polyhedral features of parts from a single frame of video camera output. The geometric conditions which assure location of the feature when the light plane intersects three of the feature's faces are given. Issues such as accuracy in locating the line segments of intersection in the image and combining redundant information from multiple measurements and multiple sources are addressed. It was found that in 2.5 s, the prototype sensor was capable of locating a 2-in cube to an accuracy (one standard deviation) of 0.002 in (0.055 mm) in translation and 0.1 degrees (0.0015 radians) in rotation. When integrated with a manipulator, the system was capable of performing high-precision assembly tasks.

Patent
Hideo Homma
26 Jan 1988
TL;DR: An image sensing device allowing normal or tele-conversion operation has an image sensor which includes a plurality of horizontal lines and is capable of being non-destructively read out. Area setting circuitry is included for variably setting a reading area of the image sensor, as mentioned in this paper.
Abstract: An image sensing device allowing normal or tele-conversion operation has an image sensor which includes a plurality of horizontal lines and is capable of being non-destructively read out. Area setting circuitry is included for variably setting a reading area of the image sensor. Clock control circuitry includes a plurality of reading lines and is arranged to read out image information from the selected reading area. Reading circuitry non-destructively reads out, a plurality of times for the tele-conversion operation, the signals from the set reading area. The reading cycle corresponds to the size of the set reading area. Clearing circuitry is also included for clearing altogether the areas of the image sensor other than the set reading area.

Patent
17 Feb 1988
TL;DR: In this article, the storage start time for each image sensor is shifted so as to realize a continuous signal pickup among the image sensors in order to shorten the wait time for the signal pickup.
Abstract: The image readout apparatus has a plurality of independently driven image sensors. The charge storage time for each image sensor is determined based on light incident thereto, and the signal pickup is started in the storage completion order of the image sensor. The time sequential signal readout of each image sensor is sent to a single signal processor and thereafter it is stored in a memory. In the case of concurrent storage start of the image sensors, if the charge storage of an image sensor is terminated while the signal pickup of another image sensor is performed, the signal pickup of the former image sensor is temporarily delayed. To shorten the wait time for the signal pickup, the storage start time for each image sensor is shifted so as to realize a continuous signal pickup among the image sensors.

Proceedings ArticleDOI
22 Aug 1988
TL;DR: The geometric parameters of polar exponential arrays, and their relation to the 3-D sensing precision requirements which drive sensor design, are analyzed, and a software testbed for simulation of the sensor geometry and mappings is described.
Abstract: The polar exponential arrays whose geometric parameters are presently analyzed have proven superior to X-Y raster imaging sensors when wide FOV, high central resolution, and rotation- and zoom-invariance are required; attractive applications for such arrays are in spacecraft docking/tracking/stationkeeping and mobile robot navigation. Attention is given to optimal designs minimizing sensor configuration and computation requirements, and the relation of geometric parameters to the three-dimensional sensing precision requirements driving sensor design. A method for smooth patching of the 'blind-spot' singularity in the sensor with a uniformly high-resolution 'fovea' is also presented.
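The geometry of a polar exponential (log-polar) array is compact enough to sketch: ring radii grow geometrically, which is what makes zoom and rotation of the scene act as simple shifts of the sampled array. The ring/sector counts and growth factor below are assumptions for illustration, not the paper's optimized design values.

```python
# Minimal sketch of polar exponential (log-polar) sample positions.
import numpy as np

def log_polar_centres(n_rings=32, n_sectors=64, r0=2.0, growth=1.1):
    """(x, y) centres of the receptive fields of a polar exponential array."""
    radii = r0 * growth ** np.arange(n_rings)              # exponentially spaced ring radii
    thetas = 2 * np.pi * np.arange(n_sectors) / n_sectors  # evenly spaced sectors
    r, t = np.meshgrid(radii, thetas)
    return r * np.cos(t), r * np.sin(t)

x, y = log_polar_centres()
# Zooming the scene by the growth factor shifts every sample one ring outward;
# rotating by one sector spacing shifts it one sector around.
print(x.shape, float(np.hypot(x, y).max()))
```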

Patent
22 Sep 1988
TL;DR: In this paper, a video endoscope is presented in which an objective and a solid state imaging sensor are provided in the distal end portion of the inserting portion thereof, which is to be inserted into a cavity of a body, and the image of the wall of the cavity picked-up by the image sensor is displayed on a monitor.
Abstract: In a video endoscope apparatus in which an objective and a solid-state image sensor are provided in the distal end portion of the inserting portion, which is to be inserted into a cavity of a body, and the image of the cavity wall picked up by the solid-state image sensor is displayed on a monitor, the solid-state image sensor is vibrated at a predetermined frequency of 10 Hz along the optical axis of the objective by means of a piezoelectric element. The peak value of the image signal supplied from the solid-state image sensor is detected on each horizontal scanning line, the detected peak value is passed through a band-pass filter having a central frequency of 10 Hz, and the filtered signal is synchronously detected using a sampling signal which has a frequency of 10 Hz and is synchronized with the vibration. A focusing error correcting signal is generated by comparing the synchronously detected signal with a reference signal, and the solid-state image sensor is moved to an in-focus position by supplying the focusing error correcting signal to the piezoelectric element.
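The focusing-error extraction described above is a lock-in (synchronous) detection at the dither frequency. The sketch below reproduces only that signal-processing step with simulated data; the sample rate, defocus value, and noise level are assumptions, and the 10 Hz dither is taken from the abstract.

```python
# Minimal sketch of synchronous detection of a 10 Hz focus-dither component.
import numpy as np

fs, f_dither = 1000.0, 10.0                          # sample rate (Hz), dither frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
defocus = 0.3                                        # assumed defocus-proportional amplitude
# The per-line peak value contains a component at the dither frequency whose
# amplitude (and sign) tracks the defocus.
peak_signal = 1.0 + defocus * np.sin(2 * np.pi * f_dither * t) \
              + 0.05 * np.random.randn(t.size)

reference = np.sin(2 * np.pi * f_dither * t)          # sampling signal synchronized with the dither
focus_error = 2.0 * np.mean(peak_signal * reference)  # lock-in output, ~ defocus
print(focus_error)                                    # drives the piezoelectric element
```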

Patent
14 Jul 1988
TL;DR: In this paper, three linear image sensors (14B, 14G and 14R) read the image of an original (1) being moved along the subscanning direction (-Y), by receiving lights (LB, LG and LR) from respective instantaneous reading lines (WB, WG and WR), respectively.
Abstract: Three linear image sensors (14B, 14G and 14R) read the image of an original (1) being moved along the subscanning direction (-Y), by receiving lights (LB, LG and LR) from respective instantaneous reading lines (WB, WG and WR), respectively. Scanning line switching clocks serving as sensor drive timing signals are supplied from a control circuit (20) to the linear image sensors, respectively. These clocks are time-shifted from each other by time shift values calculated from the deviations between the instantaneous reading positions and the scanning line pitch, so that the linear image sensors read common scanning lines.

Patent
22 Jun 1988
TL;DR: In this article, a contact type image sensor includes a light source for illuminating an original to be read and a substrate in which an optical fiber array member is assembled such that one end of the optical fibre array member faces the original for transmitting a reflected light from the illuminated original therethrough.
Abstract: A contact type image sensor includes a light source for illuminating an original to be read and a substrate in which an optical fiber array member is assembled. The substrate is disposed such that one end of the optical fiber array member faces the original for transmitting a reflected light from the illuminated original therethrough. The contact type image sensor further includes a light detecting element array formed on the substrate and facing the other end of the optical fiber array member for receiving the transmitted light and converting the received light to an electrical signal. The contact type image sensor also includes a driving circuit disposed on the substrate and electrically connected to the light detecting element array for driving the light detecting element array.

Journal ArticleDOI
TL;DR: Based on the results from the simulation, it appears that an image intensifier-based CT system is a feasible concept from a noise viewpoint, if the anticipated imaging task is intravenous angiography.
Abstract: This present study reports the results of a computer simulation whose aim was to predict the low-contrast imaging performance of which a conventional x-ray image intensifier with charge coupled device (CCD) camera would be capable if incorporated into a computed tomography (CT) volume imager. A vascular imaging task was modeled in our simulation. The effects of detector noise, x-ray exposure levels, analog-to-digital conversion (ADC) precision and residual levels of detected x-ray scatter were considered. The results of this simulation indicate that the low-contrast imaging performance of an image intensifier-based CT system was most limited by the CCD detector readout noise. Given this limitation the detection of greater than about 100,000 detected photons/pixel/projection gave marginal improvement in low-contrast resolution. At these exposures 12 bit ADC precision resulted in little additional image noise. The effects of detecting scattered x rays are twofold; decreasing the signal-to-noise ratio associated with our modeled artery and introducing a cupping artifact. Based on the results from the simulation, it appears that an image intensifier-based CT system is a feasible concept from a noise viewpoint, if the anticipated imaging task is intravenous angiography.
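The noise budget behind these conclusions combines Poisson quantum noise, which grows as the square root of the photon count, with a fixed detector readout noise. A minimal sketch of that per-pixel SNR calculation follows; the readout-noise level (expressed in detected-photon equivalents) is an assumed value, not the one used in the paper's simulation.

```python
# Minimal sketch: per-pixel SNR with Poisson photon noise plus CCD readout noise.
import numpy as np

photons = np.array([1e4, 1e5, 1e6])    # detected photons / pixel / projection
readout_noise_eq = 300.0                # readout noise in photon equivalents (assumption)

snr = photons / np.sqrt(photons + readout_noise_eq ** 2)   # quantum + readout variance
for n, s in zip(photons, snr):
    print(f"{n:9.0f} photons -> SNR {s:7.1f}  (ideal {np.sqrt(n):7.1f})")
```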

Patent
Yoichi Takaragi1
11 Oct 1988
TL;DR: In this article, an image processing apparatus has a plurality of parallel linear sensors, which may be sensors for different colors, in which output from one or more of the sensors may be subjected to an interpolation process to produce image data corresponding to the same scan line of an original as the image data being output from another sensor.
Abstract: An image processing apparatus having a plurality of parallel linear sensors, which may be sensors for different colors, in which output from one or more of the sensors may be subjected to an interpolation process to produce image data corresponding to the same scan line of an original as the image data being output from another of the sensors. This makes it possible to use the same driving signals for all the sensors even if they are staggered, and yet to produce output image data free of noise of the kind which can be caused by crosstalk. In one embodiment, compensation is made for a distance between two (or more) of the image sensors, based on magnification. In another embodiment, an interpolator is provided and interpolates an image signal of two adjacent lines of image from one image sensor, in accordance with magnification, to obtain an image signal on the same line as that from another of the image sensors.
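The interpolation in the last embodiment can be shown in a few lines: when one linear sensor is offset from the reference sensor by a non-integer number of scan lines (the fraction depending on magnification), its output for a given scan line is rebuilt from the two adjacent lines it actually read. The fractional offset below is an assumed value for illustration.

```python
# Minimal sketch of rebuilding a scan line from two adjacent lines of a staggered sensor.
import numpy as np

def align_staggered(prev_line, next_line, fractional_offset):
    """Linear interpolation so the result corresponds to the same original scan line
    as the reference sensor's output."""
    a = float(fractional_offset) % 1.0
    return (1.0 - a) * prev_line + a * next_line

prev_line = np.array([10.0, 12.0, 14.0])
next_line = np.array([20.0, 22.0, 24.0])
print(align_staggered(prev_line, next_line, fractional_offset=0.4))
```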