Topic

Image sensor

About: Image sensor is a research topic. Over its lifetime, 44,921 publications have been published within this topic, receiving 504,185 citations. The topic is also known as: electronic imager & sensor.


Papers
Journal ArticleDOI
TL;DR: The edge method provides a convenient measurement of the presampled MTF for digital radiographic systems with good response at low frequencies.
Abstract: The modulation transfer function (MTF) of radiographic systems is frequently evaluated by measuring the system's line spread function (LSF) using narrow slits. The slit method requires precise fabrication and alignment of a slit and high radiation exposure. An alternative method for determining the MTF uses a sharp, attenuating edge device. We have constructed an edge device from a 250-µm-thick lead foil laminated between two thin slabs of acrylic. The device is placed near the detector and aligned with the aid of a laser beam and a holder such that a polished edge is parallel to the x-ray beam. A digital image of the edge is processed to obtain the presampled MTF. The image processing includes automated determination of the edge angle, reprojection, sub-binning, smoothing of the edge spread function (ESF), and spectral estimation. This edge method has been compared to the slit method using measurements on standard and high-resolution imaging plates of a digital storage phosphor (DSP) radiography system. The experimental results for both methods agree with a mean MTF difference of 0.008. The edge method provides a convenient measurement of the presampled MTF for digital radiographic systems with good response at low frequencies.

701 citations
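The edge-method pipeline described in the abstract above (estimate the edge angle, reproject pixels into an oversampled edge spread function, smooth, differentiate, and take the Fourier transform) can be sketched in a few lines. The sketch below is a minimal reconstruction for orientation only, not the authors' code; the threshold-based angle estimate, the oversampling factor, and the smoothing and window choices are assumptions made for the example.

```python
import numpy as np

def presampled_mtf(edge_img, oversample=10):
    """Rough edge-method MTF sketch: project pixels onto the edge normal,
    bin into an oversampled ESF, differentiate to the LSF, then FFT."""
    rows, cols = edge_img.shape

    # Estimate the edge angle (illustrative: threshold each row at the
    # mid-level and fit a straight line through the crossings).
    mid = 0.5 * (edge_img.min() + edge_img.max())
    edge_x = np.array([np.argmin(np.abs(r - mid)) for r in edge_img])
    slope, intercept = np.polyfit(np.arange(rows), edge_x, 1)

    # Signed distance of every pixel from the fitted edge, in pixel units.
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = (xx - (slope * yy + intercept)) / np.sqrt(1.0 + slope ** 2)

    # Re-bin into a sub-pixel (oversampled) edge spread function.
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins.ravel(), weights=edge_img.ravel().astype(float))
    cnt = np.bincount(bins.ravel())
    esf = esf / np.maximum(cnt, 1)

    # Light smoothing, then differentiate to get the line spread function.
    esf = np.convolve(esf, np.ones(5) / 5.0, mode="same")
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)          # suppress noise at the tails

    # MTF is the magnitude of the normalized Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
    return freq, mtf
```

Calling freq, mtf = presampled_mtf(cropped_edge_region) returns spatial frequencies in cycles per pixel and the normalized presampled MTF for that region.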

Journal ArticleDOI
30 Apr 2009-Nature
TL;DR: This work maps a two-dimensional (2D) image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain, overcoming the compromise between sensitivity and frame rate without resorting to cooling or high-intensity illumination.
Abstract: Ultrafast real-time optical imaging is used in many areas of science, from biological imaging to the study of shockwaves. But in systems that undergo changes on very fast timescales, conventional technologies such as CCD (charge-coupled-device) cameras are compromised: either imaging speed or sensitivity has to be sacrificed unless special cooling or extra-bright illumination is used. This is because it takes time to read out the data from sensor arrays, and at high frame rates only a few photons are collected. Now a UCLA team has developed an imaging method that overcomes these limitations and offers frame rates at least a thousand times faster than those of conventional CCDs, making this perhaps the world's fastest continuously running camera, with a shutter speed of 440 picoseconds. The technology, serial time-encoded amplified microscopy (STEAM), maps a two-dimensional image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain; a single-pixel photodetector then captures the entire image.

Ultrafast real-time optical imaging is an indispensable tool for studying dynamical events such as shock waves [1,2], chemical dynamics in living cells [3,4], neural activity [5,6], laser surgery [7-9] and microfluidics [10,11]. However, conventional CCDs (charge-coupled devices) and their complementary metal–oxide–semiconductor (CMOS) counterparts are incapable of capturing fast dynamical processes with high sensitivity and resolution. This is due in part to a technological limitation: it takes time to read out the data from sensor arrays. There is also a fundamental compromise between sensitivity and frame rate; at high frame rates, fewer photons are collected during each frame, a problem that affects nearly all optical imaging systems. Here we report an imaging method that overcomes these limitations and offers frame rates that are at least 1,000 times faster than those of conventional CCDs. Our technique maps a two-dimensional (2D) image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain. We capture an entire 2D image using a single-pixel photodetector and achieve a net image amplification of 25 dB (a factor of 316). This overcomes the compromise between sensitivity and frame rate without resorting to cooling and high-intensity illumination. As a proof of concept, we perform continuous real-time imaging at a frame period of 163 ns (a frame rate of 6.1 MHz) and a shutter speed of 440 ps. We also demonstrate real-time imaging of microfluidic flow and phase-explosion effects that occur during laser ablation.

699 citations
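The headline numbers quoted above are easy to cross-check; the snippet below only reproduces that arithmetic (frame rate from the 163 ns frame period, and the linear gain implied by 25 dB of optical image amplification), with no assumptions beyond the figures in the abstract.

```python
# Quick arithmetic check of the STEAM figures quoted above.
frame_period_s = 163e-9                 # 163 ns per frame
frame_rate_hz = 1.0 / frame_period_s    # ~6.13e6 Hz -> ~6.1 MHz
gain_db = 25.0
gain_linear = 10 ** (gain_db / 10.0)    # ~316x, matching "a factor of 316"
shutter_s = 440e-12                     # 440 ps shutter

print(f"frame rate  ~ {frame_rate_hz / 1e6:.1f} MHz")
print(f"25 dB gain  ~ {gain_linear:.0f}x")
print(f"shutter/frame duty ~ {shutter_s / frame_period_s:.1%}")
```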

Proceedings ArticleDOI
28 Sep 2004
TL;DR: A direct solution is given that minimizes an algebraic error from this constraint, and subsequent nonlinear refinement minimizes a re-projection error; to the authors' knowledge, this is the first published calibration tool for this problem.
Abstract: We describe theoretical and experimental results for the extrinsic calibration of a sensor platform consisting of a camera and a 2D laser range finder. The calibration is based on observing a planar checkerboard calibration pattern and solving for the constraints between its "views" from the camera and the laser range finder. We give a direct solution that minimizes an algebraic error from this constraint, and subsequent nonlinear refinement minimizes a re-projection error. To our knowledge, this is the first published calibration tool for this problem. Additionally, we show how this constraint can reduce the variance in estimating intrinsic camera parameters.

697 citations
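The plane constraint at the core of this calibration can be written down directly: if the camera's checkerboard pose gives a plane with unit normal n and offset d (so n·X = d for board points X in camera coordinates), then every laser point p that hits the board must satisfy n·(R p + t) = d, where (R, t) are the laser-to-camera extrinsics. The sketch below solves that constraint linearly and projects the result onto a valid rotation; it is a generic illustration under this formulation, not necessarily the authors' exact algorithm, and the function name, argument shapes, and solver choices are illustrative.

```python
import numpy as np

def linear_extrinsics(planes, laser_pts):
    """Estimate laser->camera extrinsics (R, t) from plane constraints.

    planes:    list of (n, d) pairs, one per board pose, with unit normal
               n (3,) and offset d taken from the camera's checkerboard pose.
    laser_pts: list of (Ni, 3) arrays of laser points lying on that board
               (2D scans can be lifted with a zero third coordinate).
    """
    A, b = [], []
    for (n, d), pts in zip(planes, laser_pts):
        for p in pts:
            # n^T (R p + t) = d is linear in the 12 unknowns of [R | t]:
            # the row acts on the stacked vector [r1; r2; r3; t].
            A.append(np.concatenate([p[0] * n, p[1] * n, p[2] * n, n]))
            b.append(d)
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)

    R_raw = x[:9].reshape(3, 3, order="F")   # columns r1, r2, r3
    t = x[9:]

    # Project the raw 3x3 block onto the nearest rotation matrix via SVD.
    # For a purely 2D scanner (all p[2] == 0) the third column is not
    # constrained by the data and is recovered only by this projection,
    # so several board poses with different orientations are needed.
    U, _, Vt = np.linalg.svd(R_raw)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

In practice this linear estimate would then be refined by minimizing a re-projection error, as the abstract states.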

Journal Article
TL;DR: In this article, the requirements for CMOS image sensors, their historical development, CMOS devices and circuits for pixels, the analog signal chain, and on-chip analog-to-digital conversion are reviewed and discussed.
Abstract: CMOS active pixel sensors (APS) have performance competitive with charge-coupled device (CCD) technology, and offer advantages in on-chip functionality, system power reduction, cost, and miniaturization. This paper discusses the requirements for CMOS image sensors and their historical development; CMOS devices and circuits for pixels, the analog signal chain, and on-chip analog-to-digital conversion are reviewed and discussed.

693 citations
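As a loose illustration of the signal chain such a review walks through (photo-generated charge, in-pixel conversion gain and source-follower buffering, correlated double sampling, then on-chip analog-to-digital conversion), here is a toy numerical model; every gain, noise, and bit-depth value below is invented for the example and none of them comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CMOS active-pixel signal chain (illustrative numbers only).
photons         = 2000                  # photons hitting one pixel this frame
qe              = 0.6                   # quantum efficiency
conv_gain_uV_e  = 50.0                  # conversion gain at the floating diffusion (uV per electron)
sf_gain         = 0.85                  # source-follower gain
read_noise_uV   = 150.0                 # readout noise per sample (uV, RMS)
reset_offset_uV = 5000.0 * rng.standard_normal()  # kTC/offset that CDS removes
full_scale_uV   = 1_000_000.0           # 1 V ADC input range
adc_bits        = 10

# Photon shot noise and photodiode-to-voltage conversion.
electrons = rng.poisson(photons * qe)
signal_uV = electrons * conv_gain_uV_e * sf_gain

# Correlated double sampling: sample the reset level, then the signal level,
# and subtract; the common offset cancels and the signal remains.
reset_sample  = reset_offset_uV + read_noise_uV * rng.standard_normal()
signal_sample = reset_offset_uV - signal_uV + read_noise_uV * rng.standard_normal()
cds_uV = reset_sample - signal_sample

# On-chip ADC quantization.
code = int(np.clip(round(cds_uV / full_scale_uV * (2 ** adc_bits - 1)),
                   0, 2 ** adc_bits - 1))
print(f"{electrons} e-  ->  {cds_uV:.0f} uV after CDS  ->  ADC code {code}")
```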

Proceedings ArticleDOI
17 Jun 1997
TL;DR: A new camera with a hemispherical field of view is presented and results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification.
Abstract: Conventional video cameras have limited fields of view that make them restrictive in a variety of vision applications. There are several ways to enhance the field of view of an imaging system. However, the entire imaging system must have a single effective viewpoint to enable the generation of pure perspective images from a sensed image. A new camera with a hemispherical field of view is presented. Two such cameras can be placed back-to-back, without violating the single viewpoint constraint, to arrive at a truly omnidirectional sensor. Results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification. The paper concludes with a discussion on the spatial resolution of the proposed camera.

688 citations
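The software step the abstract describes (generating a pure perspective image from the omnidirectional image for a user-selected viewing direction and magnification) can be sketched generically. The equidistant fisheye model assumed below is an illustrative stand-in, not the paper's catadioptric projection, and the function name and parameters are invented for the example.

```python
import numpy as np

def perspective_from_omni(omni, R_view, focal_px, out_size=(480, 640),
                          fov_deg=185.0):
    """Resample an omnidirectional image into a pinhole (perspective) view.

    omni:     HxW or HxWx3 image assumed to follow an equidistant fisheye
              model centred in the frame (illustrative assumption).
    R_view:   3x3 rotation selecting the user's viewing direction.
    focal_px: perspective focal length in pixels (controls magnification).
    """
    H, W = out_size
    Ho, Wo = omni.shape[:2]
    cx_o, cy_o = Wo / 2.0, Ho / 2.0
    r_max = min(cx_o, cy_o)                      # radius of the fisheye circle
    theta_max = np.radians(fov_deg) / 2.0

    # Build a unit ray for every pixel of the desired perspective view.
    v, u = np.mgrid[0:H, 0:W]
    rays = np.stack([u - W / 2.0, v - H / 2.0,
                     np.full((H, W), focal_px)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    rays = rays @ R_view.T                       # rotate into the sensor frame

    # Equidistant model: image radius proportional to the angle off the axis.
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = (theta / theta_max) * r_max
    xs = np.clip(np.round(cx_o + r * np.cos(phi)).astype(int), 0, Wo - 1)
    ys = np.clip(np.round(cy_o + r * np.sin(phi)).astype(int), 0, Ho - 1)

    return omni[ys, xs]                          # nearest-neighbour resampling
```

For example, perspective_from_omni(img, np.eye(3), focal_px=400.0) renders the view along the optical axis; increasing focal_px increases the magnification, and changing R_view changes the viewing direction.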


Network Information
Related Topics (5)
Image processing: 229.9K papers, 3.5M citations (90% related)
Image segmentation: 79.6K papers, 1.8M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    188
2022    360
2021    918
2020    2,104
2019    2,322
2018    2,179