
Single-Pixel Imaging via Compressive Sampling


1053-5888/08/$25.00©2008IEEE IEEE SIGNAL PROCESSING MAGAZINE [83] MARCH 2008
[Marco F. Duarte, Mark A. Davenport, Dharmpal Takhar, Jason N. Laska, Ting Sun, Kevin F. Kelly, and Richard G. Baraniuk]

[Building simpler, smaller, and less-expensive digital cameras]

Digital Object Identifier 10.1109/MSP.2007.914730

Humans are visual animals, and imaging sensors that extend our reach—cameras—have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the megapixel range are now ubiquitous thanks to the happy coincidence that the semiconductor material of choice for large-scale electronics integration (silicon) also happens to readily convert photons at visual wavelengths into electrons. On the other hand, imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a US$500 digital camera for the visible becomes a US$50,000 camera for the infrared.

In this article, we present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Our approach fuses a new camera architecture based on a digital micromirror device (DMD—see "Spatial Light Modulators") with the new mathematical theory and algorithms of compressive sampling (CS—see "CS in a Nutshell").
CS combines sampling and compression into a single nonadaptive linear measurement process [1]–[4]. Rather than measuring pixel samples of the scene under view, we measure inner products between the scene and a set of test functions. Interestingly, random test functions play a key role, making each measurement a random sum of pixel values taken across the entire image. When the scene under view is compressible by an algorithm like JPEG or JPEG2000, the CS theory enables us to stably reconstruct an image of the scene from fewer measurements than the number of reconstructed pixels. In this manner we achieve sub-Nyquist image acquisition.
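As a concrete illustration of such random-sum measurements, here is a minimal numerical sketch (a toy example of our own, not the hardware pipeline described below; sizes and names are our choices): each of the M measurements is an inner product of the scene with a random ±1 test function.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64 * 64   # number of image pixels
M = 800       # number of CS measurements, M << N

x = rng.random(N)   # stand-in for the rasterized scene under view

# Each row of Phi is one random +/-1 test function, so each measurement
# y[m] is a randomly signed sum of pixel values across the entire image.
Phi = rng.choice([-1.0, 1.0], size=(M, N))
y = Phi @ x   # M nonadaptive linear measurements

print(y.shape)  # (800,)
```

Note that the same matrix Phi (or its random seed) must be available at reconstruction time; the measurements alone do not identify the test functions.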
Our "single-pixel" CS camera architecture is basically an optical computer (comprising a DMD, two lenses, a single photon detector, and an analog-to-digital (A/D) converter) that computes random linear measurements of the scene under view. The image is then recovered or processed from the measurements by a digital computer. The camera design reduces the required size, complexity, and cost of the photon detector array down to a single unit, which enables the use of exotic detectors that would be impossible in a conventional digital camera. The random CS measurements also enable a tradeoff between space and time during image acquisition. Finally, since the camera compresses as it images, it has the capability to efficiently and scalably handle high-dimensional data sets from applications like video and hyperspectral imaging.

This article is organized as follows. After describing the hardware, theory, and algorithms of the single-pixel camera in detail, we analyze its theoretical and practical performance and compare it to more conventional cameras based on pixel arrays and raster scanning. We also explain how the camera is information scalable in that its random measurements can be used to directly perform simple image processing tasks, such as target classification, without first reconstructing the underlying imagery. We conclude with a review of related camera architectures and a discussion of ongoing and future work.
THE SINGLE-PIXEL CAMERA

ARCHITECTURE
The single-pixel camera is an optical computer that sequentially measures the inner products y[m] = ⟨x, φm⟩ between an N-pixel sampled version x of the incident light-field from the scene under view and a set of two-dimensional (2-D) test functions {φm} [5]. As shown in Figure 1, the light-field is focused by biconvex Lens 1 not onto a CCD or CMOS sampling array but rather onto a DMD consisting of an array of N tiny mirrors (see "Spatial Light Modulators").
Each mirror corresponds to a particular pixel in x and φm and can be independently oriented either towards Lens 2 (corresponding to a one at that pixel in φm) or away from Lens 2 (corresponding to a zero at that pixel in φm). The reflected light is then collected by biconvex Lens 2 and focused onto a single photon detector (the single pixel) that integrates the product x[n]φm[n] to compute the measurement y[m] = ⟨x, φm⟩ as its output voltage. This voltage is then digitized by an A/D converter. Values of φm between zero and one can be obtained by dithering the mirrors back and forth during the photodiode integration time. To obtain φm with both positive and negative values (±1, for example), we estimate and subtract the mean light intensity from each measurement; the mean is easily measured by setting all mirrors to the full-on one position.

To compute CS randomized measurements y = Φx as in (1), we set the mirror orientations φm randomly using a pseudorandom number generator, measure y[m], and then repeat the process M times to obtain the measurement vector y. Recall from "CS in a Nutshell" that we can set M = O(K log(N/K)), which is ≪ N when the scene being imaged is compressible by a compression algorithm like JPEG or JPEG2000. Since the DMD array is programmable, we can also employ test functions φm drawn randomly from a fast transform such as a Walsh, Hadamard, or noiselet transform [6], [7].

[FIG1] Aerial view of the single-pixel CS camera in the lab [5]. (Labeled components: Object, Light, Lens 1, DMD+ALP Board, Lens 2, Photodiode Circuit.)

SPATIAL LIGHT MODULATORS
A spatial light modulator (SLM) modulates the intensity of a light beam according to a control signal. A simple example of a transmissive SLM that either passes or blocks parts of the beam is an overhead transparency. Another example is a liquid crystal display (LCD) projector.

The Texas Instruments (TI) digital micromirror device (DMD) is a reflective SLM that selectively redirects parts of the light beam [31]. The DMD consists of an array of bacterium-sized, electrostatically actuated micromirrors, where each mirror in the array is suspended above an individual static random access memory (SRAM) cell (see Figure 6). Each mirror rotates about a hinge and can be positioned in one of two states (+10° and −10° from horizontal) according to which bit is loaded into the SRAM cell; thus light falling on the DMD can be reflected in two directions depending on the orientation of the mirrors.

The DMD micromirrors in our lab's TI DMD 1100 developer's kit (Tyrex Services Group Ltd., http://www.tyrexservices.com) and accessory light modulator package (ALP, ViALUX GmbH, http://www.vialux.de) form a pixel array of size 1024 × 768. This limits the maximum native resolution of our single-pixel camera. However, mega-pixel DMDs are already available for the display and projector market.
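The DMD measurement process described above, with 0/1 mirror patterns plus mean subtraction to emulate ±1 test functions, can be sketched numerically as follows (a minimal simulation of our own, not the camera's acquisition code):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1024, 200   # number of mirrors (pixels) and of measurements

x = rng.random(N)  # nonnegative scene intensities at each mirror

# 0/1 mirror patterns: 1 = mirror tilted toward Lens 2, 0 = tilted away.
Phi01 = rng.integers(0, 2, size=(M, N)).astype(float)
y01 = Phi01 @ x    # integrated photodetector voltage for each pattern

# One extra measurement with every mirror in the "one" position gives the
# total light level; subtracting it synthesizes +/-1 test functions, since
# (2*Phi01 - 1) @ x = 2*(Phi01 @ x) - sum(x).
total = np.ones(N) @ x
y_pm1 = 2 * y01 - total

print(np.allclose(y_pm1, (2 * Phi01 - 1) @ x))  # True
```

The identity in the comment is why a single all-mirrors-on calibration measurement suffices: it converts every 0/1 voltage into a ±1 inner product in software.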
CS IN A NUTSHELL
CS is based on the recent understanding that a small collection of nonadaptive linear measurements of a compressible signal or image contains enough information for reconstruction and processing [1]–[3]. For a tutorial treatment see [4] or the article by Romberg in this issue.
The traditional approach to digital data acquisition samples an analog signal uniformly at or above the Nyquist rate. In a digital camera, the samples are obtained by a 2-D array of N pixel sensors on a CCD or CMOS imaging chip. We represent these samples using the vector x with elements x[n], n = 1, 2, ..., N. Since N is often very large, e.g., in the millions for today's consumer digital cameras, the raw image data x is often compressed in the following multi-step transform coding process.
The first step in transform coding represents the image in terms of the coefficients {αi} of an orthonormal basis expansion x = Σ_{i=1}^{N} αi ψi, where {ψi}_{i=1}^{N} are the N × 1 basis vectors. Forming the coefficient vector α and the N × N basis matrix Ψ := [ψ1 | ψ2 | ... | ψN] by stacking the vectors {ψi} as columns, we can concisely write the samples as x = Ψα. The aim is to find a basis where the coefficient vector α is sparse (where only K ≪ N coefficients are nonzero) or r-compressible (where the sorted coefficient magnitudes decay under a power law with scaling exponent r). For example, natural images tend to be compressible in the discrete cosine transform (DCT) and wavelet bases on which the JPEG and JPEG2000 compression standards are based. The second step in transform coding encodes only the values and locations of the K significant coefficients and discards the rest.
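The two-step transform coding process can be sketched in a few lines. Here we use a 1-D signal and SciPy's DCT as the orthonormal basis Ψ (a toy illustration of the keep-the-K-largest idea, not the JPEG codec itself; the signal and sizes are our own choices):

```python
import numpy as np
from scipy.fft import dct, idct

N, K = 256, 16   # signal length and number of retained coefficients

# A smooth test signal standing in for one image row; smooth signals are
# highly compressible in the DCT basis.
t = np.linspace(0, 1, N)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# Step 1: orthonormal basis expansion, alpha such that x = Psi @ alpha.
alpha = dct(x, norm='ortho')

# Step 2: keep only the K largest-magnitude coefficients, discard the rest.
idx = np.argsort(np.abs(alpha))[-K:]
alpha_K = np.zeros(N)
alpha_K[idx] = alpha[idx]

x_K = idct(alpha_K, norm='ortho')
rel_err = np.linalg.norm(x - x_K) / np.linalg.norm(x)
print(rel_err)  # small: most of the energy sits in a few DCT coefficients
```

An encoder would additionally have to store the locations in idx, which is exactly the overhead the third inefficiency below refers to.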
This sample-then-compress framework suffers from three inherent inefficiencies: First, we must start with a potentially large number of samples N even if the ultimate desired K is small. Second, the encoder must compute all of the N transform coefficients {αi}, even though it will discard all but K of them. Third, the encoder faces the overhead of encoding the locations of the large coefficients.
As an alternative, CS bypasses the sampling process and directly acquires a condensed representation using M < N linear measurements between x and a collection of test functions {φm}_{m=1}^{M} as in y[m] = ⟨x, φm⟩. Stacking the measurements y[m] into the M × 1 vector y and the test functions φmᵀ as rows into an M × N matrix Φ, we can write

y = Φx = ΦΨα. (1)

The measurement process is nonadaptive in that Φ does not depend in any way on the signal x.
The transformation from x to y is a dimensionality reduction and so loses information in general. In particular, since M < N, given y there are infinitely many x′ such that Φx′ = y. The magic of CS is that Φ can be designed such that sparse/compressible x can be recovered exactly/approximately from the measurements y.
While the design of Φ is beyond the scope of this review, an intriguing choice that works with high probability is a random matrix. For example, we can draw the elements of Φ as i.i.d. ±1 random variables from a uniform Bernoulli distribution [22]. Then, the measurements y are merely M different randomly signed linear combinations of the elements of x. Other possible choices include i.i.d., zero-mean, 1/N-variance Gaussian entries (white noise) [1]–[3], [22], randomly permuted vectors from standard orthonormal bases, or random subsets of basis vectors [7], such as Fourier, Walsh-Hadamard, or noiselet [6] bases. The latter choices enable more efficient reconstruction through fast algorithmic transform implementations. In practice, we employ a pseudo-random Φ driven by a pseudo-random number generator.
To recover the image x from the random measurements y, the traditional favorite method of least squares can be shown to fail with high probability. Instead, it has been shown that using the ℓ1 optimization [1]–[3]

α̂ = arg min ‖α‖₁ such that ΦΨα = y (2)

we can exactly reconstruct K-sparse vectors and closely approximate compressible vectors stably with high probability using just M ≥ O(K log(N/K)) random measurements. This is a convex optimization problem that conveniently reduces to a linear program known as basis pursuit [1]–[3]. There are a range of alternative reconstruction techniques based on greedy, stochastic, and variational algorithms [4].
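For small problems, basis pursuit can be posed directly as the linear program mentioned above by splitting α into its positive and negative parts. The following sketch (our own toy setup with Ψ = I, so the signal is sparse in the canonical basis and the constraint matrix is just Φ; sizes chosen by us) recovers a K-sparse vector from M random measurements:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, K, M = 128, 5, 60   # ambient dimension, sparsity, measurements

# A K-sparse coefficient vector; with Psi = I, x = alpha itself.
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ alpha

# Basis pursuit as a linear program: write alpha = u - v with u, v >= 0,
# and minimize sum(u) + sum(v) subject to Phi @ (u - v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
alpha_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(alpha_hat - alpha))  # near zero when recovery succeeds
```

The split works because at the ℓ1 optimum only one of u[i], v[i] is nonzero, so sum(u) + sum(v) equals ‖α‖₁. Recovery is exact with high probability over the draw of Φ, not deterministically.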
If the measurements y are corrupted by noise, then the solution to the alternative ℓ1 minimization, which we dub basis pursuit with inequality constraints (BPIC) [3]

α̂ = arg min ‖α‖₁ such that ‖y − ΦΨα‖₂ < ε, (3)

satisfies ‖α̂ − α‖₂ < CN ε + CK σK(x) with overwhelming probability. CN and CK are the noise and approximation error amplification constants, respectively; ε is an upper bound on the noise magnitude, and σK(x) is the ℓ2 error incurred by approximating α using its largest K terms. This optimization can be solved using standard convex programming algorithms.
In addition to enabling sub-Nyquist measurement, CS enjoys a number of attractive properties [4]. CS measurements are universal in that the same random matrix Φ works simultaneously for exponentially many sparsifying bases Ψ with high probability; no knowledge is required of the nuances of the data being acquired. Due to the incoherent nature of the measurements, CS is robust in that the measurements have equal priority, unlike the Fourier or wavelet coefficients in a transform coder. Thus, one or more measurements can be lost without corrupting the entire reconstruction. This enables a progressively better reconstruction of the data as more measurements are obtained. Finally, CS is asymmetrical in that it places most of its computational complexity in the recovery system, which often has more substantial computational resources than the measurement system.

The single-pixel design reduces the required size, complexity, and cost of the photon detector array down to a single unit, which enables the use of exotic detectors that would be impossible in a conventional digital camera. Example detectors include a photomultiplier tube or an avalanche photodiode for low-light (photon-limited) imaging (more on this below), a sandwich of several photodiodes sensitive to different light wavelengths for multimodal sensing, a spectrometer for hyperspectral imaging, and so on.

In addition to sensing flexibility, the practical advantages of the single-pixel design include the facts that the quantum efficiency of a photodiode is higher than that of the pixel sensors in a typical CCD or CMOS array and that the fill factor of a DMD can reach 90% whereas that of a CCD/CMOS array is only about 50%. An important advantage to highlight is the fact that each CS measurement receives about N/2 times more photons than an average pixel sensor, which significantly reduces image distortion from dark noise and read-out noise. Theoretical advantages that the design inherits from the CS theory include its universality, robustness, and progressivity.

The single-pixel design falls into the class of multiplex cameras [8]. The baseline standard for multiplexing is classical raster scanning, where the test functions {φm} are a sequence of delta functions δ[n − m] that turn on each mirror in turn. As we will see below, there are substantial advantages to operating in a CS rather than raster scan mode, including fewer total measurements (M for CS rather than N for raster scan) and significantly reduced dark noise.
IMAGE ACQUISITION EXAMPLES
Figure 2(a) and (b) illustrates a target object (a black-and-white printout of an "R") x and reconstructed image x̂ taken by the single-pixel camera prototype in Figure 1 using N = 256 × 256 and M = N/50 [5]. Figure 2(c) illustrates an N = 256 × 256 color single-pixel photograph of a printout of the Mandrill test image taken under low-light conditions using RGB color filters and a photomultiplier tube with M = N/10. In both cases, the images were reconstructed using total variation minimization, which is closely related to wavelet coefficient ℓ1 minimization [2].
STRUCTURED ILLUMINATION CONFIGURATION
In a reciprocal configuration to that in Figure 1, we can illuminate the scene using a projector displaying a sequence of random patterns {φm} and collect the reflected light using a single lens and photodetector. Such a "structured illumination" setup has advantages in applications where we can control the light source. In particular, there are intriguing possible combinations of single-pixel imaging with techniques such as three-dimensional (3-D) imaging and dual photography [9].
SHUTTERLESS VIDEO IMAGING
We can also acquire video sequences using the single-pixel camera. Recall that a traditional video camera opens a shutter periodically to capture a sequence of images (called video frames) that are then compressed by an algorithm like MPEG that jointly exploits their spatiotemporal redundancy. In contrast, the single-pixel video camera needs no shutter; we merely continuously sequence through randomized test functions φm and then reconstruct a video sequence using an optimization that exploits the video's spatiotemporal redundancy [10].

If we view a video sequence as a 3-D space/time cube, then the test functions φm lie concentrated along a periodic sequence of 2-D image slices through the cube. A naïve way to reconstruct the video sequence would group the corresponding measurements y[m] into groups where the video is quasi-stationary and then perform a 2-D frame-by-frame reconstruction on each group. This exploits the compressibility of the 3-D video cube in the space but not time direction.
A more powerful alternative exploits the fact that even though each φm is testing a different 2-D image slice, the image slices are often related temporally through smooth object motions in the video. Exploiting this 3-D compressibility in both the space and time directions and inspired by modern 3-D video coding techniques [11], we can, for example, attempt to reconstruct the sparsest video space/time cube in the 3-D wavelet domain.

These two approaches are compared in the simulation study illustrated in Figure 3. We employed simplistic 3-D tensor product Daubechies-4 wavelets in all cases. As we see from the figure, 3-D reconstruction from 2-D random measurements performs almost as well as 3-D reconstruction from 3-D random measurements, which are not directly implementable with the single-pixel camera.
SINGLE-PIXEL CAMERA TRADEOFFS
The single-pixel camera is a flexible architecture to implement a range of different multiplexing methodologies, just one of them being CS. In this section, we analyze the performance of CS and two other candidate multiplexing methodologies and compare them to the performance of a brute-force array of N pixel sensors. Integral to our analysis is the consideration of Poisson photon counting noise at the detector, which is image dependent. We conduct two separate analyses to assess the "bang for the buck" of CS. The first is a theoretical analysis that provides general guidance. The second is an experimental study that indicates how the systems typically perform in practice.

[FIG2] Single-pixel photo album. (a) 256 × 256 conventional image of a black-and-white R. (b) Single-pixel camera reconstructed image from M = 1,300 random measurements (50× sub-Nyquist). (c) 256 × 256 pixel color reconstruction of a printout of the Mandrill test image imaged in a low-light setting using a single photomultiplier tube sensor, RGB color filters, and M = 6,500 random measurements.
SCANNING METHODOLOGIES
The four imaging methodologies we consider are:
■ Pixel array (PA): a separate sensor for each of the N pixels receives light throughout the total capture time T. This is actually not a multiplexing system, but we use it as the gold standard for comparison.
■ Raster scan (RS): a single sensor takes N light measurements sequentially from each of the N pixels over the capture time. This corresponds to test functions {φm} that are delta functions and thus Φ = I. The measurements y thus directly provide the acquired image x̂.
■ Basis scan (BS): a single sensor takes N light measurements sequentially from different combinations of the N pixels as determined by test functions {φm} that are not delta functions but from some more general basis [12]. In our analysis, we assume a Walsh basis modified to take the values 0/1 rather than ±1; thus Φ = W, where W is the 0/1 Walsh matrix. The acquired image is obtained from the measurements y by x̂ = Φ⁻¹y = W⁻¹y.
■ CS: a single sensor takes M ≤ N light measurements sequentially from different combinations of the N pixels as determined by random 0/1 test functions {φm}. Typically, we set M = O(K log(N/K)), which is ≪ N when the image is compressible. In our analysis, we assume that the M rows of the matrix Φ consist of randomly drawn rows from a 0/1 Walsh matrix that are then randomly permuted (we ignore the first row consisting of all ones). The acquired image is obtained from the measurements y via a sparse reconstruction algorithm such as BPIC (see "CS in a Nutshell").
THEORETICAL ANALYSIS
In this section, we conduct a theoretical performance analysis of the above four scanning methodologies in terms of the required dynamic range of the photodetector, the required bit depth of the A/D converter, and the amount of Poisson photon counting noise. Our results are pessimistic in general; we show in the next section that the average performance in practice can be considerably better. Our results are summarized in Table 1. An alternative analysis of CS imaging for piecewise smooth images in Gaussian noise has been reported in [13].
DYNAMIC RANGE
We first consider the photodetector dynamic range required to match the performance of the baseline PA. If each detector in the PA has a linear dynamic range of 0 to D, then it is easy to see that the single-pixel RS detector need only have that same dynamic range. In contrast, each Walsh basis test function contains N/2 ones and so directs N/2 times more light to the detector. Thus, BS and CS each require a larger linear dynamic range of 0 to ND/2. On the positive side, since BS and CS collect considerably more light per measurement than the PA and RS, they benefit from reduced detector nonidealities like dark currents.
QUANTIZATION ERROR
We now consider the number of A/D bits required within the required dynamic range to match the performance of the baseline PA in terms of worst-case quantization distortion. Define the mean-squared error (MSE) between the true image x and its acquired version x̂ as MSE = (1/N)‖x − x̂‖₂². Assuming that each measurement in the PA and RS is quantized to B bits, the worst-case mean-squared quantization error for the quantized PA and RS images is MSE = √N D 2^(−B−1) [14]. Due to its larger dynamic range, BS requires B + log₂ N bits per measurement to reach the same MSE distortion level. Since the distortion of CS reconstruction is up to CN times larger than the distortion in the measurement vector (see "CS in a Nutshell"), we require up to an additional log₂ CN bits per measurement. One empirical study has found roughly that CN lies between eight and ten for a range of different random measurement configurations [15]. Thus, BS and CS require a higher-resolution A/D converter than PA and RS to acquire an image with the same worst-case quantization distortion.
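Plugging in representative numbers makes the bit-depth penalty concrete (the numbers here are our own choices: a 256 × 256 image, B = 8 bits for PA/RS, and CN = 10 from the upper end of the empirical range in [15]):

```python
import math

N = 256 * 256   # number of image pixels
B = 8           # A/D bits per measurement for PA and RS
C_N = 10        # empirical noise amplification constant, upper end of [15]

bits_bs = B + math.log2(N)                      # BS needs B + log2(N) bits
bits_cs = bits_bs + math.ceil(math.log2(C_N))   # CS needs up to log2(C_N) more

print(bits_bs, bits_cs)  # 24.0 28.0
```

That is, matching an 8-bit pixel array on a 256 × 256 image calls for roughly a 24-bit converter for BS and up to about 28 bits for CS.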
[FIG3] Frame 32 from a reconstructed video sequence using (top row) M = 20,000 and (bottom row) M = 50,000 measurements (simulation from [10]). (a) Original frame of an N = 64 × 64 × 64 video of a disk simultaneously dilating and translating. (b) Frame-by-frame 2-D measurements + frame-by-frame 2-D reconstruction; MSE = 3.63 and 0.82, respectively. (c) Frame-by-frame 2-D measurements + joint 3-D reconstruction; MSE = 0.99 and 0.24, respectively. (d) Joint 3-D measurements + joint 3-D reconstruction; MSE = 0.76 and 0.18, respectively.

The DMD micro-mirrors in their lab’s TI DMD 1100 developer’s kit (Tyrex Services Group Ltd., http://www.tyrexservices.com) and accessory light modulator package (ALP, ViALUX GmbH, http://www.vialux.de) form a pixel array of size 1024 × 768.