
Showing papers by "Massachusetts Institute of Technology" published in 1997


Journal ArticleDOI
14 Mar 1997-Science
TL;DR: Findings in this work indicate that the fluctuating output of primate dopaminergic neurons, which apparently signals changes or errors in the predictions of future salient and rewarding events, can be understood through quantitative theories of adaptive optimizing control.
Abstract: The capacity to predict future events permits a creature to detect, model, and manipulate the causal structure of its interactions with its environment. Behavioral experiments suggest that learning is driven by changes in the expectations about future salient events such as rewards and punishments. Physiological work has recently complemented these studies by identifying dopaminergic neurons in the primate whose fluctuating output apparently signals changes or errors in the predictions of future salient and rewarding events. Taken together, these findings can be understood through quantitative theories of adaptive optimizing control.
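The quantitative theories referred to here are temporal-difference (TD) models of reward prediction. As a minimal sketch in standard TD notation (symbols are ours, not quoted from the paper), the prediction error that the dopamine signal is thought to resemble, and the value update it drives, are

\[
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t,
\]

where r_t is the reward, V the learned prediction of future reward, γ a discount factor, and α a learning rate; δ_t is positive for unexpected rewards and negative for omitted ones.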

8,163 citations


Journal ArticleDOI
TL;DR: Measuring Business Excellence revisits this now-landmark work to review its continuing relevance to the aspirant learning organization, focusing on the cultural and structural issues such organizations must confront in order to acquire the flexibility and responsiveness to learn.
Abstract: Learning is now widely accepted as the currency of survival in an era of constant change. Many businesses, however, are struggling to learn how to learn. The cultural and structural issues they need to confront in order to acquire the flexibility and responsiveness to learn were articulated in 1990 in The Fifth Discipline by Peter M Senge of Massachusetts Institute of Technology. Measuring Business Excellence revisits this now landmark work to review its continuing relevance to the aspirant learning organization.

7,301 citations


Journal ArticleDOI
TL;DR: In this article, the first observation of single-molecule Raman scattering was reported: spectra of a single crystal violet molecule in aqueous colloidal silver solution were measured using a one-second collection time and about 2 × 10⁵ W/cm² of nonresonant near-infrared excitation.
Abstract: By exploiting the extremely large effective cross sections (10⁻¹⁷–10⁻¹⁶ cm²/molecule) available from surface-enhanced Raman scattering (SERS), we achieved the first observation of single-molecule Raman scattering. Measured spectra of a single crystal violet molecule in aqueous colloidal silver solution using one-second collection time and about 2 × 10⁵ W/cm² nonresonant near-infrared excitation show a clear "fingerprint" of its Raman features between 700 and 1700 cm⁻¹. Spectra observed in a time sequence for an average of 0.6 dye molecules in the probed volume exhibited the expected Poisson distribution for actually measuring 0, 1, 2, or 3 molecules.
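The Poisson claim is easy to check directly. For a mean occupancy of λ = 0.6 molecules in the probed volume,

\[
P(n) = \frac{\lambda^n e^{-\lambda}}{n!}
\quad\Rightarrow\quad
P(0) \approx 0.55,\;\; P(1) \approx 0.33,\;\; P(2) \approx 0.10,\;\; P(3) \approx 0.02,
\]

so roughly half of the recorded spectra should show no molecule, a third exactly one, and progressively fewer two or three.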

6,454 citations


Book
01 Jan 1997
TL;DR: Key issues in "affective computing," computing that relates to, arises from, or influences emotions, are presented, and new applications are described for computer-assisted learning, perceptual information retrieval, arts and entertainment, and human health and interaction.
Abstract: Computers are beginning to acquire the ability to express and recognize affect, and may soon be given the ability to "have emotions." The essential role of emotion in both human cognition and perception, as demonstrated by recent neurological studies, indicates that affective computers should not only provide better performance in assisting humans, but also might enhance computers' abilities to make decisions. This paper presents and discusses key issues in "affective computing," computing that relates to, arises from, or influences emotions. Models are suggested for computer recognition of human emotion, and new applications are presented for computer-assisted learning, perceptual information retrieval, arts and entertainment, and human health and interaction. Affective computing, coupled with new wearable computers, will also provide the ability to gather new data necessary for advances in emotion and cognition theory. "Nothing in life is to be feared. It is only to be understood." – Marie Curie. Emotions have a stigma in science; they are believed to be inherently non-scientific. Scientific principles are derived from rational thought, logical arguments, testable hypotheses, and repeatable experiments. There is room alongside science for "non-interfering" emotions such as those involved in curiosity, frustration, and the pleasure of discovery. In fact, much scientific research has been prompted by fear. Nonetheless, the role of emotions is marginalized at best. Why bring "emotion" or "affect" into any of the deliberate tools of science? Moreover, shouldn't it be completely avoided when considering properties to design into computers? After all, computers control significant parts of our lives – the phone system, the stock market, nuclear power plants, jet landings, and more. Who wants a computer to be able to "feel angry" at them? To feel contempt for any living thing? In this essay I will submit for discussion a set of ideas on what I call "affective computing," computing that relates to, arises from, or influences emotions. This will need some further clarification, which I shall attempt below. I should say up front that I am not proposing the pursuit of computerized cingulotomies (the making of small wounds in the ridge of the limbic system known as the cingulate gyrus, a surgical procedure to aid severely depressed patients) or even the business of building "emotional computers." Nor will I propose answers to the difficult and intriguing questions, "…

5,700 citations


Journal ArticleDOI
TL;DR: In this paper, a synthesis of highly luminescent (CdSe)ZnS composite quantum dots with CdSe cores ranging in diameter from 23 to 55 Å was reported.
Abstract: We report a synthesis of highly luminescent (CdSe)ZnS composite quantum dots with CdSe cores ranging in diameter from 23 to 55 Å. The narrow photoluminescence (fwhm ≤ 40 nm) from these composite dots spans most of the visible spectrum from blue through red with quantum yields of 30−50% at room temperature. We characterize these materials using a range of optical and structural techniques. Optical absorption and photoluminescence spectroscopies probe the effect of ZnS passivation on the electronic structure of the dots. We use a combination of wavelength dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, small and wide angle X-ray scattering, and transmission electron microscopy to analyze the composite dots and determine their chemical composition, average size, size distribution, shape, and internal structure. Using a simple effective mass theory, we model the energy shift for the first excited state for (CdSe)ZnS and (CdSe)CdS dots with varying shell thickness. Finally, we characterize the...
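As a rough guide to the effective-mass modeling mentioned at the end, the first excited state of a dot of radius R is often approximated by the textbook Brus expression (a standard form, not necessarily the paper's exact model):

\[
E(R) \approx E_g + \frac{\hbar^2\pi^2}{2R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right) - \frac{1.8\,e^2}{4\pi\varepsilon_0\varepsilon R},
\]

where E_g is the bulk band gap, m_e* and m_h* the electron and hole effective masses, and ε the semiconductor dielectric constant; confinement raises the transition energy as R shrinks, which is why smaller cores emit bluer light.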

4,293 citations


Journal ArticleDOI
TL;DR: Pfinder is a real-time system for tracking people and interpreting their behavior that uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions.
Abstract: Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.

4,280 citations


Proceedings ArticleDOI
27 Mar 1997
TL;DR: Tangible Bits allows users to "grasp & manipulate" bits at the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces, and uses ambient media for background awareness.
Abstract: This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits." Tangible Bits allows users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems (the metaDESK, transBOARD, and ambientROOM) to identify underlying research issues.

3,885 citations


Journal ArticleDOI
TL;DR: In this article, the authors study an incentive model of financial intermediation in which firms as well as intermediaries are capital constrained, and analyze how the distribution of wealth across firms, intermediaries, and uninformed investors affects investment, interest rates, and the intensity of monitoring.
Abstract: We study an incentive model of financial intermediation in which firms as well as intermediaries are capital constrained. We analyze how the distribution of wealth across firms, intermediaries, and uninformed investors affects investment, interest rates, and the intensity of monitoring. We show that all forms of capital tightening (a credit crunch, a collateral squeeze, or a savings squeeze) hit poorly capitalized firms the hardest, but that interest rate effects and the intensity of monitoring will depend on relative changes in the various components of capital. The predictions of the model are broadly consistent with the lending patterns observed during the recent financial crises. I. INTRODUCTION: During the late 1980s and early 1990s several OECD countries appeared to be suffering from a credit crunch. Higher interest rates reduced cash flows and pushed down asset prices, weakening the balance sheets of firms. Loan losses and lower asset prices (particularly in real estate) ate significantly into the equity of the banking sector, causing banks to pull back on their lending and to increase interest rate spreads. The credit crunch hit small, collateral-poor firms the hardest. Larger firms were less affected as they could either renegotiate their loans or go directly to the commercial paper or bond markets. Scandinavia seems to have been most severely hit by the credit crunch. The banking sectors of Sweden, Norway, and Finland all had to be rescued by their governments at a very high cost.

3,823 citations


Book
01 Jan 1997
TL;DR: Standard Equipment; Thinking Machines; Revenge of the Nerds; The Mind's Eye; Good Ideas; Hotheads; Family Values; The Meaning of Life.

3,799 citations


Journal ArticleDOI
TL;DR: A new information-theoretic approach is presented for finding the pose of an object in an image; it works well in domains where edge or gradient-magnitude based methods have difficulty, yet is more robust than traditional correlation.
Abstract: A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and may foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied here the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.
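At its core, the method maximizes the mutual information between model and image intensities. A minimal histogram-based MI estimator (a simplified stand-in for the paper's stochastic-approximation scheme; the function and variable names are ours) looks like this:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of the mutual information between two
    equally sized intensity images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability table
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy check: MI is high for aligned (identical) images, near zero otherwise.
rng = np.random.default_rng(0)
model = rng.random((64, 64))
print(mutual_information(model, model))                  # high
print(mutual_information(model, rng.random((64, 64))))   # near 0
```

A registration loop would repeatedly transform the model by a candidate pose and climb the gradient of this score.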

3,584 citations


Journal ArticleDOI
TL;DR: In 1924 Bose derived the Planck law for black-body radiation by treating photons as a gas of identical particles, as discussed by the authors; Einstein generalized this theory and predicted the phenomenon now known as Bose-Einstein condensation (BEC).
Abstract: In 1924 the Indian physicist Satyendra Nath Bose sent Einstein a paper in which he derived the Planck law for black-body radiation by treating the photons as a gas of identical particles. Einstein generalized Bose's theory to an ideal gas of identical atoms or molecules for which the number of particles is conserved and, in the same year, predicted that at sufficiently low temperatures the particles would become locked together in the lowest quantum state of the system. We now know that this phenomenon, called Bose-Einstein condensation (BEC), only happens for "bosons" – particles with a total spin that is an integer multiple of ħ, the Planck constant divided by 2π.
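For reference, the condensation temperature implied by Einstein's argument for a uniform ideal gas is the standard textbook result (not quoted from this article):

\[
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}, \qquad \zeta(3/2) \approx 2.612,
\]

where n is the particle number density and m the particle mass: the colder and denser the gas, the closer it is to condensing.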

Journal ArticleDOI
13 Mar 1997-Nature
TL;DR: In this article, the authors describe photonic crystals, materials patterned with a periodicity in dielectric constant that creates a range of 'forbidden' frequencies called a photonic bandgap.
Abstract: Photonic crystals are materials patterned with a periodicity in dielectric constant, which can create a range of 'forbidden' frequencies called a photonic bandgap. Photons with energies lying in the bandgap cannot propagate through the medium. This provides the opportunity to shape and mould the flow of light for photonic information technology.

Journal ArticleDOI
TL;DR: Three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem are described and compared, giving 89% correct predictions on a musk odor prediction task.
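The multiple-instance setting itself is simple to state: a "bag" of feature vectors is labeled positive iff at least one instance falls inside the learned axis-parallel rectangle. A toy sketch of that decision rule (illustrative only, not the learning algorithms compared in the paper; the names are ours):

```python
import numpy as np

def bag_is_positive(bag, lo, hi):
    """Multiple-instance rule: positive iff some instance lies inside
    the axis-parallel rectangle [lo, hi] in every dimension."""
    inside = np.all((bag >= lo) & (bag <= hi), axis=1)
    return bool(inside.any())

# Toy usage: two 2-D instances, one of which lies inside the box.
bag = np.array([[0.2, 0.9], [0.5, 0.4]])
print(bag_is_positive(bag, lo=np.array([0.4, 0.3]), hi=np.array([0.6, 0.5])))  # True
```

Learning then amounts to choosing the rectangle so that bag-level labels, not instance labels, come out right.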

Proceedings ArticleDOI
17 Jun 1997
TL;DR: A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented; its feasibility is demonstrated on a face detection problem involving a data set of 50,000 data points.
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVM's over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
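The "linearly constrained quadratic programming problem" in question is the standard SVM dual (written here in generic notation):

\[
\max_{\alpha}\;\sum_i \alpha_i \;-\; \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j\,y_i y_j\,K(x_i, x_j)
\qquad \text{s.t.}\quad \sum_i \alpha_i y_i = 0,\;\; 0 \le \alpha_i \le C.
\]

Its Hessian is dense, which is why memory grows with the square of the number of data points; the decomposition repeatedly solves this problem restricted to a small working set, using the optimality (KKT) conditions on the remaining points both to pick the next working set and to decide when to stop.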

Proceedings ArticleDOI
04 May 1997
TL;DR: A family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network, based on a special kind of hashing that is called consistent hashing.
Abstract: We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems.
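The core idea fits in a few lines. A minimal hash ring (an illustrative sketch; the paper's construction also places multiple virtual copies of each cache on the ring for load balance):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

class ConsistentHashRing:
    """Each cache owns the arc of hash space up to its point on the
    ring; a key maps to the first cache point at or after its hash."""

    def __init__(self, caches):
        self._ring = sorted((_hash(c), c) for c in caches)

    def lookup(self, key: str) -> str:
        points = [p for p, _ in self._ring]
        i = bisect.bisect_right(points, _hash(key)) % len(self._ring)
        return self._ring[i][1]

# Adding or removing a cache remaps only the keys on the adjacent arc,
# which is the "changes minimally" property described above.
ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.lookup("/some/url"))
```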

Journal ArticleDOI
22 Aug 1997-Cell
TL;DR: The cloning of a human gene, hEST2, that shares significant sequence similarity with the telomerase catalytic subunit genes of lower eukaryotes is reported, suggesting that the induction of hEST2 mRNA expression is required for the telomerase activation that occurs during cellular immortalization and tumor progression.

Journal ArticleDOI
10 Jan 1997-Science
TL;DR: In this paper, the Raman spectra of single wall carbon nanotubes (SWNTs) were studied using laser excitation wavelengths in the range from 514.5 to 1320 nanometers.
Abstract: Single wall carbon nanotubes (SWNTs) that are found as close-packed arrays in crystalline ropes have been studied by using Raman scattering techniques with laser excitation wavelengths in the range from 514.5 to 1320 nanometers. Numerous Raman peaks were observed and identified with vibrational modes of armchair symmetry (n, n) SWNTs. The Raman spectra are in good agreement with lattice dynamics calculations based on C-C force constants used to fit the two-dimensional, experimental phonon dispersion of a single graphene sheet. Calculated intensities from a nonresonant, bond polarizability model optimized for sp2 carbon are also in qualitative agreement with the Raman data, although a resonant Raman scattering process is also taking place. This resonance results from the one-dimensional quantum confinement of the electrons in the nanotube.

Journal ArticleDOI
05 Sep 1997-Cell
TL;DR: A Candida albicans cph1/cph1 efg1/efg1 double mutant, locked in the yeast form, is avirulent in a mouse model.

Journal ArticleDOI
TL;DR: In this article, a method of coupling of modes in time was proposed to simplify both the analysis and filter synthesis aspects of these devices; the response of filters comprised of an arbitrarily large number of resonators may be written down by inspection, as a continued fraction.
Abstract: Microring resonators side coupled to signal waveguides provide compact, narrow band, and large free spectral range optical channel dropping filters. Higher order filters with improved passband characteristics and larger out-of-band signal rejection are realized through the coupling of multiple rings. The analysis of these devices is approached by the novel method of coupling of modes in time. The response of filters comprised of an arbitrarily large number of resonators may be written down by inspection, as a continued fraction. This approach simplifies both the analysis and filter synthesis aspects of these devices.
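In the coupling-of-modes-in-time picture (standard notation of this literature; the symbols here are ours), a single ring with mode amplitude a, resonance frequency ω₀, and total decay rate 1/τ driven by an input wave s_in obeys

\[
\frac{da}{dt} = \left(j\omega_0 - \frac{1}{\tau}\right)a + \kappa\, s_{\mathrm{in}},
\]

which yields a Lorentzian drop-port response centered at ω₀ with linewidth set by 1/τ; chaining one such equation per ring is what lets the multi-ring response be written down by inspection as a continued fraction.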

Journal ArticleDOI
TL;DR: A simple mechanistic model, involving a nucleation-dependent polymerization, has emerged for both processes; it dictates that aggregation depends on protein concentration and time.
Abstract: Ordered protein aggregation in the brain is a hallmark of Alzheimer's disease and scrapie. The disease-specific amyloid fibrils comprise primarily a single protein, amyloid beta, in Alzheimer's disease, and the prion protein in scrapie. These proteins can be induced to form aggregates in vitro that are indistinguishable from brain-derived fibrils. Consequently, much effort has been invested in the development of in vitro model systems to study the details of the aggregation processes and the effects of endogenous molecules that have been implicated in disease. Selected studies of this type are reviewed herein. A simple mechanistic model has emerged for both processes that involves a nucleation-dependent polymerization. This mechanism dictates that aggregation is dependent on protein concentration and time. Furthermore, amyloid formation can be seeded by a preformed fibril. The physiological consequences of this mechanism are discussed.

Journal ArticleDOI
TL;DR: An unsupervised technique for visual learning is presented, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition and is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects.
Abstract: We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). Those probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.
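A stripped-down version of the unimodal case, PCA subspace plus a Gaussian likelihood within it, can be sketched as follows (a simplification that ignores the paper's treatment of the residual, out-of-subspace component; the names are ours):

```python
import numpy as np

def fit_eigenspace_gaussian(X, k):
    """Fit a Gaussian in the k-dimensional principal subspace of X."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    basis = vt[:k]                      # top-k eigenvectors
    var = (s[:k] ** 2) / (len(X) - 1)   # eigenvalues of the covariance
    return mu, basis, var

def log_likelihood(x, mu, basis, var):
    """Gaussian log-likelihood of x inside the principal subspace."""
    y = basis @ (x - mu)                # project onto the eigenspace
    return -0.5 * np.sum(y**2 / var + np.log(2 * np.pi * var))

# Toy usage: a training-like point scores higher than an outlier.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
mu, B, v = fit_eigenspace_gaussian(X, k=3)
print(log_likelihood(X[0], mu, B, v) > log_likelihood(X[0] + 5.0, mu, B, v))  # True
```

Detection then thresholds this likelihood over image windows; the mixture-of-Gaussians variant replaces the single Gaussian with a weighted sum of such terms.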

Journal ArticleDOI
TL;DR: A stability theorem for systems described by IQCs is presented that covers classical passivity/dissipativity arguments but simplifies the use of multipliers and the treatment of causality.
Abstract: This paper introduces a unified approach to robustness analysis with respect to nonlinearities, time variations, and uncertain parameters. From an original idea by Yakubovich (1967), the approach has been developed under a combination of influences from the Western and Russian traditions of control theory. It is shown how a complex system can be described, using integral quadratic constraints (IQC) for its elementary components. A stability theorem for systems described by IQCs is presented that covers classical passivity/dissipativity arguments but simplifies the use of multipliers and the treatment of causality. A systematic computational approach is described, and relations to other methods of stability analysis are discussed. Last, but not least, the paper contains a summarizing list of IQCs for important types of system components.
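For reference, two square-integrable signals v and w satisfy the IQC defined by a multiplier Π when, in the frequency-domain form used in this literature,

\[
\int_{-\infty}^{\infty}
\begin{bmatrix}\hat v(j\omega)\\ \hat w(j\omega)\end{bmatrix}^{*}
\Pi(j\omega)
\begin{bmatrix}\hat v(j\omega)\\ \hat w(j\omega)\end{bmatrix} d\omega \;\ge\; 0,
\]

where hats denote Fourier transforms; different choices of Π recover passivity, small-gain, and classical multiplier arguments as special cases.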

Journal ArticleDOI
TL;DR: The authors compare the results of Eulerian hydrodynamic simulations of cluster formation against virial scaling relations between four bulk quantities: the cluster mass, the dark matter velocity dispersion, the gas temperature and the cluster luminosity.
Abstract: We compare the results of Eulerian hydrodynamic simulations of cluster formation against virial scaling relations between four bulk quantities: the cluster mass, the dark matter velocity dispersion, the gas temperature and the cluster luminosity. The comparison is made for a large number of clusters at a range of redshifts in three different cosmological models (CHDM, CDM and OCDM). We find that the analytic formulae provide a good description of the relations between three of the four numerical quantities. The fourth (luminosity) also agrees once we introduce a procedure to correct for the fixed numerical resolution. We also compute the normalizations for the virial relations and compare extensively to the existing literature, finding remarkably good agreement. The Press-Schechter prescription is calibrated with the simulations, again finding results consistent with other authors. We also examine related issues such as the size of the scatter in the virial relations, the effect of metallicity with a fixed pass-band, and the structure of the halos. All of this is done in order to establish a firm groundwork for the use of clusters as cosmological probes. Implications for the models are briefly discussed.
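Schematically, the virial relations being tested follow from dimensional analysis of a halo that virializes at redshift z (our sketch, not the paper's exact normalizations):

\[
\sigma^2 \sim \frac{GM}{R_{\mathrm{vir}}},\qquad
k_B T \sim \mu m_p\,\sigma^2,\qquad
R_{\mathrm{vir}} \propto \left(\frac{M}{\rho(z)}\right)^{1/3}
\;\Rightarrow\;
T \propto \sigma^2 \propto M^{2/3}(1+z),
\]

since the background density ρ(z) scales as (1+z)³.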

Journal ArticleDOI
TL;DR: It is illustrated how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them, and how these active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner.
Abstract: Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities. They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner.

Journal ArticleDOI
TL;DR: In this article, the authors propose a theory of development that links the degree of market incompleteness to capital accumulation and growth. They show that the decentralized equilibrium is inefficient because individuals do not take into account their impact on others' diversification opportunities, and that the typical development pattern will consist of a lengthy period of primitive accumulation with highly variable output, followed by takeoff and financial deepening and, finally, steady growth.
Abstract: This paper offers a theory of development that links the degree of market incompleteness to capital accumulation and growth. At early stages of development, the presence of indivisible projects limits the degree of risk spreading (diversification) that the economy can achieve. The desire to avoid highly risky investments slows down capital accumulation, and the inability to diversify idiosyncratic risk introduces a large amount of uncertainty in the growth process. The typical development pattern will consist of a lengthy period of “primitive accumulation” with highly variable output, followed by takeoff and financial deepening and, finally, steady growth. “Lucky” countries will spend relatively less time in the primitive accumulation stage and develop faster. Although all agents are price takers and there are no technological spillovers, the decentralized equilibrium is inefficient because individuals do not take into account their impact on others' diversification opportunities. We also show that our results generalize to economies with international capital flows.

Journal ArticleDOI
TL;DR: In this article, a gauge-invariant decomposition of the nucleon spin into quark helicity, quark orbital, and gluon contributions is proposed, and the total quark contribution is shown to be measurable through virtual Compton scattering in a special kinematic region where single quark scattering dominates.
Abstract: I introduce a gauge-invariant decomposition of the nucleon spin into quark helicity, quark orbital, and gluon contributions. The total quark (and hence the quark orbital) contribution is shown to be measurable through virtual Compton scattering in a special kinematic region where single quark scattering dominates. This deeply virtual Compton scattering has much potential to unravel the quark and gluon structure of the nucleon.
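In standard notation (ΔΣ for the quark helicity contribution, L_q for quark orbital angular momentum, J_g for the total gluon contribution), the decomposition reads

\[
\frac{1}{2} = J_q + J_g, \qquad J_q = \frac{1}{2}\Delta\Sigma + L_q,
\]

so a measurement of the total quark contribution J_q, combined with the helicity ΔΣ known from polarized deep-inelastic scattering, pins down the quark orbital part L_q.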

Journal ArticleDOI
TL;DR: In this article, the authors combine Simon's conception of relational contracts with Grossman and Hart's focus on asset ownership to analyze whether transactions should occur under vertical integration or non-integration, and with or without self-enforcing relational contracts.
Abstract: We combine Simon's conception of relational contracts with Grossman and Hart's focus on asset ownership. We analyze whether transactions should occur under vertical integration or non-integration, and with or without self-enforcing relational contracts. These four models allow us to re-run the horse race Coase proposed between markets and firms as alternative governance structures, but with four horses rather than two. We find that efficient ownership patterns are determined in part by the relational contracts that ownership facilitates, that vertical integration is an efficient response to widely varying supply prices, and that high-powered incentives create bigger reneging temptations under integration than under non-integration. Note: this paper was formerly titled "Implicit Contracts and the Theory of the Firm"

Journal ArticleDOI
06 Feb 1997-Nature
TL;DR: It is shown that changes in cell migration speed resulting from three separate variables—substratum ligand level, cell integrin expression level, and integrin–ligand binding affinity—are all quantitatively predictable through the changes they cause in a single unifying parameter: short-term cell– substratum adhesion strength.
Abstract: Migration of cells in higher organisms is mediated by adhesion receptors, such as integrins, that link the cell to extracellular-matrix ligands, transmitting forces and signals necessary for locomotion. Whether cells will migrate or not on a given substratum, and also their speed, depends on several variables related to integrin-ligand interactions, including ligand levels, integrin levels, and integrin-ligand binding affinities. These and other factors affect the way molecular systems integrate to effect and regulate cell migration. Here we show that changes in cell migration speed resulting from three separate variables-substratum ligand level, cell integrin expression level, and integrin-ligand binding affinity-are all quantitatively predictable through the changes they cause in a single unifying parameter: short-term cell-substratum adhesion strength. This finding is consistent with predictions of a mathematical model for cell migration. The ligand concentration promoting maximum migration speed decreases reciprocally as integrin expression increases. Increases in integrin-ligand affinity similarly result in maximal migration at reciprocally lower ligand concentrations. The maximum speed attainable, however, remains unchanged as ligand concentration, integrin expression, or integrin-ligand affinity vary, suggesting that integrin coupling with intracellular motors remains unaltered.
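The reciprocity results can be condensed into one schematic relation (our notation, paraphrasing the abstract's claims): if short-term adhesion strength A scales with the product of ligand density [L], integrin expression [I], and binding affinity K, and migration speed peaks at a fixed optimal strength A*, then

\[
A \propto [L]\,[I]\,K, \qquad A = A^{*} \;\Rightarrow\; [L]_{\mathrm{opt}} \propto \frac{1}{[I]\,K},
\]

with the peak speed itself independent of which variable moved.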

Journal ArticleDOI
26 Dec 1997-Cell
TL;DR: It is shown that nucleolar changes in aging yeast mother cells are likely due to the accumulation of extrachromosomal rDNA circles (ERCs) in old cells and that, in fact, ERCs cause aging.

Journal ArticleDOI
27 Jun 1997-Science
TL;DR: Optical coherence tomography was adapted to allow high-speed visualization of tissue in a living animal with a catheter-endoscope 1 millimeter in diameter, and was used to obtain cross-sectional images of the rabbit gastrointestinal and respiratory tracts at 10-micrometer resolution.
Abstract: Current medical imaging technologies allow visualization of tissue anatomy in the human body at resolutions ranging from 100 micrometers to 1 millimeter. These technologies are generally not sensitive enough to detect early-stage tissue abnormalities associated with diseases such as cancer and atherosclerosis, which require micrometer-scale resolution. Here, optical coherence tomography was adapted to allow high-speed visualization of tissue in a living animal with a catheter-endoscope 1 millimeter in diameter. This method, referred to as "optical biopsy," was used to obtain cross-sectional images of the rabbit gastrointestinal and respiratory tracts at 10-micrometer resolution.