
Showing papers on "Image resolution published in 1996"


Book
01 Jan 1996
TL;DR: In this book, the fundamentals of transmission electron microscopy (TEM) are presented, covering electron scattering and diffraction (parallel-beam, Kikuchi, and convergent-beam/CBED techniques), imaging modes from amplitude and phase contrast through high-resolution TEM, and X-ray and electron energy-loss spectrometry.
Abstract: Basics.- The Transmission Electron Microscope.- Scattering and Diffraction.- Elastic Scattering.- Inelastic Scattering and Beam Damage.- Electron Sources.- Lenses, Apertures, and Resolution.- How to 'See' Electrons.- Pumps and Holders.- The Instrument.- Specimen Preparation.- Diffraction.- Diffraction in TEM.- Thinking in Reciprocal Space.- Diffracted Beams.- Bloch Waves.- Dispersion Surfaces.- Diffraction from Crystals.- Diffraction from Small Volumes.- Obtaining and Indexing Parallel-Beam Diffraction Patterns.- Kikuchi Diffraction.- Obtaining CBED Patterns.- Using Convergent-Beam Techniques.- Imaging.- Amplitude Contrast.- Phase-Contrast Images.- Thickness and Bending Effects.- Planar Defects.- Imaging Strain Fields.- Weak-Beam Dark-Field Microscopy.- High-Resolution TEM.- Other Imaging Techniques.- Image Simulation.- Processing and Quantifying Images.- Spectrometry.- X-ray Spectrometry.- X-ray Spectra and Images.- Qualitative X-ray Analysis and Imaging.- Quantitative X-ray Analysis.- Spatial Resolution and Minimum Detection.- Electron Energy-Loss Spectrometers and Filters.- Low-Loss and No-Loss Spectra and Images.- High Energy-Loss Spectra and Images.- Fine Structure and Finer Details.

2,679 citations


Journal ArticleDOI
TL;DR: A novel observation model based on motion compensated subsampling is proposed for a video sequence and Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence.
Abstract: The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses the use of both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subpixel camera pan show dramatic visual and quantitative improvements over bilinear, cubic B-spline, and Bayesian single frame interpolations. Visual and quantitative improvements are also shown for an image sequence containing objects moving with independent trajectories. Finally, the video frame extraction algorithm is used for the motion-compensated scan conversion of interlaced video data, with a visual comparison to the resolution enhancement obtained from progressively scanned frames.
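
A minimal sketch of the kind of observation model and MAP criterion described above, in generic super-resolution notation (the symbols are assumptions for illustration, not taken from the paper): each low-resolution frame is a motion-compensated, blurred and subsampled version of the high-resolution still plus noise, and the estimate balances data fit against a discontinuity-preserving prior.

```latex
% z: high-resolution still, y_k: k-th low-resolution frame, M_k: motion compensation,
% H: sensor blur, D: subsampling, n_k: noise, U(z): discontinuity-preserving prior (assumed names)
y_k = D\,H\,M_k\,z + n_k, \qquad k = 1,\dots,K,
\qquad
\hat{z}_{\mathrm{MAP}} = \arg\min_{z}\; \sum_{k=1}^{K} \frac{\|y_k - D H M_k z\|^{2}}{2\sigma^{2}} \;+\; \lambda\,U(z).
```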

1,058 citations


Journal ArticleDOI
02 Nov 1996
TL;DR: MicroPET as discussed by the authors is the first PET scanner to incorporate the new scintillator LSO and to our knowledge is the highest resolution multi-ring PET scanner currently in existence, which consists of a ring of 30 position sensitive scintillation detectors, each with an 8×8 array of small lutetium oxyorthosilicate (LSO) crystals coupled via optical fibers to a multi-channel photomultiplier tube.
Abstract: MicroPET is a high resolution positron emission tomography (PET) scanner designed for imaging small laboratory animals. It consists of a ring of 30 position-sensitive scintillation detectors, each with an 8×8 array of small lutetium oxyorthosilicate (LSO) crystals coupled via optical fibers to a multi-channel photomultiplier tube. The detectors have an intrinsic resolution averaging 1.68 mm, an energy resolution between 15 and 25% and 2.4 ns timing resolution at 511 keV. The detector ring diameter of microPET is 17.2 cm with an imaging field of view of 112 mm transaxially by 18 mm axially. The scanner has no septa and operates exclusively in 3D mode. Reconstructed image resolution 1 cm from the center of the scanner is 2.0 mm and virtually isotropic, yielding a volume resolution of 8 mm³. For comparison, the volume resolution of state-of-the-art clinical PET systems is in the range of 50-75 mm³. Initial images of phantoms have been acquired and are reported. A computer controlled bed is under construction and will incorporate a small wobble motion to improve spatial sampling. This is projected to further enhance spatial resolution. MicroPET is the first PET scanner to incorporate the new scintillator LSO and to our knowledge is the highest resolution multi-ring PET scanner currently in existence.
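
The quoted volume resolution is just the cube of the nearly isotropic reconstructed resolution:

```latex
(2.0\ \mathrm{mm})^{3} = 8\ \mathrm{mm}^{3},
\qquad\text{versus } 50\text{--}75\ \mathrm{mm}^{3}\ \text{(roughly } (3.7\text{--}4.2\ \mathrm{mm})^{3}\text{) for clinical systems.}
```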

578 citations


Journal ArticleDOI
TL;DR: The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems, which leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution.
Abstract: This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction, the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
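
A compact statement of the quantity analyzed above, in standard penalized-likelihood notation (assumed here, not quoted from the paper): for system matrix A, statistical weighting W and quadratic penalty with Hessian R, the linearized local impulse response at pixel j is

```latex
% e_j: j-th unit vector; beta: regularization parameter (assumed notation)
l^{\,j} \approx \left[ A^{\mathsf{T}} W A + \beta R \right]^{-1} A^{\mathsf{T}} W A\, e_j .
```

Because W depends on the measured data, this response varies with position even when A does not, which is the space-variance the modified penalty is designed to remove.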

520 citations


Journal ArticleDOI
TL;DR: In this paper, an Occam's inversion algorithm for crosshole resistivity data that uses a finite-element method forward solution is discussed, where the earth is discretized into a series of parameter blocks, each containing one or more elements.
Abstract: An Occam's inversion algorithm for crosshole resistivity data that uses a finite-element method forward solution is discussed. For the inverse algorithm, the earth is discretized into a series of parameter blocks, each containing one or more elements. The Occam's inversion finds the smoothest 2-D model for which the Chi-squared statistic equals an a priori value. Synthetic model data are used to show the effects of noise and noise estimates on the resulting 2-D resistivity images. Resolution of the images decreases with increasing noise. The reconstructions are underdetermined so that at low noise levels the images converge to an asymptotic image, not the true geoelectrical section. If the estimated standard deviation is too low, the algorithm cannot achieve an adequate data fit, the resulting image becomes rough, and irregular artifacts start to appear. When the estimated standard deviation is larger than the correct value, the resolution decreases substantially (the image is too smooth). The same effects are demonstrated for field data from a site near Livermore, California. However, when the correct noise values are known, the Occam's results are independent of the discretization used. A case history of monitoring at an enhanced oil recovery site is used to illustrate problems in comparing successive images over time from a site where the noise level changes. In this case, changes in image resolution can be misinterpreted as actual geoelectrical changes. One solution to this problem is to perform smoothest, but non-Occam's, inversion on later data sets using parameters found from the background data set.
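
The Occam's criterion described above can be summarized as a constrained smoothness optimization (generic notation, assumed for illustration):

```latex
\min_{\mathbf{m}}\ \|\partial\,\mathbf{m}\|^{2}
\quad\text{subject to}\quad
\chi^{2}(\mathbf{m}) = \sum_{i}\frac{\bigl(d_i - F_i(\mathbf{m})\bigr)^{2}}{\sigma_i^{2}} = \chi^{2}_{*},
```

where m holds the block resistivities, ∂ is a roughness operator, F is the finite-element forward model, and the target χ²_* is set by the assumed noise σ_i. Overestimating σ_i permits a smoother (lower-resolution) image; underestimating it prevents an adequate data fit and produces the rough, artifact-laden images described above.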

459 citations


Journal ArticleDOI
TL;DR: Experiments show that using blobs in iterative reconstruction methods leads to substantial improvement in the reconstruction performance, based on visual quality and on quantitative measures, in comparison with the voxel case.
Abstract: Spherically symmetric volume elements with smooth tapering of the values near their boundaries are alternatives to the more conventional voxels for the construction of volume images in the computer. Their use, instead of voxels, introduces additional parameters which enable the user to control the shape of the volume element (blob) and consequently to control the characteristics of the images produced by iterative methods for reconstruction from projection data. For images composed of blobs, efficient algorithms have been designed for the projection and discrete back-projection operations, which are the crucial parts of iterative reconstruction methods. The authors have investigated the relationship between the values of the blob parameters and the properties of images represented by the blobs. Experiments show that using blobs in iterative reconstruction methods leads to substantial improvement in the reconstruction performance, based on visual quality and on quantitative measures, in comparison with the voxel case. The images reconstructed using appropriately chosen blobs are characterized by less image noise for both noiseless data and noisy data, without loss of image resolution.
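
The "blobs" referred to above are usually taken to be generalized Kaiser-Bessel window functions; that specific form is assumed in the sketch below rather than quoted from the paper:

```latex
% r: distance from blob center, a: support radius, m: smoothness order,
% alpha: taper parameter, I_m: modified Bessel function of order m
b_{m,a,\alpha}(r) =
\begin{cases}
\dfrac{1}{I_m(\alpha)}\Bigl[\sqrt{1-(r/a)^{2}}\,\Bigr]^{m} I_m\!\Bigl(\alpha\sqrt{1-(r/a)^{2}}\Bigr), & 0 \le r \le a,\\[4pt]
0, & r > a.
\end{cases}
```

The parameters a, m and α are the user-controlled shape parameters that trade image noise against resolution in the iterative reconstruction.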

325 citations


Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) approach to linearized image reconstruction using knowledge of the noise variance of the measurements and the covariance of the conductivity distribution has the advantage of an intuitive interpretation of the algorithm parameters as well as fast image reconstruction.
Abstract: Dynamic electrical impedance tomography (EIT) images changes in the conductivity distribution of a medium from low frequency electrical measurements made at electrodes on the medium surface. Reconstruction of the conductivity distribution is an under-determined and ill-posed problem, typically requiring either simplifying assumptions or regularization based on a priori knowledge. This paper presents a maximum a posteriori (MAP) approach to linearized image reconstruction using knowledge of the noise variance of the measurements and the covariance of the conductivity distribution. This approach has the advantage of an intuitive interpretation of the algorithm parameters as well as fast (near real time) image reconstruction. In order to compare this approach to existing algorithms, the authors develop figures of merit to measure the reconstructed image resolution, the noise amplification of the image reconstruction, and the fidelity of positioning in the image. Finally, the authors develop a communications systems approach to calculate the probability of detection of a conductivity contrast in the reconstructed image as a function of the measurement noise and the reconstruction algorithm used.
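
A minimal sketch of a linearized MAP reconstructor of the type described above (generic notation, assumed): with Jacobian J of the boundary measurements with respect to conductivity, measurement-noise covariance R_n and conductivity covariance R_σ, the conductivity change is estimated from the measurement change Δv as

```latex
\widehat{\Delta\sigma} \;=\; R_{\sigma} J^{\mathsf{T}} \bigl( J R_{\sigma} J^{\mathsf{T}} + R_{n} \bigr)^{-1} \Delta v .
```

Since the matrix being inverted has only the dimension of the measurement vector, the reconstruction collapses to a single precomputable matrix-vector product, which is what allows near-real-time imaging.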

273 citations


Journal ArticleDOI
TL;DR: A prototype focus range sensor has been developed that produces up to 512×480 depth estimates at 30 Hz with an average RMS error of 0.2%.
Abstract: Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth from defocus offers an effective solution to fast and dense range estimation. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems including recovery of textureless surfaces, precise blur estimation, and magnification variations caused by defocusing. Both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to maximize accuracy and spatial resolution in computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing, and computational elements of the depth from defocus system. Defocus invariant magnification is achieved by the use of an additional aperture in the imaging optics. A prototype focus range sensor has been developed that has a workspace of 1 cubic foot and produces up to 512×480 depth estimates at 30 Hz with an average RMS error of 0.2%. Several experimental results are included to demonstrate the performance of the sensor.
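
The geometry behind depth from defocus can be sketched with the standard thin-lens relation (general optics, not specific to this sensor): for focal length f, aperture diameter D and sensor plane at distance s behind the lens, a point at depth d produces a blur circle of diameter

```latex
b \;=\; D\, s\,\left|\;\frac{1}{f} - \frac{1}{d} - \frac{1}{s}\;\right|,
```

so estimating the relative blur between two images taken with different optical settings determines d; the narrow-band operator mentioned above is the blur estimator used for that step.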

271 citations


Patent
12 Sep 1996
TL;DR: In this paper, a digital camera, which captures images and transfers the captured images to a host computer, includes an image sensor exposed to image light for capturing the images and generating image signals; an A/D converter for converting the image signals into digitized image data; a digital interface for transferring the digitized image data to the host computer; and means for controlling the image sensor in at least two different camera configurations, each configuration including configuration information defining a plurality of camera parameters.
Abstract: A digital camera, which captures images and transfers the captured images to a host computer, includes an image sensor exposed to image light for capturing the images and generating image signals; an A/D converter for converting the image signals into digitized image data; a digital interface for transferring the digitized image data to the host computer; means for controlling the image sensor in at least two different camera configurations, each configuration including configuration information defining a plurality of camera parameters; and means for communicating at least part of the configuration information along with the digitized image data to the computer via the digital interface.

247 citations


Journal ArticleDOI
TL;DR: The MODIS Airborne Simulator as discussed by the authors was developed for measuring reflected solar and emitted thermal radiation in 50 narrowband channels between 0.55 and 14.2mm using an airborne scanning spectrometer.
Abstract: An airborne scanning spectrometer was developed for measuring reflected solar and emitted thermal radiation in 50 narrowband channels between 0.55 and 14.2 μm. The instrument provides multispectral images of outgoing radiation for purposes of developing and validating algorithms for the remote sensing of cloud, aerosol, water vapor, and surface properties from space. The spectrometer scans a swath width of 37 km, perpendicular to the aircraft flight track, with a 2.5-mrad instantaneous field of view. Images are thereby produced with a spatial resolution of 50 m at nadir from a nominal aircraft altitude of 20 km. Nineteen of the spectral bands correspond closely to comparable bands on the Moderate Resolution Imaging Spectroradiometer (MODIS), a facility instrument being developed for the Earth Observing System to be launched in the late 1990s. This paper describes the optical, mechanical, electrical, and data acquisition system design of the MODIS Airborne Simulator and presents some early results obtained from measurements acquired aboard the National Aeronautics and Space Administration ER-2 aircraft that illustrate the performance and quality of the data produced by this instrument.
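
The quoted ground resolution follows directly from the instantaneous field of view and the flight altitude:

```latex
2.5\ \mathrm{mrad} \times 20\ \mathrm{km} \;=\; 0.0025 \times 20\,000\ \mathrm{m} \;=\; 50\ \mathrm{m}\ \text{at nadir}.
```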

233 citations


Journal ArticleDOI
TL;DR: An algorithm is described to obtain the slope correction from a SAR interferogram, which also enables retrieval of the full scattering geometry, and demonstrates that the spatial resolution and calibration error are adequate for most applications.
Abstract: The brightness in a SAR image is affected by topographic height variations due to (1) the projection between ground and image coordinates, and (2) variations in backscattering coefficient with the local scattering geometry. This paper derives a new equation for (1), i.e. the radiometric slope correction, based on a calibration equation which is invariant under a coordinate transformation. An algorithm is described to obtain the slope correction from a SAR interferogram, which also enables retrieval of the full scattering geometry. Since the SAR image and interferogram are derived from the same data set, there is no need to match the image with the calibration data. There is also no need for phase unwrapping since the algorithm only uses the fringe frequencies. A maximum-likelihood estimator for the fringe frequency is analyzed and the algorithm is illustrated by processing ERS-1 SAR data. The example demonstrates that the spatial resolution and calibration error are adequate for most applications.

Patent
17 Jan 1996
TL;DR: In this paper, a robust system, adaptive to motion estimation accuracy, for creating a high resolution image from a sequence of lower resolution motion images produces a mapping transformation for each low resolution image to map pixels in each low-resolution image into locations in the high-resolution images.
Abstract: A robust system, adaptive to motion estimation accuracy, for creating a high resolution image from a sequence of lower resolution motion images produces a mapping transformation for each low resolution image to map pixels in each low resolution image into locations in the high resolution image. A combined point spread function (PSF) is computed for each pixel in each lower resolution image employing the mapping transformations provided that they describe accurate motion vectors. The high resolution image is generated from the lower resolution images employing the combined PSF's by projection onto convex sets (POCS), where sets and associated projections are defined only for those pixels whose motion vector estimates are accurate.

Journal ArticleDOI
TL;DR: In this article, a semiautomatic method for detecting the shoreline accurately and efficiently in ERS-1 SAR images is presented, aimed primarily at a particular application, namely the construction of a digital elevation model of an intertidal zone using SAR images and hydrodynamic model output, but could be carried over to other applications.
Abstract: Extraction of the shoreline in SAR images is a difficult task to perform using simple image processing operations such as grey-value thresholding, due to the presence of speckle and because the signal returned from the sea surface may be similar to that from the land. A semiautomatic method for detecting the shoreline accurately and efficiently in ERS-1 SAR images is presented. This is aimed primarily at a particular application, namely the construction of a digital elevation model of an intertidal zone using SAR images and hydrodynamic model output, but could be carried over to other applications. A coarse-fine resolution processing approach is employed, in which sea regions are first detected as regions of low edge density in a low resolution image, then image areas near the shoreline are subjected to more elaborate processing at high resolution using an active contour model. Over 90% of the shoreline detected by the automatic delineation process appear visually correct.

Journal ArticleDOI
TL;DR: A two-dimensional (2-D) prototype of a quasi real-time microwave tomographic system was constructed and was utilized to reconstruct images of physiologically active biological tissues such as an explanted canine perfused heart.
Abstract: Microwave tomographic imaging is one of the new technologies which has the potential for important applications in medicine. Microwave tomographically reconstructed images may potentially provide information about the physiological state of tissue as well as the anatomical structure of an organ. A two-dimensional (2-D) prototype of a quasi real-time microwave tomographic system was constructed. It was utilized to reconstruct images of physiologically active biological tissues such as an explanted canine perfused heart. The tomographic system consisted of 64 special antennae, divided into 32 emitters and 32 receivers which were electronically scanned. The cylindrical microwave chamber had an internal diameter of 360 mm and was filled with various solutions, including deionized water. The system operated on a frequency of 2.45 GHz. The polarization of the incident electromagnetic field was linear in the vertical direction. Total acquisition time was less than 500 ms. Both accurate and approximation methods of image reconstruction were used. Images of 2-D phantoms, canine hearts, and beating canine hearts have been achieved. In the worst-case situation when the 2-D diffraction model was used for an attempt to "slice" three-dimensional (3-D) object reconstruction, the authors still achieved spatial resolution of 1 to 2 cm and contrast resolution of 5%.


Journal ArticleDOI
TL;DR: In this article, a method for direct recording of Fresnel holograms on a charge-coupled device and their numerical reconstruction is described; since the limited spatial resolution of the CCD restricts the angle between reference and object waves, the object angle is optically reduced so that larger objects can be recorded without requiring a great distance between the object and the CCD target.
Abstract: Direct recording of Fresnel holograms on a charge-coupled device and their numerical reconstruction is possible if the maximum spatial frequency of the holographic microstructure is adapted to the spatial resolution of the detector array. The maximum spatial frequency is determined by the angle between the interfering waves. For standard CCDs with spatial resolutions of about 100 lines/mm, the angle between reference and object wave is limited to a few degrees. This limits the size of the objects to be recorded or requires a great distance between object and CCD target. A method is described in which the primary object angle is optically reduced so that objects with larger dimensions can be recorded. The principle is demonstrated for the example of deformation analysis. Two Fresnel holograms, which represent the undeformed and the deformed states of the object, are generated on a CCD target, stored electronically, and the wave fields are reconstructed numerically. The interference phase can be calculated directly from the digital holograms, without generating an interference pattern. As an application of this method, we present the transient deformation field of a plate that is loaded by an impact. © 1996 Society of Photo-Optical Instrumentation Engineers. Subject terms: digital holography; hologram interferometry; numerical hologram reconstruction; charge-coupled devices.
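
Numerical reconstruction of such holograms is commonly done with a discrete Fresnel transform; the formulation below is a standard one, stated here as an assumption rather than quoted from the paper (sign conventions for the quadratic phase factors vary):

```latex
% h(k,l): sampled hologram, E_R: digitized reference wave, d: reconstruction distance,
% lambda: wavelength, Delta x, Delta y: pixel pitch, N x N pixels (assumed notation)
\Gamma(m,n) \;\propto\;
\exp\!\Bigl[\tfrac{i\pi}{\lambda d}\bigl(m^{2}\Delta\xi^{2}+n^{2}\Delta\eta^{2}\bigr)\Bigr]\,
\mathrm{DFT}\!\Bigl\{\, h(k,l)\, E_{R}(k,l)\,
\exp\!\Bigl[\tfrac{i\pi}{\lambda d}\bigl(k^{2}\Delta x^{2}+l^{2}\Delta y^{2}\bigr)\Bigr]\Bigr\}(m,n),
\qquad \Delta\xi = \frac{\lambda d}{N\Delta x},\ \ \Delta\eta = \frac{\lambda d}{N\Delta y}.
```

The interference phase is then obtained directly as the argument of the product of one reconstructed field with the complex conjugate of the other, without ever forming an intensity interferogram.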

Journal ArticleDOI
TL;DR: A pixel based colour mapping algorithm is presented that produces a fused false colour rendering of two gray level images representing different sensor modalities that have a higher information content than each of the original images and retain sensor-specific image information.
Abstract: A pixel based colour mapping algorithm is presented that produces a fused false colour rendering of two gray level images representing different sensor modalities. The resulting fused false colour images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused colour image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor specific details in the final fused result. Finally, a fused colour image is produced by displaying the images resulting from the last step through respectively the red and green channels of a colour display. The method is applied to fuse thermal and visual images. The results show that the colour mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image processing techniques. The colour mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an aeroplane, for instance).
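
A minimal Python sketch of the pixel-wise steps listed above. The choice of the pixel-wise minimum as the common component is an assumption for illustration; the function name and value ranges are likewise illustrative.

```python
import numpy as np

def fuse_false_colour(thermal: np.ndarray, visual: np.ndarray) -> np.ndarray:
    """Fuse two registered gray-level images (floats in [0, 1]) into an RGB array.

    Steps follow the scheme described above:
      1. common component of the inputs (here: pixel-wise minimum, an assumption),
      2. unique component of each image = image minus common component,
      3. subtract each unique component from the *other* image to enhance
         sensor-specific detail,
      4. show the two results through the red and green channels.
    """
    common = np.minimum(thermal, visual)                 # step 1
    unique_thermal = thermal - common                    # step 2
    unique_visual = visual - common
    red = np.clip(thermal - unique_visual, 0.0, 1.0)     # step 3
    green = np.clip(visual - unique_thermal, 0.0, 1.0)
    blue = np.zeros_like(red)                            # step 4: blue channel left empty here
    return np.stack([red, green, blue], axis=-1)
```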

Journal ArticleDOI
TL;DR: The near-sensor image processing concept, which has earlier been theoretically described, is here verified with an implementation and examples of image processing tasks such as gradient and maximum detection, histogram equalization, and thresholding with hysteresis are given.
Abstract: The near-sensor image processing concept, which has earlier been theoretically described, is here verified with an implementation. The NSIP describes a method to implement a two-dimensional (2-D) image sensor array with processing capacity in every pixel. Traditionally, there is a contradiction between high spatial resolution and complex processor elements. In the NSIP concept we have a nondestructive photodiode readout and we can thereby process binary images without losing gray-scale information. The global image processing is handled by an asynchronous Global Logical Unit. These two features make it possible to have efficient image processing in a small processor element. Electrical problems such as power consumption and fixed pattern noise are solved. All design is aimed at a 128×128 pixel NSIP in a 0.8 μm double-metal single-poly CMOS process. We have fabricated and measured a 32×32 pixel NSIP. We also give examples of image processing tasks such as gradient and maximum detection, histogram equalization, and thresholding with hysteresis. In the NSIP concept automatic light adaptivity within a 160 dB range is possible.

Patent
15 Apr 1996
TL;DR: In this article, an imaging system is provided for imaging a scene to produce a sequence of image frames of the scene at a frame rate, R, of at least about 25 image frames per second.
Abstract: An imaging system is provided for imaging a scene to produce a sequence of image frames of the scene at a frame rate, R, of at least about 25 image frames per second. The system includes an optical input port (14), a charge-coupled imaging device (16a), an analog signal processor (24), and an analog-to-digital processor (A/D) (26). The A/D (26) digitizes the amplified pixel signal to produce a digital image signal formatted as a sequence of image frames each of a plurality of digital pixel values and having a dynamic range of digital pixel values represented by a number of digital bits, B, where B is greater than 8. A digital image processor (28) is provided for processing digital pixel values in the sequence of image frames to produce an output image frame sequence at the frame rate, R, representative of the imaged scene, with a latency of no more than about 1/R and a dynamic range of image frame pixel values represented by a number of digital bits, D, where D is less than B. The output image frame sequence is characterized by noise-limited resolution of at least a minimum number, NM, of line pairs per millimeter, referred to the charge-coupled imaging device pixel array, in an imaged scene as a function of illuminance of the input light impinging the charge-coupled imaging device pixels.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A fast implementation of an electronic digital image stabilization system that is able to handle large image displacements and is based on a 2D feature-based multi-resolution motion estimation algorithm, that tracks a small set of features to estimate the motion of the camera.
Abstract: We present a fast implementation of an electronic digital image stabilization system that is able to handle large image displacements. The system has been implemented in a parallel pipeline image processing hardware (Datacube Max Video 200) connected to a SUN SPARCstation 20/612. Our technique is based on a 2D feature-based multi-resolution motion estimation algorithm, that tracks a small set of features to estimate the motion of the camera. The combination of the estimates from a reference frame is used to warp the current frame in order to achieve stabilization. Experimental results using video sequences taken from a camera mounted on a moving vehicle demonstrate the robustness of the system when processing 15 frames per second.
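
A rough present-day analogue of the stabilization loop described above, written with OpenCV instead of the Datacube pipeline hardware; the feature counts, transform model and function shown are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def stabilize(frames):
    """Warp each grayscale (uint8) frame back to the first frame of the list.

    Sketch only: track a small set of corners between consecutive frames,
    estimate a 2-D similarity transform, accumulate it from the reference
    frame, and warp the current frame back to the reference.
    """
    ref = frames[0]
    h, w = ref.shape[:2]
    acc = np.eye(3)                               # accumulated motion: reference <- current
    prev, out = ref, [ref]
    for cur in frames[1:]:
        pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=50,
                                           qualityLevel=0.01, minDistance=10)
        pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts_prev, None)
        ok = status.flatten() == 1
        m, _ = cv2.estimateAffinePartial2D(pts_cur[ok], pts_prev[ok])   # current -> previous
        acc = acc @ np.vstack([m, [0.0, 0.0, 1.0]])                     # compose with prior motion
        out.append(cv2.warpAffine(cur, acc[:2], (w, h)))
        prev = cur
    return out
```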

Patent
13 May 1996
TL;DR: In this article, an improved airborne, direct digital panoramic camera system and method in which an in-line electro-optical sensor operating in conjunction with a data handling unit (82), a controller unit (84), and real time archive unit (86), eliminates the need for photographic film and film transport apparatus normally associated with prior art airborne reconnaissance cameras and yet still retains the very high image resolution quality which is so important in intelligence operations and commercial geographic information systems (GIS), mapping and other remote sensing applications.
Abstract: The present invention relates to an improved airborne, direct digital panoramic camera system and method in which an in-line electro-optical sensor (80) operating in conjunction with a data handling unit (82), a controller unit (84), and real time archive unit (86), eliminates the need for photographic film and film transport apparatus normally associated with prior art airborne reconnaissance cameras and yet still retains the very high image resolution quality which is so important in intelligence operations and commercial geographic information systems (GIS), mapping and other remote sensing applications. Precise geographic data for the system is provided by the use of navigation aids which include the Global Positioning Satellite (GPS) System (14) and an airborne platform carried GPS receiver (85). The present invention provides a simpler, more efficient and less costly panoramic camera by utilizing a simpler and less expensive line-focus type of lens in conjunction with an electro-optical line array sensor.

Proceedings ArticleDOI
TL;DR: A preliminary version of a foveated imaging system, implemented on a general purpose computer, which greatly reduces the transmission bandwidth of images, based on the fact that the spatial resolution of the human eye is space variant, decreasing with increasing eccentricity from the point of gaze.
Abstract: We have developed a preliminary version of a foveated imaging system, implemented on a general purpose computer, which greatly reduces the transmission bandwidth of images. The system is based on the fact that the spatial resolution of the human eye is space variant, decreasing with increasing eccentricity from the point of gaze. By taking advantage of this fact, it is possible to create an image that is almost perceptually indistinguishable from a constant resolution image, but requires substantially less information to code it. This is accomplished by degrading the resolution of the image so that it matches the space-variant degradation in the resolution of the human eye. Eye movements are recorded so that the high resolution region of the image can be kept aligned with the high resolution region of the human visual system. This system has demonstrated that significant reductions in bandwidth can be achieved while still maintaining access to high detail at any point in an image. The system has been tested using 256 by 256 8 bit gray scale images with a 20 degree field-of-view and eye-movement update rates of 30 Hz (display refresh was 60 Hz). Users of the system have reported minimal perceptual artifacts at bandwidth reductions of up to 94.7% (a factor of 18.8). Bandwidth reduction factors of over 100 are expected once lossless compression techniques are added to the system.
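
The reported reduction factor is simply the reciprocal of the fraction of the original bandwidth that remains:

```latex
1 - 0.947 = 0.053, \qquad \frac{1}{0.053} \approx 18.8 .
```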

Book ChapterDOI
26 Jun 1996
TL;DR: This paper proposes a multi-resolution algorithm of an improved snake model, the balloon model, and presents amulti-resolution parametrically deformable model using Fourier descriptors in which the curve is first described by a single harmonic; then harmonics of higher frequencies are used so that precision increases with the resolution.
Abstract: Multi-resolution methods applied to active contour models can speed up processes and improve results. In order to estimate those improvements, we describe and compare in this paper two models using such algorithms. First we propose a multi-resolution algorithm of an improved snake model, the balloon model. Convergence is achieved on an image pyramid and parameters are automatically modified so that, at each scale, the maximal length of the curve is proportional to the image size. This algorithm leads to an important saving in computational time without decreasing the accuracy of the result at the full scale. Then we present a multi-resolution parametrically deformable model using Fourier descriptors in which the curve is first described by a single harmonic; then harmonics of higher frequencies are used so that precision increases with the resolution. We show that boundary finding using this multi-resolution algorithm leads to more stability. These models illustrate two different ways of using multi-resolution methods: the first one uses multi-resolution data, the second one applies multi-resolution to the model itself.
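
The Fourier-descriptor curve used in the second model can be written in the usual elliptic-harmonic form (standard notation, assumed here):

```latex
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix}
= \begin{pmatrix} a_{0} \\ c_{0} \end{pmatrix}
+ \sum_{k=1}^{K}
\begin{pmatrix} a_{k} & b_{k} \\ c_{k} & d_{k} \end{pmatrix}
\begin{pmatrix} \cos kt \\ \sin kt \end{pmatrix},
\qquad t \in [0, 2\pi),
```

so K = 1 describes an ellipse and raising K adds higher-frequency detail, which is the coarse-to-fine progression described above.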

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the limitations on the formation of focused ion beam images from secondary electrons and found that for small features, sputtering is the limit to imaging resolution, and for extended small features (e.g., layered structures), rearrangement, redeposition, and differential sputtering rates may limit the resolution in some cases.
Abstract: This article investigates the limitations on the formation of focused ion beam images from secondary electrons. We use the notion of the information content of an image to account for the effects of resolution, contrast, and signal‐to‐noise ratio and show that there is a competition between the rate at which small features are sputtered away by the primary beam and the rate of collection of secondary electrons. We find that for small features, sputtering is the limit to imaging resolution, and that for extended small features (e.g., layered structures), rearrangement, redeposition, and differential sputtering rates may limit the resolution in some cases.

Journal ArticleDOI
TL;DR: An efficient phase-preserving technique for ScanSAR focusing is discussed; the decorrelation caused by the nonstationary azimuth spectrum can be significantly reduced by means of an azimuth-varying filter, and SAR-ScanSAR interferometry is proposed, for which the decorrelation can always be removed.
Abstract: The authors discuss an efficient phase preserving technique for ScanSAR focusing, used to obtain images suitable for ScanSAR interferometry. Given two complex focused ScanSAR images of the same area, an interferogram can be generated as for conventional repeat pass SAR interferometry. However, due to the nonstationary azimuth spectrum of ScanSAR images, the coherence of the interferometric pair and the interferogram resolution are affected both by possible scan misregistration between the two passes and by the terrain slopes along the azimuth. The resulting decorrelation can be significantly reduced by means of an azimuth varying filter, provided that some conditions on the scan misregistration are met. Finally, SAR-ScanSAR interferometry is proposed; in this case the decorrelation can always be removed, with no resolution loss, by means of the technique presented.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the operation of a simple near-field scanning microwave microscope with a spatial resolution of about 100 μm, which is constructed from an open-ended resonant coaxial line which is excited by an applied microwave voltage in the frequency range of 7.5-12.4 GHz.
Abstract: We describe the operation of a simple near‐field scanning microwave microscope with a spatial resolution of about 100 μm. The probe is constructed from an open‐ended resonant coaxial line which is excited by an applied microwave voltage in the frequency range of 7.5–12.4 GHz. We present images of conducting structures with the system configured in either receiving or reflection mode. The images demonstrate that the smallest resolvable feature is determined by the diameter of the inner wire of the coaxial line and the separation between the sample and probe.

Journal ArticleDOI
TL;DR: The lateral resolution of the new system can exceed the diffraction limit imposed on conventional imaging systems utilizing delay-and-sum beamformers and the range resolution is compared to that of conventional pulse-echo systems with resolution enhancement (the PIO behaves as a pseudo-inverse Wiener filter in the range direction).
Abstract: A new approach to ultrasound imaging with coded-excitation is presented. The imaging is performed by reconstruction of the scatterer strength on an assumed grid covering the region of interest (ROI). Our formulation is based on an assumed discretized signal model which represents the received sampled data vector as a superposition of impulse responses of all scatterers in the ROI. The reconstruction operator is derived from the pseudo-inverse of the linear operator (system matrix) that produces the received data vector. The singular value decomposition (SVD) method with appropriate regularization techniques is used for obtaining a robust realization of the pseudo-inverse. Under simplifying (but realistic) assumptions, the pseudo-inverse operator (PIO) can be implemented using a bank of transversal filters with each filter designed to extract echoes from a specified image line. This approach allows for the simultaneous acquisition of a large number of image lines. This could be useful in increasing frame rates for two-dimensional imaging systems or allowing for real-time implementation of three-dimensional imaging systems. When compared to the matched filtering approach to similar coded-excitation systems, our approach eliminates correlation artifacts that are known to plague such systems. Furthermore, the lateral resolution of the new system can exceed the diffraction limit imposed on conventional imaging systems utilizing delay-and-sum beamformers. The range resolution is compared to that of conventional pulse-echo systems with resolution enhancement (our PIO behaves as a pseudo-inverse Wiener filter in the range direction). Both simulation and experimental verification of these statements are given.
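
The reconstruction operator described above can be summarized as a regularized pseudo-inverse (generic notation, assumed for illustration): with received data y = Hf + n, where H stacks the coded-excitation impulse responses of all grid points and f holds the scatterer strengths,

```latex
H = U \Sigma V^{\mathsf{T}}, \qquad
\hat{f} \;=\; \sum_{i\,:\,\sigma_i > \epsilon} \frac{\sigma_i}{\sigma_i^{2} + \gamma}\,\bigl(u_i^{\mathsf{T}} y\bigr)\, v_i ,
```

where the truncation threshold ε and the Tikhonov-style weight γ stand in for the "appropriate regularization techniques" mentioned above; under the paper's simplifying assumptions this operator separates into a bank of transversal filters, one per image line.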

Journal ArticleDOI
TL;DR: The results from the 12 selected MR and CT image sets at various slice thickness show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set which has 1 mm slice distance.
Abstract: This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within two-dimensional (2-D) slices and then a second filter bank on the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices. The pixel distances within a slice are normally less than 1 mm and the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors use the various filter banks in the slice direction and compare the compression results. The results from the 12 selected MR and CT image sets at various slice thickness show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set which has 1 mm slice distance. Compared with 2-D wavelet compression, compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal to noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance.
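
A minimal PyWavelets sketch of one level of the separable transform described above; the in-slice filter choice ('bior4.4') is an illustrative assumption, while 'haar' along the slice direction matches the paper's conclusion.

```python
import numpy as np
import pywt

def separable_3d_dwt(volume: np.ndarray,
                     in_slice_wavelet: str = "bior4.4",
                     slice_wavelet: str = "haar"):
    """One level of a separable, non-uniform 3-D wavelet decomposition.

    volume is indexed as (slice, row, column). A 2-D filter bank is applied
    within each slice, then a second 1-D filter bank (Haar here) is applied
    along the slice direction to the in-slice approximation subband.
    """
    per_slice = [pywt.dwt2(s, in_slice_wavelet) for s in volume]   # (cA, (cH, cV, cD)) per slice
    approx = np.stack([coeffs[0] for coeffs in per_slice])          # in-slice approximation stack
    low, high = pywt.dwt(approx, slice_wavelet, axis=0)             # Haar along the slice axis
    return low, high, per_slice
```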

Proceedings ArticleDOI
02 Nov 1996
TL;DR: In this article, the authors developed a model in which the following factors are explicitly included: depth dependent geometric sensitivity, photon pair non-colinearity, attenuation, intrinsic detector sensitivity, non-uniform sinogram sampling, crystal penetration and inter-crystal scatter.
Abstract: Accurate modeling of the data formation and detection process in PET is essential for optimizing resolution. Here, the authors develop a model in which the following factors are explicitly included: depth dependent geometric sensitivity, photon pair non-colinearity, attenuation, intrinsic detector sensitivity, non-uniform sinogram sampling, crystal penetration and inter-crystal scatter. Statistical reconstruction methods can include these modeling factors in the system matrix that represents the probability of detecting an emission from each image pixel at each detector-pair. The authors describe a method for computing these factors using a combination of calibration measurements, geometric modeling and Monte Carlo computation. By assuming that blurring effects and depth dependent sensitivities are separable, the authors are able to exploit rotational symmetries with respect to the sinogram. This results in substantial savings in both storage requirements and computational costs. Using phantom data the authors show that this system model can produce higher resolution near the center of the field of view, at a given SNR, than both simpler geometric models and reconstructions using filtered backprojection. The authors also show, using an off-centered phantom, that larger improvements in resolution occur towards the edge of the field of view due to the explicit modeling of crystal penetration effects.
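
One common way to organize the listed factors is a factored system matrix; the factorization below is a standard form stated as an assumption, not quoted from the paper:

```latex
P \;=\; P_{\text{det.sens}} \; P_{\text{det.blur}} \; P_{\text{attn}} \; P_{\text{geom}},
```

where P_geom is the purely geometric projection onto the (non-uniformly sampled) sinogram, P_attn is diagonal with attenuation factors, P_det.blur collects photon-pair non-colinearity, crystal penetration and inter-crystal scatter into a sinogram-blurring matrix, and P_det.sens is diagonal with detector sensitivities. The rotational symmetries mentioned above apply to the blurring factor, which is what keeps storage and computation tractable.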

Journal ArticleDOI
TL;DR: A fast and robust electronic digital image stabilization system that can handle large image displacements based on a two-dimensional feature-based multi-resolution motion estimation technique is presented.
Abstract: Image stabilization can be used as front-end system for many tasks that require dynamic image analysis, such as navigation and tracking of independently moving objects from a moving platform. We present a fast and robust electronic digital image stabilization system that can handle large image displacements based on a two-dimensional feature-based multi-resolution motion estimation technique. The method tracks a small set of features and estimates the movement of the camera between consecutive frames. Stabilization is achieved by combining all motion from a reference frame and warping the current frame back to the reference. The system has been implemented on parallel pipeline image processing hardware (a Datacube MaxVideo 200) connected to a SUN SPARCstation 20/612 via a VME bus adaptor. Experimental results using video sequences taken from a camera mounted on a vehicle moving on rough terrain show the robustness of the system while running at approximately 20 frames/s.