
Showing papers in "Proceedings of SPIE in 2001"


Proceedings ArticleDOI
TL;DR: Depending on the approximation, the algorithm can far outperform Fourier-transform-based implementations of the normalized cross correlation, and it is especially suited to problems where many different templates are to be found in the same image f.
Abstract: In this paper, we present an algorithm for fast calculation of the normalized cross correlation and its application to the problem of template matching. Given a template t, whose position is to be determined in an image f, the basic idea of the algorithm is to represent the template, for which the normalized cross correlation is calculated, as a sum of rectangular basis functions. The correlation is then calculated for each basis function instead of for the whole template, and the correlation of the template t with the image f is obtained as the weighted sum of the correlation functions of the basis functions. Depending on the approximation, the algorithm can far outperform Fourier-transform-based implementations of the normalized cross correlation and is especially suited to problems where many different templates are to be found in the same image f.
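The key ingredient of the basis-function idea can be sketched with an integral image (summed-area table): correlating the image with a rectangular, all-ones basis function then costs O(1) per pixel instead of O(h·w). This is a minimal illustration of the trick, not the authors' implementation; all names are ours.

```python
import numpy as np

def integral_image(f):
    """Summed-area table with a zero row/column prepended."""
    S = np.zeros((f.shape[0] + 1, f.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(f, axis=0), axis=1)
    return S

def box_correlation(f, h, w):
    """Correlate image f with an h-by-w all-ones rectangle.
    Each output value is a rectangle sum, obtained in O(1)
    from the integral image instead of O(h*w) per position."""
    S = integral_image(f)
    H, W = f.shape
    return (S[h:H + 1, w:W + 1] - S[:H - h + 1, w:W + 1]
            - S[h:H + 1, :W - w + 1] + S[:H - h + 1, :W - w + 1])

rng = np.random.default_rng(0)
f = rng.random((8, 10))
fast = box_correlation(f, 3, 4)
# direct O(h*w) reference for one window position
assert np.isclose(fast[2, 1], f[2:5, 1:5].sum())
```

A template approximated as a weighted sum of such rectangles inherits the same cost: one integral image, then a handful of O(1) rectangle sums per position.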

595 citations


Proceedings ArticleDOI
TL;DR: An overview of the wavelet-based watermarking techniques available today, covering how earlier methods such as spread-spectrum watermarking have been applied in the wavelet transform domain, how these techniques relate to image compression, and their compatibility with the JPEG2000 standard.
Abstract: In this paper, we will provide an overview of the wavelet-based watermarking techniques available today. We will see how previously proposed methods such as spread-spectrum watermarking have been applied to the wavelet transform domain in a variety of ways and how new concepts such as the multi-resolution property of the wavelet image decomposition can be exploited. One of the main advantages of watermarking in the wavelet domain is its compatibility with the upcoming image coding standard, JPEG2000. Although many wavelet-domain watermarking techniques have been proposed, only few fit the independent block coding approach of JPEG2000. We will illustrate how different watermarking techniques relate to image compression and examine the robustness of selected watermarking algorithms against image compression.

302 citations


Proceedings ArticleDOI
TL;DR: A novel method for calculating stereoscopic camera parameters is described, which gives the user intuitive controls related to easily measured physical values and precise control of perceived depth; a new analysis of the distortions introduced by different camera parameters is also presented.
Abstract: Stereoscopic images are hard to get right, and comfortable images are often only produced after repeated trial and error. The main difficulty is controlling the stereoscopic camera parameters so that the viewer does not experience eye strain or double images from excessive perceived depth. Additionally, for head tracked displays, the perceived objects can distort as the viewer moves to look around the displayed scene. We describe a novel method for calculating stereoscopic camera parameters with the following contributions: (1) Provides the user intuitive controls related to easily measured physical values. (2) For head tracked displays, ensures that there is no depth distortion as the viewer moves. (3) Clearly separates the image capture camera/scene space from the image viewing viewer/display space. (4) Provides a transformation between these two spaces allowing precise control of the mapping of scene depth to perceived display depth. The new method is implemented as an API extension for use with OpenGL, a plug-in for 3D Studio Max and a control system for a stereoscopic digital camera. The result is stereoscopic images generated correctly at the first attempt, with precisely controlled perceived depth. A new analysis of the distortions introduced by different camera parameters was undertaken.

250 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors developed an alternative approach to analysis of pulsed thermographic data, based on developing a parametric equation for the time history of each pixel, which provides increased spatial and temporal resolution, and significantly extends the range of defect depths and sample configurations.
Abstract: The use of pulsed thermography as an NDE solution for manufacturing and in-service applications has increased dramatically in the past five years, enabled by advances in IR camera and computer technology. However, the basic approaches to analysis and processing of pulsed thermographic data have remained largely unchanged. These methods include image averaging, subtraction, division, slope calculation and contrast methods (e.g. peak contrast and peak slope time mapping). We have developed an alternative approach to analysis of pulsed thermographic data, based on developing a parametric equation for the time history of each pixel. The resulting synthetic image provides increased spatial and temporal resolution, and significantly extends the range of defect depths and sample configurations to which pulsed thermography can be applied. In addition, our approach reduces the amount of data that must be manipulated and stored, so that an entire array of image sequences from a large structure can be processed simultaneously.
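The per-pixel parametric idea can be sketched as fitting a low-order polynomial to each pixel's cooling history in log-log space (in the spirit of thermographic signal reconstruction): a handful of coefficients then stands in for the full time history, which is what reduces storage and noise. The data below is synthetic and the variable names are ours.

```python
import numpy as np

# time after the flash [s], and two synthetic pixel histories:
# ideal 1-D cooling decays as t^-1/2; a pixel over a defect departs
t = np.linspace(0.05, 5.0, 100)
pixels = np.stack([2.0 * t**-0.5,
                   2.0 * t**-0.5 + 0.3])

# fit a low-order polynomial to ln(T) as a function of ln(t) per pixel
log_t = np.log(t)
coeffs = [np.polyfit(log_t, np.log(T), deg=4) for T in pixels]

# the coefficients are the "synthetic" representation: evaluating the
# fit reconstructs the time history at any temporal resolution
synthetic = np.exp(np.polyval(coeffs[0], log_t))
assert np.max(np.abs(synthetic - pixels[0])) < 1e-6
```

The first pixel follows the ideal power law exactly, so its log-log history is a straight line and the fit reproduces it to machine precision; deviations of real pixels from that line are what carry the defect information.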

217 citations


Proceedings ArticleDOI
TL;DR: A new CCD sensor technology reduces readout noise to less than one electron rms; a single low light level CCD can operate over a wide range of readout rates, from TV to slow scan, and gives performance superior to that available from either intensified or slow-scan CCD sensors.
Abstract: A new CCD sensor technology has been developed by Marconi Applied Technologies which effectively reduces read-out noise to less than one electron rms. A single low light level CCD can operate over a wide range of read-out rates from TV to slow-scan and give superior performance to that available from either intensified or slow-scan CCD sensors.

164 citations


Proceedings ArticleDOI
TL;DR: A true 3D video camera (Zcam) produces RGB and D signals, where D stands for the distance or depth to each pixel; this makes possible the production of mixed-reality real-time video as well as post-production manipulation of recorded video.
Abstract: At 3DV Systems Ltd. we developed and built a true 3D video camera (Zcam), capable of producing RGB and D signals where D stands for distance or depth to each pixel. The new RGBD camera makes it possible to do away with color based background substitution known as chroma-key as well as creating a whole gallery of new effects and applications such as multilayer foreground as well as background substitutions and manipulations. The new multilayered modality makes possible the production of mixed reality real time video as well as post-production manipulation of recorded video. The new RGBD camera is scannerless and uses low power laser illumination to create the D channel. Prototypes have been in use for more than 2 years and are capable of sub-centimeter depth resolution at any desired distance up to 10 m on the present model. Additional potential applications as well as low cost versions are currently being explored.© (2001) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

154 citations


Proceedings ArticleDOI
TL;DR: In addition to bioactive fluid dispensing, ink-jet based microdispensing allows integration of features (electronic, photonic, sensing, structural, etc.) that are not possible, or very difficult, with traditional photolithography-based MEMS fabrication methods.
Abstract: Applications of microfluidics and MEMS (micro-electromechanical systems) technology are emerging in many areas of biological and life sciences. Non-contact microdispensing systems for accurate, high-throughput deposition of bioactive fluids can be an enabling technology for these applications. In addition to bioactive fluid dispensing, ink-jet based microdispensing allows integration of features (electronic, photonic, sensing, structural, etc.) that are not possible, or very difficult, with traditional photolithographic-based MEMS fabrication methods. Our single-fluid and multifluid (MatrixJet™) piezoelectric microdispensers have been used for spot synthesis of peptides, production of microspheres to deliver drugs/biological materials, microprinting of biodegradable polymers for cell proliferation in tissue engineering requirements, and spot deposition for DNA, diagnostic immunoassay, antibody and protein arrays. We have created optical elements, sensors, and electrical interconnects by microdeposition of polymers and metal alloys. We have also demonstrated the integration of a reverse phase microcolumn within a piezoelectric dispenser for use in the fractionation of peptides for mass spectrometer analysis.

151 citations


Proceedings ArticleDOI
TL;DR: A detailed assessment of these devices is presented, including novel methods of measuring their properties when operated at peak mean signal levels well below one electron per pixel; the authors conclude that these new devices have radically changed the balance in the perpetual trade-off between readout noise and the speed of readout.
Abstract: A radically new CCD development by Marconi Applied Technology has enabled substantial internal gain within the CCD before the signal reaches the output amplifier. With reasonably high gain, sub-electron readout noise levels are achieved even at MHz pixel rates. This paper reports a detailed assessment of these devices, including novel methods of measuring their properties when operated at peak mean signal levels well below one electron per pixel. The devices are shown to be photon shot noise limited at essentially all light levels below saturation. Even at the lowest signal levels the charge transfer efficiency is good. The conclusion is that these new devices have radically changed the balance in the perpetual trade-off between readout noise and the speed of readout. They will force a re-evaluation of camera technologies and imaging strategies to enable the maximum benefit to be gained from these high-speed, essentially noiseless readout devices. This new LLLCCD technology, in conjunction with thinning, should provide detectors which will be very close indeed to being theoretically perfect.

138 citations


Proceedings ArticleDOI
TL;DR: This work proposes a novel methodology for confidentiality, which turns entropy coders into encryption ciphers by using multiple statistical models, and shows that security is achieved without sacrificing the compression performance and the computational speed.
Abstract: Efficient encryption algorithms are essential to multimedia data security, since the data size is large and real-time processing is often required. After discussing limitations of previous work on multimedia encryption, we propose a novel methodology for confidentiality, which turns entropy coders into encryption ciphers by using multiple statistical models. The choice of statistical models and the order in which they are applied are kept secret as the key. Two encryption schemes are constructed by applying this methodology to the Huffman coder and the QM coder. It is shown that security is achieved without sacrificing the compression performance and the computational speed. The schemes can be applied to most modern compression systems such as MPEG audio, MPEG video and JPEG/JPEG2000 image compression.
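The multiple-statistical-model idea can be shown with a deliberately tiny toy: two Huffman tables, with a secret key deciding which table encodes each symbol. Decoding with the wrong key parses the bitstream into a different message. The tables and key below are our own illustration; the actual schemes in the paper use the Huffman and QM coders with adaptive models, not two fixed tables.

```python
# Two prefix-free Huffman tables for the alphabet {a, b, c}.
# The key (a cycled sequence of table indices) is the secret.
TABLES = [{'a': '0', 'b': '10', 'c': '11'},
          {'a': '11', 'b': '0', 'c': '10'}]

def encode(msg, key):
    bits = ''
    for i, sym in enumerate(msg):
        bits += TABLES[key[i % len(key)]][sym]   # key picks the model
    return bits

def decode(bits, key):
    msg, i = '', 0
    while bits:
        table = TABLES[key[i % len(key)]]
        for sym, code in table.items():          # prefix-free: unique match
            if bits.startswith(code):
                msg += sym
                bits = bits[len(code):]
                break
        else:
            raise ValueError('not decodable with this key')
        i += 1
    return msg

secret = [0, 1, 1, 0]
cipher = encode('abcabc', secret)
assert decode(cipher, secret) == 'abcabc'
assert decode(cipher, [0, 0, 0, 0]) != 'abcabc'   # wrong key garbles
```

Because every table is a valid entropy code, the ciphertext is still a compressed bitstream, which is the point of the methodology: confidentiality without giving up compression.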

134 citations


Proceedings ArticleDOI
TL;DR: This work proposes a new method for data hiding in binary text documents that embeds data in the 8-connected boundary of a character, using a fixed set of pairs of five-pixel-long boundary patterns.
Abstract: With the proliferation of digital media such as digital images, digital audio, and digital video, robust digital watermarking and data hiding techniques are needed for copyright protection, copy control, annotation, and authentication. While many techniques have been proposed for digital color and grayscale images, not all of them can be directly applied to binary text images. The difficulty lies in the fact that changing pixel values in a binary document could introduce irregularities that are very visually noticeable. We propose a new method for data hiding in binary text documents by embedding data in the 8-connected boundary of a character. We have identified a fixed set of pairs of five-pixel long boundary patterns for embedding data. One of the patterns in a pair requires deletion of the center foreground pixel, whereas the other requires the addition of a foreground pixel. A unique property of the proposed method is that the two patterns in each pair are dual of each other -- changing the pixel value of one pattern at the center position would result in the other. This property allows easy detection of the embedded data without referring to the original document, and without using any special enforcing techniques for detecting embedded data.
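The dual-pattern property can be sketched with a single toy pair: two five-pixel boundary patterns that differ only in the centre pixel, so embedding sets the centre to the bit value and detection just reads it back, with no reference to the original document. The concrete pattern pair and 1-D boundary representation below are our own illustrative choices, not the paper's actual pattern set.

```python
# A boundary is modelled as a 1-D run of pixels (1 = foreground).
# Our toy dual pair: (1,1,0,1,1) and (1,1,1,1,1) -- same four outer
# pixels, centre flipped. Wherever the outer context matches, the
# centre pixel carries one bit.
CONTEXT = (1, 1, 1, 1)

def embeddable(row, i):
    outer = (row[i - 2], row[i - 1], row[i + 1], row[i + 2])
    return outer == CONTEXT

def embed(row, bits):
    row = list(row)
    b = iter(bits)
    for i in range(2, len(row) - 2):
        if embeddable(row, i):
            try:
                row[i] = next(b)      # deletion or addition of the
            except StopIteration:     # centre foreground pixel
                break
    return row

def extract(row):
    return [row[i] for i in range(2, len(row) - 2) if embeddable(row, i)]

boundary = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]
marked = embed(boundary, [1, 0])
assert extract(marked)[:2] == [1, 0]
```

Because the two patterns of a pair share the same context, the detector finds exactly the positions the embedder used, which is what makes blind extraction possible.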

134 citations


Proceedings ArticleDOI
TL;DR: The role of watermarking for MIS security and the problem of integrity control of medical images are addressed and alternative schemes to extract verification signatures and compare their tamper detection performance are discussed.
Abstract: The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.

Proceedings ArticleDOI
TL;DR: A new approach using a wavelet-based method for data fusion between hyperspectral and multispectral images is presented, which achieves the goal of creating a composite image that has the same spectral resolution as the hyperspectral image and the same spatial resolution as the multispectral image, with minimum artifacts.
Abstract: Different research groups have recently studied the concept of wavelet image fusion between panchromatic and multispectral images using different approaches. In this paper, a new approach using the wavelet based method for data fusion between hyperspectral and multispectral images is presented. Using this wavelet concept of hyperspectral and multispectral data fusion, we performed image fusion between two spectral levels of a hyperspectral image and one band of multispectral image. The reconstructed image has a root mean square error of 2.8 per pixel and a signal-to-noise ratio of 36 dB. We achieved our goal of creating a composite image that has the same spectral resolution as the hyperspectral image and the same spatial resolution as the multispectral image with minimum artifacts.
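The generic wavelet-fusion mechanics can be sketched with a single-level Haar transform: decompose the high-spatial-resolution band, keep its detail coefficients, substitute the approximation with the low-spatial-resolution band, and invert. The synthetic inputs and the choice of the Haar wavelet are our assumptions, not the paper's data or filter bank.

```python
import numpy as np

def haar2(x):
    """Single-level 2-D Haar analysis: approximation + 3 details."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a + h - v - d
    x[1::2, 0::2] = a - h + v - d
    x[1::2, 1::2] = a - h - v + d
    return x

rng = np.random.default_rng(1)
sharp = rng.random((8, 8))       # high spatial resolution band
coarse = rng.random((4, 4))      # low spatial resolution band

a, h, v, d = haar2(sharp)
fused = ihaar2(coarse, h, v, d)  # coarse band's content + sharp detail

assert np.allclose(ihaar2(*haar2(sharp)), sharp)   # transform is exact
assert np.allclose(haar2(fused)[0], coarse)        # approximation swapped
```

In the fused image, the approximation subband comes entirely from the low-resolution band while the detail subbands carry the high-resolution spatial structure, which is the resolution-mixing effect the abstract describes.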

Proceedings ArticleDOI
TL;DR: The 3D Shape Spectrum Descriptor (3D SSD), recently adopted within the MPEG-7 Committee Draft, provides an intrinsic shape description of a 3D mesh and is defined as the distribution of the shape index over the entire mesh.
Abstract: Because of the continuous development of multimedia technologies, virtual worlds and augmented reality, 3D contents become a common feature of the today information systems. Hence, standardizing tools for content-based indexing of visual data is a key issue for computer vision related applications. Within the framework of the future MPEG-7 standard, tools for intelligent content-based access to 3D information, targeting applications such as search & retrieval and browsing of 3D model databases, have been recently considered and evaluated. In this paper, we present the 3D Shape Spectrum Descriptor (3D SSD), recently adopted within the current MPEG-7 Committee Draft (CD). The proposed descriptor aims at providing an intrinsic shape description of a 3D mesh and is defined as the distribution of the shape index over the entire mesh. The shape index is a local geometric attribute of a 3D surface, expressed as the angular coordinate of a polar representation of the principal curvature vector. Experimental results have been carried out upon the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a ground truth subset, are reported in terms of Bull's Eye Percentage (BEP) score.
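The shape index itself is a one-line formula on the principal curvatures, and the descriptor is then just its histogram over the mesh. The sketch below uses the classical Koenderink-style convention on [-1, 1]; the MPEG-7 descriptor may use a different normalization, and the sample curvature values are made up.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures, k1 >= k2.
    Convention here: +1 convex cap, +0.5 ridge, 0 saddle,
    -0.5 rut, -1 concave cup (rescalings onto [0, 1] also exist)."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# one sample point per canonical surface type
k1 = np.array([1.0, 1.0, 1.0,  0.0, -1.0])
k2 = np.array([1.0, 0.0, -1.0, -1.0, -1.0])
s = shape_index(k1, k2)
assert np.allclose(s, [1.0, 0.5, 0.0, -0.5, -1.0])

# the 3D SSD is then the distribution (histogram) of s over the mesh
hist, _ = np.histogram(s, bins=10, range=(-1, 1))
assert hist.sum() == s.size
```

Because the index depends only on the ratio of curvatures, it is invariant to scale, which is what makes the resulting spectrum an intrinsic shape description.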

Proceedings ArticleDOI
TL;DR: The paper provides results of extensive experimental comparisons of image restoration capabilities of the methods and demonstrates that they can naturally be interpreted in a unified way as different implementations of signal sub-band decomposition with uniform (in SWTD filters) or logarithmic (for WL methods) arrangement of signal sub-bands and element-wise processing of the decomposed components.
Abstract: Two families of transform domain signal restoration (denoising and deblurring) and enhancement methods well suited to processing non-stationary signals are reviewed and comprehensively compared in their different modifications in terms of their signal restoration capability and computational complexity: sliding window transform domain (SWTD) filters and wavelet (WL) based algorithms. SWTD filters work in sliding window in the domain of an orthogonal transform and, in each position of the window, nonlinearly transform window transform coefficients to generate an estimate of the central pixel of the window. As a transform, DCT has been found to be one of the most efficient in most applications. WL methods act globally and apply element-wise nonlinear transformation similar to those used in SWTD methods to the wavelet transform coefficients to generate an estimate of the output signal. The paper provides results of extensive experimental comparisons of image restoration capabilities of the methods and demonstrates that they can naturally be interpreted in a unified way as different implementations of signal sub-band decomposition with uniform (in SWTD filters) or logarithmic (for WL-methods) arrangement of signal sub-bands and element-wise processing decomposed components. As a bridge, a hybrid wavelet/sliding window processing that combines advantages of both methods is described.
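The SWTD mechanism described above is straightforward to sketch: at every window position, transform the window with an orthonormal DCT, apply an element-wise nonlinearity (here, hard thresholding), and keep only the estimate of the central sample. A 1-D version for brevity; window size, threshold, and test signal are our own choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T = I."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def swtd_denoise(signal, win=8, thr=0.5):
    C, half = dct_matrix(win), win // 2
    padded = np.pad(signal, half, mode='reflect')
    out = np.empty(len(signal))
    for i in range(len(signal)):
        c = C @ padded[i:i + win]        # window transform
        c[np.abs(c) < thr] = 0.0         # element-wise nonlinearity
        out[i] = (C.T @ c)[half]         # estimate of the central sample
    return out

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(256)
denoised = swtd_denoise(noisy)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

A wavelet method applies the same kind of element-wise nonlinearity, but globally to the wavelet coefficients instead of per sliding window, which is exactly the unification the paper draws.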

Proceedings ArticleDOI
TL;DR: Simulation results with a specific feature set and a well-known, commercially available watermarking technique indicate that the approach is able to accurately distinguish between watermarked and unwatermarked images.
Abstract: In this paper, we present techniques for steganalysis of images that have been potentially subjected to a watermarking algorithm. Our hypothesis is that a particular watermarking scheme leaves statistical evidence or structure that can be exploited for detection with the aid of proper selection of image features and multivariate regression analysis. We use some sophisticated image quality metrics as the feature set to distinguish between watermarked and unwatermarked images. To identify specific quality measures, which provide the best discriminative power, we use analysis of variance (ANOVA) techniques. The multivariate regression analysis is used on the selected quality metrics to build the optimal classifier using images and their blurred versions. The idea behind blurring is that the distance between an unwatermarked image and its blurred version is less than the distance between a watermarked image and its blurred version. Simulation results with a specific feature set and a well-known and commercially available watermarking technique indicate that our approach is able to accurately distinguish between watermarked and unwatermarked images.
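The blurring intuition is easy to demonstrate numerically: a watermark adds high-frequency energy, so a marked image sits farther from its blurred version than a clean image does. This is only the single-feature core of the idea; the blur kernel, watermark model, and data below are invented, and the paper's full method uses many quality metrics plus ANOVA and regression.

```python
import numpy as np

def blur(img):
    """Separable 3-tap binomial blur."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def blur_distance(img):
    """Mean squared distance between an image and its blurred version."""
    return np.mean((img - blur(img)) ** 2)

rng = np.random.default_rng(3)
smooth = blur(blur(rng.random((64, 64))))                    # stand-in "natural" image
marked = smooth + 0.05 * rng.choice([-1, 1], smooth.shape)   # noise-like watermark

# the watermarked image is measurably farther from its blurred self
assert blur_distance(marked) > blur_distance(smooth)
```

In the paper this kind of distance is one feature among several; the classifier is then fit by multivariate regression over the ANOVA-selected metrics.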

Proceedings ArticleDOI
TL;DR: The seismic signal generated by footsteps is a vector wave that can be used to track the source bearing; when multiple sources are separated in angle, this bearing information can be used to estimate the number of walkers.
Abstract: Persons or vehicles moving over ground generate a succession of impacts; these soil disturbances propagate away from the source as seismic waves. These seismic waves are especially useful in detecting footsteps which cannot be detected acoustically. Footstep signals can be distinguished from other seismic sources, such as vehicles or wind noise, by their impulsive nature. Even in noisy environments, statistical measures of the seismic amplitude distribution, such as kurtosis, can be used to identify a footstep. These detection methods can be used even with single component geophones. Moreover, the seismic signal is a vector wave that can be used to track the source bearing. To do such tracking a three-component measurement is needed. If multiple sources are separated in angle, we can use this bearing information to estimate the number of walkers.
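The kurtosis-based detection can be sketched directly: impulsive footsteps make the amplitude distribution heavy-tailed (high excess kurtosis), while Gaussian wind-like noise stays near zero. The signal model and numbers below are illustrative assumptions, not field data.

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(4)
noise = rng.standard_normal(2000)   # wind-like Gaussian background
footsteps = noise.copy()
footsteps[::200] += 8.0             # periodic impulsive footfalls

assert abs(excess_kurtosis(noise)) < 1.0    # ~0 for the background alone
assert excess_kurtosis(footsteps) > 3.0     # heavy-tailed when steps present
```

This is why the statistic works even with a single-component geophone: it needs only the amplitude distribution, not the wave's direction. Bearing estimation then requires the three-component measurement the abstract mentions.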

Proceedings ArticleDOI
TL;DR: A brief literature review reveals that one of the first scientific papers on a micropump dates from 1978, more than two decades ago; publications have increased steadily since then, with no sign of the trend changing.
Abstract: Among the large number of microfluidic components realized up to now, micropumps clearly represent the case of a 'long runner' in science. A brief literature review reveals that one of the first scientific papers on a micropump dates from 1978, which is more than two decades ago. An increasing number of publications is found from that time on, representing widespread research activities, and there seems to be no change of this trend. An astonishing diversity of micropump concepts and devices has emerged until today, reaching from peristaltic micropumps to a large number of micro diaphragm pumps to recent high-pressure devices without any moving parts. Electrohydrodynamic, electroosmotic, electrostatic, electromagnetic, magnetohydrodynamic, SMA, piezoelectric, thermopneumatic, hydraulic or pneumatic - almost every MEMS-based or mesoscopic actuation principle has been combined with micropumps. An outstanding diversity is also found in the fabrication technology - the span reaches from silicon-based devices over precision machining to injection moulding. This altogether makes it worthwhile to summarize the field and also take a look into the future of micropumps - after the first two decades.

Proceedings ArticleDOI
TL;DR: Two approaches to automatic image annotation are presented that find the rules linking low-level features to the high-level concepts associated with images: one uses global color information with classification-tree techniques in a supervised learning setting, and the other uses local color features with k-means clustering to group similar images and derive per-cluster annotation rules.
Abstract: In image similarity retrieval systems, color is one of the most widely used features. Users who are not well versed with the image domain characteristics might be more comfortable in working with an Image Retrieval System that allows specification of a query in terms of keywords, thus eliminating the usual intimidation in dealing with very primitive features. In this paper we present two approaches to automatic image annotation, by finding those rules underlying the links between the low-level features and the high-level concepts associated with images. One scheme uses global color image information and classification tree based techniques. Through this supervised learning approach we are able to identify relationships between global color-based image features and some textual descriptors. In the second approach, using low-level image features that capture local color information and through a k-means based clustering mechanism, images are organized in clusters such that images that are similar are located in the same cluster. For each cluster, a set of rules is derived to capture the association between the localized color-based image features and the textual descriptors relevant to the cluster.
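The second approach can be sketched end-to-end in a few lines: cluster color feature vectors with k-means, then attach a textual descriptor to each cluster. The feature (a mean-color vector per image), the data, and the keywords are all made up for illustration; the paper derives proper rules per cluster rather than a single label.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm with a naive evenly-spaced init."""
    centers = X[:: max(1, len(X) // k)][:k].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two obvious colour groups standing in for image feature vectors
rng = np.random.default_rng(5)
reddish = rng.normal([0.8, 0.1, 0.1], 0.05, (10, 3))
bluish = rng.normal([0.1, 0.1, 0.8], 0.05, (10, 3))
X = np.vstack([reddish, bluish])

labels, _ = kmeans(X, 2)
assert np.all(labels[:10] == labels[0])   # reddish images share a cluster
assert labels[0] != labels[10]            # distinct from the bluish cluster

# per-cluster textual descriptors (invented) give the annotation
keywords = {int(labels[0]): 'reddish', int(labels[10]): 'bluish'}
assert keywords[int(labels[3])] == 'reddish'
```

A new image is then annotated by assigning it to the nearest cluster center and emitting that cluster's descriptors.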

Proceedings ArticleDOI
TL;DR: The heart of the system, a multifunction ROIC based upon both analog and digital processing, is described. Of particular interest is the obscuration penetration function, which is illustrated with a series of images.
Abstract: This paper reviews the progress of Advanced Scientific Concepts, Inc. (ASC) flash ladar 3-D imaging systems and presents their newest single-pulse 128 x 128 flash ladar 3-D images. The heart of the system, a multifunction ROIC based upon both analog and digital processing, is described. Of particular interest is the obscuration penetration function, which is illustrated with a series of images. An image tube-based low-laser-signal 3-D FPA is also presented. A small-size handheld version of the 3-D camera is illustrated which uses an InGaAs lensed PIN detector array indium bump bonded to the ROIC.

Proceedings ArticleDOI
TL;DR: The HgCdTe high-density vertically integrated photodiode (HDVIP™) concept developed at DRS Infrared Technologies is described; the technology is currently in production in both large-area scanning and staring focal plane array (FPA) formats.
Abstract: The HgCdTe high-density vertically integrated photodiode (HDVIP™) concept developed at DRS Infrared Technologies is described. This technology is currently in production in both large-area scanning and staring focal plane array (FPA) formats. Detector models are presented and compared to performance data from scanning and staring FPAs. Performance data from 256 × 256 and 640 × 480 LWIR and MWIR staring FPAs, in keeping with these models, is presented with responsivity and D* operabilities in excess of 99.9%. Third generation system requirements mandate megapixel FPA operation at high temperatures, with multi-color capability, and high frame rates. To this end operation of 640 × 480 MWIR HgCdTe FPAs has been demonstrated at temperatures in excess of 150 K, and the push to these higher operating temperatures, with its effect on system cost, is discussed. The technology has also been extended into the realm of simultaneous two-color detection with large area formats, and this effort is described.

Proceedings ArticleDOI
TL;DR: Joint time-frequency descriptions of chirps will be shown to allow for effective definitions of instantaneous frequencies via localized trajectories on the plane and a number of applications will be mentioned, ranging from bioacoustics to turbulence and gravitational waves.
Abstract: Chirps (i.e., transient AM-FM waveforms) are ubiquitous in nature and man-made systems, and they may serve as a paradigm for many nonstationary deterministic signals. The time-frequency plane is a natural representation space for chirps, and we will here review a number of questions related to chirps, as addressed from a time-frequency perspective. Global and local approaches will be described for matching and/or adapting representations to chirps. As a corollary, joint time-frequency descriptions of chirps will be shown to allow for effective definitions of instantaneous frequencies via localized trajectories on the plane. A number of applications will be mentioned, ranging from bioacoustics to turbulence and gravitational waves.

Proceedings ArticleDOI
TL;DR: A simple algorithm based on geometrical matching of similar triangles aligns the separate tracks and determines the sensor positions and orientations relative to a reference sensor; computer simulations show that the algorithm performs well even with noisy DOA estimates at the sensors.
Abstract: Starting with a randomly distributed sensor array with unknown sensor orientations, array calibration is needed before target localization and tracking can be performed using classical triangulation methods. In this paper, we assume that the sensors are only capable of accurate direction of arrival (DOA) estimation. The calibration problem cannot be completely solved given the DOA estimates alone, since the problem is not only rotationally symmetric but also includes a range ambiguity. Our approach to calibration is based on tracking a single target moving at a constant velocity. In this case, the sensor array can be calibrated from target tracks generated by an extended Kalman filter (EKF) at each sensor. A simple algorithm based on geometrical matching of similar triangles will align the separate tracks and determine the sensor positions and orientations relative to a reference sensor. Computer simulations show that the algorithm performs well even with noisy DOA estimates at sensors.
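The track-alignment step can be sketched as a 2-D similarity fit: each sensor's EKF produces the same constant-velocity track in its own frame, known only up to rotation, translation, and scale (the DOA range ambiguity), and a closed-form similarity estimate between two tracks recovers the relative pose. The Umeyama/Kabsch-style estimator and the geometry below are our own illustration of "matching similar triangles", not the paper's exact algorithm.

```python
import numpy as np

def fit_similarity(A, B):
    """Least-squares s, R, t with B ~= s * A @ R.T + t."""
    muA, muB = A.mean(0), B.mean(0)
    A0, B0 = A - muA, B - muB
    U, S, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # force a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (A0 ** 2).sum()
    t = muB - s * muA @ R.T
    return s, R, t

# constant-velocity track in the reference sensor's frame ...
track_ref = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.5]])
# ... and the same track in a rotated, shifted, scaled sensor frame
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
track_sensor = 2.0 * track_ref @ R_true.T + np.array([5.0, -3.0])

s, R, t = fit_similarity(track_ref, track_sensor)
assert np.isclose(s, 2.0)
assert np.allclose(s * track_ref @ R.T + t, track_sensor)
```

The recovered rotation and translation give the sensor's orientation and position relative to the reference, while the scale reflects the range ambiguity resolved by the constant-velocity assumption.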

Proceedings ArticleDOI
TL;DR: Actuality Systems, Inc. is developing an 8-color multiplanar volumetric display for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging.
Abstract: An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ("image slices") onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.

Proceedings ArticleDOI
TL;DR: Methods for estimating the length and width of vehicles from scanning laser radar data are proposed, and the use of the minimum rectangle estimator to retrieve initial parameters for fitting of more complex shapes is discussed.
Abstract: Over the years imaging laser radar systems have been developed for military and civilian applications. Among the applications we note collection of 3D data for terrain modeling and object recognition. One part of the object recognition process is to estimate the size and orientation of the object. This paper concerns a vehicle size and orientation estimation process based on scanning laser radar data. Methods for estimation of length and width of vehicles are proposed. The work is based on the assumption that from a top view most vehicles' edges are approximately of rectangular shape. Thus, we have a rectangle fitting problem. The first step in the process is sorting of data into lists containing object data and data from the ground closest to the object. Then a rectangle with minimal area is estimated based on object data only. We propose an algorithm for estimation of the minimum rectangle area containing the convex hull of the object data. From the rectangle estimate, estimates of the length and width of the object can be retrieved. The first rectangle estimate is then improved using least squares methods based on both object and ground data. Both linear and nonlinear least squares methods are described. These improved estimates of the length and width are less biased compared to the initial estimates. The methods are applied to both simulated and real laser radar data. The use of the minimum rectangle estimator to retrieve initial parameters for fitting of more complex shapes is discussed.
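The minimum-area-rectangle step rests on a classical fact: the optimal rectangle has one side collinear with an edge of the convex hull, so it suffices to try each hull-edge direction. The sketch below (standard monotone-chain hull plus per-edge bounding boxes) is our own illustration with a made-up "vehicle top view", not the paper's estimator or its least-squares refinement.

```python
import numpy as np

def convex_hull(pts):
    """Andrew's monotone chain, returning hull vertices in order."""
    pts = sorted(map(tuple, pts))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and np.cross(np.subtract(h[-1], h[-2]),
                                           np.subtract(p, h[-2])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return np.array(half(pts) + half(pts[::-1]))

def min_area_rectangle(pts):
    """Try each hull-edge direction; keep the smallest bounding box."""
    hull = convex_hull(pts)
    best = (np.inf, None)
    for i in range(len(hull)):
        e = hull[(i + 1) % len(hull)] - hull[i]
        c, s = e / np.hypot(*e)
        rot = hull @ np.array([[c, -s], [s, c]])  # align edge with x-axis
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if w * h < best[0]:
            best = (w * h, (w, h))
    return best

# a 4 x 2 rectangle of sample points (plus an interior hit), rotated 30 deg
corners = np.array([[0, 0], [4, 0], [4, 2], [0, 2], [2, 1]], float)
a = np.deg2rad(30)
Rm = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
area, (w, h) = min_area_rectangle(corners @ Rm.T)
assert np.isclose(area, 8.0)
assert np.isclose(max(w, h), 4.0) and np.isclose(min(w, h), 2.0)
```

The recovered width and length then serve as the initial parameters the abstract mentions for fitting more complex shapes.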

Proceedings ArticleDOI
TL;DR: This paper investigates the restoration of geometrically altered digital images with the aim of recovering embedded watermark information by using a modified 12-parameter bilinear transformation model which closely matches the deformations introduced by an analog acquisition process.
Abstract: In this paper, we investigate the restoration of geometrically altered digital images with the aim of recovering embedded watermark information. More precisely, we focus on the distortion introduced when an image is acquired with a camera. Indeed, in the cinema industry, a large part of early movie piracy comes from copies made in the theatre itself with a camera. The evolution towards digital cinema broadcast enables watermark-based fingerprinting protection systems. The first step in fingerprint extraction from counterfeit material is compensation of the geometrical deformation inherent to the acquisition process. To compensate for these deformations, we use a modified 12-parameter bilinear transformation model which closely matches the deformations introduced by an analog acquisition process. The estimation of the parameters can either be global or vary across regions within the image. Our approach consists in estimating the displacement of a number of pixels via a modified block-matching technique, followed by a minimum mean square error optimisation of the parameters on the basis of those estimated displacement vectors. The estimated transformation is applied to the candidate image to obtain a reconstruction as close as possible to the original image. A classical watermark extraction procedure can then follow.
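The least-squares parameter estimation step can be illustrated with the standard 8-parameter bilinear model x' = a0 + a1·x + a2·y + a3·xy (the paper's modified model adds further terms to reach 12 parameters, which are not specified here). The sketch below fits the coefficients to point correspondences such as those produced by block matching; all names are illustrative:

```python
import numpy as np

def _design(pts):
    """Design matrix [1, x, y, xy] for the bilinear model."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y])

def fit_bilinear(src, dst):
    """Least-squares fit of an 8-parameter bilinear transform mapping
    src points to dst points (both (N, 2) arrays of matched pixels).
    Returns coefficient vectors (a, b) for x' and y' respectively."""
    A = _design(src)
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b

def apply_bilinear(a, b, pts):
    """Apply the fitted transform to an (N, 2) array of points."""
    A = _design(pts)
    return np.column_stack([A @ a, A @ b])
```

Inverting the fitted transform (or fitting it in the reverse direction) then yields the compensated image on which watermark extraction proceeds.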

Proceedings ArticleDOI
TL;DR: An audio watermarking algorithm that can embed a multiple-bit message which is robust against wow-and-flutter, cropping, noise-addition, pitch-shift, and audio compressions such as MP3 is described.
Abstract: In this paper, we describe an audio watermarking algorithm that can embed a multiple-bit message which is robust against wow-and-flutter, cropping, noise-addition, pitch-shift, and audio compressions such as MP3. The algorithm calculates and manipulates the magnitudes of segmented areas in the time-frequency plane of the content using short-term DFTs. The detection algorithm correlates the magnitudes with a pseudo-random array that maps to a two-dimensional area in the time-frequency plane. The two-dimensional array makes the watermark robust because, even when some portions of the content are heavily degraded, other portions of the content can match the pseudo-random array and contribute to watermark detection. Another key idea is the manipulation of magnitudes. Because magnitudes are less influenced than phases by fluctuations of the analysis windows caused by random cropping, the watermark resists degradation. When signal transformation causes pitch fluctuations in the content, the frequencies of the pseudo-random array embedded in the content shift, and that causes a decrease in the volume of the watermark signal that still correctly overlaps with the corresponding pseudo-random array. To keep the overlapping area wide enough for successful watermark detection, the widths of the frequency subbands used for the detection segments should increase logarithmically as frequency increases. We theoretically and experimentally analyze the robustness of the proposed algorithm against a variety of signal degradations.© (2001) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
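The core embed/detect idea — scaling time-frequency magnitudes by a two-dimensional ±1 pseudo-random array and detecting by correlation — can be sketched as below. This is a simplified illustration operating directly on a magnitude matrix; the actual algorithm works on short-term DFT segments with logarithmically widening subbands, and the embedding strength `alpha` is an assumed parameter:

```python
import numpy as np

def embed(magnitudes, pn, alpha=0.1):
    """Multiplicatively embed a +/-1 pseudo-random array into a matrix
    of time-frequency magnitudes (rows: frequency, columns: time)."""
    return magnitudes * (1.0 + alpha * pn)

def detect(magnitudes, pn):
    """Normalized correlation of mean-removed magnitudes with the PN
    array; a value well above zero indicates the watermark is present.
    Degraded regions merely lower the score rather than breaking it."""
    m = magnitudes - magnitudes.mean()
    p = pn - pn.mean()
    return float((m * p).sum() /
                 (np.linalg.norm(m) * np.linalg.norm(p) + 1e-12))
```

Because the score is a sum over the whole array, partial degradation (e.g. a cropped or noisy region) reduces but does not destroy the correlation peak, which is the robustness argument made above.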

Proceedings ArticleDOI
TL;DR: This paper aims to illustrate the advantages of Lock-In Thermography as a non-destructive, real time and non- contact technique to analyze and to locate thermo-mechanical mechanisms in materials and structures, and proposes to improve these two methods by using LIT instead of temperature rise measurement to predict crack occurrence in real structures.
Abstract: This paper aims to illustrate the advantages of Lock-In Thermography (LIT) as a non-destructive, real-time and non-contact technique to analyze and to locate thermo-mechanical mechanisms in materials and structures. Due to the first and second principles of thermodynamics, there is a relationship between temperature and mechanical behavior laws. LIT is classically used to measure the linear thermo-elastic effect to evaluate stresses in structures under periodic, random or transient loading. The new digital processing D-MODE presented here allows extraction of non-linear coupled thermo-mechanical effects (dissipated energy) cycle by cycle during a fatigue test on specimens and on real structures. This quantity, which is much smaller than the thermo-elastic source, requires a highly sensitive thermal imaging camera and a dedicated algorithm to separate the dissipated energy from the thermo-elastic source. On the other hand, it has been known for a long time that there is a correlation between plasticity in materials and the appearance of heat dissipation. More recently, it was shown that there is a clear relationship between the fatigue limit and the occurrence of dissipated energy. We propose to improve these two methods by using LIT instead of temperature rise measurement to predict crack occurrence in real structures. Finally, we present some industrial applications in the automotive and aircraft industries.
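The lock-in principle underlying LIT — extracting the amplitude and phase of the thermal response at a known excitation frequency — can be sketched for a single pixel's temperature time series. This is generic in-phase/quadrature demodulation, not the D-MODE processing described above:

```python
import numpy as np

def lock_in(signal, t, f_ref):
    """In-phase/quadrature demodulation at reference frequency f_ref.
    Averaging over an integer number of cycles acts as the low-pass
    filter; returns (amplitude, phase) of the f_ref component, which
    for LIT encode stress level and thermal-wave delay respectively."""
    s = np.sin(2 * np.pi * f_ref * t)
    c = np.cos(2 * np.pi * f_ref * t)
    x = 2.0 * np.mean(signal * s)   # in-phase component
    y = 2.0 * np.mean(signal * c)   # quadrature component
    return np.hypot(x, y), np.arctan2(y, x)
```

Applying this per pixel over an image sequence yields the amplitude and phase maps used to evaluate stresses; separating the much weaker dissipated-energy signal requires the additional processing the paper describes.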

Proceedings ArticleDOI
TL;DR: This paper describes how decentralized control theory can be used to control multiple cooperative robotic vehicles and uses decentralized methods to connect otherwise independent non-touching robotic vehicles so that they behave in a stable, coordinated fashion.
Abstract: This paper describes how decentralized control theory can be used to control multiple cooperative robotic vehicles. Models of cooperation are discussed and related to the input/output reachability and structural observability and controllability of the entire system. Whereas decentralized control research in the past has concentrated on using decentralized controllers to partition complex physically interconnected systems, this work uses decentralized methods to connect otherwise independent non-touching robotic vehicles so that they behave in a stable, coordinated fashion. A vector Liapunov method is used to prove stability of a single example: the controlled motion of multiple vehicles along a line. The results of this stability analysis have been implemented on two applications: a robotic perimeter surveillance system and a self-healing minefield.
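The "multiple vehicles along a line" example can be illustrated with a toy decentralized controller in which each follower uses only its predecessor's position — no global coordinator — and the platoon converges to a desired spacing. This is an illustrative discrete-time sketch under assumed dynamics, not the paper's vector Liapunov construction:

```python
import numpy as np

def step(x, gap, k=0.3):
    """One decentralized update: vehicle i observes only vehicle i-1
    and steers toward a point `gap` behind it; the leader x[0] holds
    position. Each controller is local, yet the coupled system is a
    stable cascade of first-order subsystems for 0 < k < 1."""
    new = x.copy()
    for i in range(1, len(x)):
        new[i] = x[i] + k * ((x[i - 1] - gap) - x[i])
    return new

def simulate(x0, gap, steps=300):
    """Iterate the decentralized update from initial positions x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = step(x, gap)
    return x
```

Each follower's error obeys e_i(t+1) = (1-k)·e_i(t) + k·e_{i-1}(t), so stability of the whole platoon follows from stability of each local loop — the same compositional style of argument a vector Liapunov analysis formalizes.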

Proceedings ArticleDOI
TL;DR: MIT Lincoln Laboratory is actively developing laser and detector technologies that make it possible to build a 3D laser radar with several attractive features, including capture of an entire 3D image on a single laser pulse, tens of thousands of pixels, few-centimeter range resolution, and small size, weight, and power requirements.
Abstract: MIT Lincoln Laboratory is actively developing laser and detector technologies that make it possible to build a 3D laser radar with several attractive features, including capture of an entire 3D image on a single laser pulse, tens of thousands of pixels, few-centimeter range resolution, and small size, weight, and power requirements. The laser technology is based on diode-pumped solid-state microchip lasers that are passively Q-switched. The detector technology is based on Lincoln-built arrays of avalanche photodiodes operating in the Geiger mode, with integrated timing circuitry for each pixel. The advantage of these technologies is that they offer the potential for small, compact, rugged, high-performance systems, which are critical for many applications.

Proceedings ArticleDOI
TL;DR: A CGIP (Computer-Generated Integral Photography) method is proposed and its feasibility is verified and autostereoscopic images with full color and full parallax were observed in real time.
Abstract: In this paper, we propose a CGIP (Computer-Generated Integral Photography) method and verify its feasibility. In CGIP, the elemental images of imaginary objects are computer-generated instead of being captured through a pickup process. Since this system is composed of only one lens array and conventional display devices, it is compact and cost-effective. Animated images can also be presented by time-varying elemental images. As a result, autostereoscopic images with full color and full parallax were observed in real time. Moreover, this method can be applied to a quasi-3D display system. If each camera picks up a scene which is a part of the total view and elemental images are generated so that each scene has a different depth, real objects captured by ordinary cameras can be displayed in quasi-3D. In addition, since it is easy to change the shape or size of elemental images in this scheme, we can observe the effect of several viewing parameters. This helps us to analyze the basic IP system. We perform an experiment with different lens arrays and compare the results. The lateral and depth resolution of the integrated image is limited by factors such as the image position, the object thickness, the lens width, and the pixel size of the display panel.
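The computer-generated "pickup" at the heart of CGIP amounts to projecting each 3-D object point through every elemental lens (treated as a pinhole) onto the display plane behind the array. A 1-D geometric sketch, with illustrative parameter names (lens pitch, and the gap between the lens array and the display panel; the paper's exact geometry is not reproduced):

```python
import numpy as np

def elemental_points(point, lens_pitch, n_lens, gap):
    """Project one object point through each pinhole of a lens array
    onto the display plane, by similar triangles.
    point = (x, z): lateral position and distance in front of the array.
    Returns the lateral hit position behind each lens on the display
    plane, located `gap` behind the array. 1-D geometry for clarity."""
    px, pz = point
    # lens centers, symmetric about the optical axis
    centers = (np.arange(n_lens) - (n_lens - 1) / 2) * lens_pitch
    # ray from the point through each pinhole, extended by `gap`
    return centers + gap * (centers - px) / pz
```

Repeating this over all object points (with occlusion and color handling) fills in the elemental images; displaying them behind the same lens array reverses the geometry and integrates the rays into a 3-D image.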