
Showing papers in "Measurement Science and Technology in 2000"


Journal ArticleDOI
TL;DR: Krystek reviews this comprehensive and self-contained treatment of random data analysis, which includes derivations of the key relationships in probability and random-process theory not usually found to such an extent in a book of this kind.
Abstract: This is a new edition of a book on random data analysis which has been on the market since 1966 and which was extensively revised in 1971. The book has been a bestseller ever since. It has been fully updated to cover new procedures developed in the last 15 years and extends the discussion to a broad range of applied fields, such as the aerospace and automotive industries or biomedical research. The primary purpose of this book is to provide a practical reference and tool for working engineers and scientists investigating dynamic data or using statistical methods to solve engineering problems. It is comprehensive and self-contained and expands the coverage of the theory, including derivations of the key relationships in probability and random-process theory not usually found to such an extent in a book of this kind. It could well be used as a teaching textbook for advanced courses on the analysis of random processes. The first four chapters present the background material on descriptions of data, properties of linear systems and statistical principles. They also include probability distribution formulas for one-, two- and higher-order changes of variables. Chapter five gives a comprehensive discussion of stationary random-process theory, including material on wave-number spectra, level crossings and peak values of normally distributed random data. Chapters six and seven develop mathematical relationships for the detailed analysis of single input/output and multiple input/output linear systems, including algorithms. In chapters eight and nine, important practical formulas for determining statistical errors in estimates of random data parameters and linear system properties from measured data are derived. Chapter ten deals with data acquisition and processing, including data qualification. Chapter eleven describes methods of data analysis such as data preparation, Fourier transforms, probability density functions, auto- and cross-correlation, spectral functions, joint record functions and multiple input/output functions. Chapter twelve shows how to handle nonstationary data analysis, the classification of nonstationary data, the probability structure of nonstationary data, the calculation of nonstationary mean values or mean square values, and the correlation and spectral structures of nonstationary data. The last chapter deals with the Hilbert transform, including applications for both nondispersive and dispersive propagation problems. All chapters include many illustrations and references as well as examples and problem sets. This allows the reader to use the book for private study purposes. Altogether the book can be recommended to practising engineers and scientists as support for their daily work, as well as to university readers as a teaching textbook for advanced courses. M Krystek

3,390 citations


Journal ArticleDOI
TL;DR: This book provides a most welcome review and grounding in the necessary basics of the subject, and will prove to be a most useful addition to the literature in the ever-expanding field of light scattering.
Abstract: Almost all solid particles and very many liquid drops are not spherical. In addition, particles may have internal structure, both homogeneous and heterogeneous, and there may be agglomeration. It has long been recognized that the well-known Mie theory for homogeneous spheres, and similar theories for simple shapes such as the infinite cylinder, are not adequate representations of the scattering by more complex shapes and structures. There may be significant differences between the calculated phase function and that found in reality, and a theory for homogeneous spheres will completely fail to predict polarization effects. In the absence of rigorous analytical solutions for particles of general shape and structure, recourse is made to numerical techniques. The development of powerful computers has enabled these calculations to be performed rapidly and accurately for a wide range of particle types and sizes. The growth in numerical techniques has been exponential. For these reasons this book has come at a very opportune time, and is a welcome review of a large field of expertise. Authors who are recognized masters review each subject, and all the major methods are covered. For completeness there are also sections dealing with examples of practical applications of the calculations to nature. The book opens with a foreword by the renowned H C van de Hulst, who provides an interesting historical review and perspective. An introductory section of three chapters follows, dealing with fundamental concepts and definitions. The first of these deals with scattering by single particles and moves on to multiple scattering and radiative transfer. The second chapter is concerned with methods for nonspherical particles. It briefly reviews the limited exact theories available and then covers numerical and approximate methods. Finally, there is a chapter covering the basic properties of the scattering matrix for small particles. Overall, this section provides a most welcome review and grounding in the necessary basics of the subject. The next two sections form the backbone of the book and are concerned with reviewing developments in numerical techniques. The first of these has chapters covering the method of separation of variables, the discrete dipole approximation, the T-matrix method and the finite difference time domain (FDTD) technique. The second section pursues inhomogeneous particles with refractive index profiles, heterogeneous particles with inclusions, multiple interacting particles and aggregates. At the end of this section is a chapter reviewing developments to date in the theory of scattering by statistically irregular particles. This is very welcome in light of the fact that the bulk of natural particles are not regular in the sense that their shapes can be predicted. The latter part of the book is the province of measurements and applications. Here it is slightly less satisfactory, being perhaps a little narrower in scope. Under measurements there is a description of one experimental method for the determination of the elements of the Stokes matrix and a description of one microwave facility for large-scale modelling of particles. The final section is a review of applications. This is largely concerned with environmental situations, covering LIDAR and radiative transfer methods for studies of clouds, microwave measurements of precipitation, scattering in marine environments and interplanetary dust. The book ends with a short chapter of biological applications.
These are all interesting and useful illustrative examples, but I wonder whether a wider view may have been appropriate. Applications in industrial situations come to mind. In summary this is a very worthwhile research publication. It is attractively presented and comprehensive. It will prove to be a most useful addition to the literature in the ever-expanding field of light scattering. A R Jones

637 citations


Journal ArticleDOI
TL;DR: The authors, who have extensive experience in industrial development (Bosch) as well as in academic research, introduce mechanical engineers to vehicle-specific signal processing and automatic control.
Abstract: From the Publisher: This book enables control engineers to understand the engine and vehicle models necessary for controller design and introduces mechanical engineers to vehicle-specific signal processing and automatic control. With only a few exceptions the approaches are close to those utilized in actual vehicles, rather than being purely theoretical. The authors have extensive experience in industrial development (Bosch) as well as in academic research.

490 citations


Journal ArticleDOI
TL;DR: In this article, a theoretical expression for the depth of the two-dimensional measurement plane is derived and it is shown that the particle concentration must be chosen judiciously in order to balance the desired spatial resolution and signal-to-noise ratio of the particle-image field.
Abstract: In particle image velocimetry experiments where optical access is limited or in microscale geometries, it may be desirable to illuminate the entire test section with a volume of light, as opposed to a two-dimensional sheet of light. With volume illumination, the depth of the measurement plane must be defined by the focusing characteristics of the recording optics. A theoretical expression for the depth of the two-dimensional measurement plane is derived and it is shown to agree well with experimental observations. Unfocused particle images, which lie outside the measurement plane, create background noise that decreases the signal-to-noise ratio of the particle-image fields. Results show that the particle concentration must be chosen judiciously in order to balance the desired spatial resolution and signal-to-noise ratio of the particle-image field.
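
Expressions of this type typically sum a diffraction-limited term and a geometric image-blurring term. One form quoted in the volume-illumination PIV literature is reproduced below for orientation only; the numerical constants should be treated as indicative and checked against the paper itself:

```latex
% Indicative depth of the measurement plane for volume-illuminated PIV:
% n = refractive index, \lambda = wavelength, NA = numerical aperture,
% d_p = particle diameter, \theta = light-collection half-angle.
\delta z_m \;\approx\; \frac{3\, n\, \lambda}{\mathrm{NA}^2}
  \;+\; \frac{2.16\, d_p}{\tan\theta} \;+\; d_p
```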

478 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the second edition of a textbook published in 1996 by McGraw Hill, which originates from a graduate-level course given by the authors at the University of Naples.
Abstract: This book is the second edition of a textbook published in 1996 by McGraw Hill and originates from a graduate-level course given by the authors at the University of Naples. The topics include kinematics, statics and dynamics of robot manipulators together with trajectory planning and active control. There are only minor additions to the first edition, mainly the use of quaternions to describe the orientation of the end effector and a short description of a closed-chain architecture for a manipulator (parallelogram arm). The book is largely devoted to serial manipulators, with special developments concerning active control, including adaptive control, robust control and stability analysis. Another strength of this book is the large number of problems proposed at the end of each chapter, together with a list of references related to it. The fundamental features covered by the text are illustrated on simple examples of serial manipulators (two-link planar arm, parallelogram arm) including analytical results and numerical tests. The book has nine chapters followed by three appendices. The first appendix is devoted to linear algebra, the second recalls some fundamental aspects of rigid body mechanics and the third gives some basic principles of feedback control of linear systems. Chapter one is an introduction to the study of robot manipulators, giving an interesting classification of their architectures and the corresponding workspaces, and describing the tasks for which they are used. After some standard examples of industrial manipulators, bibliographical reference texts are proposed, including textbooks on modelling and control of robots, general books on robotics, specialized texts, scientific robotic reviews and some international conferences on robotics. Chapters two, three and four are devoted to mechanical modelling of robot manipulators. The fundamental basics of kinematics are given in chapter two, including the representation of finite rotations by Euler angles or unit quaternions, homogeneous transformations, Denavit-Hartenberg parameters and workspace. The direct and inverse kinematical problems are solved in analytical form for some typical manipulator structures. The differential kinematics of robots are presented in chapter three, with an introduction to the geometric and analytic Jacobian matrices, kinematic singularities and redundancy. The inverse kinematic problem is presented, with special attention to the case of redundant robots, where the solution is obtained by a linear optimization problem leading to the introduction of the pseudo-inverse Jacobian matrix and to the solution of several objectives such as avoidance of collision with an obstacle or moving away from singularities. Several inverse kinematics algorithms are given with an interesting application to a three-link planar arm. Finally, a property of kineto-statics duality is deduced from the principle of virtual work applied to an equilibrium configuration of the robot. Chapter four is a standard presentation of the derivation of the dynamical model by Lagrange formulation and then by the Newton-Euler method. In the Lagrange formulation method, the linearity with respect to inertial parameters is shown and a detailed formulation of the dynamical model is obtained for a two-link Cartesian arm, a two-link planar arm and a parallelogram arm. The problem of dynamic parameter identification is also briefly presented from a numerical point of view.
The recursive algorithm constructed from the Newton-Euler formulation is presented and illustrated by considering a two-link planar arm. Finally, the operational space dynamic model is introduced. In chapter five, paths and trajectory planning in joint space and in operational space are presented; several classical methods of interpolation are described. Chapter six is an extensive study of active control of manipulators. Several methods are presented, involving classical independent joint control, non-linear centralized control, robust control and adaptive control. Both joint-space control and operational-space control are studied, together with stability analysis using Lyapunov functions. An interesting application to the two-link planar arm already introduced provides a comparison between various control schemes. Chapter seven deals with interaction control of serial manipulators with the working environment. Several strategies involving compliance control, impedance control, force control and hybrid control are presented. Chapter eight describes the actuators and the sensors used in robotics. Several types of servomotors (electric and hydraulic) are presented, together with the model giving their input/output relationship. Several kinds of sensors are also described, including encoders, tachometers, force and vision sensors. The last chapter gives a short presentation of the functional architecture of a robot's control system, including characteristics of the programming environment and the hardware architecture. In conclusion, the book provides a good insight into simulation and control of robot manipulators, with a detailed study of the various control strategies and several interesting and pedagogical applications. This book is an excellent review of the standard knowledge needed not only by graduate students but also by researchers interested in robot manipulators. M Pascal
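
The pseudo-inverse treatment of the Jacobian mentioned above lends itself to a compact illustration. The sketch below is not taken from the book: it is a minimal iterative inverse-kinematics loop for a two-link planar arm, with illustrative link lengths and gain.

```python
import numpy as np

def fk(q, l1=1.0, l2=0.8):
    """Forward kinematics of a two-link planar arm: joint angles -> (x, y)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=0.8):
    """Analytic geometric Jacobian of the two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_pseudoinverse(target, q0, steps=200, gain=0.5):
    """Iterative inverse kinematics: q <- q + gain * J^+ * (x_target - x(q))."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        err = target - fk(q)
        q += gain * np.linalg.pinv(jacobian(q)) @ err
    return q

q = ik_pseudoinverse(target=np.array([1.2, 0.6]), q0=[0.1, 0.1])
print(q, fk(q))  # converged joint angles and the reached end-effector position
```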

329 citations


Journal ArticleDOI
TL;DR: This paper reviews the use of liquid crystals in research, with a special emphasis on recent developments in the field; it provides the reader with an up-to-date background in this measurement technology and allows the researcher to decide whether liquid crystals would be suitable in specific applications.
Abstract: Liquid crystals have become an accurate and convenient means of measuring surface temperature and heat transfer for the gas turbine and heat transfer research communities. The measurement of surface shear stress using liquid crystals is finding increasing favour with aerodynamicists, and developments in these techniques ensure that liquid crystals will continue to provide key thermal and shear stress data in the future. The increasing use of three-dimensional finite element computational models has allowed industry to capitalize on the advantages of the full-surface data generated. The paper reviews the use of these complex materials in research with a special emphasis on recent developments in the field. The aim is to provide the reader with an up-to-date background in this measurement technology and to allow the researcher to decide whether liquid crystals would be suitable in specific applications.

254 citations


Journal ArticleDOI
TL;DR: In this paper, new algorithms for particle-tracking velocimetry are proposed and tested with typical particle images showing two-dimensional fluid flows, and the performance of the particle tracking is much improved by the new relaxation method and that of the individual-particle detection by the use of the dynamic threshold-binarization method.
Abstract: New algorithms for particle-tracking velocimetry are proposed and tested with typical particle images showing two-dimensional fluid flows. There are new ideas not only in the algorithm for the particle tracking itself but also in that for the individual-particle detection. The performance of the particle tracking is much improved by the new relaxation method, and that of the individual-particle detection by the use of the dynamic threshold-binarization method. A particular concern of the authors is the applicability of particle-tracking velocimetry to high-density particle images, contrary to what is usually believed about this type of particle-imaging velocimetry. These new algorithms are first tested on synthetic images with a variety of particle parameters and then on experimental visualizations showing jet and wake flows.
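
The paper's exact detection algorithm is not reproduced in the abstract. As a rough illustration of the idea behind dynamic (locally adaptive) threshold binarization, one common formulation compares each pixel with its local mean plus a noise margin; the window size and margin factor below are illustrative choices, not the authors' values.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, center_of_mass

def dynamic_threshold_binarize(img, window=15, k=2.0):
    """Binarize a particle image against a locally varying threshold.

    Each pixel is compared with the mean of its neighbourhood plus k local
    standard deviations, so bright particles are still detected when the
    background level drifts across the image."""
    img = img.astype(float)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img**2, window)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return img > mean + k * std

def detect_particles(img):
    """Return centroids of connected bright regions as candidate particles."""
    mask = dynamic_threshold_binarize(img)
    labels, n = label(mask)
    return center_of_mass(img, labels, range(1, n + 1))
```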

240 citations


Journal ArticleDOI
TL;DR: In this article, the authors used the linear least squares estimator to find independently and uniquely the parameters for a given data set, which has been used successfully in the pre-flight calibration of the state-of-the-art magnetometers on board the magnetic mapping satellites Orsted, Astrid-2, CHAMP and SAC-C.
Abstract: The calibration parameters of a vector magnetometer are estimated using only a scalar reference magnetometer. The method presented in this paper differs from those previously reported in its linearized parametrization. This allows the determination of three offsets (the signals in the absence of a magnetic field), three scale factors for normalization of the axes and three non-orthogonality angles which construct an orthogonal system intrinsic to the sensor. The advantage of this method compared with others lies in its linear least-squares estimator, which finds the parameters for a given data set independently and uniquely. A magnetometer may therefore be characterized inexpensively in the Earth's magnetic-field environment. This procedure has been used successfully in the pre-flight calibration of the state-of-the-art magnetometers on board the magnetic mapping satellites Orsted, Astrid-2, CHAMP and SAC-C. By using this method, full-Earth-field-range magnetometers (± 65536.0 nT) can be characterized down to an absolute precision of 0.5 nT, a non-orthogonality of only 2 arcsec and a resolution of 0.2 nT.
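
The linear least-squares idea can be sketched as follows: the squared scalar reference field is linear in the coefficients of a general quadratic form of the raw vector readings, so those coefficients come out of a single linear solve. This is a simplified illustration, not the authors' exact parametrization.

```python
import numpy as np

def fit_quadratic_form(b_raw, f_scalar):
    """Fit |B|^2 = x'Ax + b'x + c to scalar reference data by linear least squares.

    b_raw:    (N, 3) raw vector magnetometer readings
    f_scalar: (N,)   scalar reference magnitudes
    Returns the 10 coefficients of the quadratic form; offsets, scale factors
    and non-orthogonality angles can then be extracted from A and b."""
    x, y, z = b_raw.T
    design = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                              x, y, z, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(design, f_scalar**2, rcond=None)
    return coeffs
```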

216 citations


Journal ArticleDOI
TL;DR: In order to evaluate the image-analysis stage of particle-image velocimetry, the use of standard images has been proposed; these images can be applied to investigate the performance of any PIV technique.
Abstract: Particle-image velocimetry (PIV) offers many advantages for studying fluid mechanics, and many PIV techniques and systems have been developed. However, no standard tool for evaluating the effectiveness and accuracy of PIV systems has been established. If PIV is to be widely adopted in practice, there should be some means of evaluating the performance of each PIV system. PIV involves two processes, i.e. capturing the image for visualization and the image analysis. In order to evaluate the image analysis, the use of standard images has been proposed. Using these images, anybody can evaluate the effectiveness and accuracy of a PIV image analysis. The standard PIV images can be grouped into three categories, i.e. standard images for two-dimensional flow, custom-made images with tunable parameters and images for a transient flow. The standard PIV images that we have developed are distributed via the web site http://www.vsj.or.jp/piv as part of a collaboration with the Visualization Society of Japan. They can be applied to investigate the performance of any PIV technique and have already been accessed by more than 3000 researchers around the world.

210 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a pulsed picosecond diode laser and detected the scattered signal from a non-cooperative target surface using a semiconductor single-photon detector.
Abstract: In this paper, we report results obtained with a time-of-flight ranging/scanning system based on time-correlated single-photon counting. This system uses a pulsed picosecond diode laser and detects the scattered signal from a non-cooperative target surface using a semiconductor single-photon detector. A demonstration system has been constructed and used to examine the depth resolution obtainable as a function of the integrated number of photon returns. The depth resolution has been examined for integrated photon returns varying by five orders of magnitude, both by obtaining experimental measurements and by computer simulation. Depth resolutions of approximately 3 mm were obtained for only ten returned photons. The effect of the background signal, originating either from temporally uncorrelated light signals or from detector noise, has also been examined.
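
The reported dependence of resolution on the number of returns is what simple averaging statistics would suggest. As a back-of-envelope relation (not the paper's analysis): for a single-photon timing jitter σ_t and N averaged photon returns,

```latex
% Shot-noise-limited depth resolution for N averaged photon returns
% (c = speed of light, \sigma_t = single-photon timing jitter)
\delta z \;\approx\; \frac{c}{2}\,\frac{\sigma_t}{\sqrt{N}}
```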

185 citations


Journal ArticleDOI
TL;DR: The transient liquid crystal technique for convective heat transfer measurements is presented, in which a thermochromic liquid crystal coating is applied to the test surface and the colour-change time of the coating at every pixel location on the heat transfer surface during a transient test is measured using an image-processing system.
Abstract: This paper presents in detail the transient liquid crystal technique for convective heat transfer measurements. A historical perspective on the active development of liquid crystal techniques for convective heat transfer measurement is also given. The experimental technique involves applying a thermochromic liquid crystal coating to the test surface. The colour-change time of the coating at every pixel location on the heat transfer surface during a transient test is measured using an image-processing system. The heat transfer coefficients are calculated from the measured time responses of these thermochromic coatings. This technique has been used for turbine blade internal coolant passage heat transfer measurements as well as turbine blade film cooling heat transfer measurements. Results can be obtained on complex geometry surfaces provided they are visually accessible. Some heat transfer results for experiments with jet impingement, internal cooling channels with ribs, flow over simulated TBC spallation, flat-plate film cooling, cylindrical leading edges and turbine blade film cooling are presented for demonstration.
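
The heat transfer coefficient is conventionally extracted from the colour-change time via the one-dimensional semi-infinite conduction solution with a convective boundary condition. A minimal sketch of that inversion follows; the material properties and temperatures used in the example are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx  # erfcx(b) = exp(b^2) * erfc(b), overflow-safe

def h_from_colour_change(t, T_i, T_m, T_lc, k, alpha):
    """Solve the semi-infinite transient conduction relation
        (T_lc - T_i)/(T_m - T_i) = 1 - exp(b^2) erfc(b),  b = h sqrt(alpha t)/k
    for the heat transfer coefficient h, given the colour-change time t."""
    theta = (T_lc - T_i) / (T_m - T_i)
    beta = brentq(lambda b: 1.0 - erfcx(b) - theta, 1e-9, 50.0)
    return beta * k / np.sqrt(alpha * t)

# Illustrative values: perspex-like substrate, 30 K driving temperature step
h = h_from_colour_change(t=12.0, T_i=20.0, T_m=50.0, T_lc=35.0,
                         k=0.19, alpha=1.1e-7)
print(f"h ~ {h:.0f} W m^-2 K^-1")
```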

Journal ArticleDOI
TL;DR: In this paper, a very low-noise, high-input-impedance probe was developed to make non-contact measurements of electrical potentials generated by currents flowing in the human body.
Abstract: In this paper we describe a new very-low-noise, high-input-impedance probe developed to make non-contact measurements of electrical potentials generated by currents flowing in the human body. With a noise level of 2 µV Hz^-1/2 at 1 Hz, down to 0.1 µV Hz^-1/2 at 1 kHz, and an operational bandwidth from 0.01 Hz to 100 kHz, this probe would seem well suited to the detection of a wide range of electrical activity in the body.

Journal ArticleDOI
TL;DR: The problem of spectral estimation from velocity data sampled irregularly in time by a laser Doppler anemometer (LDA) is reviewed, from very early estimators based on slot correlation to more refined estimators that build upon signal reconstruction and equidistant re-sampling in time.
Abstract: We review the problem of spectral estimation from velocity data sampled irregularly in time by a laser Doppler anemometer (LDA) from very early estimators based on slot correlation to more refined estimators, which build upon a signal reconstruction and an equidistant re-sampling in time. The discussion is restricted to single realization anemometry, i.e. excluding multiple particle signals. We classify the techniques and make an initial assessment before describing currently used methods in more detail. An intimately related subject, the simulation of LDA data, is then briefly reviewed, since this provides a means of evaluating various estimators. Using the expectation and variance as figures of merit, the advantages and disadvantages of several estimators for varying types of turbulent velocity spectral distributions are discussed. A set of recommendations is put forward as a conclusion.
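
As an illustration of the reconstruction-plus-resampling class of estimators mentioned above, the sketch below applies zero-order (sample-and-hold) reconstruction to irregularly sampled velocities and then a standard spectral estimate. This is one simple member of the family, chosen for brevity, not a recommended best estimator; sample-and-hold is known to introduce a low-pass filtering and noise bias.

```python
import numpy as np
from scipy.signal import welch

def sample_and_hold_spectrum(t, u, fs):
    """Estimate a velocity spectrum from irregular LDA samples.

    t, u: particle arrival times and velocity samples (irregularly spaced)
    fs:   resampling frequency for the equidistant grid"""
    t_grid = np.arange(t[0], t[-1], 1.0 / fs)
    idx = np.searchsorted(t, t_grid, side="right") - 1  # last sample before each grid point
    u_grid = u[np.clip(idx, 0, len(u) - 1)]
    return welch(u_grid, fs=fs, nperseg=1024)

# Synthetic irregular samples of a 50 Hz oscillation with noise
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 20000))
u = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
f, Puu = sample_and_hold_spectrum(t, u, fs=2000.0)
```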

Journal ArticleDOI
TL;DR: In this paper, a special receiving optical system was developed to acquire the interferential image without any losses due to the overlapping of neighbouring fringes; the system enhances the spatial resolution by compressing the circular fringe image into a linear image while maintaining the fringe-spacing information, which is related to the particle diameter.
Abstract: The present investigation describes a novel measurement technique used to determine both the velocity and the diameter of transparent spherical particles, mainly droplets and gas bubbles. We have developed a special receiving optical system that acquires the interferential image without any losses due to the overlapping of neighbouring fringes. The system enhances the spatial resolution by compressing the circular fringe image into a linear image while maintaining the fringe-spacing information, which is related to the particle diameter. The compression simplifies and ensures the detection of the interferograms in the digitized image. The image provides both the location and the size of particles simultaneously. The velocities of individual particles are obtained by capturing sequential images of interferograms with double-pulsed illumination. The technique was examined by measuring monodisperse droplets, with a resultant error of less than 3% in the arithmetic mean diameter.

Journal ArticleDOI
TL;DR: In turbine engine development, rotor blade vibration measurements are made to ensure the blades are sufficiently durable in later service; as described in this paper, a number of probes installed in the engine casing sense the points in time at which the blades pass the probes.
Abstract: In turbine engine development, rotor blade vibration measurements are made to ensure the blades are sufficiently durable in later service. These measurements are conventionally taken using strain gauges or the frequency-modulated grid technique. In recent years, a non-contact method of blade vibration measurement has become an increasingly accepted, low-cost alternative technique. This method uses a number of probes installed in the engine casing to sense the points in time at which the blades pass the probes. When analysed, these blade passing times yield data on blade vibrations. This paper briefly describes the configuration of such a measurement system and the operating principle of two different probe types. An extended explanation is then provided of the various analysis methods in use at MTU. The methods are described by means of the essential equations and elucidated using comprehensive compressor test data.
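
The underlying conversion from blade passing times to deflection is simple: for a rotor of tip radius R turning at angular speed Ω, a blade arriving Δt earlier or later than its rigid-body expectation has a circumferential tip deflection of roughly R·Ω·Δt. A minimal sketch of this first step (the geometry and speed below are illustrative; the analysis methods in the paper go well beyond this):

```python
import numpy as np

def tip_deflections(t_measured, t_expected, radius, omega):
    """Convert blade arrival-time deviations at a casing probe into
    circumferential tip deflections: dx = R * Omega * dt."""
    return radius * omega * (np.asarray(t_measured) - np.asarray(t_expected))

# Illustrative numbers: 0.4 m tip radius, 10 000 rpm rotor
omega = 10000 * 2 * np.pi / 60.0                          # rad/s
dt = np.array([1.2e-6, -0.8e-6, 0.3e-6])                  # timing deviations (s)
print(tip_deflections(dt, 0.0, radius=0.4, omega=omega))  # deflections in metres
```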

Journal ArticleDOI
TL;DR: In this paper, a portable remote methane sensor using a 1.65 µm InGaAsP distributed-feedback laser is developed; it is designed as a man-portable long-path absorption lidar using a topographical target.
Abstract: A portable remote methane sensor using a 1.65 µm InGaAsP distributed-feedback laser is developed. It is designed as a man-portable long-path absorption lidar using a topographical target with a range of up to about 10 m. An operator can search for gas leaks easily by scanning the laser light. High sensitivity is accomplished by means of second-harmonic detection using frequency-modulation spectroscopy. The experimental detection limit (signal-to-noise ratio = 1) with a diffusive target of magnesium oxide (6 m range, normal incidence) is 450 ppb m with a time constant of 100 ms. Measurements of the reflectance of real targets show that the sensor can distinguish small gas leaks (typically 10 cm^3 min^-1) within a range of several metres.
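
Second-harmonic detection of the kind described can be sketched as a digital lock-in: the detector signal is multiplied by a reference at twice the modulation frequency and low-pass filtered. This is a generic sketch of 2f demodulation, not the instrument's actual electronics; the filter order and bandwidth are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def demodulate_2f(signal, fs, f_mod, bw=50.0):
    """Extract the second-harmonic (2f) component of a modulated signal.

    signal: detector samples; fs: sample rate (Hz); f_mod: modulation freq (Hz)
    Returns the low-passed in-phase and quadrature 2f components."""
    t = np.arange(signal.size) / fs
    ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
    ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
    b, a = butter(4, bw / (fs / 2))   # low-pass at the detection bandwidth
    i_2f = filtfilt(b, a, signal * ref_i)
    q_2f = filtfilt(b, a, signal * ref_q)
    return i_2f, q_2f
```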

Journal ArticleDOI
TL;DR: In this paper, four methods of range measurement for airborne ultrasonic systems are compared on the basis of bias error, standard deviation, total error, robustness to noise, and the difficulty/complexity of implementation.
Abstract: Four methods of range measurement for airborne ultrasonic systems - namely simple thresholding, curve fitting, sliding window and correlation detection - are compared on the basis of bias error, standard deviation, total error, robustness to noise, and the difficulty/complexity of implementation. Whereas correlation detection is theoretically optimal, the other three methods can offer acceptable performance at much lower cost. The performance of all methods has been investigated as a function of target range, azimuth and signal-to-noise ratio. Curve fitting, sliding window and thresholding follow correlation detection in order of decreasing complexity. Apart from correlation detection, minimum bias and total error are most consistently obtained with the curve-fitting method. On the other hand, the sliding-window method is always better than the thresholding and curve-fitting methods in terms of minimizing the standard deviation. The experimental results are in close agreement with the corresponding simulation results. Overall, the three simple and fast processing methods provide a variety of attractive compromises between measurement accuracy and system complexity. Although this paper concentrates on ultrasonic range measurement in air, the techniques described may also find application in underwater acoustics.
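
Of the four methods compared, correlation detection is the easiest to state compactly: cross-correlate the received echo with the transmitted template and convert the lag of the correlation peak into range. A minimal sketch (it assumes transmission starts at sample zero; the threshold and window variants need per-method tuning not shown here):

```python
import numpy as np

def correlation_range(echo, template, fs, c=343.0):
    """Estimate target range by matched-filter (correlation) detection.

    echo:     received signal samples
    template: transmitted pulse shape
    fs:       sample rate (Hz); c: speed of sound in air (m/s)
    Range is c * t / 2 because the pulse travels out and back."""
    corr = np.correlate(echo, template, mode="valid")
    t_flight = np.argmax(corr) / fs
    return c * t_flight / 2.0
```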

Journal ArticleDOI
TL;DR: The principles and performance of a confocal microscope, together with the measurement system, are described; both the intensity and the auto-focus methods are used with the system to measure two-dimensional surface roughness, and results are presented.
Abstract: Surface topography and, in particular, roughness and form play an important role in determining the functional performance of engineering parts. The measurement and understanding of surface topography are rapidly attracting the attention of the physicist, the biologist and the chemist as well as the engineer. Optics has long played an important role in measurement and, with the advent of opto-mechatronics, it is once again at the forefront of the field. In this paper, the principles and performance of a confocal microscope, together with the measurement system, are described. Suitable fixtures are developed and integrated with the computer system for generating three-dimensional surface and form data. Software for data acquisition, analysis of various parameters (including new parameters) and visualization of surface geometrical features has been developed. Both the intensity and the auto-focus methods are used with the system to measure two-dimensional surface roughness, and results are presented. The measurement and characterization of three-dimensional surface topography and form error will be presented in part II of this paper.

Journal ArticleDOI
TL;DR: In this article, the state of the art and recent developments of laser-Doppler velocimetry, Rayleigh spectroscopy, spontaneous Raman spectroscopy, coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence are reviewed.
Abstract: The demands on the quality of velocity, concentration and temperature data from turbulent combustion systems for comparison with numerical predictions are rising with the increasing performance of models for the numerical description of these effects. These demands have led to a high degree of maturity in laser-based methods for concentration, temperature and species information. Laser-based methods are able to give the required information without severely disturbing the observed effects and with the needed accuracy and temporal and spatial resolution. This article reviews the state of the art and recent developments of laser-Doppler velocimetry, Rayleigh spectroscopy, spontaneous Raman spectroscopy, coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence. Emphasis is placed not only on aspects of these measurement techniques connected with spatially and temporally resolved quantitative measurements in turbulent combustion, but also on the interplay between the requirements these methods impose on the object under study, the constraints of the object itself and the demands that methods of numerical prediction place on the generated data. New developments in, and requirements on, the reviewed methods originating from new trends in combustion modelling are included.

Journal ArticleDOI
TL;DR: In this article, the application of electrical capacitance tomography (ECT) for media of high dielectric permittivity such as water is discussed, and the performance of an ECT sensor is analysed numerically for a range of dielectric materials (1 ≤ εr ≤ 80) and geometrical parameters.
Abstract: This paper concerns the application of electrical capacitance tomography (ECT) for media of high dielectric permittivity such as water. The performance of an ECT sensor was analysed numerically for a range of dielectric materials (1 ≤ εr ≤ 80) and geometrical parameters. On the basis of numerical simulations an ECT sensor with internal electrodes was built. Experimental results concerned with imaging air voids in distilled water and water-continuous dispersions of oil, using a commercially available PTL tomography system, are presented. Effects arising from the conductivity of water are studied and prospects of using the ECT system in an `electrical resistivity' mode for weakly conductive solutions are outlined.

Journal ArticleDOI
TL;DR: Digital speckle pattern interferometry (DSPI) is the generic name for a class of important interferometric techniques such as TV holography and phase-shifting pattern interferometry, all of which involve similar optical and electronic principles, as discussed by the authors.
Abstract: From the Publisher: Digital Speckle Pattern Interferometry (DSPI) is the generic name for a class of important interferometric techniques such as TV holography, electronic holography or phase-shifting pattern interferometry, all of which involve similar optical and electronic principles. These techniques are increasingly used in a wide range of fields including experimental mechanics, vibration analysis and non-destructive testing. Digital Speckle Interferometry and Related Techniques provides a single source of information in this rapidly progressing field. Containing contributions from leading experts, it provides the key background information, including the fundamental concepts, techniques and applications, and presents the major technological progress that has contributed to revitalization of the field over the past fifteen years, including digital speckle photography and digital holographic interferometry. This is an invaluable text for practising engineers in industry and research institutes, and for academic and development researchers in scientific and engineering disciplines who use, or are interested in, DSPI. This title will also be of interest to teachers and final-year undergraduate/postgraduate students of physics, optics, experimental mechanics, photomechanics, optical metrology, engineering metrology and non-destructive testing.

Journal ArticleDOI
TL;DR: A theoretical analysis is performed of an SPR instrument using a convergent beam, a linear detector with various numbers of pixels and various analogue-to-digital converters with resolutions ranging from 8 to 16 bits.
Abstract: Surface plasmon resonance (SPR) sensors are used to study biomolecular interactions. We have performed a theoretical analysis of an SPR instrument using a convergent beam, a linear detector with various numbers of pixels and various analogue-to-digital converters (ADCs) with resolutions ranging from 8 to 16 bits. Studies of small molecules at low concentrations or with low affinities are limited by the instrumental set-up, e.g. by the resolution, linearity and noise. The amplitudes of these parameters are highly dependent on the detector, ADC and dip-finding algorithm used. We have studied several dip-finding algorithms, e.g. intensity measurements, second- and third-order polynomial fits and centroid algorithms. Each algorithm used with the ADC and the detector has a resolution associated with it. Some algorithms also have an intrinsic algorithm error that is dependent on the number of pixels and the shape of the dip. A weighted centroid algorithm that has an excellent overall performance is described. If an accuracy of 10^-6 refractive index units (RIU) is satisfactory, a 12-bit ADC and a 64-pixel detector are appropriate. Theoretically, by using a 16-bit ADC and a 1024-pixel detector, a resolution of better than 10^-9 RIU is obtainable.
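
A weighted centroid of the kind evaluated here can be written in a few lines: pixels below a threshold contribute to the dip position with weights that grow with their depth below that threshold. The weighting exponent and threshold choice below are illustrative; the paper's exact weighting is not reproduced in the abstract.

```python
import numpy as np

def weighted_centroid_dip(intensity, threshold=None, power=2.0):
    """Locate an SPR reflectance dip on a linear detector array.

    Pixels below `threshold` are weighted by (threshold - I)^power and the
    dip position is their weighted mean pixel index (sub-pixel resolution)."""
    intensity = np.asarray(intensity, dtype=float)
    if threshold is None:
        threshold = intensity.min() + 0.5 * (intensity.max() - intensity.min())
    w = np.clip(threshold - intensity, 0.0, None) ** power
    pixels = np.arange(intensity.size)
    return np.sum(w * pixels) / np.sum(w)
```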

Journal ArticleDOI
TL;DR: A direct heat flux gauge (DHFG) consisting of an insulating layer mounted on a metal substrate has been developed; it measures the heat flux across the insulating layer by measuring the top surface temperature with a sputtered thin-film gauge (TFG) and the metal temperature with a thermocouple.
Abstract: A new type of direct heat flux gauge (DHFG), comprising an insulating layer mounted on a metal substrate, has been developed. The gauge measures the heat flux across the insulating layer by measuring the top surface temperature with a sputtered thin-film gauge (TFG) and the metal temperature with a thermocouple. The TFGs are platinum temperature sensors with a physical thickness of less than 0.1 µm, deposited on the insulating layer. The thermal properties and the ratio of the thickness to the thermal conductivity of the insulating layer have been calibrated. A detailed method of analysis for calculating the surface heat flux from DHFG temperature traces is presented. The advantages of the DHFG include its high accuracy, its wide range of frequency response (from dc to 100 kHz) and, most significantly, that no knowledge of the structure of the metal substrate is required. Since the metal substrate is of high conductivity, few thermocouples are required to monitor the small spatial variation of the metal temperature, whereas multiple thin-film gauges may be employed. The DHFGs have been applied to a gas turbine nozzle guide vane and tested successfully in the Oxford Cold Heat Transfer Tunnel.
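
In the quasi-steady limit the gauge's operating principle reduces to Fourier's law across the insulating layer, which is exactly where the calibrated thickness-to-conductivity ratio enters (the paper's detailed analysis generalizes this to the transient case):

```latex
% Quasi-steady component of the measured heat flux across the insulating layer
% (k = layer conductivity, d = layer thickness; the ratio d/k is calibrated)
q \;=\; \frac{k}{d}\,\bigl(T_{\mathrm{surface}} - T_{\mathrm{metal}}\bigr)
```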

Journal ArticleDOI
TL;DR: Raman excitation plus laser-induced electronic fluorescence (RELIEF) images the motion of oxygen molecules in air and other gas mixtures as mentioned in this paper, which is accomplished by tagging oxygen molecules through vibrational excitation and imaging them after a short period of time by LIEF.
Abstract: Raman excitation plus laser-induced electronic fluorescence (RELIEF) images the motion of oxygen molecules in air and other gas mixtures. This is accomplished by tagging oxygen molecules through vibrational excitation and imaging them after a short period of time by laser-induced electronic fluorescence. The vibrational lifetime of oxygen is sufficiently long and the signal sufficiently strong to allow this technique to be used over a wide range of flow conditions, from low subsonic to hypersonic, and in a variety of gas mixtures including high humidity environments. The utilization of a molecular tagging technique such as this is critical for environments in which seeding is impossible or unreliable and for measurements in which a wide range of scales needs to be observed simultaneously. Two experiments which have been conducted at national laboratories in medium- to large-scale facilities are reported. At the Arnold Engineering Development Center, RELIEF was used to examine velocity in a 1 m diameter tunnel for applications in the area of engine testing. At the NASA Langley Research Center, RELIEF is being used to examine supersonic mixing of helium in air in a coaxial jet in association with studies of fuel-air mixing in hypersonic engines. These applications are two examples of the wide range of practical uses for this new technology.

Journal ArticleDOI
TL;DR: Fast-response aerodynamic probes are a promising alternative to other time-resolving measurement techniques such as hot-wire anemometry or laser anemometry; the better understanding of unsteady flow phenomena that they enable is a key to further improvements in turbomachinery.
Abstract: A better understanding of unsteady flow phenomena encountered in rotor-stator interactions is a key to further improvements in turbomachinery. Besides CFD methods yielding 3D flow field predictions, time-resolving measurement techniques are necessary to determine the instantaneous flow quantities of interest. Fast-response aerodynamic probes are a promising alternative to other time-resolving measurement techniques such as hot-wire anemometry or laser anemometry. This contribution gives an overview of the fast-response probe measurement technique, with the emphasis on the total system and its components, the development methods, the operation of such systems and the data-processing requirements. A thorough optimization of all system components (such as sensor selection and packaging, probe tip construction, probe aerodynamics and data analysis) is the key to successful development. After a description of the technique, examples of applications are given to illustrate its potential. Some remarks refer to recent experience gained in the development and application of the ETH FRAP system.

Journal ArticleDOI
TL;DR: In this article, a new high-resolution PIV technique based on a gradient method is proposed, in which the pixel unit displacement is detected by the iterative method and the sub-pixel displacement is evaluated by the use of the gradient method instead of the three-point Gaussian peak fitting technique.
Abstract: An iterative PIV technique combining iterative cross-correlation with three-point Gaussian peak fitting for sub-pixel analysis can improve the spatial resolution and accuracy of measurement. It is reported that the root-mean-square (RMS) error of this technique is of the order of only 0.04 pixels. However, a large interrogation window, typically 32×32 pixels or larger, must be used to achieve this high sub-pixel accuracy, resulting in low spatial resolution: the high accuracy is not compatible with high spatial resolution. In this paper, a new high-resolution PIV technique based on a gradient method is proposed. Initially, the pixel-unit displacement is detected by the iterative method. Then, the sub-pixel displacement is evaluated by the use of the gradient method instead of the three-point Gaussian peak-fitting technique. The error of the proposed technique is assessed by Monte Carlo simulations. The RMS error is of the order of 0.01 pixels even with a small interrogation window, for instance 13×13 pixels or smaller. Thus, the method achieves both high sub-pixel accuracy and high spatial resolution.
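
A generic gradient (optical-flow-type) sub-pixel estimator of the kind referred to can be posed as a linear least-squares problem on the image gradients within the interrogation window. The sketch below is an illustrative formulation, not the authors' exact algorithm.

```python
import numpy as np

def gradient_subpixel_shift(win1, win2):
    """Estimate the sub-pixel displacement between two interrogation windows.

    Linearizing the brightness constancy assumption gives, at every pixel,
    Ix*u + Iy*v = -It, which is solved in the least-squares sense for (u, v).
    Assumes the integer part of the displacement has already been removed
    (e.g. by the iterative cross-correlation step)."""
    win1 = win1.astype(float)
    win2 = win2.astype(float)
    iy, ix = np.gradient(0.5 * (win1 + win2))  # spatial gradients (rows, cols)
    it = win2 - win1                           # temporal difference
    A = np.column_stack([ix.ravel(), iy.ravel()])
    u, v = np.linalg.lstsq(A, -it.ravel(), rcond=None)[0]
    return u, v
```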

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the subject of quantum tomography and present the classic solutions to a number of problems in the area, including quantum theory of light and quasi-probability functions.
Abstract: This is a well written book; it makes easy and pleasant reading for both experts and physicists working in other areas. The book is almost self-contained. It comprises six chapters. In the first three the author introduces the subject and deals with basic results and analytic tools from quantum optics: quantum theory of light and quasi-probability functions. In the fourth chapter he discusses the quantum-mechanical description of simple optical instruments. The last two chapters are the core of the book; they are devoted to a discussion of quantum tomography, joint measurement of position and momentum and the quantum-optical phase. The author was directly involved in the development of the field, which guarantees a fresh and clear exposition. At the same time he makes an open effort for a general (re)formulation of the subject and to present the `classic' solutions to a number of problems in the area. Overall, the presentation of quantum tomography is shaped by a personal view but it accounts sufficiently for the different approaches discussed in the literature. My only reservation is the way in which connections between classical and quantum tomography are presented. In particular, I find that an explicit statement that classical tomography (inverse Radon transform) cannot be straightforwardly applied to the quantum case should be made. The book was published in 1997. Since then the field has been growing rapidly and impetuously and the book shows its age. Nevertheless, it is still a useful reference tool for active researchers and a nice and valuable introduction for (graduate) students who are entering this area. It is worth noting that only a few books have been published on this subject; Leonhardt's volume is well written and to the point. The book is dedicated to Harry Paul, the supervisor of Leonhardt's postdoctoral thesis and the coauthor of most of his papers in the area. M G A Paris

Journal ArticleDOI
TL;DR: In this article, a stereo-imaging technique for the simultaneous measurement of the size and velocity of solid/liquid particles in dispersed two-phase flows is developed, in which particles are illuminated with back light provided by two strobe lamps.
Abstract: A stereo-imaging technique for the simultaneous measurement of the size and velocity of solid/liquid particles in dispersed two-phase flows is developed. Particles are illuminated with back light provided by two strobe lamps. Double-pulsed strobe flashing is performed synchronously with the framing of the CCD cameras to achieve the frame-straddling illumination mode for the measurement of particle velocities. Silhouetted particle images are acquired with two black-and-white CCD cameras in a stereo configuration. The particle images are analysed with a specially devised procedure, which can faithfully detect the perimeters of both spherical and non-spherical particle images. The use of stereo imaging permits measurement of all three velocity components and also resolution of the depth-of-field effect in particle sizing, a problem that has been a key subject in the development of accurate particle-sizing techniques based on back lighting. The technique developed here is capable of sizing 10-500 µm particles to within ±4 µm. Its validity is demonstrated by the measurement of a variety of transparent/opaque and spherical/non-spherical particles falling through a vertical pipe.

Journal ArticleDOI
TL;DR: In this paper, two complementary unseeded molecular flow tagging techniques for gas-flow velocity field measurement at low and high temperature are demonstrated, and the velocity field is extracted from OTV images in an air jet using the image correlation velocimetry (ICV) method.
Abstract: Two complementary unseeded molecular flow tagging techniques for gas-flow velocity field measurement at low and high temperature are demonstrated. Ozone tagging velocimetry (OTV) is applicable to low-temperature air flows whereas hydroxyl tagging velocimetry (HTV) is amenable to use in high-temperature reacting flows containing water vapour. In OTV, a grid of ozone lines is created by photodissociation of O2 by a narrowband 193 nm ArF excimer laser. After a fixed time delay, the ozone grid is imaged with a narrowband KrF laser sheet that photodissociates the ozone and produces vibrationally excited O2 that is subsequently made to fluoresce by the same KrF laser light sheet via the O2 B^3Σu^- (v' = 0, 2) ← X^3Σg^- (v'' = 6, 7) transition. In HTV, a molecular grid of hydroxyl (OH) radicals is written into a flame by single-photon photodissociation of vibrationally excited H2O by a 193 nm ArF excimer laser. After displacement, the OH tag line position is revealed through fluorescence caused by OH A^2Σ^+ ← X^2Π (3←0) excitation using a 248 nm tunable KrF excimer laser. OTV and HTV use the same lasers and can simultaneously measure velocities in low- and high-temperature regions. Instantaneous flow-tagging grids are measured in air flows and a flame. The velocity field is extracted from OTV images in an air jet using the image correlation velocimetry (ICV) method.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an original scheme for the systematic treatment of TVH and review the existing techniques according to it; they split the measurement process into four highly independent stages (illumination and observation geometry, temporal treatment, secondary-correlogram generation and fringe-pattern analysis) and establish a common notation to formulate the corresponding techniques.
Abstract: Television holography (TVH) can be defined as `the family of optical measurement techniques based on the electronic recording and processing of holograms'. Image-plane TVH was introduced in the early 1970s with the name `electronic speckle-pattern interferometry' (ESPI). Since then, TVH has undergone an impressive development and become one of the most promising optical techniques for non-destructive testing and industrial inspection. The aim of this review is to propose an original scheme for the systematic treatment of TVH and to review the existing techniques according to it. In this approach we split the measurement process into four highly independent stages (illumination and observation geometry, temporal treatment, secondary-correlogram generation and fringe-pattern analysis) and establish a common notation to formulate the corresponding techniques. Such a strategy allows the free combination of the techniques proposed for each stage as building blocks to obtain every particular variant of the whole TVH measurement process, whether it has already been reported or not, and also the incorporation of new techniques while retaining compatibility with the existing variants of the previous and following stages.