
Showing papers in "Measurement Science and Technology in 2001"


Journal ArticleDOI
TL;DR: The appearance of this book is quite timely as it provides a much needed state-of-the-art exposition on fault detection and diagnosis, a topic of much interest to industrialists.
Abstract: The appearance of this book is quite timely as it provides a much needed state-of-the-art exposition on fault detection and diagnosis, a topic of much interest to industrialists. The material included is well organized with logical and clearly identified parts; the list of references is quite comprehensive and will be of interest to readers who wish to explore a particular subject in depth. The presentation of the subject material is clear and concise, and the contents are appropriate to postgraduate engineering students, researchers and industrialists alike. The end-of-chapter homework problems are a welcome feature as they provide opportunities for learners to reinforce what they learn by applying theory to problems, many of which are taken from realistic situations. However, it is felt that the book would be more useful, especially to practitioners of fault detection and diagnosis, if a short chapter on background statistical techniques were provided. Joe Au

1,553 citations


Journal ArticleDOI
TL;DR: This is a comprehensive book discussing several methods for the identification of nonlinear systems, ranging from linear optimization techniques to fuzzy logic and nonlinear adaptive control, and Nelles has certainly described an extensive number of results.
Abstract: This is a comprehensive book discussing several methods for the identification of nonlinear systems. Identification is extremely relevant in applications and only recently has much ongoing research addressed the pressing problem of identifying systems with nonlinearities. In this respect, the book is timely as it is a collection of results from many different areas in applied science, ranging from linear optimization techniques to fuzzy logic and nonlinear adaptive control. The declared aim is `to provide engineers and scientists in academia and industry with a thorough understanding of the underlying principles of nonlinear system identification'. At the same time, the author wishes to enable users to apply the methods illustrated in the book. The book is well structured and divided into four distinct parts. The first part is entirely devoted to an overview of the main optimization techniques for nonlinear problems. Least squares methods and other classical strategies such as general gradient-based algorithms are discussed. While the presentation is clear, it is too wordy at times, making it difficult to appreciate the key issues involved. A set of diagrams and summarizing tables is included, though, to improve the overall clarity and highlight similarities and differences. The second part is mostly devoted to static models such as linear, polynomial and look-up table models. The main emphasis is on neural networks and fuzzy logic. The results are clearly expounded but the aim of giving a general overview of too many different approaches in some cases hampers the clarity of the exposition. Neuro-fuzzy models are presented in chapter 12 and further detailed in chapters 13 and 14, where local linear neuro-fuzzy models are discussed. In particular, chapter 13 focuses on methods proposed by the author. Despite their usefulness, I found that the choice of dedicating two entire chapters to such methods causes a slight imbalance in the presentation. Up to chapter 13, the discussion is quite well balanced and different methods are given the space needed to expound the main results. Unlike the other strategies, in my view, local neuro-fuzzy approaches are treated in far too much detail. This goes beyond the scope of the book, which is to give a general, balanced overview of the available methods. A summary of the second part is given in chapter 15, where the author reinforces the view that local neuro-fuzzy methods should be more widely applied for static modelling problems. Dynamic models are the subject of the third main section of the book. Linear dynamic system identification is discussed in chapter 16, where time series models are presented together with multivariable methods and other linear approaches. Nonlinear dynamic systems are considered in chapter 17 and are followed by classical polynomial approaches in chapter 18. Neural and fuzzy dynamic models are treated together with local neuro-fuzzy dynamic systems in the remaining chapters of this third part. Again, particular emphasis is given to local neuro-fuzzy systems, which have been the subject of research and development by the author. Unfortunately, this part does not include a chapter dedicated to summarizing the main results expounded. It must be noted, though, that many diagrams and schematics do help in highlighting the main results. Nevertheless, an extensive summary such as the one included at the end of the second part would have been useful. 
As I have indicated, Nelles has certainly described an extensive number of results in the book. On the other hand, more recent methods based on developments in nonlinear dynamics, such as nonlinear time series analysis, which have been successfully used to identify nonlinear systems, have not been included in the book. I hope they will be incorporated in later editions, as they have the potential to play an important role in the identification of complex models. Applications are discussed in the fourth and last part of the book. The problems presented are interesting but again it becomes apparent that local linear neuro-fuzzy methods are the author's preferred approach. This bias, which might well be motivated by the author's experience, should in my view be counterbalanced by applications showing the use of other methods. Some are indeed included in the final chapters of the book but I would have liked to see a few more problems. Two appendices recall some useful results from linear algebra, vector calculus and statistics and are well suited to a general readership. An impressive reference list of more than 400 items completes the book, representing an invaluable starting point for further research. As mentioned in the Preface, throughout the book Nelles tries to keep the mathematical description to a basic level. This indeed makes the textbook accessible to a wider audience. Unfortunately, it also results at times in lengthy, wordy descriptions of the most intricate approaches. As a consequence, users who wish to apply some of the methods discussed to problems that interest them will often find that they need to look up further details from other sources. In this respect, the extensive reference list at the end of the book will certainly be helpful. Despite this disadvantage, the book is certainly an invaluable archive of available strategies for nonlinear system identification, which will undoubtedly help readers with the choice of the particular method to use. In conclusion, as I have indicated, I found the book a well-packaged overview of the main results concerned with nonlinear system identification, but I believe that the description is wordy at times and not rigorous enough. Contrary to what is stated in the Preface, I believe that the book is not self-contained: readers will undoubtedly need to look up further references to be able to make use of the methods illustrated. On the other hand, the book should be a useful reference for students. It certainly deserves to be included in the reading list of any course on nonlinear system identification and optimization. Mario di Bernado

1,451 citations


Journal ArticleDOI
TL;DR: In this paper, the proper orthogonal decomposition (POD) is combined with two new vortex identification functions, Γ1 and Γ2, to identify the locations of the center and boundary of the vortex on the basis of the velocity field.
Abstract: Particle image velocimetry (PIV) measurements are made in a highly turbulent swirling flow. In this flow, we observe a coexistence of turbulent fluctuations and an unsteady swirling motion. The proper orthogonal decomposition (POD) is used to separate these two contributions to the total energy. POD is combined with two new vortex identification functions, Γ1 and Γ2. These functions identify the locations of the centre and boundary of the vortex on the basis of the velocity field. The POD computed for the measured velocity fields shows that two spatial modes are responsible for most of the fluctuations observed in the vicinity of the location of the mean vortex centre. These two modes are also responsible for the large-scale coherence of the fluctuations. The POD computed from the Γ2 scalar field shows that the displacement and deformation of the large-scale vortex are correlated to these modes. We suggest the use of such a method to separate pseudo-fluctuations due to the unsteady nature of the large-scale vortices from fluctuations due to small-scale turbulence.
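For readers who wish to try this kind of vortex identification on their own PIV data, a minimal sketch of a Γ1-type centre criterion is given below: it computes, at each grid point P, the mean sine of the angle between the vector PM to a neighbouring point M and the velocity at M, so that |Γ1| approaches 1 near a vortex centre. The grid handling, window half-width n and function names are illustrative assumptions, not the authors' implementation; Γ2, which uses the velocity relative to a local mean, follows the same pattern.

    import numpy as np

    def gamma1(x, y, u, v, n=2):
        # Gamma_1 vortex-centre criterion on a regular 2D grid.
        # x, y: 1D coordinate arrays; u, v: 2D velocity components of shape (ny, nx).
        # n: half-width of the interrogation window in grid points.
        ny, nx = u.shape
        g = np.zeros((ny, nx))
        for j in range(n, ny - n):
            for i in range(n, nx - n):
                dx = x[i - n:i + n + 1][None, :] - x[i]      # PM vector, x component
                dy = y[j - n:j + n + 1][:, None] - y[j]      # PM vector, y component
                um = u[j - n:j + n + 1, i - n:i + n + 1]
                vm = v[j - n:j + n + 1, i - n:i + n + 1]
                cross = dx * vm - dy * um                    # (PM x U_M) . e_z
                norm = np.hypot(dx, dy) * np.hypot(um, vm)
                norm[n, n] = np.inf                          # the point P itself contributes zero
                g[j, i] = np.mean(cross / norm)              # mean sine of the angle between PM and U_M
        return g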

796 citations


Journal ArticleDOI
TL;DR: Large Eddy Simulation (LES) is an approach to compute turbulent flows based on resolving the unsteady large-scale motion of the fluid while the impact of the small-scale turbulence on the large scales is accounted for by a sub-grid scale model as mentioned in this paper.
Abstract: Large Eddy Simulation (LES) is an approach to compute turbulent flows based on resolving the unsteady large-scale motion of the fluid while the impact of the small-scale turbulence on the large scales is accounted for by a sub-grid scale model. This model distinguishes LES from any other method and reduces the computational demands compared with a Direct Numerical Simulation. On the other hand, the cost is typically still at least an order of magnitude larger than for steady Reynolds-averaged computations. The LES approach is attractive when statistical turbulence models fail, when insight into the vortical dynamics or unsteady forces on a body is desired, or when additional features are involved such as large-scale mixing, particle transport, sound generation etc. In recent years the rapid increase of computer power has made LES accessible to a broader scientific community, and this is reflected in an abundance of papers on the method and its applications. Still, however, some fundamental aspects of LES are not conclusively settled, a fact rooted in the intricate coupling between mathematical, physical, numerical and algorithmic issues. In this situation it is of great importance to gain an overview of the available approaches and techniques. Pierre Sagaut, in the style of a French encyclopedist, gives a very complete and exhaustive treatment of the different kinds of sub-grid scale models which have been developed so far. After a discussion of the separation into resolved and unresolved scales and its application to the Navier-Stokes equations, more than 140 pages are devoted directly to the description of sub-grid scale models. They are classified according to different criteria, which helps the reader to find his or her way through the arsenal of approaches. The theoretical framework for which these models have mostly been developed is isotropic turbulence. The required notions from classical turbulence theory are summarized together with notions from EDQNM theory in two concise and helpful appendices. Further sections deal with numerical and implementational issues, boundary conditions and validation practice. A final section assembles a few key applications, culminating in a condensed list of some general experiences gained so far. The book very wisely concentrates on issues particular to LES, which to a large extent means sub-grid scale modelling. Classical issues of CFD, such as numerical discretization schemes, solution procedures etc, or post-processing are not addressed. Limiting himself to incompressible, non-reactive flows, the author succeeds in describing the fundamental issues in great detail, thus laying the foundations for the understanding of more complex situations. The presentation is essentially theoretical and the reader should have some prior knowledge of turbulence theory and Fourier transforms. The text itself is well written and generally very clear. A pedagogical effort is made in several places, e.g. when an overview of a group of models is given before these are described in detail. A few typing errors and technical details should be amended in a second edition, though, such as the statement that a filter which is not a projector is invertible (p 12), but this is not detrimental to the quality of the text. Overall the book is a very relevant contribution to the field of LES and I read it with pleasure and benefit. 
It constitutes a worthy reference book for scientists and engineers interested in or practising LES and may serve as a textbook for a postgraduate course on the subject. Jochen Frohlich

771 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a method for measuring the threshold intensity required to produce breakdown and damage in the bulk, as opposed to on the surface, of the material, and determine the relative role of different nonlinear ionization mechanisms for different laser and material parameters.
Abstract: Laser-induced breakdown and damage to transparent materials has remained an active area of research for four decades. In this paper we review the basic mechanisms that lead to laser-induced breakdown and damage and present a summary of some open questions in the field. We present a method for measuring the threshold intensity required to produce breakdown and damage in the bulk, as opposed to on the surface, of the material. Using this technique, we measure the material band-gap and laser-wavelength dependence of the threshold intensity for bulk damage using femtosecond laser pulses. Based on these thresholds, we determine the relative role of different nonlinear ionization mechanisms for different laser and material parameters.

747 citations


Journal ArticleDOI
TL;DR: In this paper, the spectral properties of Rayleigh scattering are discussed and a review of the new advances in flow field imaging that have been achieved using the new filter approaches is presented.
Abstract: Rayleigh scattering is a powerful diagnostic tool for the study of gases and is particularly useful for aiding in the understanding of complex flow fields and combustion phenomena. Although the mechanism associated with the scattering, induced electric dipole radiation, is conceptually straightforward, the features of the scattering are complex because of the anisotropy of molecules, collective scattering from many molecules and inelastic scattering associated with rotational and vibrational transitions. These effects cause the scattered signal to be depolarized and to have spectral features that reflect the pressure, temperature and internal energy states of the gas. The very small scattering cross section makes molecular Rayleigh scattering particularly susceptible to background interference. Scattering from very small particles also falls into the Rayleigh range and may dominate the scattering from molecules if the particle density is high. This particle scattering can be used to enhance flow visualization and velocity measurements, or it may be removed by spectral filtering. New approaches to spectral filtering are now being applied to both Rayleigh molecular scattering and Rayleigh particle scattering to extract quantitative information about complex gas flow fields. This paper outlines the classical properties of Rayleigh scattering and reviews some of the new advances in flow field imaging that have been achieved using the new filter approaches.

508 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the design criteria for achieving significant overlap between the light guided in the fibre and the air holes and hence for producing efficient evanescent field devices.
Abstract: The optical and geometrical properties of microstructured optical fibres present new alternatives for a range of sensing applications. We present the design criteria for achieving significant overlap between the light guided in the fibre and the air holes and hence for producing efficient evanescent field devices. In addition, the novel dispersive properties combined with the tight mode confinement possible in holey fibres make ultra-broadband single-mode sources and new source wavelengths a possibility. Microstructuring technology can be readily extended to form multiple-core fibres, which have applications in bend/deformation sensing. Finally, fibre-based atom waveguides could ultimately be used for rotational or gravitational sensing.

365 citations


Journal ArticleDOI
TL;DR: In this article, the effect of Reynolds number on vortex formation from the blade tips of a Eurocopter BK117 and a large US utility helicopter was investigated in full-scale flight tests.
Abstract: The practical aspects of an advanced schlieren technique, which has been presented by Meier (1999) and Richard et al (2000) and in a similar form by Dalziel et al (2000), are described in this paper. The application of the technique is demonstrated by three experimental investigations on compressible vortices. These vortices play a major role in the blade vortex interaction (BVI) phenomenon, which is responsible for the typical impulsive noise of helicopters. Two experiments were performed in order to investigate the details of the vortex formation from the blade tips of two different helicopters in flight: a Eurocopter BK117 and a large US utility helicopter. In addition to this, simultaneous measurements of velocity and density fields were conducted in a transonic wind tunnel in order to characterize the structure of compressible vortices. The background oriented schlieren technique has the potential of complementing other optical techniques such as shadowgraphy or focusing schlieren methods and yields additional quantitative information. Furthermore, in the case of helicopter aerodynamics, this technique allows the effect of Reynolds number on vortex development from blade tips to be studied in full-scale flight tests more easily than through the use of laser-based techniques.

319 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the changes in the transmission spectrum of long period fibre gratings and tilted short-period fibre Bragg gratings versus the refractive index of the surrounding medium.
Abstract: We investigate the changes in the transmission spectrum of long period fibre gratings and tilted short-period fibre Bragg gratings versus the refractive index of the surrounding medium. The metrological characteristics of tilted short-period fibre Bragg gratings and an analytical method enabling their potential use in accurate refractometry are discussed.

310 citations


Journal ArticleDOI
TL;DR: In this paper, a technique based on genetic algorithms is proposed for improving the accuracy of solar cell parameters extracted using conventional techniques, which is based on formulating the parameter extraction as a search and optimization problem.
Abstract: In this paper, a technique based on genetic algorithms is proposed for improving the accuracy of solar cell parameters extracted using conventional techniques. The approach is based on formulating the parameter extraction as a search and optimization problem. Current–voltage data used were generated by simulating a two-diode solar cell model of specified parameters. The genetic algorithm search range that simulates the error in the extracted parameters was varied from ±5 to ±100% of the specified parameter values. Results obtained show that for a simulated error of ±5% in the solar cell model values, the deviation of the extracted parameters varied from 0.1 to 1% of the specified values. Even with a simulated error of as high as ±100%, the resulting deviation only varied from 2 to 36%. The performance of this technique is also shown to surpass the quasi-Newton method, a calculus-based search and optimization algorithm.
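To make the idea concrete, here is a minimal, self-contained sketch of fitting a two-diode model to I–V data with a simple genetic algorithm. It is a sketch only: series resistance is neglected to keep the model explicit, and the operators, population size, thermal voltage and bounds are illustrative assumptions rather than the settings used in the paper.

    import numpy as np

    q_kT = 38.68  # q/(k*T) at about 300 K, in 1/V

    def two_diode_current(V, p):
        # Two-diode model with ideality factors 1 and 2; series resistance neglected.
        Iph, I01, I02, Rsh = p
        return (Iph - I01 * (np.exp(q_kT * V) - 1.0)
                    - I02 * (np.exp(q_kT * V / 2.0) - 1.0) - V / Rsh)

    def fitness(p, V, I_meas):
        # Negative RMS error between modelled and measured currents (to be maximized).
        return -np.sqrt(np.mean((two_diode_current(V, p) - I_meas) ** 2))

    def ga_extract(V, I_meas, bounds, pop=60, gens=200, pm=0.1):
        # bounds: list of (min, max) pairs for Iph, I01, I02 and Rsh.
        lo, hi = np.array(bounds, dtype=float).T
        X = lo + np.random.rand(pop, len(lo)) * (hi - lo)          # initial population
        for _ in range(gens):
            f = np.array([fitness(x, V, I_meas) for x in X])
            pairs = np.random.randint(0, pop, (pop, 2))             # tournament selection
            parents = X[np.where(f[pairs[:, 0]] > f[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
            a = np.random.rand(pop, 1)                              # arithmetic crossover
            X = a * parents + (1.0 - a) * np.roll(parents, 1, axis=0)
            mut = np.random.rand(*X.shape) < pm                     # Gaussian mutation
            X = np.clip(X + mut * np.random.randn(*X.shape) * 0.05 * (hi - lo), lo, hi)
        f = np.array([fitness(x, V, I_meas) for x in X])
        return X[np.argmax(f)]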

285 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the history of surface voltage and surface photovoltage measurements, discuss the principles of the technique and give some examples and applications.
Abstract: Surface voltage and surface photovoltage measurements have become important semiconductor characterization tools, largely because of the availability of commercial equipment and the contactless nature of the measurements. The range of the basic technique has been expanded through the addition of corona charge. The combination of surface charge and illumination allows surface voltage, surface barrier height, flatband voltage, oxide thickness, oxide charge density, interface trap density, mobile charge density, oxide integrity, minority carrier diffusion length, generation lifetime, recombination lifetime and doping density to be determined. In this review I shall briefly review the history of surface voltage, then discuss the principles of the technique and give some examples and applications.

Journal ArticleDOI
TL;DR: In this article, the authors summarized the relevant theory and work to date for halide sensing using fluorescence quenching methods and outlined the future potential that fluorescence-quenching based optical sensors have to offer in halide determination.
Abstract: In the last century the production and application of halides assumed an ever greater importance. In the fields of medicine, dentistry, plastics, pesticides, food, photography etc many new halogen containing compounds have come into everyday use. In an analogous manner many techniques for the detection and determination of halogen compounds and ions have been developed with scientific journals reporting ever more sensitive methods. The 19th century saw the discovery of what is now thought of as a classical method for halide determination, namely the quenching of fluorescence. However, little analytically was done until over 100 years after its discovery, when the first halide sensors based on the quenching of fluorescence started to emerge. Due to the typical complexity of fluorescence quenching kinetics of optical halide sensors and their lack of selectivity, they have found little if any place commercially, despite their sensitivity, where other techniques such as ion-selective electrodes, x-ray fluorescence spectroscopy and colorimetry have dominated the analytical market. In this review article the author summarizes the relevant theory and work to date for halide sensing using fluorescence quenching methods and outlines the future potential that fluorescence quenching based optical sensors have to offer in halide determination.
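The classical relation underlying such sensors, standard in the fluorescence-quenching literature rather than specific to this review, is the Stern–Volmer equation F0/F = τ0/τ = 1 + KSV[Q] = 1 + kq τ0 [Q], where F0 and τ0 are the fluorescence intensity and lifetime in the absence of quencher, F and τ the corresponding values at halide concentration [Q], KSV the Stern–Volmer constant and kq the bimolecular quenching rate constant. The linearity of F0/F against [Q] is what makes a simple calibration possible, while deviations from it reflect the more complex quenching kinetics mentioned above.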

Journal ArticleDOI
TL;DR: The book under review describes, in alphabetical order, individual distributions commonly encountered in applications, including the Weibull distribution in its two, three and five parameter forms.
Abstract: After a short introduction on generalities about probability distributions, the book consists of 40 chapters describing in alphabetical order individual distributions commonly encountered in applications. It is a substantially revised edition with, in particular, new diagrams and a longer treatment of the Weibull distribution in its two, three and five parameter forms. With two exceptions the distributions are univariate, i.e. one-dimensional. Typically for each distribution there is an introductory paragraph about potential applications, the formula for the distribution, the main properties of the distribution, usually some diagrams illustrating the shape of the distribution, and some notes on relations with other distributions and on how its parameters might be estimated from data. In some cases a note on simulation of values is included. The book ends with some computational references, some statistical tables and a short bibliography. The book contains a large amount of information clearly set out in concise form. The introductory notes on the motivation for the individual distributions are sometimes perfunctory and inevitably are no substitute for a careful discussion of the domain of potential applicability of, for example, the exponential distribution or the negative binomial distribution. They are of little or no help in dealing with the question: `Here is a particular kind of problem calling for a simple form of probability distribution: what kind of distribution might be useful?' However, the book could certainly be a valuable reference source on such specific if rather narrow questions as: `Given involvement in the use of the negative binomial distribution, what are its moments?' A criticism of the book is the failure to give references for further reading about individual special distributions. The short general bibliography is no help on where to look for particular points. D R Cox

Journal ArticleDOI
TL;DR: In this paper, existing sources of data on the thermal expansion coefficient of alloys at elevated temperatures are reviewed, techniques for its measurement are described, and the implications of the available data and measurement techniques for process modelling are discussed.
Abstract: Metallurgical operations at elevated temperatures, such as those that involve solidification and/or mechanical deformation, can be critically influenced by the thermal stresses and strains that result from expansion and contraction of the material as a function of temperature. With the increasing use of computer-based process models for these operations, there arises a greater need for quantitative data on the thermal expansion coefficient of the relevant alloy at the temperatures involved. After a brief review of some existing sources of data for this property, the various techniques for its measurement at elevated temperatures are described. These include mechanical dilatometry, optical imaging and interference systems, x-ray diffraction methods and electrical pulse heating techniques. Finally, the implications for process modelling of the available data and measurement techniques are discussed.

PatentDOI
TL;DR: In this article, a new image reconstruction technique for imaging two and three-phase flows using electrical capacitance tomography (ECT) has been developed based on multi-criteria optimization using an analog neural network, hereafter referred to as Neural Network Multi-Criteria Optimization Image Reconstruction (NN-MOIRT).
Abstract: A new image reconstruction technique for imaging two- and three-phase flows using electrical capacitance tomography (ECT) has been developed based on multi-criteria optimization using an analog neural network, hereafter referred to as Neural Network Multi-criteria Optimization Image Reconstruction (NN-MOIRT). The reconstruction technique is a combination of the multi-criteria optimization image reconstruction technique for linear tomography and the so-called linear back projection (LBP) technique commonly used for capacitance tomography. The multi-criteria optimization image reconstruction problem is solved using Hopfield-model dynamic neural-network computing. For three-component imaging, the single-step sigmoid function in the Hopfield networks is replaced by a double-step sigmoid function, allowing the neural computation to converge to three distinct stable regions in the output space corresponding to the three components, thereby enabling differentiation among the single phases.
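As an illustration of the double-step activation idea, an activation with three plateaus can be built from two shifted sigmoids, so that each pixel of the reconstructed image can settle into one of three stable levels, one per phase. This is a sketch only; the gain and threshold values below are assumptions, not the ones used in NN-MOIRT.

    import numpy as np

    def sigmoid(x, beta=20.0):
        return 1.0 / (1.0 + np.exp(-beta * x))

    def double_step(u, beta=20.0, t1=-0.5, t2=0.5):
        # Two shifted sigmoids give plateaus near 0, 0.5 and 1: three stable
        # output levels for the three phases, instead of the usual two.
        return 0.5 * (sigmoid(u - t1, beta) + sigmoid(u - t2, beta))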

Journal ArticleDOI
TL;DR: In this paper, an opto-chemical in-fibre Bragg grating (FBG) sensor for refractive index measurement in liquids has been developed using fibre side-polishing technology.
Abstract: An opto-chemical in-fibre Bragg grating (FBG) sensor for refractive index measurement in liquids has been developed using fibre side-polishing technology. At a polished site where the fibre cladding has partly been removed, a FBG is exposed to a liquid analyte via evanescent field interaction of the guided fibre mode. The Bragg wavelength of the FBG is obtained in terms of its dependence on the refractive index of the analyte. Modal and wavelength dependences have been investigated both theoretically and experimentally in order to optimize the structure of the sensor. Using working wavelengths far above the cut-off wavelength results in an enhancement of the sensitivity of the sensor. Measurements with different mode configurations lead to the separation of cross sensitivities. Besides this, a second FBG located in the unpolished part can be used to compensate for temperature effects. Application examples for monitoring fuels of varying quality as well as salt concentrations under deep borehole conditions are presented.
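The operating principle rests on the standard Bragg condition λB = 2 neff Λ, where Λ is the grating period and neff the effective index of the guided mode: at the polished site the evanescent-field interaction makes neff, and hence the reflected Bragg wavelength, depend on the refractive index of the surrounding liquid, which is why tracking λB yields a refractometric signal.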

Journal ArticleDOI
TL;DR: This paper explores several methods for increasing the efficiency and accuracy of particle image velocimetry (PIV) data analysis, and a technique for enhancing PIV images to increase the contrast between the particles and the background is presented.
Abstract: This paper explores several methods for increasing the efficiency and accuracy of particle image velocimetry (PIV) data analysis. The time to directly compute the correlation function to determine the displacement in PIV interrogation windows is reduced using two techniques. First, a scheme that calculates the correlation for 16 pixels in parallel is implemented. A further increase in efficiency results from using a truncated multiplication scheme that calculates 82% of the answer while reducing the work by 84%. Second, advantage is taken of the common practice of overlapping adjacent interrogation windows by not recorrelating the portion of the new window that overlaps the old. For the commonly used 50% overlap, the speed of vector calculation can theoretically be increased 300%. In practice, the improvement depends on the implementation of the method. The most efficient algorithm doubles the processing speed at 50% overlap. Accuracy is increased using three methods. First, a technique for enhancing PIV images to increase the contrast between the particles and the background is presented. This method is particularly useful when experimental exigencies result in low contrast images. Second, a method is presented for resolving the velocity in areas of high velocity gradient where the correlation map contains multiple peaks. Third, equalizing the histogram of sub-pixel adjustments should eliminate peak locking. Sample data show a `decrease' in error of 15%.
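As a concrete illustration of two of the accuracy-related ingredients, the sketch below shows the standard three-point Gaussian sub-pixel estimator and one possible rank-based way to equalize the histogram of sub-pixel adjustments. Both are sketches under assumed conventions, not the authors' code.

    import numpy as np

    def subpixel_gaussian(corr, i, j):
        # Three-point Gaussian fit around the integer correlation peak at (i, j);
        # returns the sub-pixel offsets (dy, dx) of the true peak.
        c = np.log(np.clip(corr[i - 1:i + 2, j - 1:j + 2], 1e-12, None))
        dx = (c[1, 0] - c[1, 2]) / (2 * c[1, 0] - 4 * c[1, 1] + 2 * c[1, 2])
        dy = (c[0, 1] - c[2, 1]) / (2 * c[0, 1] - 4 * c[1, 1] + 2 * c[2, 1])
        return dy, dx

    def equalize_subpixel(delta):
        # Map the collected sub-pixel adjustments through their empirical CDF so
        # that their histogram becomes approximately uniform on (-0.5, 0.5),
        # removing the clustering at integer displacements known as peak locking.
        ranks = np.argsort(np.argsort(delta))
        return (ranks + 0.5) / len(delta) - 0.5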

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept and possibilities of optical fiber sensors for structural monitoring and present a series of back-to-basics tutorials on optical theory and photonic technology.
Abstract: This is an ambitious book aimed at introducing the relatively new concepts and possibilities of optical fibre sensors for structural monitoring to the uninitiated who have an engineering or general physics background. Measures draws the reader into the volume with a description of smart structures - the structural monitoring equivalent of artificial nervous systems - before a series of back-to-basics tutorials on optical theory and photonic technology. The emphasis on smart structures early in the book is a worthy attention-grabber since it elevates the subject of structural health monitoring above just another set of techniques for making engineering measurements. The promise is to `revolutionize engineering design philosophy' by creating `intelligence within otherwise inanimate structures'. In the latter two thirds of the book, the author steps through the main issues of structural monitoring using fibre optic sensors. Intensity-based, interferometric, polarimetric and spectral sensors (including the ubiquitous Bragg grating) are compared and contrasted. The hot topic of strain versus temperature discrimination in fibre sensors earns a whole chapter and several useful techniques for overcoming this cross-sensitivity are portrayed. Installation of sensors is also discussed with reference to retro-fit and co-manufacturing (embedding) approaches. Examples of concrete constructions such as bridges (a frequent theme in the book) and fibre-reinforced plastics such as glass-fibre and carbon composite materials are considered. A chapter on `short-gauge' sensors and applications deals in some depth with the Bragg grating as a strain sensor. The methods of multiplexing and interrogating these devices are explored with many examples from both Measures' own research and the work of other groups worldwide. The Beddington Trail bridge trial in Calgary, one of the first such installations of Bragg gratings, followed by the more ambitious Confederation Bridge, also in Canada, provide concrete examples of the technology's application. The material is marred somewhat by the inferior reproduction of some of the photographs, especially those showing field installations of the optical sensors. Other applications are not neglected. A description of trials aboard a Norwegian Naval vessel with composite hull monitored by Bragg gratings is also given. Interferometric sensors in similar applications trials are also covered in chapters on short and long gauge length devices. Distributed strain and temperature sensing techniques using Fourier transform, low coherence and stimulated backscattering are covered in the penultimate chapter, which draws together distributed measurement at a small physical scale in the form of intra-Bragg grating strain profile measurements (on the scale of millimetres) and measurements over kilometres using stimulated Brillouin scattering. In this reviewer's opinion the book dwells on strain monitoring in civil engineering structures at the expense of a broader scope, which could have included, for example, the detection of impacts or the acoustic emissions from crack propagation and other forms of structural damage. Nevertheless, this volume is an impressive collection of background and examples of real applications in heavyweight engineering. It adds significantly to the claim that fibre optic sensors have at last arrived. Peter Foote

Journal ArticleDOI
TL;DR: In this paper, a case study outlining the principles of measuring the mass flow rate of solids in a vertical channel is shown, along with various levels of visualization and processing of tomographic data obtained in a pilot plant-scale pneumatic conveying system.
Abstract: Transient three-dimensional multiphase flows are a characteristic feature of many industrial processes. The experimental observations and measurements of such flows are extremely difficult, and industrial process tomography has been developed over the last decade into a reliable method for investigating these complex phenomena. Gas-solids flows, such as those in pneumatic conveying systems, exhibit many interesting features and these can be successfully investigated by using electrical capacitance tomography. This paper discusses the current state of the art in this field, advantages and limitations of the technique and required future developments. Various levels of visualization and processing of tomographic data obtained in a pilot-plant-scale pneumatic conveying system are presented. A case study outlining the principles of measuring the mass flow rate of solids in a vertical channel is shown.
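In outline, the mass flow rate follows from combining the tomographically reconstructed solids concentration with a solids velocity estimate: ṁ = ρs Σ βi vi Ai over the image pixels (the discrete form of ρs ∫ β v dA over the cross-section), where β is the local solids volume fraction from the ECT image, v the solids velocity, for example from cross-correlating twin-plane images, and ρs the particle density. This is the generic principle behind such a case study rather than the paper's specific algorithm.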

Journal ArticleDOI
TL;DR: In this article, an experimental set-up for simultaneous measurements of thermopower and electrical resistivity at temperatures from 100 K to 1300 K is described, where the optimal configuration of electrodes and original mechanical contacts of thermocouples, current leads and potential probes with the sample make it possible to measure a large variety of materials and result in greater flexibility with respect to the sample form and dimensions.
Abstract: In this paper we describe an experimental set-up for simultaneous measurements of thermopower and electrical resistivity at temperatures from 100 K to 1300 K. Optimal configuration of electrodes and original mechanical contacts of thermocouples, current leads and potential probes with the sample make it possible to measure a large variety of materials and result in greater flexibility with respect to the sample form and dimensions. Both bulk and thin film samples with resistances in the range from 0 Ω up to 200 kΩ (1 GΩ in case of the resistivity measurement) can be investigated. Precision and high reliability of the system have been proven during more than three years of use. The resistivity and thermopower of pure Pb, Cu and Ni, and a Cr-Si thin film composite are presented as test materials to demonstrate the possibilities and accuracy of this experimental set-up.
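The quantities delivered by such a set-up are, in essence, the Seebeck coefficient S = ΔV/ΔT, obtained from the voltage generated across the sample by a small imposed temperature difference (corrected for the thermopower of the reference leads), and the resistivity ρ = (V/I)(A/l) from a four-probe measurement, with A the cross-sectional area and l the separation of the potential probes. These basic relations are standard and are noted here only to indicate what the electrode and probe configuration has to provide.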

Journal ArticleDOI
TL;DR: In this article, several possibilities of vortex detection and characterization from two-dimensional instantaneous vector fields like those obtained by means of the particle image velocimetry (PIV) technique are discussed.
Abstract: Efficient and well adapted algorithms are necessary to analyse the large number of vector fields generated when observing time dependent flow fields. This paper discusses several possibilities of vortex detection and characterization from two-dimensional instantaneous vector fields like those obtained by means of the particle image velocimetry (PIV) technique.

Journal ArticleDOI
TL;DR: In this article, the authors present an initial investigation of a fiber optical system which may be used both for intra-cavity and for ring-down measurements of absorption losses.
Abstract: We present the design and initial investigation of a fibre optical system which may be used both for intra-cavity and for ring-down measurements of absorption losses. The system consists of a fibre loop containing a length of erbium-doped fibre pumped at 980 nm, with gain adjustment below or above threshold for the two types of operation. The fibre loop is constructed from standard fibre optical components and includes a micro-optical gas cell. The intended application is for measurement of levels of trace gases which possess near-IR absorption lines within the gain bandwidth of the erbium fibre amplifier. We discuss the key issues involved in operation of the system and the level of sensitivity required. Our initial experimental investigations have demonstrated that ring-down times of several microseconds can be obtained, which can be altered through adjustment of the attenuation or gain factor of the loop. Gain control is one of the most important issues and we explain how this may be achieved.
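The ring-down relation such a loop exploits is, in its simplest textbook form, I(t) = I0 exp(−t/τ) with 1/τ ≈ (Λ − G)/trt, where trt is the loop round-trip time, G the fractional round-trip gain from the erbium-doped section and Λ the total fractional round-trip loss, including the absorbance of the gas in the micro-optical cell. A trace-gas-induced change in absorbance therefore appears as a change in decay rate, roughly 1/τ − 1/τ0 ≈ α lcell/trt for a cell of length lcell and absorption coefficient α. This is offered only as orientation, not as the authors' exact analysis.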

Journal ArticleDOI
TL;DR: In this paper, a free-space broadband terahertz (THz) spectroscopy technique covering frequencies from 30 GHz to over 40 THz is presented, with a focus on time-domain detection using free-space electro-optic sampling.
Abstract: This paper reports on a free-space broadband terahertz (THz) spectroscopy technique with frequency ranging from 30 GHz to over 40 THz. A historical review is given on the development of THz spectroscopy and its unique features. We focus on a time-domain detection technique using free-space electro-optic sampling. This broadband THz spectroscopy technique opens the door to many interesting applications. We present some recent developments in applications of THz spectroscopy, including characterization of the optical properties of materials, study of the dynamics of phonon and photocarriers and THz imaging. This THz time-domain technique continues to attract interest in both fundamental research and real-world applications.
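To indicate how such time-domain data are typically turned into optical properties, here is a textbook-style sketch under a thick-sample, no-echo approximation; the function and variable names are illustrative, and the sign of the phase term assumes NumPy's FFT convention.

    import numpy as np

    c0 = 299792458.0  # speed of light, m/s

    def thz_tds_analysis(t, e_ref, e_sam, d):
        # t: common time axis (s); e_ref, e_sam: reference and sample waveforms;
        # d: sample thickness (m). Valid only over the usable bandwidth and when
        # internal echoes are gated out of the time window.
        f = np.fft.rfftfreq(len(t), t[1] - t[0])[1:]           # drop the DC bin
        T = (np.fft.rfft(e_sam) / np.fft.rfft(e_ref))[1:]      # complex field transmission
        w = 2.0 * np.pi * f
        phase = np.unwrap(np.angle(T))
        n = 1.0 - c0 * phase / (w * d)                         # index from the extra phase delay
        alpha = -(2.0 / d) * np.log(np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))  # power absorption, 1/m
        return f, n, alpha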

Journal ArticleDOI
TL;DR: The National Metrology Institute of Japan (NMIJ) has improved the laser flash technique to reduce uncertainty in thermal diffusivity measurements of solid materials above room temperature as mentioned in this paper.
Abstract: The National Metrology Institute of Japan (NMIJ) has improved the laser flash technique to reduce uncertainty in thermal diffusivity measurements of solid materials above room temperature. An advanced laser flash apparatus was constructed after the following technical improvements had been made. (i) Uniform pulse heating of a specimen was achieved with an improved laser beam using an optical fibre (decreasing the error due to nonuniform heating). (ii) A fast infrared radiation thermometer with an absolute temperature scale was developed (decreasing the error due to nonlinear temperature detection). (iii) A new data analysis algorithm employing a curve-fitting method, whereby the entire temperature history curve is fitted by the theoretical solution under realistic boundary conditions, was introduced (decreasing the heat loss error). The precision and accuracy of the apparatus were demonstrated by measuring specimens of glassy carbon, which is a candidate reference material for a thermal diffusivity standard to be supplied by the NMIJ.
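For orientation, the classical adiabatic evaluation that this curve-fitting procedure refines is the Parker relation α = 0.1388 L²/t½, where L is the specimen thickness and t½ the time for the rear-face temperature rise to reach half its maximum. The NMIJ algorithm instead fits the whole rear-face history with a solution that includes heat losses, precisely because the simple half-rise-time formula breaks down when the adiabatic assumption does.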

Journal ArticleDOI
TL;DR: The history and then the current developments in medical impedance tomography are reviewed.
Abstract: The resurgence of research into medical electrical impedance tomography about 20 years ago was soon accompanied by a parallel development in process impedance tomography. The interaction between these two research communities was beneficial to both groups. In recent years this interaction has been very much reduced. This paper briefly reviews the history and then the current developments in medical impedance tomography.

Journal ArticleDOI
TL;DR: The hot plate technique for measuring the thermal conductivities of insulating materials has been in existence in various forms since 1898 as discussed by the authors, and it is now unarguably recognized as the most accurate technique for determining thermal conductivity of insulations, having an uncertainty of about 1.5% over a limited temperature range near ambient.
Abstract: The hot plate technique for measuring the thermal conductivities (the exact term for the quantity measured is thermal transmission, which, depending on the material being measured, can have components of convective, radiative and conductive heat transfer; it is commonly referred to as the effective or apparent thermal conductivity) of insulating materials has been in existence in various forms since 1898. A brief historical survey of the early development of the experimental technique is followed by a brief description of the basic principles of the method of measurement. The technique has since become very well established and is documented in the written standard ISO8302:1991. It is now unarguably recognized as the most accurate technique for determining the thermal conductivity of insulations, having an uncertainty of about 1.5% over a limited temperature range near ambient. Details of two guarded hot plate apparatuses designed and constructed at NPL over the last decade or so, one to measure insulations up to 250 mm thick at or around room temperature and the other to measure insulations and refractories at temperatures up to 850 °C, are given. Finally, there is a section on certified reference materials required for validating the performance of newly built guarded hot plate apparatus and for calibrating heat flow meter apparatus, a type of hot plate apparatus commonly used for quality control purposes in insulation manufacturing plant. A brief overview of these reference materials includes details of their availability, thermal conductivities and temperature ranges.
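The measurement principle itself is one-dimensional steady-state conduction through the specimen: the apparent thermal conductivity follows from λ = Φ d/(A ΔT), where Φ is the heat flow through the metered area A of the guarded hot plate, d the specimen thickness and ΔT the temperature difference across it. The guard ring's role is to make the one-dimensional assumption behind this simple relation hold in practice.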

Journal ArticleDOI
TL;DR: In this paper, a photothermal model is developed in order to investigate the behavior of thermal waves in homogeneous plates and layered plates with finite thicknesses under convective conditions, and the model is then utilized to predict the phase differences produced by multi-layer subsurface defects and optimum inspection parameters.
Abstract: Lock-in thermography is a technique which is increasingly being used for the evaluation of subsurface defects in composite materials such as carbon-fibre-reinforced polymers (CFRPs) in aircraft structures. Most CFRP structures have a finite thickness and non-destructive inspection is performed in a natural ambient environment. In this paper, a photothermal model is developed in order to investigate the behaviour of thermal waves in homogeneous plates and layered plates with finite thicknesses under convective conditions. The model is then utilized to predict the phase differences produced by multi-layer subsurface defects and optimum inspection parameters. The theoretical results are compared with the experimental results. The detectivity of lock-in thermography for CFRP is also presented in this paper.
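A useful quantity when reading such results is the thermal diffusion length μ = √(2α/ω) = √(α/(π f)), where α is the thermal diffusivity of the laminate and f the lock-in modulation frequency: a defect is detectable in the phase image roughly when its depth lies within one to two diffusion lengths, which is why the optimum inspection parameters referred to above centre on the choice of modulation frequency. This is the standard one-dimensional thermal-wave picture rather than a result specific to the paper.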

Journal ArticleDOI
TL;DR: In this paper, the authors presented a unique optical technique that utilizes the reabsorption and emission of two fluorescent dyes to accurately measure film thickness while minimizing errors caused by variations in illumination intensity and surface reflectivity.
Abstract: This paper presents a unique optical technique that utilizes the reabsorption and emission of two fluorescent dyes to accurately measure film thickness while minimizing errors caused by variations in illumination intensity and surface reflectivity. Combinations of dyes are selected that exhibit a high degree of emission reabsorption and each dye concentration is adjusted to create an optically thick system where emission reabsorption is intrinsic to the fluorescence of the film being measured. Film thickness information as well as excitation and dye response characteristics are all imbedded in the emission intensities of the dyes. Errors normally associated with laser induced fluorescence based film thickness measurements, including those due to optical distortion, variations in surface reflectivity and excitation non-uniformities, are minimized by observing the ratio of the dye emissions. The principle and constitutive equations characterizing emission reabsorption laser induced fluorescence (ERLIF) film thickness measurement are presented. In addition, film thickness measurements from 5 to 400 µm with 1% accuracy are demonstrated.

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of least-squares camera calibration including lens distortion, automatic editing of calibration points, and self-calibration of a stereo rig from unknown camera motions and point correspondences.
Abstract: Contents: 1 Introduction; 2 Minimum Solutions for Orientation; 3 Generic Estimation Procedures for Orientation with Minimum and Redundant Information; 4 Photogrammetric Camera Component Calibration: A Review of Analytical Techniques; 5 Least-Squares Camera Calibration Including Lens Distortion and Automatic Editing of Calibration Points; 6 Modeling and Calibration of Variable-Parameter Camera Systems; 7 System Calibration Through Self-Calibration; 8 Self-Calibration of a Stereo Rig from Unknown Camera Motions and Point Correspondences.
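As a small illustration of the kind of model treated in the lens-distortion chapter, here is a generic Brown-type radial model in normalized image coordinates; the coefficients and function are illustrative, not taken from the book.

    import numpy as np

    def apply_radial_distortion(x, y, k1, k2):
        # Radial (Brown-type) distortion in normalized image coordinates:
        # the distorted point is the ideal point scaled by 1 + k1*r^2 + k2*r^4.
        r2 = x ** 2 + y ** 2
        s = 1.0 + k1 * r2 + k2 * r2 ** 2
        return x * s, y * s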

Journal ArticleDOI
TL;DR: In this paper, a simple system for the dynamic compensation of nonlinearity in a homodyne laser interferometer that can be used for high-precision length measurement is presented.
Abstract: This paper presents a simple system for the dynamic compensation of nonlinearity in a homodyne laser interferometer that can be used for high-precision length measurement. The computer collects two phase-quadrature signals and calculates the DC offsets, the AC amplitudes and the deviation from phase quadrature by elliptical fitting with a least-squares method. The control signals for adjusting these ellipse parameters are fed into the automatic control circuit through D/A converters so that the offsets are zero, the amplitudes are equal and the phase difference is 90°. As a result, the nonlinearity is eliminated electronically. The system can be used in applications requiring the real-time compensation of nonlinearity. Experimental results demonstrate that the nonlinearity of the homodyne interferometer can be reduced to the sub-nanometre level over a measuring range of 100 mm.
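For readers wanting to prototype the ellipse-fit correction offline before building an analogue control loop, a Heydemann-type software sketch is shown below; the conic parametrization and sign conventions are one common choice and may differ from the authors'. The corrected phase is then converted to displacement through the interferometer's fringe constant.

    import numpy as np

    def heydemann_correct(u1, u2):
        # Least-squares fit of the quadrature signals to a general conic
        #   A*u1^2 + B*u1*u2 + C*u2^2 + D*u1 + E*u2 = 1,
        # followed by correction to an ideal circle (zero offsets, equal
        # amplitudes, 90 degree phase difference). Returns the unwrapped phase.
        M = np.column_stack([u1 ** 2, u1 * u2, u2 ** 2, u1, u2])
        A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(u1), rcond=None)[0]
        alpha = np.arcsin(B / (2.0 * np.sqrt(A * C)))        # deviation from quadrature
        r = np.sqrt(C / A)                                    # amplitude ratio
        p = (2.0 * C * D - B * E) / (B ** 2 - 4.0 * A * C)    # offset of u1
        q = (2.0 * A * E - B * D) / (B ** 2 - 4.0 * A * C)    # offset of u2
        x = u1 - p
        y = ((u1 - p) * np.sin(alpha) + r * (u2 - q)) / np.cos(alpha)
        return np.unwrap(np.arctan2(y, x))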