
Showing papers in "Metrologia in 2013"


Journal ArticleDOI
TL;DR: In this paper, the background for redefining the kilogram by fixing the numerical value of the Planck constant is discussed, and the watt balance is identified as the most promising experiment for this purpose and for the future realization of the kilogram.
Abstract: Since 1889 the international prototype of the kilogram has served as the definition of the unit of mass in the International System of Units (SI). It is the last material artefact to define a base unit of the SI, and it influences several other base units. This situation is no longer acceptable in a time of ever increasing measurement precision. It is therefore planned to redefine the unit of mass by fixing the numerical value of the Planck constant. At the same time three other base units, the ampere, the kelvin and the mole, will be redefined. As a first step, the kilogram redefinition requires a highly accurate determination of the Planck constant in the present SI system, with a relative uncertainty of the order of 1 part in 10⁸. The most promising experiment for this purpose, and for the future realization of the kilogram, is the watt balance. It compares mechanical and electrical power and makes use of two macroscopic quantum effects, thus creating a relationship between a macroscopic mass and the Planck constant. In this paper the background for the choice of the Planck constant for the kilogram redefinition is discussed and the role of the Planck constant in physics is briefly reviewed. The operating principle of watt balance experiments is explained and the existing experiments are reviewed. An overview is given of all presently available experimental determinations of the Planck constant, and it is shown that further investigation is needed before the redefinition of the kilogram can take place.
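A minimal sketch of the watt balance principle summarized above (generic notation, not taken from the paper): the weighing mode balances the weight of the mass against an electromagnetic force, the moving mode calibrates the same coil geometrically, and the two macroscopic quantum effects make the virtual power UI proportional to h.

    % Weighing mode: the electromagnetic force on the coil balances the weight
    mg = (Bl)\,I
    % Moving mode: the same coil, moved at velocity v, generates a voltage
    U = (Bl)\,v
    % Eliminating the geometric factor Bl equates mechanical and electrical power
    mgv = UI
    % With U and I measured via the Josephson effect (K_J = 2e/h) and the
    % quantum Hall effect (R_K = h/e^2), UI is proportional to h, so
    m = \frac{UI}{gv} \propto h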

227 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the results of their work concerning the long-distance fibre optic dissemination of time (1 PPS) and frequency (10 MHz) signals generated by atomic sources, such as caesium clocks, hydrogen masers or caesium fountains.
Abstract: In this paper we present the results of our work concerning the long-distance fibre optic dissemination of time (1 PPS) and frequency (10 MHz) signals generated by atomic sources, such as caesium clocks, hydrogen masers or caesium fountains. For these purposes we developed dedicated hardware (a fibre optic system with active stabilization of the propagation delay and bidirectional fibre optic amplifiers) together with a procedure to enable calibration of the time transfer. Our laboratory measurements performed over fibre lengths of up to 480 km showed an Allan deviation of the order of 4 × 10−17, time deviation below 1 ps (both at one-day averaging) and the possibility of calibration with picosecond accuracy even for the longest of the evaluated links. After successful laboratory evaluation the system was then installed on a 421.4 km long route between the Central Office of Measures (GUM) in Warsaw, Poland, and the Astrogeodynamic Observatory (AOS) in Borowiec near Poznan, Poland. Experiments comparing the UTC(PL) and UTC(AOS) atomic timescales using the fibre optic link and TTS-4 dual-frequency GNSS time transfer receivers showed that the results are consistent within the calibration accuracy of the GPS receivers, with much better noise performance. The field operation of the system proved its full functionality and confirmed our previous laboratory evaluation to the maximum extent possible using the methods for comparing distant clocks available at GUM and AOS.
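To illustrate the stability measures quoted above, here is a brief, hedged sketch (my own illustration, not the authors' software) of estimating the overlapping Allan deviation and the time deviation from equally spaced time-offset samples:

    import numpy as np

    def overlapping_adev(x, tau0, m):
        """Overlapping Allan deviation of phase (time-offset) data x,
        sampled every tau0 seconds, at averaging factor m."""
        x = np.asarray(x, dtype=float)
        tau = m * tau0
        # Second differences of the phase at stride m (overlapping estimator)
        d2 = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        avar = np.sum(d2**2) / (2 * tau**2 * d2.size)
        return np.sqrt(avar)

    def tdev(x, tau0, m):
        """Time deviation, TDEV = tau * MDEV / sqrt(3), computed via the
        modified Allan variance (phase averaged over windows of m samples)."""
        x = np.asarray(x, dtype=float)
        s = np.cumsum(np.concatenate(([0.0], x)))
        xbar = (s[m:] - s[:-m]) / m          # running means of m samples
        d2 = xbar[2*m:] - 2*xbar[m:-m] + xbar[:-2*m]
        mvar = np.sum(d2**2) / (2 * (m * tau0)**2 * d2.size)
        return (m * tau0) * np.sqrt(mvar / 3.0)

    # Example: white phase noise at the 1 ps level, 1 s sampling interval
    rng = np.random.default_rng(0)
    x = 1e-12 * rng.standard_normal(100_000)
    print(overlapping_adev(x, 1.0, 1000), tdev(x, 1.0, 1000))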

181 citations


Journal ArticleDOI
TL;DR: An interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM) provided a framework for assessing nanoparticle size distributions using TEM for image acquisition.
Abstract: This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition.
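A hedged sketch of the reference-model fitting step described above (the synthetic data and variable names are mine, not the study's): fit the lognormal model to a cumulative number-based diameter distribution by non-linear regression and report the relative standard errors (RSEs) of the fitted parameters.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def lognormal_cdf(d, mu, sigma):
        """Cumulative lognormal reference model for particle diameters d (nm)."""
        return 0.5 * (1.0 + erf((np.log(d) - mu) / (sigma * np.sqrt(2.0))))

    # Synthetic diameters standing in for TEM area-equivalent diameters
    rng = np.random.default_rng(1)
    d = rng.lognormal(mean=np.log(27.6), sigma=0.12, size=500)

    # Empirical cumulative number-based distribution
    d_sorted = np.sort(d)
    ecdf = (np.arange(d_sorted.size) + 0.5) / d_sorted.size

    # Non-linear regression of the lognormal reference model
    popt, pcov = curve_fit(lognormal_cdf, d_sorted, ecdf, p0=[np.log(25), 0.1])
    perr = np.sqrt(np.diag(pcov))
    rse = perr / np.abs(popt)   # relative standard errors of the parameters
    print("mu, sigma:", popt, " RSE:", rse)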

125 citations


Journal ArticleDOI
TL;DR: This paper describes the new design of the Mark II watt balance developed by the Federal Institute of Metrology METAS, an experiment that links the unit of mass to the Planck constant h in view of a new definition of the kilogram based on a fundamental constant.
Abstract: The kilogram is the last unit of the international system of units (SI) still based on a material artefact, the international prototype of the kilogram (IPK). The comparisons made in the last hundred years have clearly revealed a long-term relative drift between the IPK and the official copies kept under similar conditions at the Bureau International des Poids et Mesures. A promising route towards a new definition of the kilogram based on a fundamental constant is represented by the watt balance experiment which links the mass unit to the Planck constant h. For more than ten years, the Federal Institute of Metrology METAS has been actively working in the conception and development of a watt balance experiment. This paper describes the new design of the Mark II METAS watt balance. The metrological characteristics of the different components of the experiment are described and discussed.

102 citations


Journal ArticleDOI
TL;DR: In this article, the Boltzmann constant kB was derived from measurements of the speed of sound in argon gas, which can be related directly to the mean molecular kinetic energy.
Abstract: The Comite international des poids et mesures (CIPM) has projected a major revision of the International System of Units (SI) in which all of the base units will be defined by fixing the values of fundamental constants of nature. In preparation for this we have carried out a new, low-uncertainty determination of the Boltzmann constant, kB, in terms of which the SI unit of temperature, the kelvin, can be re-defined. We have evaluated kB from exceptionally accurate measurements of the speed of sound in argon gas, which can be related directly to the mean molecular kinetic energy, (3/2)kBT. Our new estimate is kB = 1.380 651 56 (98) × 10−23 J K−1 with a relative standard uncertainty uR = 0.71 × 10−6.
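The link between the speed of sound and kB can be made explicit with a standard textbook relation for a monatomic ideal gas (a sketch in my notation, not the paper's working equations):

    % Zero-pressure limit of the speed of sound in a monatomic ideal gas
    u_0^2 = \frac{\gamma k_B T}{m}, \qquad \gamma = \tfrac{5}{3}
    % equivalently, a statement about the mean molecular kinetic energy
    \tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2} k_B T
    % with T fixed at the triple point of water and the molecular mass m of
    % argon known, k_B follows from the measured zero-pressure speed of sound:
    k_B = \frac{m\,u_0^2}{\gamma\,T_{\mathrm{TPW}}}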

92 citations


Journal ArticleDOI
TL;DR: In this paper, the Boltzmann constant k has been determined by dielectric-constant gas thermometry at the triple point of water (TPW) by applying a new special experimental setup consisting of a large-volume thermostat, a vacuum-isolated measuring system, stainless-steel 10 pF cylindrical capacitors, an autotransformer ratio capacitance bridge, a high-purity gas-handling system including a mass spectrometer, and traceably calibrated special pressure balances with piston–cylinder assemblies having effective areas of 2 cm².
Abstract: Within an international project directed to the new definition of the base unit kelvin, the Boltzmann constant k has been determined by dielectric-constant gas thermometry at PTB. In the pressure range from about 1 MPa to 7 MPa, 11 helium isotherms have been measured at the triple point of water (TPW) by applying a new special experimental setup consisting of a large-volume thermostat, a vacuum-isolated measuring system, stainless-steel 10 pF cylindrical capacitors, an autotransformer ratio capacitance bridge, a high-purity gas-handling system including a mass spectrometer, and traceably calibrated special pressure balances with piston–cylinder assemblies having effective areas of 2 cm². The value of k has been deduced from the linear, ideal-gas term of an appropriate virial expansion fitted to the combined isotherms. A detailed uncertainty budget has been established by performing Monte Carlo simulations. The main uncertainty components result from the measurement of pressure and capacitance as well as the influence of the effective compressibility of the measuring capacitor and impurities contained in the helium gas. The combination of the results obtained at the TPW (kTPW = 1.380 654 × 10−23 J K−1, relative standard uncertainty 9.2 parts per million) with data measured earlier at low temperatures (21 K to 27 K, kLT = 1.380 657 × 10−23 J K−1, 15.9 parts per million) has yielded a value of k = 1.380 655 × 10−23 J K−1 with an uncertainty of 7.9 parts per million.
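A minimal numerical sketch (synthetic numbers, not the PTB data) of the deduction step described above: fit a virial-type expansion, with no constant term, to isotherm data and extract the linear, ideal-gas coefficient together with its standard uncertainty.

    import numpy as np

    # Synthetic isotherm: pressure p (Pa) vs. a measured expansion variable x
    # (for DCGT, x would be derived from the relative capacitance change).
    # Model: p = a1*x + a2*x**2 + a3*x**3; the linear term a1 carries the
    # ideal-gas (Boltzmann-constant) information, the rest are virial terms.
    rng = np.random.default_rng(2)
    x = np.linspace(0.001, 0.007, 11)          # spans a ~1 MPa to 7 MPa scale
    a1_true, a2_true, a3_true = 1.0e9, 2.0e10, 3.0e11
    p = a1_true*x + a2_true*x**2 + a3_true*x**3 + rng.normal(0.0, 5.0, x.size)

    # Least-squares fit of the virial expansion (no constant term)
    A = np.vstack([x, x**2, x**3]).T
    coef, res, rank, sv = np.linalg.lstsq(A, p, rcond=None)

    # Parameter covariance from the residual variance (unweighted fit)
    dof = x.size - A.shape[1]
    s2 = float(res[0]) / dof if res.size else np.var(p - A @ coef) * x.size / dof
    cov = s2 * np.linalg.inv(A.T @ A)
    print("a1 =", coef[0], "+/-", np.sqrt(cov[0, 0]))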

77 citations


Journal ArticleDOI
TL;DR: In this paper, the absolute frequency of the optical clock transition 1S0(F = 1/2)–3P0(F = 1/2) of 171Yb atoms confined in a one-dimensional optical lattice was determined to be 518 295 836 590 863.5(8.1) Hz.
Abstract: We measured the absolute frequency of the optical clock transition 1S0(F = 1/2)–3P0(F = 1/2) of 171Yb atoms confined in a one-dimensional optical lattice and determined it to be 518 295 836 590 863.5(8.1) Hz. The frequency was measured against Terrestrial Time (TT; the SI second on the geoid) using an optical frequency comb whose frequency was phase-locked to an H-maser as a flywheel oscillator traceable to TT. The magic wavelength was also measured as 394 798.48(79) GHz. The results are in good agreement with two previous measurements by other institutes within the specified uncertainty of this work.

77 citations


Journal ArticleDOI
TL;DR: The results of the third European Comparison of Absolute Gravimeters held in Walferdange, Grand Duchy of Luxembourg, in November 2011 are presented in this paper, where two different combinations of data are used.
Abstract: We present the results of the third European Comparison of Absolute Gravimeters held in Walferdange, Grand Duchy of Luxembourg, in November 2011. Twenty-two gravimeters from both metrological and non-metrological institutes are compared. For the first time, corrections for the laser beam diffraction and the self-attraction of the gravimeters are implemented. The gravity observations are also corrected for geophysical gravity changes that occurred during the comparison using the observations of a superconducting gravimeter. We show that these corrections improve the degree of equivalence between the gravimeters. We present the results for two different combinations of data. In the first one, we use only the observations from the metrological institutes. In the second solution, we include all the data from both metrological and non-metrological institutes. Those solutions are then compared with the official result of the comparison published previously and based on the observations of the metrological institutes and the gravity differences at the different sites as measured by non-metrological institutes. Overall, the absolute gravity meters agree with one another with a standard deviation of 3.1 µGal. Finally, the results of this comparison are linked to previous ones. We conclude with some important recommendations for future comparisons.

64 citations


Journal ArticleDOI
TL;DR: An ac quantum voltmeter based on a 10 V programmable Josephson array that is simple to use, provides dc and ac calibration up to the kHz range for equipment widely used in metrology, and ensures direct traceability to a quantum-based standard, is developed as discussed by the authors.
Abstract: An ac quantum voltmeter based on a 10 V programmable Josephson array that is simple to use, provides dc and ac calibration up to the kHz range for equipment widely used in metrology, and ensures direct traceability to a quantum-based standard, is developed. This ac quantum voltmeter is proven to match conventional Josephson standard systems at dc and extends its advantages up to 10 kHz in the low-frequency ac range. The ac quantum voltmeter is capable of performing calibrations up to 7 V RMS in the frequency range from dc to 10 kHz completely under software control. A direct comparison at dc has demonstrated an uncertainty better than 2 parts in 10¹⁰ (k = 2). The uncertainty at 1 kHz is better than 1.7 µV V−1 (k = 2) for a measurement time of 1 min. The ac quantum voltmeter is a robust and practical system that fulfils the needs of general metrology laboratories for quantum-based voltage calibrations.
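A short sketch of the quantized voltage that underlies such a system (the junction count and drive frequency below are assumed, illustrative values, not the paper's):

    # Hedged sketch of the voltage of a programmable Josephson array:
    # each of n junctions biased on Shapiro step k contributes k*f/K_J.
    h = 6.62606957e-34      # Planck constant (J s), CODATA-era value
    e = 1.602176565e-19     # elementary charge (C)
    K_J = 2 * e / h         # Josephson constant (Hz/V), ~483.6 GHz/V

    f = 70e9                # microwave drive frequency (Hz), assumed value
    n = 69120               # number of junctions, assumed array size
    k = 1                   # Shapiro step index

    V = n * k * f / K_J
    print(f"V = {V:.9f} V")  # ~10 V for this junction count and frequency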

62 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report improvements to their previous determination of the Boltzmann constant (Zhang et al 2011), including an improved analysis of the acoustic data that accounts for second-order perturbations to the frequencies from the thermo-viscous boundary layer.
Abstract: We report improvements to our previous (Zhang et al 2011 Int. J. Thermophys. 32 1297) determination of the Boltzmann constant kB using a single 80 mm long cylindrical cavity. In this work, the shape of the gas-filled resonant cavity is closer to that of a perfect cylinder and the thermometry has been improved. We used two different grades of argon, each with measured relative isotopic abundances, and we used two different methods of supporting the resonator. The measurements with each gas and with each configuration were repeated several times for a total of 14 runs. We improved the analysis of the acoustic data by accounting for certain second-order perturbations to the frequencies from the thermo-viscous boundary layer. The weighted average of the data yielded kB = 1.380 6476 × 10−23 J K−1 with a relative standard uncertainty ur(kB) = 3.7 × 10−6. This result differs, fractionally, by (−0.9 ± 3.7) × 10−6 from the value recommended by CODATA in 2010. In this work, the largest component of the relative uncertainty resulted from inconsistent values of kB determined with the various acoustic modes; it is 2.9 × 10−6. In our previous work, this component was 7.6 × 10−6.
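A compact sketch of the final averaging step mentioned above (the numbers are illustrative, not the paper's run values): inverse-variance weighting across runs, with the Birge ratio as a consistency check for the kind of mode-to-mode inconsistency discussed.

    import numpy as np

    # Illustrative per-run values of kB (J/K) and their standard uncertainties
    k = np.array([1.3806470e-23, 1.3806481e-23, 1.3806473e-23, 1.3806479e-23])
    u = np.array([0.8e-28, 1.0e-28, 0.9e-28, 1.2e-28])

    w = 1.0 / u**2
    kbar = np.sum(w * k) / np.sum(w)          # weighted mean
    u_kbar = 1.0 / np.sqrt(np.sum(w))         # internal uncertainty
    chi2 = np.sum(w * (k - kbar)**2)          # scatter of the runs
    birge = np.sqrt(chi2 / (k.size - 1))      # ~1 if uncertainties are adequate

    print(f"kB = {kbar:.7e} +/- {u_kbar:.1e} J/K, Birge ratio = {birge:.2f}")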

55 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a predictable quantum efficient detector (PQED), suggested to be capable of measuring optical power with a relative uncertainty of 1 ppm (ppm = parts per million).
Abstract: The design and construction of a predictable quantum efficient detector (PQED), suggested to be capable of measuring optical power with a relative uncertainty of 1 ppm (ppm = parts per million), is presented. The structure and working principle of induced junction silicon photodiodes are described combined with the design of the PQED. The detector uses two custom-made large area photodiodes assembled into a light-trapping configuration, reducing the reflectance down to a few tens of ppm. A liquid nitrogen cryostat is used to cool the induced junction photodiodes to 78 K to improve the mobility of charge carriers and to reduce the dark current. To determine the predicted spectral responsivity, reflectance losses of the PQED were measured at room temperature and at 78 K and also modelled throughout the visible wavelength range from 400 nm to 800 nm. The measured values of reflectance at room temperature were 29.8 ppm, 22.8 ppm and 6.6 ppm at the wavelengths of 476 nm, 488 nm and 532 nm, respectively, whereas the calculated reflectances were about 4 ppm higher. The reflectance at 78 K was measured at the wavelengths of 488 nm and 532 nm over a period of 60 h during which the reflectance changed by about 20 ppm. The main uncertainty components in the predicted internal quantum deficiency (IQD) of the induced junction photodiodes are due to the reliability of the charge-carrier recombination model and the extinction coefficient of silicon at wavelengths longer than 700 nm. The expanded uncertainty of the predicted IQD is 2 ppm at 78 K over a limited spectral range and below 140 ppm at room temperature over the visible wavelength range. All the above factors are combined as the external quantum deficiency (EQD), which is needed for the calculation of the predicted spectral responsivity of the PQED. The values of the predicted EQD are below 70 ppm between the wavelengths of 476 nm and 760 nm, and their expanded uncertainties mostly vary between 10 ppm and 140 ppm, where the lowest uncertainties are obtained at low temperatures.
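A hedged sketch of how the predicted spectral responsivity follows from the quantities above (the photon-to-electron relation is the standard one for photodiode-based primary detectors; the EQD value below is illustrative):

    import numpy as np

    h = 6.62606957e-34   # Planck constant (J s)
    c = 299792458.0      # speed of light (m/s)
    e = 1.602176565e-19  # elementary charge (C)

    def predicted_responsivity(wavelength_m, eqd):
        """Spectral responsivity (A/W) of an ideal photodiode, corrected by
        the external quantum deficiency EQD = 1 - (electrons out / photons in),
        which combines reflectance losses and the internal quantum deficiency."""
        ideal = e * wavelength_m / (h * c)   # one electron per photon
        return ideal * (1.0 - eqd)

    # Illustrative: an EQD of 70 ppm at 532 nm
    print(predicted_responsivity(532e-9, 70e-6))   # ~0.429 A/W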

Journal ArticleDOI
TL;DR: In this paper, the uncertainty in the measurement of the peak temperature on the side face of a cutting tool, during the metal cutting process, by infrared thermography is analyzed, considering a commercial off-the-shelf camera and optics typical of what is used in metal cutting research.
Abstract: This paper presents a comprehensive analysis of the uncertainty in the measurement of the peak temperature on the side face of a cutting tool, during the metal cutting process, by infrared thermography. The analysis considers the use of a commercial off-the-shelf camera and optics, typical of what is used in metal cutting research. A physics-based temperature measurement equation is considered and an analytical method is used to propagate the uncertainties associated with measurement variables to determine the overall temperature measurement uncertainty. A Monte Carlo simulation is used to expand on the analytical method by incorporating additional sources of uncertainty such as the point spread function (PSF) of the optics, the difference in emissivity of the chip and tool, and motion blur. Further discussion is provided regarding the effect of sub-scenel averaging and magnification on the measured temperature values. It is shown that a typical maximum cutting tool temperature measurement results in an expanded uncertainty of U = 50.1 °C (k = 2). The most significant contributors to this uncertainty are found to be the uncertainties in cutting tool emissivity and the PSF of the imaging system.
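In the spirit of the Monte Carlo analysis described above, here is a minimal, hedged sketch (a single-wavelength Wien-approximation model; the wavelength, emissivity and its uncertainty are assumptions, not the paper's values) propagating emissivity uncertainty into the inferred temperature:

    import numpy as np

    c2 = 1.4388e-2   # Planck's second radiation constant (m K)

    def radiance_temperature(signal, emissivity, lam):
        """Invert the simplified measurement equation
        S = emissivity * exp(-c2/(lam*T)), i.e. the Wien approximation with
        the wavelength-dependent prefactor absorbed into the normalization."""
        return c2 / (lam * np.log(emissivity / signal))

    lam = 4.0e-6                 # effective camera wavelength (m), assumed
    T_true, eps_true = 1100.0, 0.40
    signal = eps_true * np.exp(-c2 / (lam * T_true))   # simulated signal

    # Monte Carlo: propagate an assumed emissivity uncertainty to temperature
    rng = np.random.default_rng(3)
    eps_mc = rng.normal(eps_true, 0.04, 200_000)       # 10% relative, assumed
    T_mc = radiance_temperature(signal, eps_mc, lam)
    print(f"T = {T_mc.mean():.0f} K, expanded U(k=2) = {2*T_mc.std():.0f} K")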

Journal ArticleDOI
TL;DR: The predictable quantum efficient detector (PQED) is intended to become a new primary standard for radiant power measurements in the wavelength range from 400 nm to 800 nm as discussed by the authors, where the spectral response of the PQED to optical radiation is investigated.
Abstract: The predictable quantum efficient detector (PQED) is intended to become a new primary standard for radiant power measurements in the wavelength range from 400 nm to 800 nm. Characterization results of custom-made single induced junction photodiodes as they are used in the PQED and of assembled PQEDs are presented. The single photodiodes were tested in terms of linearity and spatial uniformity of the spectral responsivity. The highly uniform photodiodes were proved to be linear over seven orders of magnitude, i.e. in the radiant power range from 100 pW to 400 µW. The assembled PQED has been compared with a cryogenic electrical substitution radiometer with a very low uncertainty of the order of 30 ppm. Experimental results show good agreement with the modelled response of the PQED to optical radiation and prove a near unity external quantum efficiency.

Journal ArticleDOI
TL;DR: An algorithm able to deal with any desired fitting model was developed for regression problems with uncertain and correlated variables, developed in the MATLAB® environment and validated on several benchmark data sets, fitted with linear and non-linear regression models.
Abstract: An algorithm able to deal with any desired fitting model was developed for regression problems with uncertain and correlated variables.A typical application concerns the determination of calibration curves, especially (i) in those cases in which the uncertainties on the independent variables xi cannot be considered negligible with respect to those associated with the dependent variables yi, and (ii) when correlations exist among xi and yi. In the metrological field, several types of software have already been dedicated to the determination of calibration curves, some being focused just on problem (i) and a few others considering also problem (ii) but only for a straight-line fitting model. The proposed algorithm is able to deal with problems (i) and (ii) at the same time, for a generic fitting model. The tool was developed in the MATLAB® environment and validated on several benchmark data sets, fitted with linear and non-linear regression models.A review of the most commonly applied approximations to the parameter uncertainty is also presented, together with a Monte Carlo method proposed for comparison purposes with the results provided by the formula for the uncertainty evaluation which is implemented in the software.
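For case (i), non-negligible uncertainties on both xi and yi, a brief sketch using orthogonal distance regression is shown below (scipy.odr handles uncertain x and y but, unlike the algorithm described above, not correlations between them; the data and model are illustrative):

    import numpy as np
    from scipy import odr

    def model(beta, x):
        """Generic non-linear calibration curve, here a quadratic."""
        return beta[0] + beta[1] * x + beta[2] * x**2

    # Calibration points with standard uncertainties on both variables
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.9, 9.2, 15.1, 22.8])
    sx = np.full_like(x, 0.05)   # standard uncertainties of the x_i
    sy = np.full_like(y, 0.10)   # standard uncertainties of the y_i

    data = odr.RealData(x, y, sx=sx, sy=sy)
    fit = odr.ODR(data, odr.Model(model), beta0=[0.0, 1.0, 1.0]).run()
    print("parameters:", fit.beta)
    print("standard errors:", fit.sd_beta)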

Journal ArticleDOI
TL;DR: Comparisons show that even when increasing the number of observations in CV thanks to the combination of the two constellations, the AV remains superior to the CV solution in terms of noise and short term stability, especially for long baselines.
Abstract: GPS code measurements have been used for three decades for remote clock comparison, also called Time Transfer. Initially based on a technique using common-view (CV) single-frequency measurements, GPS time transfer now mostly uses dual-frequency measurements from geodetic receivers processed in all-in-view (AV). With the completion of the GLONASS constellation, it has been possible to readily use it in the CV single-frequency mode, providing results similar to GPS for short-distance time links. However GLONASS results are not readily equivalent to GPS in the dual-frequency AV mode, necessary for any moderate- to long-distance link, and this paper shows how to achieve this. We first present the GLONASS upgrade of the R2CGGTTS software, a tool to provide dual-frequency measurements in a format dedicated to time transfer named CGGTTS (Common GPS GLONASS Time Transfer Standard). The GLONASS navigation files are used to determine satellite clocks and positions, and dual-frequency pseudorange measurements are linearly combined to compute the CGGTTS results in a similar way as for GPS. In a second part, we present the combination of GPS and GLONASS into one unique time transfer solution based on AV. The results are first corrected using precise satellite orbit and clock products delivered by the IGS analysis centre ESOC, and characterized by the same reference for the GPS and GLONASS satellite clocks. Then, the need to introduce satellite-dependent hardware delays in GLONASS results is emphasized, and a procedure is proposed for their determination. The time transfer solutions obtained for GPS-only and GPS+GLONASS are then compared. The combination of GPS and GLONASS results in AV provides a time transfer solution having the same quality as GPS only. Furthermore, comparisons show that even when increasing the number of observations in CV thanks to the combination of the two constellations, the AV remains superior to the CV solution in terms of noise and short term stability, especially for long baselines.

Journal ArticleDOI
TL;DR: In this paper, a CAD-based digitized model of the FG5X was used to calculate the self-attraction correction, including the attraction of the co-moving drag-free chamber and of the counterweights used to reduce recoil.
Abstract: The gravitational attraction of the body of a gravity meter upon its own proof mass is sometimes called the self-attraction. The self-attraction is a source of systematic error for absolute measurements of g, the acceleration of an object due to Earth's gravity. While the effect is typically small—of the order of one part per billion of the Earth's gravitational attraction—it is significant at the current level of accuracy of absolute gravity meters. In the past, a self-attraction correction for the FG5 gravity meter has been estimated by considering a rather coarse description of the instrument using simple geometrical shapes (spheres and cylinders). This paper describes a more complete calculation using a CAD-based digitized model of the newest FG5X instrument. We have also included the attraction of the co-moving drag-free chamber as well as the self-attraction of the counterweights used in the FG5X to reduce recoil. The results are also applicable to older style FG5 instruments with a fibre-optic interferometer base. The correction found with this new approach agrees with previous estimates but is now based upon a more complete and accurate model.

Journal ArticleDOI
TL;DR: In this article, the Richardson-Lucy method was applied to spectrum bandpass correction in spectrometer measurements using monochromators, and the results showed that it is robust with respect to wavelength step size and measurement noise.
Abstract: Bandpass correction in spectrometer measurements using monochromators is often necessary in order to obtain accurate measurement results. The classical approach of spectrometer bandpass correction is based on local polynomial approximations and the use of finite differences. Here we compare this approach with an extension of the Richardson–Lucy method, which is well known in image processing, but has not been applied to spectrum bandpass correction yet. Using an extensive simulation study and a practical example, we demonstrate the potential of the Richardson–Lucy method. In contrast to the classical approach, it is robust with respect to wavelength step size and measurement noise. In almost all cases the Richardson–Lucy method turns out to be superior to the classical approach both in terms of spectrum estimate and its associated uncertainties.
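A compact sketch of the Richardson–Lucy iteration as it applies to bandpass correction (on a discrete wavelength grid; the triangular slit function and iteration count below are illustrative choices, not taken from the paper):

    import numpy as np

    def richardson_lucy(measured, kernel, n_iter=50):
        """Richardson-Lucy deconvolution of a measured spectrum by the
        (normalized, non-negative) monochromator bandpass kernel."""
        kernel = kernel / kernel.sum()
        kernel_rev = kernel[::-1]                 # adjoint of the convolution
        estimate = np.full_like(measured, measured.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, kernel, mode="same")
            ratio = measured / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, kernel_rev, mode="same")
        return estimate

    # Demo: a narrow spectral feature smeared by a triangular bandpass
    grid = np.arange(400.0, 500.0, 0.5)           # wavelength grid (nm)
    true = np.exp(-0.5 * ((grid - 450.0) / 1.0)**2)
    bandpass = np.bartlett(17)                    # triangular slit function
    measured = np.convolve(true, bandpass / bandpass.sum(), mode="same")
    recovered = richardson_lucy(measured, bandpass)
    print(float(measured.max()), float(recovered.max()))  # peak height restored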

Journal ArticleDOI
TL;DR: In this paper, the authors use a theoretical analysis and a differential finite element method to investigate the behavior of the quadratic dependence in a typical magnetic circuit, and several strategies are discussed to minimize this error in the magnetic system design for the watt balance.
Abstract: The watt balance is operated in two asynchronous measurement modes to obtain the voltage–velocity ratio U/v and the force–current ratio mg/I, respectively. The magnetic flux density will change between the two modes when the effect due to the coil current is taken into account, particularly for watt balances using a soft magnetic material in the magnetic circuit. Normally, the linear component of the magnetic flux density change can be easily eliminated by reversing the coil current; however, the quadratic component remains as a systematic error, i.e. a non-linear error. In this paper, we use a theoretical analysis and a differential finite element method to investigate the behaviour of the quadratic dependence in a typical magnetic circuit. Several strategies are discussed to minimize this error in the magnetic system design for the watt balance.
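The distinction between the linear and quadratic current dependence can be sketched as follows (generic coefficients a and b, my notation):

    % Flux density at the coil as a function of the coil current I
    B(I) = B_0 + aI + bI^2 + \cdots
    % Reversing the current and averaging cancels the linear term, but the
    % quadratic component survives as a systematic (non-linear) error:
    \tfrac{1}{2}\,[\,B(+I) + B(-I)\,] = B_0 + bI^2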

Journal ArticleDOI
TL;DR: In this paper, it was shown that the degree of equivalence is not a unique measure of consistency between the results and the underlying measurand when determined solely from the data and that prior knowledge or additional assumptions are needed for this purpose, and Bayesian methods are particularly suitable to handle that.
Abstract: The degrees of equivalence can be viewed as possibly the main result in the analysis of key comparison data. Their specification as given in the CIPM MRA is discussed and critically assessed in this paper. We argue that there is an ambiguity in the definition and meaning of the (unilateral) degrees of equivalence. As a consequence of this ambiguity uncertainties quoted for (unilateral) degrees of equivalence may be questioned. The ambiguity can be avoided by identifying the quantities that are being estimated by the degrees of equivalence, and we propose a standard statistical model to do this.We then show that the degrees of equivalence are not a unique measure of consistency between the results and the underlying measurand when determined solely from the data. Prior knowledge or additional assumptions are needed for this purpose, and Bayesian methods are particularly suitable to handle that. However, such measures of consistency depend on the chosen additional assumptions and generally are not in accordance with the current CIPM MRA.Fortunately, quantifying consistency between the results and the underlying measurand is not necessary in order to assess equivalence between the laboratories. We show that on the basis of the (unambiguous) pairwise degrees of equivalence the laboratories can be grouped into equivalent subsets, the largest of which may be chosen to select those laboratories whose CMCs are then viewed as validated.
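A short sketch of the standard computations behind unilateral and pairwise degrees of equivalence, under the common weighted-mean choice of reference value (illustrative numbers; the minus sign reflects the correlation of each included laboratory with the KCRV):

    import numpy as np

    # Illustrative key-comparison results x_i with standard uncertainties u_i
    x = np.array([10.02, 9.97, 10.05, 10.00, 9.99])
    u = np.array([0.03, 0.02, 0.04, 0.02, 0.03])

    w = 1.0 / u**2
    x_ref = np.sum(w * x) / np.sum(w)        # KCRV as the weighted mean
    u_ref = 1.0 / np.sqrt(np.sum(w))

    # Unilateral degrees of equivalence for labs included in the KCRV
    d = x - x_ref
    u_d = np.sqrt(u**2 - u_ref**2)

    # Pairwise degree of equivalence between two independent laboratories
    d_12 = x[0] - x[1]
    u_d12 = np.sqrt(u[0]**2 + u[1]**2)
    print(d, u_d, d_12, u_d12)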

Journal ArticleDOI
TL;DR: In this paper, the authors propose an alternative approach and demonstrate that a standard uncertainty below 5 µGal (relative uncertainty of 5 × 10−9) is reachable under the conditions at BIPM.
Abstract: It has been recommended that the relative standard uncertainty of the numerical value of the Planck constant required for the redefinition of the kilogram should not exceed 2 × 10−8. To reach this goal using experiments based on a watt balance, the free-fall acceleration (g) traceable to the SI, at a given point and a given time, needs to be known with a sufficiently small uncertainty well below 2 × 10−8. Reducing the uncertainty in g allows the other uncertainties related to the watt balance to be increased. Instead of a simultaneous operation of an absolute gravimeter with a watt balance, we propose an alternative approach and demonstrate that a standard uncertainty below 5 µGal (relative uncertainty of 5 × 10−9) is reachable under the conditions at BIPM. Further decreasing the uncertainty could significantly increase commitments in terms of personnel and equipment and would not significantly improve the uncertainty targeted for the BIPM watt balance experiment. A 5 µGal uncertainty might also satisfy the needs of other watt balance experiments underway or planned. In our approach we combine the following information: (1) the Key Comparison Reference Values obtained from the CCM.G-K1, a key comparison carried out in the frame of the International Comparison of Absolute Gravimeters in 2009 (ICAG2009); (2) the accurate gravity network established using the qualified absolute and relative gravimeters; (3) temporal gravity variations based on observed Earth-tide parameters and modelled effects of polar motion and atmospheric mass redistribution; (4) uncertainty estimates that account for non-modelled effects; (5) the option to carry out absolute gravity measurements once every one or two years with two or more gravimeters for monitoring the stability of the gravity field at the BIPM.

Journal ArticleDOI
TL;DR: In this article, the authors report on their on-going effort to measure the Boltzmann constant, kB, using the Doppler broadening technique, and present a revised error budget in which the global uncertainty on systematic effects is reduced to 2.3 ppm.
Abstract: We report on our on-going effort to measure the Boltzmann constant, kB, using the Doppler broadening technique. The main systematic effects affecting the measurement are discussed. A revised error budget is presented in which the global uncertainty on systematic effects is reduced to 2.3 ppm. This corresponds to a reduction of more than one order of magnitude compared with our previous Boltzmann constant measurement. Means to reach a determination of kB at the part per million accuracy level are outlined.

Journal ArticleDOI
TL;DR: In this paper, two NIST programmable Josephson voltage standard (PJVS) systems are directly compared at 10V using different nanovoltmeters and the results of a direct comparison using an analogue detector show that the two independent PJVS systems agree within 2.6 parts in 1011 at 10 V with a relative total combined uncertainty of 3.4 parts.
Abstract: Two NIST programmable Josephson voltage standard (PJVS) systems are directly compared at 10 V using different nanovoltmeters. These PJVS systems use arrays of triple-stacked superconducting niobium Josephson junctions with barriers made of niobium silicide. Compared with the voltages produced by Josephson voltage standards based on hysteretic junctions, PJVS systems using damped junctions produce predictable voltage levels. However, in order to guarantee the quantization of the voltages and to minimize the errors at the room-temperature voltage output, additional precautions are required. We report several experimental results of voltage measurements that contain significant systematic errors. The generated voltages appear reproducible but they are, in fact, inaccurate. When proper measurement procedures are followed, the results of a direct comparison using an analogue detector show that the two independent PJVS systems agree within 2.6 parts in 10¹¹ at 10 V with a relative total combined uncertainty of 3.4 parts in 10¹¹ (k = 1). Investigations show that the largest systematic error and most significant contribution to the uncertainty budget is caused by the leakage resistance of each PJVS to ground. This paper describes a measurement procedure to characterize this leakage resistance and one approach to including the resulting voltage error in the uncertainty budget.
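A back-of-the-envelope sketch of the leakage-resistance error mechanism identified above (illustrative values, not the paper's): the series resistance of the measurement wiring and the finite leakage resistance to ground form a voltage divider at the output.

    # Voltage divider formed by the wiring resistance in series with the
    # array and the finite leakage resistance to ground (assumed values).
    V = 10.0          # nominal PJVS output (V)
    R_series = 4.0    # cryoprobe wiring resistance (ohm), assumed
    R_leak = 1.0e11   # leakage resistance to ground (ohm), assumed

    dV = V * R_series / (R_series + R_leak)   # voltage error at the output
    print(f"dV = {dV:.3e} V, relative error = {dV/V:.1e}")
    # ~4 parts in 10^11 here, i.e. comparable to the agreement quoted above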

Journal ArticleDOI
TL;DR: For the first time, a EURAMET comparison was performed in the field of small DC currents below 1 nA, comparing the capabilities of thirteen European and non-European National Metrology Institutes for traceable calibrations of picoammeters.
Abstract: For the first time, a EURAMET comparison was performed in the field of small DC currents below 1 nA. The capabilities of thirteen European and non-European National Metrology Institutes for traceable calibrations of picoammeters have been compared. For that purpose, two different commercial picoammeters were used as travelling instruments. They were calibrated at current values of ±100 fA, ±1 pA, ±10 pA and ±100 pA. The accuracy achieved was to a large extent limited by the stability of the travelling instruments. The expanded relative uncertainty (k = 2) of the reference values varied between 2.7 × 10−4 and 1.9 × 10−5, depending on the current ranges and the travelling instruments. There was good agreement between the results of most of the participants. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by EURAMET, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

Journal ArticleDOI
TL;DR: In this paper, a new route to gaining traceability for dynamic pressure calibration using the acousto-optic effect is described, covering the experimental set-ups used for its realization, the thermophysical background of the measurements, and numerical estimates of the expected results for two different pressure-transmitting media.
Abstract: The primary calibration of pressure transducers is at present realized by static methods. This paper describes a new route to gaining traceability for dynamic calibration using the acousto-optic effect. The pressure range under consideration is up to 100 MPa. We set out a description of the general principle employed to gain traceability, the experimental set-ups that are used for the realization and the thermophysical background of the measurements, and some numerical estimates on the expected results for two different pressure-transmitting media are shown.

Journal ArticleDOI
TL;DR: In this article, a combined solvent wash and UV/ozone procedure, called UVOPS or "UV/ozone with pre-wash and stabilization", was tested and shown to remove even gross contamination from platinum and gold surfaces.
Abstract: The stability of reference masses has been a long-standing concern within the SI. More recently the requirements of potential non-artefact realizations of the kilogram have added gold and its alloys to the platinum alloys that have historically been the focus of attention.Previously we proposed UV/ozone cleaning of standard-mass surfaces to improve stability with respect to carbonaceous contamination. Since then both NPL and BIPM have constructed prototypes of UV/ozone apparatus for cleaning kilogram standards. We have therefore tested a combined solvent wash and UV/ozone procedure, UVOPS or ‘UV/ozone with pre-wash and stabilization’, on both platinum and gold surfaces. X-ray photoelectron spectroscopy (XPS) shows this to be very successful in removing even gross contamination from these noble metal surfaces. Oxidation is negligible, limited to the outermost layer of noble metal atoms, and terminates at a level that is within the uncertainty of mass comparisons at the 1 kg level. This oxide has been seen in some earlier XPS studies but not others—we show that decomposition of the oxide by x-rays at XPS energies may be responsible for this disparity.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the principles developed for the application of metrology to the field of chemistry and particularly to analytical chemistry, and discuss the importance of the mole in analytical chemistry.
Abstract: This paper is an introduction to the principles developed for the application of metrology to the field of chemistry and particularly to analytical chemistry. It starts with a discussion of the mole, the base unit of the SI that is most relevant to analytical chemistry. The mole has become the subject of particular discussion recently, since the publication of proposals to re-define it along with three other base units of the SI. This discussion has also generated interest in the origin of the term ‘amount of substance’ used as the quantity for which the mole is the unit. This paper reviews the origin of this term and explains why it is not sufficient to replace it with an alternative such as a ‘number of entities’. The paper concludes with some discussion of how the mole is realized through the use of primary methods of measurement.

Journal ArticleDOI
TL;DR: A detailed analysis of the accuracy of two-way time transfer (TWTT) via a single coaxial cable was carried out, and a TWTT system for highly accurate time distribution or comparison was designed and realized.
Abstract: In order to find the limits of the accuracy of two-way time transfer (TWTT) via a single coaxial cable, we have carried out a detailed analysis which is presented in this paper. We applied the TWTT concept when a transmission line is driven by pulse current drivers and the times of arrival of the pulses are measured at the ends of the line. In addition to the estimation of the accuracy, the analysis provides several rules for the proper design of a TWTT system with optimal performance. Based on this concept, a TWTT system for highly accurate time distribution or comparison has been designed and realized. For distances up to 1 km the accuracy was better than 100 ps without any additional correction or adjustment. After the influence of the non-symmetry of the input–output circuits was corrected, the errors were lower than 20 ps for distances up to 2 km. The TWTT system is intended for keeping unified time in a network of event timers distributed in one building or in a relatively small area. The timing units forming the system guarantee time transfer in parallel with the time tagging of external pulses.
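The symmetric-delay principle behind such a system can be sketched as follows (generic notation, my own): each end launches a pulse at its local epoch and timestamps the pulse arriving from the other end; with a single common cable the propagation delay cancels.

    % End A sends at local time t_A and receives B's pulse at local time r_A;
    % end B sends at t_B and receives A's pulse at r_B. Over one common cable
    % the delay d is the same in both directions:
    r_A = t_B + \Delta + d, \qquad r_B = t_A - \Delta + d
    % where \Delta is the offset of clock A relative to clock B. Subtracting
    % eliminates the unknown delay d:
    \Delta = \tfrac{1}{2}\left[(r_A - t_B) - (r_B - t_A)\right]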

Journal ArticleDOI
TL;DR: In this paper, the authors explored the feasibility of acoustic gas thermometry in the range 700 K to the copper point (1358 K) in order to more accurately measure the differences between ITS-90 and the thermodynamic temperature.
Abstract: This work explores the feasibility of acoustic gas thermometry (AGT) in the range 700 K to the copper point (1358 K) in order to more accurately measure the differences between ITS-90 and the thermodynamic temperature. To test material suitability and stability, we investigated microwave resonances in argon-filled cylindrical cavities machined from a Ni–Cr–Fe alloy. We measured the frequencies of five non-degenerate microwave modes of one cavity at temperatures up to 1349 K using home-made coaxial cables and antennas. The short-term repeatability of both the measured frequencies fN and the scaled half-widths gN/fN was better than 10−6 fN. Oxidation was not a problem while clean argon flowed through the cavity. The measurement techniques are compatible with highly accurate AGT and may be adaptable to refractive index gas thermometry.

Journal ArticleDOI
TL;DR: The cosine error of a class of in situ hyperspectral irradiance sensors largely applied for ocean colour investigations has been characterized for both in-air and in-water measurements as discussed by the authors.
Abstract: The cosine error of a class of in situ hyperspectral irradiance sensors largely applied for ocean colour investigations has been characterized for both in-air and in-water measurements. Results for in-air measurements indicate a slight wavelength dependence of the cosine error with differences up to 2% in the 412 nm to 865 nm spectral interval at 65° zenith angle (i.e. the angle of incident irradiance with respect to the normal axis of the irradiance collector). However, the dependence of the cosine error on the zenith angle is generally quite marked and significantly varies from radiometer to radiometer with values ranging from −5% up to +7% at 65°. Additionally, apart from the expected increase in the cosine error with the angle of incidence, in-air measurements generally indicate an irregular deviation from the ideal cosine response near the normal angle of incidence. A more pronounced increase in the cosine error is generally observed when radiometers are operated in water with respect to in air.

Journal ArticleDOI
TL;DR: In this article, the uncertainty of spectral UV measurements is driven by the signal-to-noise ratio of the detector, while the spectral UV calculations strongly depend on the uncertainties of the ozone input.
Abstract: Although some of the adverse effects of the ultraviolet (UV) radiation may be strictly proportional to cumulative UV doses, others may relate to the frequency of extreme UV events. Therefore, an improved understanding of the UV global climate, including variability and trends, has become of great interest. Variability and trend analyses require quality-ensured surface UV series. The quality of surface UV data depends on their uncertainty. Building upon our prior efforts, we have used a Monte Carlo-based method to compute, under different conditions, the uncertainties affecting UV data rendered by models (1D radiative transfer models) and by spectroradiometers (double monochromator-based and CCD array-based). We found that the uncertainty of spectral UV measurements is driven by the signal-to-noise ratio of the detector, while the uncertainty of spectral UV calculations strongly depends on the uncertainty of the ozone input. The presented uncertainty figures allow comparison of the performance of modern UV gathering techniques (models and instruments), and provide a frame to assess the significance of differences when intercomparing.