
Showing papers by "Sandia National Laboratories" published in 2009


Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
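As a concrete illustration of the two decompositions highlighted above, the sketch below builds a third-order tensor in CP form (a sum of R rank-one tensors) and in Tucker form (a core tensor multiplied by a factor matrix along each mode) using plain NumPy. The dimensions, rank, and random factors are placeholder assumptions; for serious work the Tensor Toolbox or N-way Toolbox mentioned in the abstract would be used.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3          # tensor dimensions and CP rank (arbitrary choices)

# CP/PARAFAC: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X_cp = np.einsum('ir,jr,kr->ijk', A, B, C)

# The same tensor written explicitly as a sum of R rank-one (outer-product) tensors
X_sum = sum(np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]) for r in range(R))
assert np.allclose(X_cp, X_sum)

# Tucker: a (P x Q x S) core G multiplied by a factor matrix along each mode
P, Q, S = 2, 3, 2
G = rng.standard_normal((P, Q, S))
At, Bt, Ct = rng.standard_normal((I, P)), rng.standard_normal((J, Q)), rng.standard_normal((K, S))
X_tucker = np.einsum('pqs,ip,jq,ks->ijk', G, At, Bt, Ct)

print(X_cp.shape, X_tucker.shape)   # both (4, 5, 6)
```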

9,227 citations


Journal ArticleDOI
TL;DR: This critical review discusses the origins of MOF luminosity, which include the linker, the coordinated metal ions, antenna effects, excimer and exciplex formation, and guest molecules.
Abstract: Metal–organic frameworks (MOFs) display a wide range of luminescent behaviors resulting from the multifaceted nature of their structure. In this critical review we discuss the origins of MOF luminosity, which include the linker, the coordinated metal ions, antenna effects, excimer and exciplex formation, and guest molecules. The literature describing these effects is comprehensively surveyed, including a categorization of each report according to the type of luminescence observed. Finally, we discuss potential applications of luminescent MOFs. This review will be of interest to researchers and synthetic chemists attempting to design luminescent MOFs, and those engaged in the extension of MOFs to applications such as chemical, biological, and radiation detection, medical imaging, and electro-optical devices (141 references).

4,407 citations


Journal ArticleDOI
TL;DR: The new growth process introduced here establishes a method for the synthesis of graphene films on a technologically viable basis and produces monolayer graphene films with much larger domain sizes than previously attainable.
Abstract: Graphene, a single monolayer of graphite, has recently attracted considerable interest owing to its novel magneto-transport properties, high carrier mobility and ballistic transport up to room temperature. It has the potential for technological applications as a successor of silicon in the post Moore's law era, as a single-molecule gas sensor, in spintronics, in quantum computing or as a terahertz oscillator. For such applications, uniform ordered growth of graphene on an insulating substrate is necessary. The growth of graphene on insulating silicon carbide (SiC) surfaces by high-temperature annealing in vacuum was previously proposed to open a route for large-scale production of graphene-based devices. However, vacuum decomposition of SiC yields graphene layers with small grains (30-200 nm; refs 14-16). Here, we show that the ex situ graphitization of Si-terminated SiC(0001) in an argon atmosphere of about 1 bar produces monolayer graphene films with much larger domain sizes than previously attainable. Raman spectroscopy and Hall measurements confirm the improved quality of the films thus obtained. High electronic mobilities were found, which reach μ = 2,000 cm² V⁻¹ s⁻¹ at T = 27 K. The new growth process introduced here establishes a method for the synthesis of graphene films on a technologically viable basis.

2,493 citations


Journal ArticleDOI
TL;DR: In this article, a single layer of electrically controlled metamaterial was used to achieve active control of the phase of terahertz waves and demonstrated high-speed broadband modulation.
Abstract: Using a single layer of electrically controlled metamaterial, researchers have achieved active control of the phase of terahertz waves and demonstrated high-speed broadband modulation.

935 citations


Journal ArticleDOI
01 Jan 2009
TL;DR: Advanced compression-ignition (CI) engines can deliver both high efficiencies and very low NOX and particulate (PM) emissions; unlike conventional diesel engines, the charge is highly dilute and premixed (or partially premixed) to achieve low emissions, as discussed in this paper.
Abstract: Advanced compression-ignition (CI) engines can deliver both high efficiencies and very low NOX and particulate (PM) emissions. Efficiencies are comparable to conventional diesel engines, but unlike conventional diesel engines, the charge is highly dilute and premixed (or partially premixed) to achieve low emissions. Dilution is accomplished by operating either lean or with large amounts of EGR. The development of these advanced CI engines has evolved mainly along two lines. First, for fuels other than diesel, a combustion process commonly known as homogeneous charge compression-ignition (HCCI) is generally used, in which the charge is premixed before being compression ignited. Although termed “homogeneous,” there are always some thermal or mixture inhomogeneities in real HCCI engines, and it is sometimes desirable to introduce additional stratification. Second, for diesel fuel (which autoignites easily but has low volatility) an alternative low-temperature combustion (LTC) approach is used, in which the autoignition is closely coupled to the fuel-injection event to provide control over ignition timing. To obtain dilute LTC, this approach relies on high levels of EGR, and injection timing is typically shifted 10–15° CA earlier or later than for conventional diesel combustion so temperatures are lower, which delays ignition and provides more time for premixing. Although these advanced CI combustion modes have important advantages, there are difficulties to implementing them in practical engines. In this article, the principles of HCCI and diesel LTC engines are reviewed along with the results of research on the in-cylinder processes. This research has resulted in substantial progress toward overcoming the main challenges facing these engines, including: improving low-load combustion efficiency, increasing the high-load limit, understanding fuel effects, and maintaining low NOX and PM emissions over the operating range.

919 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the LED performance targets needed to realize the benefits of solid-state lighting, highlight some of the remaining technical challenges, and describe recent advances in LED materials and novel device concepts that show promise for realizing the full potential of LED-based white lighting.
Abstract: Over the past decade, advances in LEDs have enabled the potential for wide-scale replacement of traditional lighting with solid-state light sources. If LED performance targets are realized, solid-state lighting will provide significant energy savings, important environmental benefits, and dramatically new ways to utilize and control light. In this paper, we review LED performance targets that are needed to achieve these benefits and highlight some of the remaining technical challenges. We describe recent advances in LED materials and novel device concepts that show promise for realizing the full potential of LED-based white lighting.

764 citations


Journal ArticleDOI
TL;DR: Three distinct forms are derived for the force virial contribution to the pressure and stress tensor of a collection of atoms interacting under periodic boundary conditions, and are valid for arbitrary many-body interatomic potentials.
Abstract: Three distinct forms are derived for the force virial contribution to the pressure and stress tensor of a collection of atoms interacting under periodic boundary conditions. All three forms are written in terms of forces acting on atoms, and so are valid for arbitrary many-body interatomic potentials. All three forms are mathematically equivalent. In the special case of atoms interacting with pair potentials, they reduce to previously published forms. (i) The atom-cell form is similar to the standard expression for the virial for a finite nonperiodic system, but with an explicit correction for interactions with periodic images. (ii) The atom form is particularly suited to implementation in modern molecular dynamics simulation codes using spatial decomposition parallel algorithms. (iii) The group form of the virial allows the contributions to the virial to be assigned to individual atoms.
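For the pair-potential special case mentioned at the end of the abstract, the virial reduces to a sum of r_ij · f_ij over pairs evaluated with the minimum-image convention. The sketch below is a generic illustration of that special case for a Lennard-Jones system in reduced units; it does not implement the paper's atom, atom-cell, or group forms, and the lattice configuration, cutoff, and parameters are arbitrary assumptions.

```python
import numpy as np

def lj_pair_virial_pressure(pos, box, kT, eps=1.0, sigma=1.0, rcut=2.5):
    """Pressure from the pair-potential virial under periodic boundaries:
    P = (N*kT + W/3) / V with W = sum_{i<j} r_ij . f_ij (minimum-image convention)."""
    N, V, W = len(pos), box**3, 0.0
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                    # minimum-image displacement vectors
        r2 = np.einsum('ij,ij->i', d, d)
        r2 = r2[r2 < rcut**2]
        sr6 = (sigma**2 / r2)**3
        W += np.sum(24.0 * eps * (2.0 * sr6**2 - sr6))  # r . f for the Lennard-Jones pair force
    return (N * kT + W / 3.0) / V

# 125 atoms on a simple cubic lattice in a periodic box (arbitrary test configuration)
n_side, spacing = 5, 1.5
g = np.arange(n_side) * spacing
pos = np.array(np.meshgrid(g, g, g, indexing='ij')).reshape(3, -1).T
print(f"P = {lj_pair_virial_pressure(pos, box=n_side * spacing, kT=1.0):.4f} (reduced units)")
```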

737 citations


Journal ArticleDOI
TL;DR: This review describes the use of PC expansions for the representation of random variables/fields and discusses their utility for the propagation of uncertainty in computational models, focusing on CFD models.
Abstract: The quantification of uncertainty in computational fluid dynamics (CFD) predictions is both a significant challenge and an important goal. Probabilistic uncertainty quantification (UQ) methods have been used to propagate uncertainty from model inputs to outputs when input uncertainties are large and have been characterized probabilistically. Polynomial chaos (PC) methods have found increased use in probabilistic UQ over the past decade. This review describes the use of PC expansions for the representation of random variables/fields and discusses their utility for the propagation of uncertainty in computational models, focusing on CFD models. Many CFD applications are considered, including flow in porous media, incompressible and compressible flows, and thermofluid and reacting flows. The review examines each application area, focusing on the demonstrated use of PC UQ and the associated challenges. Cross-cutting challenges with time unsteadiness and long time horizons are also discussed.
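A minimal sketch of the representation and propagation step described above, assuming a one-dimensional Hermite PC basis for a standard normal input and non-intrusive spectral projection with Gauss-Hermite quadrature. The model g below is a stand-in for a CFD response, and the expansion order and quadrature level are arbitrary assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Placeholder response of a standard normal input xi (stand-in for a CFD quantity of interest)
g = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2

order = 6                                  # PC expansion order (assumption)
nodes, weights = hermegauss(30)            # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)   # normalize so the weights sum to 1

# Spectral projection: c_k = E[g(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
coeffs = np.empty(order + 1)
for k in range(order + 1):
    basis_k = np.zeros(k + 1); basis_k[k] = 1.0        # coefficient vector selecting He_k
    coeffs[k] = np.sum(weights * g(nodes) * hermeval(nodes, basis_k)) / factorial(k)

# Moments follow directly from the PC coefficients
pc_mean = coeffs[0]
pc_var = sum(coeffs[k]**2 * factorial(k) for k in range(1, order + 1))

# Monte Carlo check on the same model
xi = np.random.default_rng(0).standard_normal(200_000)
print(f"PC:  mean {pc_mean:.4f}, var {pc_var:.4f}")
print(f"MC:  mean {g(xi).mean():.4f}, var {g(xi).var():.4f}")
```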

731 citations


Journal ArticleDOI
TL;DR: In this paper, different aspects of plasma-wall interaction (PWI) are assessed in terms of their importance for the initial wall materials choice: CFC for the strike-point tiles, W in the divertor and baffle, and Be on the first wall.

708 citations


Journal ArticleDOI
TL;DR: The absolute grain boundary mobility of 388 nickel grain boundaries was calculated using a synthetic driving-force molecular dynamics method, with complete results appearing in the Supplementary materials; the effect of boundary mobility on grain boundary roughening is not considered.

646 citations


Journal ArticleDOI
TL;DR: As discussed by the authors, there has been much recent progress in the area of surrogate fuels for diesel; however, major research gaps remain, and no detailed chemical kinetic models or experimental investigations are available for such compounds.

Journal ArticleDOI
TL;DR: In this article, the authors present results from terascale direct numerical simulations (DNS) of turbulent flames, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame.
Abstract: Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.

Journal ArticleDOI
TL;DR: In this paper, the synthesis of anion exchange membranes based on a poly(phenylene) backbone prepared by a Diels−Alder reaction is demonstrated, and they have hydroxide ion conductivities as high as 50 mS/cm in liquid water.
Abstract: Cationic polymer membranes that conduct free anions comprise an enabling area of research for alkaline membrane fuel cells and other solid-state electrochemical devices that operate at high pH. The synthesis of anion exchange membranes based on a poly(phenylene) backbone prepared by a Diels−Alder reaction is demonstrated. The poly(phenylene)s have benzylic methyl groups that are converted to bromomethyl groups by a radical reaction. Cationic polymers result from conversion of the bromomethyl groups to ionic moieties by quaternization with trimethylamine in the solid state. The conversion to benzyltrimethylammonium groups is incomplete as evidenced by the differences between the IEC values measured by titration and the theoretical IECs based on ¹H NMR measurements. The anion exchange membranes formed from these polymers have hydroxide ion conductivities as high as 50 mS/cm in liquid water, and they are stable under highly basic conditions at elevated temperatures.

Journal ArticleDOI
TL;DR: In this paper, adaptive refinement algorithms are introduced for the non-local peridynamics method, using scaling of the micromodulus and horizon, and the particular features of adaptivity in peridynamics, for which multiscale modeling and grid refinement are closely connected, are discussed.
Abstract: We introduce here adaptive refinement algorithms for the non-local method peridynamics, which was proposed in (J. Mech. Phys. Solids 2000; 48:175–209) as a reformulation of classical elasticity for discontinuities and long-range forces. We use scaling of the micromodulus and horizon and discuss the particular features of adaptivity in peridynamics for which multiscale modeling and grid refinement are closely connected. We discuss three types of numerical convergence for peridynamics and obtain uniform convergence to the classical solutions of static and dynamic elasticity problems in 1D in the limit of the horizon going to zero. Continuous micromoduli lead to optimal rates of convergence independent of the grid used, while discontinuous micromoduli produce optimal rates of convergence only for uniform grids. Examples for static and dynamic elasticity problems in 1D are shown. The relative error for the static and dynamic solutions obtained using adaptive refinement are significantly lower than those obtained using uniform refinement, for the same number of nodes. Copyright © 2008 John Wiley & Sons, Ltd.
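The 1D setting analyzed in the abstract can be sketched with a uniform-grid, bond-based peridynamic bar integrated explicitly in time. The micromodulus convention c = 2E/(A δ²), the horizon of three grid spacings, the material parameters, and the loading are all illustrative assumptions, and no adaptive refinement or micromodulus/horizon scaling is implemented here.

```python
import numpy as np

# Illustrative 1D bond-based peridynamic bar on a uniform grid, explicit time stepping.
E, rho, A = 200e9, 7850.0, 1.0e-4   # Young's modulus [Pa], density [kg/m^3], cross-section [m^2]
L, n = 1.0, 201                     # bar length [m] and number of nodes
dx = L / (n - 1)
delta = 3.0 * dx                    # horizon (assumed 3*dx)
c = 2.0 * E / (A * delta**2)        # constant 1D micromodulus (assumed convention)
m = 3                               # neighbors on each side inside the horizon
u = np.zeros(n)                     # nodal displacements
v = np.zeros(n)                     # nodal velocities
dt = 0.5 * dx / np.sqrt(E / rho)    # time step, a fraction of dx / wave speed

def pd_force_density(u):
    """Peridynamic internal force per unit volume: sum over bonds of c * stretch * V_j."""
    f = np.zeros(n)
    for j in range(1, m + 1):
        xi = j * dx                              # bond length
        pair = c * (u[j:] - u[:-j]) / xi * (A * dx)
        f[:-j] += pair                           # force on the left node of each bond
        f[j:]  -= pair                           # equal and opposite force on the right node
    return f

for step in range(2000):
    a = pd_force_density(u) / rho
    v += dt * a
    v[0], v[-1] = 0.0, 0.1                       # fixed left end, right end pulled at 0.1 m/s
    u += dt * v

print(f"end displacement = {u[-1]:.3e} m, midpoint = {u[n // 2]:.3e} m")
```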

Journal ArticleDOI
TL;DR: In this article, a single-cylinder version of a heavy-duty diesel engine with extensive optical access to the combustion chamber was used to investigate the causes of the biodiesel NOx increase.
Abstract: It is generally accepted that emissions of nitrogen oxides (NOx) increase as the volume fraction of biodiesel increases in blends with conventional diesel fuel. While many mechanisms based on biodiesel effects on in-cylinder processes have been proposed to explain this observation, a clear understanding of the relative importance of each has remained elusive. To gain further insight into the cause(s) of the biodiesel NOx increase, experiments were conducted in a single-cylinder version of a heavy-duty diesel engine with extensive optical access to the combustion chamber. The engine was operated using two biodiesel fuels and two hydrocarbon reference fuels, over a wide range of loads, and using undiluted air as well as air diluted with simulated exhaust gas recirculation. Measurements were made of cylinder pressure, spatially integrated natural luminosity (a measure of radiative heat transfer), engine-out emissions of NOx and smoke, flame lift-off length, actual start of injection, ignition delay, and efficiency. Adiabatic flame temperatures for the test fuels and a surrogate #2 diesel fuel also were computed at representative diesel-engine conditions. Results suggest that the biodiesel NOx increase is not quantitatively determined by a change in a single fuel property, but rather is the result of a number of coupled mechanisms whose effects may tend to reinforce or cancel one another under different conditions, depending on specific combustion and fuel characteristics. Nevertheless, charge-gas mixtures that are closer to stoichiometric at ignition and in the standing premixed autoignition zone near the flame lift-off length appear to be key factors in helping to explain the biodiesel NOx increase under all conditions. These differences are expected to lead to higher local and average in-cylinder temperatures, lower radiative heat losses, and a shorter, more-advanced combustion event, all of which would be expected to increase thermal NOx emissions. Differences in prompt NO formation and species concentrations resulting from fuel and jet-structure changes also may play important roles.

Journal ArticleDOI
TL;DR: In comparison to untreated biomass, ionic liquid pretreated biomass produces cellulose that is efficiently hydrolyzed with commercial cellulase cocktail with high sugar yields over a relatively short time interval.
Abstract: Auto-fluorescent mapping of plant cell walls was used to visualize cellulose and lignin in pristine switchgrass (Panicum virgatum) stems to determine the mechanisms of biomass dissolution during ionic liquid pretreatment. The addition of ground switchgrass to the ionic liquid 1-n-ethyl-3-methylimidazolium acetate resulted in the disruption and solubilization of the plant cell wall at mild temperatures. Swelling of the plant cell wall, attributed to disruption of inter- and intramolecular hydrogen bonding between cellulose fibrils and lignin, followed by complete dissolution of biomass, was observed without using imaging techniques that require staining, embedding, and processing of biomass. Subsequent cellulose regeneration via the addition of an anti-solvent, such as water, was observed in situ and provided direct evidence of significant rejection of lignin from the recovered polysaccharides. This observation was confirmed by chemical analysis of the regenerated cellulose. In comparison to untreated biomass, ionic liquid pretreated biomass produces cellulose that is efficiently hydrolyzed with commercial cellulase cocktail with high sugar yields over a relatively short time interval.

Journal ArticleDOI
TL;DR: The majority of previously reported phononic crystal devices have been constructed by hand, assembling scattering inclusions in a viscoelastic medium, predominantly air, water or epoxy, resulting in large structures limited to frequencies below 1 MHz as mentioned in this paper.
Abstract: Phononic crystals are the acoustic wave analogue of photonic crystals. Here a periodic array of scattering inclusions located in a homogeneous host material forbids certain ranges of acoustic frequencies from existence within the crystal, thus creating what are known as acoustic bandgaps. The majority of previously reported phononic crystal devices have been constructed by hand, assembling scattering inclusions in a viscoelastic medium, predominantly air, water or epoxy, resulting in large structures limited to frequencies below 1 MHz. Recently, phononic crystals and devices have been scaled to VHF (30–300 MHz) frequencies and beyond by utilizing microfabrication and micromachining technologies. This paper reviews recent developments in the area of micro-phononic crystals including design techniques, material considerations, microfabrication processes, characterization methods and reported device structures. Micro-phononic crystal devices realized in low-loss solid materials are emphasized along with their potential application in radio frequency communications and acoustic imaging for medical ultrasound and nondestructive testing. The reported advances in batch micro-phononic crystal fabrication and simplified testing promise not only the deployment of phononic crystals in a number of commercial applications but also greater experimentation on a wide variety of phononic crystal structures.

Journal ArticleDOI
TL;DR: In this article, the authors review current progress in the understanding of interfaces in bulk thermoelectric materials and focus on emerging routes to engineer the nanoscale grain and interfacial structures.
Abstract: We review current progress in the understanding of interfaces in bulk thermoelectric materials. Following a brief discussion of the mechanisms by which embedded interfaces can enhance the electronic and thermal transport properties, we focus on emerging routes to engineer the nanoscale grain and interfacial structures in bulk thermoelectric materials. We address in particular (i) control of crystallographic texture, (ii) reduction of grain size to nanocrystalline dimensions, and (iii) formation of nanocomposite structures. While these approaches are beginning to yield promising improvements in performance, continued progress will require an improved fundamental understanding of the mechanisms governing the formation, stability, and properties of thermoelectric interfaces.

Journal ArticleDOI
TL;DR: In this article, the feasibility of using grain-boundary engineering techniques to reduce the susceptibility of a metallic material to intergranular embrittlement in the presence of hydrogen is examined.

Proceedings ArticleDOI
01 May 2009
TL;DR: The latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for a range of probabilistic analysis problems. In addition, analytic features of the expansions can be exploited for moment estimation and stochastic sensitivity analysis. In this paper, the latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules. The most promising stochastic expansion approaches are then carried forward for use in new approaches for mixed aleatory-epistemic UQ, employing second-order probability approaches, and design under uncertainty, employing bilevel, sequential, and multifidelity approaches.
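As a minimal illustration of the stochastic collocation half of the abstract, the sketch below forms a one-dimensional Lagrange interpolant of a response at Gauss-Hermite collocation points and estimates its mean and variance with the matching quadrature weights. The response function and number of points are arbitrary assumptions, and no sparse grid, growth rule, or multidimensional tensor product is shown.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.interpolate import lagrange

# Placeholder response of a standard normal input (not from the paper)
f = lambda xi: np.sin(xi) + 0.2 * xi**2

# Collocation at probabilists' Gauss-Hermite points
pts, wts = hermegauss(9)
wts = wts / np.sqrt(2.0 * np.pi)          # normalize to a probability measure

# SC surrogate: Lagrange interpolant that matches f exactly at the collocation points
surrogate = lagrange(pts, f(pts))

# Moments from the quadrature rule evaluated at the collocation points
mean = np.sum(wts * f(pts))
var = np.sum(wts * (f(pts) - mean)**2)

# Cross-check: sample the interpolant with Monte Carlo draws of the input
xi = np.random.default_rng(2).standard_normal(200_000)
print(f"SC quadrature: mean {mean:.4f}, var {var:.4f}")
print(f"Surrogate MC:  mean {surrogate(xi).mean():.4f}, var {surrogate(xi).var():.4f}")
```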

Journal ArticleDOI
TL;DR: This work considers a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a spatial or temporal field, endowed with a hierarchical Gaussian process prior, and introduces truncated Karhunen-Loève expansions, based on the prior distribution, to efficiently parameterize the unknown field.
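A sketch of the dimension-reduction idea in this TL;DR: a Gaussian process prior on a 1D field is parameterized by a truncated Karhunen-Loève expansion obtained from the eigendecomposition of its covariance matrix, so that inference can operate on a handful of KL coefficients instead of the full field. The squared-exponential covariance, grid, and truncation level are assumptions, not the paper's choices, and no Bayesian inversion step is shown.

```python
import numpy as np

# Grid and a squared-exponential prior covariance (illustrative choices)
x = np.linspace(0.0, 1.0, 200)
ell, sigma2 = 0.2, 1.0
C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# KL modes from the eigendecomposition of the covariance matrix
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]           # sort eigenpairs by decreasing eigenvalue
vals, vecs = vals[order], vecs[:, order]

K = 10                                   # truncation level (assumption)
energy = vals[:K].sum() / vals.sum()     # fraction of prior variance retained

# A prior realization of the field from K i.i.d. standard normal KL coefficients
theta = np.random.default_rng(3).standard_normal(K)
field = vecs[:, :K] @ (np.sqrt(vals[:K]) * theta)

print(f"{K} KL modes retain {energy:.1%} of the prior variance; field shape {field.shape}")
```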

Proceedings ArticleDOI
05 Jan 2009
TL;DR: Performance of PCE and SC is shown to be very similar, although when differences are evident, SC is the consistent winner over traditional PCE formulations; with additional tailoring of the PCE form, this performance gap can be reduced and, in some cases, eliminated.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor-products or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for general probabilistic analysis problems. In this paper, we explore relative performance of these methods using a number of simple algebraic test problems, and analyze observed differences. In these computational experiments, performance of PCE and SC is shown to be very similar, although when differences are evident, SC is the consistent winner over traditional PCE formulations. This stems from the practical difficulty of optimally synchronizing the form of the PCE with the integration approach being employed, resulting in slight over- or under-integration of prescribed expansion form. With additional nontraditional tailoring of PCE form, it is shown that this performance gap can be reduced, and in some cases, eliminated.

Journal ArticleDOI
01 Jan 2009
TL;DR: In this article, the ignition and devolatilization characteristics of a high-volatile bituminous coal and a Powder River Basin subbituminous coal were analyzed in detail through single-particle imaging at a gas temperature of 1700 K over a range of 12-36 vol % O2 in both N2 and CO2 diluent gases, providing fundamental data for interpreting pilot-scale oxy-fuel tests and for CFD-based scale-up predictions.
Abstract: Oxy-fuel combustion of coal is a promising technology for cost-effective power production with carbon capture and sequestration that has ancillary benefits of emission reductions and lower flue gas cleanup costs. To fully understand the results of pilot-scale tests of oxy-fuel combustion and to accurately predict scale-up performance through CFD modeling, fundamental data are needed concerning coal and coal char combustion properties under these unconventional conditions. In the work reported here, the ignition and devolatilization characteristics of both a high-volatile bituminous coal and a Powder River Basin subbituminous coal were analyzed in detail through single-particle imaging at a gas temperature of 1700 K over a range of 12–36 vol % O2 in both N2 and CO2 diluent gases. The bituminous coal images show large, hot soot cloud radiation whose size and shape vary with oxygen concentration and, to a lesser extent, with the use of N2 versus CO2 diluent gas. Subbituminous coal images show cooler, smaller emission signals during devolatilization that have the same characteristic size as the coal particles introduced into the flow (nominally 100 μm). The measurements also demonstrate that the use of CO2 diluent retards the onset of ignition and increases the duration of devolatilization, once initiated. For a given diluent gas, a higher oxygen concentration yields shorter ignition delay and devolatilization times. The effect of CO2 on coal particle ignition is explained by its higher molar specific heat and its tendency to reduce the local radical pool. The effect of O2 on coal particle ignition results from its effect on the local mixture reactivity. CO2 decreases the rate of devolatilization because of the lower mass diffusivity of volatiles in CO2 mixtures, whereas higher O2 concentrations increase the mass flux of oxygen to the volatiles flame and thereby increase the rate of devolatilization.

Journal ArticleDOI
TL;DR: This clinical study sought to determine the ability of putative host- and microbially derived biomarkers to identify periodontal disease status from whole saliva and plaque biofilm.
Abstract: Background: Periodontitis is the major cause of tooth loss in adults and is linked to systemic illnesses, such as cardiovascular disease and stroke. The development of rapid point-of-care (POC) chairside diagnostics has the potential for the early detection of periodontal infection and progression to identify incipient disease and reduce health care costs. However, validation of effective diagnostics requires the identification and verification of biomarkers correlated with disease progression. This clinical study sought to determine the ability of putative host- and microbially derived biomarkers to identify periodontal disease status from whole saliva and plaque biofilm. Methods: One hundred human subjects were equally recruited into a healthy/gingivitis group or a periodontitis population. Whole saliva was collected from all subjects and analyzed using antibody arrays to measure the levels of multiple proinflammatory cytokines and bone resorptive/turnover markers. Results: Salivary biomarker data were co...

Journal ArticleDOI
TL;DR: A review of molecular-beam mass spectrometry of premixed, laminar, low-pressure flat flames has been provided in this paper, focusing on critical aspects of the experimental approach including probe sampling effects, different ionization processes, and mass separation procedures.

Journal ArticleDOI
26 Mar 2009-Nature
TL;DR: A dedicated search along the approach trajectory recovered 47 meteorites, fragments of a single body named Almahata Sitta, with a total mass of 3.95 kg, identifying the asteroid as F class, now firmly linked to dark carbon-rich anomalous ureilites, a material so fragile it was not previously represented in meteorite collections.
Abstract: On 6 October 2008, a small Earth-bound asteroid designated 2008 TC3 was discovered by the Catalina Sky Survey. Some 19 hours — and many astronomical observations — later it entered the atmosphere and disintegrated at 37 km altitude. No macroscopic fragments were expected to have survived but a dedicated search along the approach trajectory in a desert in northern Sudan has recovered 47 meteorites, fragments of a single body named Almahata Sitta, with a total mass of 3.95 kg. The asteroid and meteorite reflectance spectra identify the asteroid as surface matter from a class 'F' asteroid, material so fragile that it was not previously represented in meteorite collections. To have recovered meteorites from a known class of asteroids is a coup on a par with a successful spacecraft sample-return mission — without the rocket science. On 6 October 2008, a small asteroid designated 2008 TC3 hit the Earth in northern Sudan. Jenniskens et al. searched along the approach trajectory and luckily found 47 bits of a meteorite named Almahata Sitta. Analysis reveals it to be a porous achondrite and a polymict ureilite, and so the asteroid was F-class (dark carbon-rich anomalous ureilites). In the absence of a firm link between individual meteorites and their asteroidal parent bodies, asteroids are typically characterized only by their light reflection properties, and grouped accordingly into classes1,2,3. On 6 October 2008, a small asteroid was discovered with a flat reflectance spectrum in the 554–995 nm wavelength range, and designated 2008 TC3 (refs 4–6). It subsequently hit the Earth. Because it exploded at 37 km altitude, no macroscopic fragments were expected to survive. Here we report that a dedicated search along the approach trajectory recovered 47 meteorites, fragments of a single body named Almahata Sitta, with a total mass of 3.95 kg. Analysis of one of these meteorites shows it to be an achondrite, a polymict ureilite, anomalous in its class: ultra-fine-grained and porous, with large carbonaceous grains. The combined asteroid and meteorite reflectance spectra identify the asteroid as F class3, now firmly linked to dark carbon-rich anomalous ureilites, a material so fragile it was not previously represented in meteorite collections.

Journal ArticleDOI
TL;DR: An improvement on existing methods for sensitivity analysis (SA) of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis, together with a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals.
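In the same spirit as this TL;DR, the sketch below fits a cheap meta-model (here a ridge-regularized quadratic polynomial, an assumed choice) to a small set of expensive-model runs, estimates a first-order Sobol sensitivity index on the surrogate, and bootstraps the training runs to obtain a confidence interval. The test function, design sizes, and Monte Carlo settings are placeholders, not the estimators from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(X):
    """Stand-in for a costly simulator with three inputs on [0, 1]."""
    return np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]**2 + 0.1 * X[:, 0] * X[:, 2]

def quad_features(X):
    """Constant, linear, and second-order terms for a simple polynomial meta-model."""
    d = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def first_order_index(train_X, train_y, n_outer=100, n_inner=400):
    """Fit the meta-model, then estimate S1 for input 0 as Var(E[Y|x0]) / Var(Y) on it."""
    Phi = quad_features(train_X)
    beta = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(Phi.shape[1]), Phi.T @ train_y)
    predict = lambda X: quad_features(X) @ beta
    total_var = predict(rng.uniform(size=(20_000, 3))).var()
    cond_means = [predict(np.column_stack([np.full(n_inner, v),
                                           rng.uniform(size=(n_inner, 2))])).mean()
                  for v in np.linspace(0.0, 1.0, n_outer)]
    return np.var(cond_means) / total_var

# A small design of "expensive" runs, a point estimate, and a bootstrap confidence interval
X_train = rng.uniform(size=(60, 3))
y_train = expensive_model(X_train)
s1_hat = first_order_index(X_train, y_train)
boot = []
for _ in range(100):
    idx = rng.integers(0, len(y_train), len(y_train))   # resample the training runs
    boot.append(first_order_index(X_train[idx], y_train[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S1(x0) ~ {s1_hat:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```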

Journal ArticleDOI
TL;DR: In this paper, a non-ordinary state-based peridynamic method is developed to solve transient dynamic solid mechanics problems; unlike the bond-based method, the bonds are not restricted to central forces, nor is the material restricted to a Poisson's ratio of 1/4.

Journal ArticleDOI
13 Mar 2009-Science
TL;DR: In this article, the authors used laser alignment to transiently fix carbon disulfide molecules in space long enough to elucidate, in the molecular reference frame, details of ultrafast electronic-vibrational dynamics during a photochemical reaction.
Abstract: Random orientation of molecules within a sample leads to blurred observations of chemical reactions studied from the laboratory perspective. Methods developed for the dynamic imaging of molecular structures and processes struggle with this, as measurements are optimally made in the molecular frame. We used laser alignment to transiently fix carbon disulfide molecules in space long enough to elucidate, in the molecular reference frame, details of ultrafast electronic-vibrational dynamics during a photochemical reaction. These three-dimensional photoelectron imaging results, combined with ongoing efforts in molecular alignment and orientation, presage a wide range of insights obtainable from time-resolved studies in the molecular frame.

Journal ArticleDOI
TL;DR: In this paper, a Matlab/Simulink model of a single-phase grid-connected PV inverter has been developed and experimentally tested to predict the dynamics of the maximum power point trackers (MPPTs) and anti-islanding algorithms.
Abstract: Because of their deployment in dispersed locations on the lowest voltage portions of the grid, photovoltaic (PV) systems pose unique challenges to power system engineers. Computer models that accurately simulate the relevant behavior of PV systems would thus be of high value. However, most of today's models either do not accurately model the dynamics of the maximum power point trackers (MPPTs) or anti-islanding algorithms, or they involve excessive computational overhead for this application. To address this need, a Matlab/Simulink model of a single-phase grid-connected PV inverter has been developed and experimentally tested. The development of the PV array model, the integration of the MPPT with an averaged model of the power electronics, and the Simulink implementation are described. It is experimentally demonstrated that the model works well in predicting the general behaviors of single-phase grid-connected PV systems. This paper concludes with a discussion of the need for a full gradient-based MPPT model, as opposed to a commonly used simplified MPPT model.
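The closing discussion contrasts a full gradient-based MPPT model with a commonly used simplified one. As a generic illustration of the simpler class, the sketch below runs a perturb-and-observe (hill-climbing) MPPT loop against a toy PV power curve; the array model, step size, and starting point are placeholder assumptions and are not taken from the paper's Simulink implementation.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0, a=0.07):
    """Toy PV curve: nearly constant current that rolls off exponentially near V_oc."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / (a * v_oc)))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.2, steps=200):
    """Classic P&O MPPT: keep stepping the operating voltage in the direction that
    last increased the measured power; reverse direction whenever power drops."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"P&O settles near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```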