Showing papers by "University of Texas at Austin published in 2009"


Journal ArticleDOI
05 Jun 2009-Science
TL;DR: It is shown that graphene grows in a self-limiting way on copper films as large-area sheets (one square centimeter) from methane through a chemical vapor deposition process, and that films transferred to arbitrary substrates and fabricated into dual-gated field-effect transistors showed electron mobilities as high as 4050 square centimeters per volt per second at room temperature.
Abstract: Graphene has been attracting great interest because of its distinctive band structure and physical properties. Today, graphene is limited to small sizes because it is produced mostly by exfoliating graphite. We grew large-area graphene films of the order of centimeters on copper substrates by chemical vapor deposition using methane. The films are predominantly single-layer graphene, with a small percentage (less than 5%) of the area having few layers, and are continuous across copper surface steps and grain boundaries. The low solubility of carbon in copper appears to help make this growth process self-limiting. We also developed graphene film transfer processes to arbitrary substrates, and dual-gated field-effect transistors fabricated on silicon/silicon dioxide substrates showed electron mobilities as high as 4050 square centimeters per volt per second at room temperature.
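
The quoted mobility is a device-level figure, and the abstract does not spell out how it is extracted. As a hedged illustration (generic channel length L, width W, gate capacitance per unit area C_g, and drain bias V_DS; none of these values are given in the paper), the standard linear-regime field-effect estimate is:

```latex
% Standard linear-regime field-effect mobility (textbook form; the paper's
% exact dual-gate extraction procedure may differ):
\mu_{\mathrm{FE}} = \frac{L}{W \, C_g \, V_{DS}} \, \frac{\partial I_D}{\partial V_G}
```

With C_g in F/cm² and the transconductance ∂I_D/∂V_G in siemens, μ_FE comes out in cm² V⁻¹ s⁻¹, the units of the quoted 4050.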

10,663 citations


Journal ArticleDOI
TL;DR: The use of colloidal suspensions to produce new materials composed of graphene and chemically modified graphene is reviewed; this approach is both versatile and scalable, and is adaptable to a wide variety of applications.
Abstract: Interest in graphene centres on its excellent mechanical, electrical, thermal and optical properties, its very high specific surface area, and our ability to influence these properties through chemical functionalization. There are a number of methods for generating graphene and chemically modified graphene from graphite and derivatives of graphite, each with different advantages and disadvantages. Here we review the use of colloidal suspensions to produce new materials composed of graphene and chemically modified graphene. This approach is both versatile and scalable, and is adaptable to a wide variety of applications.

6,178 citations


Journal ArticleDOI
TL;DR: In this article, the Wilkinson Microwave Anisotropy Probe (WMAP) 5-year data were used to constrain the physics of cosmic inflation via Gaussianity, adiabaticity, the power spectrum of primordial fluctuations, gravitational waves, and spatial curvature.
Abstract: The Wilkinson Microwave Anisotropy Probe (WMAP) 5-year data provide stringent limits on deviations from the minimal, six-parameter Λ cold dark matter model. We report these limits and use them to constrain the physics of cosmic inflation via Gaussianity, adiabaticity, the power spectrum of primordial fluctuations, gravitational waves, and spatial curvature. We also constrain models of dark energy via its equation of state, parity-violating interaction, and neutrino properties, such as mass and the number of species. We detect no convincing deviations from the minimal model. The six parameters and the corresponding 68% uncertainties, derived from the WMAP data combined with the distance measurements from the Type Ia supernovae (SN) and the Baryon Acoustic Oscillations (BAO) in the distribution of galaxies, are: Ω_b h² = 0.02267^{+0.00058}_{−0.00059}, Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10⁻⁹ at k = 0.002 Mpc⁻¹. From these, we derive σ_8 = 0.812 ± 0.026, H_0 = 70.5 ± 1.3 km s⁻¹ Mpc⁻¹, Ω_b = 0.0456 ± 0.0015, Ω_c = 0.228 ± 0.013, Ω_m h² = 0.1358^{+0.0037}_{−0.0036}, z_reion = 10.9 ± 1.4, and t_0 = 13.72 ± 0.12 Gyr. With the WMAP data combined with BAO and SN, we find the limit on the tensor-to-scalar ratio of r < 0.22 (95% CL); a blue spectral index, n_s > 1, is disfavored even when gravitational waves are included, which constrains the models of inflation that can produce significant gravitational waves, such as chaotic or power-law inflation models, or a blue spectrum, such as hybrid inflation models. We obtain tight, simultaneous limits on the (constant) equation of state of dark energy and the spatial curvature of the universe: −0.14 < 1 + w < 0.12 (95% CL) and −0.0179 < Ω_k < 0.0081 (95% CL). We provide a set of WMAP distance priors, to test a variety of dark energy models with spatial curvature. We test a time-dependent w with a present value constrained as −0.33 < 1 + w_0 < 0.21 (95% CL). Temperature and dark matter fluctuations are found to obey the adiabatic relation to within 8.9% and 2.1% for the axion-type and curvaton-type dark matter, respectively. The power spectra of TB and EB correlations constrain a parity-violating interaction, which rotates the polarization angle and converts E to B. The polarization angle could not be rotated more than −5.9° < Δα < 2.4° (95% CL) between the decoupling and the present epoch. We find the limit on the total mass of massive neutrinos of ∑m_ν < 0.67 eV (95% CL), which is free from the uncertainty in the normalization of the large-scale structure data. The number of relativistic degrees of freedom (dof), expressed in units of the effective number of neutrino species, is constrained as N_eff = 4.4 ± 1.5 (68%), consistent with the standard value of 3.04. Finally, quantitative limits on physically motivated primordial non-Gaussianity parameters are −9 < f_NL^local < 111 (95% CL) and −151 < f_NL^equil < 253 (95% CL) for the local and equilateral models, respectively.
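
Two of the derived quantities can be checked directly against the six base parameters, a quick consistency exercise using only the central values quoted above:

```latex
\Omega_m h^2 = \Omega_b h^2 + \Omega_c h^2 = 0.02267 + 0.1131 \approx 0.1358, \qquad
\Omega_b = \frac{\Omega_b h^2}{h^2} = \frac{0.02267}{(0.705)^2} \approx 0.0456,
```

matching the reported Ω_m h² = 0.1358 and Ω_b = 0.0456.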

5,904 citations


Journal ArticleDOI
TL;DR: A series of improvements to the spectroscopic reductions is described, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.
Abstract: This paper describes the Seventh Data Release of the Sloan Digital Sky Survey (SDSS), marking the completion of the original goals of the SDSS and the end of the phase known as SDSS-II. It includes 11,663 deg^2 of imaging data, with most of the ~2000 deg^2 increment over the previous data release lying in regions of low Galactic latitude. The catalog contains five-band photometry for 357 million distinct objects. The survey also includes repeat photometry on a 120° long, 2°.5 wide stripe along the celestial equator in the Southern Galactic Cap, with some regions covered by as many as 90 individual imaging runs. We include a co-addition of the best of these data, going roughly 2 mag fainter than the main survey over 250 deg^2. The survey has completed spectroscopy over 9380 deg^2; the spectroscopy is now complete over a large contiguous area of the Northern Galactic Cap, closing the gap that was present in previous data releases. There are over 1.6 million spectra in total, including 930,000 galaxies, 120,000 quasars, and 460,000 stars. The data release includes improved stellar photometry at low Galactic latitude. The astrometry has all been recalibrated with the second version of the USNO CCD Astrograph Catalog, reducing the rms statistical errors at the bright end to 45 milliarcseconds per coordinate. We further quantify a systematic error in bright galaxy photometry due to poor sky determination; this problem is less severe than previously reported for the majority of galaxies. Finally, we describe a series of improvements to the spectroscopic reductions, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.

5,665 citations


Journal ArticleDOI
TL;DR: This paper describes how accurate off-lattice ascent paths can be represented with respect to the grid points; the improvement maintains the efficient linear scaling of an earlier version of the algorithm and eliminates a tendency for the Bader surfaces to be aligned along the grid directions.
Abstract: A computational method for partitioning a charge density grid into Bader volumes is presented which is efficient, robust, and scales linearly with the number of grid points. The partitioning algorithm follows the steepest ascent paths along the charge density gradient from grid point to grid point until a charge density maximum is reached. In this paper, we describe how accurate off-lattice ascent paths can be represented with respect to the grid points. This improvement maintains the efficient linear scaling of an earlier version of the algorithm, and eliminates a tendency for the Bader surfaces to be aligned along the grid directions. As the algorithm assigns grid points to charge density maxima, subsequent paths are terminated when they reach previously assigned grid points. It is this grid-based approach which gives the algorithm its efficiency, and allows for the analysis of the large grids generated from plane-wave-based density functional theory calculations.
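
A minimal sketch of the grid-based steepest-ascent assignment described above may help. This is the plain on-lattice variant (hopping only between neighboring grid points), without the paper's off-lattice path correction; the toy density, 6-point stencil, and periodic wrapping are illustrative choices, not details from the paper:

```python
import numpy as np

def bader_assign(rho):
    """Assign every grid point of a 3-D charge density to a local maximum by
    hopping to the steepest-ascent neighbor. This is the plain on-lattice
    scheme; the published algorithm refines it with off-lattice ascent paths."""
    shape = rho.shape
    labels = -np.ones(shape, dtype=int)                  # -1 = unassigned
    neighbors = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    n_max = 0
    for start in np.ndindex(shape):
        if labels[start] >= 0:
            continue
        path, p = [start], start
        while True:
            best, best_rho = None, rho[p]
            for d in neighbors:                          # steepest-ascent hop
                q = tuple((p[i] + d[i]) % shape[i] for i in range(3))
                if rho[q] > best_rho:
                    best, best_rho = q, rho[q]
            if best is None:                             # reached a local maximum
                label, n_max = n_max, n_max + 1
                break
            if labels[best] >= 0:                        # reached an assigned point:
                label = labels[best]                     # reuse its label (linear scaling)
                break
            p = best
            path.append(p)
        for q in path:
            labels[q] = label
    return labels, n_max

# toy density: two Gaussian blobs -> two Bader volumes expected
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 24)]*3, indexing="ij")
rho = np.exp(-8*((x-0.4)**2 + y**2 + z**2)) + np.exp(-8*((x+0.4)**2 + y**2 + z**2))
labels, n = bader_assign(rho)
print(n, "Bader volumes found")
```

Terminating each ascent as soon as it hits an already-assigned point is what gives the linear scaling the abstract emphasizes.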

5,417 citations


Journal ArticleDOI
25 Sep 2009-Science
TL;DR: Amine scrubbing has been used to separate carbon dioxide (CO2) from natural gas and hydrogen since 1930 and is ready to be tested and used on a larger scale for CO2 capture from coal-fired power plants.
Abstract: Amine scrubbing has been used to separate carbon dioxide (CO2) from natural gas and hydrogen since 1930. It is a robust technology and is ready to be tested and used on a larger scale for CO2 capture from coal-fired power plants. The minimum work requirement to separate CO2 from coal-fired flue gas and compress CO2 to 150 bar is 0.11 megawatt-hours per metric ton of CO2. Process and solvent improvements should reduce the energy consumption to 0.2 megawatt-hour per ton of CO2. Other advanced technologies will not provide energy-efficient or timely solutions to CO2 emission from conventional coal-fired power plants.
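
A rough ideal-gas estimate shows where a number like 0.11 MWh per tonne comes from. The inputs below (roughly 12% CO2 flue gas, T ≈ 313 K, reversible isothermal steps) are assumptions for illustration, not values from the paper:

```latex
w_{\mathrm{sep}} \approx RT\ln\frac{1}{x_{\mathrm{CO_2}}} = (8.314)(313)\ln\frac{1}{0.12} \approx 5.5~\mathrm{kJ\,mol^{-1}}, \qquad
w_{\mathrm{comp}} \approx RT\ln\frac{150~\mathrm{bar}}{1~\mathrm{bar}} \approx 13.0~\mathrm{kJ\,mol^{-1}}
```

Summing and converting, (5.5 + 13.0) kJ mol⁻¹ × (10⁶ g t⁻¹ / 44 g mol⁻¹) ≈ 0.42 GJ t⁻¹ ≈ 0.12 MWh t⁻¹, consistent with the stated minimum work requirement.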

3,427 citations


Journal ArticleDOI
TL;DR: An improved transfer process of large-area graphene grown on Cu foils by chemical vapor deposition is reported on, finding that the transferred graphene films have high electrical conductivity and high optical transmittance that make them suitable for transparent conductive electrode applications.
Abstract: Graphene, a two-dimensional monolayer of sp2-bonded carbon atoms, has been attracting great interest due to its unique transport properties. One of the promising applications of graphene is as a transparent conductive electrode owing to its high optical transmittance and conductivity. In this paper, we report on an improved transfer process of large-area graphene grown on Cu foils by chemical vapor deposition. The transferred graphene films have high electrical conductivity and high optical transmittance that make them suitable for transparent conductive electrode applications. The improved transfer processes will also be of great value for the fabrication of electronic devices such as field effect transistor and bilayer pseudospin field effect transistor devices.

3,017 citations


Journal ArticleDOI
TL;DR: These updated guidelines replace the previous guidelines published in the 15 January 2004 issue of Clinical Infectious Diseases and are intended for use by health care providers who care for patients who either have or are at risk of these infections.
Abstract: Guidelines for the management of patients with invasive candidiasis and mucosal candidiasis were prepared by an Expert Panel of the Infectious Diseases Society of America. These updated guidelines replace the previous guidelines published in the 15 January 2004 issue of Clinical Infectious Diseases and are intended for use by health care providers who care for patients who either have or are at risk of these infections. Since 2004, several new antifungal agents have become available, and several new studies have been published relating to the treatment of candidemia, other forms of invasive candidiasis, and mucosal disease, including oropharyngeal and esophageal candidiasis. There are also recent prospective data on the prevention of invasive candidiasis in high-risk neonates and adults and on the empiric treatment of suspected invasive candidiasis in adults. This new information is incorporated into this revised document.

3,016 citations


Journal ArticleDOI
01 Jan 2009-Carbon
TL;DR: In this paper, several nanometer-thick graphene oxide films were exposed to nine different heat treatments (three in argon, three in argon and hydrogen, and three in ultra-high vacuum); in addition, a film was held at 70 °C while being exposed to vapor from hydrazine monohydrate.

2,990 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio.
Abstract: We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many "miles to go" before the gains promised by optimal portfolio choice can actually be realized out of sample.
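
A small simulation makes the paper's comparison concrete. This sketch uses synthetic returns, a 120-month rolling window, and a crudely normalized sample tangency portfolio; the paper's actual datasets, estimators, and evaluation details differ:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, window = 25, 600, 120                  # assets, months, estimation window

# synthetic monthly excess returns with a random covariance structure
true_mu = rng.uniform(0.002, 0.010, N)
A = rng.normal(0.0, 0.02, (N, N))
true_cov = A @ A.T + np.diag(rng.uniform(1e-4, 4e-4, N))
R = rng.multivariate_normal(true_mu, true_cov, size=T)

oos = {"1/N": [], "mean-variance": []}
for t in range(window, T):
    hist = R[t - window:t]
    mu_hat = hist.mean(axis=0)
    cov_hat = np.cov(hist, rowvar=False)
    w_mv = np.linalg.solve(cov_hat, mu_hat)  # sample tangency weights (up to scale)
    w_mv /= np.abs(w_mv).sum()               # crude leverage normalization
    oos["mean-variance"].append(R[t] @ w_mv)
    oos["1/N"].append(R[t] @ np.full(N, 1.0 / N))

for name, r in oos.items():
    r = np.asarray(r)
    print(f"{name:14s} out-of-sample Sharpe: {r.mean() / r.std():.3f}")
```

With a short window relative to the number of assets, the estimated weights are noisy enough that 1/N is typically competitive, which is the paper's central point.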

2,809 citations


Journal ArticleDOI
TL;DR: Key parameters of an RO process and process modifications due to feed water characteristics are brought to light by a direct comparison of seawater and brackish water RO systems.

Journal ArticleDOI
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, surveys emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.
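
The argument is easy to demonstrate, assuming scikit-image is installed: two distortions engineered to have identical MSE can receive very different scores from a perceptual measure such as SSIM.

```python
import numpy as np
from skimage import data
from skimage.metrics import mean_squared_error, structural_similarity

img = data.camera().astype(float)            # standard 8-bit test image, as float

# distortion 1: constant brightness shift of 8 gray levels (MSE = 64)
shifted = img + 8.0

# distortion 2: zero-mean Gaussian noise rescaled to the same MSE
noise = np.random.default_rng(0).normal(size=img.shape)
noisy = img + noise * (8.0 / noise.std())

for name, d in [("brightness shift", shifted), ("gaussian noise  ", noisy)]:
    print(name, " MSE =", round(mean_squared_error(img, d), 1),
          " SSIM =", round(structural_similarity(img, d, data_range=255), 3))
```

The brightness shift is nearly invisible and scores well on SSIM; the noise at the same MSE is clearly visible and scores poorly.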

Journal ArticleDOI
TL;DR: The properties of hydrogels that are important for tissue engineering applications and the inherent material design constraints and challenges are discussed.
Abstract: Hydrogels, due to their unique biocompatibility, flexible methods of synthesis, range of constituents, and desirable physical characteristics, have been the material of choice for many applications in regenerative medicine. They can serve as scaffolds that provide structural integrity to tissue constructs, control drug and protein delivery to tissues and cultures, and serve as adhesives or barriers between tissue and material surfaces. In this work, the properties of hydrogels that are important for tissue engineering applications and the inherent material design constraints and challenges are discussed. Recent research involving several different hydrogels polymerized from a variety of synthetic and natural monomers using typical and novel synthetic methods are highlighted. Finally, special attention is given to the microfabrication techniques that are currently resulting in important advances in the field.

Journal ArticleDOI
TL;DR: In this article, the authors used scanning electron microscopy to characterize the pore system in the Barnett Shale of the Fort Worth Basin, Texas, showing that the pores in these rocks are dominantly nanometer in scale (nanopores).
Abstract: Research on mudrock attributes has increased dramatically since shale-gas systems have become commercial hydrocarbon production targets. One of the most significant research questions now being asked focuses on the nature of the pore system in these mudrocks. Our work on siliceous mudstones from the Mississippian Barnett Shale of the Fort Worth Basin, Texas, shows that the pores in these rocks are dominantly nanometer in scale (nanopores). We used scanning electron microscopy to characterize Barnett pores from a number of cores and have imaged pores as small as 5 nm. Key to our success in imaging these nanopores is the use of Ar-ion-beam milling; this methodology provides flat surfaces that lack topography related to differential hardness and are fundamental for high-magnification imaging. Nanopores are observed in three main modes of occurrence. Most pores are found in grains of organic matter as intraparticle pores; many of these grains contain hundreds of pores. Intraparticle organic nanopores most commonly have irregular, bubblelike, elliptical cross sections and range between 5 and 750 nm with the median nanopore size for all grains being approximately 100 nm. Internal porosities of up to 20.2% have been measured for whole grains of organic matter based on point-count data from scanning electron microscopy analysis. These nanopores in the organic matter are the predominant pore type in the Barnett mudstones and they are related to thermal maturation. Nanopores are also found in bedding-parallel, wispy, organic-rich laminae as intraparticle pores in organic grains and as interparticle pores between organic matter, but this mode is not common. Although less abundant, nanopores are also locally present in fine-grained matrix areas unassociated with organic matter and as nano- to microintercrystalline pores in pyrite framboids. Intraparticle organic nanopores and pyrite-framboid intercrystalline pores contribute to gas storage in Barnett mudstones. We postulate that permeability pathways within the Barnett mudstones are along bedding-parallel layers of organic matter or a mesh network of organic matter flakes because this material contains the most pores.

Journal ArticleDOI
TL;DR: In patients with atrial fibrillation for whom vitamin K-antagonist therapy was unsuitable, the addition of clopidogrel to aspirin reduced the risk of major vascular events, especially stroke, and increased the risk of major hemorrhage.
Abstract: Background Vitamin K antagonists reduce the risk of stroke in patients with atrial fibrillation but are considered unsuitable in many patients, who usually receive aspirin instead. We investigated the hypothesis that the addition of clopidogrel to aspirin would reduce the risk of vascular events in patients with atrial fibrillation. Methods A total of 7554 patients with atrial fibrillation who had an increased risk of stroke and for whom vitamin K–antagonist therapy was unsuitable were randomly assigned to receive clopidogrel (75 mg) or placebo, once daily, in addition to aspirin. The primary outcome was the composite of stroke, myocardial infarction, non–central nervous system systemic embolism, or death from vascular causes. Results At a median of 3.6 years of follow-up, major vascular events had occurred in 832 patients receiving clopidogrel (6.8% per year) and in 924 patients receiving placebo (7.6% per year) (relative risk with clopidogrel, 0.89; 95% confidence interval [CI], 0.81 to 0.98; P = 0.01). The difference was primarily due to a reduction in the rate of stroke with clopidogrel. Stroke occurred in 296 patients receiving clopidogrel (2.4% per year) and 408 patients receiving placebo (3.3% per year) (relative risk, 0.72; 95% CI, 0.62 to 0.83; P<0.001). Myocardial infarction occurred in 90 patients receiving clopidogrel (0.7% per year) and in 115 receiving placebo (0.9% per year) (relative risk, 0.78; 95% CI, 0.59 to 1.03; P = 0.08). Major bleeding occurred in 251 patients receiving clopidogrel (2.0% per year) and in 162 patients receiving placebo (1.3% per year) (relative risk, 1.57; 95% CI, 1.29 to 1.92; P<0.001). Conclusions In patients with atrial fibrillation for whom vitamin K–antagonist therapy was unsuitable, the addition of clopidogrel to aspirin reduced the risk of major vascular events, especially stroke, and increased the risk of major hemorrhage. (ClinicalTrials.gov number, NCT00249873.)
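
The headline relative risks can be recovered, to rounding, from the annualized event rates quoted above:

```latex
RR_{\text{vascular}} \approx \frac{6.8}{7.6} \approx 0.89, \qquad
RR_{\text{stroke}} \approx \frac{2.4}{3.3} \approx 0.73, \qquad
RR_{\text{bleeding}} \approx \frac{2.0}{1.3} \approx 1.54,
```

close to the reported 0.89, 0.72, and 1.57; the small differences reflect rounding of the rates and the full survival analysis behind the reported values.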

Journal ArticleDOI
TL;DR: Positive relationships between intensity of Facebook use and students' life satisfaction, social trust, civic engagement, and political participation are found, suggesting that online social networks are not the most effective solution for youth disengagement from civic duty and democracy.
Abstract: This study examines if Facebook, one of the most popular social network sites among college students in the U.S., is related to attitudes and behaviors that enhance individuals' social capital. Using data from a random web survey of college students across Texas (n = 2,603), we find positive relationships between intensity of Facebook use and students' life satisfaction, social trust, civic engagement, and political participation. While these findings should ease the concerns of those who fear that Facebook has mostly negative effects on young adults, the positive and significant associations between Facebook variables and social capital were small, suggesting that online social networks are not the most effective solution for youth disengagement from civic duty and democracy.

Journal ArticleDOI
TL;DR: This tutorial article surveys some of these techniques based on stochastic geometry and the theory of random geometric graphs, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature.
Abstract: Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the network's geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs (including point process theory, percolation theory, and probabilistic combinatorics) have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.
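
A short Monte Carlo sketch of the kind of model the article surveys: interferers drawn from a Poisson point process, Rayleigh fading, and an SIR-threshold outage event. The density, path-loss exponent, and threshold are arbitrary illustrative values, and the closed form in the comments is the standard Rayleigh-fading Poisson-network baseline from this literature:

```python
import numpy as np
from math import gamma, pi, exp

rng = np.random.default_rng(1)
lam, alpha, theta = 0.1, 4.0, 1.0      # interferer density, path-loss exponent, SIR threshold
r_link, radius, trials = 1.0, 30.0, 20000

outages = 0
for _ in range(trials):
    # Poisson point process on a disk: Poisson-distributed count, uniform placement
    n = rng.poisson(lam * pi * radius**2)
    d = radius * np.sqrt(rng.random(n))          # radii uniform in area
    # Rayleigh fading gives exponentially distributed power gains
    interference = np.sum(rng.exponential(size=n) * d**(-alpha)) + 1e-300
    signal = rng.exponential() * r_link**(-alpha)
    outages += signal / interference < theta
print("Monte Carlo outage:", outages / trials)

# Standard closed form for Rayleigh fading in an infinite Poisson network
# (interference-limited): P[SIR < theta] = 1 - exp(-lam*pi*r^2*theta^(2/alpha)*C),
# with C = Gamma(1 + 2/alpha) * Gamma(1 - 2/alpha).
C = gamma(1 + 2/alpha) * gamma(1 - 2/alpha)
print("closed form       :", 1 - exp(-lam * pi * r_link**2 * theta**(2/alpha) * C))
```

For alpha = 4 the constant C equals pi/2, and the finite-disk simulation should land close to the infinite-network formula because distant interferers contribute little at this path-loss exponent.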

Journal ArticleDOI
TL;DR: The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB), as mentioned in this paper.
Abstract: The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: Showing that the dark matter must be non-baryonic and interact only weakly with atoms and radiation. The WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates. With five years of data and a better determination of our beam response, this measurement has been significantly improved. Precise determination of the density of atoms in the universe. The agreement between the atomic density derived from WMAP and the density inferred from the deuterium abundance is an important test of the standard big bang model. Determination of the acoustic scale at redshift z = 1090. Similarly, the recent measurement of baryon acoustic oscillations (BAO) in the galaxy power spectrum (Eisenstein et al. 2005) has determined the acoustic scale at redshift z ≈ 0.35. When combined, these standard rulers accurately measure the geometry of the universe and the properties of the dark energy. These data require a nearly flat universe dominated by dark energy consistent with a cosmological constant. Precise determination of the Hubble constant, in conjunction with BAO observations. Even when allowing curvature (Ω_0 ≠ 1) and a free dark energy equation of state (w ≠ −1), the acoustic data determine the Hubble constant to within 3%. The measured value is in excellent agreement with independent results from the Hubble Key Project (Freedman et al. 2001), providing yet another important consistency test for the standard model. Significant constraints on the basic properties of the primordial fluctuations. The anti-correlation seen in the temperature/polarization (TE) correlation spectrum on 4° scales implies that the fluctuations are primarily adiabatic and rules out defect models and isocurvature models as the primary source of fluctuations (Peiris et al. 2003).

Journal ArticleDOI
TL;DR: In this article, the authors combined information drawn from studies of individual clouds into a combined and updated statistical analysis of star-formation rates and efficiencies, numbers and lifetimes for spectral energy distribution (SED) classes, and clustering properties.
Abstract: The c2d Spitzer Legacy project obtained images and photometry with both IRAC and MIPS instruments for five large, nearby molecular clouds. Three of the clouds were also mapped in dust continuum emission at 1.1 mm, and optical spectroscopy has been obtained for some clouds. This paper combines information drawn from studies of individual clouds into a combined and updated statistical analysis of star-formation rates and efficiencies, numbers and lifetimes for spectral energy distribution (SED) classes, and clustering properties. Current star-formation efficiencies range from 3% to 6%; if star formation continues at current rates for 10 Myr, efficiencies could reach 15-30%. Star-formation rates and rates per unit area vary from cloud to cloud; taken together, the five clouds are producing about 260 M☉ of stars per Myr. The star-formation surface density is more than an order of magnitude larger than would be predicted from the Kennicutt relation used in extragalactic studies, reflecting the fact that those relations apply to larger scales, where more diffuse matter is included in the gas surface density. Measured against the dense gas probed by the maps of dust continuum emission, the efficiencies are much higher, with stellar masses similar to masses of dense gas, and the current stock of dense cores would be exhausted in 1.8 Myr on average. Nonetheless, star formation is still slow compared to that expected in a free-fall time, even in the dense cores. The derived lifetime for the Class I phase is 0.54 Myr, considerably longer than some estimates. Similarly, the lifetime for the Class 0 SED class, 0.16 Myr, with the notable exception of the Ophiuchus cloud, is longer than early estimates. If photometry is corrected for estimated extinction before calculating class indicators, the lifetimes drop to 0.44 Myr for Class I and to 0.10 Myr for Class 0. These lifetimes assume a continuous flow through the Class II phase and should be considered median lifetimes or half-lives. Star formation is highly concentrated to regions of high extinction, and the youngest objects are very strongly associated with dense cores. The great majority (90%) of young stars lie within loose clusters with at least 35 members and a stellar density of 1 M☉ pc⁻³. Accretion at the sound speed from an isothermal sphere over the lifetime derived for the Class I phase could build a star of about 0.25 M☉, given an efficiency of 0.3. Building larger mass stars by using higher mass accretion rates could be problematic, as our data confirm and aggravate the "luminosity problem" for protostars. At a given T_bol, the values for L_bol are mostly less than predicted by standard infall models and scatter over several orders of magnitude. These results strongly suggest that accretion is time variable, with prolonged periods of very low accretion. Based on a very simple model and this sample of sources, half the mass of a star would be accreted during only 7% of the Class I lifetime, as represented by the eight most luminous objects.

Journal ArticleDOI
TL;DR: The authors showed that the revised ALE algorithm overcomes conceptual problems of former meta-analyses and increases the specificity of the ensuing results without losing the sensitivity of the original approach, and may provide a methodologically improved tool for coordinate-based meta-analyses on functional imaging data.
Abstract: A widely used technique for coordinate-based meta-analyses of neuroimaging data is activation likelihood estimation (ALE). ALE assesses the overlap between foci based on modeling them as probability distributions centered at the respective coordinates. In this Human Brain Project/Neuroinformatics research, the authors present a revised ALE algorithm addressing drawbacks associated with former implementations. The first change pertains to the size of the probability distributions, which had to be specified by the user. To provide a more principled solution, the authors analyzed fMRI data of 21 subjects, each normalized into MNI space using nine different approaches. This analysis provided quantitative estimates of between-subject and between-template variability for 16 functionally defined regions, which were then used to explicitly model the spatial uncertainty associated with each reported coordinate. Secondly, instead of testing for an above-chance clustering between foci, the revised algorithm assesses above-chance clustering between experiments. The spatial relationship between foci in a given experiment is now assumed to be fixed and ALE results are assessed against a null-distribution of random spatial association between experiments. Critically, this modification entails a change from fixed- to random-effects inference in ALE analysis allowing generalization of the results to the entire population of studies analyzed. By comparative analysis of real and simulated data, the authors showed that the revised ALE algorithm overcomes conceptual problems of former meta-analyses and increases the specificity of the ensuing results without losing the sensitivity of the original approach. It may thus provide a methodologically improved tool for coordinate-based meta-analyses on functional imaging data.
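
A toy 2-D version of the construction described above, with invented foci and kernel width: each experiment contributes one modeled-activation map (the maximum over Gaussian kernels at its foci), and the maps are combined as a probabilistic union across experiments, mirroring the experiment-level view; the permutation null distribution and significance testing are omitted here:

```python
import numpy as np

def ale_map(experiments, shape=(60, 60), sigma=2.0):
    """experiments: list of (x, y) foci arrays, one array per experiment.
    Each experiment contributes a 'modeled activation' map (max over Gaussian
    kernels at its foci); maps combine as a probabilistic union across
    experiments, reflecting the random-effects, experiment-level framing."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    not_active = np.ones(shape)
    for foci in experiments:
        ma = np.zeros(shape)
        for fx, fy in foci:
            g = np.exp(-((xx - fx)**2 + (yy - fy)**2) / (2 * sigma**2))
            ma = np.maximum(ma, g)           # one map per experiment, not per focus
        not_active *= 1.0 - ma
    return 1.0 - not_active

# three invented "experiments" that agree on a focus near (20, 20)
exps = [np.array([(20, 21), (40, 10)]),
        np.array([(19, 20)]),
        np.array([(21, 19), (50, 50)])]
m = ale_map(exps)
print("peak ALE value:", m.max().round(3), "at voxel", np.unravel_index(m.argmax(), m.shape))
```

Because three experiments overlap near one location, the union score peaks there, while foci reported by only one experiment contribute much lower values.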

Journal ArticleDOI
TL;DR: In this paper, the authors present cosmological constraints from the Wilkinson Microwave Anisotropy Probe (WMAP) alone for both the ΛCDM model and a set of possible extensions.
Abstract: The Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001, has mapped out the Cosmic Microwave Background with unprecedented accuracy over the whole sky. Its observations have led to the establishment of a simple concordance cosmological model for the contents and evolution of the universe, consistent with virtually all other astronomical measurements. The WMAP first-year and three-year data have allowed us to place strong constraints on the parameters describing the ΛCDM model, a flat universe filled with baryons, cold dark matter, neutrinos, and a cosmological constant, with initial fluctuations described by nearly scale-invariant power law fluctuations, as well as placing limits on extensions to this simple model (Spergel et al. 2003, 2007). With all-sky measurements of the polarization anisotropy (Kogut et al. 2003; Page et al. 2007), two orders of magnitude smaller than the intensity fluctuations, WMAP has not only given us an additional picture of the universe as it transitioned from ionized to neutral at redshift z ≈ 1100, but also an observation of the later reionization of the universe by the first stars. In this paper we present cosmological constraints from WMAP alone, for both the ΛCDM model and a set of possible extensions. We also consider the consistency of WMAP constraints with other recent astronomical observations. This is one of seven five-year WMAP papers. Hinshaw et al. (2008) describe the data processing and basic results. Hill et al. (2008) present new beam models and window functions, Gold et al. (2008) describe the emission from Galactic foregrounds, and Wright et al. (2008) the emission from extra-Galactic point sources. The angular power spectra are described in Nolta et al. (2008), and Komatsu et al. (2008) present and interpret cosmological constraints based on combining WMAP with other data. WMAP observations are used to produce full-sky maps of the CMB in five frequency bands centered at 23, 33, 41, 61, and 94 GHz (Hinshaw et al. 2008). With five years of data, we are now able to place better limits on the ΛCDM model, as well as to move beyond it to test the composition of the universe, details of reionization, sub-dominant components, characteristics of inflation, and primordial fluctuations. We have more than doubled the amount of polarized data used for cosmological analysis, allowing a better measure of the large-scale E-mode signal (Nolta et al. 2008). To this end we describe an alternative way to remove Galactic foregrounds from low resolution polarization maps in which Galactic emission is marginalized over, providing a cross-check of our results. With longer integration we also better probe the second and third acoustic peaks in the temperature angular power spectrum, and have many more year-to-year difference maps available for cross-checking systematic effects (Hinshaw et al. 2008).

Journal ArticleDOI
TL;DR: It is reported that homogeneous colloidal suspensions of chemically modified graphene sheets were readily produced in a wide variety of organic solvent systems and "paperlike" materials generated by very simple filtration of the reduced graphene oxide sheets had electrical conductivity values as high as 16,000 S/m.
Abstract: We report that homogeneous colloidal suspensions of chemically modified graphene sheets were readily produced in a wide variety of organic solvent systems. Two different sets of solubility parameters are used to rationalize when stable colloidal suspensions of graphene oxide sheets and, separately, of reduced graphene oxide sheets in a given solvent type are possible and when they are not. As an example of the utility of such colloidal suspensions, “paperlike” materials generated by very simple filtration of the reduced graphene oxide sheets had electrical conductivity values as high as 16 000 S/m.

Journal ArticleDOI
TL;DR: In this article, improved versions of the relations between supermassive black hole mass (M_BH) and host-galaxy bulge velocity dispersion (σ) and luminosity (L; the M-σ and M-L relations), based on 49 M_BH measurements and 19 upper limits, were derived.
Abstract: We derive improved versions of the relations between supermassive black hole mass (M_BH) and host-galaxy bulge velocity dispersion (σ) and luminosity (L; the M-σ and M-L relations), based on 49 M_BH measurements and 19 upper limits. Particular attention is paid to recovery of the intrinsic scatter (ε_0) in both relations. We find log(M_BH/M_☉) = α + β log(σ/200 km s⁻¹) with (α, β, ε_0) = (8.12 ± 0.08, 4.24 ± 0.41, 0.44 ± 0.06) for all galaxies and (α, β, ε_0) = (8.23 ± 0.08, 3.96 ± 0.42, 0.31 ± 0.06) for ellipticals. The results for ellipticals are consistent with previous studies, but the intrinsic scatter recovered for spirals is significantly larger. The scatter inferred reinforces the need for its consideration when calculating the local black hole mass function based on the M-σ relation, and further implies that there may be substantial selection bias in studies of the evolution of the M-σ relation. We estimate the M-L relation as log(M_BH/M_☉) = α + β log(L_V/10¹¹ L_☉,V) with (α, β, ε_0) = (8.95 ± 0.11, 1.11 ± 0.18, 0.38 ± 0.09), using only early-type galaxies. These results appear to be insensitive to a wide range of assumptions about the measurement errors and the distribution of intrinsic scatter. We show that culling the sample according to the resolution of the black hole's sphere of influence biases the relations to larger mean masses, larger slopes, and incorrect intrinsic residuals.
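
Plugging a velocity dispersion into the all-galaxy fit makes the scaling concrete (σ = 300 km/s is an arbitrary example value, not from the paper):

```latex
\log(M_{\mathrm{BH}}/M_\odot) = 8.12 + 4.24\,\log\frac{300}{200} \approx 8.12 + 4.24 \times 0.176 \approx 8.87
\quad\Rightarrow\quad M_{\mathrm{BH}} \approx 7\times10^{8}\,M_\odot,
```

with the intrinsic scatter of 0.44 dex implying roughly a factor of 2.8 (1σ) spread in mass at fixed dispersion.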

Journal ArticleDOI
TL;DR: The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.
Abstract: A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.

Journal ArticleDOI
TL;DR: This work used carbon isotope labeling in conjunction with Raman spectroscopic mapping to track carbon during the growth process and showed that at high temperatures sequentially introduced isotopic carbon diffuses into the Ni first, mixes, and then segregates and precipitates at the surface of the Ni, forming graphene and/or graphite.
Abstract: Large-area graphene growth is required for the development and production of electronic devices. Recently, chemical vapor deposition (CVD) of hydrocarbons has shown some promise in growing large-area graphene or few-layer graphene films on metal substrates such as Ni and Cu. It has been proposed that CVD growth of graphene on Ni occurs by a C segregation or precipitation process whereas graphene on Cu grows by a surface adsorption process. Here we used carbon isotope labeling in conjunction with Raman spectroscopic mapping to track carbon during the growth process. The data clearly show that at high temperatures sequentially introduced isotopic carbon diffuses into the Ni first, mixes, and then segregates and precipitates at the surface of Ni forming graphene and/or graphite with a uniform mixture of ¹²C and ¹³C as determined by the position of the Raman G band. On the other hand, graphene growth on Cu is clearly by surface adsorption where the spatial distribution of ¹²C and ¹³C follows the pre...

Proceedings ArticleDOI
17 May 2009
TL;DR: A framework for analyzing privacy and anonymity in social networks is presented and a new re-identification algorithm targeting anonymized social-network graphs is developed, showing that a third of the users who can be verified to have accounts on both Twitter and Flickr can be re-identified in the anonymous Twitter graph.
Abstract: Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
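
The abstract describes the algorithm only as topology-based; the sketch below is a heavily simplified seed-and-propagate matcher in that spirit (my construction, not the authors' algorithm), which grows a mapping between two graphs by counting already-mapped common neighbors:

```python
import networkx as nx

def propagate(g_aux, g_anon, seed_map, min_overlap=2):
    """Grow a node mapping g_aux -> g_anon from known seed pairs by repeatedly
    matching the unmapped node whose already-mapped neighbors overlap most
    with a candidate's neighborhood. A toy illustration of topology-only
    re-identification, not the paper's algorithm."""
    mapping = dict(seed_map)
    used = set(mapping.values())
    changed = True
    while changed:
        changed = False
        for u in g_aux:
            if u in mapping:
                continue
            scores = {}
            for n in g_aux[u]:                    # neighbors of u in the auxiliary graph
                if n in mapping:
                    for v in g_anon[mapping[n]]:  # neighbors of n's image in the anonymized graph
                        if v not in used:
                            scores[v] = scores.get(v, 0) + 1
            if scores:
                v, s = max(scores.items(), key=lambda kv: kv[1])
                if s >= min_overlap:
                    mapping[u], changed = v, True
                    used.add(v)
    return mapping

# sanity test: "anonymize" a known graph by relabeling, then re-identify it
g = nx.karate_club_graph()
relabel = {n: f"x{n}" for n in g}
g_anon = nx.relabel_nodes(g, relabel)
m = propagate(g, g_anon, seed_map={0: "x0", 33: "x33"})
print(sum(m[u] == relabel[u] for u in m), "of", len(m), "mapped nodes are correct")
```

Even this greedy toy recovers much of an isomorphic copy from two seeds, which hints at why topology alone can defeat naive anonymization.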

Journal ArticleDOI
TL;DR: This work shows that it can make motion editing more efficient by generalizing the edits an animator makes on short sequences of motion to other sequences, and predicts frames for the motion using Gaussian process models of kinematics and dynamics.
Abstract: One way that artists create compelling character animations is by manipulating details of a character's motion. This process is expensive and repetitive. We show that we can make such motion editing more efficient by generalizing the edits an animator makes on short sequences of motion to other sequences. Our method predicts frames for the motion using Gaussian process models of kinematics and dynamics. These estimates are combined with probabilistic inference. Our method can be used to propagate edits from examples to an entire sequence for an existing character, and it can also be used to map a motion from a control character to a very different target character. The technique shows good generalization. For example, we show that an estimator, learned from a few seconds of edited example animation using our methods, generalizes well enough to edit minutes of character animation in a high-quality fashion. Learning is interactive: An animator who wants to improve the output can provide small, correcting examples and the system will produce improved estimates of motion. We make this interactive learning process efficient and natural with a fast, full-body IK system with novel features. Finally, we present data from interviews with professional character animators that indicate that generalizing and propagating animator edits can save artists significant time and work.
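
A minimal sketch of the central idea, generalizing sparse animator edits with a Gaussian process: scikit-learn's GP regressor learns a per-frame offset for a single synthetic joint-angle curve. The feature choice, kernel, and data are illustrative assumptions; the paper's system models full-body kinematics and dynamics with probabilistic inference:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# original joint-angle curve over 200 frames (synthetic stand-in for mocap data)
frames = np.arange(200)
original = np.sin(frames / 200 * 4 * np.pi)

# the animator hand-edits a few frames; we learn the offset from the original
edited_frames = np.array([20, 25, 30, 100, 105])
offsets = np.array([0.4, 0.5, 0.4, -0.3, -0.3])

# features: frame index plus the original pose value at that frame
X = np.column_stack([edited_frames, original[edited_frames]])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=15.0) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, offsets)

# propagate the edit to the whole sequence; uncertainty grows away from edits
X_all = np.column_stack([frames, original])
pred, std = gp.predict(X_all, return_std=True)
edited_curve = original + pred
print("offset near an edit:", pred[25].round(2), "| far from edits:", pred[160].round(2))
print("uncertainty near/far:", std[25].round(2), std[160].round(2))
```

Learning the offset rather than the absolute pose is one simple way to make a handful of examples generalize: frames far from any edit fall back to the original motion, echoing the interactive correct-and-refine loop the abstract describes.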

Journal ArticleDOI
TL;DR: In this paper, a model-independent framework for sequence stratigraphy has been proposed, based on genetic units defined by the interplay of accommodation and sedimentation (i.e., forced regressive, lowstand and highstand normal regressive deposits), which are bounded by sequence stratigraphic surfaces.