
Showing papers by "University of Texas at Austin" published in 2012


Journal ArticleDOI
Curtis Huttenhower, Dirk Gevers, Rob Knight +250 more · Institutions (42)
14 Jun 2012-Nature
TL;DR: The Human Microbiome Project Consortium reports the first results of its analysis of microbial communities from distinct, clinically relevant body habitats in a human cohort; the insights into the microbial communities of a healthy population lay foundations for future exploration of the epidemiology, ecology and translational applications of the human microbiome.
Abstract: The Human Microbiome Project Consortium reports the first results of their analysis of microbial communities from distinct, clinically relevant body habitats in a human cohort; the insights into the microbial communities of a healthy population lay foundations for future exploration of the epidemiology, ecology and translational applications of the human microbiome.

8,410 citations


Journal ArticleDOI
TL;DR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and highly competitive with all present-day distortion-generic NR IQA algorithms.
Abstract: We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of “naturalness” in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.

3,780 citations
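To make the "locally normalized luminance" idea above concrete, here is a minimal sketch (not the authors' released code) of the mean-subtracted contrast-normalized (MSCN) coefficients whose empirical distribution BRISQUE models; the Gaussian window width and the stabilizing constant c are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) luminance coefficients.

    BRISQUE fits generalized Gaussian models to the histogram of these
    coefficients (and of products of neighbouring coefficients); sigma
    and c here are illustrative choices, not the paper's exact settings.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                     # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu  # local variance
    contrast = np.sqrt(np.abs(var))                        # local std. dev.
    return (image - mu) / (contrast + c)

# For a pristine natural image the MSCN histogram is close to Gaussian;
# distortions such as blur or blocking reshape it, which is what the
# quality-aware features measure.
```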


Journal ArticleDOI
06 Sep 2012-Nature
TL;DR: The first extensive map of human DHSs identified through genome-wide profiling in 125 diverse cell and tissue types is presented, revealing novel relationships between chromatin accessibility, transcription, DNA methylation and regulatory factor occupancy patterns.
Abstract: DNase I hypersensitive sites (DHSs) are markers of regulatory DNA and have underpinned the discovery of all classes of cis-regulatory elements including enhancers, promoters, insulators, silencers and locus control regions. Here we present the first extensive map of human DHSs identified through genome-wide profiling in 125 diverse cell and tissue types. We identify ∼2.9 million DHSs that encompass virtually all known experimentally validated cis-regulatory sequences and expose a vast trove of novel elements, most with highly cell-selective regulation. Annotating these elements using ENCODE data reveals novel relationships between chromatin accessibility, transcription, DNA methylation and regulatory factor occupancy patterns. We connect ∼580,000 distal DHSs with their target promoters, revealing systematic pairing of different classes of distal DHSs and specific promoter types. Patterning of chromatin accessibility at many regulatory regions is organized with dozens to hundreds of co-activated elements, and the transcellular DNase I sensitivity pattern at a given region can predict cell-type-specific functional behaviours. The DHS landscape shows signatures of recent functional evolutionary constraint. However, the DHS compartment in pluripotent and immortalized cells exhibits higher mutation rates than that in highly differentiated cells, exposing an unexpected link between chromatin accessibility, proliferative potential and patterns of human variation. An extensive map of human DNase I hypersensitive sites, markers of regulatory DNA, in 125 diverse cell and tissue types is described; integration of this information with other ENCODE-generated data sets identifies new relationships between chromatin accessibility, transcription, DNA methylation and regulatory factor occupancy patterns. This paper describes the first extensive map of human DNaseI hypersensitive sites — markers of regulatory DNA — in 125 diverse cell and tissue types. Integration of this information with other data sets generated by ENCODE (Encyclopedia of DNA Elements) identified new relationships between chromatin accessibility, transcription, DNA methylation and regulatory-factor occupancy patterns. Evolutionary-conservation analysis revealed signatures of recent functional constraint within DNaseI hypersensitive sites.

2,628 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review progress over the past decade in observations of large-scale star formation, with a focus on the interface between extragalactic and Galactic studies.
Abstract: We review progress over the past decade in observations of large-scale star formation, with a focus on the interface between extragalactic and Galactic studies. Methods of measuring gas contents and star-formation rates are discussed, and updated prescriptions for calculating star-formation rates are provided. We review relations between star formation and gas on scales ranging from entire galaxies to individual molecular clouds.

2,525 citations
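The star-formation-rate prescriptions discussed in the review share one simple logarithmic form with a single tracer-dependent zero-point; the equation below only restates that form (the per-tracer calibration constants themselves are tabulated in the paper):

```latex
\log \mathrm{SFR}\,[M_\odot\,\mathrm{yr}^{-1}] \;=\; \log L_x \;-\; \log C_x
```

where L_x is the luminosity measured in tracer band x (for example far-ultraviolet, Hα, or total infrared) and C_x is the calibration constant for that tracer.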


Journal ArticleDOI
TL;DR: It is shown, using first principles calculations, that monolayer molybdenum disulphide is an ideal material for valleytronics, for which valley polarization is achievable via valley-selective circular dichroism arising from its unique symmetry.
Abstract: The monolayer transition-metal dichalcogenide molybdenum disulphide has recently attracted attention owing to its distinctive electronic properties. Cao and co-workers present numerical evidence suggesting that circularly polarized light can preferentially excite a single valley in the band structure of this system.

2,163 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper proposes a new kernel-based method that takes advantage of low-dimensional structures that are intrinsic to many vision datasets, and introduces a metric that reliably measures the adaptability between a pair of source and target domains.
Abstract: In real-world applications of visual recognition, many factors — such as pose, illumination, or image quality — can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.

2,154 citations
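As an illustration of the geodesic-flow idea (a numerical sketch, not the paper's closed-form kernel), one can approximate the integral over subspaces by sampling points along the Grassmann geodesic between the source and target PCA subspaces; the subspace dimension d and step count below are arbitrary choices.

```python
import numpy as np

def grassmann_geodesic(Ps, Pt):
    """Return Phi(t): an orthonormal basis of the subspace at position t on
    the Grassmann geodesic from span(Ps) at t=0 to span(Pt) at t=1.
    Assumes the two subspaces are not orthogonal (Ps.T @ Pt invertible)."""
    B = Ps.T @ Pt
    A = Pt - Ps @ B                   # component of Pt outside span(Ps)
    M = np.linalg.solve(B.T, A.T).T   # tangent direction A @ inv(B)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    theta = np.arctan(s)              # principal angles between subspaces
    def Phi(t):
        return Ps @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))
    return Phi

def geodesic_flow_kernel(Xs, Xt, d=20, n_steps=50):
    """Approximate the GFK matrix G ~ (1/T) sum_t Phi(t) Phi(t)^T, so that
    k(x, y) = x^T G y. The paper evaluates the t-integral in closed form;
    the finite grid here (and d, n_steps) are illustrative simplifications."""
    # Top-d PCA subspaces of source and target data (rows are samples).
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T
    Phi = grassmann_geodesic(Ps, Pt)
    G = np.zeros((Xs.shape[1],) * 2)
    for t in np.linspace(0.0, 1.0, n_steps):
        B = Phi(t)
        G += B @ B.T
    return G / n_steps
```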


Journal ArticleDOI
TL;DR: In this article, two different ways to fabricate nitrogen-doped graphene (N-graphene) and demonstrate its use as a metal-free catalyst to study the catalytic active center for the oxygen reduction reaction (ORR).
Abstract: We present two different ways to fabricate nitrogen-doped graphene (N-graphene) and demonstrate its use as a metal-free catalyst to study the catalytic active center for the oxygen reduction reaction (ORR). N-graphene was produced by annealing of graphene oxide (G-O) under ammonia or by annealing of a N-containing polymer/reduced graphene oxide (RG-O) composite (polyaniline/RG-O or polypyrrole/RG-O). The effects of the N precursors and annealing temperature on the performance of the catalyst were investigated. The bonding state of the N atom was found to have a significant effect on the selectivity and catalytic activity for ORR. Annealing of G-O with ammonia preferentially formed graphitic N and pyridinic N centers, while annealing of polyaniline/RG-O and polypyrrole/RG-O tended to generate pyridinic and pyrrolic N moieties, respectively. Most importantly, the electrocatalytic activity of the catalyst was found to be dependent on the graphitic N content which determined the limiting current density, while the pyridinic N content improved the onset potential for ORR. However, the total N content in the graphene-based non-precious metal catalyst does not play an important role in the ORR process.

2,008 citations


Journal ArticleDOI
TL;DR: This work discusses how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data and develops a set of working standards and guidelines for ChIP experiments that are updated routinely.
Abstract: Chromatin immunoprecipitation (ChIP) followed by high-throughput DNA sequencing (ChIP-seq) has become a valuable and widely used approach for mapping the genomic location of transcription-factor binding and histone modifications in living cells. Despite its widespread use, there are considerable differences in how these experiments are conducted, how the results are scored and evaluated for quality, and how the data and metadata are archived for public use. These practices affect the quality and utility of any global ChIP experiment. Through our experience in performing ChIP-seq experiments, the ENCODE and modENCODE consortia have developed a set of working standards and guidelines for ChIP experiments that are updated routinely. The current guidelines address antibody validation, experimental replication, sequencing depth, data and metadata reporting, and data quality assessment. We discuss how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data. All data sets used in the analysis have been deposited for public viewing and downloading at the ENCODE (http://encodeproject.org/ENCODE/) and modENCODE (http://www.modencode.org/) portals.

1,801 citations
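One simple enrichment metric used in this kind of quality assessment is the fraction of reads in peaks (FRiP); below is a toy sketch of its computation (the input representations are simplified assumptions for illustration, not the consortia's tooling).

```python
import bisect

def fraction_of_reads_in_peaks(read_positions, peaks):
    """FRiP: fraction of mapped reads whose position falls inside a called
    peak region. Simplified inputs: read_positions is an iterable of
    (chrom, pos) tuples; peaks maps chrom -> list of (start, end)
    half-open intervals sorted by start."""
    starts = {c: [s for s, _ in ivs] for c, ivs in peaks.items()}
    n_in = total = 0
    for chrom, pos in read_positions:
        total += 1
        ivs = peaks.get(chrom)
        if not ivs:
            continue
        i = bisect.bisect_right(starts[chrom], pos) - 1  # last start <= pos
        if i >= 0 and pos < ivs[i][1]:
            n_in += 1
    return n_in / total if total else 0.0

# Well-enriched point-source ChIP-seq experiments typically show FRiP of
# at least a few percent; very low values flag weak enrichment.
```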


Journal ArticleDOI
Abstract: Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate, and BS density. Assuming that a mobile user connects to the strongest candidate BS, that the resulting Signal-to-Interference-plus-Noise Ratio (SINR) is greater than 1 when in coverage, and Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed form in the high-SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target SINR.

1,640 citations
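The random-BS model lends itself to a quick numerical check. The sketch below (our illustration, not the paper's closed-form analysis) simulates a K-tier network with Poisson-distributed BSs, Rayleigh fading, and strongest-BS association, then estimates coverage probability at a target SINR; the densities, powers, and simulation window are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_probability(lams, powers, alpha=4.0, T=1.0, noise=1e-9,
                         radius=30.0, trials=2000):
    """Monte Carlo estimate of downlink coverage in a K-tier HCN.

    Tier k has BS density lams[k] (per unit area) and transmit power
    powers[k]; path loss r**-alpha, i.i.d. unit-mean Rayleigh fading, and
    the typical user at the origin connects to the BS with the strongest
    received power. All parameter values here are illustrative."""
    covered = 0
    area = np.pi * radius**2
    for _ in range(trials):
        rx = []  # received powers from every BS in every tier
        for lam, p in zip(lams, powers):
            n = rng.poisson(lam * area)
            r = radius * np.sqrt(rng.random(n))  # uniform points in a disc
            h = rng.exponential(1.0, n)          # Rayleigh fading power
            rx.append(p * h * r**-alpha)
        rx = np.concatenate(rx)
        if rx.size == 0:
            continue
        s = rx.max()                             # serving BS
        sinr = s / (rx.sum() - s + noise)        # rest is interference
        covered += sinr > T
    return covered / trials

# e.g. a macro tier plus a denser, lower-power pico tier:
# print(coverage_probability(lams=[1e-2, 5e-2], powers=[100.0, 1.0]))
```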


Journal ArticleDOI
TL;DR: A comprehensive study that projects the speedup potential of future multicores and examines the underutilization of integration capacity (dark silicon) is timely and crucial.
Abstract: A key question for the microprocessor research and design community is whether scaling multicores will provide the performance and value needed to scale down many more technology generations. To provide a quantitative answer to this question, a comprehensive study that projects the speedup potential of future multicores and examines the underutilization of integration capacity (dark silicon) is timely and crucial.

1,556 citations


Journal ArticleDOI
TL;DR: An efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics model of discrete cosine transform (DCT) coefficients, which requires minimal training and adopts a simple probabilistic model for score prediction.
Abstract: We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.
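To illustrate the kind of DCT-domain NSS feature involved (a sketch under an assumed block size and pooling scheme, not the published feature set), one can fit a generalized Gaussian shape parameter to the non-DC DCT coefficients of each image block:

```python
import numpy as np
from scipy.fft import dct
from scipy.special import gamma

def ggd_shape(x, grid=np.arange(0.2, 10.0, 0.001)):
    """Moment-matching estimate of the generalized Gaussian shape parameter:
    match (E|x|)^2 / E[x^2] against Gamma(2/g)^2 / (Gamma(1/g)*Gamma(3/g))."""
    rho = np.mean(np.abs(x))**2 / np.mean(x**2)
    r = gamma(2 / grid)**2 / (gamma(1 / grid) * gamma(3 / grid))
    return grid[np.argmin((r - rho)**2)]

def dct_shape_features(image, block=5):
    """Collect per-block GGD shape parameters of non-DC DCT coefficients of a
    grayscale image; BLIINDS-II builds its NSS features from statistics like
    these (block size and percentile pooling here are illustrative)."""
    h, w = image.shape
    shapes = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = image[i:i + block, j:j + block].astype(np.float64)
            c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            shapes.append(ggd_shape(c.ravel()[1:]))  # drop the DC term
    return np.percentile(shapes, [10, 100])          # pooled summary
```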

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper presents a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures and achieves superior results on the challenging 3D action dataset.
Abstract: In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.
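The posture descriptor at the heart of this pipeline can be sketched in a few lines (an illustration under assumed bin counts and reference frame; the paper additionally aligns the coordinate system with the body orientation and uses soft voting):

```python
import numpy as np

def joint_location_histogram(joints, center, n_azimuth=12, n_elev=6):
    """Histogram of 3-D joint locations in spherical coordinates, in the
    spirit of HOJ3D: joints are binned by azimuth/elevation around a
    body-centered origin (e.g. the hip joint). Bin counts and the choice
    of center here are illustrative assumptions."""
    v = np.asarray(joints, dtype=float) - np.asarray(center, dtype=float)
    az = np.arctan2(v[:, 1], v[:, 0])                       # azimuth [-pi, pi]
    el = np.arcsin(v[:, 2] / (np.linalg.norm(v, axis=1) + 1e-9))
    hist, _, _ = np.histogram2d(
        az, el,
        bins=[n_azimuth, n_elev],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist.ravel() / max(len(joints), 1)               # posture descriptor

# Per frame: feature = joint_location_histogram(skeleton_joints, hip_center).
# Frames are then projected with LDA, vector-quantized into posture words,
# and the word sequences are modeled with discrete HMMs, as in the paper.
```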

Journal ArticleDOI
TL;DR: A simple adjustment to the traditional lithium–sulphur battery configuration is reported to achieve high capacity with a long cycle life and rapid charge rate, yielding a significant improvement not only in active material utilization but also in capacity retention, without involving complex synthesis or surface modification.
Abstract: The limitations in the cathode capacity compared with that of the anode have been an impediment to advancing lithium-ion battery technology. The lithium–sulphur system is appealing in this regard, as sulphur exhibits an order of magnitude higher capacity than the currently used cathodes. However, low active material utilization and poor cycle life hinder the practicality of lithium–sulphur batteries. Here we report a simple adjustment to the traditional lithium–sulphur battery configuration to achieve high capacity with a long cycle life and rapid charge rate. With a bifunctional microporous carbon paper between the cathode and separator, we observe a significant improvement not only in the active material utilization but also in capacity retention, without involving complex synthesis or surface modification. The insertion of a microporous carbon interlayer decreases the internal charge transfer resistance and localizes the soluble polysulphide species, facilitating a commercially feasible means of fabricating lithium–sulphur batteries. The practical performance of lithium–sulphur batteries is much lower than predicted because redox products dissolve over time. Su and Manthiram show that microporous carbon membranes inserted between cathode and separator localize soluble polysulphide species and improve battery cycling characteristics.

Journal ArticleDOI
TL;DR: This tutorial article overviews the history of femtocells, demystifies their key aspects, and provides a preview of the next few years, which the authors believe will see a rapid acceleration towards small cell technology.
Abstract: Femtocells, despite their name, pose a potentially large disruption to the carefully planned cellular networks that now connect a majority of the planet's citizens to the Internet and with each other. Femtocells - which by the end of 2010 already outnumbered traditional base stations and at the time of publication are being deployed at a rate of about five million a year - both enhance and interfere with this network in ways that are not yet well understood. Will femtocells be crucial for offloading data and video from the creaking traditional network? Or will femtocells prove more trouble than they are worth, undermining decades of careful base station deployment with unpredictable interference while delivering only limited gains? Or possibly neither: are femtocells just a "flash in the pan"; an exciting but short-lived stage of network evolution that will be rendered obsolete by improved WiFi offloading, new backhaul regulations and/or pricing, or other unforeseen technological developments? This tutorial article overviews the history of femtocells, demystifies their key aspects, and provides a preview of the next few years, which the authors believe will see a rapid acceleration towards small cell technology. In the course of the article, we also position and introduce the articles that headline this special issue.

Journal ArticleDOI
TL;DR: The factors involved in the changing epidemiology of enterococcal infections are discussed, with an emphasis on Enterococcus faecium as an emergent and challenging nosocomial problem.
Abstract: Arias and Murray discuss the factors that may have contributed to the rise of enterococci as nosocomial pathogens, with an emphasis on the epidemiology and pathogenesis of these species and their mechanisms of resistance to the most relevant anti-enterococcal agents used in clinical practice.

Journal ArticleDOI
TL;DR: It is shown that the combination of the phase-field model and local adaptive refinement provides an effective method for simulating fracture in three dimensions.

Journal ArticleDOI
30 Nov 2012-Science
TL;DR: The mass balance of Earth's polar ice sheets is estimated by combining the results of existing independent satellite techniques; there is good agreement between the different methods, especially in Greenland and West Antarctica, and combining the satellite data sets leads to greater certainty.
Abstract: We combined an ensemble of satellite altimetry, interferometry, and gravimetry data sets using common geographical regions, time intervals, and models of surface mass balance and glacial isostatic adjustment to estimate the mass balance of Earth’s polar ice sheets. We find that there is good agreement between different satellite methods—especially in Greenland and West Antarctica—and that combining satellite data sets leads to greater certainty. Between 1992 and 2011, the ice sheets of Greenland, East Antarctica, West Antarctica, and the Antarctic Peninsula changed in mass by –142 ± 49, +14 ± 43, –65 ± 26, and –20 ± 14 gigatonnes year−1, respectively. Since 1992, the polar ice sheets have contributed, on average, 0.59 ± 0.20 millimeter year−1 to the rate of global sea-level rise.
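The headline sea-level figure follows from the four mass-balance trends by simple arithmetic, using the standard equivalence of roughly 360 gigatonnes of ice per millimetre of global mean sea level:

```latex
(-142 + 14 - 65 - 20)\ \mathrm{Gt\,yr^{-1}} = -213\ \mathrm{Gt\,yr^{-1}},
\qquad
\frac{213\ \mathrm{Gt\,yr^{-1}}}{\sim 360\ \mathrm{Gt\,mm^{-1}}} \approx 0.59\ \mathrm{mm\,yr^{-1}}.
```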

Journal ArticleDOI
TL;DR: In this paper, the authors review experimental and theoretical literature across several fields and conclude that the terms "pi stacking" and "pi-pi interactions" do not accurately describe the forces that drive association between aromatic molecules of the types most commonly studied in chemistry or biology laboratories.
Abstract: It has become common to reference “pi-stacking” forces or “pi–pi interactions” when describing the interactions between neighbouring aromatic rings. Here, we review experimental and theoretical literature across several fields and conclude that the terms “pi-stacking” and “pi–pi interactions” do not accurately describe the forces that drive association between aromatic molecules of the types most commonly studied in chemistry or biology laboratories. We therefore propose that these terms are misleading and should no longer be used. Even without these terms, electrostatic considerations relating to polarized pi systems, as described by Hunter and Sanders, have provided a good qualitative starting place for predicting and understanding the interactions between aromatics for almost two decades. More recent work, however, is revealing that direct electrostatic interactions between polarized atoms of substituents as well as solvation/desolvation effects in strongly interacting solvents must also be considered and even dominate in many circumstances.

Journal ArticleDOI
TL;DR: In this paper, the authors use noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis, which identifies likely background eclipsing binaries.
Abstract: New transiting planet candidates are identified in sixteen months (May 2009 - September 2010) of data from the Kepler spacecraft. Nearly five thousand periodic transit-like signals are vetted against astrophysical and instrumental false positives, yielding 1,091 viable new planet candidates and bringing the total count up to over 2,300. Improved vetting metrics are employed, contributing to higher catalog reliability. Most notable is the noise-weighted robust averaging of multi-quarter photo-center offsets derived from difference image analysis, which identifies likely background eclipsing binaries. Twenty-two months of photometry are used for the purpose of characterizing each of the new candidates. Ephemerides (transit epoch, T_0, and orbital period, P) are tabulated as well as the products of light curve modeling: reduced radius (Rp/R*), reduced semi-major axis (d/R*), and impact parameter (b). The largest fractional increases are seen for the smallest planet candidates (197% for candidates smaller than 2 Re compared to 52% for candidates larger than 2 Re) and those at longer orbital periods (123% for candidates outside of 50-day orbits versus 85% for candidates inside of 50-day orbits). The gains are larger than expected from increasing the observing window from thirteen months (Quarter 1 to Quarter 5) to sixteen months (Quarter 1 to Quarter 6). This demonstrates the benefit of continued development of pipeline analysis software. The fraction of all host stars with multiple candidates has grown from 17% to 20%, and the paucity of short-period giant planets in multiple systems is still evident. The progression toward smaller planets at longer orbital periods with each new catalog release suggests that Earth-size planets in the Habitable Zone are forthcoming if, indeed, such planets are abundant.

Journal ArticleDOI
TL;DR: In this paper, the authors report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars.
Abstract: We report the distribution of planets as a function of planet radius, orbital period, and stellar effective temperature for orbital periods less than 50 days around solar-type (GK) stars. These results are based on the 1235 planets (formally "planet candidates") from the Kepler mission that include a nearly complete set of detected planets as small as 2 R_⊕. For each of the 156,000 target stars, we assess the detectability of planets as a function of planet radius, R_p, and orbital period, P, using a measure of the detection efficiency for each star. We also correct for the geometric probability of transit, R_*/a. We consider first Kepler target stars within the "solar subset" having T_eff = 4100-6100 K, log g = 4.0-4.9, and Kepler magnitude K_p < 15 mag. Planets with orbital periods shorter than 2 days are extremely rare: for planets larger than 2 R_⊕ we measure an occurrence of less than 0.001 planets per star. For all planets with orbital periods less than 50 days, we measure occurrence of 0.130 ± 0.008, 0.023 ± 0.003, and 0.013 ± 0.002 planets per star for planets with radii 2-4, 4-8, and 8-32 R_⊕, in agreement with Doppler surveys. We fit occurrence as a function of P to a power-law model with an exponential cutoff below a critical period P_0. For smaller planets, P_0 has larger values, suggesting that the "parking distance" for migrating planets moves outward with decreasing planet size. We also measured planet occurrence over a broader stellar T_eff range of 3600-7100 K, spanning M0 to F2 dwarfs. Over this range, the occurrence of 2-4 R_⊕ planets in the Kepler field increases with decreasing T_eff, with these small planets being seven times more abundant around cool stars (3600-4100 K) than the hottest stars in our sample (6600-7100 K).
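The power-law-with-cutoff model referred to in the abstract has the schematic form below (our restatement of the functional form; the fitted constants are per radius class and given in the paper):

```latex
\frac{df}{d\log P} \;=\; k\, P^{\beta} \left[ 1 - e^{-(P/P_0)^{\gamma}} \right]
```

so that occurrence falls off exponentially below the critical period P_0 and follows a power law at longer periods.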

Journal ArticleDOI
TL;DR: The authors conducted a comprehensive literature search, identifying 412 relevant articles, which were sorted into 5 categories: descriptive analysis of users, motivations for using Facebook, identity presentation, the role of Facebook in social interactions, and privacy and information disclosure.
Abstract: With over 800 million active users, Facebook is changing the way hundreds of millions of people relate to one another and share information. A rapidly growing body of research has accompanied the meteoric rise of Facebook as social scientists assess the impact of Facebook on social life. In addition, researchers have recognized the utility of Facebook as a novel tool to observe behavior in a naturalistic setting, test hypotheses, and recruit participants. However, research on Facebook emanates from a wide variety of disciplines, with results being published in a broad range of journals and conference proceedings, making it difficult to keep track of various findings. And because Facebook is a relatively recent phenomenon, uncertainty still exists about the most effective ways to do Facebook research. To address these issues, the authors conducted a comprehensive literature search, identifying 412 relevant articles, which were sorted into 5 categories: descriptive analysis of users, motivations for using Facebook, identity presentation, the role of Facebook in social interactions, and privacy and information disclosure. The literature review serves as the foundation from which to assess current findings and offer recommendations to the field for future research on Facebook and online social networks more broadly.

Journal ArticleDOI
TL;DR: A tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies is developed, and the average ergodic rate of the typical user and the minimum average user throughput (the smallest value among the average user throughputs supported by one cell in each tier) are derived.
Abstract: In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed forms in some plausible special cases. We also derive the average ergodic rate of the typical user, and the minimum average user throughput, i.e., the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs nor the number of tiers changes the outage probability or average ergodic rate in an interference-limited, fully loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics.
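As a concrete illustration of "bias towards admitting mobile users", a flexible association rule of the kind analyzed here can be written in a few lines (a sketch under assumed symbols: per-tier power P_j, bias B_j, and path-loss exponent alpha_j; none of the numbers are from the paper):

```python
import numpy as np

def associate(user_xy, bs_xy, tier_of_bs, P, B, alpha):
    """Pick the serving BS under biased received-power association:
    candidate value = P_j * B_j * r**(-alpha_j), maximized over all BSs.
    With all B_j = 1 this reduces to unbiased (strongest-BS) association;
    B_j > 1 pushes users onto tier j (e.g. lightly loaded picocells)."""
    r = np.linalg.norm(bs_xy - user_xy, axis=1)  # user-to-BS distances
    j = np.asarray(tier_of_bs)                   # tier index of each BS
    value = P[j] * B[j] * r ** (-alpha[j])
    return int(np.argmax(value))

# Example: two tiers (macro=0, pico=1) with a 5x pico bias:
# P = np.array([100.0, 1.0]); B = np.array([1.0, 5.0]); alpha = np.array([3.8, 4.0])
# serving = associate(np.zeros(2), bs_xy, tier_of_bs, P, B, alpha)
```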

Journal ArticleDOI
Kanchon K. Dasmahapatra, James R. Walters, Adriana D. Briscoe, John W. Davey, Annabel Whibley, Nicola J. Nadeau, Aleksey V. Zimin, Daniel S.T. Hughes, Laura Ferguson, Simon H. Martin, Camilo Salazar, James J. Lewis, Sebastian Adler, Seung-Joon Ahn, Dean A. Baker, Simon W. Baxter, Nicola Chamberlain, Ritika Chauhan, Brian A. Counterman, Tamas Dalmay, Lawrence E. Gilbert, Karl H.J. Gordon, David G. Heckel, Heather M. Hines, Katharina J. Hoff, Peter W. H. Holland, Emmanuelle Jacquin-Joly, Francis M. Jiggins, Robert T. Jones, Durrell D. Kapan, Paul J. Kersey, Gerardo Lamas, Daniel Lawson, Daniel Mapleson, Luana S. Maroja, Arnaud Martin, Simon Moxon, William J. Palmer, Riccardo Papa, Alexie Papanicolaou, Yannick Pauchet, David A. Ray, Neil Rosser, Steven L. Salzberg, Megan A. Supple, Alison K. Surridge, Ayşe Tenger-Trolander, Heiko Vogel, Paul A. Wilkinson, Derek Wilson, James A. Yorke, Furong Yuan, Alexi Balmuth, Cathlene Eland, Karim Gharbi, Marian Thomson, Richard A. Gibbs, Yi Han, Joy Jayaseelan, Christie Kovar, Tittu Mathew, Donna M. Muzny, Fiona Ongeri, Ling-Ling Pu, Jiaxin Qu, Rebecca Thornton, Kim C. Worley, Yuanqing Wu, Mauricio Linares, Mark Blaxter, Richard H. ffrench-Constant, Mathieu Joron, Marcus R. Kronforst, Sean P. Mullen, Robert D. Reed, Steven E. Scherer, Stephen Richards, James Mallet, W. Owen McMillan, Chris D. Jiggins
05 Jul 2012-Nature
TL;DR: It is inferred that closely related Heliconius species exchange protective colour-pattern genes promiscuously, implying that hybridization has an important role in adaptive radiation.
Abstract: Sequencing of the genome of the butterfly Heliconius melpomene shows that closely related Heliconius species exchange protective colour-pattern genes promiscuously.

Journal ArticleDOI
TL;DR: This panel addressed some of the limitations of the prior ARDS definition by incorporating current data, physiologic concepts, and clinical trials results to develop the Berlin definition, which should facilitate case recognition and better match treatment options to severity in both research trials and clinical practice.
Abstract: Our objective was to revise the definition of acute respiratory distress syndrome (ARDS) using a conceptual model incorporating reliability and validity, and a novel iterative approach with formal evaluation of the definition. The European Society of Intensive Care Medicine identified three chairs with broad expertise in ARDS who selected the participants and created the agenda. After 2 days of consensus discussions a draft definition was developed, which then underwent empiric evaluation followed by consensus revision. The Berlin Definition of ARDS maintains a link to prior definitions with diagnostic criteria of timing, chest imaging, origin of edema, and hypoxemia. Patients may have ARDS if the onset is within 1 week of a known clinical insult or new/worsening respiratory symptoms. For the bilateral opacities on chest radiograph criterion, a reference set of chest radiographs has been developed to enhance inter-observer reliability. The pulmonary artery wedge pressure criterion for hydrostatic edema was removed, and illustrative vignettes were created to guide judgments about the primary cause of respiratory failure. If no risk factor for ARDS is apparent, however, objective evaluation (e.g., echocardiography) is required to help rule out hydrostatic edema. A minimum level of positive end-expiratory pressure and mutually exclusive PaO2/FiO2 thresholds were chosen for the different levels of ARDS severity (mild, moderate, severe) to better categorize patients with different outcomes and potential responses to therapy. This panel addressed some of the limitations of the prior ARDS definition by incorporating current data, physiologic concepts, and clinical trials results to develop the Berlin definition, which should facilitate case recognition and better match treatment options to severity in both research trials and clinical practice.
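The mutually exclusive severity bands of the Berlin definition (PaO2/FiO2 of 300, 200, and 100 mmHg, each requiring at least 5 cmH2O of PEEP, or CPAP for the mild category) can be written out directly; the sketch below encodes only this oxygenation criterion and assumes the timing, chest-imaging, and origin-of-edema criteria are assessed separately.

```python
def berlin_severity(pao2_fio2_mmHg, peep_cmH2O):
    """Oxygenation criterion of the Berlin definition of ARDS: mutually
    exclusive PaO2/FiO2 bands, each valid only with PEEP (or CPAP, for
    the mild category) of at least 5 cmH2O. The other diagnostic
    criteria must be satisfied separately."""
    if peep_cmH2O < 5:
        return None  # oxygenation criterion not assessable per the definition
    if pao2_fio2_mmHg <= 100:
        return "severe"
    if pao2_fio2_mmHg <= 200:
        return "moderate"
    if pao2_fio2_mmHg <= 300:
        return "mild"
    return None      # does not meet the ARDS oxygenation criterion

# berlin_severity(150, 8) -> "moderate"
```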

Posted Content
TL;DR: A low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee is provided, and it is observed that simple per-tier biasing loses surprisingly little if the bias values A_j are chosen carefully.
Abstract: For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rates for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor A_j>=1, which makes it appear more attractive than the heavily-loaded macrocell. This paper bridges the gap between these approaches through several physical relaxations of the network-wide optimal association problem, whose solution is NP hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little, if the bias values A_j are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a max received power association.

Journal ArticleDOI
TL;DR: Increasing water storage through artificial recharge of excess surface water in aquifers by up to 3 km3 shows promise for coping with droughts and improving sustainability of groundwater resources in the Central Valley.
Abstract: Aquifer overexploitation could significantly impact crop production in the United States because 60% of irrigation relies on groundwater. Groundwater depletion in the irrigated High Plains and California Central Valley accounts for ∼50% of groundwater depletion in the United States since 1900. A newly developed High Plains recharge map shows that high recharge in the northern High Plains results in sustainable pumpage, whereas lower recharge in the central and southern High Plains has resulted in focused depletion of 330 km3 of fossil groundwater, mostly recharged during the past 13,000 y. Depletion is highly localized with about a third of depletion occurring in 4% of the High Plains land area. Extrapolation of the current depletion rate suggests that 35% of the southern High Plains will be unable to support irrigation within the next 30 y. Reducing irrigation withdrawals could extend the lifespan of the aquifer but would not result in sustainable management of this fossil groundwater. The Central Valley is a more dynamic, engineered system, with north/south diversions of surface water since the 1950s contributing to ∼7× higher recharge. However, these diversions are regulated because of impacts on endangered species. A newly developed Central Valley Hydrologic Model shows that groundwater depletion since the 1960s, totaling 80 km3, occurs mostly in the south (Tulare Basin) and primarily during droughts. Increasing water storage through artificial recharge of excess surface water in aquifers by up to 3 km3 shows promise for coping with droughts and improving sustainability of groundwater resources in the Central Valley.

Journal ArticleDOI
TL;DR: In this article, the authors developed a quantitative model for estimating the adsorbed gas estimate in the presence of moisture and thermal maturity of the gas-sorption ratio in shales.

Journal ArticleDOI
13 Sep 2012-Nature
TL;DR: These findings suggest that tropical protected areas are often intimately linked ecologically to their surrounding habitats, and that a failure to stem broad-scale loss and degradation of such habitats could sharply increase the likelihood of serious biodiversity declines.
Abstract: The rapid disruption of tropical forests probably imperils global biodiversity more than any other contemporary phenomenon(1-3). With deforestation advancing quickly, protected areas are increasingly becoming final refuges for threatened species and natural ecosystem processes. However, many protected areas in the tropics are themselves vulnerable to human encroachment and other environmental stresses(4-9). As pressures mount, it is vital to know whether existing reserves can sustain their biodiversity. A critical constraint in addressing this question has been that data describing a broad array of biodiversity groups have been unavailable for a sufficiently large and representative sample of reserves. Here we present a uniquely comprehensive data set on changes over the past 20 to 30 years in 31 functional groups of species and 21 potential drivers of environmental change, for 60 protected areas stratified across the world's major tropical regions. Our analysis reveals great variation in reserve 'health': about half of all reserves have been effective or performed passably, but the rest are experiencing an erosion of biodiversity that is often alarmingly widespread taxonomically and functionally. Habitat disruption, hunting and forest-product exploitation were the strongest predictors of declining reserve health. Crucially, environmental changes immediately outside reserves seemed nearly as important as those inside in determining their ecological fate, with changes inside reserves strongly mirroring those occurring around them. These findings suggest that tropical protected areas are often intimately linked ecologically to their surrounding habitats, and that a failure to stem broad-scale loss and degradation of such habitats could sharply increase the likelihood of serious biodiversity declines.

Journal ArticleDOI
TL;DR: In this paper, an infrared plasmonic surface based on a Fano-resonant asymmetric metamaterial exhibiting sharp resonances caused by the interference between subradiant and super-radiant resonances was introduced.
Abstract: Engineered optical metamaterials present a unique platform for biosensing applications owing to their ability to confine light to nanoscale regions and to their spectral selectivity. Infrared plasmonic metamaterials are especially attractive because their resonant response can be accurately tuned to that of the vibrational modes of the target biomolecules. Here we introduce an infrared plasmonic surface based on a Fano-resonant asymmetric metamaterial exhibiting sharp resonances caused by the interference between subradiant and superradiant plasmonic resonances. Owing to the metamaterial's asymmetry, the frequency of the subradiant resonance can be precisely determined and matched to the molecule's vibrational fingerprints. A multipixel array of Fano-resonant asymmetric metamaterials is used as a platform for multispectral biosensing of nanometre-scale monolayers of recognition proteins and their surface orientation, as well as for detecting chemical binding of target antibodies to recognition proteins.
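The sharp asymmetric resonances produced by interference between the subradiant and superradiant modes follow the classic Fano lineshape, which in a standard parameterization (not taken from this paper) reads:

```latex
\sigma(\epsilon) \;\propto\; \frac{(\epsilon + q)^2}{\epsilon^2 + 1},
\qquad
\epsilon = \frac{2(\omega - \omega_0)}{\Gamma},
```

where q sets the asymmetry of the line and Γ is the linewidth of the narrow (subradiant) mode.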

Journal ArticleDOI
TL;DR: A new paradigm for the realization of optical metamaterials is introduced, showing that three-dimensional effects may be obtained without complicated inclusions, but instead by tailoring the relative orientation within the lattice.
Abstract: Optical metamaterials are usually based on planarized, complex-shaped, resonant nano-inclusions. Three-dimensional geometries may provide a wider set of functionalities, including broadband chirality to manipulate circular polarization at the nanoscale, but their fabrication becomes challenging as their dimensions get smaller. Here we introduce a new paradigm for the realization of optical metamaterials, showing that three-dimensional effects may be obtained without complicated inclusions, but instead by tailoring the relative orientation within the lattice. We apply this concept to realize planarized, broadband bianisotropic metamaterials as stacked nanorod arrays with a tailored rotational twist. Because of the coupling among closely spaced twisted plasmonic metasurfaces, metamaterials realized with conventional lithography may effectively operate as three-dimensional helical structures with broadband bianisotropic optical response. The proposed concept is also shown to relax alignment requirements common in three-dimensional metamaterial designs. The realized sample constitutes an ultrathin, broadband circular polarizer that may be directly integrated within nanophotonic systems.