
Showing papers published by "Columbia University" in 2016


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1 +1008 more, Institutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than $5.1\sigma$. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
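As a quick arithmetic check on the quoted numbers (the solar rest-mass energy used here is a standard constant, not taken from the abstract), the radiated energy is simply the mass deficit between the initial pair and the final black hole:

$$\Delta E = (36 + 29 - 62)\, M_\odot c^2 = 3\, M_\odot c^2 \approx 3 \times (1.8 \times 10^{47}\ \mathrm{J}) \approx 5.4 \times 10^{47}\ \mathrm{J},$$

consistent with the $3.0^{+0.5}_{-0.5} M_\odot c^2$ reported above.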

9,596 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4 +2519 more, Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
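The guidelines' central distinction between static autophagosome counts and flux can be made concrete with a minimal sketch: compare a marker level (e.g., LC3-II) with and without a lysosomal-degradation blocker. The function and numbers below are hypothetical illustrations of that logic, not a protocol from the paper.

```python
def autophagic_flux(marker_untreated: float, marker_blocked: float) -> float:
    """Estimate flux as the marker accumulation unmasked by blocking
    lysosomal degradation (e.g., with a v-ATPase inhibitor).

    A high static marker level with little further accumulation under
    blockade suggests a trafficking/degradation block, not more autophagy.
    """
    return marker_blocked - marker_untreated

# Hypothetical readings: many autophagosomes, but almost no extra accumulation
# once degradation is blocked -> low flux despite high static counts.
print(autophagic_flux(marker_untreated=8.0, marker_blocked=8.2))  # ~0.2: low flux
print(autophagic_flux(marker_untreated=2.0, marker_blocked=7.5))  # ~5.5: high flux
```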

5,187 citations


Journal ArticleDOI
Theo Vos1, Christine Allen1, Megha Arora1, Ryan M Barber1 +696 more, Institutions (260)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) estimates the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period 1990 to 2015.

5,050 citations


Journal ArticleDOI
Haidong Wang1, Mohsen Naghavi1, Christine Allen1, Ryan M Barber1 +841 more, Institutions (293)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Journal ArticleDOI
TL;DR: In intermediate-risk patients, TAVR was similar to surgical aortic-valve replacement with respect to the primary end point of death or disabling stroke; surgery resulted in fewer major vascular complications and less paravalvular aortic regurgitation.
Abstract: Background Previous trials have shown that among high-risk patients with aortic stenosis, survival rates are similar with transcatheter aortic-valve replacement (TAVR) and surgical aortic-valve replacement. We evaluated the two procedures in a randomized trial involving intermediate-risk patients. Methods We randomly assigned 2032 intermediate-risk patients with severe aortic stenosis, at 57 centers, to undergo either TAVR or surgical replacement. The primary end point was death from any cause or disabling stroke at 2 years. The primary hypothesis was that TAVR would not be inferior to surgical replacement. Before randomization, patients were entered into one of two cohorts on the basis of clinical and imaging findings; 76.3% of the patients were included in the transfemoral-access cohort and 23.7% in the transthoracic-access cohort. Results The rate of death from any cause or disabling stroke was similar in the TAVR group and the surgery group (P=0.001 for noninferiority). At 2 years, the Kaplan–Meier event...
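The trial's "not inferior" hypothesis can be sketched as follows: TAVR is declared noninferior if the upper confidence bound on its excess event rate stays below a prespecified margin. The event rates, group sizes, and margin below are hypothetical placeholders, since the abstract does not report them.

```python
import math

def noninferior(p_new, p_ref, n_new, n_ref, margin, z=1.96):
    """One-sided noninferiority check on a risk difference.

    Declares the new treatment noninferior if the upper ~97.5% confidence
    bound on (p_new - p_ref) falls below the prespecified margin.
    """
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    upper = (p_new - p_ref) + z * se
    return upper < margin

# Hypothetical 2-year event rates, group sizes, and a hypothetical 7.5% margin:
print(noninferior(p_new=0.193, p_ref=0.211, n_new=1011, n_ref=1021, margin=0.075))
```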

3,744 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3 +970 more, Institutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than $5\sigma$. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
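The quoted component masses fix the chirp mass, the combination that governs how the frequency and amplitude sweep upward during inspiral. The formula is standard; the arithmetic below simply uses the abstract's central values:

$$\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}} = \frac{(14.2 \times 7.5)^{3/5}}{(14.2 + 7.5)^{1/5}}\, M_\odot \approx 8.9\, M_\odot.$$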

3,448 citations


Journal ArticleDOI
25 Mar 2016-Science
TL;DR: This study demonstrates a relationship between clonal neoantigen burden and overall survival in primary lung adenocarcinomas and explores the impact of neoantigen intratumor heterogeneity (ITH) on antitumor immunity.
Abstract: As tumors grow, they acquire mutations, some of which create neoantigens that influence the response of patients to immune checkpoint inhibitors. We explored the impact of neoantigen intratumor heterogeneity (ITH) on antitumor immunity. Through integrated analysis of ITH and neoantigen burden, we demonstrate a relationship between clonal neoantigen burden and overall survival in primary lung adenocarcinomas. CD8+ tumor-infiltrating lymphocytes reactive to clonal neoantigens were identified in early-stage non–small cell lung cancer and expressed high levels of PD-1. Sensitivity to PD-1 and CTLA-4 blockade in patients with advanced NSCLC and melanoma was enhanced in tumors enriched for clonal neoantigens. T cells recognizing clonal neoantigens were detectable in patients with durable clinical benefit. Cytotoxic chemotherapy–induced subclonal neoantigens, contributing to an increased mutational load, were enriched in certain poor responders. These data suggest that neoantigen heterogeneity may influence immune surveillance and support therapeutic developments targeting clonal neoantigens.
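One way to see what "clonal neoantigen burden" means operationally: classify each mutant peptide by the fraction of cancer cells estimated to carry it. The threshold and data structure below are illustrative assumptions, not the paper's analysis pipeline.

```python
def clonal_burden(neoantigens, ccf_threshold=0.95):
    """Count neoantigens present in (nearly) all cancer cells.

    neoantigens: list of (peptide, cancer_cell_fraction) pairs.
    A mutation with CCF near 1.0 is clonal; lower CCFs indicate subclones.
    """
    n_clonal = sum(1 for _, ccf in neoantigens if ccf >= ccf_threshold)
    return n_clonal, len(neoantigens) - n_clonal

# Hypothetical tumor: 3 clonal neoantigens, 2 subclonal ones.
print(clonal_burden([("pepA", 1.00), ("pepB", 0.98), ("pepC", 0.96),
                     ("pepD", 0.40), ("pepE", 0.15)]))  # -> (3, 2)
```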

2,284 citations


Journal ArticleDOI
TL;DR: Together, the improvements made to both the small molecule and protein force field lead to a high level of accuracy in predicting protein-ligand binding measured over a wide range of targets and ligands (less than 1 kcal/mol RMS error) representing a 30% improvement over earlier variants of the OPLS force field.
Abstract: The parametrization and validation of the OPLS3 force field for small molecules and proteins are reported. Enhancements with respect to the previous version (OPLS2.1) include the addition of off-atom charge sites to represent halogen bonding and aryl nitrogen lone pairs as well as a complete refit of peptide dihedral parameters to better model the native structure of proteins. To adequately cover medicinal chemical space, OPLS3 employs over an order of magnitude more reference data and associated parameter types relative to other commonly used small molecule force fields (e.g., MMFF and OPLS_2005). As a consequence, OPLS3 achieves a high level of accuracy across performance benchmarks that assess small molecule conformational propensities and solvation. The newly fitted peptide dihedrals lead to significant improvements in the representation of secondary structure elements in simulated peptides and native structure stability over a number of proteins. Together, the improvements made to both the small molecule and protein force field lead to a high level of accuracy in predicting protein-ligand binding, measured over a wide range of targets and ligands (less than 1 kcal/mol RMS error), representing a 30% improvement over earlier variants of the OPLS force field.
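For orientation, the classic OPLS functional form that OPLS3 extends (with off-atom charge sites and refit dihedrals, as described above) is the standard fixed-charge force-field energy; this is textbook background rather than a formula from the paper:

$$E = \sum_{\mathrm{bonds}} K_r (r - r_0)^2 + \sum_{\mathrm{angles}} K_\theta (\theta - \theta_0)^2 + \sum_{\mathrm{dihedrals}} \sum_{n=1}^{3} \frac{V_n}{2}\left[1 + (-1)^{n+1}\cos n\phi\right] + \sum_{i<j} \left[\frac{q_i q_j e^2}{r_{ij}} + 4\epsilon_{ij}\!\left(\frac{\sigma_{ij}^{12}}{r_{ij}^{12}} - \frac{\sigma_{ij}^{6}}{r_{ij}^{6}}\right)\right] f_{ij},$$

where $f_{ij}$ scales the 1-4 nonbonded interactions.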

2,127 citations


Journal ArticleDOI
21 Jun 2016-JAMA
TL;DR: It is concluded with high certainty that screening for colorectal cancer in average-risk, asymptomatic adults aged 50 to 75 years is of substantial net benefit.
Abstract: Importance Colorectal cancer is the second leading cause of cancer death in the United States. In 2016, an estimated 134 000 persons will be diagnosed with the disease, and about 49 000 will die from it. Colorectal cancer is most frequently diagnosed among adults aged 65 to 74 years; the median age at death from colorectal cancer is 73 years. Objective To update the 2008 US Preventive Services Task Force (USPSTF) recommendation on screening for colorectal cancer. Evidence Review The USPSTF reviewed the evidence on the effectiveness of screening with colonoscopy, flexible sigmoidoscopy, computed tomography colonography, the guaiac-based fecal occult blood test, the fecal immunochemical test, the multitargeted stool DNA test, and the methylated SEPT9 DNA test in reducing the incidence of and mortality from colorectal cancer or all-cause mortality; the harms of these screening tests; and the test performance characteristics of these tests for detecting adenomatous polyps, advanced adenomas based on size, or both, as well as colorectal cancer. The USPSTF also commissioned a comparative modeling study to provide information on optimal starting and stopping ages and screening intervals across the different available screening methods. Findings The USPSTF concludes with high certainty that screening for colorectal cancer in average-risk, asymptomatic adults aged 50 to 75 years is of substantial net benefit. Multiple screening strategies are available to choose from, with different levels of evidence to support their effectiveness, as well as unique advantages and limitations, although there are no empirical data to demonstrate that any of the reviewed strategies provide a greater net benefit. Screening for colorectal cancer is a substantially underused preventive health strategy in the United States. Conclusions and Recommendations The USPSTF recommends screening for colorectal cancer starting at age 50 years and continuing until age 75 years (A recommendation). The decision to screen for colorectal cancer in adults aged 76 to 85 years should be an individual one, taking into account the patient’s overall health and prior screening history (C recommendation).

2,100 citations


Journal ArticleDOI
TL;DR: This Commission outlines the opportunities and challenges for investment in adolescent health and wellbeing at both country and global levels.

1,976 citations


Journal ArticleDOI
TL;DR: Several aspects of disease response assessment are clarified, along with endpoints for clinical trials, and future directions for disease response assessments are highlighted, to allow uniform reporting within and outside clinical trials.
Abstract: Treatment of multiple myeloma has substantially changed over the past decade with the introduction of several classes of new effective drugs that have greatly improved the rates and depth of response. Response criteria in multiple myeloma were developed to use serum and urine assessment of monoclonal proteins and bone marrow assessment (which is relatively insensitive). Given the high rates of complete response seen in patients with multiple myeloma with new treatment approaches, new response categories need to be defined that can identify responses that are deeper than those conventionally defined as complete response. Recent attempts have focused on the identification of residual tumour cells in the bone marrow using flow cytometry or gene sequencing. Furthermore, sensitive imaging techniques can be used to detect the presence of residual disease outside of the bone marrow. Combining these new methods, the International Myeloma Working Group has defined new response categories of minimal residual disease negativity, with or without imaging-based absence of extramedullary disease, to allow uniform reporting within and outside clinical trials. In this Review, we clarify several aspects of disease response assessment, along with endpoints for clinical trials, and highlight future directions for disease response assessments.

Journal ArticleDOI
TL;DR: It is demonstrated that anthropogenic climate change caused over half of the documented increases in fuel aridity since the 1970s and doubled the cumulative forest fire area since 1984; the findings suggest that anthropogenic climate change will continue to chronically enhance the potential for western US forest fire activity while fuels are not limiting.
Abstract: Increased forest fire activity across the western continental United States (US) in recent decades has likely been enabled by a number of factors, including the legacy of fire suppression and human settlement, natural climate variability, and human-caused climate change. We use modeled climate projections to estimate the contribution of anthropogenic climate change to observed increases in eight fuel aridity metrics and forest fire area across the western United States. Anthropogenic increases in temperature and vapor pressure deficit significantly enhanced fuel aridity across western US forests over the past several decades and, during 2000–2015, contributed to 75% more forested area experiencing high (>1 σ) fire-season fuel aridity and an average of nine additional days per year of high fire potential. Anthropogenic climate change accounted for ∼55% of observed increases in fuel aridity from 1979 to 2015 across western US forests, highlighting both anthropogenic climate change and natural climate variability as important contributors to increased wildfire potential in recent decades. We estimate that human-caused climate change contributed to an additional 4.2 million ha of forest fire area during 1984–2015, nearly doubling the forest fire area expected in its absence. Natural climate variability will continue to alternate between modulating and compounding anthropogenic increases in fuel aridity, but anthropogenic climate change has emerged as a driver of increased forest fire activity and should continue to do so while fuels are not limiting.
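The "nearly doubling" claim follows from simple bookkeeping: if anthropogenic climate change added about 4.2 Mha to a counterfactual burned area of roughly the same size, the observed area is about twice the counterfactual. A minimal sketch (the counterfactual total is an assumption inferred from the abstract's wording, not stated there):

```python
def attribution_ratio(attributable_mha: float, counterfactual_mha: float) -> float:
    """Ratio of observed burned area to the burned area expected
    without anthropogenic climate change."""
    observed = counterfactual_mha + attributable_mha
    return observed / counterfactual_mha

# ~4.2 Mha attributed to anthropogenic climate change (from the abstract);
# a counterfactual of ~4.4 Mha is assumed, consistent with "nearly doubling".
print(attribution_ratio(4.2, 4.4))  # ~1.95, i.e., nearly double
```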

Journal ArticleDOI
TL;DR: The discovery of ferroptosis, the mechanism of ferroptosis regulation, and its increasingly appreciated relevance to both normal and pathological physiology are summarized.

Journal ArticleDOI
28 Jan 2016-Cell
TL;DR: The complete set of genes associated with 1,122 diffuse grade II-III-IV gliomas was defined from The Cancer Genome Atlas, and molecular profiles were used to improve disease classification, identify molecular correlations, and provide insights into the progression from low- to high-grade disease.

Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2 +679 more, Institutions (268)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.

Journal ArticleDOI
TL;DR: Recent progress in the physics of metasurfaces operating at wavelengths ranging from microwave to visible is reviewed, with opinions of opportunities and challenges in this rapidly developing research field.
Abstract: Metamaterials are composed of periodic subwavelength metal/dielectric structures that resonantly couple to the electric and/or magnetic components of the incident electromagnetic fields, exhibiting properties that are not found in nature. This class of micro- and nano-structured artificial media have attracted great interest during the past 15 years and yielded ground-breaking electromagnetic and photonic phenomena. However, the high losses and strong dispersion associated with the resonant responses and the use of metallic structures, as well as the difficulty in fabricating the micro- and nanoscale 3D structures, have hindered practical applications of metamaterials. Planar metamaterials with subwavelength thickness, or metasurfaces, consisting of single-layer or few-layer stacks of planar structures, can be readily fabricated using lithography and nanoprinting methods, and the ultrathin thickness in the wave propagation direction can greatly suppress the undesirable losses. Metasurfaces enable a spatially varying optical response (e.g. scattering amplitude, phase, and polarization), mold optical wavefronts into shapes that can be designed at will, and facilitate the integration of functional materials to accomplish active control and greatly enhanced nonlinear response. This paper reviews recent progress in the physics of metasurfaces operating at wavelengths ranging from microwave to visible. We provide an overview of key metasurface concepts such as anomalous reflection and refraction, and introduce metasurfaces based on the Pancharatnam-Berry phase and Huygens' metasurfaces, as well as their use in wavefront shaping and beam forming applications, followed by a discussion of polarization conversion in few-layer metasurfaces and their related properties. An overview of dielectric metasurfaces reveals their ability to realize unique functionalities coupled with Mie resonances and their low ohmic losses. We also describe metasurfaces for wave guidance and radiation control, as well as active and nonlinear metasurfaces. Finally, we conclude by providing our opinions of opportunities and challenges in this rapidly developing research field.
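The anomalous reflection and refraction mentioned above follow from the generalized Snell's law: an interfacial phase profile $\Phi(x)$ with gradient $d\Phi/dx$ adds a tangential wavevector to the refracted beam. This standard relation from the metasurface literature (not written out in the abstract) is

$$n_t \sin\theta_t - n_i \sin\theta_i = \frac{\lambda_0}{2\pi}\frac{d\Phi}{dx},$$

and for the Pancharatnam-Berry metasurfaces also discussed here, an anisotropic element rotated by angle $\theta$ imparts a geometric phase $\Phi = \pm 2\theta$ on circularly polarized light, with the sign set by the incident handedness.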

Proceedings Article
08 Feb 2016
TL;DR: A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Abstract: We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At train-time the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs, we conducted two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. We also report our preliminary results on the challenging ImageNet dataset. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.
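A minimal sketch of two ingredients named in the abstract: deterministic sign binarization (with the straight-through gradient estimator used at train time), and replacing multiply-accumulate with bit-wise agreement counting. This is an illustrative NumPy reimplementation under simplifying assumptions, not the authors' Torch7/Theano code or their GPU kernel.

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Deterministic binarization to {-1, +1} via sign."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_grad(x: np.ndarray, upstream: np.ndarray) -> np.ndarray:
    """Straight-through estimator: pass gradients only where |x| <= 1,
    since sign() itself has zero gradient almost everywhere."""
    return upstream * (np.abs(x) <= 1.0)

def binary_dot(a: np.ndarray, b: np.ndarray) -> int:
    """Dot product of {-1,+1} vectors via bit operations:
    dot = n - 2 * (number of disagreeing positions)."""
    disagree = np.count_nonzero((a > 0) ^ (b > 0))
    return a.size - 2 * disagree

w, x = binarize(np.random.randn(64)), binarize(np.random.randn(64))
assert binary_dot(w, x) == int(np.dot(w, x))  # matches float arithmetic
```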

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1 +976 more, Institutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of $10^{13}$ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
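The Compton-wavelength constraint converts directly into a graviton mass bound via $m_g c^2 = hc/\lambda_g$. Using the quoted $\lambda_g \geq 10^{13}\ \mathrm{km} = 10^{16}\ \mathrm{m}$ and the standard value $hc \approx 1.24 \times 10^{-6}\ \mathrm{eV\,m}$:

$$m_g c^2 = \frac{hc}{\lambda_g} \leq \frac{1.24 \times 10^{-6}\ \mathrm{eV\,m}}{10^{16}\ \mathrm{m}} \approx 1.2 \times 10^{-22}\ \mathrm{eV}.$$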

Posted Content
TL;DR: A binary matrix multiplication GPU kernel is programmed with which it is possible to run the MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Abstract: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
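A generic uniform quantizer conveys the core idea of extreme low-precision weights and activations. The scheme below (symmetric, fixed range) is a simplifying assumption for illustration and is not claimed to be the paper's exact quantization function.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize values in [-1, 1] to 2**bits evenly spaced levels."""
    levels = 2 ** bits - 1
    x = np.clip(x, -1.0, 1.0)
    return np.round((x + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

x = np.array([-0.83, -0.2, 0.07, 0.6])
print(quantize(x, bits=1))  # {-1, +1}: binarization recovered as the 1-bit case
print(quantize(x, bits=2))  # four representable levels: {-1, -1/3, +1/3, +1}
```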

Proceedings ArticleDOI
20 Mar 2016
TL;DR: In this paper, a deep network is trained to assign contrastive embedding vectors to each time-frequency region of the spectrogram in order to implicitly predict the segmentation labels of the target spectrogram from the input mixtures.
Abstract: We address the problem of "cocktail-party" source separation in a deep learning framework called deep clustering. Previous deep network approaches to separation have shown promising performance in scenarios with a fixed number of sources, each belonging to a distinct signal class, such as speech and noise. However, for arbitrary source classes and number, "class-based" methods are not suitable. Instead, we train a deep network to assign contrastive embedding vectors to each time-frequency region of the spectrogram in order to implicitly predict the segmentation labels of the target spectrogram from the input mixtures. This yields a deep network-based analogue to spectral clustering, in that the embeddings form a low-rank pair-wise affinity matrix that approximates the ideal affinity matrix, while enabling much faster performance. At test time, the clustering step "decodes" the segmentation implicit in the embeddings by optimizing K-means with respect to the unknown assignments. Preliminary experiments on single-channel mixtures from multiple speakers show that a speaker-independent model trained on two-speaker mixtures can improve signal quality for mixtures of held-out speakers by an average of 6dB. More dramatically, the same model does surprisingly well with three-speaker mixtures.
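The low-rank structure the abstract alludes to can be made concrete: with embeddings $V$ and one-hot labels $Y$, the affinity-matching objective $\|VV^T - YY^T\|_F^2$ expands so that no TF-by-TF matrix is ever formed. A small sketch under those assumptions (illustrative, not the authors' code); at test time, K-means on the rows of $V$ decodes the segmentation, as described above.

```python
import numpy as np

def deep_clustering_loss(V: np.ndarray, Y: np.ndarray) -> float:
    """||V V^T - Y Y^T||_F^2 for embeddings V (n x d) and one-hot labels
    Y (n x c), expanded so only small d x d, d x c, c x c products appear."""
    return (np.linalg.norm(V.T @ V) ** 2
            - 2.0 * np.linalg.norm(V.T @ Y) ** 2
            + np.linalg.norm(Y.T @ Y) ** 2)

n, d, c = 1000, 20, 2          # time-frequency bins, embedding dim, speakers
V = np.random.randn(n, d)
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit-norm embeddings
Y = np.eye(c)[np.random.randint(0, c, size=n)]  # random one-hot assignments
print(deep_clustering_loss(V, Y))
```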

Journal ArticleDOI
TL;DR: Bone scintigraphy enables the diagnosis of cardiac ATTR amyloidosis to be made reliably, without the need for histology, in patients who do not have a monoclonal gammopathy, and the authors propose noninvasive diagnostic criteria that are applicable to the majority of patients with this disease.
Abstract: Background—Cardiac transthyretin (ATTR) amyloidosis is a progressive and fatal cardiomyopathy for which several promising therapies are in development. The diagnosis is frequently delayed or missed...

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3 +978 more, Institutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to $100 M_\odot$ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than $5\sigma$ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range $9$–$240\ \mathrm{Gpc^{-3}\,yr^{-1}}$. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
Swapan Mallick1,2,3, Heng Li3, Mark Lipson2, Iain Mathieson2, Melissa Gymrek, Fernando Racimo4, Mengyao Zhao1,2,3, Niru Chennagiri1,2,3, Susanne Nordenfelt1,2,3, Arti Tandon2,3, Pontus Skoglund2,3, Iosif Lazaridis2,3, Sriram Sankararaman2,3,5, Qiaomei Fu2,3,6, Nadin Rohland2,3, Gabriel Renaud7, Yaniv Erlich8, Thomas Willems9, Carla Gallo10, Jeffrey P. Spence4, Yun S. Song4,11, Giovanni Poletti10, Francois Balloux12, George van Driem13, Peter de Knijff14, Irene Gallego Romero15, Aashish R. Jha16, Doron M. Behar17, Claudio M. Bravi18, Cristian Capelli19, Tor Hervig20, Andrés Moreno-Estrada, Olga L. Posukh21, Elena Balanovska, Oleg Balanovsky22, Sena Karachanak-Yankova23, Hovhannes Sahakyan17,24, Draga Toncheva23, Levon Yepiskoposyan24, Chris Tyler-Smith25, Yali Xue25, M. Syafiq Abdullah26, Andres Ruiz-Linares12, Cynthia M. Beall27, Anna Di Rienzo16, Choongwon Jeong16, Elena B. Starikovskaya, Ene Metspalu17,28, Jüri Parik17, Richard Villems17,28,29, Brenna M. Henn30, Ugur Hodoglugil31, Robert W. Mahley32, Antti Sajantila33, George Stamatoyannopoulos34, Joseph Wee, Rita Khusainova35, Elza Khusnutdinova35, Sergey Litvinov17,35, George Ayodo36, David Comas37, Michael F. Hammer38, Toomas Kivisild17,39, William Klitz, Cheryl A. Winkler40, Damian Labuda41, Michael J. Bamshad34, Lynn B. Jorde42, Sarah A. Tishkoff11, W. Scott Watkins42, Mait Metspalu17, Stanislav Dryomov, Rem I. Sukernik43, Lalji Singh5,44, Kumarasamy Thangaraj44, Svante Pääbo7, Janet Kelso7, Nick Patterson3, David Reich1,2,3
13 Oct 2016-Nature
TL;DR: It is demonstrated that indigenous Australians, New Guineans and Andamanese do not derive substantial ancestry from an early dispersal of modern humans; instead, their modern human ancestry is consistent with coming from the same source as that of other non-Africans.
Abstract: Here we report the Simons Genome Diversity Project data set: high quality genomes from 300 individuals from 142 diverse populations. These genomes include at least 5.8 million base pairs that are not present in the human reference genome. Our analysis reveals key features of the landscape of human genome variation, including that the rate of accumulation of mutations has accelerated by about 5% in non-Africans compared to Africans since divergence. We show that the ancestors of some pairs of present-day human populations were substantially separated by 100,000 years ago, well before the archaeologically attested onset of behavioural modernity. We also demonstrate that indigenous Australians, New Guineans and Andamanese do not derive substantial ancestry from an early dispersal of modern humans; instead, their modern human ancestry is consistent with coming from the same source as that of other non-Africans.


Journal ArticleDOI
TL;DR: It is found that PUFA oxidation by lipoxygenases via a PHKG2-dependent iron pool is necessary for ferroptosis and that the covalent inhibition of the catalytic selenocysteine in Gpx4 prevents elimination of PUFA hydroperoxides; these findings suggest new strategies for controlling ferroPTosis in diverse contexts.
Abstract: Ferroptosis is a form of regulated nonapoptotic cell death that is involved in diverse disease contexts. Small molecules that inhibit glutathione peroxidase 4 (GPX4), a phospholipid peroxidase, cause lethal accumulation of lipid peroxides and induce ferroptotic cell death. Although ferroptosis has been suggested to involve accumulation of reactive oxygen species (ROS) in lipid environments, the mediators and substrates of ROS generation and the pharmacological mechanism of GPX4 inhibition that generates ROS in lipid environments are unknown. We report here the mechanism of lipid peroxidation during ferroptosis, which involves phosphorylase kinase G2 (PHKG2) regulation of iron availability to lipoxygenase enzymes, which in turn drive ferroptosis through peroxidation of polyunsaturated fatty acids (PUFAs) at the bis-allylic position; indeed, pretreating cells with PUFAs containing the heavy hydrogen isotope deuterium at the site of peroxidation (D-PUFA) prevented PUFA oxidation and blocked ferroptosis. We further found that ferroptosis inducers inhibit GPX4 by covalently targeting the active site selenocysteine, leading to accumulation of PUFA hydroperoxides. In summary, we found that PUFA oxidation by lipoxygenases via a PHKG2-dependent iron pool is necessary for ferroptosis and that the covalent inhibition of the catalytic selenocysteine in GPX4 prevents elimination of PUFA hydroperoxides; these findings suggest new strategies for controlling ferroptosis in diverse contexts.

Journal ArticleDOI
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants ($P < 5 \times 10^{-8}$) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference $P$ value $= 4.1 \times 10^{-10}$). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
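The $P < 5 \times 10^{-8}$ threshold in the abstract is the conventional genome-wide significance level, i.e., a Bonferroni correction of $\alpha = 0.05$ for roughly a million independent common-variant tests (standard background, not a derivation from the paper):

$$P < \frac{0.05}{10^6} = 5 \times 10^{-8}.$$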

Journal ArticleDOI
26 Jan 2016-JAMA
TL;DR: Screening for depression in the general adult population, including pregnant and postpartum women, should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up.
Abstract: Description Update of the 2009 US Preventive Services Task Force (USPSTF) recommendation on screening for depression in adults. Methods The USPSTF reviewed the evidence on the benefits and harms of screening for depression in adult populations, including older adults and pregnant and postpartum women; the accuracy of depression screening instruments; and the benefits and harms of depression treatment in these populations. Population This recommendation applies to adults 18 years and older. Recommendation The USPSTF recommends screening for depression in the general adult population, including pregnant and postpartum women. Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up. (B recommendation)

Journal ArticleDOI
TL;DR: This work uses recently available data on infrastructure, land cover and human access into natural areas to construct a globally standardized measure of the cumulative human footprint on the terrestrial environment at 1 km2 resolution from 1993 to 2009.
Abstract: Human pressures on the environment are changing spatially and temporally, with profound implications for the planet’s biodiversity and human economies. Here we use recently available data on infrastructure, land cover and human access into natural areas to construct a globally standardized measure of the cumulative human footprint on the terrestrial environment at 1 km2 resolution from 1993 to 2009. We note that while the human population has increased by 23% and the world economy has grown 153%, the human footprint has increased by just 9%. Still, 75% of the planet’s land surface is experiencing measurable human pressures. Moreover, pressures are perversely intense, widespread and rapidly intensifying in places with high biodiversity. Encouragingly, we discover decreases in environmental pressures in the wealthiest countries and those with strong control of corruption. Clearly the human footprint on Earth is changing, yet there are still opportunities for conservation gains. Habitat loss and urbanization are primary components of human impact on the environment. Here, Venter et al. use global data on infrastructure, agriculture, and urbanization to show that the human footprint is growing slower than the human population, but footprints are increasing in biodiverse regions.
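Conceptually, the footprint map is a per-pixel sum of individually scored pressure layers on a common grid. The layer names, scores, and toy grid below are illustrative assumptions in the spirit of the method, not the study's exact scoring rules.

```python
import numpy as np

def human_footprint(layers):
    """Sum individually scored pressure layers into one footprint grid.

    layers: dict mapping layer name -> 2-D array of pressure scores on a
    common 1 km^2 grid; higher scores mean stronger human pressure.
    """
    return sum(layers.values())

# Hypothetical 3x3 km toy region with three (of several) pressure layers.
grid = {
    "built_environments": np.array([[10, 0, 0], [0, 0, 0], [0, 0, 0]]),
    "croplands":          np.array([[0, 7, 7], [0, 7, 0], [0, 0, 0]]),
    "roads":              np.array([[8, 8, 0], [0, 0, 0], [0, 0, 0]]),
}
print(human_footprint(grid))
```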

Journal ArticleDOI
TL;DR: In this article, the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community, is presented.
Abstract: This White Paper presents the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community. It was commissioned by the managements of Brookhaven National Laboratory (BNL) and Thomas Jefferson National Accelerator Facility (JLab) with the objective of presenting a summary of scientific opportunities and goals of the EIC as a follow-up to the 2007 NSAC Long Range plan. This document is a culmination of a community-wide effort in nuclear science following a series of workshops on EIC physics over the past decades and, in particular, the focused ten-week program on “Gluons and quark sea at high energies” at the Institute for Nuclear Theory in Fall 2010. It contains a brief description of a few golden physics measurements along with accelerator and detector concepts required to achieve them. It has been benefited profoundly from inputs by the users’ communities of BNL and JLab. This White Paper offers the promise to propel the QCD science program in the US, established with the CEBAF accelerator at JLab and the RHIC collider at BNL, to the next QCD frontier.