Showing papers by "University of California, Santa Barbara" published in 2016


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +334 moreInstitutions (82)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s^-1 Mpc^-1, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑ mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with | ΩK | < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ^2 potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
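As a quick back-of-the-envelope check, and assuming nothing beyond the numbers quoted in the abstract, the Hubble constant and matter density above combine into the physical matter density often quoted alongside them:

Ωm h^2 = 0.308 × (0.678)^2 ≈ 0.142, where h = H0 / (100 km s^-1 Mpc^-1) = 0.678.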

10,728 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
Kurt Lejaeghere1, Gustav Bihlmayer2, Torbjörn Björkman3, Torbjörn Björkman4, Peter Blaha5, Stefan Blügel2, Volker Blum6, Damien Caliste7, Ivano E. Castelli8, Stewart J. Clark9, Andrea Dal Corso10, Stefano de Gironcoli10, Thierry Deutsch7, J. K. Dewhurst11, Igor Di Marco12, Claudia Draxl13, Claudia Draxl14, Marcin Dulak15, Olle Eriksson12, José A. Flores-Livas11, Kevin F. Garrity16, Luigi Genovese7, Paolo Giannozzi17, Matteo Giantomassi18, Stefan Goedecker19, Xavier Gonze18, Oscar Grånäs20, Oscar Grånäs12, E. K. U. Gross11, Andris Gulans13, Andris Gulans14, Francois Gygi21, D. R. Hamann22, P. J. Hasnip23, Natalie Holzwarth24, Diana Iusan12, Dominik B. Jochym25, F. Jollet, Daniel M. Jones26, Georg Kresse27, Klaus Koepernik28, Klaus Koepernik29, Emine Kucukbenli8, Emine Kucukbenli10, Yaroslav Kvashnin12, Inka L. M. Locht12, Inka L. M. Locht30, Sven Lubeck14, Martijn Marsman27, Nicola Marzari8, Ulrike Nitzsche28, Lars Nordström12, Taisuke Ozaki31, Lorenzo Paulatto32, Chris J. Pickard33, Ward Poelmans1, Matt Probert23, Keith Refson25, Keith Refson34, Manuel Richter28, Manuel Richter29, Gian-Marco Rignanese18, Santanu Saha19, Matthias Scheffler35, Matthias Scheffler13, Martin Schlipf21, Karlheinz Schwarz5, Sangeeta Sharma11, Francesca Tavazza16, Patrik Thunström5, Alexandre Tkatchenko36, Alexandre Tkatchenko13, Marc Torrent, David Vanderbilt22, Michiel van Setten18, Veronique Van Speybroeck1, John M. Wills37, Jonathan R. Yates26, Guo-Xu Zhang38, Stefaan Cottenier1 
25 Mar 2016-Science
TL;DR: A procedure to assess the precision of DFT methods was devised and used to demonstrate reproducibility among many of the most widely used DFT codes, showing that the precision of DFT implementations can be determined even in the absence of one absolute reference code.
Abstract: The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.
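The "pairwise differences" between codes referred to above are quantified in this line of work by comparing the equation-of-state curves E(V) that two codes predict for the same crystal, typically as a root-mean-square energy difference over a volume window. The sketch below (plain Python/NumPy) illustrates that idea with made-up quadratic E(V) curves; the function name, the toy curves, and the volume window are assumptions for illustration, not the paper's data or its exact protocol.

import numpy as np

def eos_rms_difference(E1, E2, V):
    # Root-mean-square difference between two energy-volume curves sampled on
    # the same volume grid V (trapezoidal integration over the window).
    return np.sqrt(np.trapz((E1 - E2) ** 2, V) / (V[-1] - V[0]))

# Toy equations of state for one element as computed by two hypothetical codes
# (quadratic wells around slightly different equilibrium volumes; illustrative only).
V = np.linspace(18.8, 21.2, 200)                # volumes in A^3 per atom
E_code_a = 0.05 * (V - 20.00) ** 2              # energies in eV per atom
E_code_b = 0.05 * (V - 20.02) ** 2 + 0.0005

print("RMS difference: %.2f meV/atom" % (1000 * eos_rms_difference(E_code_a, E_code_b, V)))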

1,141 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an overview and outlook for the silicon waveguide platform, optical sources, optical modulators, photodetectors, integration approaches, packaging, applications of silicon photonics and approaches required to satisfy applications at mid-infrared wavelengths.
Abstract: Silicon photonics research can be dated back to the 1980s. However, the previous decade has witnessed an explosive growth in the field. Silicon photonics is a disruptive technology that is poised to revolutionize a number of application areas, for example, data centers, high-performance computing and sensing. The key driving force behind silicon photonics is the ability to use CMOS-like fabrication resulting in high-volume production at low cost. This is a key enabling factor for bringing photonics to a range of technology areas where the costs of implementation using traditional photonic elements such as those used for the telecommunications industry would be prohibitive. Silicon does however have a number of shortcomings as a photonic material. In its basic form it is not an ideal material in which to produce light sources, optical modulators or photodetectors for example. A wealth of research effort from both academia and industry in recent years has fueled the demonstration of multiple solutions to these and other problems, and as time progresses new approaches are increasingly being conceived. It is clear that silicon photonics has a bright future. However, with a growing number of approaches available, what will the silicon photonic integrated circuit of the future look like? This roadmap on silicon photonics delves into the different technology and application areas of the field giving an insight into the state-of-the-art as well as current and future challenges faced by researchers worldwide. Contributions authored by experts from both industry and academia provide an overview and outlook for the silicon waveguide platform, optical sources, optical modulators, photodetectors, integration approaches, packaging, applications of silicon photonics and approaches required to satisfy applications at mid-infrared wavelengths. Advances in science and technology required to meet challenges faced by the field in each of these areas are also addressed together with predictions of where the field is destined to reach.

939 citations


Journal ArticleDOI
Nabila Aghanim1, Monique Arnaud2, M. Ashdown3, J. Aumont1  +291 moreInstitutions (73)
TL;DR: In this article, the authors present the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK^2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
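For readers unfamiliar with the "Gaussian approximation to the distribution of cross-power spectra" used at high multipoles, a minimal sketch of such a band-power likelihood is given below in Python. The data vector, theory vector, and covariance matrix are made-up placeholders, not Planck products.

import numpy as np

def gaussian_band_power_loglike(C_hat, C_theory, cov):
    # ln L = -0.5 * [(C_hat - C_th)^T cov^-1 (C_hat - C_th) + ln det cov] + const,
    # the standard Gaussian approximation for binned power spectra at high multipoles.
    r = C_hat - C_theory
    chi2 = r @ np.linalg.solve(cov, r)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (chi2 + logdet)

# Placeholder band powers (in muK^2) and a diagonal covariance; illustrative only.
C_hat = np.array([5750.0, 2650.0, 2480.0])
C_th = np.array([5720.0, 2630.0, 2500.0])
cov = np.diag([60.0 ** 2, 40.0 ** 2, 35.0 ** 2])

print(gaussian_band_power_loglike(C_hat, C_th, cov))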

932 citations


Journal ArticleDOI
TL;DR: In this paper, the first electronic structure calculation performed on a quantum computer without exponentially costly precompilation is reported, where a programmable array of superconducting qubits is used to compute the energy surface of molecular hydrogen using two distinct quantum algorithms.
Abstract: We report the first electronic structure calculation performed on a quantum computer without exponentially costly precompilation. We use a programmable array of superconducting qubits to compute the energy surface of molecular hydrogen using two distinct quantum algorithms. First, we experimentally execute the unitary coupled cluster method using the variational quantum eigensolver. Our efficient implementation predicts the correct dissociation energy to within chemical accuracy of the numerically exact result. Second, we experimentally demonstrate the canonical quantum algorithm for chemistry, which consists of Trotterization and quantum phase estimation. We compare the experimental performance of these approaches to show clear evidence that the variational quantum eigensolver is robust to certain errors. This error tolerance inspires hope that variational quantum simulations of classically intractable molecules may be viable in the near future.
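The variational quantum eigensolver mentioned above is a hybrid loop: a parameterized circuit prepares a trial state, the device estimates the energy, and a classical optimizer updates the parameters. The sketch below reproduces that loop classically for a toy 2x2 Hamiltonian (NumPy/SciPy); the matrix and the single-parameter ansatz are illustrative stand-ins, not the molecular-hydrogen Hamiltonian or the hardware ansatz used in the experiment.

import numpy as np
from scipy.optimize import minimize

# Toy 2x2 Hermitian "Hamiltonian" (illustrative stand-in, not the H2 Hamiltonian).
H = np.array([[-1.05, 0.39],
              [0.39, -0.30]])

def ansatz(theta):
    # Single-parameter trial state |psi(theta)> = [cos(theta/2), sin(theta/2)].
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(params):
    # Expectation value <psi|H|psi> that the quantum device would estimate.
    psi = ansatz(params[0])
    return float(psi @ H @ psi)

result = minimize(energy, x0=[0.1], method="Nelder-Mead")
print("VQE-style minimum:  %.4f" % result.fun)
print("Exact ground state: %.4f" % np.linalg.eigvalsh(H)[0])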

925 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss recent breakthroughs for organic materials with high thermoelectric figures of merit and indicate how these materials may be incorporated into new module designs that take advantage of their mechanical properties.
Abstract: Conjugated polymers and related processing techniques have been developed for organic electronic devices ranging from lightweight photovoltaics to flexible displays. These breakthroughs have recently been used to create organic thermoelectric materials, which have potential for wearable heating and cooling devices, and near-room-temperature energy generation. So far, the best thermoelectric materials have been inorganic compounds (such as Bi2Te3) that have relatively low Earth abundance and are fabricated through highly complex vacuum processing routes. Molecular materials and hybrid organic–inorganic materials now demonstrate figures of merit approaching those of these inorganic materials, while also exhibiting unique transport behaviours that are suggestive of optimization pathways and device geometries that were not previously possible. In this Review, we discuss recent breakthroughs for organic materials with high thermoelectric figures of merit and indicate how these materials may be incorporated into new module designs that take advantage of their mechanical and thermoelectric properties. Thermoelectrics can be used to harvest energy and control temperature. Organic semiconducting materials have thermoelectric performance comparable to many inorganic materials near room temperature. Better understanding of their performance will provide a pathway to new types of conformal thermoelectric modules.
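For reference, the thermoelectric "figure of merit" discussed throughout this Review is the standard dimensionless combination of the Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ, and absolute temperature T (the textbook definition, not a result specific to this paper):

ZT = S^2 σ T / κ

Higher ZT at a given operating temperature means a larger fraction of the Carnot efficiency is accessible to the module.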

860 citations


Journal ArticleDOI
Sergey Alekhin, Wolfgang Altmannshofer1, Takehiko Asaka2, Brian Batell3, Fedor Bezrukov4, Kyrylo Bondarenko5, Alexey Boyarsky5, Ki-Young Choi6, Cristóbal Corral7, Nathaniel Craig8, David Curtin9, Sacha Davidson10, Sacha Davidson11, André de Gouvêa12, Stefano Dell'Oro, Patrick deNiverville13, P. S. Bhupal Dev14, Herbi K. Dreiner15, Marco Drewes16, Shintaro Eijima17, Rouven Essig18, Anthony Fradette13, Björn Garbrecht16, Belen Gavela19, Gian F. Giudice3, Mark D. Goodsell20, Mark D. Goodsell21, Dmitry Gorbunov22, Stefania Gori1, Christophe Grojean23, Alberto Guffanti24, Thomas Hambye25, Steen Honoré Hansen24, Juan Carlos Helo7, Juan Carlos Helo26, Pilar Hernández27, Alejandro Ibarra16, Artem Ivashko5, Artem Ivashko28, Eder Izaguirre1, Joerg Jaeckel29, Yu Seon Jeong30, Felix Kahlhoefer, Yonatan Kahn31, Andrey Katz3, Andrey Katz32, Andrey Katz33, Choong Sun Kim30, Sergey Kovalenko7, Gordan Krnjaic1, Valery E. Lyubovitskij34, Valery E. Lyubovitskij35, Valery E. Lyubovitskij36, Simone Marcocci, Matthew McCullough3, David McKeen37, Guenakh Mitselmakher38, Sven Moch39, Rabindra N. Mohapatra9, David E. Morrissey40, Maksym Ovchynnikov28, Emmanuel A. Paschos, Apostolos Pilaftsis14, Maxim Pospelov1, Maxim Pospelov13, Mary Hall Reno41, Andreas Ringwald, Adam Ritz13, Leszek Roszkowski, Valery Rubakov, Oleg Ruchayskiy24, Oleg Ruchayskiy17, Ingo Schienbein42, Daniel Schmeier15, Kai Schmidt-Hoberg, Pedro Schwaller3, Goran Senjanovic43, Osamu Seto44, Mikhail Shaposhnikov17, Lesya Shchutska38, J. Shelton45, Robert Shrock18, Brian Shuve1, Michael Spannowsky46, Andrew Spray47, Florian Staub3, Daniel Stolarski3, Matt Strassler33, Vladimir Tello, Francesco Tramontano48, Anurag Tripathi, Sean Tulin49, Francesco Vissani, Martin Wolfgang Winkler15, Kathryn M. Zurek50, Kathryn M. Zurek51 
Perimeter Institute for Theoretical Physics1, Niigata University2, CERN3, University of Connecticut4, Leiden University5, Korea Astronomy and Space Science Institute6, Federico Santa María Technical University7, University of California, Santa Barbara8, University of Maryland, College Park9, University of Lyon10, Claude Bernard University Lyon 111, Northwestern University12, University of Victoria13, University of Manchester14, University of Bonn15, Technische Universität München16, École Polytechnique Fédérale de Lausanne17, Stony Brook University18, Autonomous University of Madrid19, Centre national de la recherche scientifique20, University of Paris21, Moscow Institute of Physics and Technology22, Autonomous University of Barcelona23, University of Copenhagen24, Université libre de Bruxelles25, University of La Serena26, University of Valencia27, Taras Shevchenko National University of Kyiv28, Heidelberg University29, Yonsei University30, Princeton University31, University of Geneva32, Harvard University33, Tomsk State University34, Tomsk Polytechnic University35, University of Tübingen36, University of Washington37, University of Florida38, University of Hamburg39, TRIUMF40, University of Iowa41, University of Grenoble42, International Centre for Theoretical Physics43, Hokkai Gakuen University44, University of Illinois at Urbana–Champaign45, Durham University46, University of Melbourne47, University of Naples Federico II48, York University49, University of California, Berkeley50, Lawrence Berkeley National Laboratory51
TL;DR: It is demonstrated that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
Abstract: This paper describes the physics case for a new fixed target facility at CERN SPS. The SHiP (search for hidden particles) experiment is intended to hunt for new physics in the largely unexplored domain of very weakly interacting particles with masses below the Fermi scale, inaccessible to the LHC experiments, and to study tau neutrino physics. The same proton beam setup can be used later to look for decays of tau-leptons with lepton flavour number non-conservation, τ → 3μ, and to search for weakly-interacting sub-GeV dark matter candidates. We discuss the evidence for physics beyond the standard model and describe interactions between new particles and four different portals: scalars, vectors, fermions or axion-like particles. We discuss motivations for different models, manifesting themselves via these interactions, and how they can be probed with the SHiP experiment and present several case studies. The prospects to search for relatively light SUSY and composite particles at SHiP are also discussed. We demonstrate that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.

842 citations


Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +301 moreInstitutions (72)
TL;DR: In this paper, the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario were studied, and it was shown that the density of DE at early times has to be below 2% of the critical density, even when forced to play a role for z < 50.
Abstract: We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.
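The "Taylor expansions of the equation of state w(a)" mentioned above are conventionally written, to first order in the scale factor a, in the two-parameter (w0, wa) form (the standard parameterization in such analyses, quoted here for clarity rather than taken verbatim from the abstract):

w(a) = w0 + wa (1 − a), with w0 = w(a = 1) and wa = −dw/da evaluated today,

so that ΛCDM corresponds to w0 = −1 and wa = 0.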

816 citations


Proceedings ArticleDOI
01 Jan 2016
TL;DR: Driller is presented: a hybrid vulnerability excavation tool that leverages fuzzing and selective concolic execution in a complementary manner to find deeper bugs; by combining the strengths of the two techniques, it mitigates their weaknesses, avoiding the path explosion inherent in concolic analysis and the incompleteness of fuzzing.
Abstract: Memory corruption vulnerabilities are an ever-present risk in software, which attackers can exploit to obtain unauthorized access to confidential information. As products with access to sensitive data are becoming more prevalent, the number of potentially exploitable systems is also increasing, resulting in a greater need for automated software vetting tools. DARPA recently funded a competition, with millions of dollars in prize money, to further research focusing on automated vulnerability finding and patching, showing the importance of research in this area. Current techniques for finding potential bugs include static, dynamic, and concolic analysis systems, each of which has its own advantages and disadvantages. A common limitation of systems designed to create inputs which trigger vulnerabilities is that they only find shallow bugs and struggle to exercise deeper paths in executables. We present Driller, a hybrid vulnerability excavation tool which leverages fuzzing and selective concolic execution in a complementary manner to find deeper bugs. Inexpensive fuzzing is used to exercise compartments of an application, while concolic execution is used to generate inputs which satisfy the complex checks separating the compartments. By combining the strengths of the two techniques, we mitigate their weaknesses, avoiding the path explosion inherent in concolic analysis and the incompleteness of fuzzing. Driller uses selective concolic execution to explore only the paths deemed interesting by the fuzzer and to generate inputs for conditions that the fuzzer cannot satisfy. We evaluate Driller on 126 applications released in the qualifying event of the DARPA Cyber Grand Challenge and show its efficacy by identifying the same number of vulnerabilities, in the same time, as the top-scoring team of the qualifying event.
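A minimal, self-contained sketch of the fuzzing/concolic hand-off described above is given below in Python. The toy "binary", the magic-value check, and the concolic_stub function are illustrative placeholders (a real tool would trace the stuck path and query an SMT solver), not Driller's actual interfaces.

import random

MAGIC = b"\xde\xad\xbe\xef"

def program(data):
    # Toy target: a cheap path plus a compartment guarded by a hard magic check.
    if len(data) < 8:
        return "shallow"
    if data[:4] != MAGIC:
        return "compartment-1"
    return "deep-compartment"  # where the deeper bugs would live

def fuzz(corpus, tries=500):
    # Cheap random byte mutations: good at easy paths, hopeless at magic values.
    out = []
    for _ in range(tries):
        d = bytearray(random.choice(corpus))
        d[random.randrange(len(d))] = random.randrange(256)
        out.append(bytes(d))
    return out

def concolic_stub(corpus):
    # Stand-in for concolic execution: a real engine traces a stuck path and asks
    # an SMT solver for an input satisfying the un-taken branch; here we simply
    # return the solved magic prefix directly.
    return [MAGIC + bytes(8)]

random.seed(0)
corpus = [bytes(12)]

fuzz_only = {program(d) for d in fuzz(corpus)}
print("fuzzing alone reached:", sorted(fuzz_only))        # no deep compartment

corpus += concolic_stub(corpus)                           # hand-off when coverage stalls
hybrid = {program(d) for d in corpus + fuzz(corpus)}
print("with concolic assist: ", sorted(hybrid))           # includes deep-compartment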

778 citations


Journal ArticleDOI
TL;DR: Time-translation symmetry can be spontaneously broken in driven quantum systems, opening the door to time crystals, according to new theoretical predictions discussed by the authors.
Abstract: Time-translation symmetry can be spontaneously broken in driven quantum systems opening the door to time crystals, according to new theoretical predictions.

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper presents a binary analysis framework that implements a number of previously proposed analysis techniques in a unifying way, allowing other researchers to compose them and develop new approaches.
Abstract: Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past. We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these approaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community.
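The open-sourced framework described here appears to be angr; assuming that identification is correct, the short sketch below shows how its building blocks compose: load a binary, then drive symbolic execution toward a target address while avoiding another. The binary path and the two addresses are placeholders, not values from the paper.

# Hedged sketch assuming the framework is angr; "./target" and the addresses
# below are placeholders, not artifacts from the paper's evaluation.
import angr

proj = angr.Project("./target", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Symbolically explore until a chosen address is reached, pruning another.
simgr.explore(find=0x401337, avoid=0x401400)

if simgr.found:
    # Concretize the stdin that steers execution to the target.
    print(simgr.found[0].posix.dumps(0))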

Journal ArticleDOI
TL;DR: In this article, the authors study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers, and show that this sampling task must take exponential time in a classical computer.
Abstract: A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time in a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits - the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near-term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits which approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
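A minimal illustration of the cross-entropy benchmark: given the ideal output probabilities p(x) of a circuit (from classical simulation) and bitstrings sampled from a device, one compares the sampled average of -log p(x) with the values expected for a perfect sampler and for uniform noise. The toy distribution and the "device" below are made up (a Porter-Thomas-like stand-in and an assumed fidelity parameter), not a simulated quantum circuit.

import numpy as np

rng = np.random.default_rng(0)

# Toy "ideal" output distribution over 2^n bitstrings (exponential, i.e.
# Porter-Thomas-like), standing in for classically simulated circuit probabilities.
n = 10
p_ideal = rng.exponential(size=2 ** n)
p_ideal /= p_ideal.sum()

def cross_entropy(samples, p):
    # Sample estimate of S = -<log p(x)> over the measured bitstrings x.
    return float(-np.mean(np.log(p[samples])))

# Fake "experimental" samples: a mixture of the ideal distribution and uniform
# noise with an assumed fidelity alpha (toy model of a noisy device).
alpha, m = 0.7, 20000
from_ideal = rng.choice(2 ** n, size=m, p=p_ideal)
from_noise = rng.integers(0, 2 ** n, size=m)
samples = np.where(rng.random(m) < alpha, from_ideal, from_noise)

S_meas = cross_entropy(samples, p_ideal)
S_unif = cross_entropy(rng.integers(0, 2 ** n, size=m), p_ideal)  # fully depolarized
S_perfect = float(-np.sum(p_ideal * np.log(p_ideal)))             # ideal sampler
print("estimated fidelity ~ %.2f (toy ground truth %.1f)" % ((S_unif - S_meas) / (S_unif - S_perfect), alpha))

In this toy model the cross-entropy difference recovers the mixing fraction alpha, which is the sense in which the benchmark approximates circuit fidelity.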

Journal ArticleDOI
Lourens Poorter1, Frans Bongers1, T. Mitchell Aide2, Angelica M. Almeyda Zambrano3, Patricia Balvanera4, Justin M. Becknell5, Vanessa K. Boukili6, Pedro H. S. Brancalion7, Eben N. Broadbent3, Robin L. Chazdon6, Dylan Craven8, Dylan Craven9, Jarcilene S. Almeida-Cortez10, George A. L. Cabral10, Ben H. J. de Jong, Julie S. Denslow11, Daisy H. Dent12, Daisy H. Dent9, Saara J. DeWalt13, Juan Manuel Dupuy, Sandra M. Durán14, Mário M. Espírito-Santo, María C. Fandiño, Ricardo Gomes César7, Jefferson S. Hall9, José Luis Hernández-Stefanoni, Catarina C. Jakovac1, Catarina C. Jakovac15, André Braga Junqueira15, André Braga Junqueira1, Deborah K. Kennard16, Susan G. Letcher17, Juan Carlos Licona, Madelon Lohbeck1, Madelon Lohbeck18, Erika Marin-Spiotta19, Miguel Martínez-Ramos4, Paulo Eduardo dos Santos Massoca15, Jorge A. Meave4, Rita C. G. Mesquita15, Francisco Mora4, Rodrigo Muñoz4, Robert Muscarella20, Robert Muscarella21, Yule Roberta Ferreira Nunes, Susana Ochoa-Gaona, Alexandre Adalardo de Oliveira7, Edith Orihuela-Belmonte, Marielos Peña-Claros1, Eduardo A. Pérez-García4, Daniel Piotto, Jennifer S. Powers22, Jorge Rodríguez-Velázquez4, I. Eunice Romero-Pérez4, Jorge Ruiz23, Jorge Ruiz24, Juan Saldarriaga, Arturo Sanchez-Azofeifa14, Naomi B. Schwartz21, Marc K. Steininger, Nathan G. Swenson25, Marisol Toledo, María Uriarte21, Michiel van Breugel9, Michiel van Breugel26, Michiel van Breugel27, Hans van der Wal28, Maria das Dores Magalhães Veloso, Hans F. M. Vester29, Alberto Vicentini15, Ima Célia Guimarães Vieira30, Tony Vizcarra Bentos15, G. Bruce Williamson15, G. Bruce Williamson31, Danaë M. A. Rozendaal6, Danaë M. A. Rozendaal32, Danaë M. A. Rozendaal1 
11 Feb 2016-Nature
TL;DR: A biomass recovery map of Latin America is presented, which illustrates geographical and climatic variation in carbon sequestration potential during forest regrowth and will support policies to minimize forest loss in areas where biomass resilience is naturally low and promote forest regeneration and restoration in humid tropical lowland areas with high biomass resilience.
Abstract: Land-use change occurs nowhere more rapidly than in the tropics, where the imbalance between deforestation and forest regrowth has large consequences for the global carbon cycle. However, considerable uncertainty remains about the rate of biomass recovery in secondary forests, and how these rates are influenced by climate, landscape, and prior land use. Here we analyse aboveground biomass recovery during secondary succession in 45 forest sites and about 1,500 forest plots covering the major environmental gradients in the Neotropics. The studied secondary forests are highly productive and resilient. Aboveground biomass recovery after 20 years was on average 122 megagrams per hectare (Mg ha^-1), corresponding to a net carbon uptake of 3.05 Mg C ha^-1 yr^-1, 11 times the uptake rate of old-growth forests. Aboveground biomass stocks took a median time of 66 years to recover to 90% of old-growth values. Aboveground biomass recovery after 20 years varied 11.3-fold (from 20 to 225 Mg ha^-1) across sites, and this recovery increased with water availability (higher local rainfall and lower climatic water deficit). We present a biomass recovery map of Latin America, which illustrates geographical and climatic variation in carbon sequestration potential during forest regrowth. The map will support policies to minimize forest loss in areas where biomass resilience is naturally low (such as seasonally dry forest regions) and promote forest regeneration and restoration in humid tropical lowland areas with high biomass resilience.
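The quoted uptake rate is consistent with the 20-year biomass figure under the usual convention that roughly half of dry biomass is carbon (the 0.5 factor below is that assumption, implied by the abstract's numbers rather than stated in it):

122 Mg biomass ha^-1 × 0.5 Mg C per Mg biomass / 20 yr ≈ 3.05 Mg C ha^-1 yr^-1.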

Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam  +2283 moreInstitutions (141)
TL;DR: Combined fits to CMS UE proton–proton data at 7 TeV and to UE proton–antiproton data from the CDF experiment at lower sqrt(s) are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV.
Abstract: New sets of parameters ("tunes") for the underlying-event (UE) modeling of the PYTHIA8, PYTHIA6 and HERWIG++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE data at sqrt(s) = 7 TeV and to UE data from the CDF experiment at lower sqrt(s), are used to study the UE models and constrain their parameters, providing thereby improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (q q-bar to Z / gamma* to lepton-antilepton + jets) observables at 7 and 8 TeV are presented, as well as predictions of MB and UE observables at 13 TeV.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, Frederico Arroja4  +306 moreInstitutions (75)
TL;DR: In this article, the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG).
Abstract: The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators – separable template-fitting (KSW), binned, and modal – we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone f_NL^local = 2.5 ± 5.7, f_NL^equil = -16 ± 70, and f_NL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain f_NL^local = 0.8 ± 5.0, f_NL^equil = -4 ± 43, and f_NL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the “look elsewhere” effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be

Journal ArticleDOI
TL;DR: A novel perylene bisimide (PBI) acceptor, SdiPBI-Se, in which selenium atoms are introduced into the perylene core, is reported; the results suggest that non-fullerene acceptors have enormous potential to rival or even surpass the performance of their fullerene counterparts.
Abstract: Non-fullerene acceptors have recently attracted tremendous interest because of their potential as alternatives to fullerene derivatives in bulk heterojunction organic solar cells. However, the power conversion efficiencies (PCEs) have lagged far behind those of the polymer/fullerene system, mainly because of the low fill factor (FF) and photocurrent. Here we report a novel perylene bisimide (PBI) acceptor, SdiPBI-Se, in which selenium atoms were introduced into the perylene core. With a well-established wide-band-gap polymer (PDBT-T1) as the donor, a high efficiency of 8.4% with an unprecedented high FF of 70.2% is achieved for solution-processed non-fullerene organic solar cells. Efficient photon absorption, high and balanced charge carrier mobility, and ultrafast charge generation processes in PDBT-T1:SdiPBI-Se films account for the high photovoltaic performance. Our results suggest that non-fullerene acceptors have enormous potential to rival or even surpass the performance of their fullerene counterparts.
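For context, the power conversion efficiency and fill factor quoted above are related through the standard photovoltaic expression (textbook definition, not specific to this paper), with Jsc the short-circuit current density, Voc the open-circuit voltage, and Pin the incident light power density:

PCE = (Jsc × Voc × FF) / Pin.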

Journal ArticleDOI
TL;DR: In this paper, the performances of traditional technologies and nanotechnology for water treatment and environmental remediation were compared with the goal of providing an up-to-date reference on the state of treatment techniques for researchers, industry, and policy makers.

Journal ArticleDOI
TL;DR: In this article, the authors review evidence for the responses of marine life to recent climate change across ocean regions, from tropical seas to polar oceans, and find that general trends in species responses are consistent with expectations from climate change, including poleward and deeper distributional shifts, advances in spring phenology, declines in calcification and increases in the abundance of warm water species.
Abstract: Climate change is driving changes in the physical and chemical properties of the ocean that have consequences for marine ecosystems. Here, we review evidence for the responses of marine life to recent climate change across ocean regions, from tropical seas to polar oceans. We consider observed changes in calcification rates, demography, abundance, distribution and phenology of marine species. We draw on a database of observed climate change impacts on marine species, supplemented with evidence in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. We discuss factors that limit or facilitate species’ responses, such as fishing pressure, the availability of prey, habitat, light and other resources, and dispersal by ocean currents. We find that general trends in species responses are consistent with expectations from climate change, including poleward and deeper distributional shifts, advances in spring phenology, declines in calcification and increases in the abundance of warm-water species. The volume and type of evidence of species responses to climate change is variable across ocean regions and taxonomic groups, with much evidence derived from the heavily-studied north Atlantic Ocean. Most investigations of marine biological impacts of climate change are of the impacts of changing temperature, with few observations of effects of changing oxygen, wave climate, precipitation (coastal waters) or ocean acidification. Observations of species responses that have been linked to anthropogenic climate change are widespread, but are still lacking for some taxonomic groups (e.g., phytoplankton, benthic invertebrates, marine mammals).

Journal ArticleDOI
TL;DR: In this paper, a constitutive approach to the study of organizational contradictions, dialectics, paradoxes, and tensions is presented, highlighting five constitutive dimensions (i.e., discourse, developmental actions, socio-historical conditions, presence in multiples, and praxis).
Abstract: This article presents a constitutive approach to the study of organizational contradictions, dialectics, paradoxes, and tensions. In particular, it highlights five constitutive dimensions (i.e., discourse, developmental actions, socio-historical conditions, presence in multiples, and praxis) that appear across the literature in five metatheoretical traditions—process-based systems, structuration, critical, postmodern, and relational dialectics. In exploring these dimensions, it defines and distinguishes among key constructs, links research to process outcomes, and sets forth a typology of alternative ways of responding to organizational tensions. It concludes by challenging researchers to sharpen their focus on time in process studies, privilege emotion in relation to rationality, and explore the dialectic between order and disorder.

Journal ArticleDOI
R. Adam1, Peter A. R. Ade2, Nabila Aghanim3, M. I. R. Alves4  +281 moreInstitutions (69)
TL;DR: In this paper, the authors consider the problem of diffuse astrophysical component separation, processing the Planck sky maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps.
Abstract: Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.

Journal ArticleDOI
TL;DR: Epigenetic aging rates are significantly associated with sex, race/ethnicity, and to a lesser extent with CHD risk factors, but not with incident CHD outcomes.
Abstract: Epigenetic biomarkers of aging (the “epigenetic clock”) have the potential to address puzzling findings surrounding mortality rates and incidence of cardio-metabolic disease such as: (1) women consistently exhibiting lower mortality than men despite having higher levels of morbidity; (2) racial/ethnic groups having different mortality rates even after adjusting for socioeconomic differences; (3) the black/white mortality cross-over effect in late adulthood; and (4) Hispanics in the United States having a longer life expectancy than Caucasians despite having a higher burden of traditional cardio-metabolic risk factors. We analyzed blood, saliva, and brain samples from seven different racial/ethnic groups. We assessed the intrinsic epigenetic age acceleration of blood (independent of blood cell counts) and the extrinsic epigenetic aging rates of blood (dependent on blood cell counts and tracks the age of the immune system). In blood, Hispanics and Tsimane Amerindians have lower intrinsic but higher extrinsic epigenetic aging rates than Caucasians. African-Americans have lower extrinsic epigenetic aging rates than Caucasians and Hispanics but no differences were found for the intrinsic measure. Men have higher epigenetic aging rates than women in blood, saliva, and brain tissue. Epigenetic aging rates are significantly associated with sex, race/ethnicity, and to a lesser extent with CHD risk factors, but not with incident CHD outcomes. These results may help elucidate lower than expected mortality rates observed in Hispanics, older African-Americans, and women.
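The "epigenetic aging rate" measures in this literature are typically built from residuals: DNA-methylation (DNAm) age is regressed on chronological age, and a positive residual means faster-than-expected epigenetic aging. The sketch below illustrates that definition on made-up numbers; it is not the authors' data, and their intrinsic/extrinsic measures additionally adjust for blood cell composition.

import numpy as np

rng = np.random.default_rng(1)

# Made-up cohort: chronological ages and noisy "epigenetic clock" age estimates.
chron_age = rng.uniform(30, 80, size=200)
dnam_age = 0.95 * chron_age + 3.0 + rng.normal(0.0, 4.0, size=200)

# Epigenetic age acceleration = residual from regressing DNAm age on chronological age.
slope, intercept = np.polyfit(chron_age, dnam_age, deg=1)
age_accel = dnam_age - (slope * chron_age + intercept)

print("mean acceleration (~0 by construction): %.3f" % age_accel.mean())
print("first five individual accelerations:", np.round(age_accel[:5], 2))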

Posted Content
TL;DR: This paper examined whether the expansion of U.S. investment in foreign equities and the change in the composition of the foreign portfolio over time is consistent with standard models of international portfolio choice.
Abstract: Our research examines whether the expansion of U.S. investment in foreign equities and the change in the composition of the foreign portfolio over time is consistent with standard models of international portfolio choice. To answer this question, we combine data on cross-border transactions in foreign equities with data on equity returns. Our approach has three important advantages over previous tests of models of international portfolio choice. First, we bring data on asset prices as well as data on the quantities of assets purchased to bear in the empirical tests. Second, our testing procedure is robust to time-invariant hedge factors. This is because net purchases reflect changes in portfolio weights, and constant hedge factors that might explain home bias in levels do not affect the adjustment of the portfolio.

Journal ArticleDOI
TL;DR: This paper builds from a first-principles analysis of decentralized primary droop control to study centralized, decentralized, and distributed architectures for secondary frequency regulation, and finds that averaging-based distributed controllers using communication among the generation units offer the best combination of flexibility and performance.
Abstract: Modeled after the hierarchical control architecture of power transmission systems, a layering of primary, secondary, and tertiary control has become the standard operation paradigm for islanded microgrids. Despite this superficial similarity, the control objectives in microgrids across these three layers are varied and ambitious, and they must be achieved while allowing for robust plug-and-play operation and maximal flexibility, without hierarchical decision making and time-scale separations. In this paper, we explore control strategies for these three layers and illuminate some possibly unexpected connections and dependencies among them. Building from a first-principle analysis of decentralized primary droop control, we study centralized, decentralized, and distributed architectures for secondary frequency regulation. We find that averaging-based distributed controllers using communication among the generation units offer the best combination of flexibility and performance. We further leverage these results to study constrained ac economic dispatch in a tertiary control layer. Surprisingly, we show that the minimizers of the economic dispatch problem are in one-to-one correspondence with the set of steady states reachable by droop control. In other words, the adoption of droop control is necessary and sufficient to achieve economic optimization. This equivalence results in simple guidelines to select the droop coefficients, which include the known criteria for power sharing. We illustrate the performance and robustness of our designs through simulations.
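The primary droop law referred to above assigns each inverter a frequency that decreases linearly with its power output; in standard notation (an illustration of the textbook form, not the paper's exact statement),

ω_i = ω* − m_i (P_i − P_i*),

where ω* is the nominal frequency, P_i* the power setpoint, and m_i > 0 the droop coefficient. The familiar proportional power-sharing criterion then amounts to choosing the coefficients so that m_1 P̄_1 = m_2 P̄_2 = … = m_n P̄_n, with P̄_i the unit ratings; this is the kind of known power-sharing criterion the paper's guidelines for selecting droop coefficients recover.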

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud3, M. Ashdown4  +289 moreInstitutions (73)
TL;DR: The most significant measurement of the cosmic microwave background (CMB) lensing potential at a level of 40σ using temperature and polarization data from the Planck 2015 full-mission release was presented in this article.
Abstract: We present the most significant measurement of the cosmic microwave background (CMB) lensing potential to date (at a level of 40σ), using temperature and polarization data from the Planck 2015 full-mission release. Using a polarization-only estimator, we detect lensing at a significance of 5σ. We cross-check the accuracy of our measurement using the wide frequency coverage and complementarity of the temperature and polarization measurements. Public products based on this measurement include an estimate of the lensing potential over approximately 70% of the sky, an estimate of the lensing potential power spectrum in bandpowers for the multipole range 40 ≤ L ≤ 400, and an associated likelihood for cosmological parameter constraints. We find good agreement between our measurement of the lensing potential power spectrum and that found in the ΛCDM model that best fits the Planck temperature and polarization power spectra. Using the lensing likelihood alone we obtain a percent-level measurement of the parameter combination σ_8 Ω_m^0.25 = 0.591 ± 0.021. We combine our determination of the lensing potential with the E-mode polarization, also measured by Planck, to generate an estimate of the lensing B-mode. We show that this lensing B-mode estimate is correlated with the B-modes observed directly by Planck at the expected level and with a statistical significance of 10σ, confirming Planck’s sensitivity to this known sky signal. We also correlate our lensing potential estimate with the large-scale temperature anisotropies, detecting a cross-correlation at the 3σ level, as expected because of dark energy in the concordance ΛCDM model.
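As a quick illustration, combining the quoted parameter combination with the matter density from the companion parameters paper listed above (Ω_m ≈ 0.308, a value imported from that abstract rather than from this one) gives

σ_8 ≈ 0.591 / 0.308^0.25 ≈ 0.591 / 0.745 ≈ 0.79.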

Journal ArticleDOI
TL;DR: This research reviews metallic materials, which are fundamental to advanced aircraft engines, and shows how emerging computational, experimental and processing innovations are expanding the scope for discovery and implementation of new metallic materials in future generations of advanced propulsion systems.
Abstract: Metallic materials are fundamental to advanced aircraft engines. While perceived as mature, emerging computational, experimental and processing innovations are expanding the scope for discovery and implementation of new metallic materials for future generations of advanced propulsion systems.

Journal ArticleDOI
TL;DR: In this tutorial review, an up-to-date summary of recent progress in CADA reactions of phenol and aniline derivatives is presented.
Abstract: Phenols are widely used as starting materials in both industry and academia. Dearomatization reactions of phenols provide an efficient way to construct highly functionalized cyclohexadienones. The main challenge in making them asymmetric by catalytic methods is to control the selectivity while overcoming the loss of aromaticity. In this tutorial review, an up-to-date summary of recent progress in catalytic asymmetric dearomatization (CADA) reactions of phenol and aniline derivatives is presented.

Journal ArticleDOI
Guo Jie Li1, Kevin D. Hyde2, Kevin D. Hyde3, Kevin D. Hyde4  +161 moreInstitutions (45)
TL;DR: This paper is a compilation of notes on 113 fungal taxa, including 11 new genera, 89 new species, one new subspecies, three new combinations and seven reference specimens, representing a wide taxonomic and geographic range.
Abstract: Notes on 113 fungal taxa are compiled in this paper, including 11 new genera, 89 new species, one new subspecies, three new combinations and seven reference specimens. A wide geographic and taxonomic range of fungal taxa is detailed. In the Ascomycota the new genera Angustospora (Testudinaceae), Camporesia (Xylariaceae), Clematidis, Crassiparies (Pleosporales genera incertae sedis), Farasanispora, Longiostiolum (Pleosporales genera incertae sedis), Multilocularia (Parabambusicolaceae), Neophaeocryptopus (Dothideaceae), Parameliola (Pleosporales genera incertae sedis), and Towyspora (Lentitheciaceae) are introduced. Newly introduced species are Angustospora nilensis, Aniptodera aquibella, Annulohypoxylon albidiscum, Astrocystis thailandica, Camporesia sambuci, Clematidis italica, Colletotrichum menispermi, C. quinquefoliae, Comoclathris pimpinellae, Crassiparies quadrisporus, Cytospora salicicola, Diatrype thailandica, Dothiorella rhamni, Durotheca macrostroma, Farasanispora avicenniae, Halorosellinia rhizophorae, Humicola koreana, Hypoxylon lilloi, Kirschsteiniothelia tectonae, Lindgomyces okinawaensis, Longiostiolum tectonae, Lophiostoma pseudoarmatisporum, Moelleriella phukhiaoensis, M. pongdueatensis, Mucoharknessia anthoxanthi, Multilocularia bambusae, Multiseptospora thysanolaenae, Neophaeocryptopus cytisi, Ocellularia arachchigei, O. ratnapurensis, Ochronectria thailandica, Ophiocordyceps karstii, Parameliola acaciae, P. dimocarpi, Parastagonospora cumpignensis, Pseudodidymosphaeria phlei, Polyplosphaeria thailandica, Pseudolachnella brevifusiformis, Psiloglonium macrosporum, Rhabdodiscus albodenticulatus, Rosellinia chiangmaiensis, Saccothecium rubi, Seimatosporium pseudocornii, S. pseudorosae, Sigarispora ononidis and Towyspora aestuari. New combinations are provided for Eutiarosporella dactylidis (sexual morph described and illustrated) and Pseudocamarosporium pini. Descriptions, illustrations and / or reference specimens are designated for Aposphaeria corallinolutea, Cryptovalsa ampelina, Dothiorella vidmadera, Ophiocordyceps formosana, Petrakia echinata, Phragmoporthe conformis and Pseudocamarosporium pini. The new species of Basidiomycota are Agaricus coccyginus, A. luteofibrillosus, Amanita atrobrunnea, A. digitosa, A. gleocystidiosa, A. pyriformis, A. strobilipes, Bondarzewia tibetica, Cortinarius albosericeus, C. badioflavidus, C. dentigratus, C. duboisensis, C. fragrantissimus, C. roseobasilis, C. vinaceobrunneus, C. vinaceogrisescens, C. wahkiacus, Cyanoboletus hymenoglutinosus, Fomitiporia atlantica, F. subtilissima, Ganoderma wuzhishanensis, Inonotus shoreicola, Lactifluus armeniacus, L. ramipilosus, Leccinum indoaurantiacum, Musumecia alpina, M. sardoa, Russula amethystina subsp. tengii and R. wangii. Descriptions, illustrations, notes and / or reference specimens are designated for Clarkeinda trachodes, Dentocorticium ussuricum, Galzinia longibasidia, Lentinus stuppeus and Leptocorticium tenellum. The other new genera, species and new combinations are Anaeromyces robustus, Neocallimastix californiae and Piromyces finnis from Neocallimastigomycota, Phytophthora estuarina, P. rhizophorae, Salispina, S. intermedia, S. lobata and S. spinosa from Oomycota, and Absidia stercoraria, Gongronella orasabula, Mortierella calciphila, Mucor caatinguensis, M. koreanus, M. merdicola and Rhizopus koreanus in Zygomycota.

Journal ArticleDOI
TL;DR: In this paper, 16 researchers, each a world-leading expert in their respective subfields, contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.
Abstract: Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet the modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfields, to contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.

Journal ArticleDOI
TL;DR: The results show that commonsense reforms to fishery management would dramatically improve overall fish abundance while increasing food security and profits, and that, with appropriate reforms, recovery can happen quickly.
Abstract: Data from 4,713 fisheries worldwide, representing 78% of global reported fish catch, are analyzed to estimate the status, trends, and benefits of alternative approaches to recovering depleted fisheries. For each fishery, we estimate current biological status and forecast the impacts of contrasting management regimes on catch, profit, and biomass of fish in the sea. We estimate unique recovery targets and trajectories for each fishery, calculate the year-by-year effects of alternative recovery approaches, and model how alternative institutional reforms affect recovery outcomes. Current status is highly heterogeneous: the median fishery is in poor health (overfished, with further overfishing occurring), although 32% of fisheries are in good biological, although not necessarily economic, condition. Our business-as-usual scenario projects further divergence and continued collapse for many of the world's fisheries. Applying sound management reforms to global fisheries in our dataset could generate annual increases exceeding 16 million metric tons (MMT) in catch, $53 billion in profit, and 619 MMT in biomass relative to business as usual. We also find that, with appropriate reforms, recovery can happen quickly, with the median fishery taking under 10 y to reach recovery targets. Our results show that commonsense reforms to fishery management would dramatically improve overall fish abundance while increasing food security and profits.