
Showing papers by "University of Delaware" published in 2021


Journal ArticleDOI
TL;DR: The workflows designed to enable researchers to interpret big data in microbiology can constrain the biological questions that can be asked; the community-led anvi'o platform is maturing into an open software ecosystem that reduces these constraints.
Abstract: Big data abound in microbiology, but the workflows designed to enable researchers to interpret data can constrain the biological questions that can be asked. Five years after anvi’o was first published, this community-led multi-omics platform is maturing into an open software ecosystem that reduces constraints in ‘omics data analyses.

220 citations


Journal ArticleDOI
02 Jul 2021-Science
TL;DR: The emergence of industrial adoption of recycling and upcycling approaches is encouraging, solidifying the critical role for these strategies in addressing the fate of plastics and driving advances in next-generation materials design.
Abstract: Plastics have revolutionized modern life, but have created a global waste crisis driven by our reliance and demand for low-cost, disposable materials. New approaches are vital to address challenges related to plastics waste heterogeneity, along with the property reductions induced by mechanical recycling. Chemical recycling and upcycling of polymers may enable circularity through separation strategies, chemistries that promote closed-loop recycling inherent to macromolecular design, and transformative processes that shift the life-cycle landscape. Polymer upcycling schemes may enable lower-energy pathways and minimal environmental impacts compared with traditional mechanical and chemical recycling. The emergence of industrial adoption of recycling and upcycling approaches is encouraging, solidifying the critical role for these strategies in addressing the fate of plastics and driving advances in next-generation materials design.

177 citations


Journal ArticleDOI
M. G. Aartsen1, Rasha Abbasi2, Markus Ackermann, Jenni Adams1 +440 more · Institutions (60)
TL;DR: In this article, the authors present an overview of a next-generation instrument, IceCube-Gen2, which will sharpen our understanding of the processes and environments that govern the Universe at the highest energies.
Abstract: The observation of electromagnetic radiation from radio to γ-ray wavelengths has provided a wealth of information about the Universe. However, at PeV (10^15 eV) energies and above, most of the Universe is impenetrable to photons. New messengers, namely cosmic neutrinos, are needed to explore the most extreme environments of the Universe where black holes, neutron stars, and stellar explosions transform gravitational energy into non-thermal cosmic rays. These energetic particles have millions of times higher energies than those produced in the most powerful particle accelerators on Earth. As neutrinos can escape from regions otherwise opaque to radiation, they allow a unique view deep into exploding stars and the vicinity of the event horizons of black holes. The discovery of cosmic neutrinos with IceCube has opened this new window on the Universe. IceCube has been successful in finding first evidence for cosmic particle acceleration in the jet of an active galactic nucleus. Yet, ultimately, its sensitivity is too limited to detect even the brightest neutrino sources with high significance, or to detect populations of less luminous sources. In this white paper, we present an overview of a next-generation instrument, IceCube-Gen2, which will sharpen our understanding of the processes and environments that govern the Universe at the highest energies. IceCube-Gen2 is designed to: (a) resolve the high-energy neutrino sky from TeV to EeV energies; (b) investigate cosmic particle acceleration through multi-messenger observations; (c) reveal the sources and propagation of the highest energy particles in the Universe; and (d) probe fundamental physics with high-energy neutrinos. IceCube-Gen2 will enhance the existing IceCube detector at the South Pole. It will increase the annual rate of observed cosmic neutrinos by a factor of ten compared to IceCube, and will be able to detect sources five times fainter than its predecessor.
Furthermore, through the addition of a radio array, IceCube-Gen2 will extend the energy range by several orders of magnitude compared to IceCube. Construction will take 8 years and cost about $350M. The goal is to have IceCube-Gen2 fully operational by 2033. IceCube-Gen2 will play an essential role in shaping the new era of multi-messenger astronomy, fundamentally advancing our knowledge of the high-energy Universe. This challenging mission can be fully addressed only through the combination of the information from the neutrino, electromagnetic, and gravitational wave emission of high-energy sources, in concert with the new survey instruments across the electromagnetic spectrum and gravitational wave detectors which will be available in the coming years.

172 citations


Journal ArticleDOI
20 Oct 2021-Nature
TL;DR: In this article, the authors proposed a method for achieving high performance solid polymer ion conductors by engineering of molecular channels, which enables fast transport of Li+ ions along the polymer chains.
Abstract: Although solid-state lithium (Li)-metal batteries promise both high energy density and safety, existing solid ion conductors fail to satisfy the rigorous requirements of battery operations. Inorganic ion conductors allow fast ion transport, but their rigid and brittle nature prevents good interfacial contact with electrodes. Conversely, polymer ion conductors that are Li-metal-stable usually provide better interfacial compatibility and mechanical tolerance, but typically suffer from inferior ionic conductivity owing to the coupling of the ion transport with the motion of the polymer chains1–3. Here we report a general strategy for achieving high-performance solid polymer ion conductors by engineering of molecular channels. Through the coordination of copper ions (Cu2+) with one-dimensional cellulose nanofibrils, we show that the opening of molecular channels within the normally ion-insulating cellulose enables rapid transport of Li+ ions along the polymer chains. In addition to high Li+ conductivity (1.5 × 10−3 siemens per centimetre at room temperature along the molecular chain direction), the Cu2+-coordinated cellulose ion conductor also exhibits a high transference number (0.78, compared with 0.2–0.5 in other polymers2) and a wide window of electrochemical stability (0–4.5 volts) that can accommodate both the Li-metal anode and high-voltage cathodes. This one-dimensional ion conductor also allows ion percolation in thick LiFePO4 solid-state cathodes for application in batteries with a high energy density. Furthermore, we have verified the universality of this molecular-channel engineering approach with other polymers and cations, achieving similarly high conductivities, with implications that could go beyond safe, high-performance solid-state batteries. 
By coordinating copper ions with the oxygen-containing groups of cellulose nanofibrils, the molecular spacing in the nanofibrils is increased, allowing fast transport of lithium ions and offering hopes for solid-state batteries.

172 citations


Journal ArticleDOI
TL;DR: In this article, a three-phase implementation plan for hydrogen into the industrial sector as a chemical feedstock, the transportation sector for long-range, heavy-duty vehicles, the buildings sector for heat, and the power sector for seasonal storage is presented.
Abstract: A hydrogen economy has long been promoted as a ground-breaking aspect of a low-carbon future. However, there is little consensus on what this future entails, with some overly concerned about lack of demand and others disregarding hydrogen’s limitations. Here, we fill the need for a comprehensive definition of the ‘hydrogen economy’ and illustrate a vision in which hydrogen will primarily be used for decarbonization where no alternative exists. We propose a three-phase implementation plan for hydrogen into the industrial sector as a chemical feedstock, the transportation sector for long-range, heavy-duty vehicles, the buildings sector for heat, and the power sector for seasonal storage. We find that hydrogen will not be the largest energy economy, but with a projected need of 2.3 Gt H2 annually, it can decarbonize around 18% of energy-related sectors. In the long-term, hydrogen can complement renewable electricity and be the keystone to a 100% renewable future.

162 citations



Journal ArticleDOI
TL;DR: In this paper, an integrated photothermal-photocatalytic biphase system was proposed, which significantly reduces the interface barrier and drastically lowers the transport resistance of the hydrogen gas by nearly two orders of magnitude.
Abstract: Solar-driven hydrogen production from water using particulate photocatalysts is considered the most economical and effective approach to produce hydrogen fuel with little environmental concern. However, the efficiency of hydrogen production from water in particulate photocatalysis systems is still low. Here, we propose an efficient biphase photocatalytic system composed of integrated photothermal–photocatalytic materials that use charred wood substrates to convert liquid water to water steam, simultaneously splitting hydrogen under light illumination without additional energy. The photothermal–photocatalytic system exhibits biphase interfaces of photothermally-generated steam/photocatalyst/hydrogen, which significantly reduce the interface barrier and drastically lower the transport resistance of the hydrogen gas by nearly two orders of magnitude. In this work, an impressive hydrogen production rate up to 220.74 μmol h−1 cm−2 in the particulate photocatalytic systems has been achieved based on the wood/CoO system, demonstrating that the photothermal–photocatalytic biphase system is cost-effective and greatly advantageous for practical applications. The solar-driven H2 production from water by particulate photocatalysts is an effective approach to produce H2 fuel. Here, the authors propose an integrated photothermal–photocatalytic biphase system, which lowers the reaction barrier and the delivery resistance of the H2, boosting the catalytic H2 evolution rate.

147 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report a direct method to selectively convert polyolefins to branched, liquid fuels including diesel-, jet-, and gasoline-range hydrocarbons, with yields up to 85% over Pt/WO3/ZrO2 and HY zeolite in hydrogen at temperatures as low as 225°C.
Abstract: Single-use plastics impose an enormous environmental threat, but their recycling, especially of polyolefins, has been proven challenging. We report a direct method to selectively convert polyolefins to branched, liquid fuels including diesel, jet, and gasoline-range hydrocarbons, with high yield up to 85% over Pt/WO3/ZrO2 and HY zeolite in hydrogen at temperatures as low as 225°C. The process proceeds via tandem catalysis with initial activation of the polymer primarily over Pt, with subsequent cracking over the acid sites of WO3/ZrO2 and HY zeolite, isomerization over WO3/ZrO2 sites, and hydrogenation of olefin intermediates over Pt. The process can be tuned to convert different common plastic wastes, including low- and high-density polyethylene, polypropylene, polystyrene, everyday polyethylene bottles and bags, and composite plastics to desirable fuels and light lubricants.

146 citations


Journal ArticleDOI
12 Jul 2021
TL;DR: In this article, the authors provide a comprehensive techno-economic assessment of four major products and prioritizes the technological development with systematic guidelines to facilitate the market deployment of low-temperature CO2 electrolysis.
Abstract: Low-temperature CO2 electrolysis represents a potential enabling process in the production of renewable chemicals and fuels, notably carbon monoxide, formic acid, ethylene and ethanol. Because this technology has progressed rapidly in recent years, a systematic techno-economic assessment has become necessary to evaluate its feasibility as a CO2 utilization approach. Here this work provides a comprehensive techno-economic assessment of four major products and prioritizes the technological development with systematic guidelines to facilitate the market deployment of low-temperature CO2 electrolysis. First, we survey state-of-the-art electrolyser performance and parameterize figures of merit. The analysis shows that production costs of carbon monoxide and formic acid (C1 products) are approaching US$0.44 and 0.59 kg–1, respectively, competitive with conventional processes. In comparison, the production of ethylene and ethanol (C2 products) is not immediately feasible due to their substantially higher costs of US$2.50 and 2.06 kg–1, respectively. We then provide a detailed roadmap to making C2 product production economically viable: an improvement in energetic efficiency to ~50% and a reduction in electricity price to US$0.01 kWh–1. We also propose industrially relevant benchmarks: 5-year stability of electrolyser components and the single-pass conversion of 30 and 15% for C1 and C2 products, respectively. Finally we discuss the economic aspects of two potential strategies to address electrolyte neutralization utilizing either an anion exchange membrane or bipolar membrane. Low-temperature CO2 electrolysis is a promising process for producing renewable chemicals and fuels. This work provides a systematic techno-economic assessment of four major products, prioritizing technological development, and proposes guidelines to facilitate market adoption.
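As a rough illustration of how the electricity-price and efficiency levers above interact, the electricity share of production cost scales as energy demand per kilogram divided by energetic efficiency, times the electricity price. This is a back-of-the-envelope sketch, not the paper's techno-economic model; the per-product energy figures are approximate assumptions for illustration only.

```python
# Simplified sensitivity sketch (not the paper's model): the energy-demand
# values below are rough, assumed thermodynamic-scale figures per kg of product.
E_MIN_KWH_PER_KG = {
    "CO": 2.6,        # assumption, approximate
    "ethylene": 13.2,  # assumption, approximate
}

def electricity_cost_per_kg(product: str, efficiency: float,
                            price_usd_per_kwh: float) -> float:
    """Electricity share of production cost, in USD per kg of product."""
    return E_MIN_KWH_PER_KG[product] / efficiency * price_usd_per_kwh

# Compare an illustrative baseline with the roadmap's targets
# (~50% energetic efficiency, US$0.01 per kWh).
for product in ("CO", "ethylene"):
    base = electricity_cost_per_kg(product, 0.30, 0.05)
    target = electricity_cost_per_kg(product, 0.50, 0.01)
    print(f"{product}: {base:.2f} -> {target:.2f} USD/kg (electricity only)")
```

At the roadmap's target conditions the electricity share for the C2 product drops well below the quoted US$2.50 kg–1 production cost, consistent with the direction of the analysis.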

138 citations


Journal ArticleDOI
Lawrence Berkeley National Laboratory1, National University of Singapore2, Stanford University3, National Ecological Observatory Network4, University of Wisconsin-Madison5, Oak Ridge National Laboratory6, McMaster University7, University of Nebraska–Lincoln8, University of California, Berkeley9, Agricultural Research Service10, University of British Columbia11, University of Colorado Boulder12, Ohio State University13, University of Florida14, University of Guelph15, University of Kansas16, Michigan State University17, Pacific Northwest National Laboratory18, United States Department of Agriculture19, University of New Mexico20, National Research Council21, Marine Biological Laboratory22, University of Alberta23, Virginia Commonwealth University24, University of Minnesota25, Université de Montréal26, Dalhousie University27, Carleton University28, Shinshu University29, Japan Agency for Marine-Earth Science and Technology30, Northern Arizona University31, Oregon State University32, Yale University33, Washington State University34, Harvard University35, Texas A&M University36, Indiana University37, Florida International University38, San Diego State University39, California State University, East Bay40, Wayne State University41, University of Sydney42, Wilfrid Laurier University43, University of Alabama44, Environment Canada45, United States Geological Survey46, Argonne National Laboratory47, Osaka Prefecture University48, University of Delaware49, University of Missouri50, University of Sheffield51
TL;DR: In this article, the authors evaluate the representativeness of flux footprints and evaluate potential biases as a consequence of the footprint-to-target-area mismatch, which can be used as a guide to identify site-periods suitable for specific applications.

137 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: This paper performed a scoping review of the literature to examine entry points for environmental variability along the food supply chain, the evidence of propagation or attenuation of this variability, and the food items and types of shock that have been studied.
Abstract: Environmental variability and shock events can be propagated or attenuated along food supply chains by various economic, political and infrastructural factors. Understanding these processes is central to reducing risks associated with periodic food shortages, price spikes and reductions in food quality. Here we perform a scoping review of the literature to examine entry points for environmental variability along the food supply chain, the evidence of propagation or attenuation of this variability, and the food items and types of shock that have been studied. We find that research on food supply shocks has primarily focused on maize, rice and wheat, on agricultural production and on extreme rainfall and temperatures—indicating the need to expand research into the full food basket, diverse sources of environmental variability and the links connecting food production to consumption and nutrition. Insights from this new knowledge can inform key responses—at the level of an individual (for example, substituting foods), a company (for example, switching sources) or a government (for example, strategic reserves)—for coping with disruptions. Understanding the propagation or attenuation of environmental variability and shocks along food supply chains is key to food security. This scoping review identifies entry points for variability, the main factors for variability diffusion, research gaps in terms of food items and types of shock studied, and risk reduction responses at individual, company and governmental levels.

Journal ArticleDOI
TL;DR: This review will offer a comprehensive summary and a detailed discussion of the significant progress and breakthroughs in the development of ZIBs, aiming to provide insightful design principles for future research activities from a fundamental perspective.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the synthesis of melanin materials with a special focus beyond polydopamine, the conventional form of synthetic eumelanin that has dominated the literature on melanin-based materials.
Abstract: Melanin is ubiquitous in living organisms across different biological kingdoms of life, making it an important, natural biomaterial. Its presence in nature from microorganisms to higher animals and plants is attributed to the many functions of melanin, including pigmentation, radical scavenging, radiation protection, and thermal regulation. Generally, melanin is classified into five types: eumelanin, pheomelanin, neuromelanin, allomelanin, and pyomelanin, based on the various chemical precursors used in their biosynthesis. Despite its long history of study, the exact chemical makeup of melanin remains unclear, and it moreover has an inherent diversity and complexity of chemical structure, likely including many functions and properties that remain to be identified. Synthetic mimics have begun to play a broader role in unraveling structure and function relationships of natural melanins. In the past decade, polydopamine, which has served as the conventional form of synthetic eumelanin, has dominated the literature on melanin-based materials, while the synthetic analogues of other melanins have received far less attention. In this Perspective, we will discuss the synthesis of melanin materials with a special focus beyond polydopamine. We will emphasize efforts to elucidate biosynthetic pathways and structural characterization approaches that can be harnessed to interrogate specific structure-function relationships, including electron paramagnetic resonance (EPR) and solid-state nuclear magnetic resonance (ssNMR) spectroscopy. We believe that this timely Perspective will introduce this class of biopolymer to the broader chemistry community, where we hope to stimulate new opportunities in novel, melanin-based poly-functional synthetic materials.

Journal ArticleDOI
TL;DR: In this article, the authors present a systematic and comprehensive global stocktake of implemented human adaptation to climate change and identify eight priorities for global adaptation research: assess the effectiveness of adaptation responses, enhance the understanding of limits to adaptation, enable individuals and civil society to adapt, include missing places, scholars and scholarship, understand private sector responses, improve methods for synthesizing different forms of evidence, assess the adaptation at different temperature thresholds, and improve the inclusion of timescale and the dynamics of responses.
Abstract: Assessing global progress on human adaptation to climate change is an urgent priority. Although the literature on adaptation to climate change is rapidly expanding, little is known about the actual extent of implementation. We systematically screened >48,000 articles using machine learning methods and a global network of 126 researchers. Our synthesis of the resulting 1,682 articles presents a systematic and comprehensive global stocktake of implemented human adaptation to climate change. Documented adaptations were largely fragmented, local and incremental, with limited evidence of transformational adaptation and negligible evidence of risk reduction outcomes. We identify eight priorities for global adaptation research: assess the effectiveness of adaptation responses, enhance the understanding of limits to adaptation, enable individuals and civil society to adapt, include missing places, scholars and scholarship, understand private sector responses, improve methods for synthesizing different forms of evidence, assess the adaptation at different temperature thresholds, and improve the inclusion of timescale and the dynamics of responses. Determining progress in adaptation to climate change is challenging, yet critical as climate change impacts increase. A stocktake of the scientific literature on implemented adaptation now shows that adaptation is mostly fragmented and incremental, with evidence lacking for its impact on reducing risk.

Journal ArticleDOI
TL;DR: This research explores whether, and when, in-sample measures such as the model selection criteria can substitute for out-of-sample criteria that require a holdout sample, and recommends against using the standard PLS-PM criteria, and specifically the out-of-sample mean absolute percentage error (MAPE), for prediction-oriented model selection purposes.
Abstract: Partial least squares path modeling (PLS-PM) has become popular in various disciplines to model structural relationships among latent variables measured by manifest variables. To fully benefit from the predictive capabilities of PLS-PM, researchers must understand the efficacy of the predictive metrics used. In this research, we compare the performance of standard PLS-PM criteria and model selection criteria derived from information theory, in terms of selecting the best predictive model among a cohort of competing models. We use Monte Carlo simulation to study this question under various sample sizes, effect sizes, item loadings, and model setups. Specifically, we explore whether, and when, in-sample measures such as the model selection criteria can substitute for out-of-sample criteria that require a holdout sample. Such a substitution is advantageous when creating a holdout causes considerable loss of statistical and predictive power due to an overall small sample. We find that when the researcher does not have the luxury of a holdout sample, and the goal is selecting correctly specified models with low prediction error, the in-sample model selection criteria, in particular the Bayesian information criterion (BIC) and the Geweke-Meese criterion (GM), are useful substitutes for out-of-sample criteria. When a holdout sample is available, the best-performing out-of-sample criteria include the root mean squared error (RMSE) and mean absolute deviation (MAD). We further recommend against using the standard PLS-PM criteria (R², adjusted R², and Q²), and specifically the out-of-sample mean absolute percentage error (MAPE), for prediction-oriented model selection purposes. Finally, we illustrate the model selection criteria's practical utility using a well-known corporate reputation model.
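The trade-off between an in-sample criterion like BIC and an out-of-sample criterion like holdout RMSE can be sketched on a toy linear regression. This is a hypothetical stand-in for the paper's PLS-PM Monte Carlo study: the candidate models, data-generating process, and split are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x1 and x2; x3 is pure noise (a stand-in for a
# misspecified, over-parameterized candidate model).
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def fit_ols(Xs, ys):
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

def bic(Xs, ys):
    # In-sample criterion: n*log(RSS/n) + k*log(n)
    beta = fit_ols(Xs, ys)
    rss = np.sum((ys - Xs @ beta) ** 2)
    return len(ys) * np.log(rss / len(ys)) + Xs.shape[1] * np.log(len(ys))

def holdout_rmse(Xtr, ytr, Xte, yte):
    # Out-of-sample criterion: fit on the training split, score on holdout
    beta = fit_ols(Xtr, ytr)
    return np.sqrt(np.mean((yte - Xte @ beta) ** 2))

# Candidate models: subsets of predictors
models = {"x1": [0], "x1+x2": [0, 1], "x1+x2+x3": [0, 1, 2]}

split = 150  # 150 training / 50 holdout observations
scores = {}
for name, cols in models.items():
    Xm = X[:, cols]
    scores[name] = (bic(Xm, y),
                    holdout_rmse(Xm[:split], y[:split], Xm[split:], y[split:]))

best_bic = min(scores, key=lambda m: scores[m][0])
best_rmse = min(scores, key=lambda m: scores[m][1])
print(best_bic, best_rmse)
```

The point of the sketch: BIC needs no holdout sample, yet both criteria penalize the underspecified model, mirroring the substitution question the paper studies.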

Journal ArticleDOI
TL;DR: The authors present a workflow, used to quantify the data in Shihan et al. (Matrix Biology, 2020), for measuring the relative levels of a molecule of interest via mean fluorescent intensity across a region of interest, cell number, and the percentage of cells in a sample positive for staining with the fluorescent probe of interest.
Abstract: Western blotting (WB), enzyme-linked immunosorbent assay (ELISA) and flow cytometry (FC) have long been used to assess and quantitate relative protein expression in cultured cells and tissue samples. However, WB and ELISA have limited ability to meaningfully quantitate relative protein levels in tissues with complex cell composition, while tissue dissociation followed by FC is not feasible when tissue is limiting and/or cells difficult to isolate. While protein detection in tissue using immunofluorescent (IF) probes has traditionally been considered a qualitative technique, advances in probe stability and confocal imaging allow IF data to be easily quantitated, although reproducible quantitation of relative protein expression requires careful attention to appropriate controls, experiment design, and data collection. Here we describe the methods used to quantify the data presented in Shihan et al. Matrix Biology, 2020 which lays out a workflow where IF data collected on a confocal microscope can be used to quantitate the relative levels of a molecule of interest by measuring mean fluorescent intensity across a region of interest, cell number, and the percentage of cells in a sample "positive" for staining with the fluorescent probe of interest. Overall, this manuscript discusses considerations for collecting quantifiable fluorescent images on a confocal microscope and provides explicit methods for quantitating IF data using FIJI-ImageJ.
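The three quantities the workflow extracts (mean fluorescent intensity over a region of interest, cell number, and percent "positive" cells) can be sketched with plain arrays. This is a minimal sketch with synthetic data standing in for a real confocal image and FIJI-ImageJ segmentation output; the image values, label layout, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for a confocal channel and a segmentation label mask
# (in practice these come from FIJI-ImageJ: intensity image + cell labels).
image = rng.uniform(0, 255, size=(64, 64))
labels = np.zeros((64, 64), dtype=int)
labels[5:15, 5:15] = 1      # cell 1
labels[30:40, 30:40] = 2    # cell 2
labels[50:60, 10:20] = 3    # cell 3

roi = labels > 0  # region of interest = all segmented cells

# Mean fluorescent intensity across the ROI
mfi = image[roi].mean()

# Per-cell mean intensities, cell number, and percent "positive" cells.
# The positivity threshold is experiment-specific; 120 is arbitrary here.
cell_ids = np.unique(labels[labels > 0])
cell_means = np.array([image[labels == c].mean() for c in cell_ids])
threshold = 120.0
pct_positive = 100.0 * np.mean(cell_means > threshold)

print(f"MFI={mfi:.1f}, cells={len(cell_ids)}, positive={pct_positive:.0f}%")
```

With real data, the same logic applies per image; the controls and consistent acquisition settings the abstract emphasizes are what make the resulting numbers comparable across samples.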

Journal ArticleDOI
TL;DR: The integration of metal-organic frameworks (MOFs) with polymer fibres enables the formation of fibrous composite materials with advantages over traditional single-component polymer films and mixed-matrix membranes.
Abstract: The integration of metal–organic frameworks (MOFs) with polymer fibres enables the formation of fibrous composite materials with advantages over traditional single-component polymer films and mixed-matrix membranes. In comparison with mixed-matrix membranes, MOF–polymer fibrous composites offer improved molecular transport through the material and easier access to the active sites of MOFs. These attributes make fibrous composites appealing for clothing, personal protective equipment, air purification and filtration, biomedical equipment and delivery of therapeutics, along with detection and sensing applications. In this Review, we outline approaches for the incorporation of MOFs into, or onto, polymer fibres and present some applications for MOF–polymer fabrics. The integration of MOFs and polymers can either occur prior to fibre formation (namely, MOF-first), via particle deposition (resulting in either covalent or non-covalent attachment) of MOFs to the fibre or by in situ MOF growth after fibre formation (namely, fibre-first). We focus on the structure–processing–activity relationships — for example, MOF loading, MOF crystal size, polymer concentration and processing parameters — that impact the behaviour of fibrous composites. We conclude with a discussion of research avenues that can advance this burgeoning field. Composites made from metal–organic frameworks and polymer fibres are gaining popularity in many applications because of their tailorable morphologies and properties. This Review summarizes various methods for fabricating these composites, explores structure–processing–activity relationships and discusses future research opportunities.

Journal ArticleDOI
TL;DR: This paper examined the effects of the COVID-19 pandemic on student learning in seven intermediate economics courses and found substantial heterogeneity in learning outcomes across courses and across student characteristics, including gender, race, and first-generation status.

Journal ArticleDOI
TL;DR: The COVID-19 pandemic has the potential to affect the human microbiome in infected and uninfected individuals, having a substantial impact on human health over the long term.
Abstract: The COVID-19 pandemic has the potential to affect the human microbiome in infected and uninfected individuals, having a substantial impact on human health over the long term. This pandemic intersects with a decades-long decline in microbial diversity and ancestral microbes due to hygiene, antibiotics, and urban living (the hygiene hypothesis). High-risk groups succumbing to COVID-19 include those with preexisting conditions, such as diabetes and obesity, which are also associated with microbiome abnormalities. Current pandemic control measures and practices will have broad, uneven, and potentially long-term effects for the human microbiome across the planet, given the implementation of physical separation, extensive hygiene, travel barriers, and other measures that influence overall microbial loss and inability for reinoculation. Although much remains uncertain or unknown about the virus and its consequences, implementing pandemic control practices could significantly affect the microbiome. In this Perspective, we explore many facets of COVID-19-induced societal changes and their possible effects on the microbiome, and discuss current and future challenges regarding the interplay between this pandemic and the microbiome. Recent recognition of the microbiome's influence on human health makes it critical to consider both how the microbiome, shaped by biosocial processes, affects susceptibility to the coronavirus and, conversely, how COVID-19 disease and prevention measures may affect the microbiome. This knowledge may prove key in prevention and treatment, and long-term biological and social outcomes of this pandemic.

Journal ArticleDOI
TL;DR: The practical and scientific argument in support of a Fusarium that includes the FSSC and several other basal lineages is reasserted, consistent with the longstanding use of this name among plant pathologists, medical mycologists, quarantine officials, regulatory agencies, students and researchers with a stake in its taxonomy.
Abstract: Scientific communication is facilitated by a data-driven, scientifically sound taxonomy that considers the end-user's needs and established successful practice. Previously (Geiser et al. 2013; Phytopathology 103:400-408. 2013), the Fusarium community voiced near unanimous support for a concept of Fusarium that represented a clade comprising all agriculturally and clinically important Fusarium species, including the F. solani Species Complex (FSSC). Subsequently, this concept was challenged by one research group (Lombard et al. 2015 Studies in Mycology 80: 189-245) who proposed dividing Fusarium into seven genera, including the FSSC as the genus Neocosmospora, with subsequent justification based on claims that the Geiser et al. (2013) concept of Fusarium is polyphyletic (Sandoval-Denis et al. 2018; Persoonia 41:109-129). Here we test this claim, and provide a phylogeny based on exonic nucleotide sequences of 19 orthologous protein-coding genes that strongly support the monophyly of Fusarium including the FSSC. We reassert the practical and scientific argument in support of a Fusarium that includes the FSSC and several other basal lineages, consistent with the longstanding use of this name among plant pathologists, medical mycologists, quarantine officials, regulatory agencies, students and researchers with a stake in its taxonomy. In recognition of this monophyly, 40 species recently described as Neocosmospora were recombined in Fusarium, and nine others were renamed Fusarium. Here the global Fusarium community voices strong support for the inclusion of the FSSC in Fusarium, as it remains the best scientific, nomenclatural and practical taxonomic option available.

Journal ArticleDOI
18 Jan 2021
TL;DR: In this article, the authors outline recent ideas on the potential use of a range of solid-state mechanical sensing technologies to aid the search for dark matter across a number of energy scales and with a variety of coupling mechanisms.
Abstract: Numerous astrophysical and cosmological observations are best explained by the existence of dark matter, a mass density which interacts only very weakly with visible, baryonic matter. Searching for the extremely weak signals produced by this dark matter strongly motivates the development of new, ultra-sensitive detector technologies. Paradigmatic advances in the control and readout of massive mechanical systems, in both the classical and quantum regimes, have enabled unprecedented levels of sensitivity. In this overview paper, we outline recent ideas on the potential use of a range of solid-state mechanical sensing technologies to aid the search for dark matter across a number of energy scales and with a variety of coupling mechanisms.
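The sensitivity claim has a standard figure of merit behind it: the thermal (Brownian) force-noise floor of a mechanical resonator, √S_F = √(4·k_B·T·m·ω₀/Q). A back-of-envelope sketch; all parameter values are illustrative, not taken from the paper:

```python
import math

# Thermal (Brownian) force-noise floor of a mechanical resonator,
#   sqrt(S_F) = sqrt(4 * kB * T * m * omega0 / Q)  [N / sqrt(Hz)],
# the textbook sensitivity benchmark for mechanical detectors.
# Parameter values are illustrative only.
kB = 1.380649e-23             # Boltzmann constant, J/K
T = 0.01                      # bath temperature: 10 mK cryogenic operation
m = 1e-9                      # test mass: 1 microgram
omega0 = 2 * math.pi * 1e4    # resonance frequency: 10 kHz
Q = 1e8                       # mechanical quality factor
sqrt_SF = math.sqrt(4 * kB * T * m * omega0 / Q)
print(f"{sqrt_SF:.2e} N/sqrt(Hz)")   # on the order of 1e-19 N/sqrt(Hz)
```

Cooling (lower T) and high quality factors (higher Q) directly suppress this floor, which is why quantum-regime control of massive resonators matters for weak-signal searches.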

Journal ArticleDOI
TL;DR: This article proposes a clustering algorithm based on an improved $K$-means method to divide IoT devices into several groups so that the number of devices in each group is roughly the same, and designs a modified-Hungarian-based dynamic many-to-many matching (HD4M) algorithm for assigning subchannels to IoT devices, which can efficiently mitigate the interference.
Abstract: As the commercial launch of fifth-generation (5G) wireless communications approaches, the trend from the Internet of Things (IoT) to the Internet of Everything (IoE) is emerging. Owing to their high mobility, high line-of-sight (LoS) probability, and low labor cost, unmanned aerial vehicles (UAVs) may play an important role in future IoT communication networks, e.g., for data collection in remote areas. In this article, we study the 3-D placement and resource allocation of multiple UAV-mounted base stations (BSs) in an uplink IoT network, where the balanced task load of the UAV-BSs, the limited channel resources, and signal interference are taken into consideration. In the considered system, the total transmission power of the IoT devices is minimized, subject to a signal-to-interference-plus-noise ratio (SINR) threshold for each device. First, aiming to balance the task load of each UAV, we propose a clustering algorithm based on an improved $K$-means method to divide IoT devices into several groups so that the number of devices in each group is roughly the same. Then, based on matching theory, a modified-Hungarian-based dynamic many-to-many matching (HD4M) algorithm is designed for assigning subchannels to IoT devices, which can efficiently mitigate the interference. Finally, we jointly optimize the transmission power of the IoT devices and the altitudes of the UAVs via an alternating iterative method. Simulation results show that the total transmission power decreases significantly after applying the proposed algorithms.
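The task-balancing step — partition devices so that group sizes are nearly equal — can be sketched as a capacity-constrained K-means variant. This is a toy stand-in under simplifying assumptions, not the authors' improved K-means or the HD4M matching stage:

```python
import numpy as np

def balanced_kmeans(points, k, iters=20, seed=0):
    """Capacity-constrained K-means toy: cluster 2-D device positions into
    k groups whose sizes differ by at most one (cap = ceil(n / k))."""
    rng = np.random.default_rng(seed)
    n = len(points)
    cap = -(-n // k)                                   # ceil(n / k)
    centroids = points[rng.choice(n, size=k, replace=False)].copy()
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        counts = np.zeros(k, dtype=int)
        # assign devices nearest-centroid-first, respecting capacity
        for i in np.argsort(dists.min(axis=1)):
            for c in np.argsort(dists[i]):
                if counts[c] < cap:
                    labels[i], counts[c] = c, counts[c] + 1
                    break
        for c in range(k):                             # recompute centroids
            members = points[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels

# 36 devices on a grid, 4 UAVs -> every UAV serves exactly 9 devices
pts = np.array([[x, y] for x in range(6) for y in range(6)], dtype=float)
lab = balanced_kmeans(pts, k=4)
print(np.bincount(lab, minlength=4))  # [9 9 9 9]
```

The per-cluster cap is what enforces the "roughly the same" group sizes; plain K-means offers no such guarantee.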


Posted Content
TL;DR: A framework that leverages contemporary image generators to render high-resolution videos, together with a new task, called cross-domain video synthesis, in which the image and motion generators are trained on disjoint datasets belonging to different domains.
Abstract: Image and video synthesis are closely related areas aiming at generating content from noise. While rapid progress has been demonstrated in improving image-based models to handle large resolutions, high-quality renderings, and wide variations in image content, achieving comparable video generation results remains problematic. We present a framework that leverages contemporary image generators to render high-resolution videos. We frame the video synthesis problem as discovering a trajectory in the latent space of a pre-trained and fixed image generator. Not only does such a framework render high-resolution videos, but it is also an order of magnitude more computationally efficient. We introduce a motion generator that discovers the desired trajectory, in which content and motion are disentangled. With such a representation, our framework allows for a broad range of applications, including content and motion manipulation. Furthermore, we introduce a new task, which we call cross-domain video synthesis, in which the image and motion generators are trained on disjoint datasets belonging to different domains. This allows for generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our methods over existing video generation techniques. Code will be released at https://github.com/snap-research/MoCoGAN-HD.
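The core framing — a video as a trajectory in the latent space of a fixed image generator, with content and motion disentangled — can be sketched in a few lines. The generator below is a random linear map standing in for a pre-trained network; nothing here reproduces MoCoGAN-HD itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained and *fixed* image generator (latent -> frame).
# A random linear map replaces the real network purely for illustration.
W = rng.normal(size=(64 * 64, 16))
def image_generator(z):
    return (W @ z).reshape(64, 64)          # one 64x64 "frame"

def motion_generator(z0, steps):
    """A trajectory in latent space: the content code z0 stays fixed,
    and motion enters only as small residual steps added on top of it."""
    traj = [z0]
    for _ in range(steps - 1):
        traj.append(traj[-1] + 0.1 * rng.normal(size=z0.shape))
    return np.stack(traj)

z0 = rng.normal(size=16)                    # content code (identity)
trajectory = motion_generator(z0, steps=8)  # motion: 8 latent codes
video = np.stack([image_generator(z) for z in trajectory])
print(video.shape)  # (8, 64, 64)
```

Because the generator is never retrained, swapping the motion model or the starting code z0 manipulates motion and content independently — the disentanglement the abstract describes.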

Journal ArticleDOI
TL;DR: In this article, the PtIrZn/CeO2-ZIF-8 catalyst was applied to a direct ammonia fuel cell (DAFC) to achieve a peak power density of 91 mW cm−2.
Abstract: Low-temperature direct ammonia fuel cells (DAFCs) use carbon-neutral ammonia as a fuel, which has attracted increasing attention recently due to ammonia's low source-to-tank energy cost, easy transport and storage, and wide availability. However, current DAFC technologies are greatly limited by the kinetically sluggish ammonia oxidation reaction (AOR) at the anode. Herein, we report an AOR catalyst, in which ternary PtIrZn nanoparticles with an average size of 2.3 ± 0.2 nm were highly dispersed on a binary composite support comprising cerium oxide (CeO2) and zeolitic imidazolate framework-8 (ZIF-8)-derived carbon (PtIrZn/CeO2-ZIF-8) through a sonochemical-assisted synthesis method. The PtIrZn alloy, with the aid of abundant OHad provided by CeO2 and uniform particle dispersibility contributed by porous ZIF-8 carbon (surface area: ∼600 m2 g−1), has shown highly efficient catalytic activity for the AOR in alkaline media, superior to that of commercial PtIr/C. The rotating disk electrode (RDE) results indicate a lower onset potential (0.35 vs. 0.43 V), relative to the reversible hydrogen electrode at room temperature, and a decreased activation energy (∼36.7 vs. 50.8 kJ mol−1) relative to the PtIr/C catalyst. Notably, the PtIrZn/CeO2-ZIF-8 catalyst was assembled with a high-performance hydroxide anion-exchange membrane to fabricate an alkaline DAFC, reaching a peak power density of 91 mW cm−2. Unlike in aqueous electrolytes, supports play a critical role in improving uniform ionomer distribution and mass transport in the anode. PtIrZn nanoparticles on silicon dioxide (SiO2) integrated with carboxyl-functionalized carbon nanotubes (CNT–COOH) were further studied as the anode in a DAFC. A significantly enhanced peak power density of 314 mW cm−2 was achieved. 
Density functional theory calculations elucidated that Zn atoms in the PtIr alloy can reduce the theoretical limiting potential of *NH2 dehydrogenation to *NH by ∼0.1 V, which can be attributed to a Zn-modulated upshift of the Pt–Ir d-band that facilitates the N–H bond breakage.
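To put the reported activation-energy drop (∼36.7 vs. 50.8 kJ mol−1) in perspective, the Arrhenius relation gives the implied room-temperature rate-constant ratio, assuming equal pre-exponential factors (a simplification for illustration, not a claim of the paper):

```python
import math

# Rate-constant ratio implied by the two activation energies via the
# Arrhenius relation k ~ A * exp(-Ea / (R * T)), assuming equal
# pre-exponential factors A -- a simplifying assumption.
R = 8.314           # gas constant, J mol^-1 K^-1
T = 298.15          # room temperature, K
ea_ptirzn = 36.7e3  # J mol^-1, PtIrZn/CeO2-ZIF-8
ea_ptir = 50.8e3    # J mol^-1, commercial PtIr/C
speedup = math.exp((ea_ptir - ea_ptirzn) / (R * T))
print(round(speedup))  # roughly a 300-fold intrinsic rate advantage
```

A ~14 kJ mol−1 drop in barrier thus translates into orders-of-magnitude faster intrinsic kinetics, consistent with the lower onset potential observed on the RDE.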

Journal ArticleDOI
TL;DR: In this paper, the authors investigate a communication system assisted by multiple UAV-mounted base stations (BSs), aiming to minimize the number of required UAVs and to improve the coverage rate by optimizing the three-dimensional (3D) positions of the UAVs, user clustering, and frequency band allocation.
Abstract: Recently, unmanned aerial vehicles (UAVs) have attracted considerable attention because of their high mobility and low cost. This article investigates a communication system assisted by multiple UAV-mounted base stations (BSs), aiming to minimize the number of required UAVs and to improve the coverage rate by optimizing the three-dimensional (3D) positions of the UAVs, user clustering, and frequency band allocation. Compared with existing works, the constraints of the required quality of service (QoS) and the service ability of each UAV are considered, which makes the problem more challenging. A three-step method is developed to solve the formulated mixed-integer programming problem. First, to ensure that each UAV can serve more users, the maximum service radius of the UAVs is derived from the minimum required received-signal power at the users. Second, an algorithm based on the artificial bee colony (ABC) algorithm is proposed to minimize the number of required UAVs. Third, the 3D position and the frequency band of each UAV are designed to increase the power of the target signals and to reduce the interference. Finally, simulation results are presented to demonstrate the superiority of the proposed solution for UAV-assisted communication systems.
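The first step — deriving a maximum service radius from the users' minimum received power — can be illustrated with a free-space path-loss model (the paper's channel model may differ; all numbers below are hypothetical):

```python
import math

def max_service_radius_m(p_tx_dbm, p_rx_min_dbm, f_hz):
    """Largest distance at which received power still meets the minimum,
    under free-space path loss:
        FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi / c).
    Solving FSPL <= p_tx - p_rx_min for d gives the service radius."""
    c = 3.0e8                                # speed of light, m/s
    budget_db = p_tx_dbm - p_rx_min_dbm      # allowable path loss
    log_d = (budget_db
             - 20 * math.log10(f_hz)
             - 20 * math.log10(4 * math.pi / c)) / 20
    return 10 ** log_d

# Hypothetical link: 30 dBm transmit power, -80 dBm sensitivity, 2 GHz
r = max_service_radius_m(30.0, -80.0, 2.0e9)
print(round(r))   # a few kilometres
```

With the radius fixed this way, covering all users with as few discs of that radius as possible becomes the placement problem the ABC step then attacks.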

Journal ArticleDOI
Ayan Acharyya, R. Adam, C. Adams, I. Agudo, and 453 more authors (104 institutions)
TL;DR: In this paper, the authors provide an updated assessment of the power of the Cherenkov Telescope Array (CTA) to search for thermally produced dark matter at the TeV scale via the associated gamma-ray signal from pair-annihilating dark matter particles in the region around the Galactic centre.
Abstract: We provide an updated assessment of the power of the Cherenkov Telescope Array (CTA) to search for thermally produced dark matter at the TeV scale, via the associated gamma-ray signal from pair-annihilating dark matter particles in the region around the Galactic centre. We find that CTA will open a new window of discovery potential, significantly extending the range of robustly testable models given a standard cuspy profile of the dark matter density distribution. Importantly, even for a cored profile, the projected sensitivity of CTA will be sufficient to probe various well-motivated models of thermally produced dark matter at the TeV scale. This is due to CTA's unprecedented sensitivity, angular and energy resolutions, and the planned observational strategy. The survey of the inner Galaxy will cover a much larger region than corresponding previous observational campaigns with imaging atmospheric Cherenkov telescopes. CTA will map with unprecedented precision the large-scale diffuse emission in high-energy gamma rays, constituting a background for dark matter searches for which we adopt state-of-the-art models based on current data. Throughout our analysis, we use up-to-date event reconstruction Monte Carlo tools developed by the CTA consortium, and pay special attention to quantifying the level of instrumental systematic uncertainties, as well as background template systematic errors, required to probe thermally produced dark matter at these energies.
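The signal CTA targets follows the standard annihilation-flux formula dΦ/dE = ⟨σv⟩/(8π m_χ²) · (dN/dE) · J. A sketch of the normalization with benchmark values (canonical thermal relic cross-section, a 1 TeV candidate, and an illustrative J-factor — none of these are CTA-consortium results):

```python
import math

# Flux normalization for prompt gamma rays from self-annihilating dark
# matter: dPhi/dE = <sigma v> / (8 * pi * m_chi**2) * (dN/dE) * J.
# All inputs are generic benchmarks, not CTA analysis values.
sigma_v = 3e-26      # cm^3 s^-1, canonical thermal relic cross-section
m_chi = 1000.0       # GeV (1 TeV dark-matter candidate)
J = 1e21             # GeV^2 cm^-5, hypothetical J-factor for the ROI
norm = sigma_v / (8 * math.pi * m_chi**2) * J   # cm^-2 s^-1
print(f"{norm:.1e}")  # multiplies the photon spectrum dN/dE [GeV^-1]
```

The 1/m_χ² scaling is why TeV-scale candidates need CTA-class sensitivity, and why the J-factor (hence the cuspy-vs-cored profile question in the abstract) dominates the uncertainty.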

Journal ArticleDOI
TL;DR: The findings highlight the multisystem nature of ASD, the need to recognize motor impairments as one of the diagnostic criteria or specifiers for ASD, and the need for appropriate motor screening and assessment of children with ASD.
Abstract: Eighty-seven percent of a large sample of children with autism spectrum disorder (ASD) are at risk for motor impairment (Bhat, Physical Therapy, 2020, 100, 633-644). In spite of the high prevalence of motor impairment in children with ASD, it is not considered among the diagnostic criteria or specifiers within the DSM-5. In this article, we analyzed the SPARK study dataset (n = 13,887) to examine associations between risk for motor impairment using the Developmental Coordination Disorder Questionnaire (DCD-Q), social communication impairment using the Social Communication Questionnaire (SCQ), repetitive behavior severity using the Repetitive Behaviors Scale-Revised (RBS-R), and parent-reported categories of cognitive, functional, and language impairments. Upon including children with ASD with cognitive impairments, 88.2% of the SPARK sample was at risk for motor impairment. The relative risk for motor impairment in children with ASD was 22.2 times greater compared to the general population, and that risk further increased, by up to a factor of 6.2, with increasing social communication (5.7), functional (6.2), cognitive (3.8), and language (1.6) impairments, as well as repetitive behavior severity (5.0). Additionally, the magnitude of risk for motor impairment (fine- and gross-motor) increased with increasing severity of all impairment types, with medium to large effects. These findings highlight the multisystem nature of ASD, the need to recognize motor impairments as one of the diagnostic criteria or specifiers for ASD, and the need for appropriate motor screening and assessment of children with ASD. Interventions must address not only the social communication and cognitive/behavioral challenges of children with ASD but also their motor function and participation. LAY ABSTRACT: Eighty-eight percent of the SPARK sample of children with ASD were at risk for motor impairment.
The relative risk for motor impairment was 22.2 times greater in children with ASD compared to the general population and the risk increased with more social communication, repetitive behavior, cognitive, and functional impairment. It is important to recognize motor impairments as one of the diagnostic criteria or specifiers for ASD and there is a need to administer appropriate motor screening, assessment, and interventions in children with ASD.
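The headline numbers also pin down the implied general-population rate: relative risk is P(at risk | ASD) / P(at risk | general population), so the baseline follows by rearranging (a back-of-envelope check; the general-population figure is not stated in the abstract itself):

```python
# Relative risk = P(motor risk | ASD) / P(motor risk | general population).
# The abstract reports P(motor risk | ASD) = 88.2% and RR = 22.2, so the
# implied baseline follows by rearranging. Back-of-envelope only.
p_asd = 0.882
rr = 22.2
p_general = p_asd / rr
print(f"{p_general:.1%}")   # implied general-population rate, about 4%
```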

Journal ArticleDOI
14 Oct 2021 - Nature
TL;DR: In this article, the authors show that the release of one ton of CO2 today is projected to reduce total future energy expenditures, with most estimates valued between −US$3 and −US$1, depending on discount rates.
Abstract: Estimates of global economic damage caused by carbon dioxide (CO2) emissions can inform climate policy1–3. The social cost of carbon (SCC) quantifies these damages by characterizing how additional CO2 emissions today impact future economic outcomes through altering the climate4–6. Previous estimates have suggested that large, warming-driven increases in energy expenditures could dominate the SCC7,8, but they rely on models9–11 that are spatially coarse and not tightly linked to data2,3,6,7,12,13. Here we show that the release of one ton of CO2 today is projected to reduce total future energy expenditures, with most estimates valued between −US$3 and −US$1, depending on discount rates. Our results are based on an architecture that integrates global data, econometrics and climate science to estimate local damages worldwide. Notably, we project that emerging economies in the tropics will dramatically increase electricity consumption owing to warming, which requires critical infrastructure planning. However, heating reductions in colder countries offset this increase globally. We estimate that 2099 annual global electricity consumption increases by about 4.5 exajoules (7 per cent of current global consumption) per one-degree-Celsius increase in global mean surface temperature (GMST), whereas direct consumption of other fuels declines by about 11.3 exajoules (7 per cent of current global consumption) per one-degree-Celsius increase in GMST. Our finding of net savings contradicts previous research7,8, because global data indicate that many populations will remain too poor for most of the twenty-first century to substantially increase energy consumption in response to warming. Importantly, damage estimates would differ if poorer populations were given greater weight14. 
Using global data, econometrics and climate science to estimate the damages induced by the emission of one ton of carbon dioxide, climate change is projected to increase electricity spending but reduce overall end-use energy expenditure.
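The two consumption figures quoted in the abstract imply the sign of the net effect directly (illustrative arithmetic only; the monetary SCC result additionally depends on prices, adaptation, and discounting, which this sketch ignores):

```python
# Net change in global end-use energy consumption per +1 degree C of GMST,
# using the two figures quoted in the abstract. Illustrative arithmetic;
# the dollar-valued SCC involves prices and discounting as well.
electricity_increase_ej = 4.5   # EJ per +1 C (electricity, mostly tropics)
other_fuel_decline_ej = 11.3    # EJ per +1 C (direct fuel use, heating)
net_ej = electricity_increase_ej - other_fuel_decline_ej
print(round(net_ej, 1))         # -6.8 EJ: a net decline, hence net savings
```

The heating-fuel decline more than offsets the electricity increase in physical terms, which is the mechanism behind the negative energy-expenditure component of the SCC reported above.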

Journal ArticleDOI
TL;DR: In this paper, a dissolved oxygen and galvanic corrosion method was developed to synthesize vertically aligned fluoride-incorporated nickel-iron oxyhydroxide nanosheet arrays on a compressed Ni foam.
Abstract: Here, we have developed a dissolved oxygen and galvanic corrosion method to synthesize vertically aligned fluoride-incorporated nickel–iron oxyhydroxide nanosheet arrays on a compressed Ni foam as ...