
Showing papers in "Science in 2023"


Journal ArticleDOI
27 Jan 2023-Science
TL;DR: ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool's developer, OpenAI, and automatically creates text based on written prompts.
Abstract: In less than 2 months, the artificial intelligence (AI) program ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool’s developer, OpenAI. The program—which automatically creates text based on written prompts—is so popular that it’s likely to be “at capacity right now” if you attempt to use it. When you do get through, ChatGPT provides endless entertainment. I asked it to rewrite the first scene of the classic American play Death of a Salesman, but to feature Princess Elsa from the animated movie Frozen as the main character instead of Willy Loman. The output was an amusing conversation in which Elsa—who has come home from a tough day of selling—is told by her son Happy, “Come on, Mom. You’re Elsa from Frozen. You have ice powers and you’re a queen. You’re unstoppable.” Mash-ups like this are certainly fun, but there are serious implications for generative AI programs like ChatGPT in science and academia.

275 citations


Journal ArticleDOI
20 Jan 2023-Science
TL;DR: Wang and Doudna review the origins and utility of CRISPR-based genome editing, the successes and current limitations of the technology, and where innovation and engineering are needed.
Abstract: The advent of clustered regularly interspaced short palindromic repeat (CRISPR) genome editing, coupled with advances in computing and imaging capabilities, has initiated a new era in which genetic diseases and individual disease susceptibilities are both predictable and actionable. Likewise, genes responsible for plant traits can be identified and altered quickly, transforming the pace of agricultural research and plant breeding. In this Review, we discuss the current state of CRISPR-mediated genetic manipulation in human cells, animals, and plants along with relevant successes and challenges and present a roadmap for the future of this technology. Description A decade of CRISPR In the decade since the publication of CRISPR-Cas9 as a genome-editing technology, the CRISPR toolbox and its applications have profoundly changed basic and applied biological research. Wang and Doudna now review the origins and utility of CRISPR-based genome editing, the successes and current limitations of the technology, and where innovation and engineering are needed. The authors describe important advances in the development of CRISPR genome-editing technology and make predictions about where the field is headed. They also highlight specific examples in medicine and agriculture that show how CRISPR is already affecting society, with exciting opportunities for the future. —DJ A review discusses the current state of CRISPR-mediated genetic manipulation in human cells, animals, and plants and considers its future potential. BACKGROUND The fields of molecular biology, genetics, and genomics are at a critical juncture—a moment in history when a convergence of knowledge and methods has made it both technically possible and incredibly useful to edit specific base pairs or segments of DNA in cells and living organisms. 
The advent of clustered regularly interspaced short palindromic repeat (CRISPR) genome editing, coupled with advances in computing and imaging capabilities, has initiated a new era in which we can not only diagnose human diseases and even predict individual susceptibility based on personal genetics but also act on that information. Likewise, we can both identify and rapidly alter genes responsible for plant traits, transforming the pace of agricultural research and plant breeding. The applications of this technology convergence are profound and far reaching—and they are happening now. In the decade since the publication of CRISPR-Cas9 as a genome editing technology, the CRISPR toolbox and its applications have profoundly changed biological research, impacting not only patients with genetic diseases but also agricultural practices and products. As a specific example from the field of genomic medicine, it has become feasible to obtain a complete sequence of the human genome in less than 24 hours—a staggering advance considering the first such sequence took 5 years to generate. Notably, designing and putting to use a potent CRISPR genome editor to obtain clinically actionable information from that genome—previously a near-intractable challenge—now takes only a matter of days. For additional background and related topics, we refer readers to in-depth reviews of the microbiology and structural biology of CRISPR systems and to articles about the considerable ethical and societal challenges of this technology. ADVANCES The past decade has witnessed the discovery, engineering, and deployment of RNA-programmed genome editors across many applications. By leveraging CRISPR-Cas9’s most fundamental activity to create a targeted genetic disruption in a gene or gene regulatory element, scientists have built successful platforms for the rapid creation of knockout mice and other animal models, genetic screening, and multiplexed editing. 
Beyond traditional CRISPR-Cas9–induced knockouts, base editing—a technology utilizing engineered Cas9’s fused to enzymes that alter the chemical nature of DNA bases—has also provided a highly useful strategy to generate site-specific and precise point mutations. Over the past decade, scientists have utilized CRISPR technology as a readily adaptable tool to probe biological function, dissect genetic interactions, and inform strategies to combat human diseases and engineer crops. This Review covers the origins and successes of CRISPR-based genome editing and discusses the most pressing challenges, which include improving editing accuracy and precision, implementing strategies for precise programmable genetic sequence insertions, improving targeted delivery of CRISPR editors, and increasing access and affordability. We examine current efforts addressing these challenges, including emerging gene insertion technologies and new delivery modalities, and describe where further innovation and engineering are needed. CRISPR genome editors are already being deployed in medicine and agriculture, and this Review highlights key examples, including a CRISPR-based therapy treating sickle cell disease, a more nutritious CRISPR-edited tomato, and a high-yield, disease-resistant CRISPR-edited wheat, to illustrate CRISPR’s current and potential future impacts in society. OUTLOOK In the decade ahead, genome editing research and applications will continue to expand and will intersect with advances in technologies, such as machine learning, live cell imaging, and sequencing. A combination of discovery and engineering will diversify and refine the CRISPR toolbox to combat current challenges and enable more wide-ranging applications in both fundamental and applied research. Just as during the advent of CRISPR genome editing, a combination of scientific curiosity and the desire to benefit society will drive the next decade of innovation in CRISPR technology. 
CRISPR: past, present, and future. The past decade of CRISPR technology has focused on building the platforms for generating gene knockouts, creating knockout mice and other animal models, genetic screening, and multiplexed editing. CRISPR’s applications in medicine and agriculture are already beginning and will serve as the focus for the next decade as society’s demands drive further innovation in CRISPR technology.

55 citations


Journal ArticleDOI
13 Jan 2023-Science
TL;DR: The agency can rely on animal-free alternatives to animal testing before human trials.
Abstract: Description Agency can rely on animal-free alternatives before human trials.

37 citations


Journal ArticleDOI
06 Jan 2023-Science
TL;DR: This paper showed the existence of a fourth meningeal layer, the subarachnoid lymphatic-like membrane (SLYM), which is morpho- and immunophenotypically similar to the mesothelial membrane lining of peripheral organs and body cavities, encases blood vessels, and harbors immune cells.
Abstract: The central nervous system is lined by meninges, classically known as dura, arachnoid, and pia mater. We show the existence of a fourth meningeal layer that compartmentalizes the subarachnoid space in the mouse and human brain, designated the subarachnoid lymphatic-like membrane (SLYM). SLYM is morpho- and immunophenotypically similar to the mesothelial membrane lining of peripheral organs and body cavities, and it encases blood vessels and harbors immune cells. Functionally, the close apposition of SLYM with the endothelial lining of the meningeal venous sinus permits direct exchange of small solutes between cerebrospinal fluid and venous blood, thus representing the mouse equivalent of the arachnoid granulations. The functional characterization of SLYM provides fundamental insights into brain immune barriers and fluid transport. Description An extra layer lines the brain The traditional view is that the brain is surrounded by three layers, the dura, arachnoid, and pia mater. Møllgård et al. found a fourth meningeal layer called the subarachnoid lymphatic-like membrane (SLYM). SLYM is immunophenotypically distinct from the other meningeal layers in the human and mouse brain and represents a tight barrier for solutes of more than 3 kilodaltons, effectively subdividing the subarachnoid space into two different compartments. SLYM is the host for a large population of myeloid cells, the number of which increases in response to inflammation and aging, so this layer represents an innate immune niche ideally positioned to surveil the cerebrospinal fluid. —SMH A fourth meningeal layer acts as a barrier that divides the subarachnoid space into two distinct compartments.

36 citations


Journal ArticleDOI
06 Jan 2023-Science
TL;DR: A high-level review of the history of LN as an optical material, its different photonic platforms, engineering concepts, spectral coverage, and essential applications is provided in this article.
Abstract: Lithium niobate (LN), first synthesized 70 years ago, has been widely used in diverse applications ranging from communications to quantum optics. These high-volume commercial applications have provided the economic means to establish a mature manufacturing and processing industry for high-quality LN crystals and wafers. Breakthrough science demonstrations to commercial products have been achieved owing to the ability of LN to generate and manipulate electromagnetic waves across a broad spectrum, from microwave to ultraviolet frequencies. Here, we provide a high-level Review of the history of LN as an optical material, its different photonic platforms, engineering concepts, spectral coverage, and essential applications before providing an outlook for the future of LN. Description Lithium niobate photonics The optoelectronic and nonlinear optical properties of lithium niobate make it a workhorse material for applications in optics and communication technology. Boes et al. reviewed the science and technology of lithium niobate and its role in various aspects of photonic technology. They surveyed the evolution from bulk lithium niobate through weakly confining waveguides to the recent developments with thin-film lithium niobate. The ability to span the entire spectral range from radio to optical wavelengths illustrates the versatility of lithium niobate as a platform material in integrated photonics. —ISO A review discusses the science and technology of lithium niobate and its role in various aspects of photonics. BACKGROUND Electromagnetic (EM) waves underpin modern society in profound ways. They are used to carry information, enabling broadcast radio and television, mobile telecommunications, and ubiquitous access to data networks through Wi-Fi and form the backbone of our modern broadband internet through optical fibers. In fundamental physics, EM waves serve as an invaluable tool to probe objects from cosmic to atomic scales. 
For example, the Laser Interferometer Gravitational-Wave Observatory and atomic clocks, which are some of the most precise human-made instruments in the world, rely on EM waves to reach unprecedented accuracies. This has motivated decades of research to develop coherent EM sources over broad spectral ranges with impressive results: Frequencies in the range of tens of gigahertz (radio and microwave regimes) can readily be generated by electronic oscillators. Resonant tunneling diodes enable the generation of millimeter (mm) and terahertz (THz) waves, which span from tens of gigahertz to a few terahertz. At even higher frequencies, up to the petahertz level, which are usually defined as optical frequencies, coherent waves can be generated by solid-state and gas lasers. However, these approaches often suffer from narrow spectral bandwidths, because they usually rely on well-defined energy states of specific materials, which results in a rather limited spectral coverage. To overcome this limitation, nonlinear frequency-mixing strategies have been developed. These approaches shift the complexity from the EM source to nonresonant-based material effects. Particularly in the optical regime, a wealth of materials exist that support effects that are suitable for frequency mixing. Over the past two decades, the idea of manipulating these materials to form guiding structures (waveguides) has provided improvements in efficiency, miniaturization, and production scale and cost and has been widely implemented for diverse applications. ADVANCES Lithium niobate, a crystal that was first grown in 1949, is a particularly attractive photonic material for frequency mixing because of its favorable material properties. Bulk lithium niobate crystals and weakly confining waveguides have been used for decades for accessing different parts of the EM spectrum, from gigahertz to petahertz frequencies. 
Now, this material is experiencing renewed interest owing to the commercial availability of thin-film lithium niobate (TFLN). This integrated photonic material platform enables tight mode confinement, which results in frequency-mixing efficiency improvements by orders of magnitude while at the same time offering additional degrees of freedom for engineering the optical properties by using approaches such as dispersion engineering. Importantly, the large refractive index contrast of TFLN enables, for the first time, the realization of lithium niobate–based photonic integrated circuits on a wafer scale. OUTLOOK The broad spectral coverage, ultralow power requirements, and flexibilities of lithium niobate photonics in EM wave generation provides a large toolset to explore new device functionalities. Furthermore, the adoption of lithium niobate–integrated photonics in foundries is a promising approach to miniaturize essential bench-top optical systems using wafer scale production. Heterogeneous integration of active materials with lithium niobate has the potential to create integrated photonic circuits with rich functionalities. Applications such as high-speed communications, scalable quantum computing, artificial intelligence and neuromorphic computing, and compact optical clocks for satellites and precision sensing are expected to particularly benefit from these advances and provide a wealth of opportunities for commercial exploration. Also, bulk crystals and weakly confining waveguides in lithium niobate are expected to keep playing a crucial role in the near future because of their advantages in high-power and loss-sensitive quantum optics applications. As such, lithium niobate photonics holds great promise for unlocking the EM spectrum and reshaping information technologies for our society in the future. Lithium niobate spectral coverage. The EM spectral range and processes for generating EM frequencies when using lithium niobate (LN) for frequency mixing. 
AO, acousto-optic; AOM, acousto-optic modulation; χ(2), second-order nonlinearity; χ(3), third-order nonlinearity; EO, electro-optic; EOM, electro-optic modulation; HHG, high-harmonic generation; IR, infrared; OFC, optical frequency comb; OPO, optical parametric oscillator; OR, optical rectification; SCG, supercontinuum generation; SHG, second-harmonic generation; UV, ultraviolet.
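As a concrete illustration of one frequency-mixing process named above: second-harmonic generation (SHG) doubles the optical frequency, which halves the vacuum wavelength. A minimal sketch of that relationship; the 1064 nm pump value is an illustrative assumption, not a figure from the Review:

```python
# Second-harmonic generation (SHG): the output frequency is twice the
# pump frequency, so the vacuum wavelength is halved.
def shg_wavelength_nm(pump_nm: float) -> float:
    """Vacuum wavelength of the second harmonic for a given pump wavelength."""
    return pump_nm / 2.0

# Example: a 1064 nm pump (a common laser line, assumed here only for
# illustration) frequency-doubles to 532 nm green light.
print(shg_wavelength_nm(1064.0))  # → 532.0
```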

31 citations


Journal ArticleDOI
27 Jan 2023-Science
TL;DR: Li et al. demonstrated p-i-n perovskite solar cells with a record power conversion efficiency of 24.6% over 18 square millimeters and 23.1% over 1 square centimeter, which retained 96 and 88% of the efficiency after 1000 hours of 1-sun maximum power point tracking at 25° and 75°C, respectively.
Abstract: Daily temperature variations induce phase transitions and lattice strains in halide perovskites, challenging their stability in solar cells. We stabilized the perovskite black phase and improved solar cell performance using the ordered dipolar structure of β-poly(1,1-difluoroethylene) to control perovskite film crystallization and energy alignment. We demonstrated p-i-n perovskite solar cells with a record power conversion efficiency of 24.6% over 18 square millimeters and 23.1% over 1 square centimeter, which retained 96 and 88% of the efficiency after 1000 hours of 1-sun maximum power point tracking at 25° and 75°C, respectively. Devices under rapid thermal cycling between −60° and +80°C showed no sign of fatigue, demonstrating the impact of the ordered dipolar structure on the operational stability of perovskite solar cells. Description Running hot and cold Like other solar cells, commercial perovskite solar cells (PSCs) would not only need to maintain operation at the high temperatures generated in direct sunlight but also endure the lattice strain created by temperature changes throughout the year. Li et al. fabricated high-quality perovskite crystalline films by adding a fluorinated polymer, the dipoles of which lowered formation energy of the perovskite black phase, decreased defect density, and also tuned the surface work function for charge extraction. Power conversion efficiencies of 23% were achieved for 1-square-centimeter devices that retained over 90% of their efficiency after testing conditions for 3000 hours and after repeated cycling between −60° and 80°C. —PDS Dipoles in a fluorinated polymer lower the formation energy of a photoactive perovskite phase and reduce its defect density.

28 citations


Journal ArticleDOI
13 Jan 2023-Science
TL;DR: Seo et al. found that manipulation of the gut microbiota resulted in a notable reduction of tau pathology and neurodegeneration in a sex- and ApoE isoform-dependent manner.
Abstract: Tau-mediated neurodegeneration is a hallmark of Alzheimer’s disease. Primary tauopathies are characterized by pathological tau accumulation and neuronal and synaptic loss. Apolipoprotein E (ApoE)–mediated neuroinflammation is involved in the progression of tau-mediated neurodegeneration, and emerging evidence suggests that the gut microbiota regulates neuroinflammation in an APOE genotype–dependent manner. However, evidence of a causal link between the microbiota and tau-mediated neurodegeneration is lacking. In this study, we characterized a genetically engineered mouse model of tauopathy expressing human ApoE isoforms reared under germ-free conditions or after perturbation of their gut microbiota with antibiotics. Both of these manipulations reduced gliosis, tau pathology, and neurodegeneration in a sex- and ApoE isoform–dependent manner. The findings reveal mechanistic and translationally relevant interrelationships between the microbiota, neuroinflammation, and tau-mediated neurodegeneration. Description Microbiota and tau-mediated disease The accumulation of certain forms of the tau protein in the brain is linked to loss of nerve cells, inflammation, and cognitive decline in Alzheimer’s disease and several other neurodegenerative diseases. Apolipoprotein-E (APOE), the strongest genetic risk factor for Alzheimer’s disease, regulates brain inflammation and tau-mediated brain damage; however, the gut microbiota also regulates brain inflammation. In a mouse model of tau-mediated brain injury, Seo et al. found that manipulation of the gut microbiota resulted in a strong reduction of inflammation, tau pathology, and brain damage in a sex- and APOE-dependent manner (see the Perspective by Jain and Li). —SMH Manipulation of gut microbiota attenuates brain atrophy in a genetically engineered mouse model of tau-mediated neurodegeneration. 
INTRODUCTION Alzheimer’s disease (AD) is characterized by early deposition of amyloid-β (Aβ) plaques followed by pathological tau accumulation. Although Aβ is a necessary factor in AD pathogenesis, its accumulation in and of itself is insufficient for neurodegeneration and cognitive decline. By contrast, pathological tau accumulation is closely linked with neurodegeneration and cognitive decline in AD and primary tauopathies. Alterations of the gut microbiota have been reported in AD, which suggests that the microbiota may contribute to AD progression. Animal studies to date have focused mainly on how gut microbiota alterations affect Aβ pathology and not tauopathy and neurodegeneration. Additionally, recent studies have suggested that apolipoprotein E (ApoE) isoforms, which strongly influence AD risk and regulate tau-mediated neurodegeneration, differentially affect the gut microbiota. Therefore, further investigations to characterize the contribution of the gut microbiota to tauopathy and neurodegeneration are important. RATIONALE We assess the hypothesis that the gut microbiota regulates tau pathology and tau-mediated neurodegeneration in an ApoE isoform–dependent manner. A mouse model of tauopathy (P301S tau transgenic mice) expressing human ApoE isoforms (ApoE3 and ApoE4), referred to as TE3 and TE4, was subjected to the manipulation of the gut microbiota using two approaches: (i) being raised in germ-free (GF) conditions and (ii) short-term antibiotic (ABX) treatment early in life. Animals were fed a standard mouse chow diet ad libitum until euthanasia at 40 weeks of age, when this mouse model typically has severe brain atrophy. RESULTS The gut microbiota manipulations resulted in a notable reduction of tau pathology and neurodegeneration in an ApoE isoform–dependent manner. Both male and female GF TE4 mice showed a marked decrease in brain atrophy compared with conventionally raised (Conv-R) mice. 
Conv-R ABX-treated TE3 mice had significantly milder hippocampal atrophy compared with controls. ABX-treated TE4 animals showed trends of milder hippocampal atrophy, but the effect did not achieve statistical significance. These phenotypic effects of ABX treatment were not observed in females. Male GF TE4 mice and male ABX-treated TE3 mice showed significantly lower phosphorylated tau (p-tau) in the hippocampus compared with their controls. Astrocyte and microglial morphology and transcriptomic analysis revealed that the manipulation of the gut microbiota drives glial cells to a more homeostatic-like state, which indicates that gut microbiota strongly influence neuroinflammation and tau-mediated neurodegeneration. Microbiome and metabolite analysis suggests that microbially produced short-chain fatty acids (SCFAs) are mediators of the neuroinflammation-neurodegeneration axis. Supplementation of SCFAs to GF TE4 mice resulted in more reactive glial morphologies and gene expression as well as increased p-tau pathology. CONCLUSION The findings reveal mechanistic and translationally relevant interrelationships between the microbiota, the immune response, and tau-mediated neurodegeneration. ApoE-associated gut microbiota targeting may provide an avenue to further explore the prevention or treatment of AD and primary tauopathies. P301S tau transgenic mice expressing human APOE (TE mice). The dysregulated gut-brain axis and its effect on tauopathy and tau-mediated neurodegeneration. Dysbiosis, unbalanced gut microbiota composition (bottom center), contributes to tau-mediated neurodegeneration by generating bacterial metabolites (e.g., SCFAs) that influence peripheral immune cells. These cells promote central nervous system (CNS) inflammation and contribute to tau aggregation and neurodegeneration. Short-term antibiotics (bottom right) or germ-free conditions (bottom left) reshape or eliminate gut microbiota and reduce their metabolites. 
These microbiota manipulations influence effects of peripheral immune cells on CNS inflammation and tau-mediated neurodegeneration. ApoE4 in the CNS exacerbates local toxicity and blood-brain barrier dysfunction.

28 citations


Journal ArticleDOI
13 Jan 2023-Science
TL;DR: Xiong et al. accomplished neuromorphic functions with a polyelectrolyte-confined fluidic memristor (PFM) that can mimic chemical-regulated electric pulses. Together with the nanofluidic devices of Robin et al., the two studies address different aspects of neuromorphic engineering, but both show precise control of ion transport in water across nanoscale channels.
Abstract: Reproducing ion channel–based neural functions with artificial fluidic systems has long been an aspirational goal for both neuromorphic computing and biomedical applications. In this study, neuromorphic functions were successfully accomplished with a polyelectrolyte-confined fluidic memristor (PFM), in which confined polyelectrolyte–ion interactions contributed to hysteretic ion transport, resulting in ion memory effects. Various electric pulse patterns were emulated by PFM with ultralow energy consumption. The fluidic property of PFM enabled the mimicking of chemical-regulated electric pulses. More importantly, chemical-electric signal transduction was implemented with a single PFM. With its structural similarity to ion channels, PFM is versatile and easily interfaces with biological systems, paving a way to building neuromorphic devices with advanced functions by introducing rich chemical designs. Description Toward fluidic neuromorphic computing There is considerable interest in strategies that mimic the structure of human brain and could lead to the development of next-generation neuromorphic devices. Many recent studies have focused on solid-state devices, although information in biological systems is conveyed by ions solvated in water, an approach now explored in two papers in this issue (see the Perspective by Noy and Darling). Robin et al. created nanofluidic devices consisting of nanometer-thick two-dimensional slits filled with a salt solution, whereas Xiong et al. present a nanofluidic ionic memristor based on confined polyelectrolyte-ion interactions. The two studies are focused on different aspects of neuromorphic engineering, but both show precise control of ion transport in water across nanoscale channels. These studies show promising directions for creating neuromorphic functions using energy-efficient fluidic memristors that could mimic biological systems down to their fundamental principles. 
—YS A polyelectrolyte-confined aqueous ionic memristor demonstrated a range of neuromorphic functions with high energy efficiency.

27 citations


Journal ArticleDOI
06 Jan 2023-Science
TL;DR: Rounce et al. presented global glacier projections, excluding the ice sheets, for shared socioeconomic pathways calibrated with data for each glacier, and found that glaciers are projected to lose 26 ± 6% (+1.5°C) to 41 ± 11% (+4°C) of their mass by 2100, relative to 2015, for global temperature change scenarios.
Abstract: Glacier mass loss affects sea level rise, water resources, and natural hazards. We present global glacier projections, excluding the ice sheets, for shared socioeconomic pathways calibrated with data for each glacier. Glaciers are projected to lose 26 ± 6% (+1.5°C) to 41 ± 11% (+4°C) of their mass by 2100, relative to 2015, for global temperature change scenarios. This corresponds to 90 ± 26 to 154 ± 44 millimeters sea level equivalent and will cause 49 ± 9 to 83 ± 7% of glaciers to disappear. Mass loss is linearly related to temperature increase and thus reductions in temperature increase reduce mass loss. Based on climate pledges from the Conference of the Parties (COP26), global mean temperature is projected to increase by +2.7°C, which would lead to a sea level contribution of 115 ± 40 millimeters and cause widespread deglaciation in most mid-latitude regions by 2100. Description Melting away Mountain glaciers, perennial ice masses excluding the Greenland and Antarctic ice sheets, are a critical water resource for nearly two billion people and are threatened by global warming. Rounce et al. projected how those glaciers will be affected under global temperature increases of 1.5° to 4°C, finding losses of one quarter to nearly one half of their mass by 2100 (see the Perspective by Aðalgeirsdóttir and James). Their calculations suggest that glaciers will lose substantially more mass and contribute more to sea level rise than current estimates indicate. —HJS Glaciers are melting more rapidly than expected due to global warming.
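The abstract states that mass loss is linearly related to temperature increase. A minimal sketch that linearly interpolates between the two quoted central estimates (90 mm sea level equivalent at +1.5°C, 154 mm at +4°C) and evaluates the COP26-pledge warming of +2.7°C; the linear form is the paper's stated relationship, but this interpolation of central values is an illustrative assumption, not the authors' calculation:

```python
# Linear interpolation of glacier sea level contribution between the two
# temperature scenarios quoted in the abstract (central estimates only).
def interp_sle_mm(t_c: float, lo: float = 90.0, hi: float = 154.0) -> float:
    """Sea level equivalent (mm) at warming t_c, interpolated between
    the +1.5°C and +4.0°C scenario values."""
    frac = (t_c - 1.5) / (4.0 - 1.5)
    return lo + frac * (hi - lo)

# At the COP26-pledge warming of +2.7°C, the interpolated value falls
# within the quoted 115 ± 40 mm range.
print(round(interp_sle_mm(2.7), 1))  # → 120.7
```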

26 citations


Journal ArticleDOI
17 Feb 2023-Science
TL;DR: This work emphasizes the role of location bias in 5-HT2AR signaling and raises the intriguing possibility that serotonin might not be the endogenous ligand for intracellular 5-HT2ARs in the cortex.
Abstract: Decreased dendritic spine density in the cortex is a hallmark of several neuropsychiatric diseases, and the ability to promote cortical neuron growth has been hypothesized to underlie the rapid and sustained therapeutic effects of psychedelics. Activation of 5-hydroxytryptamine (serotonin) 2A receptors (5-HT2ARs) is essential for psychedelic-induced cortical plasticity, but it is currently unclear why some 5-HT2AR agonists promote neuroplasticity, whereas others do not. We used molecular and genetic tools to demonstrate that intracellular 5-HT2ARs mediate the plasticity-promoting properties of psychedelics; these results explain why serotonin does not engage similar plasticity mechanisms. This work emphasizes the role of location bias in 5-HT2AR signaling, identifies intracellular 5-HT2ARs as a therapeutic target, and raises the intriguing possibility that serotonin might not be the endogenous ligand for intracellular 5-HT2ARs in the cortex. Description The mechanism underlying psychedelic action Psychedelic compounds promote cortical structural and functional neuroplasticity through the activation of serotonin 2A receptors. However, the mechanisms by which receptor activation leads to changes in neuronal growth are still poorly defined. Vargas et al. found that activation of intracellular serotonin 2A receptors is responsible for the plasticity-promoting and antidepressant-like properties of psychedelic compounds, but serotonin may not be the natural ligand for those intracellular receptors (see the Perspective by Hess and Gould). —PRS Membrane-permeable psychedelics promote cortical neuron growth by activating intracellular serotonin 2A receptors.

26 citations


Journal ArticleDOI
17 Feb 2023-Science
TL;DR: Peng et al. proposed a porous insulator contact (PIC) for perovskite solar cells that reduces the trade-off between open-circuit voltage and fill factor.
Abstract: Inserting an ultrathin low-conductivity interlayer between the absorber and transport layer has emerged as an important strategy for reducing surface recombination in the best perovskite solar cells. However, a challenge with this approach is a trade-off between the open-circuit voltage (Voc) and the fill factor (FF). Here, we overcame this challenge by introducing a thick (about 100 nanometers) insulator layer with random nanoscale openings. We performed drift-diffusion simulations for cells with this porous insulator contact (PIC) and realized it using a solution process by controlling the growth mode of alumina nanoplates. Leveraging a PIC with an approximately 25% reduced contact area, we achieved an efficiency of up to 25.5% (certified steady-state efficiency 24.7%) in p-i-n devices. The product of Voc × FF was 87.9% of the Shockley-Queisser limit. The surface recombination velocity at the p-type contact was reduced from 64.2 to 9.2 centimeters per second. The bulk recombination lifetime was increased from 1.2 to 6.0 microseconds because of improvements in the perovskite crystallinity. The improved wettability of the perovskite precursor solution allowed us to demonstrate a 23.3% efficient 1-square-centimeter p-i-n cell. We demonstrate here its broad applicability for different p-type contacts and perovskite compositions. Description Through thick but not thin To maintain high charge carrier conductivity in perovskite solar cells, the passivating layer is usually very thin (~1 nanometer) to enable electron tunneling. However, this approach limits efficiency because it creates a trade-off between open-circuit voltage and fill factor and challenges in fabricating thin films from solution over large areas. Peng et al. grew a thick (~100 nanometer) dielectric mask formed by depositing alumina nanoplates and thus created random nanoscale openings for carrier transport. 
This layer reduced nonradiative recombination and boosted power conversion efficiencies from 23 to 25.5% compared with a conventional passivation layer. —PDS A solution-processed thick dielectric mask with nanoscale openings can maintain both open-circuit voltage and fill factor.
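As a rough, illustrative sanity check of the reported recombination numbers, the textbook effective-lifetime relation 1/τ_eff = 1/τ_bulk + 2S/d can be used to see how much the drop in surface recombination velocity (64.2 → 9.2 cm/s) and the rise in bulk lifetime (1.2 → 6.0 μs) matter together. The absorber thickness (500 nm) is an assumed round value, not from the paper, and using 2S/d treats both surfaces as equal, whereas only the p-contact was improved here; this is a back-of-envelope sketch, not the authors' analysis.

```python
# Back-of-envelope only: combine the reported surface recombination velocity
# (S) and bulk lifetime (tau_bulk) into an effective carrier lifetime via
# 1/tau_eff = 1/tau_bulk + 2*S/d. The thickness d = 500 nm is an assumption,
# and 2S/d lumps both surfaces together (the paper improves only the p-contact).

def tau_eff(tau_bulk_s, s_cm_per_s, d_cm):
    """Effective lifetime with surface-limited recombination at both faces."""
    return 1.0 / (1.0 / tau_bulk_s + 2.0 * s_cm_per_s / d_cm)

d = 500e-7  # 500 nm expressed in cm (assumed value)

before = tau_eff(1.2e-6, 64.2, d)  # reported pre-PIC values
after = tau_eff(6.0e-6, 9.2, d)    # reported post-PIC values

print(f"tau_eff before: {before * 1e6:.2f} us")  # ~0.29 us
print(f"tau_eff after:  {after * 1e6:.2f} us")   # ~1.87 us
print(f"improvement:    {after / before:.1f}x")
```

Under these assumptions the effective lifetime improves by roughly 6x, consistent with the direction (though not the exact mechanism) of the reported efficiency gain.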

Journal ArticleDOI
17 Feb 2023-Science
TL;DR: Li et al. used density functional theory to screen potential Lewis bases and found that phosphorus-containing molecules showed the strongest binding to lead; the best inverted PSC treated with 1,3-bis(diphenylphosphino)propane (DPPP), a diphosphine Lewis base that passivates, binds, and bridges interfaces and grain boundaries, retained a power conversion efficiency slightly higher than its initial PCE of ~23% after continuous operation under simulated AM1.5 illumination at the maximum power point and at ~40°C for >3500 hours.
Abstract: Lewis base molecules that bind undercoordinated lead atoms at interfaces and grain boundaries (GBs) are known to enhance the durability of metal halide perovskite solar cells (PSCs). Using density functional theory calculations, we found that phosphine-containing molecules have the strongest binding energy among members of a library of Lewis base molecules studied herein. Experimentally, we found that the best inverted PSC treated with 1,3-bis(diphenylphosphino)propane (DPPP), a diphosphine Lewis base that passivates, binds, and bridges interfaces and GBs, retained a power conversion efficiency (PCE) slightly higher than its initial PCE of ~23% after continuous operation under simulated AM1.5 illumination at the maximum power point and at ~40°C for >3500 hours. DPPP-treated devices showed a similar increase in PCE after being kept under open-circuit conditions at 85°C for >1500 hours. Description Phosphorus stabilization of perovskites Lewis base molecules that contain electron-donating atoms such as oxygen or sulfur can bind to undercoordinated lead atoms and passivate defects at interfaces and grain boundaries in perovskite films. Li et al. used density functional theory to screen potential Lewis bases and found that phosphorus-containing molecules showed the strongest binding to lead. A small amount of 1,3-bis(diphenylphosphino)propane stabilized inverted perovskite solar cells. The solar cells could maintain a power conversion efficiency of about 23% for more than 1500 hours under open-circuit conditions at 85°C. —PDS A phosphorus-containing Lewis-base molecule passivates and bridges perovskite grain boundaries and interfaces.

Journal ArticleDOI
26 Jan 2023-Science
TL;DR: Iwata et al. found that mitochondria are important regulators of the pace of neuronal development underlying human-specific brain neoteny, with mitochondrial metabolic rates helping to set the speed of neuronal maturation.
Abstract: Neuronal development in the human cerebral cortex is considerably prolonged compared with that of other mammals. We explored whether mitochondria influence the species-specific timing of cortical neuron maturation. By comparing human and mouse cortical neuronal maturation at high temporal and cell resolution, we found a slower mitochondria development in human cortical neurons compared with that in the mouse, together with lower mitochondria metabolic activity, particularly that of oxidative phosphorylation. Stimulation of mitochondria metabolism in human neurons resulted in accelerated development in vitro and in vivo, leading to maturation of cells weeks ahead of time, whereas its inhibition in mouse neurons led to decreased rates of maturation. Mitochondria are thus important regulators of the pace of neuronal development underlying human-specific brain neoteny. Description Metabolism sets neuronal development pace The pace of neuronal development varies between species, and the relatively slow development of the human brain may help allow its exceptional complexity. Iwata et al. propose that the pace of neuronal development depends on the rate of metabolic activity in mitochondria. Human and mouse neurons exhibited distinct paces of development in culture correlated with the tricarboxylic acid cycle and oxidative activity in mitochondria. Manipulations to increase oxygen consumption rates and tricarboxylic acid cycle activity in human cells increased the rate of neuronal development. Slowing metabolic rates in mouse neurons slowed neuronal maturation. Thus, metabolic rates in mitochondria appear to somehow help set the speed of neuronal development. —LBR Mitochondria are important regulators of the pace of neuronal development underlying human-specific brain maturation. 
INTRODUCTION During embryonic development, the temporal sequence of events is usually conserved throughout evolution, but it can occur at very different time scales depending on the species or cell type considered. The human cerebral cortex is characterized by a considerably prolonged timing of neuronal development compared with other species, taking months to years to reach maturity compared with only a few weeks in the mouse. The resulting neoteny is thought to be a key mechanism enabling enhanced function and plasticity of the human brain. Human and nonhuman cortical neurons cultured in vitro or xenotransplanted into the mouse brain develop along their species-specific timeline. This suggests that species-specific developmental timing is controlled by cell-intrinsic mechanisms, but these remain essentially unknown. RATIONALE Metabolism and mitochondria are key drivers of cell fate transitions in many systems, including the developing brain. Here, we tested whether they could be involved in the species-specific tempo of cortical neuron development and human brain neoteny. We developed a system of genetic birth-dating to label newly born neurons with high temporal and cellular resolution, and directly compared the development of human and mouse cortical neurons over time. We thus profiled, across time and species, mitochondria morphology, gene expression, oxygen consumption, and glucose metabolism. Next, we used pharmacological or genetic manipulation of human or mouse neurons to enhance or decrease their mitochondria function, and determined the consequences on the speed of neuronal development. RESULTS We found that mitochondria are initially small in size and few in number in newborn neurons, and then grow gradually as neurons undergo maturation following a species-specific timeline. Whereas in mouse neurons, mitochondria reach mature patterns in 3 to 4 weeks, they only do so after several months in human neurons. 
We next measured mitochondria oxidative activity and glucose metabolism in human and mouse developing cortical neurons. This revealed a species-specific timeline of functional maturation of mitochondria, with mouse neurons displaying a much faster increase in mitochondria-dependent oxidative activity than human neurons. We also found that human cortical neurons displayed lower levels of mitochondria-driven glucose metabolism than did mouse neurons at the same age. Finally, we tested whether mitochondria function affects neuronal developmental timing. We performed pharmacological or genetic manipulation of human developing cortical neurons to enhance mitochondria oxidative metabolism. This led to accelerated neuronal maturation, with neurons exhibiting more mature features weeks ahead of time, including complex morphology, increased electrical excitability, and functional synapse formation. Similar treatments on mouse neurons also led to faster maturation, whereas inhibition of mitochondria metabolism in mouse neurons led to a decrease in developmental rates. CONCLUSION Our work identifies a species-specific temporal pattern of mitochondria and metabolic development that sets the tempo of neuronal maturation. Accelerated human neuronal maturation using metabolic manipulation might benefit pluripotent stem cell–based modeling of neural diseases, which remains greatly hindered by protracted neuron development. Tools to accelerate or decelerate neuronal development could allow testing of the impact of neuronal neoteny on brain function, plasticity, and disease. Mitochondria metabolism sets the tempo of neuronal development. Mitochondria dynamics and metabolism display species-specific timelines during cortical neuron development. In newborn neurons, mitochondria are small in number and metabolic activity, and then increase gradually during neuronal maturation. 
Enhanced mitochondria metabolism in human neurons leads to accelerated maturation, including increased neurite complexity, excitability, and synaptic function. Decreased mitochondria metabolism in mouse neurons leads to decelerated neuronal maturation.

Journal ArticleDOI
17 Feb 2023-Science
TL;DR: The extracellular matrix (ECM) is a dynamic tissue support network made up of components including fibrillar proteins, glycosaminoglycans, proteoglycans, and mucus.
Abstract: For decades, immunologists have studied the role of circulating immune cells in host protection, with a more recent appreciation of immune cells resident within the tissue microenvironment and the intercommunication between nonhematopoietic cells and immune cells. However, the extracellular matrix (ECM), which comprises at least a third of tissue structures, remains relatively underexplored in immunology. Similarly, matrix biologists often overlook regulation of complex structural matrices by the immune system. We are only beginning to understand the scale at which ECM structures determine immune cell localization and function. Additionally, we need to better understand how immune cells dictate ECM complexity. This review aims to highlight the potential for biological discovery at the interface of immunology and matrix biology. Description Immune cells in the extracellular matrix The extracellular matrix is a dynamic tissue support network that is made up of components including fibrillar proteins, glycosaminoglycans, proteoglycans, and mucus. There is growing knowledge that this network has an intricate relationship with immune cells, which has important implications for our understanding of host defense, wound repair, fibrotic disease, and aging. Sutherland et al. reviewed the latest research on the interconnectedness of these systems with an emphasis on how this cross talk is an important element in the success (or failure) of immune cell–based therapies. —STS A review explains that the extracellular matrix and immune system are highly interconnected networks integral to tissue homeostasis and disease. BACKGROUND The extracellular matrix (ECM) forms a dynamic structure around cells that is essential for the supply of environmental factors, mechanical support, and protection of tissues. It includes components such as fibrillar proteins, glycosaminoglycans (GAGs), proteoglycans, and mucus. 
The molecular, physical, and mechanical properties of the ECM regulate immune cell mobility, survival, and function. In turn, the immune system maintains and regulates healthy matrix and restores matrix integrity after injury. A dysregulated ECM–immune system partnership contributes to most diseases. Exploring the complex interconnectivity between ECM biology and immune cells has the potential to help treat disease and maintain healthy aging. ADVANCES Immune cells are perpetually in contact with the ECM, yet the potential consequences of these interactions often remain unexplored. One function of the ECM is to guide immune cell movement and positioning. T cells, for example, move through sites containing thin ECM fibers in preference to more densely cross-linked collagen matrices, whereas heparan sulfate proteoglycans within the vasculature and tissue parenchyma bind and present chemokines to form gradients that direct cell movement. During inflammation, injury, infection, or even aging, ECM components can be released to act as “danger signals.” Conversely, the breakdown of the ECM by matrix-degrading enzymes can generate immunoregulatory fragments. Critically, because cytokines are often bound to GAGs, ECM changes can regulate cytokine availability or activity. During aging and fibrotic diseases, changes to the fibrillar collagen network result in pathological tissue stiffness and loss of mechanical compliance. Additionally, increases in the GAG hyaluronan contribute to altered ECM mechanical properties that accompany aging and disease. There is increasing evidence that immune cell function is regulated by mechanosensing receptors such as Piezo1, and these ECM changes have major impacts on immune function. Furthermore, a decline in the activity of the mechanosensing transcriptional activators YAP and TAZ during physiological aging results in failure to down-regulate inflammation. 
Thus, ECM composition actively regulates immune processes, but immune signals will themselves regulate ECM composition, reflecting an essential bidirectional dialog. One prominent way that the immune system regulates the ECM is through transforming growth factor–β (TGF-β), which promotes myofibroblast differentiation and collagen production and inhibits matrix-degrading metalloproteinases. Furthermore, type 2 cytokines, particularly interleukin-13 (IL-13), have emerged as modulators of ECM quantity and quality, which includes regulating the mucosal barrier. Moreover, direct biophysical interaction with chemokines or cytokines can alter ECM structure and/or function. For instance, CXCL4 (PF4) functions by binding to GAGs rather than by directly binding to chemokine receptors. This can lead to remodeling of the cell surface ECM and signaling through proteoglycans. Immune cells control ECM not only through the production of cytokines and chemokines but also by direct synthesis of ECM components and the enzymes that break them down. Enzymatic remodeling of the rigid basement membrane by tissue-infiltrating myeloid cells, for example, can provide routes for lymphocytes to follow. Macrophages, which are pivotal in ECM turnover through receptor-mediated uptake of and degradation of collagen, also produce collagens that may provide templates for tissue remodeling. Neutrophils can pull and carry preexisting matrix from nearby sites to wound beds early in the tissue repair process to reestablish new ECM scaffolds. OUTLOOK The ECM, long considered an inert scaffold, can now be seen as a highly dynamic partner to the immune system. In this review, we aim to highlight the absolute interdependence of these systems with consequences for therapy. For example, immune cell–based therapies may fail if they are placed in diseased matrix that itself drives pathology. 
Ultimately, to answer the most pressing questions in tissue health, it will be critical that immunologists and matrix biologists work together. Interconnectivity between the ECM and the immune system. (A) Physical and molecular properties of the ECM control immune cell positioning and migration within a tissue during pathology. (B) Aging not only alters the mechanical properties of ECM, which are sensed by the immune system, but also reduces mechanosensors that down-regulate proinflammatory pathways. (C and D) The immunomodulatory cytokine IL-13 can remodel the mucus barrier (C), whereas immune cells themselves can carry matrix across tissues to help build an ECM scaffold for repair (D). (Figure created with BioRender.)

Journal ArticleDOI
13 Jan 2023-Science
TL;DR: Supran et al. quantitatively evaluated all available global warming projections documented by Exxon and ExxonMobil scientists between 1977 and 2003 and found that most of their projections accurately forecast warming consistent with subsequent observations.
Abstract: Climate projections by the fossil fuel industry have never been assessed. On the basis of company records, we quantitatively evaluated all available global warming projections documented by—and in many cases modeled by—Exxon and ExxonMobil Corp scientists between 1977 and 2003. We find that most of their projections accurately forecast warming that is consistent with subsequent observations. Their projections were also consistent with, and at least as skillful as, those of independent academic and government models. Exxon and ExxonMobil Corp also correctly rejected the prospect of a coming ice age, accurately predicted when human-caused global warming would first be detected, and reasonably estimated the “carbon budget” for holding warming below 2°C. On each of these points, however, the company’s public statements about climate science contradicted its own scientific data. Description Insider knowledge For decades, some members of the fossil fuel industry tried to convince the public that a causative link between fossil fuel use and climate warming could not be made because the models used to project warming were too uncertain. Supran et al. show that one of those fossil fuel companies, ExxonMobil, had their own internal models that projected warming trajectories consistent with those forecast by the independent academic and government models. What they understood about climate models thus contradicted what they led the public to believe. —HJS ExxonMobil’s own climate models showed that fossil fuel use would cause climate warming. 
BACKGROUND In 2015, investigative journalists discovered internal company memos indicating that Exxon oil company has known since the late 1970s that its fossil fuel products could lead to global warming with “dramatic environmental effects before the year 2050.” Additional documents then emerged showing that the US oil and gas industry’s largest trade association had likewise known since at least the 1950s, as had the coal industry since at least the 1960s, and electric utilities, Total oil company, and GM and Ford motor companies since at least the 1970s. Scholars and journalists have analyzed the texts contained in these documents, providing qualitative accounts of fossil fuel interests’ knowledge of climate science and its implications. In 2017, for instance, we demonstrated that Exxon’s internal documents, as well as peer-reviewed studies published by Exxon and ExxonMobil Corp scientists, overwhelmingly acknowledged that climate change is real and human-caused. By contrast, the majority of Mobil and ExxonMobil Corp’s public communications promoted doubt on the matter. ADVANCES Many of the uncovered fossil fuel industry documents include explicit projections of the amount of warming expected to occur over time in response to rising atmospheric greenhouse gas concentrations. Yet, these numerical and graphical data have received little attention. Indeed, no one has systematically reviewed climate modeling projections by any fossil fuel interest. What exactly did oil and gas companies know, and how accurate did their knowledge prove to be? Here, we address these questions by reporting and analyzing all known global warming projections documented by—and in many cases modeled by—Exxon and ExxonMobil Corp scientists between 1977 and 2003. Our results show that in private and academic circles since the late 1970s and early 1980s, ExxonMobil predicted global warming correctly and skillfully. 
Using established statistical techniques, we find that 63 to 83% of the climate projections reported by ExxonMobil scientists were accurate in predicting subsequent global warming. ExxonMobil’s average projected warming was 0.20° ± 0.04°C per decade, which is, within uncertainty, the same as that of independent academic and government projections published between 1970 and 2007. The average “skill score” and level of uncertainty of ExxonMobil’s climate models (67 to 75% and ±21%, respectively) were also similar to those of the independent models. Moreover, we show that ExxonMobil scientists correctly dismissed the possibility of a coming ice age in favor of a “carbon dioxide induced ‘super-interglacial’”; accurately predicted that human-caused global warming would first be detectable in the year 2000 ± 5; and reasonably estimated how much CO2 would lead to dangerous warming. OUTLOOK Today, dozens of cities, counties, and states are suing oil and gas companies for their “longstanding internal scientific knowledge of the causes and consequences of climate change and public deception campaigns.” The European Parliament and the US Congress have held hearings, US President Joe Biden has committed to holding fossil fuel companies accountable, and a grassroots social movement has arisen under the moniker #ExxonKnew. Our findings demonstrate that ExxonMobil didn’t just know “something” about global warming decades ago—they knew as much as academic and government scientists knew. But whereas those scientists worked to communicate what they knew, ExxonMobil worked to deny it—including overemphasizing uncertainties, denigrating climate models, mythologizing global cooling, feigning ignorance about the discernibility of human-caused warming, and staying silent about the possibility of stranded fossil fuel assets in a carbon-constrained world. 
Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years.
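The paper's central comparison hinges on extracting a decadal warming trend (the reported 0.20° ± 0.04°C per decade) from a temperature time series. A minimal sketch of how such a trend is fitted by ordinary least squares is below; the data here are synthetic (a 0.2°C/decade ramp plus noise), not ExxonMobil's projections or observational records.

```python
# Illustrative only: fit a linear warming trend (degrees C per decade) to a
# temperature anomaly series with ordinary least squares. The series is
# synthetic -- a 0.02 C/yr ramp plus Gaussian noise -- standing in for the
# kind of projection/observation series compared in the study.
import random

random.seed(0)  # deterministic for reproducibility

years = list(range(1980, 2020))
temps = [0.02 * (y - 1980) + random.gauss(0, 0.05) for y in years]

# OLS slope = cov(x, y) / var(x)
n = len(years)
mx = sum(years) / n
my = sum(temps) / n
slope = sum((x - mx) * (t - my) for x, t in zip(years, temps)) / \
        sum((x - mx) ** 2 for x in years)

print(f"fitted trend: {slope * 10:.2f} C per decade")
```

With 40 annual points and modest noise, the fitted slope recovers the underlying 0.20°C/decade trend closely, which is the sense in which a projection can be scored against what later happened.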

Journal ArticleDOI
10 Feb 2023-Science
TL;DR: Plummer et al. describe Oldowan sites at Nyayanga, Kenya, dated to 3.032 to 2.581 million years ago, showing that the earliest Oldowan toolkit was more widespread than previously known, was used to process diverse foods including megafauna, and was associated with Paranthropus from its onset.
Abstract: The oldest Oldowan tool sites, from around 2.6 million years ago, have previously been confined to Ethiopia’s Afar Triangle. We describe sites at Nyayanga, Kenya, dated to 3.032 to 2.581 million years ago and expand this distribution by over 1300 kilometers. Furthermore, we found two hippopotamid butchery sites associated with mosaic vegetation and a C4 grazer–dominated fauna. Tool flaking proficiency was comparable with that of younger Oldowan assemblages, but pounding activities were more common. Tool use-wear and bone damage indicate plant and animal tissue processing. Paranthropus sp. teeth, the first from southwestern Kenya, possessed carbon isotopic values indicative of a diet rich in C4 foods. We argue that the earliest Oldowan was more widespread than previously known, used to process diverse foods including megafauna, and associated with Paranthropus from its onset. Description Earlier Oldowan Oldowan tools, consisting of stones with one to a few flakes removed, are the oldest widespread and temporally persistent hominin tools. The oldest of these were previously known from around 2.6 million years ago in Ethiopia, and by 2 million years ago, they were found to be quite widespread. Plummer et al. report on an older fossil site from around 3 to 2.6 million years ago in Kenya, where Oldowan tools were not only present, but were also being used to process a variety of foods, including hippopotamus. Thus, it appears that these tools were widespread much earlier than previous estimates and were widely used for food processing. Which hominins were using these tools remains uncertain, but Paranthropus fossils occur at the site. —SNV Ancient Oldowan sites from Nyayanga show evidence of hippo butchery, plant processing, and the first Paranthropus, a type of extinct hominin, from southwest Kenya.

Journal ArticleDOI
20 Jan 2023-Science
TL;DR: Kyba et al. investigated the change in global sky brightness from 2011 to 2022 using 51,351 citizen scientist observations of naked-eye stellar visibility and found that the number of visible stars decreased by an amount that can be explained by an increase in sky brightness of 7 to 10% per year in the human visible band.
Abstract: The artificial glow of the night sky is a form of light pollution; its global change over time is not well known. Developments in lighting technology complicate any measurement because of changes in lighting practice and emission spectra. We investigated the change in global sky brightness from 2011 to 2022 using 51,351 citizen scientist observations of naked-eye stellar visibility. The number of visible stars decreased by an amount that can be explained by an increase in sky brightness of 7 to 10% per year in the human visible band. This increase is faster than emissions changes indicated by satellite observations. We ascribe this difference to spectral changes in light emission and to the average angle of light emissions. Description The night sky is rapidly getting brighter Artificial lighting that escapes into the sky causes it to glow, preventing humans and animals from seeing the stars. Satellites can measure the light emitted upward, but they are not sensitive to all wavelengths produced by LED lighting or to light emitted horizontally. Kyba et al. used data from citizen scientists to measure how light pollution is affecting human views of the stars worldwide (see the Perspective by Falchi and Bará). Participants were shown maps of the sky at different levels of light pollution and asked which most closely matched their view. Trends in the data showed that the average night sky got brighter by 9.6% per year from 2011 to 2022, which is equivalent to doubling the sky brightness every 8 years. —KTS Observations of the night sky by citizen scientists show that it is rapidly getting brighter due to light pollution.
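The "doubling every 8 years" figure follows directly from the 9.6% annual increase, since compound growth doubles in ln(2)/ln(1 + r) years. A quick check of that arithmetic:

```python
# Verify the doubling-time claim: a 9.6% annual increase in sky brightness
# compounds to a doubling in ln(2)/ln(1 + r) years, i.e. roughly every 8 years.
import math

rate = 0.096  # 9.6% per year, as reported
t_double = math.log(2) / math.log(1 + rate)
print(f"doubling time: {t_double:.1f} years")  # ~7.6 years

# Over the 2011-2022 study span (11 years), the cumulative factor is:
factor = (1 + rate) ** 11
print(f"brightness factor over 11 years: {factor:.1f}x")  # ~2.7x
```

So over the study period alone, the average night sky brightened by roughly a factor of 2.7 under this growth rate.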

Journal ArticleDOI
09 Mar 2023-Science
TL;DR: In this paper, the authors explore the evolution of placental mammals, including humans, through reference-free whole-genome alignment of 240 species and protein-coding alignments for 428 species, and estimate that 10.7% of the human genome is evolutionarily constrained.
Abstract: Evolutionary constraint and acceleration are powerful, cell-type agnostic measures of functional importance. Previous studies in mammals were limited by species number and reliance on human-referenced alignments. We explore the evolution of placental mammals, including humans, through reference-free whole-genome alignment of 240 species and protein-coding alignments for 428 species. We estimate 10.7% of the human genome is evolutionarily constrained. We resolve constraint to single nucleotides, pinpointing functional positions, and refine and expand by over seven-fold the catalog of ultraconserved elements. Overall, 48.5% of constrained bases are as yet unannotated, suggesting yet-to-be-discovered functional importance. Using species-level phenotypes and an updated phylogeny, we associate coding and regulatory variation with olfaction and hibernation. Focusing on biodiversity conservation, we identify genomic metrics that predict species at risk of extinction.
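To put the reported fractions on an absolute scale: the sketch below converts them to base pairs, assuming a round human genome size of 3.1 Gb (an assumption for illustration, not a figure from the paper).

```python
# Rough scale of the constraint estimates. The genome size (3.1 Gb) is an
# assumed round number; the percentages (10.7% constrained, 48.5% of
# constrained bases unannotated) are as reported.
genome_bp = 3.1e9                     # assumed human genome size
constrained = 0.107 * genome_bp      # ~330 Mb under evolutionary constraint
unannotated = 0.485 * constrained    # ~160 Mb constrained but unannotated

print(f"constrained bases: ~{constrained / 1e6:.0f} Mb")
print(f"of which unannotated: ~{unannotated / 1e6:.0f} Mb")
```

Under that assumption, roughly 160 Mb of constrained sequence currently lacks any annotation, which is the "yet-to-be-discovered functional importance" the abstract points to.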

Journal ArticleDOI
27 Jan 2023-Science
TL;DR: In this article, the authors synthesize data on forest loss and degradation in the Amazon basin, providing a robust picture of its current status and future prospects, and show that degradation will remain a dominant source of carbon emissions independent of deforestation rates.
Abstract: Approximately 2.5 × 10⁶ square kilometers of the Amazon forest are currently degraded by fire, edge effects, timber extraction, and/or extreme drought, representing 38% of all remaining forests in the region. Carbon emissions from this degradation total up to 0.2 petagrams of carbon per year (Pg C year−1), which is equivalent to, if not greater than, the emissions from Amazon deforestation (0.06 to 0.21 Pg C year−1). Amazon forest degradation can reduce dry-season evapotranspiration by up to 34% and cause as much biodiversity loss as deforestation in human-modified landscapes, generating uneven socioeconomic burdens, mainly to forest dwellers. Projections indicate that degradation will remain a dominant source of carbon emissions independent of deforestation rates. Policies to tackle degradation should be integrated with efforts to curb deforestation and complemented with innovative measures addressing the disturbances that degrade the Amazon forest. Description Losing the Amazon The Amazon rainforest is a biodiversity hotspot under threat from ongoing land conversion and climate change. Two Analytical Reviews in this issue synthesize data on forest loss and degradation in the Amazon basin, providing a clearer picture of its current status and future prospects. Albert et al. reviewed the drivers of change in the Amazon and show that anthropogenic changes are occurring much faster than naturally occurring environmental changes of the past. Although deforestation has been widely documented in the Amazon, degradation is also having major impacts on biodiversity and carbon storage. Lapola et al. synthesized the drivers and outcomes of Amazon forest degradation from timber extraction and habitat fragmentation, fires, and drought. —BEL Two Reviews spotlight the threats of ongoing deforestation and degradation in the Amazon. BACKGROUND Most analyses of land-use and land-cover change in the Amazon forest have focused on the causes and effects of deforestation. 
However, anthropogenic disturbances cause degradation of the remaining Amazon forest and threaten their future. Among such disturbances, the most important are edge effects (due to deforestation and the resulting habitat fragmentation), timber extraction, fire, and extreme droughts that have been intensified by human-induced climate change. We synthesize knowledge on these disturbances that lead to Amazon forest degradation, including their causes and impacts, possible future extents, and some of the interventions required to curb them. ADVANCES Analysis of existing data on the extent of fire, edge effects, and timber extraction between 2001 and 2018 reveals that 0.36 × 10⁶ km² (5.5%) of the Amazon forest is under some form of degradation, which corresponds to 112% of the total area deforested in that period. Adding data on extreme droughts increases the estimate of total degraded area to 2.5 × 10⁶ km², or 38% of the remaining Amazonian forests. Estimated carbon loss from these forest disturbances ranges from 0.05 to 0.20 Pg C year−1 and is comparable to carbon loss from deforestation (0.06 to 0.21 Pg C year−1). Disturbances can bring about as much biodiversity loss as deforestation itself, and forests degraded by fire and timber extraction can have a 2 to 34% reduction in dry-season evapotranspiration. The underlying drivers of disturbances (e.g., agricultural expansion or demand for timber) generate material benefits for a restricted group of regional and global actors, whereas the burdens permeate across a broad range of scales and social groups ranging from nearby forest dwellers to urban residents of Andean countries. First-order 2050 projections indicate that the four main disturbances will remain a major threat and source of carbon fluxes to the atmosphere, independent of deforestation trajectories. 
OUTLOOK Whereas some disturbances such as edge effects can be tackled by curbing deforestation, others, like constraining the increase in extreme droughts, require additional measures, including global efforts to reduce greenhouse gas emissions. Curbing degradation will also require engaging with the diverse set of actors that promote it, operationalizing effective monitoring of different disturbances, and refining policy frameworks such as REDD+. These will all be supported by rapid and multidisciplinary advances in our socioenvironmental understanding of tropical forest degradation, providing a robust platform on which to co-construct appropriate policies and programs to curb it. An overview of tropical forest degradation processes in the Amazon. Underlying drivers (a few of which are shown in gray at the bottom) stimulate disturbances (timber extraction, fire, edge effects, and extreme drought) that cause forest degradation. A satellite illustrates the attempts to estimate degradation’s spatial extent and associated carbon losses. Impacts (in red and insets) are either local—causing biodiversity losses or affecting forest-dweller livelihoods—or remote, for example, with smoke affecting people’s health in cities or causing the melting of Andean glaciers owing to black carbon deposition. Credit: Alex Argozino/Studio Argozino
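The two headline area figures imply a total of remaining Amazon forest, and a quick cross-check shows they are internally consistent; the sketch below does that arithmetic (it uses only the percentages and areas reported above):

```python
# Cross-check the reported Amazon figures: 2.5e6 km2 is said to be 38% of
# remaining forest, and 0.36e6 km2 (fire, edge effects, timber only) 5.5%.
# Both should imply roughly the same total remaining forest area.
total_from_all = 2.5e6 / 0.38     # ~6.6 million km2
total_from_subset = 0.36e6 / 0.055  # ~6.5 million km2

print(f"implied remaining forest: {total_from_all / 1e6:.1f} vs "
      f"{total_from_subset / 1e6:.1f} million km2")
```

Both routes give about 6.5 to 6.6 million km² of remaining forest, so the percentages and absolute areas quoted in the review hang together.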

Journal ArticleDOI
Hiroshi Naraoka, Yu Takano, Jason P. Dworkin, Yasuhiro Oba, Kenji Hamase, Aogu Furusho, Nanako O. Ogawa, Minako Hashiguchi, Kazuhiko Fukushima, Dan Aoki, Ph. Schmitt-Kopplin, José C. Aponte, Eric T. Parker, Daniel P. Glavin, H. L. McLain, Jamie E. Elsila, Heather Graham, John M. Eiler, François-Régis Orthous-Daunay, Cédric Wolters, J. Isa, Véronique Vuitton, Roland Thissen, Saburo Sakai, Toshihiro Yoshimura, Toshiki Koga, Naohiko Ohkouchi, Yoshito Chikaraishi, Haruna Sugahara, Hajime Mita, Yoshihiro Furukawa, Norbert Hertkorn, Alexander Ruf, Hisayoshi Yurimoto, Tomoki Nakamura, Takaaki Noguchi, Ryuji Okazaki, Hikaru Yabuta, Kanako Sakamoto, Shogo Tachibana, Harold C. Connolly, Dante S. Lauretta, Masanao Abe, Toru Yada, M. Nishimura, Kasumi Yogata, Aiko Nakato, M. Yoshitake, Ayako I. Suzuki, Akiko Miyazaki, Shizuho Furuya, Kentaro Hatakeda, Hiromichi Soejima, Yuya Hitomi, K. Kumagai, Tomohiro Usui, Tasuku Hayashi, Daiki Yamamoto, Ryota Fukai, Kohei Kitazato, Seiji Sugita, Noriyuki Namiki, Masahiko Arakawa, H. Ikeda, Masateru Ishiguro, Naru Hirata, Koji Wada, Yoshiaki Ishihara, Rina Noguchi, Tomokatsu Morota, Naoya Sakatani, Koji Matsumoto, Hiroki Senshu, Rie Honda, Eri Tatsumi, Yasuhiro Yokota, Chikatoshi Honda, Tatsuhiro Michikami, Moe Matsuoka, Akira Miura, Hirotomo Noda, Tetsuya Yamada, Keisuke Yoshihara, Kosuke Kawahara, Masanobu Ozaki, Yuichi Iijima, Hajime Yano, Masahiro Hayakawa, Takahiro Iwata, Ryudo Tsukizaki, Hirotaka Sawada, Satoshi Hosoda, Kazunori Ogawa, Chisato Okamoto, Naoyuki Hirata, Kei Shirai, Yuri Shimaki, Manabu Yamada, Tatsuaki Okada, Yukio Yamamoto, Hiroshi Takeuchi, Atsushi Fujii, Yuto Takei, Kent Yoshikawa, Yuya Mimasu, Go Ono, Naoko Ogawa, Shota Kikuchi, Satoru Nakazawa, Fuyuto Terui, Satoshi Tanaka, Takanao Saiki, Makoto Yoshikawa, Sei-ichiro Watanabe, Yuichi Tsuda 
24 Feb 2023-Science
TL;DR: The Hayabusa2 spacecraft collected samples from the surface of the carbonaceous near-Earth asteroid (162173) Ryugu and brought them to Earth as mentioned in this paper .
Abstract: The Hayabusa2 spacecraft collected samples from the surface of the carbonaceous near-Earth asteroid (162173) Ryugu and brought them to Earth. The samples were expected to contain organic molecules, which record processes that occurred in the early Solar System. We analyzed organic molecules extracted from the Ryugu surface samples. We identified a variety of molecules containing the atoms CHNOS, formed by methylation, hydration, hydroxylation, and sulfurization reactions. Amino acids, aliphatic amines, carboxylic acids, polycyclic aromatic hydrocarbons, and nitrogen-heterocyclic compounds were detected, which had properties consistent with an abiotic origin. These compounds likely arose from an aqueous reaction on Ryugu’s parent body and are similar to the organics in Ivuna-type meteorites. These molecules can survive on the surfaces of asteroids and be transported throughout the Solar System. Description INTRODUCTION Surface material from the near-Earth carbonaceous (C-type) asteroid (162173) Ryugu was collected and brought to Earth by the Hayabusa2 spacecraft. Ryugu is a dark, primitive asteroid containing hydrous minerals that are similar to the most hydrated carbonaceous meteorites. C-type asteroids are common in the asteroid belt and have been proposed as the parent bodies of carbonaceous meteorites. The samples of Ryugu provide an opportunity to investigate organic compounds for comparison with those from carbonaceous meteorites. Unlike meteorites, the Ryugu samples were collected and delivered for study under controlled conditions, reducing terrestrial contamination and the effects of atmospheric entry. RATIONALE Primitive carbonaceous chondrite meteorites are known to contain a variety of soluble organic molecules (SOMs), including prebiotic molecules such as amino acids. Meteorites might have delivered amino acids and other prebiotic organic molecules to the early Earth and other rocky planets. 
Organic matter in the Ryugu samples is the product of physical and chemical processes that occurred in the interstellar medium, the protosolar nebula, and/or on the planetesimal that became Ryugu’s parent body. We investigated SOMs in Ryugu samples principally using mass spectrometry coupled with liquid or gas chromatography. RESULTS We identified numerous organic molecules in the Ryugu samples. Mass spectrometry detected hundreds of thousands of ion signals, which we assigned to ~20,000 elementary compositions consisting of carbon, hydrogen, nitrogen, oxygen, and/or sulfur. Fifteen amino acids, including glycine, alanine, and α-aminobutyric acid, were identified. These were present as racemic mixtures (equal right- and left-handed abundances), consistent with an abiotic origin. Aliphatic amines (such as methylamine) and carboxylic acids (such as acetic acid) were also detected, likely retained on Ryugu as organic salts. The presence of aromatic hydrocarbons, including alkylbenzenes, fluoranthene, and pyrene, implies hydrothermal processing on Ryugu’s parent body and/or presolar synthesis in the interstellar medium. Nitrogen-containing heterocyclic compounds were identified as their alkylated homologs, which could have been synthesized from simple aldehydes and ammonia. In situ analysis of a grain surface showed heterogeneous spatial distribution of alkylated homologs of nitrogen- and/or oxygen-containing compounds. CONCLUSION The wide variety of molecules identified indicates that prolonged chemical processes contributed to the synthesis of soluble organics on Ryugu or its parent body. The highly diverse mixture of SOMs in the samples resembles that seen in some carbonaceous chondrites. However, the SOM concentration in Ryugu is less than that in moderately aqueously altered CM (Mighei-type) chondrites, being more similar to that seen in warm aqueously altered CI (Ivuna-type) chondrites. 
The chemical diversity with low SOM concentration in Ryugu is consistent with aqueous organic chemistry at modest temperatures on Ryugu’s parent asteroid. The samples collected from the surface of Ryugu were exposed to the hard vacuum of space, energetic particle irradiation, heating by sunlight, and micrometeoroid impacts, but the SOM is still preserved, likely by being associated with minerals. The presence of prebiotic molecules on the asteroid surface suggests that these molecules can be transported throughout the Solar System. SOMs detected in surface samples of asteroid Ryugu. Chemical structural models are shown for example molecules from several classes identified in the Ryugu samples. Gray balls are carbon, white are hydrogen, red are oxygen, and blue are nitrogen. Clockwise from top: amines (represented by ethylamine), nitrogen-containing heterocycles (pyridine), a photograph of the sample vials for analysis, polycyclic aromatic hydrocarbons (PAHs) (pyrene), carboxylic acids (acetic acid), and amino acids (β-alanine). The central hexagon shows a photograph of the Ryugu sample in the sample collector of the Hayabusa2 spacecraft. The background image shows Ryugu in a photograph taken by Hayabusa2. CREDIT: JAXA, University of Tokyo, Kochi University, Rikkyo University, Nagoya University, Chiba Institute of Technology, Meiji University, University of Aizu, AIST, NASA, Dan Gallagher.

Journal ArticleDOI
06 Jan 2023-Science
TL;DR: Hata et al. as discussed by the authors reported that diet-induced obesity earlier in life triggers persistent reprogramming of the innate immune system, lasting long after normalization of metabolic abnormalities.
Abstract: Age-related macular degeneration is a prevalent neuroinflammatory condition and a major cause of blindness driven by genetic and environmental factors such as obesity. In diseases of aging, modifiable factors can be compounded over the life span. We report that diet-induced obesity earlier in life triggers persistent reprogramming of the innate immune system, lasting long after normalization of metabolic abnormalities. Stearic acid, acting through Toll-like receptor 4 (TLR4), is sufficient to remodel chromatin landscapes and selectively enhance accessibility at binding sites for activator protein-1 (AP-1). Myeloid cells show less oxidative phosphorylation and shift to glycolysis, ultimately leading to proinflammatory cytokine transcription, aggravation of pathological retinal angiogenesis, and neuronal degeneration associated with loss of visual function. Thus, a past history of obesity reprograms mononuclear phagocytes and predisposes to neuroinflammation. Description Lingering immune changes after obesity A past period of obesity caused by a high-fat diet in mice produces persistent changes in innate immunity even after weight loss and normalization of metabolism. Hata et al. found that such diet-induced obesity in mice, even after it was resolved, led to persistent epigenetic changes in chromatin in macrophages associated with increased expression of genes that function in inflammatory responses (see the Perspective by Mangum and Gallagher). Experiments with transplants of adipose tissue or bone marrow implicated alterations of myeloid cells in exacerbating inflammatory responses to experimentally induced injury in the eye. If similar processes occur in humans, the authors propose that such changes could contribute to predisposition to age-related macular degeneration associated with obesity. —LBR A history of obesity stably alters chromatin in macrophages and promotes inflammatory disease.

Journal ArticleDOI
26 Jan 2023-Science
TL;DR: In this article , exon junction complexes are identified as m6A suppressors that protect proximal RNA within coding sequences from methylation and regulate mRNA stability through m6A suppression.
Abstract: N6-methyladenosine (m6A) is the most abundant messenger RNA (mRNA) modification and plays crucial roles in diverse physiological processes. Using a massively parallel assay for m6A (MPm6A), we discover that m6A specificity is globally regulated by suppressors that prevent m6A deposition in unmethylated transcriptome regions. We identify exon junction complexes (EJCs) as m6A suppressors that protect exon junction–proximal RNA within coding sequences from methylation and regulate mRNA stability through m6A suppression. EJC suppression of m6A underlies multiple global characteristics of mRNA m6A specificity, with the local range of EJC protection sufficient to suppress m6A deposition in average-length internal exons but not in long internal and terminal exons. EJC-suppressed methylation sites colocalize with EJC-suppressed splice sites, which suggests that exon architecture broadly determines local mRNA accessibility to regulatory complexes. Description Methylation suppression Methylation of the N6 position of adenine (m6A) is a chemical tag for many messenger RNAs (mRNAs) added by m6A “writer” proteins. Tagged mRNAs are marked for regulation through m6A “reader” proteins, which bind methylated mRNAs and alter their expression. However, how the cell selects specific regions on mRNAs to be marked has remained unclear. He et al. identified a new function for the exon junction complex as an m6A “suppressor” that packages nearby mRNA and protects these regions from m6A marking. Exon architecture controls mRNA accessibility to m6A methylation, as well as potentially a broader set of regulatory complexes. —DJ Exon junction complexes are m6A suppressors that control global mRNA m6A specificity by protecting proximal RNA from methylation.

Journal ArticleDOI
17 Feb 2023-Science
TL;DR: In this article , a continuous-flow electrolyzer equipped with 25-square-centimeter effective-area gas diffusion electrodes was used for ammonia synthesis with a gold-platinum alloy catalyst.
Abstract: Ammonia is a critical component in fertilizers, pharmaceuticals, and fine chemicals and is an ideal, carbon-free fuel. Recently, lithium-mediated nitrogen reduction has proven to be a promising route for electrochemical ammonia synthesis at ambient conditions. In this work, we report a continuous-flow electrolyzer equipped with 25–square centimeter–effective area gas diffusion electrodes wherein nitrogen reduction is coupled with hydrogen oxidation. We show that the classical catalyst platinum is not stable for hydrogen oxidation in the organic electrolyte, but a platinum-gold alloy lowers the anode potential and avoids the detrimental decomposition of the organic electrolyte. At optimal operating conditions, we achieve, at 1 bar, a faradaic efficiency for ammonia production of up to 61 ± 1% and an energy efficiency of 13 ± 1% at a current density of −6 milliamperes per square centimeter. Description Protons from H2 for ammonia synthesis Electrochemical synthesis of ammonia from nitrogen (N2) and hydrogen (H2) could advantageously decentralize the current mass production of fertilizer. One promising method being explored involves lithium ion cathodic reduction in an organic solvent electrolyte, followed by reaction of the lithium with N2. However, conventional H2 oxidation catalysts for the complementary anodic process are unstable in these conditions. Fu et al. report that a gold–platinum alloy can robustly catalyze this oxidation and thus steadily produce the protons for ammonia under continuous flow conditions. —JSY A gold–platinum alloy catalyst proved stable for hydrogen oxidation to couple with nitrogen reduction in an ethereal solvent.
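The reported faradaic efficiency follows from the standard charge balance, FE = nF·n(NH3)/Q, with n = 3 electrons per NH3. A minimal sketch of the calculation; the ammonia quantity below is a hypothetical measurement chosen only to illustrate the arithmetic, not a value from the paper:

```python
# Faradaic efficiency of electrochemical NH3 synthesis: the fraction of
# passed charge that ends up in ammonia. Three electrons are needed per
# NH3 (overall: N2 + 6 H+ + 6 e- -> 2 NH3).
F = 96485.0              # Faraday constant, C/mol
n_electrons = 3

current = 6e-3 * 25.0    # |-6 mA/cm^2| on a 25 cm^2 electrode -> 0.15 A
duration = 3600.0        # s, one hour of operation
charge = current * duration          # total charge passed, C (~540 C)

mol_nh3 = 1.14e-3        # mol NH3 collected (hypothetical measurement)

faradaic_eff = n_electrons * F * mol_nh3 / charge
print(f"Faradaic efficiency: {faradaic_eff:.0%}")
```

With this hypothetical output the efficiency comes out near the reported ~61%; in practice the ammonia quantity would be measured, e.g., colorimetrically or by NMR.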

Journal ArticleDOI
20 Jan 2023-Science
TL;DR: In this paper , a single metasurface illuminated by visible light with different polarizations was used to generate 36 distinct images, forming a holographic keyboard pattern, which is the highest capacity reported for polarization multiplexing.
Abstract: Noise is usually undesired yet inevitable in science and engineering. However, by introducing engineered noise into the precise solution of the Jones matrix elements, we break the fundamental limit on the polarization multiplexing capacity of metasurfaces that is rooted in the dimension constraints of the Jones matrix. We experimentally demonstrate up to 11 independent holographic images using a single metasurface illuminated by visible light with different polarizations. To the best of our knowledge, it is the highest capacity reported for polarization multiplexing. Combining the position multiplexing scheme, the metasurface can generate 36 distinct images, forming a holographic keyboard pattern. This discovery implies a new paradigm for high-capacity optical display, information encryption, and data storage. Description For more information, make some noise Holograms can be considered large-capacity memory storage media that are capable of holding information about three-dimensional scenes and compressing it into a two-dimensional pattern (metasurface). Attempting to store more images onto the one hologram pattern tends to result in cross talk and corruption of the stored data. Xiong et al. found that introducing engineered noise into the process enabled an increase in storage capacity. This approach should work for metasurfaces in applications such as high-capacity optical displays, information encryption, and data storage. —ISO Engineered noise can increase the channel capacity for polarization multiplexing with a metasurface hologram.

Journal ArticleDOI
24 Mar 2023-Science
TL;DR: Zhang et al. as mentioned in this paper detected a major locus, Alkaline Tolerance 1 (AT1), specifically related to alkaline-salinity sensitivity in sorghum.
Abstract: The use of alkaline salt lands for crop production is hindered by a scarcity of knowledge and breeding efforts for plant alkaline tolerance. Through genome association analysis of sorghum, a naturally high-alkaline–tolerant crop, we detected a major locus, Alkaline Tolerance 1 (AT1), specifically related to alkaline-salinity sensitivity. An at1 allele with a carboxyl-terminal truncation increased sensitivity, whereas knockout of AT1 increased tolerance to alkalinity in sorghum, millet, rice, and maize. AT1 encodes an atypical G protein γ subunit that affects the phosphorylation of aquaporins to modulate the distribution of hydrogen peroxide (H2O2). These processes appear to protect plants against oxidative stress by alkali. Designing knockouts of AT1 homologs or selecting its natural nonfunctional alleles could improve crop productivity in sodic lands. Description Growth on alkaline soils Alkaline soils limit the ability of plants to take in nutrients and manage salt stress. Zhang et al. have now identified a locus in sorghum that determines sensitivity to salty alkaline soils. The Alkali Tolerance 1 (AT1) locus encodes a guanine nucleotide–binding protein gamma subunit that regulates the phosphorylation of aquaporins, channels that can transport hydrogen peroxide to alleviate oxidative stress. Crops that could better manage growth on alkaline soils could open up agriculture to the millions of hectares of alkaline soils. —PJH Regulation of phosphorylation on aquaporin underlies the ability of sorghum to tolerate alkaline soils. INTRODUCTION According to the Food and Agriculture Organization (FAO), there are currently >1 billion ha of land affected by salt. Among these, ~60% are classified as sodic soil areas. These have high pH and are dominated by sodium bicarbonate (NaHCO3) and sodium carbonate (Na2CO3). 
The effects of global warming and a lack of fresh water will lead to >50% of arable land becoming affected by salt by 2050, thus severely affecting the world’s food security. Identifying and/or engineering sodic-tolerant crops is imperative to solve this challenge. Although salinity tolerance has been studied extensively, alkalinity tolerance in plants has not been studied in depth. RATIONALE Sorghum originates from Africa, where it can grow in harsh environments. As a result, sorghum has evolved greater tolerance to multiple abiotic stresses than other crops. Some sorghum varieties can survive in sodic soil with a pH as high as 10.0. A genome-wide association study (GWAS) was performed with a large sorghum association panel consisting of 352 representative sorghum accessions. We detected a major locus, Alkaline Tolerance 1 (AT1), linked to alkaline tolerance. We found that AT1, encoding an atypical G protein γ subunit (a homolog of rice GS3), contributes to alkaline sensitivity by modulating the efflux of hydrogen peroxide (H2O2) under environmental stress. RESULTS On the basis of the GWAS results, we sequenced the cDNA regions of SbAT1 (Sorghum bicolor AT1) in 37 sorghum accessions with different degrees of alkaline sensitivity. Two typical haplotypes (Hap1 and Hap2) of SbAT1 were identified according to the five leading variant sites associated with sorghum alkali sensitivity. Hap1 encodes an intact SbAT1. A frameshift mutation (from “G” to “GGTGGC”) within Hap2 results in a premature stop codon probably encoding a truncated protein with only 136 amino acids at the N terminus (named Sbat1). To confirm the function of the AT1 locus, we generated a pair of near-isogenic lines (NILs) with two AT1 haplotypes to assess the allelic effect of AT1 on sorghum tolerance to alkali. 
We found that the Sbat1 allele (Hap2), encoding a truncated form of SbAT1, increased plant alkaline sensitivity compared with wild-type full-length SbAT1 (Hap1). Overexpression of AT1/GS3 reduced alkaline tolerance in sorghum and rice, and overexpression of the C-terminally truncated AT1/GS3 produced an even more severe alkaline-sensitive response. This was confirmed in millet and rice, which suggests that AT1/GS3 functions negatively in plant alkali tolerance. By contrast, knockout (ko) of AT1/GS3 increased tolerance to alkaline stress in sorghum, millet, rice, and maize, which indicates a conserved pathway in monocot crops. By immunoprecipitation in combination with mass spectrometry (IP-MS), we found that AT1/GS3 interacts with aquaporin PIP2s that are involved in reactive oxygen species (ROS) homeostasis. Genetic analysis showed that OsPIP2;1ko/2;2ko had lower alkaline tolerance than their wild-type control. A redox probe that senses cytoplasmic H2O2, Cyto-roGFP2-Orp1, was applied. The results showed that, upon alkaline treatment, the relative H2O2 level increased in OsPIP2;1ko/2;2ko compared with wild-type plants. These results suggested that the phosphorylation of aquaporins could modulate the efflux of H2O2. Gγ negatively regulates the phosphorylation of PIP2;1, leading to elevated ROS levels in plants under alkaline stress. To assess the application of the AT1/GS3 gene for crop production, field tests were carried out. We found that the nonfunctional mutant, either obtained from natural varieties or generated by gene editing in several monocots, including sorghum, millet, rice, and maize, can improve the field performance of crops in terms of biomass or grain production when cultivated on sodic lands. CONCLUSION We concluded that SbAT1 encodes an atypical G protein γ subunit and inhibits the phosphorylation of aquaporins that may be used as H2O2 exporters under alkaline stress. 
With this knowledge, genetically engineered crops with knockouts of AT1 homologs or use of natural nonfunctional alleles could greatly improve crop yield in sodic lands. This may contribute to maximizing the use of global sodic lands to ensure food security. Genetic modification of AT1 enhances alkaline stress tolerance. The Gγ subunit, AT1, pairs with Gβ to negatively modulate the phosphorylation level of PIP2 aquaporins. Thus, AT1 reduces the H2O2 export activity of PIP2s, leading to the overaccumulation of H2O2 and resulting in alkaline stress sensitivity. By contrast, the artificial or natural knockouts of AT1 homologs release the inhibition of PIP2s by AT1 in crops and have improved survival rates and yield under alkaline stress. [Figure created using BioRender]
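The Hap2 lesion described above (a “G” read as “GGTGGC”, i.e. a 5-nucleotide insertion) truncates the protein through a reading-frame shift. A toy sketch of the mechanism, using an invented sequence rather than the real SbAT1 gene:

```python
# Frameshift illustration: a 5-nt insertion (not a multiple of 3) shifts
# the downstream reading frame, which can expose a premature stop codon.
# The sequence below is invented for demonstration; it is not SbAT1.
STOPS = {"TAA", "TAG", "TGA"}

def translate_codons(seq):
    """Read codons from the start; stop at the first stop codon."""
    out = []
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        out.append(codon)
        if codon in STOPS:
            break
    return out

wild_type = "ATGGCTTTGACTGGAAATTGA"                 # 7 codons, stop at the end
mutant = wild_type[:3] + "GGTGGC" + wild_type[4:]   # the G at index 3 -> GGTGGC

print(translate_codons(wild_type))  # full-length open reading frame
print(translate_codons(mutant))     # premature stop after 5 codons
```

The insertion shifts every downstream codon boundary, so a stop triplet that was harmlessly out of frame in the wild type is now read in frame, mirroring how Hap2 yields a 136-residue truncated Sbat1.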

Journal ArticleDOI
13 Jan 2023-Science
TL;DR: Ma et al. as discussed by the authors revealed the underlying mechanisms of facet-dependent degradation of formamidinium lead iodide (FAPbI3) films and showed that the (100) facet is substantially more vulnerable to moisture-induced degradation than the (111) facet.
Abstract: A myriad of studies and strategies have already been devoted to improving the stability of perovskite films; however, the role of the different perovskite crystal facets in stability is still unknown. Here, we reveal the underlying mechanisms of facet-dependent degradation of formamidinium lead iodide (FAPbI3) films. We show that the (100) facet is substantially more vulnerable to moisture-induced degradation than the (111) facet. With combined experimental and theoretical studies, the degradation mechanisms are revealed; a strong water adhesion following an elongated lead-iodine (Pb-I) bond distance is observed, which leads to a δ-phase transition on the (100) facet. Through engineering, a higher surface fraction of the (111) facet can be achieved, and the (111)-dominated crystalline FAPbI3 films show exceptional stability against moisture. Our findings elucidate unknown facet-dependent degradation mechanisms and kinetics. Description Facet-stabilized films The degradation of formamidinium-based lead iodide perovskite has been shown to depend on which crystal facets are exposed to the surface. C. Ma et al. found that water adhesion was stronger on the (100) facet than on the (111) facet and made these materials more prone to moisture-induced degradation. The authors show how ligand-assisted perovskite film growth could be used to create (111)-dominated films with high stability against moisture (up to 85% relative humidity) and thermal stress (85°C) without additional surface passivation. —PDS Formamidinium-based lead iodide perovskite films can be stabilized by growing films with a greater fraction of (111) facets.

Journal ArticleDOI
06 Jan 2023-Science
TL;DR: Ren et al. as mentioned in this paper developed an approach for detecting the stochastic keyhole porosity generation events with sub-millisecond temporal resolution and near-perfect prediction rate.
Abstract: Porosity defects are currently a major factor that hinders the widespread adoption of laser-based metal additive manufacturing technologies. One common porosity occurs when an unstable vapor depression zone (keyhole) forms because of excess laser energy input. With simultaneous high-speed synchrotron x-ray imaging and thermal imaging, coupled with multiphysics simulations, we discovered two types of keyhole oscillation in laser powder bed fusion of Ti-6Al-4V. Amplifying this understanding with machine learning, we developed an approach for detecting the stochastic keyhole porosity generation events with submillisecond temporal resolution and near-perfect prediction rate. The highly accurate data labeling enabled by operando x-ray imaging allowed us to demonstrate a facile and practical way to adopt our approach in commercial systems. Description Tracking down the pores Laser fusion techniques build metal parts through a high-energy melting process that too often creates structural defects in the form of pores. Ren et al. used x-rays to track the formation of these pores while also making observations with a thermal imaging system. This setup allowed the authors to develop a high-accuracy method for detecting pore formation from that thermal signature with the help of a machine learning method. Implementing this sort of tracking of pore formation would help avoid building parts with high porosity that are more likely to fail. —BG Thermal imaging can detect pore formation during laser powder bed fusion, helping to ensure quality control.

Journal ArticleDOI
06 Jan 2023-Science
TL;DR: In this paper , the authors show that immotile cilia at the node undergo asymmetric deformation along the dorsoventral axis in response to the flow, which is the first sign of a left-right difference.
Abstract: Immotile cilia at the ventral node of mouse embryos are required for sensing leftward fluid flow that breaks left-right symmetry of the body. However, the flow-sensing mechanism has long remained elusive. In this work, we show that immotile cilia at the node undergo asymmetric deformation along the dorsoventral axis in response to the flow. Application of mechanical stimuli to immotile cilia by optical tweezers induced calcium ion transients and degradation of Dand5 messenger RNA (mRNA) in the targeted cells. The Pkd2 channel protein was preferentially localized to the dorsal side of immotile cilia, and calcium ion transients were preferentially induced by mechanical stimuli directed toward the ventral side. Our results uncover the biophysical mechanism by which immotile cilia at the node sense the direction of fluid flow. Description Going with the flow In most vertebrates, left-right differences are specified during early embryogenesis by a small cluster of cells called the left-right organizer. Within this organizer, motile cilia move rapidly to create a leftward directional flow of extracellular fluid that is the first sign of a left-right difference, but how this flow is sensed and transduced into later molecular and anatomical left-right asymmetry has been unclear. Working with mouse embryos, Katoh et al. found that immotile cilia sense the mechanical force generated by the flow and suggest a biophysical mechanism by which the direction of the flow is sensed. Independently, working in zebrafish, Djenoune et al. used optical tweezers and live imaging to show that immotile cilia in the organizer function as mechanosensors that translate extracellular fluid flow into calcium signals. When motile cilia were paralyzed and normal flow stopped, mechanical manipulation of the cilia could rescue, or even reverse, left-right patterning. Thus, ciliary force sensing is necessary, sufficient, and instructive for embryonic laterality. 
—SMH Left-right symmetry is broken when immotile cilia sense the direction of fluid flow in mouse embryos.

Journal ArticleDOI
03 Mar 2023-Science
TL;DR: In this paper , plant nucleotide-binding, leucine-rich repeat immune receptors (NLRs) are used as scaffolds for nanobody (single-domain antibody fragment) fusions that bind fluorescent proteins (FPs).
Abstract: Plant pathogens cause recurrent epidemics, threatening crop yield and global food security. Efforts to retool the plant immune system have been limited to modifying natural components and can be nullified by the emergence of new pathogen strains. Made-to-order synthetic plant immune receptors provide an opportunity to tailor resistance to pathogen genotypes present in the field. In this work, we show that plant nucleotide-binding, leucine-rich repeat immune receptors (NLRs) can be used as scaffolds for nanobody (single-domain antibody fragment) fusions that bind fluorescent proteins (FPs). These fusions trigger immune responses in the presence of the corresponding FP and confer resistance against plant viruses expressing FPs. Because nanobodies can be raised against most molecules, immune receptor-nanobody fusions have the potential to generate resistance against plant pathogens and pests delivering effectors inside host cells.

Journal ArticleDOI
13 Jan 2023-Science
TL;DR: In this paper , the authors used base editing to ablate the oxidative activation sites of CaMKIIδ, a primary driver of cardiac disease, and showed in cardiomyocytes derived from human induced pluripotent stem cells that editing the CaMK IIδ gene to eliminate oxidation-sensitive methionine residues confers protection from ischemia/reperfusion (IR) injury.
Abstract: CRISPR-Cas9 gene editing is emerging as a prospective therapy for genomic mutations. However, current editing approaches are directed primarily toward relatively small cohorts of patients with specific mutations. Here, we describe a cardioprotective strategy potentially applicable to a broad range of patients with heart disease. We used base editing to ablate the oxidative activation sites of CaMKIIδ, a primary driver of cardiac disease. We show in cardiomyocytes derived from human induced pluripotent stem cells that editing the CaMKIIδ gene to eliminate oxidation-sensitive methionine residues confers protection from ischemia/reperfusion (IR) injury. Moreover, CaMKIIδ editing in mice at the time of IR enables the heart to recover function from otherwise severe damage. CaMKIIδ gene editing may thus represent a permanent and advanced strategy for heart disease therapy. Description Editing away heart disease Ischemia-reperfusion injury, tissue damage that occurs after oxygen deprivation, can be observed after a variety of insults, including common ones such as heart attack or stroke. A key protein that plays a role in this damage is calcium/calmodulin-dependent protein kinase IIδ (CaMKIIδ). Lebek et al. found that targeting CaMKIIδ using CRISPR-Cas9 gene editing was a viable intervention to protect the heart tissue from ischemia-reperfusion damage in mouse models. Injecting gene editing reagents soon after ischemia exposure was sufficient for the mice to recover from severe heart damage, suggesting that it may not be too late to intervene after a heart attack happens. —YN Removing oxidative activation sites in CaMKIIδ by base editing sustains heart function after ischemia-reperfusion injury.