
Showing papers by "University of Texas at Dallas published in 2020"


Posted Content
TL;DR: A comprehensive review of recent pioneering efforts in semantic and instance segmentation is provided, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings.
Abstract: Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial body of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths, and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.

950 citations
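Performances in segmentation surveys such as this are typically reported as mean intersection-over-union (mIoU), the standard semantic-segmentation metric. As an illustrative sketch (not code from the paper), a minimal per-class IoU computation over flattened label maps:

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two flattened label maps."""
    ious = []
    for c in range(num_classes):
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(target) if v == c}
        union = p | t
        if union:                       # skip classes absent from both maps
            ious.append(len(p & t) / len(union))
    return sum(ious) / len(ious)

# Two 4-pixel label maps over 3 classes: per-class IoUs are 1/2, 1/2, 1
print(mean_iou([0, 0, 1, 2], [0, 1, 1, 2], num_classes=3))
```

Real evaluations accumulate per-class intersections and unions over a whole dataset before averaging; the per-image version above is only the core of the metric.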


DOI
Claudia Backes1, Claudia Backes2, Amr M. Abdelkader3, Concepción Alonso4, Amandine Andrieux-Ledier5, Raul Arenal6, Raul Arenal7, Jon Azpeitia7, Nilanthy Balakrishnan8, Luca Banszerus9, Julien Barjon5, Ruben Bartali10, Sebastiano Bellani11, Claire Berger12, Claire Berger13, Reinhard Berger14, M.M. Bernal Ortega15, Carlo Bernard16, Peter H. Beton8, André Beyer17, Alberto Bianco18, Peter Bøggild19, Francesco Bonaccorso11, Gabriela Borin Barin20, Cristina Botas, Rebeca A. Bueno7, Daniel Carriazo21, Andres Castellanos-Gomez7, Meganne Christian, Artur Ciesielski18, Tymoteusz Ciuk, Matthew T. Cole, Jonathan N. Coleman1, Camilla Coletti11, Luigi Crema10, Huanyao Cun16, Daniela Dasler22, Domenico De Fazio3, Noel Díez, Simon Drieschner23, Georg S. Duesberg24, Roman Fasel20, Roman Fasel25, Xinliang Feng14, Alberto Fina15, Stiven Forti11, Costas Galiotis26, Costas Galiotis27, Giovanni Garberoglio28, Jorge M. Garcia7, Jose A. Garrido, Marco Gibertini29, Armin Gölzhäuser17, Julio Gómez, Thomas Greber16, Frank Hauke22, Adrian Hemmi16, Irene Hernández-Rodríguez7, Andreas Hirsch22, Stephen A. Hodge3, Yves Huttel7, Peter Uhd Jepsen19, I. Jimenez7, Ute Kaiser30, Tommi Kaplas31, HoKwon Kim29, Andras Kis29, Konstantinos Papagelis32, Konstantinos Papagelis27, Kostas Kostarelos33, Aleksandra Krajewska34, Kangho Lee24, Changfeng Li35, Harri Lipsanen35, Andrea Liscio, Martin R. Lohe14, Annick Loiseau5, Lucia Lombardi3, María Francisca López7, Oliver Martin22, Cristina Martín36, Lidia Martínez7, José A. Martín-Gago7, José I. Martínez7, Nicola Marzari29, Alvaro Mayoral37, Alvaro Mayoral6, John B. McManus1, Manuela Melucci, Javier Méndez7, Cesar Merino, Pablo Merino7, Andreas Meyer22, Elisa Miniussi16, Vaidotas Miseikis11, Neeraj Mishra11, Vittorio Morandi, Carmen Munuera7, Roberto Muñoz7, Hugo Nolan1, Luca Ortolani, A. K. Ott38, A. K. 
Ott3, Irene Palacio7, Vincenzo Palermo39, John Parthenios27, Iwona Pasternak40, Amalia Patanè8, Maurizio Prato21, Maurizio Prato41, Henri Prevost5, Vladimir Prudkovskiy13, Nicola M. Pugno42, Nicola M. Pugno43, Nicola M. Pugno44, Teófilo Rojo45, Antonio Rossi11, Pascal Ruffieux20, Paolo Samorì18, Léonard Schué5, Eki J. Setijadi10, Thomas Seyller46, Giorgio Speranza10, Christoph Stampfer9, I. Stenger5, Wlodek Strupinski40, Yuri Svirko31, Simone Taioli28, Simone Taioli47, Kenneth B. K. Teo, Matteo Testi10, Flavia Tomarchio3, Mauro Tortello15, Emanuele Treossi, Andrey Turchanin48, Ester Vázquez36, Elvira Villaro, Patrick Rebsdorf Whelan19, Zhenyuan Xia39, Rositza Yakimova, Sheng Yang14, G. Reza Yazdi, Chanyoung Yim24, Duhee Yoon3, Xianghui Zhang17, Xiaodong Zhuang14, Luigi Colombo49, Andrea C. Ferrari3, Mar García-Hernández7 
Trinity College, Dublin1, Heidelberg University2, University of Cambridge3, Autonomous University of Madrid4, Université Paris-Saclay5, University of Zaragoza6, Spanish National Research Council7, University of Nottingham8, RWTH Aachen University9, Kessler Foundation10, Istituto Italiano di Tecnologia11, Georgia Institute of Technology12, University of Grenoble13, Dresden University of Technology14, Polytechnic University of Turin15, University of Zurich16, Bielefeld University17, University of Strasbourg18, Technical University of Denmark19, Swiss Federal Laboratories for Materials Science and Technology20, Ikerbasque21, University of Erlangen-Nuremberg22, Technische Universität München23, Bundeswehr University Munich24, University of Bern25, University of Patras26, Foundation for Research & Technology – Hellas27, Center for Theoretical Studies, University of Miami28, École Polytechnique Fédérale de Lausanne29, University of Ulm30, University of Eastern Finland31, Aristotle University of Thessaloniki32, University of Manchester33, Polish Academy of Sciences34, Aalto University35, University of Castilla–La Mancha36, ShanghaiTech University37, University of Exeter38, Chalmers University of Technology39, Warsaw University of Technology40, University of Trieste41, Instituto Politécnico Nacional42, University of Trento43, Queen Mary University of London44, University of the Basque Country45, Chemnitz University of Technology46, Charles University in Prague47, University of Jena48, University of Texas at Dallas49
29 Jan 2020
TL;DR: In this article, the authors present an overview of the main techniques for production and processing of graphene and related materials (GRMs), as well as the key characterization procedures, adopting a 'hands-on' approach, providing practical details and procedures as derived from literature and from the authors' experience, in order to enable the reader to reproduce the results.
Abstract: © 2020 The Author(s). We present an overview of the main techniques for production and processing of graphene and related materials (GRMs), as well as the key characterization procedures. We adopt a 'hands-on' approach, providing practical details and procedures as derived from the literature as well as from the authors' experience, in order to enable the reader to reproduce the results. Section I is devoted to 'bottom up' approaches, whereby individual constituents are pieced together into more complex structures. We consider graphene nanoribbons (GNRs) produced either by solution processing or by on-surface synthesis in ultra-high vacuum (UHV), as well as carbon nanomembranes (CNMs). Production of a variety of GNRs with tailored band gaps and edge shapes is now possible. CNMs can be tuned in terms of porosity, crystallinity and electronic behaviour. Section II covers 'top down' techniques. These rely on breaking down a layered precursor (in the graphene case usually natural crystals like graphite, or artificially synthesized materials such as highly oriented pyrolytic graphite) into monolayers or few-layer (FL) flakes. The main focus of this section is on various exfoliation techniques in liquid media, either intercalation or liquid phase exfoliation (LPE). The choice of precursor, exfoliation method and medium, as well as the control of parameters such as time or temperature, are crucial. A definite choice of parameters and conditions yields a particular material with specific properties that make it more suitable for a targeted application. We cover protocols for converting graphitic precursors to graphene oxide (GO). This is an important material for a range of applications in biomedicine, energy storage, nanocomposites, etc. Hummers' and modified Hummers' methods are used to make GO that can subsequently be reduced to obtain reduced graphene oxide (RGO) with a variety of strategies.
GO flakes are also employed to prepare three-dimensional (3d) low-density structures, such as sponges, foams, hydro- or aerogels. The assembly of flakes into 3d structures can provide improved mechanical properties. Aerogels with a highly open structure, with interconnected hierarchical pores, can enhance the accessibility to the whole surface area, as relevant for a number of applications, such as energy storage. The main recipes to yield graphite intercalation compounds (GICs) are also discussed. GICs are suitable precursors for covalent functionalization of graphene, but can also be used for the synthesis of uncharged graphene in solution. Degradation of the molecules intercalated in GICs can be triggered by high-temperature treatment or microwave irradiation, creating a gas pressure surge in graphite and exfoliation. Electrochemical exfoliation, by applying a voltage in an electrolyte to a graphite electrode, can be tuned by varying precursors, electrolytes and potential. Graphite electrodes can be either negatively or positively intercalated to obtain GICs that are subsequently exfoliated. We also discuss the materials that can be amenable to exfoliation, by employing a theoretical data-mining approach. The exfoliation of LMs usually results in a heterogeneous dispersion of flakes with different lateral size and thickness. This is a critical bottleneck for applications, and hinders the full exploitation of GRMs produced by solution processing. The establishment of procedures to control the morphological properties of exfoliated GRMs, which also need to be industrially scalable, is one of the key needs. Section III deals with the processing of flakes. (Ultra)centrifugation techniques have thus far been the most investigated for sorting GRMs following ultrasonication, shear mixing, ball milling, microfluidization, and wet-jet milling. They allow sorting by size and thickness.
Inks formulated from GRM dispersions can be printed using a number of processes, from inkjet to screen printing. Each technique has specific rheological requirements, as well as geometrical constraints. The solvent choice is critical, not only for the GRM stability, but also in terms of optimizing printing on different substrates, such as glass, Si, plastic, paper, etc., all with different surface energies. Chemical modification of such substrates is also a key step. Sections IV-VII are devoted to the growth of GRMs on various substrates and their processing after growth to place them on the surface of choice for specific applications. The substrate for graphene growth is a key determinant of the nature and quality of the resultant film. The lattice mismatch between graphene and substrate influences the resulting crystallinity. Growth on insulators, such as SiO2, typically results in films with small crystallites, whereas growth on the close-packed surfaces of metals yields highly crystalline films. Section IV outlines the growth of graphene on SiC substrates. This satisfies the requirements for electronic applications, with a well-defined graphene-substrate interface, low trapped impurities and no need for transfer. It also allows graphene structures and devices to be measured directly on the growth substrate. The flatness of the substrate results in graphene with minimal strain and ripples over large areas, allowing spectroscopies and surface science to be performed. We also discuss surface engineering of the resulting graphene by intercalation, its integration with Si wafers and the production of nanostructures with the desired shape, with no need for patterning. Section V deals with chemical vapour deposition (CVD) onto various transition metals and onto insulators. Growth on Ni results in graphitized polycrystalline films.
While the thickness of these films can be optimized by controlling the deposition parameters, such as the type of hydrocarbon precursor and temperature, it is difficult to attain single-layer graphene (SLG) across large areas, owing to the simultaneous nucleation/growth and dissolution/precipitation mechanisms. The differing characteristics of polycrystalline Ni films lead graphitic layers to grow at different rates, resulting in regions with differing numbers of graphitic layers. High-quality films can be grown on Cu. Cu is available in a variety of shapes and forms, such as foils, bulks, foams, thin films on other materials and powders, making it attractive for industrial production of large-area graphene films. The push to use CVD graphene in applications has also triggered a research line on direct growth on insulators. The quality of the resulting films is lower than that achievable to date on metals, but sufficient, in terms of transmittance and resistivity, for many applications, as described in section V. Transfer technologies are the focus of section VI. CVD synthesis of graphene on metals and bottom-up molecular approaches require SLG to be transferred to the final target substrates. To have technological impact, the advances in production of high-quality large-area CVD graphene must be commensurate with those in transfer and placement on the final substrates. This is a prerequisite for most applications, such as touch panels, anticorrosion coatings, transparent electrodes, gas sensors, etc. New strategies have improved the transferred graphene quality, making CVD graphene a feasible option for CMOS foundries. Methods based on complete etching of the metal substrate in suitable etchants, typically iron chloride, ammonium persulfate, or hydrogen chloride, although reliable, are time- and resource-consuming, cause damage to graphene, and produce metal and etchant residues.
Electrochemical delamination in a low-concentration aqueous solution is an alternative. In this case metallic substrates can be reused. Dry transfer is less detrimental to the SLG quality, enabling a deterministic transfer. There is a large range of layered materials (LMs) beyond graphite. Only a few of them have been exfoliated and fully characterized so far. Section VII deals with the growth of some of these materials. Amongst them, h-BN and transition metal tri- and di-chalcogenides are of paramount importance. The growth of h-BN is at present considered essential for the development of graphene in (opto)electronic applications, as h-BN is ideal as a capping layer or substrate. The interesting optical and electronic properties of transition metal dichalcogenides (TMDs) also require the development of scalable methods for their production. Large-scale growth using chemical/physical vapour deposition or thermally assisted conversion has been thus far limited to a small set of materials, such as h-BN or some TMDs. Heterostructures could also be directly grown.

330 citations


Journal ArticleDOI
TL;DR: In this article, a downlink multiple-input single-output intelligent reflecting surface (IRS) aided non-orthogonal multiple access (NOMA) system is investigated, where a base station (BS) serves multiple users with the aid of IRSs.
Abstract: This paper investigates a downlink multiple-input single-output intelligent reflecting surface (IRS) aided non-orthogonal multiple access (NOMA) system, where a base station (BS) serves multiple users with the aid of IRSs. Our goal is to maximize the sum rate of all users by jointly optimizing the active beamforming at the BS and the passive beamforming at the IRS, subject to successive interference cancellation decoding rate conditions and IRS reflecting elements constraints. In terms of the characteristics of reflection amplitudes and phase shifts, we consider ideal and non-ideal IRS assumptions. To tackle the formulated non-convex problems, we propose efficient algorithms by invoking alternating optimization, which design the active beamforming and passive beamforming alternately. For the ideal IRS scenario, the two subproblems are solved by invoking the successive convex approximation technique. For the non-ideal IRS scenario, constant modulus IRS elements are further divided into continuous phase shifts and discrete phase shifts. To tackle the passive beamforming problem with continuous phase shifts, a novel algorithm is developed by utilizing the sequential rank-one constraint relaxation approach, which is guaranteed to find a locally optimal rank-one solution. Then, a quantization-based scheme is proposed for discrete phase shifts. Finally, numerical results illustrate that: i) the system sum rate can be significantly improved by deploying the IRS with the proposed algorithms; ii) 3-bit phase shifters are capable of achieving almost the same performance as the ideal IRS; iii) the proposed IRS-aided NOMA systems achieve higher system sum rate than the IRS-aided orthogonal multiple access system.

325 citations
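The finding that 3-bit phase shifters come close to the ideal IRS can be illustrated with a toy single-user sketch (illustrative channel values and noise power, not the paper's algorithm): co-phasing each reflected path with the direct link maximizes the effective channel magnitude, and quantizing those phases to 8 levels costs little rate.

```python
import cmath
import math

def channel_gain(h_d, h_r, g, thetas):
    """Effective channel: direct path plus IRS-reflected paths with phase shifts."""
    return h_d + sum(hr * cmath.exp(1j * t) * gg
                     for hr, gg, t in zip(h_r, g, thetas))

def optimal_phases(h_d, h_r, g):
    """Co-phase every reflected path with the direct path (continuous shifts)."""
    return [cmath.phase(h_d) - cmath.phase(hr * gg) for hr, gg in zip(h_r, g)]

def quantize(thetas, bits):
    """Round each phase to the nearest of 2**bits uniformly spaced levels."""
    step = 2 * math.pi / (2 ** bits)
    return [round(t / step) * step for t in thetas]

# Toy channels (hypothetical values) and an assumed noise power of 0.01
h_d = 0.5 + 0.2j
h_r = [0.3 + 0.1j, -0.2 + 0.4j, 0.1 - 0.3j]   # BS -> IRS elements
g   = [0.6 - 0.1j, 0.5 + 0.2j, -0.4 + 0.3j]   # IRS elements -> user

cont = optimal_phases(h_d, h_r, g)
rate_cont = math.log2(1 + abs(channel_gain(h_d, h_r, g, cont)) ** 2 / 0.01)
rate_3bit = math.log2(1 + abs(channel_gain(h_d, h_r, g, quantize(cont, 3))) ** 2 / 0.01)
print(rate_cont, rate_3bit)   # 3-bit phases land close to the continuous optimum
```

With co-phasing, the effective magnitude reaches the triangle-inequality bound |h_d| + Σ|h_r·g|; 3-bit quantization perturbs each phase by at most π/8, so the reflected contributions shrink only slightly.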


Posted Content
TL;DR: A comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies is provided in this article.
Abstract: Reconfigurable intelligent surfaces (RISs), also known as intelligent reflecting surfaces (IRSs), have received significant attention for their potential to enhance the capacity and coverage of wireless networks by smartly reconfiguring the wireless propagation environment. Therefore, RISs are considered a promising technology for the sixth-generation (6G) communication networks. In this context, we provide a comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies. We describe the basic principles of RISs both from physics and communications perspectives, based on which we present performance evaluation of multi-antenna assisted RIS systems. In addition, we systematically survey existing designs for RIS-enhanced wireless networks encompassing performance analysis, information theory, and performance optimization perspectives. Furthermore, we survey existing research contributions that apply machine learning for tackling challenges in dynamic scenarios, such as random fluctuations of wireless channels and user mobility in RIS-enhanced wireless networks. Last but not least, we identify major issues and research opportunities associated with the integration of RISs and other emerging technologies for application to next-generation networks.

323 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, Ovsat Abdinov4 +2934 more (199 institutions)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 GeV are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 TeV for slepton-mediated decays, whereas for slepton-pair production masses up to 700 GeV are excluded assuming three generations of mass-degenerate sleptons.

272 citations


Journal ArticleDOI
TL;DR: In this paper, a flexible reduced graphene oxide (rGO) sheet was crosslinked by a conjugated molecule (1-aminopyrene-disuccinimidyl suberate, AD), which reduced the voids within the graphene sheet and improved the alignment of graphene platelets, resulting in much higher compactness and high toughness.
Abstract: Flexible reduced graphene oxide (rGO) sheets are being considered for applications in portable electrical devices and flexible energy storage systems. However, the poor mechanical properties and electrical conductivities of rGO sheets are limiting factors for the development of such devices. Here we use MXene (M) nanosheets to functionalize graphene oxide platelets through Ti-O-C covalent bonding to obtain MrGO sheets. A MrGO sheet was crosslinked by a conjugated molecule (1-aminopyrene-disuccinimidyl suberate, AD). The incorporation of MXene nanosheets and AD molecules reduces the voids within the graphene sheet and improves the alignment of graphene platelets, resulting in much higher compactness and high toughness. In situ Raman spectroscopy and molecular dynamics simulations reveal the synergistic interfacial interaction mechanisms of Ti-O-C covalent bonding, sliding of MXene nanosheets, and π-π bridging. Furthermore, a supercapacitor based on our super-tough MXene-functionalized graphene sheets provides a combination of energy and power densities that are high for flexible supercapacitors.

257 citations


Journal ArticleDOI
TL;DR: In this article, the authors reviewed the rapidly growing domain of global value chain (GVC) research by analyzing several highly cited conceptual frameworks and then appraising GVC studies published in such disciplines as international business, general management, supply chain management, operations management, economic geography, regional and development studies, and international political economy.
Abstract: This article reviews the rapidly growing domain of global value chain (GVC) research by analyzing several highly cited conceptual frameworks and then appraising GVC studies published in such disciplines as international business, general management, supply chain management, operations management, economic geography, regional and development studies, and international political economy. Building on GVC conceptual frameworks, we conducted the review based on a comparative institutional perspective that encompasses critical governance issues at the micro-, GVC, and macro-levels. Our results indicate that some of these issues have garnered significantly more scholarly attention than others. We suggest several future research topics such as microfoundations of GVC governance, GVC mapping, learning, impact of lead firm ownership and strategy, dynamics of GVC arrangements, value creation and distribution, financialization, digitization, the impact of renewed protectionism, the impact of GVCs on their macro-environment, and chain-level performance management.

223 citations


Journal ArticleDOI
TL;DR: Data from Dallas, Texas are used to examine the extent to which a stay-at-home/shelter-in-place lockdown-style order was associated with an increase in domestic violence; the results provide some evidence for a short-term spike in the 2 weeks after the lockdown was instituted but a decrease thereafter.
Abstract: COVID-19 has wreaked havoc on the lives of persons around the world, and social scientists are just beginning to understand its consequences for human behavior. One policy that public health officials put in place to help stop the spread of the virus was the stay-at-home/shelter-in-place lockdown-style order. While designed to protect people from the coronavirus, one potential and unintended consequence of such orders could be an increase in domestic violence, including abuse of partners, elders, or children. Stay-at-home orders result in perpetrators and victims being confined in close quarters for long periods of time. In this study, we use data from Dallas, Texas to examine the extent to which a local order was associated with an increase in domestic violence. Our results provide some evidence for a short-term spike in the 2 weeks after the lockdown was instituted but a decrease thereafter. We note that it is difficult to determine just how much the lockdown was the cause of this increase, as the domestic violence trend was already rising prior to the order.

215 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. studied the relationship among green supply chain management (GSCM) pressures, practices, and performance under the moderating effect of quick response (QR) technology, and established several results.

208 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the downlink communications of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) systems and formulated a joint optimization problem over the channel assignment, decoding order of NOMA users, power allocation, and reflection coefficients.
Abstract: This article investigates the downlink communications of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) systems. To maximize the system throughput, we formulate a joint optimization problem over the channel assignment, decoding order of NOMA users, power allocation, and reflection coefficients. The formulated problem is proved to be NP-hard. To tackle this problem, a novel three-step resource allocation algorithm is proposed. Firstly, the channel assignment problem is solved by a many-to-one matching algorithm. Secondly, by considering the IRS reflection coefficients design, a low-complexity decoding order optimization algorithm is proposed. Thirdly, given a channel assignment and decoding order, a joint optimization algorithm is proposed for solving the joint power allocation and reflection coefficient design problem. Numerical results illustrate that: i) with the aid of the IRS, the proposed IRS-NOMA system outperforms the conventional NOMA system without the IRS in terms of system throughput; ii) the proposed IRS-NOMA system achieves higher system throughput than the IRS-assisted orthogonal multiple access (IRS-OMA) systems; iii) the performance gains of the IRS-NOMA and the IRS-OMA systems can be enhanced by carefully choosing the location of the IRS.

190 citations
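The decoding-order step above builds on standard downlink NOMA successive interference cancellation (SIC): each user decodes and cancels the signals of weaker users, while the signals of stronger users remain as interference. A minimal sketch of the resulting per-user rates, with toy channel gains and power allocation (hypothetical values, not the paper's optimized solution):

```python
import math

def noma_rates(gains, powers, noise=1.0):
    """Downlink NOMA with SIC: users are ordered by channel gain; each user's
    own signal sees interference only from users stronger than itself."""
    order = sorted(range(len(gains)), key=lambda i: gains[i])  # weakest first
    rates = {}
    for pos, i in enumerate(order):
        interference = sum(powers[j] for j in order[pos + 1:]) * gains[i]
        rates[i] = math.log2(1 + powers[i] * gains[i] / (interference + noise))
    return rates

# Two users: user 0 is weak (low gain, allocated more power), user 1 is strong
r = noma_rates(gains=[0.2, 1.0], powers=[8.0, 2.0])
print(r)
```

The strong user, having cancelled the weak user's signal, sees no residual NOMA interference, which is why its rate term has only the noise in the denominator.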


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4 +2954 more (198 institutions)
TL;DR: In this paper, to cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing to about 60, the ATLAS trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to $2.1\times10^{34}$ cm$^{-2}$ s$^{-1}$, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4 +2962 more (199 institutions)
TL;DR: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector.
Abstract: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector. The search for heavy resonances is performed over the mass range 0.2-2.5 TeV for the $\tau^{+}\tau^{-}$ decay with at least one τ-lepton decaying into final states with hadrons. The data are in good agreement with the background prediction of the Standard Model. In the $M_{h}^{125}$ scenario of the minimal supersymmetric standard model, values of $\tan\beta>8$ and $\tan\beta>21$ are excluded at the 95% confidence level for neutral Higgs boson masses of 1.0 and 1.5 TeV, respectively, where $\tan\beta$ is the ratio of the vacuum expectation values of the two Higgs doublets.

Journal ArticleDOI
31 Jan 2020 - Science
TL;DR: Isotopically pure cubic boron nitride has an ultrahigh thermal conductivity, 75% that of diamond, which makes cBN a promising material for microelectronics thermal management, high-power electronics, and optoelectronics applications.
Abstract: Materials with high thermal conductivity (κ) are of technological importance and fundamental interest. We grew cubic boron nitride (cBN) crystals with controlled abundance of boron isotopes and measured κ greater than 1600 watts per meter-kelvin at room temperature in samples with enriched 10B or 11B. In comparison, we found that the isotope enhancement of κ is considerably lower for boron phosphide and boron arsenide as the identical isotopic mass disorder becomes increasingly invisible to phonons. The ultrahigh κ in conjunction with its wide bandgap (6.2 electron volts) makes cBN a promising material for microelectronics thermal management, high-power electronics, and optoelectronics applications.
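The isotope effect described above is commonly quantified by the mass-variance parameter of Tamura's phonon-isotope scattering model, g = Σᵢ cᵢ(1 − mᵢ/m̄)² with m̄ = Σᵢ cᵢmᵢ. A small sketch comparing natural boron with roughly 99% ¹¹B enrichment (standard isotopic abundances and masses; the specific numbers are illustrative, not data from the paper):

```python
def mass_variance(fractions_masses):
    """Isotope mass-variance parameter g = sum_i c_i * (1 - m_i / m_bar)**2."""
    m_bar = sum(c * m for c, m in fractions_masses)
    return sum(c * (1 - m / m_bar) ** 2 for c, m in fractions_masses)

natural_b  = [(0.199, 10.0129), (0.801, 11.0093)]   # natural boron abundances
enriched_b = [(0.01, 10.0129), (0.99, 11.0093)]     # ~99% 11B enrichment

# Enrichment cuts the mass disorder seen by phonons by more than an order
# of magnitude, which is what drives the reported jump in conductivity
print(mass_variance(natural_b), mass_variance(enriched_b))
```

A fully enriched (single-isotope) crystal would have g = 0, i.e. no isotope scattering at all; the actual conductivity gain also depends on how strongly the remaining phonon-phonon scattering dominates, which is the paper's point about BP and BAs.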

Proceedings ArticleDOI
27 Jun 2020
TL;DR: DLFix is a two-tier DL model that treats APR as code transformation learning from the prior bug fixes and the surrounding code contexts of the fixes, and does not require hard-coding of bug-fixing patterns as in pattern-based APR tools.
Abstract: Automated Program Repair (APR) is very useful in helping developers in the process of software development and maintenance. Despite recent advances in deep learning (DL), the DL-based APR approaches still have limitations in learning bug-fixing code changes and the context of the surrounding source code of the bug-fixing code changes. These limitations lead to incorrect fixing locations or fixes. In this paper, we introduce DLFix, a two-tier DL model that treats APR as code transformation learning from the prior bug fixes and the surrounding code contexts of the fixes. The first layer is a tree-based RNN model that learns the contexts of bug fixes and its result is used as an additional weighting input for the second layer designed to learn the bug-fixing code transformations. We conducted several experiments to evaluate DLFix on two benchmarks, Defects4J and Bugs.jar, and a newly built bug dataset with a total of more than 20K real-world bugs in eight projects. We compared DLFix against a total of 13 state-of-the-art pattern-based APR tools. Our results show that DLFix can auto-fix more bugs than 11 of them, and is comparable and complementary to the top two pattern-based APR tools, which miss 7 and 11 unique bugs, respectively, that DLFix can fix. Importantly, DLFix is fully automated and data-driven, and does not require hard-coding of bug-fixing patterns as in those tools. We also compared DLFix against 4 state-of-the-art deep-learning-based APR models. DLFix is able to fix 2.5 times more bugs than the best performing baseline.

Posted Content
TL;DR: This paper provides a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security, and identifies several future research directions and challenges.
Abstract: Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the main use cases of the fifth-generation (5G) and beyond 5G (B5G) wireless networks. A typical application of massive access is the cellular Internet of Things (IoT). Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of IoT devices. Hence, the main characteristics of massive access include low power, massive connectivity, and broad coverage, which require new concepts, theories, and paradigms for the design of next-generation cellular networks. This paper presents a comprehensive survey of aspects of massive access design for B5G wireless networks. Specifically, we provide a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security. Furthermore, several future research directions and challenges are identified.

Journal ArticleDOI
TL;DR: In this article, the authors present three attacks, namely signal probability skew (SPS), AppSAT guided removal (AGR), and Sensitization guided SAT (SGS), that can break Anti-SAT and AND-tree insertion (ATI) within minutes.
Abstract: With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT-attack-resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely the “signal probability skew” (SPS), “AppSAT guided removal” (AGR), and “sensitization guided SAT” (SGS) attacks, that can break Anti-SAT and ATI within minutes.
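The structural trace exploited by the SPS attack can be illustrated with a few lines of probability propagation; the netlist below is a bare AND-tree with unbiased inputs, a deliberately simplified stand-in for the skewed logic blocks Anti-SAT introduces:

```python
# Minimal illustration of the idea behind the "signal probability skew" (SPS)
# attack: assuming independent inputs, propagate Pr[signal = 1] through the
# netlist and flag wires whose probability sits far from 0.5. A wide AND-tree
# yields a heavily skewed output, which is the structural trace the attack
# exploits to locate and remove the protection logic.

def and_gate(p_a, p_b):
    """Pr[out = 1] for an AND gate with independent inputs."""
    return p_a * p_b

def skew(p):
    """Signal probability skew: distance of Pr[out = 1] from the unbiased 0.5."""
    return p - 0.5

def and_tree_prob(n_inputs, p_in=0.5):
    """Output probability of an n-input AND tree with i.i.d. inputs."""
    p = p_in
    for _ in range(n_inputs - 1):
        p = and_gate(p, p_in)
    return p

p8 = and_tree_prob(8)
print(p8, skew(p8))   # 0.00390625 -0.49609375  (skew near the -0.5 extreme)
```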

Journal ArticleDOI
TL;DR: The current knowledge on the infection strategies and regulatory networks controlling virulence and adaptation mechanisms in Xanthomonas species is summarized, and the novel opportunities that this body of work has provided for disease control and plant health are discussed.
Abstract: Xanthomonas is a well-studied genus of bacterial plant pathogens whose members cause a variety of diseases in economically important crops worldwide. Genomic and functional studies of these phytopathogens have provided significant understanding of microbial-host interactions, bacterial virulence and host adaptation mechanisms including microbial ecology and epidemiology. In addition, several strains of Xanthomonas are important as producers of the extracellular polysaccharide, xanthan, used in the food and pharmaceutical industries. This polymer has also been implicated in several phases of the bacterial disease cycle. In this review, we summarise the current knowledge on the infection strategies and regulatory networks controlling virulence and adaptation mechanisms from Xanthomonas species and discuss the novel opportunities that this body of work has provided for disease control and plant health.

Journal ArticleDOI
TL;DR: This review covers mostly recent advances in C–H functionalization reactions involving the HAT step to carbon-centered radicals, which allow for relatively easy activation of inert C-H bonds under mild conditions.
Abstract: Selective functionalization of ubiquitous unactivated C–H bonds is a continuous quest for synthetic organic chemists. In addition to transition metal catalysis, which typically operates under a two-electron manifold, a recent renaissance in the radical approach relying on the hydrogen atom transfer (HAT) process has led to tremendous growth in the area. Despite several challenges, protocols proceeding via HAT are highly sought after as they allow for relatively easy activation of inert C–H bonds under mild conditions leading to a broader scope and higher functional group tolerance and sometimes complementary reactivity over methods relying on traditional transition metal catalysis. A number of methods operating via heteroatom-based HAT have been extensively reported over the past few years, while methods employing more challenging carbon analogues have been less explored. Recent developments of mild methodologies for generation of various carbon-centered radical species enabled their utilization in the HAT process, which, in turn, led to the development of remote C(sp3)–H functionalization reactions of alcohols, amines, amides and related compounds. This review covers mostly recent advances in C–H functionalization reactions involving the HAT step to carbon-centered radicals.

Journal ArticleDOI
08 Sep 2020-JAMA
TL;DR: Among patients with moderate to severe TBI, out-of-hospital tranexamic acid administration within 2 hours of injury compared with placebo did not significantly improve 6-month neurologic outcome as measured by the Glasgow Outcome Scale-Extended.
Abstract: Importance Traumatic brain injury (TBI) is the leading cause of death and disability due to trauma. Early administration of tranexamic acid may benefit patients with TBI. Objective To determine whether tranexamic acid treatment initiated in the out-of-hospital setting within 2 hours of injury improves neurologic outcome in patients with moderate or severe TBI. Design, Setting, and Participants Multicenter, double-blinded, randomized clinical trial at 20 trauma centers and 39 emergency medical services agencies in the US and Canada from May 2015 to November 2017. Eligible participants (N = 1280) included out-of-hospital patients with TBI aged 15 years or older with Glasgow Coma Scale score of 12 or less and systolic blood pressure of 90 mm Hg or higher. Interventions Three interventions were evaluated, with treatment initiated within 2 hours of TBI: out-of-hospital tranexamic acid (1 g) bolus and in-hospital tranexamic acid (1 g) 8-hour infusion (bolus maintenance group; n = 312), out-of-hospital tranexamic acid (2 g) bolus and in-hospital placebo 8-hour infusion (bolus only group; n = 345), and out-of-hospital placebo bolus and in-hospital placebo 8-hour infusion (placebo group; n = 309). Main Outcomes and Measures The primary outcome was favorable neurologic function at 6 months (Glasgow Outcome Scale-Extended score >4 [moderate disability or good recovery]) in the combined tranexamic acid group vs the placebo group. Asymmetric significance thresholds were set at 0.1 for benefit and 0.025 for harm. There were 18 secondary end points, of which 5 are reported in this article: 28-day mortality, 6-month Disability Rating Scale score (range, 0 [no disability] to 30 [death]), progression of intracranial hemorrhage, incidence of seizures, and incidence of thromboembolic events. 
Results Among 1063 participants, a study drug was not administered to 96 randomized participants and 1 participant was excluded, resulting in 966 participants in the analysis population (mean age, 42 years; 255 [74%] male participants; mean Glasgow Coma Scale score, 8). Of these participants, 819 (84.8%) were available for primary outcome analysis at 6-month follow-up. The primary outcome occurred in 65% of patients in the tranexamic acid groups vs 62% in the placebo group (difference, 3.5% [90% 1-sided confidence limit for benefit, −0.9%]; P = .16; [97.5% 1-sided confidence limit for harm, 10.2%]; P = .84). There was no statistically significant difference in 28-day mortality between the tranexamic acid groups vs the placebo group (14% vs 17%; difference, −2.9% [95% CI, −7.9% to 2.1%]; P = .26), 6-month Disability Rating Scale score (6.8 vs 7.6; difference, −0.9 [95% CI, −2.5 to 0.7]; P = .29), or progression of intracranial hemorrhage (16% vs 20%; difference, −5.4% [95% CI, −12.8% to 2.1%]; P = .16). Conclusions and Relevance Among patients with moderate to severe TBI, out-of-hospital tranexamic acid administration within 2 hours of injury compared with placebo did not significantly improve 6-month neurologic outcome as measured by the Glasgow Outcome Scale-Extended. Trial Registration ClinicalTrials.gov Identifier: NCT01990768
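As a rough sanity check on the reported 28-day mortality comparison, an unadjusted normal-approximation confidence interval for the difference in proportions can be reconstructed from the abstract's group sizes (312 + 345 = 657 tranexamic acid, 309 placebo); the trial's own estimate may use adjusted or exact methods, so only approximate agreement with the published interval is expected:

```python
from math import sqrt

# Unadjusted two-proportion comparison (Wald-type 95% CI); group sizes are
# taken from the abstract, and the proportions are the rounded 14% vs 17%.
p1, n1 = 0.14, 657   # tranexamic acid groups combined
p2, n2 = 0.17, 309   # placebo group

diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# roughly -3.0% (95% CI -8.0% to +2.0%), close to the reported -2.9% [-7.9% to 2.1%]
```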


Journal ArticleDOI
TL;DR: In this paper, the authors classified the associations of elements in coal into three categories: organic associations, mineral associations, and intimate organic–mineral associations.

Journal ArticleDOI
01 Jan 2020-Nature
TL;DR: The data suggest a compelling mechanism for how FACT maintains chromatin integrity during polymerase passage, by facilitating removal of the H2A-H2B dimer, stabilizing intermediate subnucleosomal states and promoting nucleosome reassembly.
Abstract: The organization of genomic DNA into nucleosomes profoundly affects all DNA-related processes in eukaryotes. The histone chaperone known as ‘facilitates chromatin transcription’ (FACT; consisting of subunits SPT16 and SSRP1) promotes both disassembly and reassembly of nucleosomes during gene transcription, DNA replication and DNA repair. However, the mechanism by which FACT causes these opposing outcomes is unknown. Here we report two cryo-electron-microscopic structures of human FACT in complex with partially assembled subnucleosomes, with supporting biochemical and hydrogen–deuterium exchange data. We find that FACT is engaged in extensive interactions with nucleosomal DNA and all histone variants. The large DNA-binding surface on FACT appears to be protected by the carboxy-terminal domains of both of its subunits, and this inhibition is released by interaction with H2A–H2B, allowing FACT–H2A–H2B to dock onto a complex containing DNA and histones H3 and H4. SPT16 binds nucleosomal DNA and tethers H2A–H2B through its carboxy-terminal domain by acting as a placeholder for DNA. SSRP1 also contributes to DNA binding, and can assume two conformations, depending on whether a second H2A–H2B dimer is present. Our data suggest a compelling mechanism for how FACT maintains chromatin integrity during polymerase passage, by facilitating removal of the H2A–H2B dimer, stabilizing intermediate subnucleosomal states and promoting nucleosome reassembly. Our findings reconcile discrepancies regarding the many roles of FACT and underscore the dynamic interactions between histone chaperones and nucleosomes. Two cryo-electron-microscopy structures of the histone chaperone FACT interacting with components of nucleosomes shed light on how FACT manipulates nucleosomes to promote transcription, DNA repair and DNA replication.

Journal ArticleDOI
TL;DR: To analyze and compare solar forecasts, the well-established Murphy–Winkler framework for distribution-oriented forecast verification is recommended as standard practice, together with the root mean square error (RMSE) skill score based on the optimal convex combination of the climatology and persistence methods.
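The recommended RMSE skill score can be sketched as follows; the toy series and the grid search over the convex weight are illustrative only, and real verification would follow the cited framework in full:

```python
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def rmse_skill_score(forecast, obs, persistence, climatology, steps=101):
    """skill = 1 - RMSE(forecast) / RMSE(best convex mix of persistence and climatology).

    The reference forecast is a * persistence + (1 - a) * climatology, with the
    weight a in [0, 1] chosen (here by grid search) to minimise the reference RMSE.
    """
    best_ref = min(
        rmse([a * p + (1 - a) * c for p, c in zip(persistence, climatology)], obs)
        for a in (i / (steps - 1) for i in range(steps))
    )
    return 1.0 - rmse(forecast, obs) / best_ref

# toy clear-sky-index series (made up for illustration)
obs         = [0.8, 0.6, 0.7, 0.9, 0.5]
forecast    = [0.78, 0.62, 0.69, 0.88, 0.55]
persistence = [0.7, 0.8, 0.6, 0.7, 0.9]            # previous-step observations
climatology = [sum(obs) / len(obs)] * len(obs)      # long-term mean as a constant

ss = rmse_skill_score(forecast, obs, persistence, climatology)
print(round(ss, 3))   # 0.805 -- positive, i.e. the forecast beats the reference
```

Using the optimal convex combination as the reference (rather than climatology or persistence alone) prevents a forecast from looking skillful merely because one naive baseline happens to be weak on the evaluation period.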

Journal ArticleDOI
TL;DR: Elagolix with add-back therapy was effective in reducing heavy menstrual bleeding in women with uterine fibroids. Hypoestrogenic effects of elagolix, especially decreases in bone mineral density, were attenuated with add-back therapy.
Abstract: Background Uterine fibroids are hormone-responsive neoplasms that are associated with heavy menstrual bleeding. Elagolix, an oral gonadotropin-releasing hormone antagonist resulting in rap...

Journal ArticleDOI
TL;DR: An approach for realizing the power delivery scheme of an extreme fast charging (XFC) station meant to simultaneously charge multiple electric vehicles (EVs), using partial-power-rated dc–dc converters to charge the individual EVs.
Abstract: This article proposes an approach for realizing the power delivery scheme for an extreme fast charging (XFC) station that is meant to simultaneously charge multiple electric vehicles (EVs). A cascaded H-bridge converter is utilized to directly interface with the medium voltage grid while dual-active-bridge based soft-switched solid-state transformers are used to achieve galvanic isolation. The proposed approach eliminates redundant power conversion by making use of partial power rated dc–dc converters to charge the individual EVs. Partial power processing enables independent charging control over each EV, while processing only a fraction of the total battery charging power. Practical implementation schemes for the partial power charger unit are analyzed. A phase-shifted full-bridge converter-based charger is proposed. Design and control considerations for enabling multiple charging points are elucidated. Experimental results from a down-scaled laboratory test-bed are provided to validate the control aspects, functionality, and effectiveness of the proposed XFC station power delivery scheme. With a down-scaled partial power converter that is rated to handle only 27% of the battery power, an efficiency improvement of 0.6% at full-load and 1.6% at 50% load is demonstrated.
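The efficiency benefit of partial power processing follows from simple arithmetic: if the dc–dc stage processes only a fraction k of the battery power, only that fraction suffers conversion loss. The converter efficiency below is an assumption for illustration (the 27% fraction matches the abstract; the 0.6%/1.6% improvements reported there are measured values, not reproduced by this formula):

```python
# Illustrative efficiency arithmetic for partial power processing: the
# effective charging-path efficiency is 1 - k * (1 - eta_c), since the
# unprocessed (1 - k) share of the power bypasses the converter losslessly.

def effective_efficiency(k, eta_c):
    """k: fraction of battery power processed; eta_c: converter efficiency."""
    return 1.0 - k * (1.0 - eta_c)

k = 0.27        # fraction processed, matching the 27% rating in the abstract
eta_c = 0.97    # assumed converter efficiency

eff = effective_efficiency(k, eta_c)
print(eff)      # 0.9919, vs 0.97 if the full charging power were processed
```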

Journal ArticleDOI
28 Apr 2020-Mbio
TL;DR: Future diagnostic, prognostic, and therapeutic options for the management of UTI may soon incorporate efforts to measure, restore, and/or preserve the native, healthy ecology of the urinary microbiomes.
Abstract: Recent advances in the analysis of microbial communities colonizing the human body have identified a resident microbial community in the human urinary tract (UT). Compared to many other microbial niches, the human UT harbors a relatively low biomass. Studies have identified many genera and species that may constitute a core urinary microbiome. However, the contribution of the UT microbiome to urinary tract infection (UTI) and recurrent UTI (rUTI) pathobiology is not yet clearly understood. Evidence suggests that commensal species within the UT and urogenital tract (UGT) microbiomes, such as Lactobacillus crispatus, may act to protect against colonization with uropathogens. However, the mechanisms and fundamental biology of the urinary microbiome-host relationship are not understood. The ability to measure and characterize the urinary microbiome has been enabled through the development of next-generation sequencing and bioinformatic platforms that allow for the unbiased detection of resident microbial DNA. Translating technological advances into clinical insight will require further study of the microbial and genomic ecology of the urinary microbiome in both health and disease. Future diagnostic, prognostic, and therapeutic options for the management of UTI may soon incorporate efforts to measure, restore, and/or preserve the native, healthy ecology of the urinary microbiomes.

Journal ArticleDOI
TL;DR: In this article, a polyvinyl alcohol/chitosan/CeO2-NPs hydrogel was synthesized via the freeze–thaw technique with 0 to 1% (wt) 5-nm cerium oxide nanoparticles; it showed better antibacterial activity after just 12 hours (with MRSA but not E. coli) and healthy human dermal fibroblast viabilities of more than 90% for up to 5 days, compared to the control group.

Journal ArticleDOI
01 Feb 2020
TL;DR: It is shown that decreasing fatty-acid oxidation extends the perinatal cardiomyocyte proliferative window and can reintroduce cell-cycle activity in adult cardiomyocytes, and may be a viable target for cardiac regenerative therapies.
Abstract: The neonatal mammalian heart is capable of regeneration for a brief window of time after birth. However, this regenerative capacity is lost within the first week of life, which coincides with a postnatal shift from anaerobic glycolysis to mitochondrial oxidative phosphorylation, particularly towards fatty-acid utilization. Despite the energy advantage of fatty-acid beta-oxidation, cardiac mitochondria produce elevated rates of reactive oxygen species when utilizing fatty acids, which is thought to play a role in cardiomyocyte cell-cycle arrest through induction of DNA damage and activation of the DNA-damage response (DDR) pathway. Here we show that inhibiting fatty-acid utilization promotes cardiomyocyte proliferation in the postnatal heart. First, neonatal mice fed fatty-acid-deficient milk showed prolongation of the postnatal cardiomyocyte proliferative window; however, cell-cycle arrest eventually ensued. Next, we generated a tamoxifen-inducible, cardiomyocyte-specific pyruvate dehydrogenase kinase 4 (PDK4) knockout mouse model to selectively enhance oxidation of glycolytically derived pyruvate in cardiomyocytes. Conditional PDK4 deletion resulted in an increase in pyruvate dehydrogenase activity and consequently an increase in glucose relative to fatty-acid oxidation. Loss of PDK4 also resulted in decreased cardiomyocyte size, decreased DNA damage and expression of DDR markers, and an increase in cardiomyocyte proliferation. Following myocardial infarction, inducible deletion of PDK4 improved left ventricular function and decreased remodelling. Collectively, inhibition of fatty-acid utilization in cardiomyocytes promotes proliferation and may be a viable target for cardiac regenerative therapies.

Proceedings ArticleDOI
25 Oct 2020
TL;DR: The INTERSPEECH 2020 Deep Noise Suppression (DNS) Challenge is intended to promote collaborative research in real-time single-channel speech enhancement aimed at maximizing the subjective (perceptual) quality of the enhanced speech.
Abstract: The INTERSPEECH 2020 Deep Noise Suppression (DNS) Challenge is intended to promote collaborative research in real-time single-channel Speech Enhancement aimed at maximizing the subjective (perceptual) quality of the enhanced speech. A typical approach to evaluating noise suppression methods is to use objective metrics on a test set obtained by splitting the original dataset. While performance is good on the synthetic test set, model performance often degrades significantly on real recordings. Also, most conventional objective metrics do not correlate well with subjective tests, and lab subjective tests are not scalable to a large test set. In this challenge, we open-sourced a large clean speech and noise corpus for training noise suppression models, along with a test set representative of real-world scenarios consisting of both synthetic and real recordings. We also open-sourced an online subjective test framework based on ITU-T P.808 for researchers to reliably test their developments. We evaluated the results using P.808 on a blind test set. The results and the key learnings from the challenge are discussed. The datasets and scripts can be found here for quick access: https://github.com/microsoft/DNS-Challenge.
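A minimal sketch of how P.808-style crowdsourced ratings are typically aggregated into a mean opinion score (MOS) with a normal-approximation confidence interval; the votes below are invented for illustration, and this is not the challenge's official scoring code:

```python
import statistics as st

def mos(ratings):
    """Mean opinion score with a normal-approximation 95% confidence half-width."""
    m = st.mean(ratings)
    ci = 1.96 * st.stdev(ratings) / len(ratings) ** 0.5
    return m, ci

ratings = [4, 3, 5, 4, 4, 3, 5, 4, 2, 4]   # hypothetical 1-5 ACR votes for one clip
m, ci = mos(ratings)
print(round(m, 2), round(ci, 2))            # 3.8 0.57
```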

Posted Content
TL;DR: A large clean speech and noise corpus is open-sourced for training noise suppression models, along with a test set representative of real-world scenarios consisting of both synthetic and real recordings, and an online subjective test framework based on ITU-T P.808 for researchers to reliably test their developments.
Abstract: The INTERSPEECH 2020 Deep Noise Suppression (DNS) Challenge is intended to promote collaborative research in real-time single-channel Speech Enhancement aimed at maximizing the subjective (perceptual) quality of the enhanced speech. A typical approach to evaluating noise suppression methods is to use objective metrics on a test set obtained by splitting the original dataset. While performance is good on the synthetic test set, model performance often degrades significantly on real recordings. Also, most conventional objective metrics do not correlate well with subjective tests, and lab subjective tests are not scalable to a large test set. In this challenge, we open-sourced a large clean speech and noise corpus for training noise suppression models, along with a test set representative of real-world scenarios consisting of both synthetic and real recordings. We also open-sourced an online subjective test framework based on ITU-T P.808 for researchers to reliably test their developments. We evaluated the results using P.808 on a blind test set. The results and the key learnings from the challenge are discussed. The datasets and scripts can be found here for quick access: https://github.com/microsoft/DNS-Challenge.