Showing papers by "Carleton University" published in 2020


Journal ArticleDOI
TL;DR: Evidence from a selection of research topics relevant to pandemics is discussed, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping.
Abstract: The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.

3,223 citations


Book
Georges Aad1, E. Abat2, Jalal Abdallah3, Jalal Abdallah4 +3029 more authors (164 institutions)
23 Feb 2020
TL;DR: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper, where a brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
Abstract: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

3,111 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an emergency recovery plan to bend the curve of freshwater biodiversity loss, which includes accelerating implementation of environmental flows; improving water quality; protecting and restoring critical habitats; managing the exploitation of freshwater ecosystem resources, especially species and riverine aggregates; preventing and controlling nonnative species invasions; and safeguarding and restoring river connectivity.
Abstract: Despite their limited spatial extent, freshwater ecosystems host remarkable biodiversity, including one-third of all vertebrate species. This biodiversity is declining dramatically: Globally, wetlands are vanishing three times faster than forests, and freshwater vertebrate populations have fallen more than twice as steeply as terrestrial or marine populations. Threats to freshwater biodiversity are well documented but coordinated action to reverse the decline is lacking. We present an Emergency Recovery Plan to bend the curve of freshwater biodiversity loss. Priority actions include accelerating implementation of environmental flows; improving water quality; protecting and restoring critical habitats; managing the exploitation of freshwater ecosystem resources, especially species and riverine aggregates; preventing and controlling nonnative species invasions; and safeguarding and restoring river connectivity. We recommend adjustments to targets and indicators for the Convention on Biological Diversity and the Sustainable Development Goals and roles for national and international state and nonstate actors.

420 citations


Journal ArticleDOI
TL;DR: Over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information, reaching millions of viewers worldwide, highlighting the need to better use YouTube to deliver timely and accurate information and to minimise the spread of misinformation.
Abstract: Introduction: The COVID-19 pandemic is this century’s largest public health emergency and its successful management relies on the effective dissemination of factual information. As a social media platform with billions of daily views, YouTube has tremendous potential to both support and hinder public health efforts. However, the usefulness and accuracy of most viewed YouTube videos on COVID-19 have not been investigated. Methods: A YouTube search was performed on 21 March 2020 using keywords ‘coronavirus’ and ‘COVID-19’, and the top 75 viewed videos from each search were analysed. Videos that were duplicates, non-English, non-audio and non-visual, exceeding 1 hour in duration, live and unrelated to COVID-19 were excluded. Two reviewers coded the source, content and characteristics of included videos. The primary outcome was usability and reliability of videos, analysed using the novel COVID-19 Specific Score (CSS), modified DISCERN (mDISCERN) and modified JAMA (mJAMA) scores. Results: Of 150 videos screened, 69 (46%) were included, totalling 257 804 146 views. Nineteen (27.5%) videos contained non-factual information, totalling 62 042 609 views. Government and professional videos contained only factual information and had higher CSS than consumer videos (mean difference (MD) 2.21, 95% CI 0.10 to 4.32, p=0.037); mDISCERN scores than consumer videos (MD 2.46, 95% CI 0.50 to 4.42, p=0.008), internet news videos (MD 2.20, 95% CI 0.19 to 4.21, p=0.027) and entertainment news videos (MD 2.57, 95% CI 0.66 to 4.49, p=0.004); and mJAMA scores than entertainment news videos (MD 1.21, 95% CI 0.07 to 2.36, p=0.033) and consumer videos (MD 1.27, 95% CI 0.10 to 2.44, p=0.028). However, they only accounted for 11% of videos and 10% of views. Conclusion: Over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information, reaching millions of viewers worldwide. As the current COVID-19 pandemic worsens, public health agencies must better use YouTube to deliver timely and accurate information and to minimise the spread of misinformation. This may play a significant role in successfully managing the COVID-19 pandemic.
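As a point of reference for the group comparisons reported above (mean differences with 95% CIs and p-values), the sketch below computes a Welch mean difference and confidence interval between two hypothetical sets of video scores; the numbers are illustrative and are not taken from the study.

```python
import numpy as np
from scipy import stats

def mean_difference_ci(a, b, alpha=0.05):
    """Mean difference between two independent groups of scores with a
    Welch 95% confidence interval and p-value (unequal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    # Welch-Satterthwaite degrees of freedom.
    df = se**4 / ((a.var(ddof=1) / len(a))**2 / (len(a) - 1)
                  + (b.var(ddof=1) / len(b))**2 / (len(b) - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    p = stats.ttest_ind(a, b, equal_var=False).pvalue
    return diff, (diff - t_crit * se, diff + t_crit * se), p

# Hypothetical CSS-style scores for professional vs consumer videos.
professional = [7, 6, 8, 7, 9, 6, 8]
consumer = [4, 5, 6, 3, 5, 4, 6]
print(mean_difference_ci(professional, consumer))
```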

354 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, Ovsat Abdinov4 +2934 more authors (199 institutions)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139.fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text{TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text{GeV}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text{TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text{GeV}$ are excluded assuming three generations of mass-degenerate sleptons.

272 citations


Journal ArticleDOI
TL;DR: This paper surveys the different rate optimization scenarios studied in the literature when PD-NOMA is combined with one or more of the candidate schemes and technologies for B5G networks, including multiple-input-single-output (MISO), multiple-input-multiple-output (MIMO), massive MIMO (mMIMO), advanced antenna architectures, and higher frequency millimeter-wave (mmWave) and terahertz (THz) communications.
Abstract: The ambitious high data-rate applications in the envisioned future beyond fifth-generation (B5G) wireless networks require new solutions, including the advent of more advanced architectures than the ones already used in 5G networks, and the coalition of different communications schemes and technologies to enable these applications requirements. Among the candidate communications schemes for future wireless networks are non-orthogonal multiple access (NOMA) schemes that allow serving more than one user in the same resource block by multiplexing users in other domains than frequency or time. In this way, NOMA schemes tend to offer several advantages over orthogonal multiple access (OMA) schemes such as improved user fairness and spectral efficiency, higher cell-edge throughput, massive connectivity support, and low transmission latency. With these merits, NOMA-enabled transmission schemes are being increasingly looked at as promising multiple access schemes for future wireless networks. When the power domain is used to multiplex the users, it is referred to as the power domain NOMA (PD-NOMA). In this paper, we survey the integration of PD-NOMA with the enabling communications schemes and technologies that are expected to meet the various requirements of B5G networks. In particular, this paper surveys the different rate optimization scenarios studied in the literature when PD-NOMA is combined with one or more of the candidate schemes and technologies for B5G networks including multiple-input-single-output (MISO), multiple-input-multiple-output (MIMO), massive-MIMO (mMIMO), advanced antenna architectures, higher frequency millimeter-wave (mmWave) and terahertz (THz) communications, advanced coordinated multi-point (CoMP) transmission and reception schemes, cooperative communications, cognitive radio (CR), visible light communications (VLC), unmanned aerial vehicle (UAV) assisted communications and others. The considered system models, the optimization methods utilized to maximize the achievable rates, and the main lessons learnt on the optimization and the performance of these NOMA-enabled schemes and technologies are discussed in detail along with the future research directions for these combined schemes. Moreover, the role of machine learning in optimizing these NOMA-enabled technologies is addressed.
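To make the surveyed rate-optimization problems concrete, here is a minimal sketch of the standard two-user downlink PD-NOMA achievable-rate expressions with successive interference cancellation (SIC); the channel gains and power split are illustrative assumptions, not values from the survey.

```python
import math

def pd_noma_rates(h_near, h_far, p_near, p_far, noise=1.0, bandwidth=1.0):
    """Achievable rates for a two-user downlink PD-NOMA pair.
    The far (weak) user decodes its own signal while treating the near
    user's signal as interference; the near (strong) user removes the far
    user's signal via SIC before decoding its own."""
    # Far user: interference from the near user's superimposed signal.
    sinr_far = (p_far * h_far) / (p_near * h_far + noise)
    # Near user: far user's signal cancelled by SIC, only noise remains.
    sinr_near = (p_near * h_near) / noise
    r_far = bandwidth * math.log2(1.0 + sinr_far)
    r_near = bandwidth * math.log2(1.0 + sinr_near)
    return r_near, r_far

# Hypothetical channel gains and power split (more power to the weak user).
r_near, r_far = pd_noma_rates(h_near=4.0, h_far=0.5, p_near=0.3, p_far=0.7)
print(f"near-user rate: {r_near:.2f}, far-user rate: {r_far:.2f}")
```

Rate-optimization formulations of the kind surveyed typically maximize a weighted sum of such rates subject to the total power budget and per-user quality-of-service constraints.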

253 citations


Journal ArticleDOI
TL;DR: This article develops an asynchronous advantage actor–critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem and designs a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems.
Abstract: Mobile-edge computing (MEC) is a promising paradigm to improve the quality of computation experience of mobile devices because it allows mobile devices to offload computing tasks to MEC servers, benefiting from the powerful computing resources of MEC servers. However, the existing computation-offloading works also have some open issues: 1) security and privacy issues; 2) cooperative computation offloading; and 3) dynamic optimization. To address the security and privacy issues, we employ blockchain technology, which ensures the reliability and irreversibility of data in MEC systems. Meanwhile, we jointly design and optimize the performance of blockchain and MEC. In this article, we develop a cooperative computation offloading and resource allocation framework for blockchain-enabled MEC systems. In the framework, we design a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems by jointly optimizing offloading decision, power allocation, block size, and block interval. Due to the dynamic characteristics of the wireless fading channel and the processing queues at MEC servers, the joint optimization is formulated as a Markov decision process (MDP). To tackle the dynamics and complexity of the blockchain-enabled MEC system, we develop an asynchronous advantage actor–critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem. In the algorithm, deep neural networks are optimized by utilizing asynchronous gradient descent and eliminating the correlation of data. The simulation results show that the proposed algorithm converges fast and achieves significant performance improvements over existing schemes in terms of total reward.
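For readers unfamiliar with the underlying learning machinery, the sketch below shows the core advantage actor-critic loss that an A3C-style offloading agent would optimize (the asynchronous workers are omitted); the network sizes and the state/action dimensions are placeholders and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ActorCritic(nn.Module):
    """Shared-trunk actor-critic network: the actor scores discrete
    offloading/resource-allocation actions, the critic estimates V(s)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # action logits
        self.critic = nn.Linear(hidden, 1)          # state value

    def forward(self, state):
        h = self.trunk(state)
        return self.actor(h), self.critic(h).squeeze(-1)

def a3c_loss(model, states, actions, rewards, gamma=0.99, beta=0.01):
    """Loss for one rollout: policy gradient weighted by the advantage,
    value regression toward the discounted return, minus an entropy bonus."""
    logits, values = model(states)
    dist = Categorical(logits=logits)
    # Discounted returns computed backwards over the rollout.
    returns, g = [], 0.0
    for r in reversed(rewards.tolist()):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    advantage = returns - values.detach()
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + 0.5 * value_loss - beta * entropy
```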

241 citations


Journal ArticleDOI
Juliette Alimena1, James Baker Beacham2, Martino Borsato3, Yangyang Cheng4 +213 more authors (105 institutions)
TL;DR: In this paper, the authors present a survey of the current state of LLP searches at the Large Hadron Collider (LHC) and chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the high-luminosity LHC.
Abstract: Particles beyond the Standard Model (SM) can generically have lifetimes that are long compared to SM particles at the weak scale. When produced at experiments such as the Large Hadron Collider (LHC) at CERN, these long-lived particles (LLPs) can decay far from the interaction vertex of the primary proton–proton collision. Such LLP signatures are distinct from those of promptly decaying particles that are targeted by the majority of searches for new physics at the LHC, often requiring customized techniques to identify, for example, significantly displaced decay vertices, tracks with atypical properties, and short track segments. Given their non-standard nature, a comprehensive overview of LLP signatures at the LHC is beneficial to ensure that possible avenues of the discovery of new physics are not overlooked. Here we report on the joint work of a community of theorists and experimentalists with the ATLAS, CMS, and LHCb experiments—as well as those working on dedicated experiments such as MoEDAL, milliQan, MATHUSLA, CODEX-b, and FASER—to survey the current state of LLP searches at the LHC, and to chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the high-luminosity LHC. The work is organized around the current and future potential capabilities of LHC experiments to generally discover new LLPs, and takes a signature-based approach to surveying classes of models that give rise to LLPs rather than emphasizing any particular theory motivation. We develop a set of simplified models; assess the coverage of current searches; document known, often unexpected backgrounds; explore the capabilities of proposed detector upgrades; provide recommendations for the presentation of search results; and look towards the newest frontiers, namely high-multiplicity 'dark showers', highlighting opportunities for expanding the LHC reach for these signals.

218 citations


Journal ArticleDOI
TL;DR: The proposed landscape scenarios represent an optimal compromise between delivery of goods and services to humans and preserving most forest wildlife, and can therefore guide forest preservation and restoration strategies.
Abstract: Agriculture and development transform forest ecosystems to human-modified landscapes. Decades of research in ecology have generated myriad concepts for the appropriate management of these landscapes. Yet, these concepts are often contradictory and apply at different spatial scales, making the design of biodiversity-friendly landscapes challenging. Here, we combine concepts with empirical support to design optimal landscape scenarios for forest-dwelling species. The supported concepts indicate that appropriately sized landscapes should contain ≥ 40% forest cover, although higher percentages are likely needed in the tropics. Forest cover should be configured with c. 10% in a very large forest patch, and the remaining 30% in many evenly dispersed smaller patches and semi-natural treed elements (e.g. vegetation corridors). Importantly, the patches should be embedded in a high-quality matrix. The proposed landscape scenarios represent an optimal compromise between delivery of goods and services to humans and preserving most forest wildlife, and can therefore guide forest preservation and restoration strategies.

216 citations


Journal ArticleDOI
TL;DR: The fuzzy control and adaptive backstepping schemes are applied to construct an improved fault-tolerant controller without requiring the specific knowledge of control gains and actuator faults, including both stuck constant value and loss of effectiveness.
Abstract: This paper addresses the trajectory tracking control problem of a class of nonstrict-feedback nonlinear systems with actuator faults. The functional relationship in the affine form between the nonlinear functions with whole state and error variables is established by using the structure consistency of intermediate control signals and the variable-partition technique. The fuzzy control and adaptive backstepping schemes are applied to construct an improved fault-tolerant controller without requiring specific knowledge of the control gains and actuator faults, including both stuck constant value and loss of effectiveness. The proposed fault-tolerant controller ensures that all signals in the closed-loop system are semiglobally practically finite-time stable and the tracking error remains in a small neighborhood of the origin after a finite period of time. The developed control method is verified through two numerical examples.

210 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify some of the main political-economic factors behind car dependence, drawing together research from several fields, including automotive industry, provision of car infrastructure, political economy of urban sprawl, and provision of public transport.
Abstract: Research on car dependence exposes the difficulty of moving away from a car-dominated, high-carbon transport system, but neglects the political-economic factors underpinning car-dependent societies. Yet these factors are key constraints to attempts to ‘decouple’ human well-being from energy use and climate change emissions. In this critical review paper, we identify some of the main political-economic factors behind car dependence, drawing together research from several fields. Five key constituent elements of what we call the ‘car-dependent transport system’ are identified: i) the automotive industry; ii) the provision of car infrastructure; iii) the political economy of urban sprawl; iv) the provision of public transport; v) cultures of car consumption. Using the ‘systems of provision’ approach within political economy, we locate the part played by each element within the key dynamic processes of the system as a whole. Such processes encompass industrial structure, political-economic relations, the built environment, and cultural feedback loops. We argue that linkages between these processes are crucial to maintaining car dependence and thus create carbon lock-in. In developing our argument we discuss several important characteristics of car-dependent transport systems: the role of integrated socio-technical aspects of provision, the opportunistic use of contradictory economic arguments serving industrial agendas, the creation of an apolitical facade around pro-car decision-making, and the ‘capture’ of the state within the car-dependent transport system. Through uncovering the constituents, processes and characteristics of car-dependent transport systems, we show that moving past the automobile age will require an overt and historically aware political program of research and action.

Journal ArticleDOI
TL;DR: In this paper, the authors shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing and energy, from its core to its end nodes.
Abstract: The fifth generation (5G) mobile networks are envisaged to enable a plethora of breakthrough advancements in wireless technologies, providing support of a diverse set of services over a single platform. While the deployment of 5G systems is scaling up globally, it is time to look ahead for beyond 5G systems. This is mainly driven by the emerging societal trends, calling for fully automated systems and intelligent services supported by extended reality and haptics communications. To accommodate the stringent requirements of their prospective applications, which are data-driven and defined by extremely low-latency, ultra-reliable, fast and seamless wireless connectivity, research initiatives are currently focusing on a progressive roadmap towards the sixth generation (6G) networks, which are expected to bring transformative changes to this premise. In this article, we shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing, and energy, from its core to its end nodes. In particular, the present paper aims to answer several 6G framework related questions: What are the driving forces for the development of 6G? How will the enabling technologies of 6G differ from those in 5G? What kind of applications and interactions will they support which would not be supported by 5G? We address these questions by presenting a comprehensive study of the 6G vision and outlining seven of its disruptive technologies, i.e., mmWave communications, terahertz communications, optical wireless communications, programmable metasurfaces, drone-based communications, backscatter communications and tactile internet, as well as their potential applications. Then, by leveraging the state-of-the-art literature surveyed for each technology, we discuss the associated requirements, key challenges, and open research problems. These discussions are thereafter used to open up the horizon for future research directions.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4 +2954 more authors (198 institutions)
TL;DR: In this paper, the ATLAS trigger algorithms and selections are described; they were optimised to control the rates while retaining a high efficiency for physics analyses, in order to cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing to about 60.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to $2.1\times 10^{34}\,\text{cm}^{-2}\,\text{s}^{-1}$, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4 +2962 more authors (199 institutions)
TL;DR: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb$^{-1}$ of proton–proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector.
Abstract: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb$^{-1}$ of proton–proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector. The search for heavy resonances is performed over the mass range 0.2–2.5 TeV for the $\tau^{+}\tau^{-}$ decay with at least one $\tau$-lepton decaying into final states with hadrons. The data are in good agreement with the background prediction of the standard model. In the $M_{h}^{125}$ scenario of the minimal supersymmetric standard model, values of $\tan\beta>8$ and $\tan\beta>21$ are excluded at the 95% confidence level for neutral Higgs boson masses of 1.0 and 1.5 TeV, respectively, where $\tan\beta$ is the ratio of the vacuum expectation values of the two Higgs doublets.

Journal ArticleDOI
TL;DR: The results indicate that societies that are more economically unequal and lack capacity in some dimensions of social capital experienced more COVID-19 deaths, possibly due to behavioural contagion and incongruence with physical distancing policy.

Journal ArticleDOI
TL;DR: This article proposes to minimize the long-term energy consumption of a THz wireless access-based MEC system supporting high-quality immersive VR video services by jointly optimizing the viewport rendering offloading and downlink transmit power control policies, using an asynchronous advantage actor–critic (A3C)-based joint optimization algorithm.
Abstract: Immersive virtual reality (VR) video is becoming increasingly popular owing to its enhanced immersive experience. To enjoy ultrahigh resolution immersive VR video with wireless user equipments, such as head-mounted displays (HMDs), ultralow-latency viewport rendering and data transmission are the core prerequisites, which could not be achieved without a huge bandwidth and superior processing capabilities. Besides, potentially very high energy consumption at the HMD may impede the rapid development of wireless panoramic VR video. Multiaccess edge computing (MEC) has emerged as a promising technology to reduce both the task processing latency and the energy consumption for the HMD, while bandwidth-rich terahertz (THz) communication is expected to enable ultrahigh-speed wireless data transmission. In this article, we propose to minimize the long-term energy consumption of a THz wireless access-based MEC system for high quality immersive VR video services support by jointly optimizing the viewport rendering offloading and downlink transmit power control. Considering the time-varying nature of wireless channel conditions, we propose a deep reinforcement learning-based approach to learn the optimal viewport rendering offloading and transmit power control policies, and an asynchronous advantage actor–critic (A3C)-based joint optimization algorithm is proposed. The simulation results demonstrate that the proposed algorithm converges fast under different learning rates, and outperforms existing algorithms in terms of minimized energy consumption and maximized reward.

Journal ArticleDOI
TL;DR: This work proposes the adoption of a transcriptome-based taxonomy of cell types for mammalian neocortex that should be hierarchical and use a standardized nomenclature, and could serve as an example for cell type atlases in other parts of the body.
Abstract: To understand the function of cortical circuits, it is necessary to catalog their cellular diversity. Past attempts to do so using anatomical, physiological or molecular features of cortical cells have not resulted in a unified taxonomy of neuronal or glial cell types, partly due to limited data. Single-cell transcriptomics is enabling, for the first time, systematic high-throughput measurements of cortical cells and generation of datasets that hold the promise of being complete, accurate and permanent. Statistical analyses of these data reveal clusters that often correspond to cell types previously defined by morphological or physiological criteria and that appear conserved across cortical areas and species. To capitalize on these new methods, we propose the adoption of a transcriptome-based taxonomy of cell types for mammalian neocortex. This classification should be hierarchical and use a standardized nomenclature. It should be based on a probabilistic definition of a cell type and incorporate data from different approaches, developmental stages and species. A community-based classification and data aggregation model, such as a knowledge graph, could provide a common foundation for the study of cortical circuits. This community-based classification, nomenclature and data aggregation could serve as an example for cell type atlases in other parts of the body.

Journal ArticleDOI
M. Aaron MacNeil1, Demian D. Chapman2, Michelle R. Heupel3, Colin A. Simpfendorfer4, Michael R. Heithaus2, Mark G. Meekan3, Mark G. Meekan5, Euan S. Harvey6, Jordan Goetze7, Jordan Goetze6, Jeremy J. Kiszka2, Mark E. Bond2, Leanne M. Currey-Randall3, Conrad W. Speed5, Conrad W. Speed3, C. Samantha Sherman4, Matthew J. Rees3, Matthew J. Rees8, Vinay Udyawer3, Kathryn I. Flowers2, GM Clementi2, Jasmine Valentin-Albanese9, Taylor Gorham1, M. Shiham Adam, Khadeeja Ali2, Fabián Pina-Amargós, Jorge Angulo-Valdés10, Jorge Angulo-Valdés11, Jacob Asher12, Jacob Asher13, Laura García Barcia2, Océane Beaufort, Cecilie Benjamin, Anthony T. F. Bernard14, Anthony T. F. Bernard15, Michael L. Berumen16, Stacy L. Bierwagen4, Erika Bonnema2, Rosalind M. K. Bown, Darcey Bradley17, Edd J. Brooks18, J. Jed Brown19, Dayne Buddo20, Patrick J. Burke21, Camila Cáceres2, Diego Cardeñosa9, Jeffrey C. Carrier22, Jennifer E. Caselle17, Venkatesh Charloo, Thomas Claverie23, Eric Clua24, Jesse E. M. Cochran16, Neil D. Cook25, Jessica E. Cramp4, Brooke M. D’Alberto4, Martin de Graaf26, Mareike Dornhege27, Andy Estep, Lanya Fanovich, Naomi F. Farabough2, Daniel Fernando, Anna L. Flam, Camilla Floros, Virginia Fourqurean2, Ricardo C. Garla28, Kirk Gastrich2, Lachlan George4, Rory Graham, Tristan L. Guttridge, Royale S. Hardenstine16, Stephen Heck9, Aaron C. Henderson29, Aaron C. Henderson30, Heidi Hertler29, Robert E. Hueter31, Mohini Johnson32, Stacy D. Jupiter7, Devanshi Kasana2, Steven T. Kessel33, Benedict Kiilu, Taratu Kirata, Baraka Kuguru, Fabian Kyne20, Tim J. Langlois5, Elodie J. I. Lédée34, Steve Lindfield, Andrea Luna-Acosta35, JQ Maggs36, B. Mabel Manjaji-Matsumoto37, Andrea D. Marshall, Philip Matich38, Erin McCombs39, Dianne L. McLean3, Dianne L. McLean5, Llewelyn Meggs, Stephen E. Moore, Sushmita Mukherji4, Ryan R. Murray, Muslimin Kaimuddin, Stephen J. Newman40, Josep Nogués41, Clay Obota, Owen R. O’Shea, Kennedy Osuka42, Yannis P. Papastamatiou2, Nishan Perera, Bradley J. Peterson9, Alessandro Ponzo, Andhika Prima Prasetyo, L. M. Sjamsul Quamar, Jessica Quinlan2, Alexei Ruiz-Abierno10, Enric Sala, Melita Samoilys43, Michelle Schärer-Umpierre, Audrey M. Schlaff4, Nikola Simpson, Adam N. H. Smith44, Lauren Sparks, Akshay Tanna45, Rubén Torres, Michael J. Travers40, Maurits P. M. van Zinnicq Bergmann2, Laurent Vigliola46, Juney Ward, Alexandra M. Watts45, Colin K. C. Wen47, Elizabeth R. Whitman2, Aaron J. Wirsing48, Aljoscha Wothke, Esteban Zarza-Gonzâlez, Joshua E. Cinner4 
Dalhousie University1, Florida International University2, Australian Institute of Marine Science3, James Cook University4, University of Western Australia5, Curtin University6, Wildlife Conservation Society7, University of Wollongong8, Stony Brook University9, University of Havana10, Eckerd College11, Joint Institute for Marine and Atmospheric Research12, National Oceanic and Atmospheric Administration13, South African Institute for Aquatic Biodiversity14, Rhodes University15, King Abdullah University of Science and Technology16, University of California, Santa Barbara17, Cape Eleuthera Institute18, Florida State University College of Arts and Sciences19, University of the West Indies20, Macquarie University21, Albion College22, University of Montpellier23, PSL Research University24, Cardiff University25, Wageningen University and Research Centre26, Sophia University27, Federal University of Rio Grande do Norte28, The School for Field Studies29, United Arab Emirates University30, Mote Marine Laboratory31, Operation Wallacea32, Shedd Aquarium33, Carleton University34, Pontifical Xavierian University35, National Institute of Water and Atmospheric Research36, Universiti Malaysia Sabah37, Texas A&M University at Galveston38, Aquarium of the Pacific39, Government of Western Australia40, Island Conservation Society41, University of York42, University of Oxford43, Massey University44, Manchester Metropolitan University45, Institut de recherche pour le développement46, Tunghai University47, University of Washington48
22 Jul 2020-Nature
TL;DR: The results reveal the profound impact that fishing has had on reef shark populations: no sharks on almost 20% of the surveyed reefs, and shark depletion was strongly related to socio-economic conditions such as the size and proximity of the nearest market, poor governance and the density of the human population.
Abstract: Decades of overexploitation have devastated shark populations, leaving considerable doubt as to their ecological status1,2. Yet much of what is known about sharks has been inferred from catch records in industrial fisheries, whereas far less information is available about sharks that live in coastal habitats3. Here we address this knowledge gap using data from more than 15,000 standardized baited remote underwater video stations that were deployed on 371 reefs in 58 nations to estimate the conservation status of reef sharks globally. Our results reveal the profound impact that fishing has had on reef shark populations: we observed no sharks on almost 20% of the surveyed reefs. Reef sharks were almost completely absent from reefs in several nations, and shark depletion was strongly related to socio-economic conditions such as the size and proximity of the nearest market, poor governance and the density of the human population. However, opportunities for the conservation of reef sharks remain: shark sanctuaries, closed areas, catch limits and an absence of gillnets and longlines were associated with a substantially higher relative abundance of reef sharks. These results reveal several policy pathways for the restoration and management of reef shark populations, from direct top-down management of fishing to indirect improvement of governance conditions. Reef shark populations will only have a high chance of recovery by engaging key socio-economic aspects of tropical fisheries. Fishing has had a profound impact on global reef shark populations, and the absence or presence of sharks is strongly correlated with national socio-economic conditions and reef governance.

Journal ArticleDOI
TL;DR: This paper identifies several important aspects of integrating blockchain and ML, including overview, benefits, and applications, and discusses some open issues, challenges, and broader perspectives that need to be addressed to jointly consider blockchain andML for communications and networking systems.
Abstract: Recently, with the rapid development of information and communication technologies, the infrastructures, resources, end devices, and applications in communications and networking systems are becoming much more complex and heterogeneous. In addition, the large volume of data and massive end devices may bring serious security, privacy, services provisioning, and network management challenges. In order to achieve decentralized, secure, intelligent, and efficient network operation and management, the joint consideration of blockchain and machine learning (ML) may bring significant benefits and has attracted great interest from both academia and industry. On one hand, blockchain can significantly facilitate training data and ML model sharing, decentralized intelligence, security, privacy, and trusted decision-making of ML. On the other hand, ML will have significant impacts on the development of blockchain in communications and networking systems, including energy and resource efficiency, scalability, security, privacy, and intelligent smart contracts. However, some essential open issues and challenges remain to be addressed before the widespread deployment of the integration of blockchain and ML, including resource management, data processing, scalable operation, and security. In this paper, we present a survey on the existing works for blockchain and ML technologies. We identify several important aspects of integrating blockchain and ML, including overview, benefits, and applications. Then we discuss some open issues, challenges, and broader perspectives that need to be addressed to jointly consider blockchain and ML for communications and networking systems.

Journal ArticleDOI
E. Kou, Phillip Urquijo1, Wolfgang Altmannshofer2, F. Beaujean3 +558 more authors (137 institutions)
TL;DR: In the original version of this manuscript, an error was introduced on pp352. '2.7nb:1.6nb' has been corrected to '2.4nb:1.3nb' in the current online and printed version.
Abstract: In the original version of this manuscript, an error was introduced on pp352. '2.7nb:1.6nb' has been corrected to '2.4nb:1.3nb' in the current online and printed version. doi:10.1093/ptep/ptz106.

Journal ArticleDOI
06 Mar 2020-bioRxiv
TL;DR: This work generates the first globally-consistent, continuous index of forest condition as determined by degree of anthropogenic modification, by integrating data on observed and inferred human pressures and an index of lost connectivity.
Abstract: Many global environmental agendas, including halting biodiversity loss, reversing land degradation, and limiting climate change, depend upon retaining forests with high ecological integrity, yet the scale and degree of forest modification remain poorly quantified and mapped. By integrating data on observed and inferred human pressures and an index of lost connectivity, we generate a globally consistent, continuous index of forest condition as determined by the degree of anthropogenic modification. Globally, only 17.4 million km2 of forest (40.5%) has high landscape-level integrity (mostly found in Canada, Russia, the Amazon, Central Africa, and New Guinea) and only 27% of this area is found in nationally designated protected areas. Of the forest inside protected areas, only 56% has high landscape-level integrity. Ambitious policies that prioritize the retention of forest integrity, especially in the most intact areas, are now urgently needed alongside current efforts aimed at halting deforestation and restoring the integrity of forests globally.


Journal ArticleDOI
TL;DR: This work collected 2 years of data from the Chinese stock market and proposed a comprehensive customization of feature engineering and a deep learning-based model for predicting the price trend of stock markets, which achieves overall high accuracy for stock market trend prediction.
Abstract: In the era of big data, deep learning for predicting stock market prices and trends has become even more popular than before. We collected 2 years of data from the Chinese stock market and proposed a comprehensive customization of feature engineering and a deep learning-based model for predicting the price trend of stock markets. The proposed solution is comprehensive as it includes pre-processing of the stock market dataset, utilization of multiple feature engineering techniques, combined with a customized deep learning-based system for stock market price trend prediction. We conducted comprehensive evaluations on frequently used machine learning models and conclude that our proposed solution outperforms them due to the comprehensive feature engineering that we built. The system achieves overall high accuracy for stock market trend prediction. With the detailed design and evaluation of prediction term lengths, feature engineering, and data pre-processing methods, this work contributes to the stock analysis research community in both the financial and technical domains.
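As a rough illustration of the feature-engineering-plus-model pipeline described above, the sketch below derives a few common technical indicators and trains a generic classifier as a stand-in for the paper's customized deep learning model; the column names, indicators, and 5-day label horizon are assumptions, not the paper's design.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def engineer_features(df):
    """Add simple technical-indicator features to a daily price frame
    (columns assumed: 'close', 'volume')."""
    out = df.copy()
    out["return_1d"] = out["close"].pct_change()
    out["sma_5"] = out["close"].rolling(5).mean()
    out["sma_20"] = out["close"].rolling(20).mean()
    out["momentum_10"] = out["close"] / out["close"].shift(10) - 1
    out["vol_change"] = out["volume"].pct_change()
    # Label: 1 if the close rises over the next 5 trading days.
    out["trend"] = (out["close"].shift(-5) > out["close"]).astype(int)
    return out.dropna()

def train_trend_model(df):
    data = engineer_features(df)
    features = ["return_1d", "sma_5", "sma_20", "momentum_10", "vol_change"]
    # Chronological split: no shuffling, so the test set is strictly later.
    X_train, X_test, y_train, y_test = train_test_split(
        data[features], data["trend"], shuffle=False, test_size=0.2)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model
```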

Journal ArticleDOI
TL;DR: Although longitudinal studies with individual-level data may be imperfect, they are needed to adequately address this topic, and the complexities involved in these types of studies underscore the need for careful design and for peer review.
Abstract: Background: Studies have reported that ambient air pollution is associated with an increased risk of developing or dying from coronavirus disease 2019 (COVID-19). Methodological approaches to investigate the ...

Journal ArticleDOI
TL;DR: A blockchain-based mobile edge computing (B-MEC) framework is presented for adaptive resource allocation and computation offloading in future wireless networks, where the blockchain works as an overlaid system to provide management and control functions.
Abstract: In this paper, we present a blockchain-based mobile edge computing (B-MEC) framework for adaptive resource allocation and computation offloading in future wireless networks, where the blockchain works as an overlaid system to provide management and control functions. In this framework, how to reach a consensus between the nodes while simultaneously guaranteeing the performance of both MEC and blockchain systems is a major challenge. Meanwhile, resource allocation, block size, and the number of consecutive blocks produced by each producer are critical to the performance of B-MEC. Therefore, an adaptive resource allocation and block generation scheme is proposed. To improve the throughput of the overlaid blockchain system and the quality of services (QoS) of the users in the underlaid MEC system, spectrum allocation, size of the blocks, and number of producing blocks for each producer are formulated as a joint optimization problem, where the time-varying wireless links and computation capacity of the MEC servers are considered. Since this problem is intractable using traditional methods, we resort to the deep reinforcement learning approach. Simulation results show the effectiveness of the proposed approach by comparing with other baseline methods.
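A minimal sketch of how the joint decision variables above (spectrum allocation, block size, and number of blocks per producer) can be encoded as a single discrete action space for a deep reinforcement learning agent is given below; the candidate values and the reward formula are illustrative placeholders, not the paper's system model.

```python
import itertools
import numpy as np

# Candidate values for each decision variable (illustrative, not from the paper).
SPECTRUM_SHARES = [0.2, 0.4, 0.6, 0.8]   # fraction of bandwidth assigned to a user
BLOCK_SIZES_MB = [1, 2, 4, 8]
BLOCKS_PER_PRODUCER = [1, 2, 4]

# The joint action space is the Cartesian product of the three variables,
# which a DRL agent can index with a single discrete action.
JOINT_ACTIONS = list(itertools.product(SPECTRUM_SHARES, BLOCK_SIZES_MB,
                                       BLOCKS_PER_PRODUCER))

def reward(action_idx, channel_gain, cpu_free, w_mec=0.5, w_chain=0.5):
    """Toy reward mixing an MEC-throughput term and a blockchain-throughput
    term; both formulas are placeholders standing in for the paper's models."""
    share, block_mb, n_blocks = JOINT_ACTIONS[action_idx]
    mec_rate = share * np.log2(1.0 + channel_gain) * cpu_free
    chain_tps = block_mb * n_blocks / (1.0 + 0.1 * block_mb * n_blocks)  # saturating
    return w_mec * mec_rate + w_chain * chain_tps
```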

Journal ArticleDOI
Samantha Joel1, Paul W. Eastwick2, Colleen J. Allison3, Ximena B. Arriaga4, Zachary G. Baker5, Eran Bar-Kalifa6, Sophie Bergeron7, Gurit E. Birnbaum8, Rebecca L. Brock9, Claudia Chloe Brumbaugh10, Cheryl L. Carmichael10, Serena Chen11, Jennifer Clarke12, Rebecca J. Cobb13, Michael K. Coolsen14, Jody L. Davis15, David C. de Jong16, Anik Debrot17, Eva C. DeHaas3, Jaye L. Derrick5, Jami Eller18, Marie Joelle Estrada19, Ruddy Faure20, Eli J. Finkel21, R. Chris Fraley22, Shelly L. Gable23, Reuma Gadassi-Polack24, Yuthika U. Girme3, Amie M. Gordon25, Courtney L. Gosnell26, Matthew D. Hammond27, Peggy A. Hannon28, Cheryl Harasymchuk29, Wilhelm Hofmann30, Andrea B. Horn31, Emily A. Impett32, Jeremy P. Jamieson19, Dacher Keltner10, James J. Kim32, Jeffrey L. Kirchner33, Esther S. Kluwer34, Esther S. Kluwer35, Madoka Kumashiro36, Grace M. Larson37, Gal Lazarus38, Jill M. Logan3, Laura B. Luchies39, Geoff MacDonald32, Laura V. Machia40, Michael R. Maniaci41, Jessica A. Maxwell42, Moran Mizrahi43, Amy Muise44, Sylvia Niehuis13, Brian G. Ogolsky22, C. Rebecca Oldham13, Nickola C. Overall42, Meinrad Perrez45, Brett J. Peters46, Paula R. Pietromonaco47, Sally I. Powers47, Thery Prok23, Rony Pshedetzky-Shochat38, Eshkol Rafaeli38, Eshkol Rafaeli48, Erin L. Ramsdell9, Maija Reblin49, Michael Reicherts45, Alan Reifman13, Harry T. Reis19, Galena K. Rhoades50, William S. Rholes51, Francesca Righetti20, Lindsey M. Rodriguez49, Ron Rogge19, Natalie O. Rosen52, Darby E. Saxbe53, Haran Sened38, Jeffry A. Simpson18, Erica B. Slotter54, Scott M. Stanley50, Shevaun L. Stocker55, Cathy Surra56, Hagar Ter Kuile35, Allison A. Vaughn57, Amanda M. Vicary58, Mariko L. Visserman32, Mariko L. Visserman44, Scott T. Wolf33 
University of Western Ontario1, University of California, Davis2, Simon Fraser University3, Purdue University4, University of Houston5, Ben-Gurion University of the Negev6, Université de Montréal7, Interdisciplinary Center Herzliya8, University of Nebraska–Lincoln9, City University of New York10, University of California, Berkeley11, University of Colorado Colorado Springs12, Texas Tech University13, Shippensburg University of Pennsylvania14, Virginia Commonwealth University15, Western Carolina University16, University of Lausanne17, University of Minnesota18, University of Rochester19, VU University Amsterdam20, Northwestern University21, University of Illinois at Urbana–Champaign22, University of California, Santa Barbara23, Yale University24, University of Michigan25, Pace University26, Victoria University of Wellington27, University of Washington28, Carleton University29, Ruhr University Bochum30, University of Zurich31, University of Toronto32, University of North Carolina at Chapel Hill33, Radboud University Nijmegen34, Utrecht University35, Goldsmiths, University of London36, University of Cologne37, Bar-Ilan University38, Calvin University39, Syracuse University40, Florida Atlantic University41, University of Auckland42, Ariel University43, York University44, University of Fribourg45, Ohio University46, University of Massachusetts Amherst47, Barnard College48, University of South Florida49, University of Denver50, Texas A&M University51, Dalhousie University52, University of Southern California53, Villanova University54, University of Wisconsin–Superior55, University of Texas at Austin56, San Diego State University57, Illinois Wesleyan University58
TL;DR: The findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation byindividual differences and moderation by partner-reports may be quite small.
Abstract: Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner's ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person's own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
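To illustrate the Random Forest variance-explained analysis in miniature, the sketch below fits a random forest to synthetic "actor-reported" predictors and reports cross-validated R²; the data and feature set are simulated and stand in for the project's 43 dyadic datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical actor-reported predictors (columns) and relationship quality
# (target); in the actual project these come from self-report questionnaires.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))   # e.g. perceived-partner commitment, appreciation, ...
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=500, random_state=0)
# Cross-validated R^2 mirrors the "variance explained" metric reported above.
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")

model.fit(X, y)
print("feature importances:", np.round(model.feature_importances_, 2))
```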

Journal ArticleDOI
TL;DR: A novel representation learning-based domain adaptation method is proposed to transfer information from the source domain to a target domain where labeled data is scarce; it outperforms several state-of-the-art domain adaptation methods, and the progressive learning strategy is promising.
Abstract: Domain adaptation aims to exploit the supervision knowledge in a source domain for learning prediction models in a target domain. In this article, we propose a novel representation learning-based domain adaptation method, i.e., neural embedding matching (NEM) method, to transfer information from the source domain to the target domain where labeled data is scarce. The proposed approach induces an intermediate common representation space for both domains with a neural network model while matching the embedding of data from the two domains in this common representation space. The embedding matching is based on the fundamental assumptions that a cross-domain pair of instances will be close to each other in the embedding space if they belong to the same class category, and the local geometry property of the data can be maintained in the embedding space. The assumptions are encoded via objectives of metric learning and graph embedding techniques to regularize and learn the semisupervised neural embedding model. We also provide a generalization bound analysis for the proposed domain adaptation method. Meanwhile, a progressive learning strategy is proposed and it improves the generalization ability of the neural network gradually. Experiments are conducted on a number of benchmark data sets and the results demonstrate that the proposed method outperforms several state-of-the-art domain adaptation methods and the progressive learning strategy is promising.
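A simplified sketch of the cross-domain embedding matching idea is shown below: embeddings of source and target instances that share a class label are pulled together in the common representation space. This is a stand-in for the paper's full metric-learning and graph-embedding objectives, not its exact loss.

```python
import torch

def embedding_matching_loss(src_emb, src_labels, tgt_emb, tgt_labels):
    """Pull together cross-domain pairs of embeddings that share a class
    label (a simplified proxy for the embedding-matching objective)."""
    # Pairwise squared distances between every source and target embedding.
    dists = torch.cdist(src_emb, tgt_emb).pow(2)                     # (n_src, n_tgt)
    same_class = (src_labels.unsqueeze(1) == tgt_labels.unsqueeze(0)).float()
    # Average distance over same-class cross-domain pairs only.
    return (dists * same_class).sum() / same_class.sum().clamp(min=1.0)

# Usage: add this term, weighted, to the supervised classification loss of a
# shared encoder so that same-class source/target embeddings are aligned.
```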

Journal ArticleDOI
TL;DR: The habitat amount hypothesis predicts that species richness in plots of fixed size is more strongly and positively related to the amount of habitat around the plot than to patch size or isolation, and habitat amount better predicts species density than patch size and isolation combined.
Abstract: Decades of research suggest that species richness depends on spatial characteristics of habitat patches, especially their size and isolation. In contrast, the habitat amount hypothesis predicts that (1) species richness in plots of fixed size (species density) is more strongly and positively related to the amount of habitat around the plot than to patch size or isolation; (2) habitat amount better predicts species density than patch size and isolation combined, (3) there is no effect of habitat fragmentation per se on species density and (4) patch size and isolation effects do not become stronger with declining habitat amount. Data on eight taxonomic groups from 35 studies around the world support these predictions. Conserving species density requires minimising habitat loss, irrespective of the configuration of the patches in which that habitat is contained.

Journal ArticleDOI
TL;DR: The weight of evidence indicates that recombination failure, cohesin deterioration, spindle assembly checkpoint (SAC) dysregulation, abnormalities in post-translational modification of histones and tubulin, and mitochondrial dysfunction are the leading causes of oocyte aneuploidy associated with maternal aging.
Abstract: It is well established that maternal age is associated with a rapid decline in the production of healthy and high-quality oocytes, resulting in reduced fertility in women older than 35 years of age. In particular, chromosome segregation errors during meiotic divisions are increasingly common and lead to the production of oocytes with an incorrect number of chromosomes, a condition known as aneuploidy. When an aneuploid oocyte is fertilized by a sperm, it gives rise to an aneuploid embryo that, except in rare situations, will result in a spontaneous abortion. As females advance in age, they are at higher risk of infertility, miscarriage, or having a pregnancy affected by congenital birth defects such as Down syndrome (trisomy 21), Edwards syndrome (trisomy 18), and Turner syndrome (monosomy X). Here, we review the potential molecular mechanisms associated with increased chromosome segregation errors during meiosis as a function of maternal age. Our review shows that multiple exogenous and endogenous factors contribute to the age-related increase in oocyte aneuploidy. Specifically, the weight of evidence indicates that recombination failure, cohesin deterioration, spindle assembly checkpoint (SAC) dysregulation, abnormalities in post-translational modification of histones and tubulin, and mitochondrial dysfunction are the leading causes of oocyte aneuploidy associated with maternal aging. There is also growing evidence that dietary and other bioactive interventions may mitigate the effect of maternal aging on oocyte quality and oocyte aneuploidy, thereby improving fertility outcomes. Maternal age is a major concern for aneuploidy and genetic disorders in the offspring in the context of an increasing proportion of mothers having children at increasingly older ages. A better understanding of the mechanisms associated with maternal aging leading to aneuploidy, and of intervention strategies that may mitigate these detrimental effects and reduce its occurrence, is essential for preventing abnormal reproductive outcomes in the human population.