scispace - formally typeset

Showing papers on "Context (language use) published in 2021"


Journal ArticleDOI
TL;DR: This article reviews the literature on mutations of the SARS-CoV-2 spike protein, the primary antigen, focusing on their impacts on antigenicity and contextualizing them in the protein structure.
Abstract: Although most mutations in the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) genome are expected to be either deleterious and swiftly purged or relatively neutral, a small proportion will affect functional properties and may alter infectivity, disease severity or interactions with host immunity. The emergence of SARS-CoV-2 in late 2019 was followed by a period of relative evolutionary stasis lasting about 11 months. Since late 2020, however, SARS-CoV-2 evolution has been characterized by the emergence of sets of mutations, in the context of ‘variants of concern’, that impact virus characteristics, including transmissibility and antigenicity, probably in response to the changing immune profile of the human population. There is emerging evidence of reduced neutralization of some SARS-CoV-2 variants by postvaccination serum; however, a greater understanding of correlates of protection is required to evaluate how this may impact vaccine effectiveness. Nonetheless, manufacturers are preparing platforms for a possible update of vaccine sequences, and it is crucial that surveillance of genetic and antigenic changes in the global virus population is done alongside experiments to elucidate the phenotypic impacts of mutations. In this Review, we summarize the literature on mutations of the SARS-CoV-2 spike protein, the primary antigen, focusing on their impacts on antigenicity and contextualizing them in the protein structure, and discuss them in the context of observed mutation frequencies in global sequence datasets. The evolution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been characterized by the emergence of mutations and so-called variants of concern that impact virus characteristics, including transmissibility and antigenicity. 
In this Review, members of the COVID-19 Genomics UK (COG-UK) Consortium and colleagues summarize mutations of the SARS-CoV-2 spike protein, focusing on their impacts on antigenicity and contextualizing them in the protein structure, and discuss them in the context of observed mutation frequencies in global sequence datasets.

2,047 citations


Proceedings ArticleDOI
20 Jun 2021
TL;DR: The authors propose a pure transformer that encodes an image as a sequence of patches and can be combined with a simple decoder to yield a powerful segmentation model.
Abstract: Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
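The "sequence of patches" encoding that SETR builds on can be illustrated with a minimal NumPy sketch: an image is split into non-overlapping patches, each flattened into one token vector. The 480x480 size and 16-pixel patch are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def image_to_patch_sequence(img, patch=16):
    """Flatten an H x W x C image into a sequence of non-overlapping
    patch vectors, as in ViT-style encoders (illustrative sizes)."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    gh, gw = H // patch, W // patch
    # Split into a (gh, gw) grid of patches, then flatten each patch.
    x = img.reshape(gh, patch, gw, patch, C)
    x = x.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch * patch * C)
    return x

seq = image_to_patch_sequence(np.zeros((480, 480, 3)), patch=16)
print(seq.shape)  # (900, 768): a 30x30 grid of patches, each a 768-dim token
```

The transformer then attends over all 900 tokens in every layer, which is how global context is modeled without any resolution reduction.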

1,761 citations


Journal ArticleDOI
25 Feb 2021-Nature
TL;DR: It is demonstrated that relatively low antibody titers are sufficient for protection against SARS-CoV-2 in rhesus macaques, and that cellular immune responses may also contribute to protection if antibody responses are suboptimal.
Abstract: Recent studies have reported the protective efficacy of both natural1 and vaccine-induced2–7 immunity against challenge with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in rhesus macaques. However, the importance of humoral and cellular immunity for protection against infection with SARS-CoV-2 remains to be determined. Here we show that the adoptive transfer of purified IgG from convalescent rhesus macaques (Macaca mulatta) protects naive recipient macaques against challenge with SARS-CoV-2 in a dose-dependent fashion. Depletion of CD8+ T cells in convalescent macaques partially abrogated the protective efficacy of natural immunity against rechallenge with SARS-CoV-2, which suggests a role for cellular immunity in the context of waning or subprotective antibody titres. These data demonstrate that relatively low antibody titres are sufficient for protection against SARS-CoV-2 in rhesus macaques, and that cellular immune responses may contribute to protection if antibody responses are suboptimal. We also show that higher antibody titres are required for treatment of SARS-CoV-2 infection in macaques. These findings have implications for the development of SARS-CoV-2 vaccines and immune-based therapeutic agents. Adoptive transfer of purified IgG from convalescent macaques protects naive macaques against SARS-CoV-2 infection, and cellular immune responses contribute to protection against rechallenge with SARS-CoV-2.

881 citations


Journal ArticleDOI
TL;DR: The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning to provide context and explanation of the models.
Abstract: Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
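The "tractable distributions where both sampling and density evaluation can be efficient and exact" property comes from the change-of-variables formula: log p(x) = log p_base(f(x)) + log |det df/dx|. A minimal sketch with a single affine flow (the parameters s and b are made up for illustration):

```python
import numpy as np

# A single affine "flow" z = (x - b) / s mapping data to a standard
# normal base distribution; s and b are illustrative fixed parameters.
s, b = 2.0, 1.0

def log_prob(x):
    z = (x - b) / s                               # invertible transform
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # log N(z; 0, 1)
    log_det = -np.log(s)                          # log |dz/dx|
    return log_base + log_det

def sample(n, rng=np.random.default_rng(0)):
    z = rng.standard_normal(n)                    # sample base, then invert
    return s * z + b

print(log_prob(1.0))  # exact log-density at the mode of N(b, s^2)
```

Real flows stack many such invertible layers with learned parameters, but both operations remain exact: density evaluation runs the transform forward and adds log-determinants; sampling runs the inverse.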

683 citations


Journal ArticleDOI
TL;DR: In this article, the authors systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving and provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection.
Abstract: Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of “what to fuse”, “when to fuse”, and “how to fuse” remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/ .
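The "how to fuse" and "when to fuse" questions can be made concrete with a toy sketch; the feature shapes and scores below are hypothetical, not drawn from any method in the survey.

```python
import numpy as np

# Hypothetical per-object feature vectors from two sensing modalities.
cam_feat = np.random.default_rng(1).standard_normal((5, 64))    # camera
lidar_feat = np.random.default_rng(2).standard_normal((5, 32))  # LiDAR

# "How/when to fuse", option 1: early (feature-level) fusion simply
# concatenates features channel-wise before a shared detection head.
fused = np.concatenate([cam_feat, lidar_feat], axis=1)
print(fused.shape)  # (5, 96)

# Option 2: late (decision-level) fusion combines per-modality
# confidence scores, e.g. by averaging.
cam_scores = np.array([0.9, 0.2, 0.7])
lidar_scores = np.array([0.8, 0.4, 0.5])
late_fused = (cam_scores + lidar_scores) / 2
```

The survey's point is that the design space between these extremes (middle fusion, learned fusion operators) is large and still lacks general guidelines.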

674 citations


Journal ArticleDOI
TL;DR: This article reviews current progress on carbon-based pseudocapacitive material composites for supercapacitor applications in a systematic and accessible manner, providing guidance for early-career researchers and emerging scientists working on or interested in supercapacitors.

640 citations


Journal ArticleDOI
TL;DR: This paper aims to provide a comprehensive study concerning FL’s security and privacy aspects that can help bridge the gap between the current state of federated AI and a future in which mass adoption is possible.

565 citations


Journal ArticleDOI
23 Feb 2021
TL;DR: This article focuses on nonverbal overload as a potential cause for fatigue in Zoom, and provides four arguments outlining how various aspects of the current Zoom interface likely lead to psychological consequences.
Abstract: For decades, scholars have predicted that videoconference technology will disrupt the practice of commuting daily to and from work and will change the way people socialize. In 2020, the Covid-19 pandemic forced a drastic increase in the number of videoconference meetings, and Zoom became the leading software package because it was free, robust, and easy to use. While the software has been an essential tool for productivity, learning, and social interaction, something about being on videoconference all day seems particularly exhausting, and the term “Zoom Fatigue” caught on quickly. In this article, I focus on nonverbal overload as a potential cause for fatigue, and provide four arguments outlining how various aspects of the current Zoom interface likely lead to psychological consequences. The arguments are based on academic theory and research, but also have yet to be directly tested in the context of Zoom, and require future experimentation to confirm. Instead of indicting the medium, my goal is to point out these design flaws to isolate research areas for social scientists and to suggest design improvements for technologists.

441 citations


Journal ArticleDOI
TL;DR: In this article, the authors highlight the main themes currently under investigation in in vivo tumour metabolism, emphasizing questions that remain unanswered, and discuss the new interest in exploiting cancer genetic analysis for patient stratification and dietary interventions in combination with therapies that target metabolism.
Abstract: Tumour initiation and progression requires the metabolic reprogramming of cancer cells. Cancer cells autonomously alter their flux through various metabolic pathways in order to meet the increased bioenergetic and biosynthetic demand as well as mitigate oxidative stress required for cancer cell proliferation and survival. Cancer driver mutations coupled with environmental nutrient availability control flux through these metabolic pathways. Metabolites, when aberrantly accumulated, can also promote tumorigenesis. The development and application of new technologies over the last few decades has not only revealed the heterogeneity and plasticity of tumours but also allowed us to uncover new metabolic pathways involved in supporting tumour growth. The tumour microenvironment (TME), which can be depleted of certain nutrients, forces cancer cells to adapt by inducing nutrient scavenging mechanisms to sustain cancer cell proliferation. There is growing appreciation that the metabolism of cell types other than cancer cells within the TME, including endothelial cells, fibroblasts and immune cells, can modulate tumour progression. Because metastases are a major cause of death of patients with cancer, efforts are underway to understand how metabolism is harnessed by metastatic cells. Additionally, there is a new interest in exploiting cancer genetic analysis for patient stratification and/or dietary interventions in combination with therapies that target metabolism. In this Perspective, we highlight these main themes that are currently under investigation in the context of in vivo tumour metabolism, specifically emphasizing questions that remain unanswered.

416 citations


Journal ArticleDOI
TL;DR: The authors describe their tried-and-tested approach to genome curation using gEVAL, the genome evaluation browser, and offer recommendations for assembly curation in a gEVAL-independent context to facilitate the uptake of genome curation in the wider community.
Abstract: Genome sequence assemblies provide the basis for our understanding of biology. Generating error-free assemblies is therefore the ultimate, but sadly still unachieved goal of a multitude of research projects. Despite the ever-advancing improvements in data generation, assembly algorithms and pipelines, no automated approach has so far reliably generated near error-free genome assemblies for eukaryotes. Whilst working towards improved datasets and fully automated pipelines, assembly evaluation and curation is actively used to bridge this shortcoming and significantly reduce the number of assembly errors. In addition to this increase in product value, the insights gained from assembly curation are fed back into the automated assembly strategy and contribute to notable improvements in genome assembly quality. We describe our tried and tested approach for assembly curation using gEVAL, the genome evaluation browser. We outline the procedures applied to genome curation using gEVAL and also our recommendations for assembly curation in a gEVAL-independent context to facilitate the uptake of genome curation in the wider community.

373 citations


Journal ArticleDOI
TL;DR: In this article, the authors systematically reviewed existing research on the COVID-19 pandemic in supply chain disciplines and identified 74 relevant articles published on or before 28 September 2020; the synthesis of the findings reveals that four broad themes recur in the published work: namely, impacts of the COVID-19 pandemic, resilience strategies for managing impacts and recovery, the role of technology in implementing resilience strategies, and supply chain sustainability in the light of the pandemic.
Abstract: The global spread of the novel coronavirus, also known as the COVID-19 pandemic, has had a devastating impact on supply chains. Since the pandemic started, scholars have been researching and publishing their studies on the various supply-chain-related issues raised by COVID-19. However, while the number of articles on this subject has been steadily increasing, due to the absence of any systematic literature reviews, it remains unclear what aspects of this disruption have already been studied and what aspects still need to be investigated. The present study systematically reviews existing research on the COVID-19 pandemic in supply chain disciplines. Through a rigorous and systematic search, we identify 74 relevant articles published on or before 28 September 2020. The synthesis of the findings reveals that four broad themes recur in the published work: namely, impacts of the COVID-19 pandemic, resilience strategies for managing impacts and recovery, the role of technology in implementing resilience strategies, and supply chain sustainability in the light of the pandemic. Alongside the synthesis of the findings, this study describes the methodologies, context, and theories used in each piece of research. Our analysis reveals that there is a lack of empirically designed and theoretically grounded studies in this area; hence, the generalizability of the findings, thus far, is limited. Moreover, the analysis reveals that most studies have focused on supply chains for high-demand essential goods and healthcare products, while low-demand items and SMEs have been largely ignored. We also review the literature on prior epidemic outbreaks and other disruptions in supply chain disciplines. By considering the findings of these articles alongside research on the COVID-19 pandemic, this study offers research questions and directions for further investigation. These directions can guide scholars in designing and conducting impactful research in the field.

Journal ArticleDOI
TL;DR: The authors examine the association of introducing and lifting NPIs with the level of transmission of SARS-CoV-2, as measured by the time-varying reproduction number (R), across 131 countries.
Abstract: Summary Background Non-pharmaceutical interventions (NPIs) were implemented by many countries to reduce the transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causal agent of COVID-19. A resurgence in COVID-19 cases has been reported in some countries that lifted some of these NPIs. We aimed to understand the association of introducing and lifting NPIs with the level of transmission of SARS-CoV-2, as measured by the time-varying reproduction number (R), from a broad perspective across 131 countries. Methods In this modelling study, we linked data on daily country-level estimates of R from the London School of Hygiene & Tropical Medicine (London, UK) with data on country-specific policies on NPIs from the Oxford COVID-19 Government Response Tracker, available between Jan 1 and July 20, 2020. We defined a phase as a time period when all NPIs remained the same, and we divided the timeline of each country into individual phases based on the status of NPIs. We calculated the R ratio as the ratio between the daily R of each phase and the R from the last day of the previous phase (ie, before the NPI status changed) as a measure of the association between NPI status and transmission of SARS-CoV-2. We then modelled the R ratio using a log-linear regression with introduction and relaxation of each NPI as independent variables for each day of the first 28 days after the change in the corresponding NPI. In an ad-hoc analysis, we estimated the effect of reintroducing multiple NPIs with the greatest effects, and in the observed sequence, to tackle the possible resurgence of SARS-CoV-2. Findings 790 phases from 131 countries were included in the analysis. 
A decreasing trend over time in the R ratio was found following the introduction of school closure, workplace closure, public events ban, requirements to stay at home, and internal movement limits; the reduction in R ranged from 3% to 24% on day 28 following the introduction compared with the last day before introduction, although the reduction was significant only for public events ban (R ratio 0·76, 95% CI 0·58–1·00); for all other NPIs, the upper bound of the 95% CI was above 1. An increasing trend over time in the R ratio was found following the relaxation of school closure, bans on public events, bans on public gatherings of more than ten people, requirements to stay at home, and internal movement limits; the increase in R ranged from 11% to 25% on day 28 following the relaxation compared with the last day before relaxation, although the increase was significant only for school reopening (R ratio 1·24, 95% CI 1·00–1·52) and lifting bans on public gatherings of more than ten people (1·25, 1·03–1·51); for all other NPIs, the lower bound of the 95% CI was below 1. It took a median of 8 days (IQR 6–9) following the introduction of an NPI to observe 60% of the maximum reduction in R and even longer (17 days [14–20]) following relaxation to observe 60% of the maximum increase in R. In response to a possible resurgence of COVID-19, a control strategy of banning public events and public gatherings of more than ten people was estimated to reduce R, with an R ratio of 0·71 (95% CI 0·55–0·93) on day 28, decreasing to 0·62 (0·47–0·82) on day 28 if measures to close workplaces were added, 0·58 (0·41–0·81) if measures to close workplaces and internal movement restrictions were added, and 0·48 (0·32–0·71) if measures to close workplaces, internal movement restrictions, and requirements to stay at home were added. 
Interpretation Individual NPIs, including school closure, workplace closure, public events ban, ban on gatherings of more than ten people, requirements to stay at home, and internal movement limits, are associated with reduced transmission of SARS-CoV-2, but the effect of introducing and lifting these NPIs is delayed by 1–3 weeks, with this delay being longer when lifting NPIs. These findings provide additional evidence that can inform policy-maker decisions on the timing of introducing and lifting different NPIs, although R should be interpreted in the context of its known limitations. Funding Wellcome Trust Institutional Strategic Support Fund and Data-Driven Innovation initiative.
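The study's core measure, as defined in the Methods, is the R ratio: the daily R within a phase divided by R on the last day of the previous phase (i.e. before the NPI status changed). A minimal sketch with synthetic numbers, not the study's data:

```python
# Illustrative R-ratio computation: daily R within a phase divided by
# R on the last day of the previous phase (synthetic values).
daily_R = [1.6, 1.5, 1.4, 1.3, 1.2, 1.1]  # daily reproduction number
phase_of_day = [0, 0, 0, 1, 1, 1]         # phase label for each day

def r_ratios(daily_R, phase_of_day, phase):
    # Baseline: R on the last day before the phase's NPI status changed.
    prev_last = max(i for i, p in enumerate(phase_of_day) if p == phase - 1)
    baseline = daily_R[prev_last]
    return [r / baseline for r, p in zip(daily_R, phase_of_day) if p == phase]

print(r_ratios(daily_R, phase_of_day, phase=1))
```

In the study these ratios were then modelled with a log-linear regression, with the introduction and relaxation of each NPI as independent variables for each of the first 28 days after the change.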

Journal ArticleDOI
TL;DR: Point Cloud Transformer (PCT) is based on the Transformer architecture, which is inherently permutation invariant when processing a sequence of points, making it well suited for point cloud learning.
Abstract: The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on Transformer, which achieves huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that the PCT achieves the state-of-the-art performance on shape classification, part segmentation, semantic segmentation, and normal estimation tasks.
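The farthest point sampling mentioned in the abstract can be sketched as a greedy loop: repeatedly pick the point farthest from the set already chosen. This is a generic O(n*k) sketch of the standard algorithm, not the paper's implementation.

```python
import numpy as np

def farthest_point_sampling(pts, k, start=0):
    """Greedy farthest point sampling: iteratively pick the point
    farthest from the points already chosen."""
    chosen = [start]
    dist = np.linalg.norm(pts - pts[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest from the chosen set
        chosen.append(nxt)
        # Keep, for each point, its distance to the nearest chosen point.
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.0, 1.0]])
print(farthest_point_sampling(pts, 3))  # -> [0, 3, 2]: well-spread points
```

PCT pairs this with nearest-neighbor search to build local neighborhoods around the sampled points, enriching the input embedding with local context.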

Journal ArticleDOI
TL;DR: Mol* is a web-native tool for 3D visualization and streaming of macromolecular coordinate and experimental data, with capabilities for displaying structure quality, functional, or biological context annotations.
Abstract: Large biomolecular structures are being determined experimentally on a daily basis using established techniques such as crystallography and electron microscopy. In addition, emerging integrative or hybrid methods (I/HM) are producing structural models of huge macromolecular machines and assemblies, sometimes containing 100s of millions of non-hydrogen atoms. The performance requirements for visualization and analysis tools delivering these data are increasing rapidly. Significant progress in developing online, web-native three-dimensional (3D) visualization tools was previously accomplished with the introduction of the LiteMol suite and NGL Viewers. Thereafter, Mol* development was jointly initiated by PDBe and RCSB PDB to combine and build on the strengths of LiteMol (developed by PDBe) and NGL (developed by RCSB PDB). The web-native Mol* Viewer enables 3D visualization and streaming of macromolecular coordinate and experimental data, together with capabilities for displaying structure quality, functional, or biological context annotations. High-performance graphics and data management allows users to simultaneously visualise up to hundreds of (superimposed) protein structures, stream molecular dynamics simulation trajectories, render cell-level models, or display huge I/HM structures. It is the primary 3D structure viewer used by PDBe and RCSB PDB. It can be easily integrated into third-party services. Mol* Viewer is open source and freely available at https://molstar.org/.

Book
30 Nov 2021
TL;DR: An overview of creativity in the classroom covering: 1. What is Creativity? 2. Models of the Creative Process 3. Theories of Creativity: The Individual 4. Theories of Creativity: Systems in Context 5. Creative People 6. Teaching Creative Thinking Skills and Habits 7. Creativity in the Content Areas: Language Arts, Social Studies and the Arts 8. Creativity in the Content Areas: Science, Math, and General Teaching Strategies 9. Motivation, Creativity, and Classroom Organization 10. Assessing Creativity.
Abstract: Preface: Why Creativity in the Classroom? Part I: Understanding Creative People and Processes 1. What is Creativity? 2. Models of the Creative Process 3. Theories of Creativity: The Individual 4. Theories of Creativity: Systems in Context 5. Creative People Part II: Creativity and Classroom Life 6. Teaching Creative Thinking Skills and Habits 7. Creativity in the Content Areas: Language Arts, Social Studies and the Arts 8. Creativity in the Content Areas: Science, Math, and General Teaching Strategies 9. Motivation, Creativity, and Classroom Organization 10. Assessing Creativity Appendix: Problem-Finding Lessons References Author Index Subject Index

Journal ArticleDOI
TL;DR: In this article, the authors focus on the photometric content, describing the input data, the algorithms, the processing, and the validation of the results of Gaia EDR3.
Abstract: Context. Gaia Early Data Release 3 (Gaia EDR3) contains astrometry and photometry results for about 1.8 billion sources based on observations collected by the European Space Agency Gaia satellite during the first 34 months of its operational phase.Aims. In this paper, we focus on the photometric content, describing the input data, the algorithms, the processing, and the validation of the results. Particular attention is given to the quality of the data and to a number of features that users may need to take into account to make the best use of the Gaia EDR3 catalogue.Methods. The processing broadly followed the same procedure as for Gaia DR2, but with significant improvements in several aspects of the blue and red photometer (BP and RP) preprocessing and in the photometric calibration process. In particular, the treatment of the BP and RP background has been updated to include a better estimation of the local background, and the detection of crowding effects has been used to exclude affected data from the calibrations. The photometric calibration models have also been updated to account for flux loss over the whole magnitude range. Significant improvements in the modelling and calibration of the Gaia point and line spread functions have also helped to reduce a number of instrumental effects that were still present in DR2. Results. Gaia EDR3 contains 1.806 billion sources with G -band photometry and 1.540 billion sources with G BP and G RP photometry. The median uncertainty in the G -band photometry, as measured from the standard deviation of the internally calibrated mean photometry for a given source, is 0.2 mmag at magnitude G = 10–14, 0.8 mmag at G ≈ 17, and 2.6 mmag at G ≈ 19. The significant magnitude term found in the Gaia DR2 photometry is no longer visible, and overall there are no trends larger than 1 mmag mag−1 . 
Using one passband over the whole colour and magnitude range leaves no systematics above the 1% level in magnitude in any of the bands, although a larger systematic is present for a very small sample of bright and blue sources. A detailed description of the residual systematic effects is provided. Overall, the quality of the calibrated mean photometry in Gaia EDR3 is superior with respect to DR2 for all bands.

Journal ArticleDOI
TL;DR: A comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies.
Abstract: Reconfigurable intelligent surfaces (RISs), also known as intelligent reflecting surfaces (IRSs), or large intelligent surfaces (LISs), 1 have received significant attention for their potential to enhance the capacity and coverage of wireless networks by smartly reconfiguring the wireless propagation environment. Therefore, RISs are considered a promising technology for the sixth-generation (6G) of communication networks. In this context, we provide a comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies. We describe the basic principles of RISs both from physics and communications perspectives, based on which we present performance evaluation of multiantenna assisted RIS systems. In addition, we systematically survey existing designs for RIS-enhanced wireless networks encompassing performance analysis, information theory, and performance optimization perspectives. Furthermore, we survey existing research contributions that apply machine learning for tackling challenges in dynamic scenarios, such as random fluctuations of wireless channels and user mobility in RIS-enhanced wireless networks. Last but not least, we identify major issues and research opportunities associated with the integration of RISs and other emerging technologies for applications to next-generation networks. 1 Without loss of generality, we use the name of RIS in the remainder of this paper.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the two widely used Structural Equation Modeling (SEM) methods: covariance-based SEM (CB-SEM) and partial least squares SEM (PLS-SEM).

Book ChapterDOI
27 Sep 2021
TL;DR: TransBTS, a novel network based on the encoder-decoder structure, exploits a Transformer within a 3D CNN for MRI Brain Tumor Segmentation.
Abstract: Transformer, which can benefit from global (long-range) information modeling using self-attention mechanisms, has been successful in natural language processing and 2D image classification recently. However, both local and global features are crucial for dense prediction tasks, especially for 3D medical image segmentation. In this paper, we for the first time exploit Transformer in 3D CNN for MRI Brain Tumor Segmentation and propose a novel network named TransBTS based on the encoder-decoder structure. To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric spatial feature maps. Meanwhile, the feature maps are reformed elaborately for tokens that are fed into Transformer for global feature modeling. The decoder leverages the features embedded by Transformer and performs progressive upsampling to predict the detailed segmentation map. Extensive experimental results on both BraTS 2019 and 2020 datasets show that TransBTS achieves comparable or higher results than previous state-of-the-art 3D methods for brain tumor segmentation on 3D MRI scans. The source code is available at https://github.com/Wenxuan-1119/TransBTS.
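The step of reforming volumetric feature maps into tokens for the Transformer can be shown at the shape level; the channel and spatial sizes below are made-up illustrations, not TransBTS's actual configuration.

```python
import numpy as np

# A 3D CNN encoder produces a volumetric feature map of shape
# (channels, depth, height, width); sizes here are illustrative.
C, D, H, W = 128, 16, 16, 16
feat = np.zeros((C, D, H, W))

# Reform the volume into a token sequence: one token per voxel
# location, each of dimension C, ready for a Transformer encoder.
tokens = feat.reshape(C, D * H * W).T
print(tokens.shape)  # (4096, 128): 4096 tokens of dimension 128
```

The decoder then reverses this reshaping and progressively upsamples the Transformer-embedded features back to the full segmentation resolution.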

Journal ArticleDOI
08 Jan 2021
TL;DR: In this paper, the authors survey recent progress in the development of modern computer vision techniques-powered by deep learning-for medical applications, focusing on medical imaging, medical video, and clinical deployment.
Abstract: A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields-including medicine-to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques-powered by deep learning-for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit-including cardiology, pathology, dermatology, ophthalmology-and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.

Journal ArticleDOI
TL;DR: In this article, the authors use a systems framework for studying entrepreneurial ecosystems, develop a measurement instrument of its elements, and use this to compose an entrepreneurial ecosystem index to examine the quality of entrepreneurial ecosystems in the Netherlands.
Abstract: There is a growing interest in ecosystems as an approach for understanding the context of entrepreneurship at the macro level of an organizational community. It consists of all the interdependent actors and factors that enable and constrain entrepreneurship within a particular territory. Although growing in popularity, the entrepreneurial ecosystem concept remains loosely defined and measured. This paper shows the value of taking a systems view of the context of entrepreneurship: understanding entrepreneurial economies from a systems perspective. We use a systems framework for studying entrepreneurial ecosystems, develop a measurement instrument of its elements, and use this to compose an entrepreneurial ecosystem index to examine the quality of entrepreneurial ecosystems in the Netherlands. We find that the prevalence of high-growth firms in a region is strongly related to the quality of its entrepreneurial ecosystem. Strong interrelationships among the ecosystem elements reveal their interdependence and need for a systems perspective.
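One way to compose an index from ecosystem elements, as the abstract describes, is to normalize each element's score across regions and average them per region. This is an illustrative sketch only; the element names, scores, and aggregation rule are invented here and are not the paper's measurement instrument.

```python
# Hypothetical element scores for three regions; names and values are
# illustrative, not taken from the paper.
regions = {
    "A": {"talent": 7.0, "finance": 5.0, "networks": 8.0},
    "B": {"talent": 4.0, "finance": 6.0, "networks": 3.0},
    "C": {"talent": 9.0, "finance": 8.0, "networks": 7.0},
}

def ecosystem_index(regions):
    elements = next(iter(regions.values())).keys()
    # Min-max normalize each element across regions, then average per region.
    lo = {e: min(r[e] for r in regions.values()) for e in elements}
    hi = {e: max(r[e] for r in regions.values()) for e in elements}
    return {
        name: sum((r[e] - lo[e]) / (hi[e] - lo[e]) for e in elements) / len(r)
        for name, r in regions.items()
    }

print(ecosystem_index(regions))
```

A systems perspective would go further than this simple average, since the paper stresses interdependence among elements (e.g. a weak element may constrain the whole system).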

Journal ArticleDOI
TL;DR: It is hypothesized that the addition of structural determinants and root causes will identify racism as a cause of inequities in maternal health outcomes, as many of the social and political structures and policies in the United States were born out of racism, classism, and gender oppression.
Abstract: Since the World Health Organization launched its commission on the social determinants of health (SDOH) over a decade ago, a large body of research has proven that social determinants-defined as the conditions in which people are born, grow, live, work, and age-are significant drivers of disease risk and susceptibility within clinical care and public health systems. Unfortunately, the term has lost meaning within systems of care because of misuse and lack of context. As many disparate health outcomes remain, including higher risk of maternal mortality among Black women, a deeper understanding of the SDOH-and what forces underlie their distribution-is needed. In this article, we will expand our review of social determinants of maternal health to include the terms "structural determinants of health" and "root causes of inequities" as we assess the literature on this topic. We hypothesize that the addition of structural determinants and root causes will identify racism as a cause of inequities in maternal health outcomes, as many of the social and political structures and policies in the United States were born out of racism, classism, and gender oppression. We will conclude with proposed practice and policy solutions to end inequities in maternal health outcomes.

Journal ArticleDOI
TL;DR: In this paper, the effects of Industry 4.0's digital technologies on manufacturing enterprises are discussed, noting that most research examines the smart factory domain.
Abstract: Industry 4.0 (I4.0) encompasses a plethora of digital technologies affecting manufacturing enterprises. Most research on this topic examines the effects in the smart factory domain, focusing on ...

Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of state-of-the-art deep learning approaches used in histopathological image analysis, surveying over 130 papers.

Journal ArticleDOI
TL;DR: The most important predictor of when states adopted social distancing policies is political: All else equal, states led by Republican governors were slower to implement such policies during a critical window of early COVID-19 response.
Abstract: Context Social distancing is an essential but economically painful measure to flatten the curve of emergent infectious diseases. As the novel coronavirus that causes COVID-19 spread throughout the United States in early 2020, the federal government left to the states the difficult and consequential decisions about when to cancel events, close schools and businesses, and issue stay-at-home orders. Methods The authors present an original, detailed dataset of state-level social distancing policy responses to the epidemic; they then apply event history analysis to study the timing of implementation of five social distancing policies across all 50 states. Results The most important predictor of when states adopted social distancing policies is political: all else equal, states led by Republican governors were slower to implement such policies during a critical window of early COVID-19 response. Conclusions Continuing actions driven by partisanship rather than by public health expertise and scientific recommendations may exact greater tolls on health and broader society.
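The event-history logic behind the study's timing analysis can be illustrated with a toy "survival" curve: the fraction of states that have not yet adopted a policy by each day, split by a binary covariate. All numbers below are invented for illustration and are not the paper's data.

```python
# Toy event-history comparison: day on which each state adopted a policy,
# split by governor party (all values invented).
adoption_day = {
    "R": [12, 15, 18, 20, 22],   # later adopters
    "D": [5, 7, 8, 10, 12],      # earlier adopters
}

def survival_curve(days, horizon):
    # Fraction of states that have NOT yet adopted the policy by each day t.
    n = len(days)
    return [sum(d > t for d in days) / n for t in range(horizon + 1)]

s_r = survival_curve(adoption_day["R"], 25)
s_d = survival_curve(adoption_day["D"], 25)
# On day 10, no R-led state has adopted yet, while most D-led states have.
print(s_r[10], s_d[10])  # 1.0 0.2
```

The paper's event history analysis additionally controls for other state-level factors ("all else equal"); this sketch only shows the raw timing comparison.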

Journal ArticleDOI
04 Jan 2021
TL;DR: In this paper, a machine learning approach was used to detect COVID-19 cases by simple features accessed by asking basic questions, such as sex, age ≥ 60 years, known contact with an infected individual, and the appearance of five initial clinical symptoms.
Abstract: Effective screening of SARS-CoV-2 enables quick and efficient diagnosis of COVID-19 and can mitigate the burden on healthcare systems. Prediction models that combine several features to estimate the risk of infection have been developed. These aim to assist medical staff worldwide in triaging patients, especially in the context of limited healthcare resources. We established a machine-learning approach that trained on records from 51,831 tested individuals (of whom 4769 were confirmed to have COVID-19). The test set contained data from the subsequent week (47,401 tested individuals of whom 3624 were confirmed to have COVID-19). Our model predicted COVID-19 test results with high accuracy using only eight binary features: sex, age ≥60 years, known contact with an infected individual, and the appearance of five initial clinical symptoms. Overall, based on the nationwide data publicly reported by the Israeli Ministry of Health, we developed a model that detects COVID-19 cases by simple features accessed by asking basic questions. Our framework can be used, among other considerations, to prioritize testing for COVID-19 when testing resources are limited.
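A screening model over the eight binary features the abstract lists could look like the logistic scorer below. This is a hedged sketch: the feature names follow the abstract, but the weights are invented, and the paper's actual model was a trained machine-learning classifier, not these hand-set coefficients.

```python
import math

# Illustrative feature order and weights; the real model was trained on
# Israeli Ministry of Health records, and these numbers are made up.
FEATURES = ["male", "age_60_plus", "known_contact", "cough", "fever",
            "sore_throat", "shortness_of_breath", "headache"]
WEIGHTS  = [0.2, 0.5, 1.5, 0.8, 1.0, 0.6, 0.9, 0.4]
BIAS = -3.0

def risk_score(answers):
    # answers: dict mapping feature -> 0/1, obtainable by asking basic questions.
    z = BIAS + sum(w * answers[f] for f, w in zip(FEATURES, WEIGHTS))
    return 1.0 / (1.0 + math.exp(-z))   # probability-like score in (0, 1)

asymptomatic = {f: 0 for f in FEATURES}
symptomatic  = {**asymptomatic, "fever": 1, "cough": 1, "known_contact": 1}
print(risk_score(asymptomatic) < risk_score(symptomatic))  # True
```

Such a score could be thresholded to prioritize testing when resources are limited, which is the use case the abstract describes.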

Proceedings ArticleDOI
19 Apr 2021
TL;DR: In this paper, the authors propose TransFuser, a Multi-Modal Fusion Transformer that integrates image and LiDAR representations using attention and achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
Abstract: How should representations from complementary sensors be integrated for autonomous driving? Geometry-based sensor fusion has shown great promise for perception tasks such as object detection and motion forecasting. However, for the actual driving task, the global context of the 3D scene is key, e.g. a change in traffic light state can affect the behavior of a vehicle geometrically distant from that traffic light. Geometry alone may therefore be insufficient for effectively fusing representations in end-to-end driving models. In this work, we demonstrate that imitation learning policies based on existing sensor fusion methods under-perform in the presence of a high density of dynamic agents and complex scenarios, which require global contextual reasoning, such as handling traffic oncoming from multiple directions at uncontrolled intersections. Therefore, we propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention. We experimentally validate the efficacy of our approach in urban settings involving complex scenarios using the CARLA urban driving simulator. Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
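The attention-based fusion idea can be sketched with a toy cross-attention step in which image tokens attend over LiDAR tokens and vice versa, so each modality gains global scene context. This is a simplification for illustration, not TransFuser's exact architecture (which interleaves transformer fusion at multiple CNN scales); all shapes and weights here are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, context, Wq, Wk, Wv):
    # Each query token attends over all context tokens, pulling in
    # information that may be geometrically distant in the scene.
    Q, K, V = queries @ Wq, context @ Wk, context @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

rng = np.random.default_rng(1)
d = 16
img_tokens   = rng.standard_normal((10, d))   # toy image feature tokens
lidar_tokens = rng.standard_normal((12, d))   # toy LiDAR feature tokens
W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]

img_fused   = cross_attention(img_tokens, lidar_tokens, *W)
lidar_fused = cross_attention(lidar_tokens, img_tokens, *W)
print(img_fused.shape, lidar_fused.shape)     # (10, 16) (12, 16)
```

Contrast this with geometry-based fusion, which only combines features at spatially corresponding locations and therefore cannot model the traffic-light example in the abstract.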

Journal ArticleDOI
19 Jan 2021 - mBio
TL;DR: In this paper, the authors used a pipeline for single nucleotide variant calling in a metagenomic context, characterized minor SARS-CoV-2 alleles in the wastewater and detected viral genotypes which were also found within clinical genomes throughout California.
Abstract: Viral genome sequencing has guided our understanding of the spread and extent of genetic diversity of SARS-CoV-2 during the COVID-19 pandemic. SARS-CoV-2 viral genomes are usually sequenced from nasopharyngeal swabs of individual patients to track viral spread. Recently, RT-qPCR of municipal wastewater has been used to quantify the abundance of SARS-CoV-2 in several regions globally. However, metatranscriptomic sequencing of wastewater can be used to profile the viral genetic diversity across infected communities. Here, we sequenced RNA directly from sewage collected by municipal utility districts in the San Francisco Bay Area to generate complete and nearly complete SARS-CoV-2 genomes. The major consensus SARS-CoV-2 genotypes detected in the sewage were identical to clinical genomes from the region. Using a pipeline for single nucleotide variant calling in a metagenomic context, we characterized minor SARS-CoV-2 alleles in the wastewater and detected viral genotypes which were also found within clinical genomes throughout California. Observed wastewater variants were more similar to local California patient-derived genotypes than they were to those from other regions within the United States or globally. Additional variants detected in wastewater have only been identified in genomes from patients sampled outside California, indicating that wastewater sequencing can provide evidence for recent introductions of viral lineages before they are detected by local clinical sequencing. These results demonstrate that epidemiological surveillance through wastewater sequencing can aid in tracking exact viral strains in an epidemic context.
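The minor-allele detection step the abstract mentions amounts to counting bases at each genome position across reads and flagging non-major alleles above a frequency threshold. The sketch below illustrates that logic on invented pileup counts; it is not the paper's variant-calling pipeline, and the positions, counts, and thresholds are assumptions.

```python
# Toy minor-allele detection from per-position base counts, as one might get
# from a pileup of wastewater sequencing reads; positions and counts invented.
pileup = {
    241:   {"C": 180, "T": 20},    # 10% minor T allele
    23403: {"A": 5,   "G": 195},   # near-fixed G, below the minor-allele cutoff
}

def minor_variants(pileup, min_freq=0.05, min_depth=50):
    calls = []
    for pos, counts in pileup.items():
        depth = sum(counts.values())
        if depth < min_depth:
            continue  # too shallow to call minor alleles reliably
        major = max(counts, key=counts.get)
        for base, n in counts.items():
            freq = n / depth
            if base != major and freq >= min_freq:
                calls.append((pos, base, round(freq, 3)))
    return calls

print(minor_variants(pileup))  # [(241, 'T', 0.1)]
```

In the wastewater setting, such minor alleles are informative because each sample aggregates many infections, so sub-consensus frequencies reflect genotype mixtures in the community rather than within-host variation alone.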

Journal ArticleDOI
TL;DR: The creation of a true international consensus group to include all the relevant scientific liver societies, patient advocacy organizations, bio‐pharmaceutical industry, regulatory agencies and policy makers is recommended to move the field forward.

Journal ArticleDOI
TL;DR: This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight.
Abstract: Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight. Specifically, it covers the origins of CNNs and the methods by which we validate them as models of biological vision. It then goes on to elaborate on what we can learn about biological vision by understanding and experimenting on CNNs and discusses emerging opportunities for the use of CNNs in vision research beyond basic object recognition.