
Journal ArticleDOI
TL;DR: In this article, the authors study the impact of the short-term accommodation market on the hotel industry and find that the impact is non-uniformly distributed, with lower-priced hotels and those hotels not catering to business travelers being the most affected.
Abstract: Peer-to-peer markets, collectively known as the sharing economy, have emerged as alternative suppliers of goods and services traditionally provided by long-established industries. A central question regards the impact of these sharing economy platforms on incumbent firms. We study the case of Airbnb, specifically analyzing Airbnb’s entry into the short-term accommodation market in Texas and its impact on the incumbent hotel industry. We first explore Airbnb’s impact on hotel revenue, by using a difference-in-differences empirical strategy that exploits the significant spatiotemporal variation in the patterns of Airbnb adoption across city-level markets. We estimate that in Austin, where Airbnb supply is highest, the causal impact on hotel revenue is in the 8-10% range; moreover, the impact is non-uniformly distributed, with lower-priced hotels and those hotels not catering to business travelers being the most affected. We find that this impact materializes through less aggressive hotel room pricing, an impact that benefits all consumers, not just participants in the sharing economy. The impact on hotel prices is especially pronounced during periods of peak demand, such as SXSW. We find that by enabling supply to scale – a differentiating feature of peer-to-peer platforms – Airbnb has significantly crimped hotels’ ability to raise prices during periods of peak demand. Our work provides empirical evidence that the sharing economy is making inroads by successfully competing with, differentiating from, and acquiring market share from incumbent firms.
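The difference-in-differences design described above can be sketched on synthetic data. Everything below is illustrative: the variable names, the market structure, and the -0.09 effect on log revenue are invented for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Synthetic hotel-market panel: "treated" markets see Airbnb entry,
# "post" marks observations after entry. True effect on log revenue: -0.09.
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
log_revenue = (
    10.0 + 0.3 * treated + 0.1 * post
    - 0.09 * treated * post          # the difference-in-differences effect
    + rng.normal(0.0, 0.05, n)
)

# OLS with the interaction term: the coefficient on treated*post is the
# difference-in-differences estimate of the causal impact.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, log_revenue, rcond=None)
print(round(float(beta[3]), 2))  # recovers roughly -0.09
```

The actual study adds market and time fixed effects and a continuous Airbnb-supply measure; the two-period, two-group version here only shows the shape of the identification strategy.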

1,519 citations


Journal ArticleDOI
02 Jan 2015-Science
TL;DR: It is shown that the lifetime risk of cancers of many different types is strongly correlated with the total number of divisions of the normal self-renewing cells maintaining that tissue’s homeostasis, suggesting that only a third of the variation in cancer risk among tissues is attributable to environmental factors or inherited predispositions.
Abstract: Some tissue types give rise to human cancers millions of times more often than other tissue types. Although this has been recognized for more than a century, it has never been explained. Here, we show that the lifetime risk of cancers of many different types is strongly correlated (0.81) with the total number of divisions of the normal self-renewing cells maintaining that tissue’s homeostasis. These results suggest that only a third of the variation in cancer risk among tissues is attributable to environmental factors or inherited predispositions. The majority is due to “bad luck,” that is, random mutations arising during DNA replication in normal, noncancerous stem cells. This is important not only for understanding the disease but also for designing strategies to limit the mortality it causes.
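The "only a third" figure follows from squaring the reported correlation: with r = 0.81 between (log) lifetime risk and (log) total divisions, r² ≈ 0.66 of the variance is explained by stem-cell divisions, leaving roughly a third for environment and heredity. A quick check:

```python
r = 0.81                        # correlation reported in the abstract
r_squared = r ** 2
print(round(r_squared, 2))      # 0.66: share of variance explained by divisions
print(round(1 - r_squared, 2))  # 0.34: share left for environment/heredity
```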

1,519 citations


Journal ArticleDOI
01 Jul 2020-Obesity
TL;DR: The COVID‐19 pandemic is rapidly spreading worldwide, notably in Europe and North America where obesity is highly prevalent and the relation between obesity and severe acute respiratory syndrome coronavirus‐2 (SARS‐CoV‐2) has not been fully documented.
Abstract: Objective The COVID-19 pandemic is rapidly spreading worldwide, notably in Europe and North America where obesity is highly prevalent. The relation between obesity and severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has not been fully documented. Methods This retrospective cohort study analyzed the relationship between clinical characteristics, including BMI, and the requirement for invasive mechanical ventilation (IMV) in 124 consecutive patients admitted in intensive care for SARS-CoV-2 in a single French center. Results Obesity (BMI > 30) and severe obesity (BMI > 35) were present in 47.6% and 28.2% of cases, respectively. Overall, 85 patients (68.6%) required IMV. The proportion of patients who required IMV increased with BMI categories (P < 0.01) and was greatest in patients with BMI > 35 (85.7%). In multivariate logistic regression, the need for IMV was significantly associated with male sex (P < 0.05) and BMI (P < 0.05), with the greatest risk in patients with BMI > 35 versus patients with BMI < 25. Conclusions The present study showed a high frequency of obesity among patients admitted in intensive care for SARS-CoV-2. Disease severity increased with BMI. Obesity is a risk factor for SARS-CoV-2 severity, requiring increased attention to preventive measures in susceptible individuals.

1,518 citations


Journal ArticleDOI
TL;DR: The five glioma molecular groups had different ages at onset, overall survival, and associations with germline variants, which implies that they are characterized by distinct mechanisms of pathogenesis.
Abstract: BACKGROUND The prediction of clinical behavior, response to therapy, and outcome of infiltrative glioma is challenging. On the basis of previous studies of tumor biology, we defined five glioma molecular groups with the use of three alterations: mutations in the TERT promoter, mutations in IDH, and codeletion of chromosome arms 1p and 19q (1p/19q codeletion). We tested the hypothesis that within groups based on these features, tumors would have similar clinical variables, acquired somatic alterations, and germline variants. METHODS We scored tumors as negative or positive for each of these markers in 1087 gliomas and compared acquired alterations and patient characteristics among the five primary molecular groups. Using 11,590 controls, we assessed associations between these groups and known glioma germline variants.

1,518 citations


Journal ArticleDOI
TL;DR: Improvements make the miRTarBase one of the more comprehensively annotated, experimentally validated miRNA-target interactions databases and motivate additional miRNA research efforts.
Abstract: MicroRNAs (miRNAs) are small non-coding RNAs of approximately 22 nucleotides, which negatively regulate the gene expression at the post-transcriptional level. This study describes an update of the miRTarBase (http://miRTarBase.mbc.nctu.edu.tw/) that provides information about experimentally validated miRNA-target interactions (MTIs). The latest update of the miRTarBase expanded it to identify systematically Argonaute-miRNA-RNA interactions from 138 crosslinking and immunoprecipitation sequencing (CLIP-seq) data sets that were generated by 21 independent studies. The database contains 4966 articles, 7439 strongly validated MTIs (using reporter assays or western blots) and 348,007 MTIs from CLIP-seq. The number of MTIs in the miRTarBase has increased around 7-fold since the 2014 miRTarBase update. The miRNA and gene expression profiles from The Cancer Genome Atlas (TCGA) are integrated to provide an effective overview of this exponential growth in the miRNA experimental data. These improvements make the miRTarBase one of the more comprehensively annotated, experimentally validated miRNA-target interactions databases and motivate additional miRNA research efforts.

1,517 citations


Journal ArticleDOI
TL;DR: This paper surveys the state-of-the-art literature on C-RAN and can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on the network.
Abstract: Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges the operators face while trying to support growing end-user's needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into centralized BBU Pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This paper surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on C-RAN.

1,516 citations


Proceedings ArticleDOI
19 Feb 2018
TL;DR: In this article, the authors make the observation that the performance of multi-task learning is strongly dependent on the relative weighting between each task's loss, and propose a principled approach to weight multiple loss functions by considering the homoscedastic uncertainty of each task.
Abstract: Numerous deep learning applications benefit from multitask learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task.
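For regression tasks, the paper's homoscedastic-uncertainty weighting combines losses as a sum of (1/2σᵢ²)·Lᵢ + log σᵢ terms. The sketch below uses the log-variance parameterization sᵢ = log σᵢ², giving exp(-sᵢ)·Lᵢ + sᵢ up to constant factors, a common numerically stable variant; the loss values and log-variances are illustrative only.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learned log-variances s_i = log(sigma_i^2).

    Each task contributes exp(-s_i) * L_i + s_i: a large uncertainty
    down-weights that task's loss, while the +s_i regularizer stops the
    model from inflating every sigma to zero out the objective.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Two tasks on very different scales: a high log-variance on the
# large-loss task keeps it from dominating the combined objective.
print(uncertainty_weighted_loss([0.5, 200.0], [0.0, 5.0]))
```

In training, the `log_vars` would be free parameters optimized jointly with the network weights, which is what lets the relative task weights be learned rather than hand-tuned.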

1,515 citations


Journal ArticleDOI
TL;DR: This ESO-ESMO ABC 5 Clinical Practice Guideline provides key recommendations for managing advanced breast cancer patients, and provides updates on managing patients with all breast cancer subtypes, LABC, follow-up, palliative and supportive care.

1,514 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this paper, the authors define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel, and use hypercolumns as pixel descriptors.
Abstract: Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.
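The hypercolumn construction can be sketched in a few lines: upsample each layer's activation map to the image resolution and stack the maps along the channel axis, so every pixel gets one concatenated descriptor. The layer shapes below are invented for illustration, and nearest-neighbor upsampling stands in for whatever interpolation a real implementation would use.

```python
import numpy as np

def hypercolumns(feature_maps, out_hw):
    """Stack per-layer CNN activations into one descriptor per pixel.

    feature_maps: list of (C_i, H_i, W_i) arrays from different layers.
    Returns a (sum_i C_i, H, W) array via nearest-neighbor upsampling.
    """
    H, W = out_hw
    stacked = []
    for fmap in feature_maps:
        c, h, w = fmap.shape
        rows = np.arange(H) * h // H     # map output rows to source rows
        cols = np.arange(W) * w // W     # map output cols to source cols
        stacked.append(fmap[:, rows[:, None], cols[None, :]])
    return np.concatenate(stacked, axis=0)

rng = np.random.default_rng(0)
maps = [rng.normal(size=(64, 56, 56)), rng.normal(size=(256, 14, 14))]
hc = hypercolumns(maps, (224, 224))
print(hc.shape)  # (320, 224, 224): a 320-dim descriptor at each pixel
```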

1,511 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared two approaches to assess saturation: code saturation and meaning saturation, and examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess it.
Abstract: Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.

1,508 citations


Journal ArticleDOI
TL;DR: Patients with pre-existing cardiovascular metabolic diseases may face a greater risk of progressing to severe COVID-19, and these comorbidities can greatly affect prognosis; COVID-19 can, in turn, aggravate damage to the heart.
Abstract: Studies have indicated that cardiovascular metabolic comorbidities make patients more susceptible to 2019 novel coronavirus (2019-nCoV) disease (COVID-19) and exacerbate the infection. The aim of this analysis is to determine the association of cardiovascular metabolic diseases with the development of COVID-19. A meta-analysis of eligible studies that summarized the prevalence of cardiovascular metabolic diseases in COVID-19 and compared the incidences of the comorbidities in ICU/severe and non-ICU/severe patients was performed. Embase and PubMed were searched for relevant studies. A total of six studies with 1527 patients were included in this analysis. The proportions of hypertension, cardio-cerebrovascular disease and diabetes in patients with COVID-19 were 17.1%, 16.4% and 9.7%, respectively. The incidences of hypertension, cardio-cerebrovascular diseases and diabetes were about twofold, threefold and twofold higher, respectively, in ICU/severe cases than in their non-ICU/severe counterparts. At least 8.0% of patients with COVID-19 suffered acute cardiac injury. The incidence of acute cardiac injury was about 13-fold higher in ICU/severe patients compared with non-ICU/severe patients. Patients with pre-existing cardiovascular metabolic diseases may face a greater risk of progressing to severe disease, and these comorbidities can also greatly affect the prognosis of COVID-19. On the other hand, COVID-19 can, in turn, aggravate the damage to the heart.

Journal ArticleDOI
TL;DR: It is shown that NCS can provide over one-third of the cost-effective climate mitigation needed between now and 2030 to stabilize warming to below 2 °C.
Abstract: Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify "natural climate solutions" (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS-when constrained by food security, fiber security, and biodiversity conservation-is 23.8 petagrams of CO2 equivalent (PgCO2e) y-1 (95% CI 20.3-37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y-1) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e-1 by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2-1. Most NCS actions-if effectively implemented-also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change.
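The abstract's headline fractions can be checked directly from its own numbers: 11.3 PgCO2e y-1 out of a 23.8 PgCO2e y-1 maximum is indeed "about half".

```python
max_potential = 23.8   # PgCO2e per year, maximum NCS potential
cost_effective = 11.3  # PgCO2e per year at a social cost >= 100 USD MgCO2e-1

share = cost_effective / max_potential
print(round(share, 2))  # 0.47 -> "about half of this maximum"
```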

Journal ArticleDOI
TL;DR: This paper provides a framework for developing sampling designs in mixed methods research and presents sampling schemes that have been associated with quantitative and qualitative research, and provides a sampling design typology.
Abstract: This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and qualitative approaches. Third, we provide a sampling design typology and we demonstrate how sampling designs can be classified according to time orientation of the components and relationship of the qualitative and quantitative sample. Fourth, we present four major crises to mixed methods research and indicate how each crisis may be used to guide sampling design considerations. Finally, we emphasize how sampling design impacts the extent to which researchers can generalize their findings. Key Words: Sampling Schemes, Qualitative Research, Generalization, Parallel Sampling Designs, Pairwise Sampling Designs, Subgroup Sampling Designs, Nested Sampling Designs, and Multilevel Sampling Designs

Posted Content
TL;DR: This work presents a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and shows that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art3D shape descriptors.
Abstract: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.
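The step that "combines information from multiple views into a single and compact shape descriptor" is typically an element-wise max across per-view features; a minimal sketch with invented feature values:

```python
import numpy as np

def view_pool(view_features):
    """Element-wise max across view descriptors -> one shape descriptor.

    view_features: (n_views, d) array of per-view CNN features. Max-pooling
    across views makes the descriptor invariant to view ordering; the
    3-dimensional features here are purely illustrative.
    """
    return np.max(np.asarray(view_features), axis=0)

views = np.array([[0.1, 0.9, 0.0],
                  [0.4, 0.2, 0.7],
                  [0.3, 0.8, 0.5]])
print(view_pool(views))  # [0.4 0.9 0.7]
```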

Journal ArticleDOI
TL;DR: This review briefly analyze how the efficacy of liposomes depends on the nature of their components and their size, surface charge, and lipidic organization, and describes some strategies developed to overcome limitations of the “first-generation” liposome-based drugs on the market and in clinical trials.
Abstract: Since their discovery in the 1960s, liposomes have been studied in depth, and they continue to constitute a field of intense research. Liposomes are valued for their biological and technological advantages, and are considered to be the most successful drug-carrier system known to date. Notable progress has been made, and several biomedical applications of liposomes are either in clinical trials, are about to be put on the market, or have already been approved for public use. In this review, we briefly analyze how the efficacy of liposomes depends on the nature of their components and their size, surface charge, and lipidic organization. Moreover, we discuss the influence of the physicochemical properties of liposomes on their interaction with cells, half-life, ability to enter tissues, and final fate in vivo. Finally, we describe some strategies developed to overcome limitations of the "first-generation" liposomes, and liposome-based drugs on the market and in clinical trials.

Journal ArticleDOI
TL;DR: A framework for adaptive visual object tracking based on structured output prediction that is able to outperform state-of-the-art trackers on various benchmark videos and can easily incorporate additional features and kernels into the framework, which results in increased tracking performance.
Abstract: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.

Journal ArticleDOI
27 Jul 2018-Science
TL;DR: It is postulated that super-enhancers are phase-separated multimolecular assemblies, also known as biomolecular condensates, which provide a means to compartmentalize and concentrate biochemical reactions within cells.
Abstract: Super-enhancers (SEs) are clusters of enhancers that cooperatively assemble a high density of transcriptional apparatus to drive robust expression of genes with prominent roles in cell identity. Here, we demonstrate that the SE-enriched transcriptional coactivators BRD4 and MED1 form nuclear puncta at SEs that exhibit properties of liquid-like condensates and are disrupted by chemicals that perturb condensates. The intrinsically disordered regions (IDRs) of BRD4 and MED1 can form phase-separated droplets and MED1-IDR droplets can compartmentalize and concentrate transcription apparatus from nuclear extracts. These results support the idea that coactivators form phase-separated condensates at SEs that compartmentalize and concentrate the transcription apparatus, suggest a role for coactivator IDRs in this process, and offer insights into mechanisms involved in control of key cell identity genes.

Journal ArticleDOI
TL;DR: In this paper, a double-blind trial was conducted to evaluate the effect of empagliflozin on the risk of cardiovascular death or hospitalization for heart failure in patients with heart failure and a preserved ejection fraction.
Abstract: Background Sodium-glucose cotransporter 2 inhibitors reduce the risk of hospitalization for heart failure in patients with heart failure and a reduced ejection fraction, but their effects in patients with heart failure and a preserved ejection fraction are uncertain. Methods In this double-blind trial, we randomly assigned 5988 patients with class II-IV heart failure and an ejection fraction of more than 40% to receive empagliflozin (10 mg once daily) or placebo, in addition to usual therapy. The primary outcome was a composite of cardiovascular death or hospitalization for heart failure. Results Over a median of 26.2 months, a primary outcome event occurred in 415 of 2997 patients (13.8%) in the empagliflozin group and in 511 of 2991 patients (17.1%) in the placebo group (hazard ratio, 0.79; 95% confidence interval [CI], 0.69 to 0.90; P < 0.001). Conclusions Empagliflozin reduced the combined risk of cardiovascular death or hospitalization for heart failure in patients with heart failure and a preserved ejection fraction, regardless of the presence or absence of diabetes. (Funded by Boehringer Ingelheim and Eli Lilly; EMPEROR-Preserved ClinicalTrials.gov number, NCT03057951).
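The percentages in the abstract follow from the event counts, and the crude rate ratio comes out close to (but distinct from) the reported time-to-event hazard ratio of 0.79; all numbers below come from the abstract itself.

```python
empa_events, empa_n = 415, 2997        # empagliflozin group
placebo_events, placebo_n = 511, 2991  # placebo group

empa_rate = empa_events / empa_n
placebo_rate = placebo_events / placebo_n
print(round(100 * empa_rate, 1))           # 13.8 (% with a primary outcome event)
print(round(100 * placebo_rate, 1))        # 17.1 (%)
print(round(empa_rate / placebo_rate, 2))  # 0.81, the crude rate ratio
```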

Journal ArticleDOI
TL;DR: A deep learning model was developed to extract visual features from volumetric chest CT scans for the detection of coronavirus 2019 and differentiate it from community-acquired pneumonia and other lung conditions.
Abstract: Background Coronavirus disease 2019 (COVID-19) has widely spread all over the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set was 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set was 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion A deep learning model can accurately detect coronavirus 2019 and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020 Online supplemental material is available for this article.
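The reported sensitivities and specificities are simply the scan counts in the abstract expressed as percentages; a quick check:

```python
def pct(numer, denom):
    """Percentage, rounded to the nearest integer."""
    return round(100 * numer / denom)

# COVID-19 detection on the independent test set (counts from the abstract)
print(pct(114, 127))  # 90 -> per-scan sensitivity
print(pct(294, 307))  # 96 -> per-scan specificity

# CAP detection on the independent test set
print(pct(152, 175))  # 87 -> per-scan sensitivity
print(pct(239, 259))  # 92 -> per-scan specificity
```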

Journal ArticleDOI
TL;DR: This article reviews in a selective way the recent research on the interface between machine learning and the physical sciences, including conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields.
Abstract: Machine learning (ML) encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. This article reviews in a selective way the recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, examples are described of how statistical physics is used to understand methods in ML. This review then describes applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. Research and development into novel computing architectures aimed at accelerating ML are also highlighted. Each of the sections describes recent successes as well as domain-specific methodology and challenges.

Journal ArticleDOI
Julie George1, Jing Shan Lim2, Se Jin Jang3, Yupeng Cun1, Luka Ozretić, Gu Kong4, Frauke Leenders1, Xin Lu1, Lynnette Fernandez-Cuesta1, Graziella Bosco1, Christian Müller1, Ilona Dahmen1, Nadine Jahchan2, Kwon-Sik Park2, Dian Yang2, Anthony N. Karnezis5, Dedeepya Vaka2, Ángela Torres2, Maia Segura Wang, Jan O. Korbel, Roopika Menon6, Sung-Min Chun3, Deokhoon Kim3, Matthew D. Wilkerson7, Neil Hayes7, David Engelmann8, Brigitte M. Pützer8, Marc Bos1, Sebastian Michels6, Ignacija Vlasic, Danila Seidel1, Berit Pinther1, Philipp Schaub1, Christian Becker1, Janine Altmüller1, Jun Yokota9, Takashi Kohno, Reika Iwakawa, Koji Tsuta, Masayuki Noguchi10, Thomas Muley11, Hans Hoffmann11, Philipp A. Schnabel12, Iver Petersen13, Yuan Chen13, Alex Soltermann14, Verena Tischler14, Chang-Min Choi3, Yong-Hee Kim3, Pierre P. Massion15, Yong Zou15, Dragana Jovanovic16, Milica Kontic16, Gavin M. Wright17, Prudence A. Russell17, Benjamin Solomon17, Ina Koch, Michael Lindner, Lucia Anna Muscarella18, Annamaria la Torre18, John K. Field19, Marko Jakopović20, Jelena Knezevic, Esmeralda Castaños-Vélez21, Luca Roz, Ugo Pastorino, O.T. Brustugun22, Marius Lund-Iversen22, Erik Thunnissen23, Jens Köhler, Martin Schuler, Johan Botling24, Martin Sandelin24, Montserrat Sanchez-Cespedes, Helga B. Salvesen25, Viktor Achter1, Ulrich Lang1, Magdalena Bogus1, Peter M. Schneider1, Thomas Zander, Sascha Ansén6, Michael Hallek1, Jürgen Wolf6, Martin Vingron26, Yasushi Yatabe, William D. Travis27, Peter Nürnberg1, Christian Reinhardt, Sven Perner3, Lukas C. Heukamp, Reinhard Büttner, Stefan A. Haas26, Elisabeth Brambilla28, Martin Peifer1, Julien Sage2, Roman K. Thomas1 
06 Aug 2015-Nature
TL;DR: This first comprehensive study of somatic genome alterations in SCLC uncovers several key biological processes and identifies candidate therapeutic targets in this highly lethal form of cancer.
Abstract: We have sequenced the genomes of 110 small cell lung cancers (SCLC), one of the deadliest human cancers. In nearly all the tumours analysed we found bi-allelic inactivation of TP53 and RB1, sometimes by complex genomic rearrangements. Two tumours with wild-type RB1 had evidence of chromothripsis leading to overexpression of cyclin D1 (encoded by the CCND1 gene), revealing an alternative mechanism of Rb1 deregulation. Thus, loss of the tumour suppressors TP53 and RB1 is obligatory in SCLC. We discovered somatic genomic rearrangements of TP73 that create an oncogenic version of this gene, TP73Δex2/3. In rare cases, SCLC tumours exhibited kinase gene mutations, providing a possible therapeutic opportunity for individual patients. Finally, we observed inactivating mutations in NOTCH family genes in 25% of human SCLC. Accordingly, activation of Notch signalling in a pre-clinical SCLC mouse model strikingly reduced the number of tumours and extended the survival of the mutant mice. Furthermore, neuroendocrine gene expression was abrogated by Notch activity in SCLC cells. This first comprehensive study of somatic genome alterations in SCLC uncovers several key biological processes and identifies candidate therapeutic targets in this highly lethal form of cancer.

Journal ArticleDOI
TL;DR: This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need of new communication-theoretic models for their analysis and design, and the long-term and open research issues to be solved towards their massive deployment.
Abstract: Future wireless networks are expected to constitute a distributed intelligent wireless communications, sensing, and computing platform, which will have the challenging requirement of interconnecting the physical and digital worlds in a seamless and sustainable manner. Currently, two main factors prevent wireless network operators from building such networks: (1) the lack of control of the wireless environment, whose impact on the radio waves cannot be customized, and (2) the current operation of wireless radios, which consume a lot of power because new signals are generated whenever data has to be transmitted. In this paper, we challenge the usual “more data needs more power and emission of radio waves” status quo, and motivate that future wireless networks necessitate a smart radio environment: a transformative wireless concept, where the environmental objects are coated with artificial thin films of electromagnetic and reconfigurable material (that are referred to as reconfigurable intelligent meta-surfaces), which are capable of sensing the environment and of applying customized transformations to the radio waves. Smart radio environments have the potential to provide future wireless networks with uninterrupted wireless connectivity, and with the capability of transmitting data without generating new signals but recycling existing radio waves. We will discuss, in particular, two major types of reconfigurable intelligent meta-surfaces applied to wireless networks. The first type of meta-surfaces will be embedded into, e.g., walls, and will be directly controlled by the wireless network operators via a software controller in order to shape the radio waves for, e.g., improving the network coverage. The second type of meta-surfaces will be embedded into objects, e.g., smart t-shirts with sensors for health monitoring, and will backscatter the radio waves generated by cellular base stations in order to report their sensed data to mobile phones. 
These functionalities will enable wireless network operators to offer new services without emitting additional radio waves, instead recycling those already existing for other purposes. This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need for new communication-theoretic models for their analysis and design, and the long-term, open research issues to be solved before their massive deployment. In a nutshell, this paper discusses how the availability of reconfigurable intelligent meta-surfaces will allow wireless network operators to redesign common and well-known network communication paradigms.
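The "customized transformations to the radio waves" described above are commonly formalized, in the broader literature on reconfigurable intelligent surfaces, with a baseline received-signal model. The notation below is illustrative and not taken from this abstract:

```latex
y = \Bigl( h_d + \sum_{n=1}^{N} g_n \, e^{j\theta_n} \, h_n \Bigr)\, x + w
```

Here $x$ is the transmitted signal, $h_d$ the direct transmitter-receiver channel, $h_n$ and $g_n$ the channels to and from the $n$-th surface element, $\theta_n$ the configurable phase shift applied by that element, and $w$ additive noise. The software controller mentioned in the abstract corresponds to choosing the $\theta_n$, e.g., so that the reflected paths add constructively with the direct path and coverage improves.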

Journal ArticleDOI
03 Jun 2015-Thyroid
TL;DR: The revised guidelines are focused primarily on the diagnosis and treatment of patients with sporadic medullary thyroid carcinoma (MTC) and hereditary MTC and developed 67 evidence-based recommendations to assist clinicians in the care of Patients with MTC.
Abstract: Introduction: The American Thyroid Association appointed a Task Force of experts to revise the original Medullary Thyroid Carcinoma: Management Guidelines of the American Thyroid Association. Methods: The Task Force identified relevant articles using a systematic PubMed search, supplemented with additional published materials, and then created evidence-based recommendations, which were set in categories using criteria adapted from the United States Preventive Services Task Force Agency for Healthcare Research and Quality. The original guidelines provided abundant source material and an excellent organizational structure that served as the basis for the current revised document. Results: The revised guidelines are focused primarily on the diagnosis and treatment of patients with sporadic medullary thyroid carcinoma (MTC) and hereditary MTC. Conclusions: The Task Force developed 67 evidence-based recommendations to assist clinicians in the care of patients with MTC. The Task Force considers the recommendati...

Journal ArticleDOI
TL;DR: Recently updated features of miRDB are described, including 2.1 million predicted gene targets regulated by 6709 miRNAs and a new feature is the web server interface that allows submission of user-provided sequences for miRNA target prediction.
Abstract: MicroRNAs (miRNAs) are small non-coding RNAs that are extensively involved in many physiological and disease processes. One major challenge in miRNA studies is the identification of genes regulated by miRNAs. To this end, we have developed an online resource, miRDB (http://mirdb.org), for miRNA target prediction and functional annotations. Here, we describe recently updated features of miRDB, including 2.1 million predicted gene targets regulated by 6709 miRNAs. In addition to presenting precompiled prediction data, a new feature is the web server interface that allows submission of user-provided sequences for miRNA target prediction. In this way, users have the flexibility to study any custom miRNAs or target genes of interest. Another major update of miRDB is related to functional miRNA annotations. Although thousands of miRNAs have been identified, many of the reported miRNAs are not likely to play active functional roles or may even have been falsely identified as miRNAs from high-throughput studies. To address this issue, we have performed combined computational analyses and literature mining, and identified 568 and 452 functional miRNAs in humans and mice, respectively. These miRNAs, as well as associated functional annotations, are presented in the FuncMir Collection in miRDB.

Journal ArticleDOI
TL;DR: It is shown that consumption of particular types of food produces predictable shifts in existing host bacterial genera, which affects host immune and metabolic parameters, with broad implications for human health.
Abstract: Recent studies have suggested that the intestinal microbiome plays an important role in modulating risk of several chronic diseases, including inflammatory bowel disease, obesity, type 2 diabetes, cardiovascular disease, and cancer. At the same time, it is now understood that diet plays a significant role in shaping the microbiome, with experiments showing that dietary alterations can induce large, temporary microbial shifts within 24 h. Given this association, there may be significant therapeutic utility in altering microbial composition through diet. This review systematically evaluates current data regarding the effects of several common dietary components on intestinal microbiota. We show that consumption of particular types of food produces predictable shifts in existing host bacterial genera. Furthermore, the identity of these bacteria affects host immune and metabolic parameters, with broad implications for human health. Familiarity with these associations will be of tremendous use to the practitioner as well as the patient.

Journal ArticleDOI
TL;DR: Seven vital areas of research in this topic are identified, covering the full spectrum of learning from imbalanced data: classification, regression, clustering, data streams, big data analytics and applications, e.g., in social media and computer vision.
Abstract: Despite more than two decades of continuous development, learning from imbalanced data remains a focus of intense research. Starting as a problem of skewed class distributions in binary tasks, the topic has evolved well beyond that conception. With the expansion of machine learning and data mining, combined with the arrival of the big data era, we have gained deeper insight into the nature of imbalanced learning while at the same time facing new emerging challenges. Data-level and algorithm-level methods are constantly being improved, and hybrid approaches are gaining popularity. Recent trends focus on analyzing not only the disproportion between classes but also other difficulties embedded in the nature of the data. New real-life problems motivate researchers to focus on computationally efficient, adaptive, and real-time methods. This paper discusses open issues and challenges that must be addressed to further develop the field of imbalanced learning. Seven vital areas of research are identified, covering the full spectrum of learning from imbalanced data: classification, regression, clustering, data streams, big data analytics, and applications, e.g., in social media and computer vision. For each of them, the paper provides a discussion and suggestions concerning lines of future research.
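The "data-level and algorithm-level methods" mentioned in this abstract can be illustrated with two classic baselines: inverse-frequency class weighting (algorithm-level) and random oversampling (data-level). The sketch below is a minimal stdlib-only illustration of those two generic techniques, not code from the paper:

```python
import random
from collections import Counter

def class_weights(labels):
    """Algorithm-level: inverse-frequency weights, w_c = N / (K * n_c),
    where N is the sample count, K the number of classes, n_c the class count."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

def random_oversample(samples, labels, seed=0):
    """Data-level: duplicate minority-class samples at random until every
    class matches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for c, nc in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == c]
        for _ in range(target - nc):
            out_s.append(rng.choice(pool))
            out_l.append(c)
    return out_s, out_l
```

For a 9:1 binary problem, `class_weights` assigns the minority class a weight of 5.0 and the majority class about 0.56, while `random_oversample` returns a 9:9 balanced set. The review's point is that such simple remedies are only a starting point once multi-class, streaming, and big-data settings are involved.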

Journal ArticleDOI
TL;DR: In this trial involving high-risk persons, lung-cancer mortality was significantly lower among those who underwent volume CT screening than among those who underwent no screening.
Abstract: Background There are limited data from randomized trials regarding whether volume-based, low-dose computed tomographic (CT) screening can reduce lung-cancer mortality among male former and...

Journal ArticleDOI
TL;DR: Data show that E is involved in critical aspects of the viral life cycle and that CoVs lacking E make promising vaccine candidates; this knowledge can aid the production of effective anti-coronaviral agents for both human and enzootic CoVs.
Abstract: Coronaviruses (CoVs) primarily cause enzootic infections in birds and mammals but, in the last few decades, have proved capable of infecting humans as well. The outbreak of severe acute respiratory syndrome (SARS) in 2003 and, more recently, Middle East respiratory syndrome (MERS) has demonstrated the lethality of CoVs when they cross the species barrier and infect humans. A renewed interest in coronaviral research has led to the discovery of several novel human CoVs, and since then much progress has been made in understanding the CoV life cycle. The CoV envelope (E) protein is a small, integral membrane protein involved in several aspects of the virus' life cycle, such as assembly, budding, envelope formation, and pathogenesis. Recent studies have expanded on its structural motifs and topology, its functions as an ion-channelling viroporin, and its interactions with both other CoV proteins and host cell proteins. This review aims to establish the current knowledge on CoV E by highlighting the recent progress that has been made and comparing it to previous knowledge. It also compares E to other viral proteins of a similar nature to speculate on the relevance of these new findings. Good progress has been made, but much still remains unknown, and this review identifies some gaps in the current knowledge and makes suggestions for consideration in future research. The most progress has been made on SARS-CoV E, highlighting specific structural requirements for its functions in the CoV life cycle as well as mechanisms behind its pathogenesis. The data show that E is involved in critical aspects of the viral life cycle and that CoVs lacking E make promising vaccine candidates. The high mortality rate of certain CoVs, along with their ease of transmission, underpins the need for more research into CoV molecular biology, which can aid in the production of effective anti-coronaviral agents for both human CoVs and enzootic CoVs.

Journal ArticleDOI
10 Jan 2019
TL;DR: This review will provide an overview of the studies that focus on gut microbiota balances in the same individual and between individuals and highlight the close mutualistic relationship between gut microbiota variations and diseases.
Abstract: Each individual is provided with a unique gut microbiota profile that plays many specific functions in host nutrient metabolism, maintenance of structural integrity of the gut mucosal barrier, immunomodulation, and protection against pathogens. Gut microbiota are composed of different bacteria species taxonomically classified by genus, family, order, and phyla. Each human’s gut microbiota are shaped in early life as their composition depends on infant transitions (birth gestational date, type of delivery, methods of milk feeding, weaning period) and external factors such as antibiotic use. These personal and healthy core native microbiota remain relatively stable in adulthood but differ between individuals due to enterotypes, body mass index (BMI) level, exercise frequency, lifestyle, and cultural and dietary habits. Accordingly, there is not a unique optimal gut microbiota composition since it is different for each individual. However, a healthy host–microorganism balance must be respected in order to optimally perform metabolic and immune functions and prevent disease development. This review will provide an overview of the studies that focus on gut microbiota balances in the same individual and between individuals and highlight the close mutualistic relationship between gut microbiota variations and diseases. Indeed, dysbiosis of gut microbiota is associated not only with intestinal disorders but also with numerous extra-intestinal diseases such as metabolic and neurological disorders. Understanding the cause or consequence of these gut microbiota balances in health and disease and how to maintain or restore a healthy gut microbiota composition should be useful in developing promising therapeutic interventions.

Proceedings ArticleDOI
01 Jun 2018
TL;DR: The Dataset for Object Detection in Aerial Images (DOTA) as discussed by the authors is a large-scale dataset of aerial images collected from different sensors and platforms and contains objects exhibiting a wide variety of scales, orientations, and shapes.
Abstract: Object detection is an important and challenging problem in computer vision. Although the past decade has witnessed major advances in object detection in natural scenes, such successes have been slow to transfer to aerial imagery, not only because of the huge variation in the scale, orientation, and shape of the object instances on the earth's surface, but also due to the scarcity of well-annotated datasets of objects in aerial scenes. To advance object detection research in Earth Vision, also known as Earth Observation and Remote Sensing, we introduce a large-scale Dataset for Object deTection in Aerial images (DOTA). To this end, we collect 2806 aerial images from different sensors and platforms. Each image is about 4000 × 4000 pixels in size and contains objects exhibiting a wide variety of scales, orientations, and shapes. These DOTA images are then annotated by experts in aerial image interpretation using 15 common object categories. The fully annotated DOTA images contain 188,282 instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral. To build a baseline for object detection in Earth Vision, we evaluate state-of-the-art object detection algorithms on DOTA. Experiments demonstrate that DOTA well represents real Earth Vision applications and is quite challenging.
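The 8-d.o.f. quadrilateral labels described in this abstract can be handled with a few lines of plain Python. The line layout assumed below (eight corner coordinates, a category name, and a difficulty flag) follows the dataset's commonly documented plain-text annotation format; it is a hypothetical sketch, not code from the paper:

```python
def parse_annotation(line):
    """Parse one DOTA-style annotation line (format assumed):
    x1 y1 x2 y2 x3 y3 x4 y4 category difficult
    Returns the four corner points, the category, and the difficulty flag."""
    parts = line.split()
    coords = list(map(float, parts[:8]))
    quad = list(zip(coords[0::2], coords[1::2]))  # [(x1, y1), ..., (x4, y4)]
    category, difficult = parts[8], int(parts[9])
    return quad, category, difficult

def quad_to_aabb(quad):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a quadrilateral,
    useful for feeding detectors that only accept horizontal boxes."""
    xs, ys = zip(*quad)
    return min(xs), min(ys), max(xs), max(ys)
```

Collapsing a quadrilateral to its axis-aligned box, as `quad_to_aabb` does, discards the orientation information that makes DOTA distinctive; detectors that regress the full 8-d.o.f. quadrilateral avoid that loss.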