
Journal ArticleDOI
TL;DR: Among patients with HER2‐positive early breast cancer who had residual invasive disease after completion of neoadjuvant therapy, the risk of recurrence of invasive breast cancer or death was 50% lower with adjuvant T‐DM1 than with trastuzumab alone.
Abstract: Background Patients who have residual invasive breast cancer after receiving neoadjuvant chemotherapy plus human epidermal growth factor receptor 2 (HER2)–targeted therapy have a worse pro...

1,365 citations


Journal ArticleDOI
TL;DR: Fuzzy dark matter (FDM), as discussed by the authors, is an alternative to CDM in which the dark matter is an extremely light boson whose de Broglie wavelength inside a galaxy is of order a kiloparsec.
Abstract: Many aspects of the large-scale structure of the Universe can be described successfully using cosmological models in which 27 ± 1% of the critical mass-energy density consists of cold dark matter (CDM). However, few, if any, of the predictions of CDM models have been successful on scales of ~10 kpc or less. This lack of success is usually explained by the difficulty of modeling baryonic physics (star formation, supernova and black-hole feedback, etc.). An intriguing alternative to CDM is that the dark matter is an extremely light (m ~ 10^-22 eV) boson having a de Broglie wavelength λ ~ 1 kpc, often called fuzzy dark matter (FDM). We describe the arguments from particle physics that motivate FDM, review previous work on its astrophysical signatures, and analyze several unexplored aspects of its behavior. In particular, (i) FDM halos or subhalos smaller than about 10^7 (m/10^-22 eV)^(-3/2) M_sun do not form, and the abundance of halos smaller than a few times 10^10 (m/10^-22 eV)^(-4/3) M_sun is substantially smaller in FDM than in CDM. (ii) FDM halos are comprised of a central core that is a stationary, minimum-energy solution of the Schrödinger-Poisson equation, sometimes called a "soliton", surrounded by an envelope that resembles a CDM halo. The soliton can produce a distinct signature in the rotation curves of FDM-dominated systems. (iii) The transition between soliton and envelope is determined by a relaxation process analogous to two-body relaxation in gravitating N-body systems, which proceeds as if the halo were composed of particles with mass ~ρλ³, where ρ is the halo density. (iv) Relaxation may have substantial effects on the stellar disk and bulge in the inner parts of disk galaxies, but has negligible effect on disk thickening or globular cluster disruption near the solar radius. (v) Relaxation can produce FDM disks, but an FDM disk in the solar neighborhood must have a half-thickness of at least ~300 (m/10^-22 eV)^(-2/3) pc and a midplane density less than 0.2 (m/10^-22 eV)^(2/3) times the baryonic disk density. (vi) Solitonic FDM subhalos evaporate by tunneling through the tidal radius, and this limits the minimum subhalo mass inside ~30 kpc of the Milky Way to a few times 10^8 (m/10^-22 eV)^(-3/2) M_sun. (vii) If the dark matter in the Fornax dwarf galaxy is composed of CDM, most of the globular clusters observed in that galaxy should have long ago spiraled to its center; this problem is resolved if the dark matter is FDM. (viii) FDM delays galaxy formation relative to CDM, but its galaxy-formation history is consistent with current observations of high-redshift galaxies and the late reionization observed by Planck.
If the dark matter is composed of FDM, most observations favor a particle mass ≳ 10^-22 eV, and the most significant observational consequences occur if the mass is in the range 1-10 × 10^-22 eV. There is tension with observations of the Lyman-α forest, which favor m ≳ 10-20 × 10^-22 eV, and we discuss whether more sophisticated models of reionization may resolve this tension.
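
As a rough back-of-the-envelope illustration of the wavelength scale quoted above (not a calculation from the paper), the de Broglie wavelength of a 10^-22 eV boson moving at an assumed typical galactic velocity of ~100 km/s comes out near a kiloparsec:

# Rough illustration of the λ ~ 1 kpc scaling quoted above; the 100 km/s
# velocity is an assumed typical galactic value, not a figure from the paper.
h   = 6.626e-34      # Planck constant, J*s
eV  = 1.602e-19      # J per eV
c   = 2.998e8        # speed of light, m/s
kpc = 3.086e19       # m

def de_broglie_kpc(m_eV, v_kms=100.0):
    """lambda = h / (m v) for a boson of mass m_eV (in eV/c^2) moving at v_kms."""
    m = m_eV * eV / c**2               # rest mass in kg
    return h / (m * v_kms * 1e3) / kpc

print(de_broglie_kpc(1e-22))           # ~1.2, i.e. roughly the ~1 kpc scale above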

1,365 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to review recently published papers on reverse logistics and closed-loop supply chains in scientific journals, identify gaps in the literature, and suggest future research opportunities.

1,364 citations


Journal ArticleDOI
TL;DR: OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility, which makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.
Abstract: OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.
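
As a minimal sketch of the extensibility described above, OpenMM's Python API lets a new bonded force be defined by nothing more than an algebraic expression; the particle indices, masses, and parameter values below are made up for illustration (recent versions import as openmm, older releases used simtk.openmm):

# A harmonic bond defined purely by a mathematical expression string.
# Parameter values and particle indices are illustrative only.
from openmm import CustomBondForce, System

force = CustomBondForce("0.5*k*(r-r0)^2")
force.addPerBondParameter("k")         # stiffness, kJ/mol/nm^2
force.addPerBondParameter("r0")        # rest length, nm
force.addBond(0, 1, [1000.0, 0.15])    # apply between particles 0 and 1

system = System()
system.addParticle(12.0)               # masses in amu
system.addParticle(12.0)
system.addForce(force)                 # runs unchanged on CPU or GPU platforms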

1,364 citations


Journal ArticleDOI
TL;DR: In this paper, stable magnetic skyrmions were observed at room temperature in ultrathin transition-metal ferromagnets using magnetic transmission soft X-ray microscopy, and the generation of skyrmion lattices and current-driven motion of individual skyrmions were demonstrated.
Abstract: Magnetic skyrmions are topologically protected spin textures that exhibit fascinating physical behaviours and large potential in highly energy-efficient spintronic device applications. The main obstacles so far are that skyrmions have been observed in only a few exotic materials and at low temperatures, and fast current-driven motion of individual skyrmions has not yet been achieved. Here, we report the observation of stable magnetic skyrmions at room temperature in ultrathin transition metal ferromagnets with magnetic transmission soft X-ray microscopy. We demonstrate the ability to generate stable skyrmion lattices and drive trains of individual skyrmions by short current pulses along a magnetic racetrack at speeds exceeding 100 m s(-1) as required for applications. Our findings provide experimental evidence of recent predictions and open the door to room-temperature skyrmion spintronics in robust thin-film heterostructures.

1,364 citations


Journal ArticleDOI
TL;DR: Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it to obtain the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
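
A rough sketch of the pipeline the abstract describes is below; a plain Gaussian blur stands in for the paper's structure-aware refinement of the illumination map, and the gamma value is an arbitrary illustrative choice:

# Rough sketch of the LIME-style pipeline described above (not the paper's
# exact refinement step; the Gaussian blur and gamma are placeholders).
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_low_light(img, gamma=0.8, eps=1e-3):
    """img: float array in [0, 1] with shape (H, W, 3)."""
    illum = img.max(axis=2)                       # initial map: max of R, G, B per pixel
    illum = gaussian_filter(illum, sigma=3)       # crude stand-in for the structure prior
    illum = np.clip(illum, eps, 1.0) ** gamma     # optional gamma adjustment
    return np.clip(img / illum[..., None], 0.0, 1.0)

bright = enhance_low_light(np.random.rand(64, 64, 3) * 0.2)   # toy dark image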

1,364 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper shows how to learn directly from image data a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems, and opts for a CNN-based model that is trained to account for a wide variety of changes in image appearance.
Abstract: In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.
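
The following toy model illustrates the general idea of a learned patch-similarity function (a shared convolutional branch feeding a small decision head); the architecture, patch size, and channel counts are illustrative and are not the networks studied in the paper:

# Toy siamese-style patch-similarity network; sizes are illustrative only.
import torch
import torch.nn as nn

class PatchSimilarity(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared branch for both patches
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                    # decision network on the pair
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, p1, p2):
        f = torch.cat([self.encoder(p1), self.encoder(p2)], dim=1)
        return self.head(f)                           # higher score = more similar

net = PatchSimilarity()
score = net(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))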

1,364 citations


Journal ArticleDOI
Guanghai Wang1, Yun Ting Zhang1, Jin Zhao1, Jun Zhang1, Fan Jiang1 
TL;DR: It is the responsibility and keen interest of all stakeholders, from governments to parents, to ensure that the physical and mental impacts of the COVID-19 epidemic on children and adolescents are kept minimal.

1,363 citations



01 Jan 2015
TL;DR: This bulletin summarizes a multisite study of juvenile drug courts that examined the ability of these courts to reduce recidivism and improve youths' social functioning, and determined whether these programs use evidence-based practices in their treatment services.
Abstract: As an alternative to traditional juvenile courts, juvenile drug courts attempt to provide substance abuse treatment, sanctions, and incentives to rehabilitate nonviolent drug-involved youth, empower families to support them in this process, and prevent recidivism. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) sponsored a multisite study of juvenile drug courts to examine the ability of these courts to reduce recidivism and improve youth’s social functioning, and to determine whether these programs use evidence-based practices in their treatment services. This bulletin provides an overview of the findings.

1,363 citations


Posted Content
TL;DR: This work presents DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features and outperforms the current state of the art by a significant margin on all the standard benchmarks.
Abstract: Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.
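
A compressed sketch of one DeepCluster-style epoch is given below, under the simplifying assumptions that the data loader yields unlabeled image batches and that details such as PCA-whitening of features, cluster re-balancing, and per-epoch reinitialization of the classifier are omitted:

# Compressed sketch of the alternation described above: cluster the current
# features with k-means, then train on the assignments as pseudo-labels.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def deepcluster_epoch(backbone, classifier, loader, k=10, lr=0.01):
    feats, imgs = [], []
    with torch.no_grad():                                  # 1) extract features
        for x in loader:
            imgs.append(x)
            feats.append(backbone(x).flatten(1))
    feats = torch.cat(feats).numpy()
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)   # 2) cluster
    labels = torch.as_tensor(labels)

    opt = torch.optim.SGD(list(backbone.parameters()) +
                          list(classifier.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    batch_sizes = [len(b) for b in imgs]
    for x, y in zip(imgs, labels.split(batch_sizes)):      # 3) train on pseudo-labels
        opt.zero_grad()
        loss = loss_fn(classifier(backbone(x).flatten(1)), y)
        loss.backward()
        opt.step()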

Journal ArticleDOI
TL;DR: Research Design: Qualitative, Quantitative and Mixed Methods Approaches by Creswell (2014), as discussed in this review, covers three approaches (qualitative, quantitative, and mixed methods); the book is informative and illustrative and is equally beneficial for students, teachers, and researchers.
Abstract: The book Research Design: Qualitative, Quantitative and Mixed Methods Approaches by Creswell (2014) covers three approaches: qualitative, quantitative, and mixed methods. This educational book is informative and illustrative and is equally beneficial for students, teachers, and researchers. Readers should have basic knowledge of research for a better understanding of this book. The book has two parts. Part I (chapters 1-4) covers the steps for developing a research proposal, and Part II (chapters 5-10) explains how to develop a research proposal or write a research report. A summary is given at the end of every chapter that helps the reader to recapitulate the ideas. Moreover, writing exercises and suggested readings at the end of every chapter are useful for the readers.

Journal ArticleDOI
TL;DR: It is demonstrated that a minor adjustment to the 806R primer will greatly increase detection of the globally abundant SAR11 clade in marine and lake environments, and enable inclusion of this important bacterial lineage in experimental and environmental-based studies.
Abstract: High-throughput sequencing of small subunit ribosomal RNA (SSU rRNA) genes from marine environments is a widely applied method used to uncover the composition of microbial communities. We conducted an analysis of surface ocean waters with the commonly employed hypervariable 4 region SSU rRNA gene primers 515F and 806R, and found that bacteria belonging to the SAR11 clade of Alphaproteobacteria, a group typically making up 20 to 40% of the bacterioplankton in this environment, were greatly underrepresented and comprised <4% of the total community. Using the SILVA reference database, we found a single nucleotide mismatch to nearly all SAR11 subclades, and revised the 806R primer so that it increased the detection of SAR11 clade sequences in the database from 2.6 to 96.7%. We then compared the performance of the original and revised 806R primers in surface seawater samples, and found that SAR11 comprised 0.3 to 3.9% of sequences with the original primers and 17.5 to 30.5% of the sequences with the revised 806R primer. Furthermore, an investigation of seawater obtained from aquaria revealed that SAR11 sequences acquired with the revised 806R primer were more similar to natural cellular abundances of SAR11 detected using fluorescence in situ hybridization counts. Collectively, these results demonstrate that a minor adjustment to the 806R primer will greatly increase detection of the globally abundant SAR11 clade in marine and lake environments, and enable inclusion of this important bacterial lineage in experimental and environmental-based studies.

Posted Content
TL;DR: An overview of existing work in this field of research is provided and neural architecture search methods are categorized according to three dimensions: search space, search strategy, and performance estimation strategy.
Abstract: Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.
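
To make the three dimensions concrete, here is a deliberately schematic random-search loop; the search-space entries and the performance estimator are placeholders, not anything proposed in the survey:

# Schematic loop over the three NAS dimensions named above: a search space,
# a search strategy (random sampling here), and a performance estimator
# (a placeholder that would normally train and evaluate the candidate).
import random

SEARCH_SPACE = {
    "depth": [4, 8, 12, 16],
    "width": [32, 64, 128],
    "op":    ["conv3x3", "conv5x5", "depthwise", "identity"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_performance(arch):
    # Placeholder: in practice, train the candidate (or a cheap proxy)
    # and return its validation accuracy.
    return random.random()

best = max((sample_architecture() for _ in range(20)), key=estimate_performance)
print(best)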

Journal ArticleDOI
21 Mar 2018-Nature
TL;DR: A unified framework for image reconstruction—automated transform by manifold approximation (AUTOMAP)—which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data is presented.
Abstract: Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction-automated transform by manifold approximation (AUTOMAP)-which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
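
A toy version of the "learn the sensor-to-image transform from data" idea follows: fully connected layers map a flattened two-channel (real/imaginary) sensor-domain input to an image estimate, followed by convolutional refinement. The layer sizes and structure are invented for illustration and this is not the published AUTOMAP architecture or training setup:

# Toy learned-reconstruction network; sizes and layers are illustrative only.
import torch
import torch.nn as nn

class LearnedReconstruction(nn.Module):
    def __init__(self, n=64):
        super().__init__()
        self.n = n
        self.fc = nn.Sequential(
            nn.Linear(2 * n * n, n * n), nn.Tanh(),   # 2 channels: real/imag sensor data
            nn.Linear(n * n, n * n), nn.Tanh())
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, sensor):                         # sensor: (B, 2, n, n)
        x = self.fc(sensor.flatten(1)).view(-1, 1, self.n, self.n)
        return self.refine(x)

net = LearnedReconstruction()
img = net(torch.randn(4, 2, 64, 64))                   # -> (4, 1, 64, 64)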

Book ChapterDOI
08 Sep 2018
TL;DR: DeepCluster as discussed by the authors is a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features, and uses the subsequent assignments as supervision to update the weights of the network.
Abstract: Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.

Journal Article
TL;DR: A conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks, is provided.
Abstract: Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
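
As one concrete example of the graph neural network building blocks the review covers, a single graph-convolution layer H' = σ(D^-1/2 (A+I) D^-1/2 H W) can be written in a few lines; the toy graph and dimensions below are arbitrary:

# One generic graph-convolution layer; the 3-node graph is a toy example.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)                 # ReLU nonlinearity

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # path graph
H = np.random.randn(3, 8)                                      # node features
W = np.random.randn(8, 4)                                      # layer weights
print(gcn_layer(A, H, W).shape)                                # (3, 4)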

Posted Content
TL;DR: Residual Attention Network, as discussed by the authors, is a convolutional neural network using an attention mechanism that can be incorporated with state-of-the-art feed-forward network architectures in an end-to-end training fashion.
Abstract: In this work, we propose "Residual Attention Network", a convolutional neural network using an attention mechanism which can be incorporated with state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that our method achieves a 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels.
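
A minimal stand-in for an Attention Module with attention residual learning, H(x) = (1 + M(x)) * F(x), where M is a soft mask branch and F the trunk branch; the real modules use a much deeper bottom-up/top-down mask, so treat this as schematic:

# Schematic Attention Module with attention residual learning.
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.mask = nn.Sequential(                     # soft mask in [0, 1]
            nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        trunk_feat = self.trunk(x)
        mask = self.mask(x)
        return (1 + mask) * trunk_feat                 # attention residual learning

y = AttentionModule(16)(torch.randn(2, 16, 32, 32))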

Journal ArticleDOI
TL;DR: This review focuses on studies in humans to describe challenges and propose strategies that leverage existing knowledge to move rapidly from correlation to causation and ultimately to translation into therapies.
Abstract: Our understanding of the link between the human microbiome and disease, including obesity, inflammatory bowel disease, arthritis and autism, is rapidly expanding. Improvements in the throughput and accuracy of DNA sequencing of the genomes of microbial communities that are associated with human samples, complemented by analysis of transcriptomes, proteomes, metabolomes and immunomes and by mechanistic experiments in model systems, have vastly improved our ability to understand the structure and function of the microbiome in both diseased and healthy states. However, many challenges remain. In this review, we focus on studies in humans to describe these challenges and propose strategies that leverage existing knowledge to move rapidly from correlation to causation and ultimately to translation into therapies.

Journal ArticleDOI
TL;DR: Nivolumab was associated with few side effects, did not delay surgery, and induced a major pathological response in 45% of resected tumors, and the tumor mutational burden was predictive of the pathological response to PD‐1 blockade.
Abstract: Background Antibodies that block programmed death 1 (PD-1) protein improve survival in patients with advanced non–small-cell lung cancer (NSCLC) but have not been tested in resectable NSCLC, a condition in which little progress has been made during the past decade. Methods In this pilot study, we administered two preoperative doses of PD-1 inhibitor nivolumab in adults with untreated, surgically resectable early (stage I, II, or IIIA) NSCLC. Nivolumab (at a dose of 3 mg per kilogram of body weight) was administered intravenously every 2 weeks, with surgery planned approximately 4 weeks after the first dose. The primary end points of the study were safety and feasibility. We also evaluated the tumor pathological response, expression of programmed death ligand 1 (PD-L1), mutational burden, and mutation-associated, neoantigen-specific T-cell responses. Results Neoadjuvant nivolumab had an acceptable side-effect profile and was not associated with delays in surgery. Of the 21 tumors that were removed...

Proceedings ArticleDOI
12 Mar 2018
TL;DR: DUC is designed to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling, and a hybrid dilated convolution (HDC) framework in the encoding phase is proposed.
Abstract: Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-the-art result of 80.1% mIOU on the test set at the time of submission. We have also achieved state-of-the-art results overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https://github.com/TuSimple/TuSimple-DUC.
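
A sketch of both operations under illustrative channel counts: DUC predicts d*d*num_classes channels at low resolution and rearranges them into a full-resolution prediction (a pixel shuffle), while HDC cycles dilation rates so that stacked dilated convolutions do not all sample the same sparse grid:

# Sketch of DUC (pixel-shuffle upsampling of class scores) and HDC (cycled
# dilation rates); channel counts and rates are illustrative only.
import torch
import torch.nn as nn

num_classes, d = 19, 8
duc = nn.Sequential(
    nn.Conv2d(512, num_classes * d * d, 1),
    nn.PixelShuffle(d))                    # (B, C*d*d, H, W) -> (B, C, H*d, W*d)

hdc = nn.Sequential(*[
    nn.Conv2d(512, 512, 3, padding=r, dilation=r)   # cycled rates avoid "gridding"
    for r in (1, 2, 5, 1, 2, 5)])

x = torch.randn(1, 512, 32, 32)            # e.g. a 1/8-resolution feature map
print(duc(hdc(x)).shape)                   # torch.Size([1, 19, 256, 256])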

Proceedings ArticleDOI
18 Jun 2018
TL;DR: Deep Ordinal Regression Network (DORN), as discussed by the authors, discretizes depth and recasts depth network learning as an ordinal regression problem; training the network with an ordinal regression loss achieves both much higher accuracy and faster convergence.
Abstract: Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.
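
The spacing-increasing discretization amounts to placing bin edges uniformly in log depth between a minimum and maximum depth, so bins widen with distance; the alpha, beta, and K values below are illustrative rather than the paper's settings:

# Spacing-increasing discretization (SID): bin edges uniform in log depth.
import numpy as np

def sid_thresholds(alpha=1.0, beta=80.0, K=10):
    i = np.arange(K + 1)
    return np.exp(np.log(alpha) + i * np.log(beta / alpha) / K)

edges = sid_thresholds()
print(np.round(edges, 2))
# e.g. [ 1.    1.55  2.4   3.72  5.77  8.94 13.86 21.48 33.3  51.62 80.  ]
labels = np.digitize([2.0, 15.0, 60.0], edges) - 1   # ordinal bin index per depth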

Posted ContentDOI
09 Feb 2020-medRxiv
TL;DR: The 2019-nCoV epidemic spreads rapidly by human-to-human transmission, and the disease severity (including oxygen saturation, respiratory rate, blood leukocyte/lymphocyte count and chest X-ray/CT manifestations) predicts poor clinical outcomes.
Abstract: Background Since December 2019, acute respiratory disease (ARD) due to 2019 novel coronavirus (2019-nCoV) emerged in Wuhan city and rapidly spread throughout China. We sought to delineate the clinical characteristics of these cases. Methods We extracted the data on 1,099 patients with laboratory-confirmed 2019-nCoV ARD from 552 hospitals in 31 provinces/provincial municipalities through January 29th, 2020. Results The median age was 47.0 years, and 41.90% were females. Only 1.18% of patients had direct contact with wildlife, whereas 31.30% had been to Wuhan and 71.80% had been in contact with people from Wuhan. Fever (87.9%) and cough (67.7%) were the most common symptoms. Diarrhea was uncommon. The median incubation period was 3.0 days (range, 0 to 24.0 days). On admission, ground-glass opacity was the typical radiological finding on chest computed tomography (50.00%). Significantly more severe cases were diagnosed by symptoms plus reverse-transcriptase polymerase-chain-reaction without abnormal radiological findings than non-severe cases (23.87% vs. 5.20%, P ...). Conclusions The 2019-nCoV epidemic spreads rapidly by human-to-human transmission. Normal radiologic findings are present among some patients with 2019-nCoV infection. The disease severity (including oxygen saturation, respiratory rate, blood leukocyte/lymphocyte count and chest X-ray/CT manifestations) predicts poor clinical outcomes.

Journal ArticleDOI
28 Jan 2016-Cell
TL;DR: It is often presented as common knowledge that bacteria outnumber human cells by a ratio of at least 10:1, but it is found that the ratio is much closer to 1:1.

Journal ArticleDOI
TL;DR: VASPKIT as mentioned in this paper is a command-line program that aims at providing a robust and user-friendly interface to perform high-throughput analysis of a variety of material properties from the raw data produced by the VASP code.

Journal ArticleDOI
TL;DR: Treatment with alpelisib–fulvestrant prolonged progression‐free survival among patients with PIK3CA‐mutated, HR‐positive, HER2‐negative advanced breast cancer who had received endocrine therapy previously.
Abstract: Background PIK3CA mutations occur in approximately 40% of patients with hormone receptor (HR)–positive, human epidermal growth factor receptor 2 (HER2)–negative breast cancer. The PI3Kα-sp...

Journal ArticleDOI
TL;DR: In this observational study involving patients with Covid-19 who had been admitted to the hospital, hydroxychloroquine administration was not associated with either a greatly lowered or an increased risk of the composite end point of intubation or death.
Abstract: Background Hydroxychloroquine has been widely administered to patients with Covid-19 without robust evidence supporting its use. Methods We examined the association between hydroxychloroqu...

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper used multiple instance learning to train visual detectors for words that commonly occur in captions, covering many different parts of speech such as nouns, verbs, and adjectives; the word detector outputs serve as conditional inputs to a maximum-entropy language model.
Abstract: This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.
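
As a small sketch of the multiple-instance-learning step, per-region word probabilities can be combined with a noisy-OR so that a word fires if any region supports it; the noisy-OR form is a common choice in this line of work, and the region scores below are random placeholders rather than real detector outputs:

# Noisy-OR combination of per-region word probabilities (illustrative values).
import numpy as np

def noisy_or(p_regions):
    """p_regions: (num_regions, vocab) per-region word probabilities."""
    return 1.0 - np.prod(1.0 - p_regions, axis=0)    # (vocab,) image-level probs

p_regions = np.random.rand(12, 5) * 0.3              # 12 regions, 5-word toy vocabulary
p_image = noisy_or(p_regions)                        # would condition the language model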

Journal ArticleDOI
31 Jul 2015-Science
TL;DR: The current understanding of crystallization by particle attachment (CPA) is described, some of the nonclassical thermodynamic and dynamic mechanisms known to give rise to experimentally observed pathways are examined, and the challenges to the understanding of these mechanisms are highlighted.
Abstract: Field and laboratory observations show that crystals commonly form by the addition and attachment of particles that range from multi-ion complexes to fully formed nanoparticles. The particles involved in these nonclassical pathways to crystallization are diverse, in contrast to classical models that consider only the addition of monomeric chemical species. We review progress toward understanding crystal growth by particle-attachment processes and show that multiple pathways result from the interplay of free-energy landscapes and reaction dynamics. Much remains unknown about the fundamental aspects, particularly the relationships between solution structure, interfacial forces, and particle motion. Developing a predictive description that connects molecular details to ensemble behavior will require revisiting long-standing interpretations of crystal formation in synthetic systems, biominerals, and patterns of mineralization in natural environments.

Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.