scispace - formally typeset

Journal ArticleDOI
12 Jan 2017-eLife
TL;DR: Skene and Henikoff as discussed by the authors developed a new method, called CUT&RUN, in which protein-DNA interactions are more likely to be maintained in their natural state and which can be used to more accurately identify where transcription factors bind to DNA in yeast and human cells.
Abstract: The DNA in a person’s skin cell will contain the same genes as the DNA in their muscle or brain cells. However, these cells have different identities because different genes are active in skin, muscle and brain cells. Proteins called transcription factors dictate the patterns of gene activation in the different kinds of cells by binding to DNA and switching nearby genes on or off. Transcription factors interact with other proteins such as histones that help to package DNA into a structure known as chromatin. Together, transcription factors, histones and other chromatin-associated proteins determine whether or not nearby genes are active. Sometimes transcription factors and other chromatin-associated proteins bind to the wrong sites on DNA; this situation can lead to diseases in humans, such as cancer. This is one of the many reasons why researchers are interested in working out where specific DNA-binding proteins are located in different situations. A technique called chromatin immunoprecipitation (or ChIP for short) can be used to achieve this goal, yet despite being one of the most widely used techniques in molecular biology, ChIP is hampered by numerous problems. As such, many researchers are keen to find alternative approaches. Skene and Henikoff have now developed a new method, called CUT&RUN; this means that protein-DNA interactions are more likely to be maintained in their natural state. With CUT&RUN, as in ChIP, a specific antibody identifies the protein of interest. But in CUT&RUN, this antibody binds to the target protein in intact cells and cuts out the DNA that the protein is bound to, releasing the DNA fragment from the cell. This new strategy allows the DNA fragments to be sequenced and identified more efficiently than is currently possible with ChIP. Skene and Henikoff showed that their new method could more accurately identify where transcription factors bind to DNA from yeast and human cells.
CUT&RUN also identified a specific histone that is rarely found in yeast chromatin and the technique can be used with a small number of starting cells. Given the advantages that CUT&RUN offers over ChIP, Skene and Henikoff anticipate that the method will be viewed as a cost-effective and versatile alternative to ChIP. In future, the method could be automated so that multiple analyses can be performed at once.

938 citations


Journal ArticleDOI
TL;DR: Current knowledge and emerging mechanisms governing oral polymicrobial synergy and dysbiosis that have both enhanced the understanding of pathogenic mechanisms and aided the design of innovative therapeutic approaches for oral diseases are discussed.
Abstract: The dynamic and polymicrobial oral microbiome is a direct precursor of diseases such as dental caries and periodontitis, two of the most prevalent microbially induced disorders worldwide. Distinct microenvironments at oral barriers harbour unique microbial communities, which are regulated through sophisticated signalling systems and by host and environmental factors. The collective function of microbial communities is a major driver of homeostasis or dysbiosis and ultimately health or disease. Despite different aetiologies, periodontitis and caries are each driven by a feedforward loop between the microbiota and host factors (inflammation and dietary sugars, respectively) that favours the emergence and persistence of dysbiosis. In this Review, we discuss current knowledge and emerging mechanisms governing oral polymicrobial synergy and dysbiosis that have both enhanced our understanding of pathogenic mechanisms and aided the design of innovative therapeutic approaches for oral diseases.

938 citations


Journal ArticleDOI
TL;DR: A novel strategy to design HEAs using the eutectic alloy concept, i.e. to achieve a microstructure composed of alternating soft fcc and hard bcc phases is proposed, which can be readily adapted to large-scale industrial production of HEAs with simultaneous high fracture strength and high ductility.
Abstract: High-entropy alloys (HEAs) can have either high strength or high ductility, and a simultaneous achievement of both still constitutes a tough challenge. The inferior castability and compositional segregation of HEAs are also obstacles for their technological applications. To tackle these problems, here we proposed a novel strategy to design HEAs using the eutectic alloy concept, i.e. to achieve a microstructure composed of alternating soft fcc and hard bcc phases. As a manifestation of this concept, an AlCoCrFeNi2.1 (atomic portion) eutectic high-entropy alloy (EHEA) was designed. The as-cast EHEA possessed a fine lamellar fcc/B2 microstructure, and showed an unprecedented combination of high tensile ductility and high fracture strength at room temperature. The excellent mechanical properties could be kept up to 700°C. This new alloy design strategy can be readily adapted to large-scale industrial production of HEAs with simultaneous high fracture strength and high ductility.

938 citations


Journal ArticleDOI
TL;DR: In 1985 when a group of experts convened by the World Health Organization in Fortaleza, Brazil, met to discuss the appropriate technology for birth, they echoed what was considered an unjustified and remarkable increase of caesarean section rates worldwide.

938 citations


Journal ArticleDOI
TL;DR: CP2K as discussed by the authors is an open source electronic structure and molecular dynamics software package to perform atomistic simulations of solid-state, liquid, molecular, and biological systems, especially aimed at massively parallel and linear-scaling electronic structure methods and state-of-the-art ab initio molecular dynamics simulations.
Abstract: CP2K is an open source electronic structure and molecular dynamics software package to perform atomistic simulations of solid-state, liquid, molecular, and biological systems. It is especially aimed at massively parallel and linear-scaling electronic structure methods and state-of-the-art ab initio molecular dynamics simulations. Excellent performance for electronic structure calculations is achieved using novel algorithms implemented for modern high-performance computing systems. This review revisits the main capabilities of CP2K to perform efficient and accurate electronic structure simulations. The emphasis is put on density functional theory and multiple post–Hartree–Fock methods using the Gaussian and plane wave approach and its augmented all-electron extension.

938 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a current-induced spin Hall spin torque to demonstrate the skyrmion Hall effect, and the resultant skyrmion accumulation, by driving skyrmions from the creep-motion regime (where their dynamics are influenced by pinning defects) into the steady-flow-motion regime.
Abstract: The well-known Hall effect describes the transverse deflection of charged particles (electrons/holes) as a result of the Lorentz force. Similarly, it is intriguing to examine if quasi-particles without an electric charge, but with a topological charge, show related transverse motion. Magnetic skyrmions with a well-defined spin texture with a unit topological charge serve as good candidates to test this hypothesis. In spite of the recent progress made on investigating magnetic skyrmions, direct observation of the skyrmion Hall effect has remained elusive. Here, by using a current-induced spin Hall spin torque, we experimentally demonstrate the skyrmion Hall effect, and the resultant skyrmion accumulation, by driving skyrmions from the creep-motion regime (where their dynamics are influenced by pinning defects) into the steady-flow-motion regime. The experimental observation of transverse transport of skyrmions due to topological charge may potentially create many exciting opportunities, such as topological selection. Experiments show that when driven by electric currents, magnetic skyrmions experience transverse motion due to their topological charge — similar to the conventional Hall effect experienced by charged particles in a perpendicular magnetic field.

938 citations


Journal ArticleDOI
TL;DR: This mini-review highlights the existing vitrimer systems in the period 2011–2015 with the main focus on their chemical origin.
Abstract: Most covalent adaptable networks give highly interesting properties for material processing such as reshaping, recycling and repairing. Classical thermally reversible chemical cross-links allow for a heat-triggered switch between materials that behave as insoluble cured resins, and liquid thermoplastic materials, through a fully reversible sol–gel transition. In 2011, a new class of materials, coined vitrimers, was introduced, which extended the realm of adaptable organic polymer networks. Such materials have the remarkable property that they can be thermally processed in a liquid state without losing network integrity. This feature renders the materials processable like vitreous glass, not requiring precise temperature control. In this mini-review, an overview of the state-of-the-art in the quickly emerging field of vitrimer materials is presented. With a main focus on the chemical origins of their unique thermal behavior, the existing chemical systems and their properties will be discussed. Furthermore, future prospects and challenges in this important research field are highlighted.

937 citations


Journal ArticleDOI
TL;DR: AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research in automated facial expression recognition in two different emotion models, and various evaluation metrics show that the deep neural network baselines can perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.
Abstract: Automated affective computing in the wild setting is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (aka the categorical model). There are very limited annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal). To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild (called AffectNet). AffectNet contains more than 1,000,000 facial images from the Internet by querying three major search engines using 1250 emotion related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild enabling research in automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines can perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.

937 citations


Book ChapterDOI
TL;DR: In this paper, the authors present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
Abstract: In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
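The gradient-based evasion the abstract describes can be sketched in a few lines. This is a hypothetical minimal version, not the paper's implementation: it assumes the attacker can query the gradient of a differentiable discriminant function g (positive = malicious) and that feasibility is a simple Euclidean ball around the original sample.

```python
import math

def gradient_evasion(x0, grad_g, step=0.1, steps=50, max_dist=1.0):
    """Sketch of gradient-descent evasion: repeatedly step the sample
    against the gradient of the discriminant g, then project back onto
    a ball of radius max_dist around the original sample x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_g(x)
        norm = math.sqrt(sum(v * v for v in g)) or 1.0
        x = [xi - step * gi / norm for xi, gi in zip(x, g)]
        # project back into the feasible ball around x0
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x0)))
        if d > max_dist:
            x = [b + (a - b) * max_dist / d for a, b in zip(x, x0)]
    return x
```

For a linear discriminant the attack simply walks the sample to the boundary of the feasible ball in the direction that most decreases g, which is the intuition behind the risk-level simulations in the paper.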

937 citations


Posted Content
TL;DR: This tutorial describes how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient, and provides a generalization of expected improvement to noisy evaluations beyond the noise-free setting where it is more commonly applied.
Abstract: Bayesian optimization is an approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. It is best-suited for optimization over continuous domains of less than 20 dimensions, and tolerates stochastic noise in function evaluations. It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample. In this tutorial, we describe how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient. We then discuss more advanced techniques, including running multiple function evaluations in parallel, multi-fidelity and multi-information source optimization, expensive-to-evaluate constraints, random environmental conditions, multi-task Bayesian optimization, and the inclusion of derivative information. We conclude with a discussion of Bayesian optimization software and future research directions in the field. Within our tutorial material we provide a generalization of expected improvement to noisy evaluations, beyond the noise-free setting where it is more commonly applied. This generalization is justified by a formal decision-theoretic argument, standing in contrast to previous ad hoc modifications.
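The surrogate-plus-acquisition loop hinges on closed-form acquisition functions such as expected improvement; a minimal sketch for the noise-free minimization case follows (the `xi` exploration offset is a common illustrative tweak, not part of the tutorial's core definition):

```python
import math

def _pdf(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):  # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best_f, xi=0.0):
    """Closed-form expected improvement for minimization, given the GP
    posterior mean mu and standard deviation sigma at a candidate point
    and the best function value observed so far, best_f."""
    improve = best_f - mu - xi
    if sigma <= 0.0:            # no uncertainty: improvement is deterministic
        return max(improve, 0.0)
    z = improve / sigma
    return improve * _cdf(z) + sigma * _pdf(z)
```

Each iteration of Bayesian optimization evaluates this acquisition over candidate points and samples the objective where it is largest, trading off low posterior mean against high posterior uncertainty.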

936 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a conceptually simple learning objective function, i.e., additive margin softmax, for face verification, whose margin is more intuitive and interpretable than the angular alternative.
Abstract: In this letter, we propose a conceptually simple and intuitive learning objective function, i.e., additive margin softmax, for face verification. In general, face verification tasks can be viewed as metric learning problems, even though lots of face verification models are trained in classification schemes. It is possible when a large-margin strategy is introduced into the classification model to encourage intraclass variance minimization. As one alternative, angular softmax has been proposed to incorporate the margin. In this letter, we introduce another kind of margin to the softmax loss function, which is more intuitive and interpretable. Experiments on LFW and MegaFace show that our algorithm performs better when the evaluation criteria are designed for very low false alarm rate.
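The additive margin described above can be sketched as a per-example loss; a minimal illustration, assuming L2-normalized features and class weights so that the logits are cosines (s=30 and m=0.35 are the settings the paper reports working well, but treat them as illustrative defaults here):

```python
import math

def am_softmax_loss(cosines, target, s=30.0, m=0.35):
    """Additive-margin softmax loss for one example. cosines[j] is
    cos(theta_j) between the normalized feature and the j-th class
    weight; the margin m is subtracted only from the target cosine
    before scaling by s."""
    logits = [s * (c - m) if j == target else s * c
              for j, c in enumerate(cosines)]
    mx = max(logits)                       # log-sum-exp stabilization
    log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
    return log_z - logits[target]          # = -log p(target)
```

Subtracting m from the target cosine forces the network to keep cos(theta_target) at least m larger than the competing cosines, which is the additive (rather than angular) margin the letter advocates.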

Journal ArticleDOI
TL;DR: It is demonstrated that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth.
Abstract: Nanoscale robots have potential as intelligent drug delivery systems that respond to molecular triggers. Using DNA origami we constructed an autonomous DNA robot programmed to transport payloads and present them specifically in tumors. Our nanorobot is functionalized on the outside with a DNA aptamer that binds nucleolin, a protein specifically expressed on tumor-associated endothelial cells, and the blood coagulation protease thrombin within its inner cavity. The nucleolin-targeting aptamer serves both as a targeting domain and as a molecular trigger for the mechanical opening of the DNA nanorobot. The thrombin inside is thus exposed and activates coagulation at the tumor site. Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth. The nanorobot proved safe and immunologically inert in mice and Bama miniature pigs. Our data show that DNA nanorobots represent a promising strategy for precise drug delivery in cancer therapy.

Journal ArticleDOI
TL;DR: The results of a collaborative integrated work which aims to characterize the trichothecene genotypes of strains from three Fusarium species, collected over the period 2000–2013 and to enhance the standardization of epidemiological data collection were described.
Abstract: Fusarium species, particularly Fusarium graminearum and F. culmorum, are the main cause of trichothecene type B contamination in cereals. Data on the distribution of Fusarium trichothecene genotypes in cereals in Europe are scattered in time and space. Furthermore, a common core set of related variables (sampling method, host cultivar, previous crop, etc.) that would allow more effective analysis of factors influencing the spatial and temporal population distribution, is lacking. Consequently, based on the available data, it is difficult to identify factors influencing chemotype distribution and spread at the European level. Here we describe the results of a collaborative integrated work which aims (1) to characterize the trichothecene genotypes of strains from three Fusarium species, collected over the period 2000–2013 and (2) to enhance the standardization of epidemiological data collection. Information on host plant, country of origin, sampling location, year of sampling and previous crop of 1147 F. graminearum, 479 F. culmorum, and 3 F. cortaderiae strains obtained from 17 European countries was compiled and a map of trichothecene type B genotype distribution was plotted for each species. All information on the strains was collected in a freely accessible and updatable database (www.catalogueeu.luxmcc.lu), which will serve as a starting point for epidemiological analysis of potential spatial and temporal trichothecene genotype shifts in Europe. The analysis of the currently available European dataset showed that in F. graminearum, the predominant genotype was 15-acetyldeoxynivalenol (15-ADON) (82.9%), followed by 3-acetyldeoxynivalenol (3-ADON) (13.6%), and nivalenol (NIV) (3.5%). In F. culmorum, the prevalent genotype was 3-ADON (59.9%), while the NIV genotype accounted for the remaining 40.1%. Both, geographical and temporal patterns of trichothecene genotypes distribution were identified.

Journal ArticleDOI
TL;DR: 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.
Abstract: The fifth generation (5G) wireless communication networks are being deployed worldwide from 2020 and more capabilities are in the process of being standardized, such as mass connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, better intelligence level and security, etc. To meet these requirements, 6G networks will rely on new enabling technologies, i.e., air interface and transmission technologies and novel network architecture, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing. Our vision on 6G is that it will have four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big datasets generated by the use of extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks. This article provides a comprehensive survey of recent advances and future trends in these four aspects. 
Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.

Journal ArticleDOI
TL;DR: The most recent data, besides confirming the mitochondrial role in tissue oxidative stress and protection, show interplay between mitochondria and other ROS cellular sources, so that activation of one can lead to activation of other sources.
Abstract: There is significant evidence that, in living systems, free radicals and other reactive oxygen and nitrogen species play a double role, because they can cause oxidative damage and tissue dysfunction and serve as molecular signals activating stress responses that are beneficial to the organism. Mitochondria have been thought to both play a major role in tissue oxidative damage and dysfunction and provide protection against excessive tissue dysfunction through several mechanisms, including stimulation of opening of permeability transition pores. Until recently, the functional significance of ROS sources different from mitochondria has received lesser attention. However, the most recent data, besides confirming the mitochondrial role in tissue oxidative stress and protection, show interplay between mitochondria and other ROS cellular sources, so that activation of one can lead to activation of other sources. Thus, it is currently accepted that in various conditions all cellular sources of ROS provide significant contribution to processes that oxidatively damage tissues and assure their survival, through mechanisms such as autophagy and apoptosis.

Journal ArticleDOI
TL;DR: This paper proposed three attention schemes that integrate mutual influence between sentences into CNNs, thus the representation of each sentence takes into consideration its counterpart, and achieved state-of-the-art performance on answer selection, paraphrase identification, and textual entailment.
Abstract: How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection.
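The mutual-influence idea at the heart of ABCNN is an attention matrix computed between the two sentences' feature maps; a minimal sketch using the Euclidean match-score from the paper (the toy word vectors here are purely illustrative):

```python
import math

def attention_matrix(s1, s2):
    """ABCNN-style attention matrix between two sentences.
    s1, s2: lists of word vectors (lists of floats).
    A[i][j] scores how well word i of s1 matches word j of s2,
    using the match-score 1 / (1 + Euclidean distance)."""
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return [[1.0 / (1.0 + dist(x, y)) for y in s2] for x in s1]
```

In the full model this matrix (or its row/column sums) reweights each sentence's convolution input or pooling, so each sentence's representation is conditioned on its counterpart.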

02 Feb 2017
TL;DR: The LISA Consortium as mentioned in this paper proposed a 4-year mission in response to ESA's call for missions for L3, which is an all-sky monitor and will offer a wide view of a dynamic cosmos using Gravitational Waves as new and unique messengers to unveil The Gravitational Universe.
Abstract: Following the selection of The Gravitational Universe by ESA, and the successful flight of LISA Pathfinder, the LISA Consortium now proposes a 4 year mission in response to ESA's call for missions for L3. The observatory will be based on three arms with six active laser links, between three identical spacecraft in a triangular formation separated by 2.5 million km. LISA is an all-sky monitor and will offer a wide view of a dynamic cosmos using Gravitational Waves as new and unique messengers to unveil The Gravitational Universe. It provides the closest ever view of the infant Universe at TeV energy scales, has known sources in the form of verification binaries in the Milky Way, and can probe the entire Universe, from its smallest scales near the horizons of black holes, all the way to cosmological scales. The LISA mission will scan the entire sky as it follows behind the Earth in its orbit, obtaining both polarisations of the Gravitational Waves simultaneously, and will measure source parameters with astrophysically relevant sensitivity in a band from below $10^{-4}\,$Hz to above $10^{-1}\,$Hz.

Journal ArticleDOI
TL;DR: An international multidisciplinary team of 29 members with expertise in guideline development, evidence analysis, and family-centered care is assembled to revise the 2007 Clinical Practice Guidelines for support of the family in the patient-centered ICU.
Abstract: Objective:To provide clinicians with evidence-based strategies to optimize the support of the family of critically ill patients in the ICU.Methods:We used the Council of Medical Specialty Societies principles for the development of clinical guidelines as the framework for guideline development. We a

Journal ArticleDOI
TL;DR: The authors' survey globally quantified the considerable shortage of corneal graft tissue, with only 1 cornea available for 70 needed, and efforts to encourage cornea donation must continue in all countries, but it is also essential to develop alternative and/or complementary solutions, such as corneal bioengineering.
Abstract: Importance Corneal transplantation restores visual function when visual impairment caused by a corneal disease becomes too severe. It is considered the world’s most frequent type of transplantation, but, to our knowledge, there are no exhaustive data allowing measurement of supply and demand, although such data are essential in defining local, national, and global strategies to fight corneal blindness. Objective To describe the worldwide situation of corneal transplantation supply and demand. Design, Setting, and Participants Data were collected between August 2012 and August 2013 from a systematic review of published literature in parallel with national and international reports on corneal transplantation and eye banking. In a second step, eye bank staff and/or corneal surgeons were interviewed on their local activities. Interviews were performed during international ophthalmology or eye-banking congresses or by telephone or email. Countries’ national supply/demand status was classified using a 7-grade system. Data were collected from 148 countries. Main Outcomes and Measures Corneal transplantation and corneal procurements per capita in each country. Results In 2012, we identified 184 576 corneal transplants performed in 116 countries. These were procured from 283 530 corneas and stored in 742 eye banks. The top indications were Fuchs dystrophy (39% of all corneal transplants performed), a primary corneal edema mostly affecting elderly individuals; keratoconus (27%), a corneal disease that slowly deforms the cornea in young people; and sequelae of infectious keratitis (20%). The United States, with 199 × 10^-6 corneal transplants per capita, had the highest transplantation rate, followed by Lebanon (122 × 10^-6) and Canada (117 × 10^-6), while the median of the 116 transplanting countries was 19 × 10^-6. Corneas were procured in only 82 countries. Only the United States and Sri Lanka exported large numbers of donor corneas.
About 53% of the world’s population had no access to corneal transplantation. Conclusions and Relevance Our survey globally quantified the considerable shortage of corneal graft tissue, with only 1 cornea available for 70 needed. Efforts to encourage cornea donation must continue in all countries, but it is also essential to develop alternative and/or complementary solutions, such as corneal bioengineering.

Posted Content
Chao Peng1, Xiangyu Zhang, Gang Yu, Guiming Luo1, Jian Sun 
TL;DR: In this paper, a Global Convolutional Network (GCN) is proposed to address both the classification and localization issues for the semantic segmentation, which achieves state-of-the-art performance on two public benchmarks.
Abstract: One of the recent trends [30, 31, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) in the entire network, because stacked small filters are more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results, 82.2% (vs 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs 71.8%) on the Cityscapes dataset.
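The GCN paper makes its large effective kernels affordable by building them from k×1 and 1×k convolution branches rather than a dense k×k filter; a back-of-the-envelope parameter count (biases ignored, a sketch rather than the paper's exact layer budget) shows why this scales:

```python
def conv_params(c_in, c_out, kh, kw):
    """Weight count of a single kh x kw convolution layer (no bias)."""
    return c_in * c_out * kh * kw

def gcn_branch_params(c, k):
    """A GCN block approximates a dense k x k kernel with two separable
    branches (k x 1 then 1 x k, and 1 x k then k x 1), so the cost grows
    linearly in k instead of quadratically."""
    one_branch = conv_params(c, c, k, 1) + conv_params(c, c, 1, k)
    return 2 * one_branch
```

At k = 15, the separable form needs 60·c² weights versus 225·c² for the dense kernel, which is what makes very large receptive fields practical for dense prediction.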

Posted Content
TL;DR: This work discusses core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration, and important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Abstract: We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.

MonographDOI
01 Jan 2017
TL;DR: This chapter explains why many real-world networks are small worlds and have large fluctuations in their degrees, why probability theory offers a highly effective way to deal with the complexity of networks, and how this leads us to consider random graphs.
Abstract: This rigorous introduction to network science presents random graphs as models for real-world networks. Such networks have distinctive empirical properties and a wealth of new models have emerged to capture them. Classroom tested for over ten years, this text places recent advances in a unified framework to enable systematic study. Designed for a master's-level course, where students may only have a basic background in probability, the text covers such important preliminaries as convergence of random variables, probabilistic bounds, coupling, martingales, and branching processes. Building on this base - and motivated by many examples of real-world networks, including the Internet, collaboration networks, and the World Wide Web - it focuses on several important models for complex networks and investigates key properties, such as the connectivity of nodes. Numerous exercises allow students to develop intuition and experience in working with the models.
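The "connectivity of nodes" property mentioned in this abstract can be explored empirically. The sketch below samples Erdős–Rényi graphs G(n, p) on either side of the classical connectivity threshold p ≈ ln(n)/n; the graph size and multipliers are illustrative choices:

```python
import math
import random
from collections import deque

def erdos_renyi(n, p, rng):
    """Sample an undirected G(n, p) graph as an adjacency list."""
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def is_connected(adj):
    """BFS from vertex 0; connected iff every vertex is reached."""
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

rng = random.Random(1)
n = 200
# Well above the ln(n)/n threshold: almost surely connected.
dense = is_connected(erdos_renyi(n, 3 * math.log(n) / n, rng))
# Well below the threshold: almost surely disconnected.
sparse = is_connected(erdos_renyi(n, 0.1 * math.log(n) / n, rng))
```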

Journal ArticleDOI
TL;DR: In this article, the authors review the recent advances and open challenges in the field of solution-processed photodetectors, examining the topic from both the materials and the device perspective and highlighting the potential of the synergistic combination of materials and device engineering.
Abstract: Efficient light detection is central to modern science and technology. Current photodetectors mainly use photodiodes based on crystalline inorganic elemental semiconductors, such as silicon, or compounds such as III–V semiconductors. Photodetectors made of solution-processed semiconductors — which include organic materials, metal-halide perovskites and quantum dots — have recently emerged as candidates for next-generation light sensing. They combine ease of processing, tailorable optoelectronic properties, facile integration with complementary metal–oxide–semiconductors, compatibility with flexible substrates and good performance. Here, we review the recent advances and the open challenges in the field of solution-processed photodetectors, examining the topic from both the materials and the device perspective and highlighting the potential of the synergistic combination of materials and device engineering. We explore hybrid phototransistors and their potential to overcome trade-offs in noise, gain and speed, as well as the rapid advances in metal-halide perovskite photodiodes and their recent application in narrowband filterless photodetection. Conventional photodetectors, made of crystalline inorganic semiconductors, are limited in terms of the compactness and sensitivity they can reach. Photodetectors based on solution-processed semiconductors combine ease of processing, tailorable optoelectronic properties and good performance, and thus hold potential for next-generation light sensing.

Proceedings Article
02 Nov 2017
TL;DR: In this paper, the authors propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well).
Abstract: We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4% test error for any adversarial attack with bounded $\ell_\infty$ norm less than $\epsilon = 0.1$. This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains.
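As rough intuition for bounding the set of activations reachable under a norm-bounded perturbation, the sketch below propagates an l-infinity box through a single linear + ReLU layer using interval arithmetic. This is a much looser relative of the paper's convex outer approximation, not the paper's method, and the weights and epsilon budget are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x = rng.standard_normal(3)
eps = 0.1  # l-infinity perturbation budget (illustrative)

# Propagate the box [x - eps, x + eps] through W x + b exactly:
# the output box is centered at W x + b with per-row radius |W| @ (eps * 1).
center = W @ x + b
radius = np.abs(W) @ np.full(3, eps)
lo, hi = center - radius, center + radius

# ReLU is monotone, so applying it to the endpoints bounds the output.
relu_lo, relu_hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Any perturbed input's true output must land inside the computed box.
x_adv = x + rng.uniform(-eps, eps, size=3)
y_adv = np.maximum(W @ x_adv + b, 0.0)
inside = bool(np.all((relu_lo <= y_adv) & (y_adv <= relu_hi)))
```

Training against such bounds gives certified (if conservative) robustness; the paper's linear-program dual tightens this considerably.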

Journal ArticleDOI
TL;DR: Pertuzumab significantly improved the rates of invasive-disease–free survival among patients with HER2-positive, operable breast cancer when it was added to trastuzumab and adjuvant chemotherapy.
Abstract: Background: Pertuzumab increases the rate of pathological complete response in the preoperative context and increases overall survival among patients with metastatic disease when it is added to trastuzumab and chemotherapy for the treatment of human epidermal growth factor receptor 2 (HER2)–positive breast cancer. In this trial, we investigated whether pertuzumab, when added to adjuvant trastuzumab and chemotherapy, improves outcomes among patients with HER2-positive early breast cancer. Methods: We randomly assigned patients with node-positive or high-risk node-negative HER2-positive, operable breast cancer to receive either pertuzumab or placebo added to standard adjuvant chemotherapy plus 1 year of treatment with trastuzumab. We assumed a 3-year invasive-disease–free survival rate of 91.8% with pertuzumab and 89.2% with placebo. Results: In the trial population, 63% of the patients who were randomly assigned to receive pertuzumab (2400 patients) or placebo (2405 patients) had node-positive disease and 36% ha...

Journal ArticleDOI
Sylvain Deville1
TL;DR: In this article, a review of the freeze-casting process is presented, with particular attention paid to the underlying principles of the structure formation mechanisms and the influence of processing parameters on the structure.
Abstract: Freeze-casting, the templating of porous structures by the solidification of a solvent, has seen a great deal of effort during the last few years. Of particular interest are the unique structures and properties exhibited by porous freeze-cast ceramics, which have opened new opportunities in the field of cellular ceramics. The objective of this review is to provide a first understanding of the process as it stands today, with particular attention paid to the underlying principles of the structure formation mechanisms and the influence of processing parameters on the structure. This analysis highlights the current limits of both the understanding and the control of the process. A few perspectives are given with regard to current achievements, interests, and identified issues.

Proceedings Article
12 Feb 2018
TL;DR: The experiments show that the best discovered activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which is named Swish, tends to work better than ReLU on deeper models across a number of challenging datasets.
Abstract: The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9\% for Mobile NASNet-A and 0.6\% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
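The discovered function is straightforward to implement; a minimal sketch of Swish(x) = x * sigmoid(beta * x), as stated in the abstract:

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x times the logistic sigmoid of beta * x."""
    return x * (1.0 / (1.0 + math.exp(-beta * x)))
```

Unlike ReLU, Swish is smooth and non-monotonic: it dips slightly below zero for small negative inputs before approaching zero. At beta = 0 it reduces to the linear function x / 2, and as beta grows it approaches ReLU.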

Journal ArticleDOI
TL;DR: It is demonstrated that the Li2S decomposition energy barrier is associated with the binding between isolated Li ions and the sulfur in sulfides; this is the main reason that sulfide materials can induce lower overpotential compared with commonly used carbon materials.
Abstract: Polysulfide binding and trapping to prevent dissolution into the electrolyte by a variety of materials has been well studied in Li−S batteries. Here we discover that some of those materials can play an important role as an activation catalyst to facilitate oxidation of the discharge product, Li2S, back to the charge product, sulfur. Combining theoretical calculations and experimental design, we select a series of metal sulfides as a model system to identify the key parameters in determining the energy barrier for Li2S oxidation and polysulfide adsorption. We demonstrate that the Li2S decomposition energy barrier is associated with the binding between isolated Li ions and the sulfur in sulfides; this is the main reason that sulfide materials can induce lower overpotential compared with commonly used carbon materials. Fundamental understanding of this reaction process is a crucial step toward rational design and screening of materials to achieve high reversible capacity and long cycle life in Li−S batteries.

Journal ArticleDOI
TL;DR: The hypothesis that black holes are the fastest computers in nature is discussed and the conjecture that the quantum complexity of a holographic state is dual to the action of a certain spacetime region that is called a Wheeler-DeWitt patch is illustrated.
Abstract: We conjecture that the quantum complexity of a holographic state is dual to the action of a certain spacetime region that we call a Wheeler-DeWitt patch. We illustrate and test the conjecture in the context of neutral, charged, and rotating black holes in anti-de Sitter spacetime, as well as black holes perturbed with static shells and with shock waves. This conjecture evolved from a previous conjecture that complexity is dual to spatial volume, but appears to be a major improvement over the original. In light of our results, we discuss the hypothesis that black holes are the fastest computers in nature.

Journal ArticleDOI
15 Oct 2015-Nature
TL;DR: A two-qubit logic gate is presented, which uses single spins in isotopically enriched silicon and is realized by performing single- and two-qubit operations in a quantum dot system using the exchange interaction, as envisaged in the Loss–DiVincenzo proposal.
Abstract: Quantum computation requires qubits that can be coupled in a scalable manner, together with universal and high-fidelity one- and two-qubit logic gates. Many physical realizations of qubits exist, including single photons, trapped ions, superconducting circuits, single defects or atoms in diamond and silicon, and semiconductor quantum dots, with single-qubit fidelities that exceed the stringent thresholds required for fault-tolerant quantum computing. Despite this, high-fidelity two-qubit gates in the solid state that can be manufactured using standard lithographic techniques have so far been limited to superconducting qubits, owing to the difficulties of coupling qubits and dephasing in semiconductor systems. Here we present a two-qubit logic gate, which uses single spins in isotopically enriched silicon and is realized by performing single- and two-qubit operations in a quantum dot system using the exchange interaction, as envisaged in the Loss-DiVincenzo proposal. We realize CNOT gates via controlled-phase operations combined with single-qubit operations. Direct gate-voltage control provides single-qubit addressability, together with a switchable exchange interaction that is used in the two-qubit controlled-phase gate. By independently reading out both qubits, we measure clear anticorrelations in the two-spin probabilities of the CNOT gate.
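The identity behind realizing CNOT via a controlled-phase operation combined with single-qubit operations, as stated in the abstract, can be checked numerically. A small sketch conjugating CZ by Hadamards on the target qubit (the specific decomposition shown is the standard textbook one, not necessarily the experimental pulse sequence):

```python
import numpy as np

# Single-qubit Hadamard, identity, and the two-qubit controlled-phase (CZ).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
I = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# CNOT = (I tensor H) . CZ . (I tensor H), first qubit as control:
# on the control-1 block this is H Z H = X, i.e. a bit flip on the target.
cnot_from_cz = np.kron(I, H) @ CZ @ np.kron(I, H)

CNOT = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0, 0.0]])

matches = bool(np.allclose(cnot_from_cz, CNOT))
```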