
Proceedings ArticleDOI
Hao Wang1, Yitong Wang1, Zhou Zheng1, Ji Xing1, Dihong Gong1, Jingchao Zhou1, Zhifeng Li1, Wei Liu1 
18 Jun 2018
TL;DR: In this article, the authors proposed a large margin cosine loss (LMCL), which L2-normalizes both features and weight vectors to remove radial variations and introduces a cosine margin term to further maximize the decision margin in the angular space.
Abstract: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, YouTube Faces (YTF) and Labeled Faces in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.
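The margin construction described above is compact enough to sketch in code. Below is a minimal, hedged PyTorch version of the LMCL objective; the scale s and margin m defaults are illustrative, not necessarily the authors' exact settings:

```python
import torch
import torch.nn.functional as F

def lmcl_loss(features, weights, labels, s=30.0, m=0.35):
    """Large margin cosine loss (sketch). features: (B, D) embeddings,
    weights: (C, D) class weight vectors, labels: (B,) class indices."""
    # L2-normalize both features and weights so the logits are pure cosines
    # (this removes the radial variation the abstract refers to)
    cos = F.linear(F.normalize(features), F.normalize(weights))  # (B, C)
    # Subtract the cosine margin m from the target class only
    onehot = F.one_hot(labels, num_classes=weights.size(0)).float()
    logits = s * (cos - m * onehot)  # s rescales the normalized logits
    return F.cross_entropy(logits, labels)
```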

1,879 citations


Journal ArticleDOI
Abstract: "Quantum sensing" describes the use of a quantum system, quantum properties or quantum phenomena to perform a measurement of a physical quantity Historical examples of quantum sensors include magnetometers based on superconducting quantum interference devices and atomic vapors, or atomic clocks More recently, quantum sensing has become a distinct and rapidly growing branch of research within the area of quantum science and technology, with the most common platforms being spin qubits, trapped ions and flux qubits The field is expected to provide new opportunities - especially with regard to high sensitivity and precision - in applied physics and other areas of science In this review, we provide an introduction to the basic principles, methods and concepts of quantum sensing from the viewpoint of the interested experimentalist

1,878 citations


Journal ArticleDOI
26 May 2017-Science
TL;DR: A subcellular map of the human proteome is presented to facilitate functional exploration of individual proteins and their role in human biology and disease, and is integrated into existing network models of protein-protein interactions for increased accuracy.
Abstract: Resolving the spatial distribution of the human proteome at a subcellular level can greatly increase our understanding of human biology and disease. Here we present a comprehensive image-based map ...

1,878 citations



Journal ArticleDOI
TL;DR: The abstract available for this article is its abbreviation list from a clinical cardiology guidelines document, covering terms such as angiotensin-converting enzyme (ACE) and angiotensin II receptor blocker (ARB).
Abstract: 2-D: two-dimensional; 3-D: three-dimensional; 5-FU: 5-fluorouracil; ACE: angiotensin-converting enzyme; ARB: angiotensin II receptor blocker; ASE: American Society of Echocardiography; BNP: B-type natriuretic peptide; CABG: coronary artery bypass graft; CAD: coronary artery disease

1,875 citations


Journal ArticleDOI
14 Apr 2017-Science
TL;DR: This article showed that applying machine learning to ordinary human language results in human-like semantic biases and replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web.
Abstract: Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
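The bias measurements reported here follow an embedding analogue of the Implicit Association Test. A hedged sketch of that test's effect size, computed from cosine similarities between target and attribute word vectors (function names are illustrative):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    """X, Y: lists of target-word vectors (e.g. flowers vs. insects);
    A, B: lists of attribute-word vectors (e.g. pleasant vs. unpleasant)."""
    def assoc(w):
        # differential association of one word with the two attribute sets
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [assoc(x) for x in X]
    sy = [assoc(y) for y in Y]
    # standardized difference of mean associations between the target sets
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)
```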

1,874 citations


Book ChapterDOI
08 Sep 2018
TL;DR: In this article, the authors propose a multimodal unsupervised image-to-image translation (MUNIT) framework, where the image representation can be decomposed into a content code that is domain-invariant and a style code that captures domain-specific properties.
Abstract: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any examples of corresponding image pairs. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT.
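The translation step the abstract describes, recombining a content code with a sampled style code, reduces to a few lines. This sketch assumes trained encoder/decoder modules with the illustrative interfaces below; it is not the API of the linked repository:

```python
import torch

def translate_a_to_b(x_a, content_encoder_a, decoder_b, style_dim=8):
    """Translate a batch of domain-A images into domain B."""
    c_a = content_encoder_a(x_a)               # domain-invariant content code
    s_b = torch.randn(x_a.size(0), style_dim)  # style code from B's prior
    return decoder_b(c_a, s_b)                 # different s_b -> diverse outputs
```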

1,874 citations


Journal ArticleDOI
TL;DR: How AMPK functions as a central mediator of the cellular response to energetic stress and mitochondrial insults and coordinates multiple features of autophagy and mitochondrial biology is discussed.
Abstract: Cells constantly adapt their metabolism to meet their energy needs and respond to nutrient availability. Eukaryotes have evolved a very sophisticated system to sense low cellular ATP levels via the serine/threonine kinase AMP-activated protein kinase (AMPK) complex. Under conditions of low energy, AMPK phosphorylates specific enzymes and growth control nodes to increase ATP generation and decrease ATP consumption. In the past decade, the discovery of numerous new AMPK substrates has led to a more complete understanding of the minimal number of steps required to reprogramme cellular metabolism from anabolism to catabolism. This energy switch controls cell growth and several other cellular processes, including lipid and glucose metabolism and autophagy. Recent studies have revealed that one ancestral function of AMPK is to promote mitochondrial health, and multiple newly discovered targets of AMPK are involved in various aspects of mitochondrial homeostasis, including mitophagy. This Review discusses how AMPK functions as a central mediator of the cellular response to energetic stress and mitochondrial insults and coordinates multiple features of autophagy and mitochondrial biology.

1,873 citations


Proceedings ArticleDOI
TL;DR: A new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced, and top results are obtained on the challenging Pascal VOC 2012 segmentation benchmark.
Abstract: Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
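The mean-field inference that CRF-RNN unrolls can be sketched as a single repeatable update. One illustrative NumPy step, under a simplified explicit-kernel assumption (the real method uses efficient Gaussian filtering rather than an N x N matrix):

```python
import numpy as np

def mean_field_step(Q, unary, kernel, compat):
    """Q: (N, L) current label marginals; unary: (N, L) unary logits;
    kernel: (N, N) Gaussian pairwise weights; compat: (L, L) label compatibility."""
    msg = kernel @ Q          # message passing: filter the marginals
    pen = msg @ compat        # compatibility transform between labels
    logits = unary - pen      # combine with the unary potentials
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # normalize (softmax)
```

Unrolling a fixed number of such steps with shared parameters is what lets the CRF behave as a recurrent network trainable end-to-end by backpropagation.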

1,873 citations


Journal ArticleDOI
TL;DR: Among patients who had a previous acute coronary syndrome and who were receiving high-intensity statin therapy, the risk of recurrent ischemic cardiovascular events was lower among those who received alirocumab than among those who received placebo.
Abstract: BACKGROUND Patients who have had an acute coronary syndrome are at high risk for recurrent ischemic cardiovascular events. We sought to determine whether alirocumab, a human monoclonal antibody to proprotein convertase subtilisin-kexin type 9 (PCSK9), would improve cardiovascular outcomes after an acute coronary syndrome in patients receiving high-intensity statin therapy. METHODS We conducted a multicenter, randomized, double-blind, placebo-controlled trial involving 18,924 patients who had an acute coronary syndrome 1 to 12 months earlier, had a low-density lipoprotein (LDL) cholesterol level of at least 70 mg per deciliter (1.8 mmol per liter), a non-high-density lipoprotein cholesterol level of at least 100 mg per deciliter (2.6 mmol per liter), or an apolipoprotein B level of at least 80 mg per deciliter, and were receiving statin therapy at a high-intensity dose or at the maximum tolerated dose. Patients were randomly assigned to receive alirocumab subcutaneously at a dose of 75 mg (9462 patients) or matching placebo (9462 patients) every 2 weeks. The dose of alirocumab was adjusted under blinded conditions to target an LDL cholesterol level of 25 to 50 mg per deciliter (0.6 to 1.3 mmol per liter). The primary end point was a composite of death from coronary heart disease, nonfatal myocardial infarction, fatal or nonfatal ischemic stroke, or unstable angina requiring hospitalization. RESULTS The median duration of follow-up was 2.8 years. A composite primary end-point event occurred in 903 patients (9.5%) in the alirocumab group and in 1052 patients (11.1%) in the placebo group (hazard ratio, 0.85; 95% confidence interval [CI], 0.78 to 0.93; P<0.001). A total of 334 patients (3.5%) in the alirocumab group and 392 patients (4.1%) in the placebo group died (hazard ratio, 0.85; 95% CI, 0.73 to 0.98). The absolute benefit of alirocumab with respect to the composite primary end point was greater among patients who had a baseline LDL cholesterol level of 100 mg or more per deciliter than among patients who had a lower baseline level. The incidence of adverse events was similar in the two groups, with the exception of local injection-site reactions (3.8% in the alirocumab group vs. 2.1% in the placebo group). CONCLUSIONS Among patients who had a previous acute coronary syndrome and who were receiving high-intensity statin therapy, the risk of recurrent ischemic cardiovascular events was lower among those who received alirocumab than among those who received placebo.

1,873 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks, and recursive learning is used to control the model parameters while increasing the depth.
Abstract: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, and recursive learning is used to control the number of model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.
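The weight sharing that keeps DRRN concise can be illustrated directly: one residual unit is applied several times, so effective depth grows while the parameter count stays fixed. A hedged PyTorch sketch (layer sizes and unfolding count are illustrative, not the paper's exact configuration):

```python
import torch.nn as nn

class RecursiveResidualBlock(nn.Module):
    """One conv pair reused across U unfoldings, each with a residual
    connection back to the block input."""
    def __init__(self, channels=64, unfoldings=9):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()
        self.unfoldings = unfoldings

    def forward(self, x):
        out = x
        for _ in range(self.unfoldings):  # same weights on every pass
            out = self.conv2(self.relu(self.conv1(self.relu(out))))
            out = out + x                 # local residual to the block input
        return out
```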

Journal ArticleDOI
TL;DR: The authors identify the challenges of studying MDSC, a heterogeneous population expanded in cancer and other chronic inflammatory conditions, and propose a set of minimal reporting guidelines for mouse and human MDSC.
Abstract: Myeloid-derived suppressor cells (MDSCs) have emerged as major regulators of immune responses in cancer and other pathological conditions. In recent years, ample evidence supports key contributions of MDSC to tumour progression through both immune-mediated mechanisms and those not directly associated with immune suppression. MDSC are the subject of intensive research with >500 papers published in 2015 alone. However, the phenotypic, morphological and functional heterogeneity of these cells generates confusion in investigation and analysis of their roles in inflammatory responses. The purpose of this communication is to suggest characterization standards in the burgeoning field of MDSC research.

Proceedings Article
13 Feb 2017
TL;DR: SeqGAN as mentioned in this paper models the data generator as a stochastic policy in reinforcement learning (RL), and the RL reward signal comes from the discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search.
Abstract: As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
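The generator update the abstract outlines is a REINFORCE-style policy gradient: per-token rewards come from rolling each partial sequence out to completion and scoring the result with the discriminator. A hedged sketch of the resulting loss (shapes and names are illustrative; inputs are torch tensors):

```python
def seqgan_generator_loss(log_probs, rewards):
    """log_probs: (B, T) log-probabilities of the sampled tokens;
    rewards: (B, T) discriminator-derived rewards from Monte Carlo rollouts.
    Ascent on expected reward = descent on this negated, reward-weighted
    log-likelihood, which sidesteps differentiating through discrete tokens."""
    return -(log_probs * rewards).sum(dim=1).mean()
```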

Journal ArticleDOI
TL;DR: Direct Sparse Odometry (DSO) as mentioned in this paper combines a fully direct probabilistic model with consistent, joint optimization of all model parameters, including geometry represented as inverse depth in a reference frame and camera motion.
Abstract: Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry (represented as inverse depth in a reference frame) and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
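The photometric error at the heart of DSO compares raw intensities after an exposure and affine-brightness correction. A single-pixel, hedged sketch (the method actually sums a Huber norm of such residuals over small pixel patterns, with per-frame brightness parameters):

```python
import math

def photometric_residual(i_ref, i_tgt, t_ref, t_tgt, a=0.0, b=0.0):
    """i_ref, i_tgt: intensities of a point in the reference and target frames;
    t_ref, t_tgt: exposure times; a, b: affine brightness-transfer parameters."""
    # predict the target intensity from the reference via the brightness model
    predicted = (t_tgt / t_ref) * math.exp(a) * i_ref + b
    return i_tgt - predicted
```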

Journal ArticleDOI
TL;DR: A novel approach (pkCSM), which uses graph-based signatures to develop predictive models of central ADMET properties for drug development, is presented; it performs as well as or better than current methods.
Abstract: Drug development has a high attrition rate, with poor pharmacokinetic and safety properties a significant hurdle. Computational approaches may help minimize these risks. We have developed a novel approach (pkCSM) which uses graph-based signatures to develop predictive models of central ADMET properties for drug development. pkCSM performs as well or better than current methods. A freely accessible web server (http://structure.bioc.cam.ac.uk/pkcsm), which retains no information submitted to it, provides an integrated platform to rapidly evaluate pharmacokinetic and toxicity properties.


Posted Content
TL;DR: I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101 after pre-training on Kinetics, and a new Two-Stream Inflated 3D ConvNet that is based on 2D ConvNet inflation is introduced.
Abstract: The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
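The "inflation" trick is simple to state in code: a pretrained 2D kernel is repeated along a new time axis and rescaled so that a video of identical frames reproduces the 2D network's activations. A hedged NumPy sketch:

```python
import numpy as np

def inflate_2d_filter(w2d, t):
    """w2d: (out_ch, in_ch, kH, kW) 2D conv kernel; t: temporal kernel size.
    Returns an (out_ch, in_ch, t, kH, kW) 3D kernel."""
    # repeat along a new time axis; divide by t so the response magnitude
    # on temporally constant input matches the original 2D filter
    return np.repeat(w2d[:, :, None, :, :], t, axis=2) / t
```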

Journal ArticleDOI
TL;DR: An in-depth annotation of the newly discovered coronavirus (2019-nCoV) genome has revealed differences between 2019-nCoV and severe acute respiratory syndrome (SARS) or SARS-like coronaviruses.

Proceedings Article
01 Jan 2016
TL;DR: Prioritized experience replay as mentioned in this paper is a framework for prioritizing experience, so as to replay important transitions more frequently and therefore learn more efficiently; combined with DQN, it achieves a new state of the art across many Atari games.
Abstract: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
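The core sampling rule is proportional prioritization with importance-sampling correction. A hedged NumPy sketch (the alpha and beta defaults are common choices, not necessarily the paper's best settings, and a production version would use a sum-tree rather than full renormalization):

```python
import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4):
    """Sample indices with probability p_i^alpha / sum_k p_k^alpha and return
    importance weights that correct for the non-uniform sampling."""
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = p / p.sum()
    idx = np.random.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()   # normalize weights for stability
```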

Journal ArticleDOI
TL;DR: In this paper, the authors propose a Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, which performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques.
Abstract: When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.
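The method's objective can be sketched as new-task cross-entropy plus a distillation term that keeps the old-task outputs close to responses recorded before training began. A hedged PyTorch version (temperature and weighting are illustrative):

```python
import torch.nn.functional as F

def lwf_loss(new_logits, labels, old_logits_now, old_logits_recorded,
             T=2.0, lam=1.0):
    """new_logits: outputs of the new-task head; old_logits_now: current
    old-task outputs; old_logits_recorded: old-task outputs saved up front."""
    ce = F.cross_entropy(new_logits, labels)
    # knowledge-distillation term with temperature T on the old task
    target = F.softmax(old_logits_recorded / T, dim=1)
    log_pred = F.log_softmax(old_logits_now / T, dim=1)
    distill = -(target * log_pred).sum(dim=1).mean() * (T * T)
    return ce + lam * distill
```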

Journal ArticleDOI
TL;DR: This review critically evaluates the current literature on the presence, behaviour and fate of microplastics in freshwater and terrestrial environments and, where appropriate, draws on relevant studies from other fields including nanotechnology, agriculture and waste management.

Journal ArticleDOI
TL;DR: MixOmics is introduced, an R package dedicated to the multivariate analysis of biological data sets with a specific focus on data exploration, dimension reduction and visualisation and extends Projection to Latent Structure models for discriminant analysis.
Abstract: The advent of high throughput technologies has led to a wealth of publicly available 'omics data coming from different sources, such as transcriptomics, proteomics, metabolomics. Combining such large-scale biological data sets can lead to the discovery of important biological insights, provided that relevant information can be extracted in a holistic manner. Current statistical approaches have been focusing on identifying small subsets of molecules (a 'molecular signature') to explain or predict biological conditions, but mainly for a single type of 'omics. In addition, commonly used methods are univariate and consider each biological feature independently. We introduce mixOmics, an R package dedicated to the multivariate analysis of biological data sets with a specific focus on data exploration, dimension reduction and visualisation. By adopting a systems biology approach, the toolkit provides a wide range of methods that statistically integrate several data sets at once to probe relationships between heterogeneous 'omics data sets. Our recent methods extend Projection to Latent Structure (PLS) models for discriminant analysis, for data integration across multiple 'omics data or across independent studies, and for the identification of molecular signatures. We illustrate our latest mixOmics integrative frameworks for the multivariate analyses of 'omics data available from the package.
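mixOmics itself is an R package, but the PLS-DA idea at its core is easy to illustrate in Python with scikit-learn by regressing one-hot-encoded class labels on the features. A hedged sketch on synthetic data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))   # e.g. 500 'omics features, 40 samples
y = np.repeat([0, 1], 20)            # two biological conditions
Y = np.eye(2)[y]                     # one-hot encode the labels for PLS-DA

pls = PLSRegression(n_components=2).fit(X, Y)
scores = pls.transform(X)            # low-dimensional components to visualise
```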

Journal ArticleDOI
TL;DR: In this article, the authors presented the CHELSA (Climatologies at high resolution for the earth's land surface areas) data of downscaled model output temperature and precipitation estimates of the ERA-Interim climatic reanalysis to a high resolution of 30 arc sec.
Abstract: High-resolution information on climatic conditions is essential to many applications in environmental and ecological sciences. Here we present the CHELSA (Climatologies at high resolution for the earth’s land surface areas) data of downscaled model output temperature and precipitation estimates of the ERA-Interim climatic reanalysis to a high resolution of 30 arc sec. The temperature algorithm is based on statistical downscaling of atmospheric temperatures. The precipitation algorithm incorporates orographic predictors including wind fields, valley exposition, and boundary layer height, with a subsequent bias correction. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979–2013. We compare the data derived from the CHELSA algorithm with other standard gridded products and station data from the Global Historical Climate Network. We compare the performance of the new climatologies in species distribution modelling and show that we can increase the accuracy of species range predictions. We further show that CHELSA climatological data has a similar accuracy as other products for temperature, but that its predictions of precipitation patterns are better. Machine-accessible metadata file describing the reported data (ISA-Tab format)

Posted Content
TL;DR: The SUite of Nonlinear and DIfferential/ALgebraic equation Solvers (SUNDIALS) has been redesigned, as described in this paper, to better enable the use of application-specific and third-party algebraic solvers and data structures.
Abstract: In recent years, the SUite of Nonlinear and DIfferential/ALgebraic equation Solvers (SUNDIALS) has been redesigned to better enable the use of application-specific and third-party algebraic solvers and data structures. Throughout this work, we have adhered to specific guiding principles that minimized the impact to current users while providing maximum flexibility for later evolution of solvers and data structures. The redesign was done through creation of new classes for linear and nonlinear solvers, enhancements to the vector class, and the creation of modern Fortran interfaces that leverage interoperability features of the Fortran 2003 standard. The vast majority of this work has been performed "behind-the-scenes," with minimal changes to the user interface and no reduction in solver capabilities or performance. However, these changes now allow advanced users to create highly customized solvers that exploit their problem structure, enabling SUNDIALS use on extreme-scale, heterogeneous computational architectures.

Journal ArticleDOI
TL;DR: The International Agency for Research on Cancer convened a workshop on the relationship between body fatness and cancer, from which an IARC handbook on the topic will appear.
Abstract: The International Agency for Research on Cancer convened a workshop on the relationship between body fatness and cancer, from which an IARC handbook on the topic will appear. An executive summary of the evidence is presented.

Journal ArticleDOI
TL;DR: A marked increase in the all-cause mortality of middle-aged white non-Hispanic men and women in the United States between 1999 and 2013 reversed decades of progress in mortality and was unique to the United States; no other rich country saw a similar turnaround.
Abstract: This paper documents a marked increase in the all-cause mortality of middle-aged white non-Hispanic men and women in the United States between 1999 and 2013. This change reversed decades of progress in mortality and was unique to the United States; no other rich country saw a similar turnaround. The midlife mortality reversal was confined to white non-Hispanics; black non-Hispanics and Hispanics at midlife, and those aged 65 and above in every racial and ethnic group, continued to see mortality rates fall. This increase for whites was largely accounted for by increasing death rates from drug and alcohol poisonings, suicide, and chronic liver diseases and cirrhosis. Although all education groups saw increases in mortality from suicide and poisonings, and an overall increase in external cause mortality, those with less education saw the most marked increases. Rising midlife mortality rates of white non-Hispanics were paralleled by increases in midlife morbidity. Self-reported declines in health, mental health, and ability to conduct activities of daily living, and increases in chronic pain and inability to work, as well as clinically measured deteriorations in liver function, all point to growing distress in this population. We comment on potential economic causes and consequences of this deterioration.

Proceedings Article
07 Dec 2015
TL;DR: In this paper, a convolutional neural network that operates directly on graphs is proposed, allowing end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape.
Abstract: We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.
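The generalization of circular fingerprints can be sketched as one differentiable layer: sum each atom's features with its neighbors', transform, and write a softmax-gated contribution from every atom into a fixed-length vector. A hedged one-layer NumPy version (the paper stacks several such layers with learned weights):

```python
import numpy as np

def neural_fingerprint_layer(adj, feats, H, W):
    """adj: (N, N) molecular adjacency; feats: (N, D) atom features;
    H: (D, D) hidden transform; W: (D, F) fingerprint write-out weights."""
    # aggregate each atom with its neighbors, then apply a smooth nonlinearity
    h = np.tanh((adj + np.eye(len(adj))) @ feats @ H)
    z = h @ W
    # softmax replaces the discrete hash/index step of circular fingerprints
    e = np.exp(z - z.max(axis=1, keepdims=True))
    contributions = e / e.sum(axis=1, keepdims=True)
    return contributions.sum(axis=0)   # pooled, fixed-length fingerprint
```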

Proceedings Article
17 Jul 2017
TL;DR: This article found that depth, width, weight decay, and batch normalization are important factors influencing confidence calibration of neural networks, and that temperature scaling is surprisingly effective at calibrating predictions.
Abstract: Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
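Temperature scaling itself is a one-parameter fix: divide the logits by a scalar T learned on a validation set by minimizing negative log-likelihood, which changes confidences but never the predicted class. A hedged PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels):
    """val_logits: (N, C) uncalibrated logits; val_labels: (N,) labels.
    Returns the fitted temperature T (T > 1 softens overconfident outputs)."""
    log_t = torch.zeros(1, requires_grad=True)   # parametrize T = exp(log_t) > 0
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()
```

At test time, calibrated probabilities are then F.softmax(logits / T, dim=1).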

Journal ArticleDOI
TL;DR: It is established that Fe(3+) in Ni(1-x)Fe(x)OOH occupies octahedral sites with unusually short Fe-O bond distances, induced by edge-sharing with surrounding [NiO6] octahedra, which results in near optimal adsorption energies of OER intermediates and low overpotentials at Fe sites.
Abstract: Highly active catalysts for the oxygen evolution reaction (OER) are required for the development of photoelectrochemical devices that generate hydrogen efficiently from water using solar energy. Here, we identify the origin of a 500-fold OER activity enhancement that can be achieved with mixed (Ni,Fe)oxyhydroxides (Ni(1-x)Fe(x)OOH) over their pure Ni and Fe parent compounds, resulting in one of the most active currently known OER catalysts in alkaline electrolyte. Operando X-ray absorption spectroscopy (XAS) using high energy resolution fluorescence detection (HERFD) reveals that Fe(3+) in Ni(1-x)Fe(x)OOH occupies octahedral sites with unusually short Fe-O bond distances, induced by edge-sharing with surrounding [NiO6] octahedra. Using computational methods, we establish that this structural motif results in near optimal adsorption energies of OER intermediates and low overpotentials at Fe sites. By contrast, Ni sites in Ni(1-x)Fe(x)OOH are not active sites for the oxidation of water.

Journal ArticleDOI
TL;DR: These guidelines provide an update of the previous IFCN report on “Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application” and include some recent extensions and developments.