Journal ArticleDOI
TL;DR: The main findings show that Internet health information seeking can improve the patient-physician relationship depending on whether the patient discusses the information with the physician and on their prior relationship.
Abstract: Background: With online health information becoming increasingly popular among patients, concerns have been raised about the impact of patients' Internet health information-seeking behavior on their relationship with physicians. Therefore, it is pertinent to understand the influence of online health information on the patient-physician relationship. Objective: Our objective was to systematically review existing research on patients' Internet health information seeking and its influence on the patient-physician relationship. Methods: We systematically searched PubMed and key medical informatics, information systems, and communication science journals covering the period of 2000 to 2015. Empirical articles published in English were included. We analyzed the content covering themes in 2 broad categories: factors affecting patients' discussion of online findings during consultations and implications for the patient-physician relationship. Results: We identified 18 articles that met the inclusion criteria and the quality requirement for the review. The articles revealed barriers, facilitators, and demographic factors that influence patients' disclosure of online health information during consultations and the different mechanisms patients use to reveal these findings. Our review also showed the mechanisms by which online information could influence patients' relationship with their physicians. Conclusions: Results of this review contribute to the understanding of the patient-physician relationship of Internet-informed patients. Our main findings show that Internet health information seeking can improve the patient-physician relationship depending on whether the patient discusses the information with the physician and on their prior relationship. As patients have better access to health information through the Internet and expect to be more engaged in health decision making, traditional models of the patient-provider relationship and communication strategies must be revisited to adapt to this changing demographic. [J Med Internet Res 2017;19(1):e9]

641 citations


Journal ArticleDOI
TL;DR: This companion paper to the introduction of the International League Against Epilepsy (ILAE) 2017 classification of seizure types provides guidance on how to employ the classification, and this "users' manual" will assist the adoption of the new system.
Abstract: This companion paper to the introduction of the International League Against Epilepsy (ILAE) 2017 classification of seizure types provides guidance on how to employ the classification. Illustration of the classification is enacted by tables, a glossary of relevant terms, mapping of old to new terms, suggested abbreviations, and examples. Basic and extended versions of the classification are available, depending on the desired degree of detail. Key signs and symptoms of seizures (semiology) are used as a basis for categories of seizures that are focal or generalized from onset or with unknown onset. Any focal seizure can further be optionally characterized by whether awareness is retained or impaired. Impaired awareness during any segment of the seizure renders it a focal impaired awareness seizure. Focal seizures are further optionally characterized by motor onset signs and symptoms: atonic, automatisms, clonic, epileptic spasms, or hyperkinetic, myoclonic, or tonic activity. Nonmotor-onset seizures can manifest as autonomic, behavior arrest, cognitive, emotional, or sensory dysfunction. The earliest prominent manifestation defines the seizure type, which might then progress to other signs and symptoms. Focal seizures can become bilateral tonic-clonic. Generalized seizures engage bilateral networks from onset. Generalized motor seizure characteristics comprise atonic, clonic, epileptic spasms, myoclonic, myoclonic-atonic, myoclonic-tonic-clonic, tonic, or tonic-clonic. Nonmotor (absence) seizures are typical or atypical, or seizures that present prominent myoclonic activity or eyelid myoclonia. Seizures of unknown onset may have features that can still be classified as motor, nonmotor, tonic-clonic, epileptic spasms, or behavior arrest. This "users' manual" for the ILAE 2017 seizure classification will assist the adoption of the new system.
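The branching logic described above lends itself to a small worked example. The sketch below encodes the coarse onset/awareness flow from this abstract as a Python function; the function name and label strings are illustrative stand-ins, not ILAE terminology rules:

```python
# Minimal sketch of the ILAE 2017 classification flow described above.
# The function and label names are illustrative; consult the ILAE
# documents for the authoritative decision rules.

def classify_seizure(onset, awareness=None, first_manifestation=None):
    """Return a coarse ILAE 2017-style seizure label.

    onset: "focal", "generalized", or "unknown"
    awareness: for focal seizures, True if retained, False if impaired
    first_manifestation: e.g. "motor: automatisms" or "nonmotor: cognitive"
    """
    if onset == "focal":
        # Impaired awareness during any segment of the seizure renders
        # it a focal impaired awareness seizure.
        label = "focal aware" if awareness else "focal impaired awareness"
    elif onset == "generalized":
        # Generalized seizures engage bilateral networks from onset.
        label = "generalized"
    else:
        label = "unknown onset"
    if first_manifestation:
        # The earliest prominent manifestation defines the seizure type.
        label += f" {first_manifestation}"
    return label

print(classify_seizure("focal", awareness=False,
                       first_manifestation="motor: automatisms"))
# -> focal impaired awareness motor: automatisms
```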

641 citations


Journal ArticleDOI
TL;DR: These consensus statements provide a review of the literature and specific, updated recommendations for eradication therapy in adults and recommend that all H. pylori eradication regimens now be given for 14 days.

641 citations


Journal ArticleDOI
TL;DR: The recent studies have described how leader cells at the front of cell groups drive migration and have highlighted the importance of follower cells and cell-cell communication, both between followers and between follower and leader cells, to improve the efficiency of collective movement.
Abstract: Collective cell migration has a key role during morphogenesis and during wound healing and tissue renewal in the adult, and it is involved in cancer spreading. In addition to displaying a coordinated migratory behaviour, collectively migrating cells move more efficiently than if they migrated separately, which indicates that a cellular interplay occurs during collective cell migration. In recent years, evidence has accumulated confirming the importance of such intercellular communication and exploring the molecular mechanisms involved. These mechanisms are based both on direct physical interactions, which coordinate the cellular responses, and on the collective cell behaviour that generates an optimal environment for efficient directed migration. The recent studies have described how leader cells at the front of cell groups drive migration and have highlighted the importance of follower cells and cell-cell communication, both between followers and between follower and leader cells, to improve the efficiency of collective movement.

641 citations


Journal ArticleDOI
TL;DR: This guideline takes a holistic approach, addressing all aspects of the care of people with schizophrenia and related disorders, not only correct diagnosis and symptom relief but also optimal recovery of social function, and uses a clinical staging model as a framework for recommendations regarding assessment, treatment and ongoing care.
Abstract: Objectives:This guideline provides recommendations for the clinical management of schizophrenia and related disorders for health professionals working in Australia and New Zealand. It aims to encou...

641 citations


Journal ArticleDOI
26 Oct 2018-Science
TL;DR: Inorganic cation tuning, using rubidium and cesium, enables highly crystalline formamidinium-based perovskites without Br or MA, and this work demonstrates an efficiency of 20.35% (stabilized), one of the highest for MA-free perovskites, with a drastically improved stability reached without the stabilizing influence of mesoporous interlayers.
Abstract: Currently, perovskite solar cells (PSCs) with high performances greater than 20% contain bromine (Br), causing a suboptimal bandgap, and the thermally unstable methylammonium (MA) molecule. Avoiding Br and especially MA can therefore result in more optimal bandgaps and stable perovskites. We show that inorganic cation tuning, using rubidium and cesium, enables highly crystalline formamidinium-based perovskites without Br or MA. On a conventional, planar device architecture, using polymeric interlayers at the electron- and hole-transporting interface, we demonstrate an efficiency of 20.35% (stabilized), one of the highest for MA-free perovskites, with a drastically improved stability reached without the stabilizing influence of mesoporous interlayers. The perovskite is not heated beyond 100°C. Going MA-free is a new direction for perovskites that are inherently stable and compatible with tandems or flexible substrates, which are the main routes to commercializing PSCs.

641 citations


Journal ArticleDOI
TL;DR: The mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
Abstract: The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
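The abstract names Tikhonov regularization as the stabilizing device. As a minimal illustration of that technique (not the CSR processing chain; the matrices and the damping weight below are synthetic stand-ins), a regularized least-squares solve looks like this:

```python
# Minimal sketch of a Tikhonov-regularized least-squares solve, the
# technique named in the abstract. Illustrative only, not the CSR
# pipeline; A, y, and alpha are made-up stand-ins, and CSR's actual
# regularization matrix is time variable rather than a constant alpha*I.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 40))               # design matrix (obs x mascons)
x_true = rng.normal(size=40)
y = A @ x_true + 0.1 * rng.normal(size=100)  # noisy observations

alpha = 1.0                                  # regularization weight
# Solve min ||Ax - y||^2 + alpha * ||x||^2 via the normal equations;
# the damping suppresses the unconstrained solution's stripe errors.
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(40), A.T @ y)
```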

641 citations


Journal ArticleDOI
TL;DR: This Critical Insight comments on the metabolite identification process in LC-MS-based untargeted metabolomics studies, specifically in mammalian systems, focusing on the ability to accurately identify metabolites.

641 citations


Journal ArticleDOI
TL;DR: The utility of the CRISPR-Cas9 system in generating novel allelic variation for breeding drought-tolerant crops is demonstrated: the ARGOS8 variants increased grain yield by five bushels per acre under flowering stress conditions and had no yield loss under well-watered conditions.
Abstract: Maize ARGOS8 is a negative regulator of ethylene responses. A previous study has shown that transgenic plants constitutively overexpressing ARGOS8 have reduced ethylene sensitivity and improved grain yield under drought stress conditions. To explore the targeted use of ARGOS8 native expression variation in drought-tolerant breeding, a diverse set of over 400 maize inbreds was examined for ARGOS8 mRNA expression, but the expression levels in all lines were less than that created in the original ARGOS8 transgenic events. We then employed a CRISPR-Cas-enabled advanced breeding technology to generate novel variants of ARGOS8. The native maize GOS2 promoter, which confers a moderate level of constitutive expression, was inserted into the 5'-untranslated region of the native ARGOS8 gene or was used to replace the native promoter of ARGOS8. Precise genomic DNA modification at the ARGOS8 locus was verified by PCR and sequencing. The ARGOS8 variants had elevated levels of ARGOS8 transcripts relative to the native allele and these transcripts were detectable in all the tissues tested, which was the expected result when using the GOS2 promoter. A field study showed that compared to the WT, the ARGOS8 variants increased grain yield by five bushels per acre under flowering stress conditions and had no yield loss under well-watered conditions. These results demonstrate the utility of the CRISPR-Cas9 system in generating novel allelic variation for breeding drought-tolerant crops.

641 citations


Journal ArticleDOI
TL;DR: A systems biological model is proposed that posits circular communication loops amid the brain, gut, and gut microbiome, and in which perturbation at any level can propagate dysregulation throughout the circuit.
Abstract: Preclinical and clinical studies have shown bidirectional interactions within the brain-gut-microbiome axis. Gut microbes communicate to the central nervous system through at least 3 parallel and interacting channels involving nervous, endocrine, and immune signaling mechanisms. The brain can affect the community structure and function of the gut microbiota through the autonomic nervous system, by modulating regional gut motility, intestinal transit and secretion, and gut permeability, and potentially through the luminal secretion of hormones that directly modulate microbial gene expression. A systems biological model is proposed that posits circular communication loops amid the brain, gut, and gut microbiome, and in which perturbation at any level can propagate dysregulation throughout the circuit. A series of largely preclinical observations implicates alterations in brain-gut-microbiome communication in the pathogenesis and pathophysiology of irritable bowel syndrome, obesity, and several psychiatric and neurologic disorders. Continued research holds the promise of identifying novel therapeutic targets and developing treatment strategies to address some of the most debilitating, costly, and poorly understood diseases.

641 citations


Journal ArticleDOI
TL;DR: It is found that mothers with young children have reduced their work hours four to five times more than fathers, indicating yet another negative consequence of the COVID-19 pandemic and highlighting the challenges it poses to women's work hours and employment.
Abstract: School and daycare closures due to the COVID-19 pandemic have increased caregiving responsibilities for working parents. As a result, many have changed their work hours to meet these growing demands. In this study, we use panel data from the US Current Population Survey to examine changes in mothers' and fathers' work hours from February through April, 2020, the period of time prior to the widespread COVID-19 outbreak in the US and through its first peak. Using person-level fixed effects models, we find that mothers with young children have reduced their work hours four to five times more than fathers. Consequently, the gender gap in work hours has grown by 20 to 50 percent. These findings indicate yet another negative consequence of the COVID-19 pandemic, highlighting the challenges it poses to women's work hours and employment.
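A person-level fixed effects specification of the kind described here can be sketched as follows, assuming the linearmodels package and hypothetical column names (person_id, month, work_hours, post, mother); this is an illustrative setup, not the authors' code:

```python
# Sketch of a person-level fixed effects regression like the one the
# abstract describes, using the linearmodels package. File and column
# names are hypothetical stand-ins for a CPS panel extract.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("cps_panel.csv")            # hypothetical CPS extract
df = df.set_index(["person_id", "month"])    # entity and time index

# Person fixed effects absorb stable individual traits (including the
# time-invariant `mother` indicator); the post x mother interaction
# captures the differential change in mothers' hours after the outbreak.
model = PanelOLS.from_formula(
    "work_hours ~ post + post:mother + EntityEffects", data=df
)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```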

Journal ArticleDOI
TL;DR: It is suggested that changes in aridity, such as those predicted by climate-change models, may reduce microbial abundance and diversity, a response that will likely impact the provision of key ecosystem services by global drylands.
Abstract: Soil bacteria and fungi play key roles in the functioning of terrestrial ecosystems, yet our understanding of their responses to climate change lags significantly behind that of other organisms. This gap in our understanding is particularly true for drylands, which occupy ∼41% of Earth's surface, because no global, systematic assessments of the joint diversity of soil bacteria and fungi have been conducted in these environments to date. Here we present results from a study conducted across 80 dryland sites from all continents, except Antarctica, to assess how changes in aridity affect the composition, abundance, and diversity of soil bacteria and fungi. The diversity and abundance of soil bacteria and fungi was reduced as aridity increased. These results were largely driven by the negative impacts of aridity on soil organic carbon content, which positively affected the abundance and diversity of both bacteria and fungi. Aridity promoted shifts in the composition of soil bacteria, with increases in the relative abundance of Chloroflexi and α-Proteobacteria and decreases in Acidobacteria and Verrucomicrobia. Contrary to what has been reported by previous continental and global-scale studies, soil pH was not a major driver of bacterial diversity, and fungal communities were dominated by Ascomycota. Our results fill a critical gap in our understanding of soil microbial communities in terrestrial ecosystems. They suggest that changes in aridity, such as those predicted by climate-change models, may reduce microbial abundance and diversity, a response that will likely impact the provision of key ecosystem services by global drylands.

Journal ArticleDOI
TL;DR: Although the features of e-cigarette use that were responsible for injury have not been identified, this cluster of illnesses represents an emerging clinical syndrome or syndromes and additional work is needed to characterize the pathophysiology and to identify the definitive causes.
Abstract: Background E-cigarettes are battery-operated devices that heat a liquid and deliver an aerosolized product to the user. Pulmonary illnesses related to e-cigarette use have been reported, b...

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification.
Abstract: Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well.
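The simultaneous region selection and classification described here can be sketched as a two-stream scoring head: a softmax over classes in one stream, a softmax over regions in the other, combined multiplicatively and summed into an image-level score. The sketch below is a hedged PyTorch illustration with hypothetical feature dimensions, not the authors' released model:

```python
# Sketch of the two-stream region scoring described in the abstract:
# one softmax over classes (classification) and one over regions
# (selection), combined by element-wise product and summed into an
# image-level score trainable with only image-level labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedDetectionHead(nn.Module):
    def __init__(self, feat_dim=4096, num_classes=20):
        super().__init__()
        self.fc_cls = nn.Linear(feat_dim, num_classes)  # classification stream
        self.fc_det = nn.Linear(feat_dim, num_classes)  # region-selection stream

    def forward(self, region_feats):                    # (num_regions, feat_dim)
        cls = F.softmax(self.fc_cls(region_feats), dim=1)  # over classes
        det = F.softmax(self.fc_det(region_feats), dim=0)  # over regions
        region_scores = cls * det                # per-region, per-class scores
        image_scores = region_scores.sum(dim=0)  # image-level prediction
        return region_scores, image_scores

head = WeaklySupervisedDetectionHead()
scores, image_pred = head(torch.randn(300, 4096))  # e.g. 300 region proposals
```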

Journal ArticleDOI
TL;DR: In this article, the authors define requirements for a suitable descriptor and demonstrate how a meaningful descriptor can be found systematically, for a classic example, the energy difference of zinc blende or wurtzite and rocksalt semiconductors.
Abstract: Statistical learning of materials properties or functions so far starts with a largely silent, nonchallenged step: the choice of the set of descriptive parameters (termed descriptor). However, when the scientific connection between the descriptor and the actuating mechanisms is unclear, the causality of the learned descriptor-property relation is uncertain. Thus, a trustful prediction of new promising materials, identification of anomalies, and scientific advancement are doubtful. We analyze this issue and define requirements for a suitable descriptor. For a classic example, the energy difference of zinc blende or wurtzite and rocksalt semiconductors, we demonstrate how a meaningful descriptor can be found systematically.
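Systematic descriptor searches of this kind are often implemented with sparsity-promoting regression such as LASSO over a large pool of candidate features. The sketch below illustrates that generic approach on synthetic data; it is not the authors' pipeline:

```python
# Illustrative sketch of systematic descriptor selection via LASSO,
# one sparsity-promoting tool used for this kind of search. The
# candidate features and target values here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_materials, n_candidates = 80, 200
X = rng.normal(size=(n_materials, n_candidates))   # candidate descriptors
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.05 * rng.normal(size=n_materials)

X_std = StandardScaler().fit_transform(X)
model = Lasso(alpha=0.1).fit(X_std, y)

# Nonzero coefficients identify the few candidate features that act
# as a low-dimensional descriptor of the target property.
selected = np.flatnonzero(model.coef_)
print(selected)  # expect indices 3 and 17 to dominate
```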

Journal ArticleDOI
TL;DR: The canonical adenine nucleotide-dependent mechanism that activates AMPK when cellular energy status is compromised, as well as other, noncanonical activation mechanisms, are reviewed.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This paper proposes AoANet, an image captioning model built on an Attention on Attention (AoA) module, which extends the conventional attention mechanisms to determine the relevance between attention results and queries and achieves state-of-the-art performance.
Abstract: Attention mechanisms are widely used in current encoder/decoder frameworks of image captioning, where a weighted average on encoded vectors is generated at each time step to guide the caption decoding process. However, the decoder has little idea of whether or how well the attended vector and the given attention query are related, which could make the decoder give misled results. In this paper, we propose an Attention on Attention (AoA) module, which extends the conventional attention mechanisms to determine the relevance between attention results and queries. AoA first generates an information vector and an attention gate using the attention result and the current context, then adds another attention by applying element-wise multiplication to them and finally obtains the attended information, the expected useful knowledge. We apply AoA to both the encoder and the decoder of our image captioning model, which we name as AoA Network (AoANet). Experiments show that AoANet outperforms all previously published methods and achieves a new state-of-the-art performance of 129.8 CIDEr-D score on MS COCO Karpathy offline test split and 129.6 CIDEr-D (C40) score on the official online testing server. Code is available at https://github.com/husthuaan/AoANet.
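The AoA computation described here (an information vector and a sigmoid gate, both conditioned on the attention result and the current context, combined element-wise) is compact enough to sketch directly; dimensions below are illustrative:

```python
# Sketch of the Attention on Attention (AoA) module as described in
# the abstract: an information vector and a sigmoid attention gate are
# computed from the attention result and the query, then multiplied
# element-wise to yield the attended information.
import torch
import torch.nn as nn

class AoA(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.info = nn.Linear(2 * dim, dim)   # information vector
        self.gate = nn.Linear(2 * dim, dim)   # attention gate

    def forward(self, attn_result, query):
        x = torch.cat([attn_result, query], dim=-1)
        i = self.info(x)
        g = torch.sigmoid(self.gate(x))
        return g * i                          # attended information

aoa = AoA(dim=512)
out = aoa(torch.randn(8, 512), torch.randn(8, 512))
```

In the paper this module wraps attention in both the encoder and the decoder; the official code linked in the abstract is the authoritative reference.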

Journal ArticleDOI
TL;DR: This work presents a tunable RT skyrmion platform based on multilayer stacks of Ir/Fe/Co/Pt, establishing a platform for investigating functional sub-50-nm RT skyrmions and pointing towards the development of skyrmion-based memory devices.
Abstract: Magnetic skyrmions are nanoscale topological spin structures offering great promise for next-generation information storage technologies. The recent discovery of sub-100-nm room-temperature (RT) skyrmions in several multilayer films has triggered vigorous efforts to modulate their physical properties for their use in devices. Here we present a tunable RT skyrmion platform based on multilayer stacks of Ir/Fe/Co/Pt, which we study using X-ray microscopy, magnetic force microscopy and Hall transport techniques. By varying the ferromagnetic layer composition, we can tailor the magnetic interactions governing skyrmion properties, thereby tuning their thermodynamic stability parameter by an order of magnitude. The skyrmions exhibit a smooth crossover between isolated (metastable) and disordered lattice configurations across samples, while their size and density can be tuned by factors of two and ten, respectively. We thus establish a platform for investigating functional sub-50-nm RT skyrmions, pointing towards the development of skyrmion-based memory devices.

Journal ArticleDOI
TL;DR: Treatment with rosuvastatin at a dose of 10 mg per day resulted in a significantly lower risk of cardiovascular events than placebo in an intermediate-risk, ethnically diverse population without cardiovascular disease.
Abstract: BackgroundPrevious trials have shown that the use of statins to lower cholesterol reduces the risk of cardiovascular events among persons without cardiovascular disease. Those trials have involved persons with elevated lipid levels or inflammatory markers and involved mainly white persons. It is unclear whether the benefits of statins can be extended to an intermediate-risk, ethnically diverse population without cardiovascular disease. MethodsIn one comparison from a 2-by-2 factorial trial, we randomly assigned 12,705 participants in 21 countries who did not have cardiovascular disease and were at intermediate risk to receive rosuvastatin at a dose of 10 mg per day or placebo. The first coprimary outcome was the composite of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke, and the second coprimary outcome additionally included revascularization, heart failure, and resuscitated cardiac arrest. The median follow-up was 5.6 years. ResultsThe overall mean low-density lipop...

Journal ArticleDOI
TL;DR: It is shown that algorithmic control is central to the operation of online labour platforms and can result in low pay, social isolation, working unsocial and irregular hours, overwork, sleep deprivation and exhaustion.
Abstract: This article evaluates the job quality of work in the remote gig economy. Such work consists of the remote provision of a wide variety of digital services mediated by online labour platforms. Focusing on workers in Southeast Asia and Sub-Saharan Africa, the article draws on semi-structured interviews in six countries (N = 107) and a cross-regional survey (N = 679) to detail the manner in which remote gig work is shaped by platform-based algorithmic control. Despite varying country contexts and types of work, we show that algorithmic control is central to the operation of online labour platforms. Algorithmic management techniques tend to offer workers high levels of flexibility, autonomy, task variety and complexity. However, these mechanisms of control can also result in low pay, social isolation, working unsocial and irregular hours, overwork, sleep deprivation and exhaustion.

Journal ArticleDOI
05 May 2020-PLOS ONE
TL;DR: This work describes and validates a simple-to-apply method for assessing and reporting on saturation in the context of inductive thematic analyses and proposes a more flexible approach to reporting saturation.
Abstract: Data saturation is the most commonly employed concept for estimating sample sizes in qualitative research. Over the past 20 years, scholars using both empirical research and mathematical/statistical models have made significant contributions to the question: How many qualitative interviews are enough? This body of work has advanced the evidence base for sample size estimation in qualitative inquiry during the design phase of a study, prior to data collection, but it does not provide qualitative researchers with a simple and reliable way to determine the adequacy of sample sizes during and/or after data collection. Using the principle of saturation as a foundation, we describe and validate a simple-to-apply method for assessing and reporting on saturation in the context of inductive thematic analyses. Following a review of the empirical research on data saturation and sample size estimation in qualitative research, we propose an alternative way to evaluate saturation that overcomes the shortcomings and challenges associated with existing methods identified in our review. Our approach includes three primary elements in its calculation and assessment: Base Size, Run Length, and New Information Threshold. We additionally propose a more flexible approach to reporting saturation. To validate our method, we use a bootstrapping technique on three existing thematically coded qualitative datasets generated from in-depth interviews. Results from this analysis indicate the method we propose to assess and report on saturation is feasible and congruent with findings from earlier studies.
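The three elements named here suggest simple bookkeeping: count new themes per interview, fix a base size, and stop once a run of interviews adds at most a threshold proportion of new information relative to the base. The sketch below is one hedged reading of that procedure; the default parameter values are illustrative choices, not the article's prescriptions:

```python
# Sketch of the saturation bookkeeping described above: compare the
# number of new themes in a "run" of interviews against the themes
# identified in an initial "base" set of interviews.
def saturation_reached(new_themes_per_interview,
                       base_size=4, run_length=2, threshold=0.05):
    base = sum(new_themes_per_interview[:base_size])
    if base == 0:
        return False
    # Slide a window of `run_length` interviews after the base and stop
    # when the proportion of new information falls to the threshold.
    for start in range(base_size,
                       len(new_themes_per_interview) - run_length + 1):
        run = sum(new_themes_per_interview[start:start + run_length])
        if run / base <= threshold:
            return True
    return False

# e.g. new themes identified in each successive interview:
print(saturation_reached([7, 4, 3, 2, 1, 0, 0, 1, 0, 0]))  # True
```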

Proceedings ArticleDOI
03 Apr 2017
TL;DR: In this paper, the authors introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates and then propose intuitive measures of disparate mistreating for decision boundary-based classifiers.
Abstract: Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
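Measuring disparate mistreatment starts from group-wise misclassification rates. A minimal sketch (synthetic arrays, no claim to reproduce the paper's convex-concave training constraints) compares false positive and false negative rates across a binary sensitive attribute:

```python
# Sketch of the misclassification-rate comparison behind "disparate
# mistreatment": a classifier mistreats groups disparately when its
# false positive or false negative rates differ across a sensitive
# attribute. All arrays here are synthetic.
import numpy as np

def rate_gaps(y_true, y_pred, group):
    gaps = {}
    for name, rate in [("FPR", lambda t, p: np.mean(p[t == 0] == 1)),
                       ("FNR", lambda t, p: np.mean(p[t == 1] == 0))]:
        r0 = rate(y_true[group == 0], y_pred[group == 0])
        r1 = rate(y_true[group == 1], y_pred[group == 1])
        gaps[name] = abs(r0 - r1)   # absolute rate gap between groups
    return gaps

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(rate_gaps(y_true, y_pred, group))  # {'FPR': 0.5, 'FNR': 0.5}
```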

Journal ArticleDOI
TL;DR: In this article, precise and accurate parameters for late-type (late K and M) dwarf stars are important for characterization of any orbiting planets, but such determinations have been hampered by these stars' compl...
Abstract: Precise and accurate parameters for late-type (late K and M) dwarf stars are important for characterization of any orbiting planets, but such determinations have been hampered by these stars' compl ...

Journal ArticleDOI
14 Jul 2016-Nature
TL;DR: A Climate Sensitivity Profile approach is applied to 10,003 terrestrial and aquatic phenological data sets, spatially matched to temperature and precipitation data, to quantify variation in climate sensitivity, detecting systematic variation in the direction and magnitude of phenological climate sensitivity.
Abstract: Differences in phenological responses to climate change among species can desynchronise ecological interactions and thereby threaten ecosystem function. To assess these threats, we must quantify the relative impact of climate change on species at different trophic levels. Here, we apply a Climate Sensitivity Profile approach to 10,003 terrestrial and aquatic phenological data sets, spatially matched to temperature and precipitation data, to quantify variation in climate sensitivity. The direction, magnitude and timing of climate sensitivity varied markedly among organisms within taxonomic and trophic groups. Despite this variability, we detected systematic variation in the direction and magnitude of phenological climate sensitivity. Secondary consumers showed consistently lower climate sensitivity than other groups. We used mid-century climate change projections to estimate that the timing of phenological events could change more for primary consumers than for species in other trophic levels (6.2 versus 2.5–2.9 days earlier on average), with substantial taxonomic variation (1.1–14.8 days earlier on average).

Journal ArticleDOI
TL;DR: This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts, and concludes that the safest approach is to correct the measured EEG using independent component analysis, to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI).
Abstract: This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis, to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher order statistics. All of these algorithms have proved particularly effective with simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, then a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since which algorithm is the best performing is highly dependent on the type of the EEG signal, the artifacts and the signal to contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists in combining more than one algorithm to correct the signal using multiple processing stages, even though this is an option largely unexplored by researchers in the area.
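As a concrete, hedged illustration of ICA-based correction: scikit-learn ships FastICA rather than the SOBI, InfoMax, or AMICA algorithms the review recommends, so FastICA stands in below purely to show the unmix/zero/reconstruct pattern on synthetic channels:

```python
# Minimal ICA-based artifact correction sketch in the spirit of the
# review. FastICA is used only because it is readily available in
# scikit-learn; it is not one of the algorithms the review recommends.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 5000
eeg = rng.normal(size=(n_samples, n_channels))        # pretend recording
eeg[:, 0] += 5 * np.sign(np.sin(np.linspace(0, 20, n_samples)))  # "blink"

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg)                      # unmix into components

# Zero out the component with the largest amplitude range (a crude
# stand-in for proper artifact identification), then reconstruct.
bad = np.argmax(sources.max(axis=0) - sources.min(axis=0))
sources[:, bad] = 0.0
cleaned = ica.inverse_transform(sources)
```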

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A new benchmark is defined by unifying both the evaluation protocols and data splits for zero-shot learning, and a significant number of the state-of-the-art methods are compared and analyzed in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting.
Abstract: Due to the importance of zero-shot learning, the number of proposed approaches has increased steadily recently. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss limitations of the current status of the area which can be taken as a basis for advancing it.
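Generalized zero-shot results in benchmarks of this kind are commonly summarized by per-class averaged top-1 accuracy on seen and unseen classes and their harmonic mean, which penalizes models that collapse onto seen classes. A small sketch of that bookkeeping (illustrative labels only):

```python
# Sketch of the evaluation bookkeeping used in generalized zero-shot
# benchmarks: per-class averaged top-1 accuracy on seen and unseen
# classes, and their harmonic mean.
import numpy as np

def per_class_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    accs = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(accs))   # average over classes, not samples

def harmonic_mean(acc_seen, acc_unseen):
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

acc_s = per_class_accuracy(np.array([0, 0, 1]), np.array([0, 1, 1]))
acc_u = per_class_accuracy(np.array([2, 2, 3]), np.array([2, 3, 3]))
print(harmonic_mean(acc_s, acc_u))  # 0.75
```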

Journal ArticleDOI
TL;DR: In this article, the authors show that the envelope erosion timescale is the longest for those planets with hydrogen/helium-rich envelopes that, while only a few percent in weight, double the planet's radius.
Abstract: A new piece of evidence supporting the photoevaporation-driven evolution model for low-mass, close-in exoplanets was recently presented by the California-Kepler Survey. The radius distribution of the Kepler planets is shown to be bimodal, with a "valley" separating two peaks at 1.3 and 2.6 Earth radii. Such an "evaporation valley" had been predicted by numerical models previously. Here, we develop a minimal model to demonstrate that this valley results from the following fact: the timescale for envelope erosion is the longest for those planets with hydrogen/helium-rich envelopes that, while only a few percent in weight, double the planet's radius. The timescale falls for envelopes lighter than this because the planet's radius remains largely constant for tenuous envelopes. The timescale also drops for heavier envelopes because the planet swells up faster than the addition of envelope mass. Photoevaporation therefore herds planets into either bare cores of ~1.3 Earth radii, or those with double the core's radius (~2.6 Earth radii). This process mostly occurs during the first 100 Myr, when the stars' high-energy flux is high and nearly constant. The observed radius distribution further requires that the Kepler planets are clustered around 3 Earth masses, are born with H/He envelopes more than a few percent in mass, and that their cores are similar to the Earth in composition. Such envelopes must have been accreted before the dispersal of the gas disks, while the core composition indicates formation inside the ice line. Lastly, the photoevaporation model fails to account for bare planets beyond ~30-60 days; if these planets are abundant, they may point to a significant second channel for planet formation, resembling the Solar System terrestrial planets.
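The timescale argument can be made concrete with the standard energy-limited mass-loss estimate; the relations below are the textbook approximation, not formulas quoted from this abstract:

```latex
% Standard energy-limited approximation (not quoted from this paper):
% mass-loss rate driven by the stellar high-energy luminosity L_HE at
% orbital distance a, for a planet of mass M_p and radius R_p, with
% heating efficiency epsilon; t_loss is the envelope erosion timescale.
\dot{M} \simeq \frac{\epsilon\, L_{\mathrm{HE}}\, R_p^{3}}{4\, G\, M_p\, a^{2}},
\qquad
t_{\mathrm{loss}} \simeq \frac{M_{\mathrm{env}}}{\dot{M}}
  = \frac{4\, G\, M_p\, a^{2}\, M_{\mathrm{env}}}{\epsilon\, L_{\mathrm{HE}}\, R_p^{3}}.
```

Because R_p stays nearly constant for tenuous envelopes while M_env shrinks, and R_p grows faster than M_env for heavy envelopes, t_loss peaks for envelopes that roughly double the core's radius, which is the bimodality mechanism the abstract describes.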

Journal ArticleDOI
TL;DR: In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.
Abstract: The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.
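A toy example makes the cost of the false stationarity assumption tangible: even a crude two-window mean comparison flags the kind of distribution shift that would silently degrade a non-adaptive model. The detector below is didactic only, not a method proposed in this article:

```python
# Toy illustration of why adaptation matters in a nonstationary stream:
# a simple two-window test flags a change when the mean of recent data
# drifts away from a frozen reference window.
import random
from collections import deque

def detect_drift(stream, window=50, threshold=0.5):
    reference, recent = deque(maxlen=window), deque(maxlen=window)
    for t, x in enumerate(stream):
        if len(reference) < window:
            reference.append(x)       # fill the reference window first
            continue
        recent.append(x)
        if len(recent) == window:
            gap = abs(sum(recent) / window - sum(reference) / window)
            if gap > threshold:
                return t              # index where drift is flagged
    return None

random.seed(0)
stream = ([random.gauss(0, 1) for _ in range(500)]
          + [random.gauss(2, 1) for _ in range(500)])  # mean shifts at 500
print(detect_drift(stream))           # flags a point soon after index 500
```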

Journal ArticleDOI
TL;DR: This tutorial review discusses the increasing trend to exploit the large magnetic moments and anisotropies of f-element ions in molecular nanomagnets, and presents a critical discussion of key parameters to be optimised.
Abstract: Ever since the discovery that certain manganese clusters retain their magnetisation for months at low temperatures, there has been intense interest in molecular nanomagnets because of potential applications in data storage, spintronics, quantum computing, and magnetocaloric cooling. In this Tutorial Review, we summarise some key historical developments, and centre our discussion principally on the increasing trend to exploit the large magnetic moments and anisotropies of f-element ions. We focus on the important theme of strategies to improve these systems with the ultimate aim of developing materials for ultra-high-density data storage devices. We present a critical discussion of key parameters to be optimised, as well as of experimental and theoretical techniques to be used to this end.
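One key parameter such optimization efforts target is the effective barrier to magnetization reversal. In the simplest giant-spin picture, the textbook relations (not formulas quoted from this review) are:

```latex
% Textbook giant-spin relations (not quoted from this review): the
% effective reversal barrier from the axial anisotropy D and total
% spin S, and the Arrhenius law for the magnetization relaxation time.
U_{\mathrm{eff}} = |D|\, S^{2} \;\; (\text{integer } S), \qquad
U_{\mathrm{eff}} = |D|\left( S^{2} - \tfrac{1}{4} \right) \;\; (\text{half-integer } S),
\qquad
\tau = \tau_{0} \exp\!\left( \frac{U_{\mathrm{eff}}}{k_{B} T} \right).
```

The shift toward f-element ions discussed above is motivated precisely by their large single-ion anisotropies, which raise U_eff and hence the blocking temperature.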

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively is proposed, and a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection.
Abstract: In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture.
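The multi-task objective described here decomposes naturally into three terms: a binary cross-entropy loss on the rain-streak location map plus reconstruction losses on the streak layer and the clean background. The sketch below is an illustrative PyTorch rendering with made-up weights and shapes, not the authors' training code:

```python
# Sketch of the multi-task objective the abstract describes: the
# network jointly predicts a binary rain-streak map, the streak
# appearance layer, and the clean background. Loss weights and tensor
# shapes are illustrative, not the authors' settings.
import torch
import torch.nn.functional as F

def rain_removal_loss(pred_map, true_map,        # binary streak locations
                      pred_streak, true_streak,  # streak appearance layer
                      pred_bg, true_bg,          # clean background
                      w_map=1.0, w_streak=1.0, w_bg=1.0):
    l_map = F.binary_cross_entropy_with_logits(pred_map, true_map)
    l_streak = F.mse_loss(pred_streak, true_streak)
    l_bg = F.mse_loss(pred_bg, true_bg)
    return w_map * l_map + w_streak * l_streak + w_bg * l_bg

b, c, h, w = 2, 3, 64, 64
loss = rain_removal_loss(torch.randn(b, 1, h, w), torch.rand(b, 1, h, w).round(),
                         torch.randn(b, c, h, w), torch.randn(b, c, h, w),
                         torch.randn(b, c, h, w), torch.randn(b, c, h, w))
```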