scispace - formally typeset

Journal ArticleDOI
TL;DR: In this paper, the authors review and evaluate key contributions to the understanding, performance effects, and mitigation of power loss due to soiling on a solar panel, and present several cleaning methods to prevent dust accumulation on the surface of solar arrays.
Abstract: The power output delivered from a photovoltaic module highly depends on the amount of irradiance, which reaches the solar cells. Many factors determine the ideal output or optimum yield in a photovoltaic module. However, the environment is one of the contributing parameters which directly affect the photovoltaic performance. The authors review and evaluate key contributions to the understanding, performance effects, and mitigation of power loss due to soiling on a solar panel. Electrical characteristics of PV (voltage and current) are discussed with respect to shading due to soiling. Shading due to soiling is divided into two categories, namely, soft shading such as air pollution, and hard shading, which occurs when a solid such as accumulated dust blocks the sunlight. The results show that soft shading affects the current provided by the PV module, but the voltage remains the same. In hard shading, the performance of the PV module depends on whether some cells are shaded or all cells of the PV module are shaded. If some cells are shaded, then as long as the unshaded cells receive solar irradiance, there will be some output, although there will be a decrease in the voltage output of the PV module. This study also presents several cleaning methods to prevent dust accumulation on the surface of solar arrays.
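The soft- versus hard-shading distinction above can be made concrete with a toy model. This is an illustrative sketch, not the authors' model: it assumes module current scales with irradiance while shaded cells simply drop out of the series string's voltage (bypass diodes are ignored), and all parameter values are invented.

```python
# Toy shading model: soft shading scales irradiance (current drops,
# voltage roughly unchanged); hard shading removes shaded cells'
# voltage contribution from the series string.

def module_output(i_sc=9.0, v_oc=40.0, irradiance_frac=1.0,
                  cells=60, shaded_cells=0):
    """Return (current, voltage) under a crude shading model."""
    current = i_sc * irradiance_frac                  # current tracks irradiance
    voltage = v_oc * (cells - shaded_cells) / cells   # unshaded cells still contribute
    return current, voltage

# Soft shading at 50% irradiance: current halves, voltage unchanged.
soft = module_output(irradiance_frac=0.5)
# Hard shading of 10 of 60 cells: voltage drops, current unchanged.
hard = module_output(shaded_cells=10)
```

The asymmetry in the two outputs mirrors the abstract's claim: soft shading moves the current, hard shading moves the voltage.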

628 citations


Journal ArticleDOI
07 Oct 2016-Science
TL;DR: A mesoscale “assembly-and-mineralization” approach inspired by the natural process in mollusks is described to fabricate bulk synthetic nacre that highly resembles both the chemical composition and the hierarchical structure of natural nacre.
Abstract: Although biomimetic designs are expected to play a key role in exploring future structural materials, facile fabrication of bulk biomimetic materials under ambient conditions remains a major challenge. Here, we describe a mesoscale “assembly-and-mineralization” approach inspired by the natural process in mollusks to fabricate bulk synthetic nacre that highly resembles both the chemical composition and the hierarchical structure of natural nacre. The millimeter-thick synthetic nacre consists of alternating organic layers and aragonite platelet layers (91 weight percent) and exhibits good ultimate strength and fracture toughness. This predesigned matrix-directed mineralization method represents a rational strategy for the preparation of robust composite materials with hierarchically ordered structures, where various constituents are adaptable, including brittle and heat-labile materials.

628 citations


Journal ArticleDOI
24 Nov 2017-Science
TL;DR: Microstructured three-dimensional elastic chiral mechanical metamaterials are realized that supply the twist degree of freedom unavailable in ordinary elastic materials, enabling mode conversion and the realization of advanced mechanical designs using coordinate transformations.
Abstract: Rationally designed artificial materials enable mechanical properties that are inaccessible with ordinary materials. Pushing on an ordinary linearly elastic bar can cause it to be deformed in many ways. However, a twist, the counterpart of optical activity in the static case, is strictly zero. The unavailability of this degree of freedom hinders applications in terms of mode conversion and the realization of advanced mechanical designs using coordinate transformations. Here, we aim at realizing microstructured three-dimensional elastic chiral mechanical metamaterials that overcome this limitation. On overall millimeter-sized samples, we measure twists per axial strain exceeding 2°/%. Scaling up the number of unit cells for fixed sample dimensions, the twist is robust due to metamaterial stiffening, indicating a characteristic length scale and bringing the aforementioned applications into reach.

628 citations


Journal ArticleDOI
TL;DR: The concept of the tumor immunity continuum is proposed as a framework for developing rational combination strategies for anticancer immune therapies based on current understanding of tumor and circulating pharmacodynamic correlates of immune modulation.
Abstract: Clinical trials with immune checkpoint inhibitors have provided important insights into the mode of action of anticancer immune therapies and potential mechanisms of immune escape. Development of the next wave of rational clinical combination strategies will require a deep understanding of the mechanisms by which combination partners influence the battle between the immune system's capabilities to fight cancer and the immune-suppressive processes that promote tumor growth. This review focuses on our current understanding of tumor and circulating pharmacodynamic correlates of immune modulation and elaborates on lessons learned from human translational research with checkpoint inhibitors. Actionable tumor markers of immune activation including CD8(+)T cells, PD-L1 IHC as a pharmacodynamic marker of T-cell function, T-cell clonality, and challenges with conduct of trials that ask scientific questions from serial biopsies are addressed. Proposals for clinical trial design, as well as future applications of peripheral pharmacodynamic endpoints as potential surrogates of early clinical activity, are discussed. On the basis of emerging mechanisms of response and immune escape, we propose the concept of the tumor immunity continuum as a framework for developing rational combination strategies.

628 citations


Journal ArticleDOI
TL;DR: A package written in Python/C++ that has been designed to minimize the effort required to build deep-learning-based representations of potential energy and force fields and to perform molecular dynamics; it is demonstrated that the resulting molecular dynamics model accurately reproduces the structural information contained in the original model.

628 citations


Proceedings Article
05 Dec 2016
TL;DR: This paper showed that residual networks can be seen as a collection of many paths of differing lengths, and that these paths enable very deep networks by leveraging only the short paths during training, as longer paths do not contribute any gradient.
Abstract: In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
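The "collection of paths" view lends itself to a quick calculation. The sketch below is illustrative, not the authors' code: with n residual blocks, each path either takes or skips a block, so the number of paths of length k is C(n, k), a binomial distribution concentrated near n/2 — yet, per the paper, gradient comes mostly from the much shorter paths.

```python
from math import comb

def path_length_counts(n_blocks):
    """Number of paths touching exactly k of n residual blocks, for all k."""
    return [comb(n_blocks, k) for k in range(n_blocks + 1)]

counts = path_length_counts(54)      # a ~110-layer ResNet has ~54 blocks
total = sum(counts)                  # 2**54 paths in total

# Fraction of paths of length <= 34 (roughly the 10-34 range the
# paper reports as carrying most of the gradient):
frac_short = sum(counts[:35]) / total
```

Because the binomial mass sits near 27 with a small standard deviation, almost all paths are far shorter than the nominal 110-layer depth, which is the quantitative core of the paper's argument.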

628 citations


Posted ContentDOI
TL;DR: The whole word masking (wwm) strategy for Chinese BERT is introduced, along with a series of Chinese pre-trained language models, and a simple but effective model called MacBERT is proposed, which improves upon RoBERTa in several ways.
Abstract: Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT has been released with Whole Word Masking (WWM), which mitigates the drawbacks of masking partial WordPiece tokens in pre-training BERT. In this technical report, we adapt whole word masking to Chinese text, masking whole words instead of individual Chinese characters, which brings a new challenge to the Masked Language Model (MLM) pre-training task. The proposed models are verified on various NLP tasks, across sentence-level to document-level, including machine reading comprehension (CMRC 2018, DRCD, CJRC), natural language inference (XNLI), sentiment classification (ChnSentiCorp), sentence pair matching (LCQMC, BQ Corpus), and document classification (THUCNews). Experimental results on these datasets show that whole word masking brings another significant gain. Moreover, we also examine the effectiveness of the Chinese pre-trained models: BERT, ERNIE, BERT-wwm, BERT-wwm-ext, RoBERTa-wwm-ext, and RoBERTa-wwm-ext-large. We release all the pre-trained models: \url{this https URL}
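The masking change the abstract describes can be sketched in a few lines. This is a hedged illustration, not the released implementation: it assumes word segmentation has already been done and uses a toy masking probability.

```python
import random

def whole_word_mask(words, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """If a word is chosen for masking, mask ALL of its characters,
    rather than masking individual characters independently."""
    rng = random.Random(seed)
    out = []
    for word in words:                     # each word is a string of characters
        if rng.random() < mask_prob:
            out.extend(mask_token for _ in word)   # mask the whole word
        else:
            out.extend(word)               # keep each character as a token
    return out

tokens = whole_word_mask(["使用", "语言", "模型"], mask_prob=0.5, seed=1)
```

The invariant to note: masked characters always appear in whole-word groups, which is exactly what makes the MLM task harder than character-level masking.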

628 citations


Journal ArticleDOI
TL;DR: This paper puts forward a practical approach to confounder selection decisions when the somewhat less stringent assumption is made that knowledge is available for each covariate whether it is a cause of the exposure, and whether it is a cause of the outcome.
Abstract: Selecting an appropriate set of confounders for which to control is critical for reliable causal inference. Recent theoretical and methodological developments have helped clarify a number of principles of confounder selection. When complete knowledge of a causal diagram relating all covariates to each other is available, graphical rules can be used to make decisions about covariate control. Unfortunately, such complete knowledge is often unavailable. This paper puts forward a practical approach to confounder selection decisions when the somewhat less stringent assumption is made that knowledge is available for each covariate whether it is a cause of the exposure, and whether it is a cause of the outcome. Based on recent theoretically justified developments in the causal inference literature, the following proposal is made for covariate control decisions: control for each covariate that is a cause of the exposure, or of the outcome, or of both; exclude from this set any variable known to be an instrumental variable; and include as a covariate any proxy for an unmeasured variable that is a common cause of both the exposure and the outcome. Various principles of confounder selection are then further related to statistical covariate selection methods.
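The covariate-control proposal in the abstract is essentially a decision rule, which can be sketched directly. The field names below are illustrative assumptions, not from the paper.

```python
def select_confounders(covariates):
    """Apply the rule stated above.

    covariates: dict name -> dict of boolean flags
    'causes_exposure', 'causes_outcome', 'instrument', 'proxy_common_cause'.
    """
    selected = set()
    for name, c in covariates.items():
        # Control for causes of the exposure, the outcome, or both.
        if c.get("causes_exposure") or c.get("causes_outcome"):
            selected.add(name)
        # Include proxies for unmeasured common causes.
        if c.get("proxy_common_cause"):
            selected.add(name)
    # Exclude any known instrumental variable from the control set.
    return {n for n in selected if not covariates[n].get("instrument")}

example = {
    "age":       {"causes_exposure": True, "causes_outcome": True},
    "lottery":   {"causes_exposure": True, "instrument": True},
    "biomarker": {"proxy_common_cause": True},
}
controls = select_confounders(example)
```

With these toy inputs, `age` and `biomarker` are controlled while the instrument `lottery` is dropped, matching the three clauses of the proposal.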

628 citations


Journal ArticleDOI
21 Feb 2018-Nature
TL;DR: The seamless integration of a memristor and transistor into one multi-terminal device could enable complex neuromorphic learning and the study of the physics of defect kinetics in two-dimensional materials.
Abstract: Memristors are two-terminal passive circuit elements that have been developed for use in non-volatile resistive random-access memory and may also be useful in neuromorphic computing. Memristors have higher endurance and faster read/write times than flash memory and can provide multi-bit data storage. However, although two-terminal memristors have demonstrated capacity for basic neural functions, synapses in the human brain outnumber neurons by more than a thousandfold, which implies that multi-terminal memristors are needed to perform complex functions such as heterosynaptic plasticity. Previous attempts to move beyond two-terminal memristors, such as the three-terminal Widrow-Hoff memristor and field-effect transistors with nanoionic gates or floating gates, did not achieve memristive switching in the transistor. Here we report the experimental realization of a multi-terminal hybrid memristor and transistor (that is, a memtransistor) using polycrystalline monolayer molybdenum disulfide (MoS2) in a scalable fabrication process. The two-dimensional MoS2 memtransistors show gate tunability in individual resistance states by four orders of magnitude, as well as large switching ratios, high cycling endurance and long-term retention of states. In addition to conventional neural learning behaviour of long-term potentiation/depression, six-terminal MoS2 memtransistors have gate-tunable heterosynaptic functionality, which is not achievable using two-terminal memristors. For example, the conductance between a pair of floating electrodes (pre- and post-synaptic neurons) is varied by a factor of about ten by applying voltage pulses to modulatory terminals. In situ scanning probe microscopy, cryogenic charge transport measurements and device modelling reveal that the bias-induced motion of MoS2 defects drives resistive switching by dynamically varying Schottky barrier heights. 
Overall, the seamless integration of a memristor and transistor into one multi-terminal device could enable complex neuromorphic learning and the study of the physics of defect kinetics in two-dimensional materials.

628 citations


Posted Content
TL;DR: BinaryConnect as discussed by the authors proposes to train a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated, and obtain near state-of-the-art results on the permutation-invariant MNIST, CIFAR-10 and SVHN.
Abstract: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.
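The core BinaryConnect trick — binary weights in the forward pass, full-precision weights for accumulating updates — can be sketched in pure Python. This is an illustrative toy, not the authors' implementation; the gradient is a stand-in value.

```python
def binarize(w):
    """Deterministic sign binarization to +/-1."""
    return [1.0 if wi >= 0 else -1.0 for wi in w]

def clip(w, lo=-1.0, hi=1.0):
    """Keep the real-valued weights in [-1, 1], as in the paper."""
    return [min(hi, max(lo, wi)) for wi in w]

w_real = [0.3, -0.2, 0.05, -0.9]   # full-precision "accumulator" weights
x = [1.0, 2.0, -1.0, 0.5]

w_bin = binarize(w_real)           # forward pass uses binary weights,
y = sum(xi * wi for xi, wi in zip(x, w_bin))  # turning multiplies into +/- adds

grad = [0.1, -0.4, 0.2, 0.3]       # stand-in for a backprop gradient
lr = 0.5
# ...but the update accumulates into the full-precision weights.
w_real = clip([wi - lr * gi for wi, gi in zip(w_real, grad)])
```

The point of keeping `w_real` is that many small gradient steps can accumulate and eventually flip a binary weight's sign, which pure binary storage could not represent.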

628 citations


Journal ArticleDOI
TL;DR: A combination of photoemission and scanning tunnelling spectroscopy measurements provides compelling evidence that single layers of 1T'-WTe2 are a new class of quantum spin Hall insulator, as mentioned in this paper.
Abstract: A combination of photoemission and scanning tunnelling spectroscopy measurements provide compelling evidence that single layers of 1T'-WTe2 are a class of quantum spin Hall insulator. A quantum spin Hall (QSH) insulator is a novel two-dimensional quantum state of matter that features quantized Hall conductance in the absence of a magnetic field, resulting from topologically protected dissipationless edge states that bridge the energy gap opened by band inversion and strong spin–orbit coupling1,2. By investigating the electronic structure of epitaxially grown monolayer 1T'-WTe2 using angle-resolved photoemission (ARPES) and first-principles calculations, we observe clear signatures of topological band inversion and bandgap opening, which are the hallmarks of a QSH state. Scanning tunnelling microscopy measurements further confirm the correct crystal structure and the existence of a bulk bandgap, and provide evidence for a modified electronic structure near the edge that is consistent with the expectations for a QSH insulator. Our results establish monolayer 1T'-WTe2 as a new class of QSH insulator with large bandgap in a robust two-dimensional materials family of transition metal dichalcogenides (TMDCs).

Journal ArticleDOI
TL;DR: The literature on the prospective association between smoking and depression and anxiety is inconsistent in terms of the direction of association most strongly supported, suggesting the need for future studies that employ different methodologies, such as Mendelian randomization (MR), which will allow for stronger causal inferences.
Abstract: Background Many studies report a positive association between smoking and mental illness. However, the literature remains mixed regarding the direction of this association. We therefore conducted a systematic review evaluating the association of smoking and depression and/or anxiety in longitudinal studies. Methods Studies were identified by searching PubMed, Scopus, and Web of Science and were included if they: (1) used human participants, (2) were longitudinal, (3) reported primary data, (4) had smoking as an exposure and depression and/or anxiety as an outcome, or (5) had depression and/or anxiety as the exposure and smoking as an outcome. Results Outcomes from 148 studies were categorized into: smoking onset, smoking status, smoking heaviness, tobacco dependence, and smoking trajectory. The results for each category varied substantially, with evidence for positive associations in both directions (smoking to later mental health and mental health to later smoking) as well as null findings. Overall, nearly half the studies reported that baseline depression/anxiety was associated with some type of later smoking behavior, while over a third found evidence that a smoking exposure was associated with later depression/anxiety. However, there were few studies directly supporting a bidirectional model of smoking and anxiety, and very few studies reporting null results. Conclusions The literature on the prospective association between smoking and depression and anxiety is inconsistent in terms of the direction of association most strongly supported. This suggests the need for future studies that employ different methodologies, such as Mendelian randomization (MR), which will allow us to draw stronger causal inferences. Implications We systematically reviewed longitudinal studies on the association of different aspects of smoking behavior with depression and anxiety. 
The results varied considerably, with evidence both for smoking being associated with subsequent depression and anxiety, and vice versa. Few studies supported a bidirectional relationship or reported null results, and no clear patterns emerged by gender, ethnicity, clinical status, length of follow-up, or diagnostic test. This suggests that, despite the advantages of longitudinal studies, they cannot alone provide strong evidence of causality. Therefore, future studies investigating this association should employ different methods allowing for stronger causal inferences to be made, such as MR.

Journal ArticleDOI
TL;DR: An action framework for countries with low tuberculosis (TB) incidence sets out priority interventions required for these countries to progress first towards “pre-elimination” and eventually the elimination of TB as a public health problem.
Abstract: This paper describes an action framework for countries with low tuberculosis (TB) incidence (<100 TB cases per million population) that are striving for TB elimination. The framework sets out priority interventions required for these countries to progress first towards "pre-elimination" (<10 cases per million) and eventually the elimination of TB as a public health problem (less than one case per million). TB epidemiology in most low-incidence countries is characterised by a low rate of transmission in the general population, occasional outbreaks, a majority of TB cases generated from progression of latent TB infection (LTBI) rather than local transmission, concentration to certain vulnerable and hard-to-reach risk groups, and challenges posed by cross-border migration. Common health system challenges are that political commitment, funding, clinical expertise and general awareness of TB diminishes as TB incidence falls. The framework presents a tailored response to these challenges, grouped into eight priority action areas: 1) ensure political commitment, funding and stewardship for planning and essential services; 2) address the most vulnerable and hard-to-reach groups; 3) address special needs of migrants and cross-border issues; 4) undertake screening for active TB and LTBI in TB contacts and selected high-risk groups, and provide appropriate treatment; 5) optimise the prevention and care of drug-resistant TB; 6) ensure continued surveillance, programme monitoring and evaluation and case-based data management; 7) invest in research and new tools; and 8) support global TB prevention, care and control. The overall approach needs to be multisectorial, focusing on equitable access to high-quality diagnosis and care, and on addressing the social determinants of TB. Because of increasing globalisation and population mobility, the response needs to have both national and global dimensions.

Journal ArticleDOI
26 Oct 2018-Science
TL;DR: It is found that a metal-organic framework containing iron-peroxo sites binds ethane more strongly than ethylene and could be used to separate the gases under ambient conditions, demonstrating the potential of Fe2(O2)(dobdc) for this important industrial separation at low energy cost.
Abstract: The separation of ethane from its corresponding ethylene is an important, challenging, and energy-intensive process in the chemical industry. Here we report a microporous metal-organic framework, iron(III) peroxide 2,5-dioxido-1,4-benzenedicarboxylate [Fe2(O2)(dobdc); dobdc4−: 2,5-dioxido-1,4-benzenedicarboxylate], with iron (Fe)-peroxo sites for the preferential binding of ethane over ethylene and thus highly selective separation of C2H6/C2H4. Neutron powder diffraction studies and theoretical calculations demonstrate the key role of Fe-peroxo sites in the recognition of ethane. The high performance of Fe2(O2)(dobdc) for the ethane/ethylene separation has been validated by gas sorption isotherms, ideal adsorbed solution theory calculations, and simulated and experimental breakthrough curves. Through a fixed-bed column packed with this porous material, polymer-grade ethylene (99.99% pure) can be straightforwardly produced from ethane/ethylene mixtures during the first adsorption cycle, demonstrating the potential of Fe2(O2)(dobdc) for this important industrial separation with a low energy cost under ambient conditions.

Posted Content
TL;DR: This paper characterizes a large and thoughtful selection of recent efficiency-flavored “X-former” models, providing an organized and comprehensive overview of existing work and models across multiple domains.
Abstract: Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work and models across multiple domains.

Posted Content
TL;DR: This work evaluates encoders to invert the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation, which allows reconstructing and modifying real images of faces conditioned on arbitrary attributes.
Abstract: Generative Adversarial Networks (GANs) have recently been demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows determining specific representations of the generated images. In this work, we evaluate encoders to invert the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, reconstructing and modifying real images of faces conditioned on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic complex modifications.
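The encode-modify-regenerate loop the abstract describes can be illustrated with a toy *linear* "generator", for which the encoder is an exact inverse (the paper learns such an encoder instead). All functions and numbers below are assumptions for illustration.

```python
def G(z, y):
    """Toy linear 'conditional generator': (latent z, condition y) -> 3-d 'image'."""
    z1, z2 = z
    return [z1 + y, z2 + 2 * y, y]

def encode(img):
    """Exact inverse of the toy G: recover (z, y) from the image."""
    y = img[2]
    return (img[0] - y, img[1] - 2 * y), y

real = G((0.5, -1.0), 1.0)   # pretend this is a real face image
z, y = encode(real)          # invert into latent + conditional representation
edited = G(z, 0.0)           # same latent, modified attribute y
```

The IcGAN workflow is exactly this round trip: keep the recovered latent fixed, change the attribute vector, and regenerate a deterministically modified image.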

Journal ArticleDOI
TL;DR: It is demonstrated that Hall-like currents can occur in second-order response to external electric fields in a wide class of time-reversal invariant and inversion breaking materials, at both zero and twice the driving frequency.
Abstract: It is well known that a nonvanishing Hall conductivity requires broken time-reversal symmetry. However, in this work, we demonstrate that Hall-like currents can occur in second-order response to external electric fields in a wide class of time-reversal invariant and inversion breaking materials, at both zero and twice the driving frequency. This nonlinear Hall effect has a quantum origin arising from the dipole moment of the Berry curvature in momentum space, which generates a net anomalous velocity when the system is in a current-carrying state. The nonlinear Hall coefficient is a rank-two pseudotensor, whose form is determined by point group symmetry. We discuss optimal conditions to observe this effect and propose candidate two- and three-dimensional materials, including topological crystalline insulators, transition metal dichalcogenides, and Weyl semimetals.
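Schematically, the second-order response described above can be written as follows; the notation is assumed for illustration and is not copied from the paper:

```latex
% A driving field E e^{i\omega t} generates a rectified (dc) and a
% second-harmonic Hall-like current, both quadratic in the field:
j_a = j_a^{(0)} + j_a^{(2\omega)} e^{2 i \omega t},
\qquad j_a \sim \chi_{abc}\, E_b E_c,
% with the nonlinear susceptibility controlled by the Berry curvature
% dipole D (an integral of the curvature gradient over occupied states):
\chi_{abc} \propto \tau\, D_{bd},
\qquad
D_{bd} = \int \frac{d^d k}{(2\pi)^d}\, f_0\, \frac{\partial \Omega_d}{\partial k_b}.
```

This makes explicit why time-reversal symmetry is no obstacle: the effect is second order in E, and it vanishes only when point-group symmetry forces the dipole D to zero.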

Book
07 Jun 2018
TL;DR: The recent developments that establish the fundamental limits for community detection in the stochastic block model are surveyed, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery.
Abstract: The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides generally a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.
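The planted-cluster model itself is simple to sample, as the following illustrative sketch shows (parameters are invented; the recovery thresholds discussed above, such as Kesten-Stigum, are not implemented here).

```python
import random

def sample_sbm(sizes, p, q, seed=0):
    """Sample an SBM graph: within-community edges with prob p,
    across-community edges with prob q."""
    rng = random.Random(seed)
    labels = [c for c, s in enumerate(sizes) for _ in range(s)]
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if labels[i] == labels[j] else q
            if rng.random() < prob:
                edges.add((i, j))
    return labels, edges

labels, edges = sample_sbm([50, 50], p=0.5, q=0.05, seed=1)
within = sum(1 for i, j in edges if labels[i] == labels[j])
across = len(edges) - within
```

With p well above q, within-community edges dominate and the planted clusters are easy to see; the interesting regimes surveyed in the note are those where p and q are close enough that recovery hits the information-theoretic and computational thresholds.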

Journal ArticleDOI
13 Oct 2015-PLOS ONE
TL;DR: Because time in a day is finite, time spent in sleep, sedentary behavior and physical activity is codependent; time spent in MVPA is an important target for intervention, and preventing the transfer of time from LIPA to SB might lessen the negative effects of physical inactivity.
Abstract: The associations between time spent in sleep, sedentary behaviors (SB) and physical activity with health are usually studied without taking into account that time is finite during the day, so time spent in each of these behaviors are codependent. Therefore, little is known about the combined effect of time spent in sleep, SB and physical activity, that together constitute a composite whole, on obesity and cardio-metabolic health markers. Cross-sectional analysis of NHANES 2005–6 cycle on N = 1937 adults, was undertaken using a compositional analysis paradigm, which accounts for this intrinsic codependence. Time spent in SB, light intensity (LIPA) and moderate to vigorous activity (MVPA) was determined from accelerometry and combined with self-reported sleep time to obtain the 24 hour time budget composition. The distribution of time spent in sleep, SB, LIPA and MVPA is significantly associated with BMI, waist circumference, triglycerides, plasma glucose, plasma insulin (all p<0.001), and systolic (p<0.001) and diastolic blood pressure (p<0.003), but not HDL or LDL. Within the composition, the strongest positive effect is found for the proportion of time spent in MVPA. Strikingly, the effects of MVPA replacing another behavior and of MVPA being displaced by another behavior are asymmetric. For example, re-allocating 10 minutes of SB to MVPA was associated with a lower waist circumference by 0.001% but if 10 minutes of MVPA is displaced by SB this was associated with a 0.84% higher waist circumference. The proportion of time spent in LIPA and SB were detrimentally associated with obesity and cardiovascular disease markers, but the association with SB was stronger. For diabetes risk markers, replacing SB with LIPA was associated with more favorable outcomes. Time spent in MVPA is an important target for intervention and preventing transfer of time from LIPA to SB might lessen the negative effects of physical inactivity.
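The compositional treatment of the 24-hour day can be sketched as follows. The centred log-ratio (clr) transform shown is one standard compositional coordinate system; the minute values are invented, and this is not the authors' analysis code.

```python
import math

def closure(parts, total=1440.0):
    """Rescale parts so they sum to the full day (1440 min)."""
    s = sum(parts)
    return [total * p / s for p in parts]

def clr(parts):
    """Centred log-ratio transform of a composition."""
    g = math.exp(sum(math.log(p) for p in parts) / len(parts))  # geometric mean
    return [math.log(p / g) for p in parts]

day = closure([480, 600, 330, 30])   # sleep, SB, LIPA, MVPA (minutes)
coords = clr(day)                    # coordinates used in regression models

# Isotemporal substitution: reallocate 10 min from SB to MVPA,
# keeping the day's total fixed -- the kind of comparison the paper makes.
day2 = [day[0], day[1] - 10, day[2], day[3] + 10]
```

Working in log-ratio coordinates (which always sum to zero) is what lets the analysis respect the constraint that adding time to one behavior must remove it from another.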

Journal ArticleDOI
TL;DR: In this paper, the authors provide a broader historical perspective on the observational discoveries and theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model.
Abstract: Although dark matter is a central element of modern cosmology, the history of how it became accepted as part of the dominant paradigm is often ignored or condensed into an anecdotal account focused around the work of a few pioneering scientists. The aim of this review is to provide a broader historical perspective on the observational discoveries and the theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model.

Journal ArticleDOI
TL;DR: Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
Abstract: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naive 95% confidence intervals and 5% thresholds for statistical significance. 
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than the traditional naive meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naive unadjusted confidence intervals.
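The core arithmetic described in the abstract — compute a conventional fixed-sample information size, then inflate it by the diversity measure D² before judging whether a meta-analysis has accrued enough participants — can be sketched as follows. This is an illustrative Python sketch using the standard two-proportion sample-size formula; the function name, its parameters, and the exact adjustment details are assumptions for illustration, not the TSA software's code.

```python
from math import ceil
from statistics import NormalDist


def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, d_squared=0.0):
    """Illustrative diversity-adjusted required information size (RIS) for a
    binary outcome: the fixed-sample size for detecting a relative risk
    reduction `rrr`, inflated by 1/(1 - D^2) to account for heterogeneity.
    Hypothetical sketch; the TSA software's exact formulas may differ."""
    p_exp = p_control * (1 - rrr)       # event proportion with intervention
    p_bar = (p_control + p_exp) / 2     # average event proportion
    delta = p_control - p_exp           # absolute risk difference to detect
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_b = NormalDist().inv_cdf(1 - beta)        # type II error (power = 1 - beta)
    n_fixed = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return ceil(n_fixed / (1 - d_squared))      # diversity adjustment
```

For example, a 10% control-event proportion and a 20% relative risk reduction give a required information size of roughly 8,600 participants when D² = 0, and a third more when D² = 0.25 — illustrating why a meta-analysis of a few small trials should be read as an interim analysis.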

Proceedings ArticleDOI
19 Jul 2018
TL;DR: An end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events, is proposed.
Abstract: As news reading on social media becomes more and more popular, fake news has become a major issue concerning the public and government. Fake news can take advantage of multimedia content to mislead readers and spread widely, which can cause negative effects or even manipulate public events. One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events. Unfortunately, most of the existing approaches can hardly handle this challenge, since they tend to learn event-specific features that cannot be transferred to unseen events. In order to address this issue, we propose an end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events. It consists of three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The multi-modal feature extractor is responsible for extracting the textual and visual features from posts. It cooperates with the fake news detector to learn the discriminable representation for the detection of fake news. The role of the event discriminator is to remove the event-specific features and keep the features shared among events. Extensive experiments are conducted on multimedia datasets collected from Weibo and Twitter. The experimental results show that the proposed EANN model can outperform the state-of-the-art methods and learn transferable feature representations.
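The adversarial interplay described above — the feature extractor minimizes the fake-news detection loss while maximizing the event-discrimination loss (implemented in the paper via gradient reversal) — reduces to a single minimax objective. A minimal NumPy sketch, assuming softmax classifiers; the function name and the weighting parameter `lam` are illustrative, not the authors' implementation.

```python
import numpy as np


def eann_loss(detect_logits, detect_labels, event_logits, event_labels, lam=1.0):
    """Sketch of EANN's combined objective: detection loss minus a weighted
    event-discrimination loss. Minimizing it pushes the shared features to be
    discriminative for fake news yet uninformative about the event.
    Illustrative only; the paper uses a gradient reversal layer."""

    def cross_entropy(logits, labels):
        # numerically stable softmax cross-entropy, averaged over the batch
        z = logits - logits.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    l_detect = cross_entropy(detect_logits, detect_labels)
    l_event = cross_entropy(event_logits, event_labels)
    # maximizing l_event w.r.t. the feature extractor = minimizing -lam * l_event
    return l_detect - lam * l_event
```

With `lam=0` the objective degenerates to plain fake-news classification; increasing `lam` trades detection accuracy on seen events for event-invariance of the learned features.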

Journal ArticleDOI
TL;DR: State-of-the-art MRI findings in patients presenting with a clinically isolated syndrome were discussed in a MAGNIMS workshop, the goal of which was to provide an evidence-based and expert-opinion consensus on diagnostic MRI criteria modifications.
Abstract: In patients presenting with a clinically isolated syndrome, MRI can support and substitute for clinical information in the diagnosis of multiple sclerosis by showing disease dissemination in space and time and by helping to exclude disorders that can mimic multiple sclerosis. MRI criteria were first included in the diagnostic work-up for multiple sclerosis in 2001, and since then several modifications to the criteria have been proposed in an attempt to simplify lesion-count models for showing disease dissemination in space, change the timing of MRI scanning to show dissemination in time, and increase the value of spinal cord imaging. Since the last update of these criteria, new data on the use of MRI to establish dissemination in space and time have become available, and MRI technology has improved. State-of-the-art MRI findings in these patients were discussed in a MAGNIMS workshop, the goal of which was to provide an evidence-based and expert-opinion consensus on proposed modifications to MRI criteria for the diagnosis of multiple sclerosis.

Journal ArticleDOI
TL;DR: A new category of "personalised preventative health coaches" (Digital Health Advisors) will emerge, possessing the skills to interpret and understand health and well-being data and to help their clients avoid chronic and diet-related illness, improve cognitive function, and achieve better mental health and lifestyles overall.
Abstract: Objectives A number of technologies can reduce overall costs for the prevention or management of chronic illnesses. These include devices that constantly monitor health indicators, devices that auto-administer therapies, or devices that track real-time health data when a patient self-administers a therapy. Because of increased access to high-speed Internet and smartphones, many patients have started to use mobile applications (apps) to manage various health needs. These devices and mobile apps are now increasingly used and integrated with telemedicine and telehealth via the medical Internet of Things (mIoT). This paper reviews mIoT and big data in healthcare.

Journal ArticleDOI
TL;DR: A conceptual model is proposed to illustrate how online peer-to-peer connections may afford opportunities for individuals with serious mental illness to challenge stigma, increase consumer activation and access online interventions for mental and physical wellbeing.
Abstract: Aims: People with serious mental illness are increasingly turning to popular social media, including Facebook, Twitter or YouTube, to share their illness experiences or seek advice from others with similar health conditions. This emerging form of unsolicited communication among self-forming online communities of patients and individuals with diverse health concerns is referred to as peer-to-peer support. We offer a perspective on how online peer-to-peer connections among people with serious mental illness could advance efforts to promote mental and physical wellbeing in this group. Methods: In this commentary, we take the perspective that when an individual with serious mental illness decides to connect with similar others online it represents a critical point in their illness experience. We propose a conceptual model to illustrate how online peer-to-peer connections may afford opportunities for individuals with serious mental illness to challenge stigma, increase consumer activation and access online interventions for mental and physical wellbeing. Results: People with serious mental illness report benefits from interacting with peers online, including greater social connectedness, feelings of group belonging, and sharing personal stories and strategies for coping with the day-to-day challenges of living with a mental illness. Within online communities, individuals with serious mental illness could challenge stigma through personal empowerment and providing hope. By learning from peers online, these individuals may gain insight about important health care decisions, which could promote mental health care-seeking behaviours. These individuals could also access interventions for mental and physical wellbeing delivered through social media that could incorporate mutual support between peers, help promote treatment engagement and reach a wider demographic.
Unforeseen risks may include exposure to misleading information, facing hostile or derogatory comments from others, or feeling more uncertain about one's health condition. However, given the evidence to date, the benefits of online peer-to-peer support appear to outweigh the potential risks. Conclusion: Future research must explore these opportunities to support and empower people with serious mental illness through online peer networks while carefully considering potential risks that may arise from online peer-to-peer interactions. Efforts will also need to address methodological challenges in the form of evaluating interventions delivered through social media and collecting objective mental and physical health outcome measures online. A key challenge will be to determine whether skills learned from peers in online networks translate into tangible and meaningful improvements in recovery, employment, or mental and physical wellbeing in the offline world.

Journal ArticleDOI
TL;DR: This review highlights the features of meiotic recombination that distinguish it from recombinational repair in somatic cells, and how the molecular processes of meiotic recombination are embedded in, and interdependent with, the chromosome structures that characterize meiotic prophase.
Abstract: The study of homologous recombination has its historical roots in meiosis. In this context, recombination occurs as a programmed event that culminates in the formation of crossovers, which are essential for accurate chromosome segregation and create new combinations of parental alleles. Thus, meiotic recombination underlies both the independent assortment of parental chromosomes and genetic linkage. This review highlights the features of meiotic recombination that distinguish it from recombinational repair in somatic cells, and how the molecular processes of meiotic recombination are embedded in, and interdependent with, the chromosome structures that characterize meiotic prophase. A more in-depth review presents our understanding of how crossover and noncrossover pathways of meiotic recombination are differentiated and regulated. The final section of this review summarizes the studies that have defined defective recombination as a leading cause of pregnancy loss and congenital disease in humans.

Journal ArticleDOI
TL;DR: It is argued that blockchain (BC), a disruptive technology that has found many applications from cryptocurrencies to smart contracts, is a potential solution to these challenges, and a BC-based architecture is proposed to protect the privacy of users and to increase the security of the vehicular ecosystem.
Abstract: Interconnected smart vehicles offer a range of sophisticated services that benefit the vehicle owners, transport authorities, car manufacturers, and other service providers. This potentially exposes smart vehicles to a range of security and privacy threats such as location tracking or remote hijacking of the vehicle. In this article, we argue that blockchain (BC), a disruptive technology that has found many applications from cryptocurrencies to smart contracts, is a potential solution to these challenges. We propose a BC-based architecture to protect the privacy of users and to increase the security of the vehicular ecosystem. Wireless remote software updates and other emerging services such as dynamic vehicle insurance fees are used to illustrate the efficacy of the proposed security architecture. We also qualitatively argue the resilience of the architecture against common security attacks.

Journal ArticleDOI
TL;DR: The proposed models, which are first validated through extensive simulation results, reveal the relationships between the free-space path loss of RIS-assisted wireless communications and the distances from the transmitter/receiver to the RIS, the size of the RIS, the near-field/far-field effects of the RIS, and the radiation patterns of antennas and unit cells.
Abstract: Reconfigurable intelligent surfaces (RISs) comprised of tunable unit cells have recently drawn significant attention due to their superior capability in manipulating electromagnetic waves. In particular, RIS-assisted wireless communications have the great potential to achieve significant performance improvement and coverage enhancement in a cost-effective and energy-efficient manner, by properly programming the reflection coefficients of the unit cells of RISs. In this article, free-space path loss models for RIS-assisted wireless communications are developed for different scenarios by studying the physics and electromagnetic nature of RISs. The proposed models, which are first validated through extensive simulation results, reveal the relationships between the free-space path loss of RIS-assisted wireless communications and the distances from the transmitter/receiver to the RIS, the size of the RIS, the near-field/far-field effects of the RIS, and the radiation patterns of antennas and unit cells. In addition, three fabricated RISs (metasurfaces) are utilized to further corroborate the theoretical findings through experimental measurements conducted in a microwave anechoic chamber. The measurement results match well with the modeling results, thus validating the proposed free-space path loss models for RISs, which may pave the way for further theoretical studies and practical applications in this field.

Journal ArticleDOI
09 Jun 2016-PLOS ONE
TL;DR: SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters, and it keeps track of the whole analysis process, parameter values and versions of the R packages used.
Abstract: Background Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and test for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misuse of the proposed methods. Results SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR respectively). By tuning a small number of parameters and executing one of the R scripts, users gain access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypotheses checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. Conclusions SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process it fits the requirements of reproducible research.

Journal ArticleDOI
TL;DR: A second-order topological insulator in d dimensions is an insulator which has no (d-1)-dimensional topological boundary states but does have (d-2)-dimensional topological boundary states; such insulators are characterized by a quantized polarization, which constitutes the bulk topological index.
Abstract: A second-order topological insulator in d dimensions is an insulator which has no d-1 dimensional topological boundary states but has d-2 dimensional topological boundary states. It is an extended notion of the conventional topological insulator. Higher-order topological insulators have been investigated in square and cubic lattices. In this Letter, we generalize them to breathing kagome and pyrochlore lattices. First, we construct a second-order topological insulator on the breathing kagome lattice. Three topological boundary states emerge at the corner of the triangle, realizing a 1/3 fractional charge at each corner. Second, we construct a third-order topological insulator on the breathing pyrochlore lattice. Four topological boundary states emerge at the corners of the tetrahedron with a 1/4 fractional charge at each corner. These higher-order topological insulators are characterized by the quantized polarization, which constitutes the bulk topological index. Finally, we study a second-order topological semimetal by stacking the breathing kagome lattice.