scispace - formally typeset

Journal ArticleDOI
TL;DR: The simultaneous presence and activity of organoheterotrophic denitrifying bacteria, sulfide-dependent denitrifiers, and anammox bacteria suggests a tight network of bacteria coupling carbon, nitrogen, and sulfur cycling in Lake Grevelingen sediments.
Abstract: Denitrifying and anammox bacteria are involved in the nitrogen cycling in marine sediments but the environmental factors that regulate the relative importance of these processes are not well constrained. Here, we evaluated the abundance, diversity, and potential activity of denitrifying, anammox, and sulfide-dependent denitrifying bacteria in the sediments of the seasonally hypoxic saline Lake Grevelingen, known to harbor an active microbial community involved in sulfur oxidation pathways. Depth distributions of the 16S rRNA gene, the nirS gene of denitrifying and anammox bacteria, the aprA gene of sulfur-oxidizing and sulfate-reducing bacteria, and ladderane lipids of anammox bacteria were studied in sediments impacted by seasonally hypoxic bottom waters. Samples were collected down to 5 cm depth (1 cm resolution) at three different locations before (March) and during summer hypoxia (August). The abundance of denitrifying bacteria did not vary despite differences in oxygen and sulfide availability in the sediments, whereas anammox bacteria were more abundant during summer hypoxia, but only in sediments with lower sulfide concentrations. The activity of denitrifying and anammox bacteria, as well as of sulfur-oxidizing bacteria (including sulfide-dependent denitrifiers) and sulfate-reducing bacteria, was potentially inhibited by competition for nitrate and nitrite with cable and/or Beggiatoa-like bacteria in March and by the accumulation of sulfide during summer hypoxia. The simultaneous presence and activity of organoheterotrophic denitrifying bacteria, sulfide-dependent denitrifiers, and anammox bacteria suggests a tight network of bacteria coupling carbon, nitrogen, and sulfur cycling in Lake Grevelingen sediments.

946 citations


Journal ArticleDOI
30 Jul 2015-Cell
TL;DR: A CRISPR-based genetic screen was used to identify genes whose loss sensitizes human cells to phenformin, a complex I inhibitor, and yielded GOT1, the cytosolic aspartate aminotransferase, loss of which kills cells upon ETC inhibition.

946 citations


Proceedings ArticleDOI
07 Aug 2017
TL;DR: Pensieve is proposed, a system that generates ABR algorithms using reinforcement learning (RL), and outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%–25%.
Abstract: Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real-world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%–25%. Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.
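The interface such a learned controller exposes can be sketched as follows. The real Pensieve policy is a neural network trained with RL; the linear scorer, bitrate ladder, and observation vector below are purely illustrative stand-ins showing observations in, bitrate out.

```python
import numpy as np

BITRATES_KBPS = [300, 750, 1200]  # hypothetical bitrate ladder

def select_bitrate(obs, W):
    # obs: observation vector from the client player (e.g., recent throughput,
    # buffer level); W maps observations to one score per candidate bitrate.
    # Pensieve learns this mapping as a neural network via RL; a linear scorer
    # stands in here only to show the decision interface.
    scores = obs @ W
    return BITRATES_KBPS[int(np.argmax(scores))]

obs = np.array([1.0, 0.5])           # toy observation
W = np.array([[0.1, 0.2, 0.9],
              [0.0, 0.1, 0.1]])      # toy "policy" weights
chunk_bitrate = select_bitrate(obs, W)
```

Because the policy is learned from observed playback outcomes rather than hand-tuned rules, the same interface adapts across networks and QoE objectives.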

946 citations


Journal ArticleDOI
TL;DR: Immediate completion lymph‐node dissection increased the rate of regional disease control and provided prognostic information but did not increase melanoma‐specific survival among patients with melanoma and sentinel‐node metastases.
Abstract: Background Sentinel-lymph-node biopsy is associated with increased melanoma-specific survival (i.e., survival until death from melanoma) among patients with node-positive intermediate-thickness melanomas (1.2 to 3.5 mm). The value of completion lymph-node dissection for patients with sentinel-node metastases is not clear. Methods In an international trial, we randomly assigned patients with sentinel-node metastases detected by means of standard pathological assessment or a multimarker molecular assay to immediate completion lymph-node dissection (dissection group) or nodal observation with ultrasonography (observation group). The primary end point was melanoma-specific survival. Secondary end points included disease-free survival and the cumulative rate of nonsentinel-node metastasis. Results Immediate completion lymph-node dissection was not associated with increased melanoma-specific survival among 1934 patients with data that could be evaluated in an intention-to-treat analysis or among 1755 patients in t...

946 citations


Journal ArticleDOI
TL;DR: Authors/Task Force Members: Silvia G. Priori (Chairperson), Carina Blomström-Lundqvist (Co-chairperson) (Sweden), Andrea Mazzanti† (Italy), Nico Blom (The Netherlands), Martin Borggrefe (Germany), John Camm (UK), Perry Mark Elliott (UK).
Abstract: 2015 ESC Guidelines for the Management of Patients With Ventricular Arrhythmias and the Prevention of Sudden Cardiac Death

945 citations


Proceedings ArticleDOI
27 Dec 2018
TL;DR: This work presents a framework for mitigating biases concerning demographic groups by including a variable for the group of interest and simultaneously learning a predictor and an adversary, which results in accurate predictions that exhibit less evidence of stereotyping Z.
Abstract: Machine learning is a tool for building models that accurately represent input training data. When undesired biases concerning demographic groups are in the training data, well-trained models will reflect those biases. We present a framework for mitigating such biases by including a variable for the group of interest and simultaneously learning a predictor and an adversary. The input to the network X, here text or census data, produces a prediction Y, such as an analogy completion or income bracket, while the adversary tries to model a protected variable Z, here gender or zip code. The objective is to maximize the predictor's ability to predict Y while minimizing the adversary's ability to predict Z. Applied to analogy completion, this method results in accurate predictions that exhibit less evidence of stereotyping Z. When applied to a classification task using the UCI Adult (Census) Dataset, it results in a predictive model that does not lose much accuracy while achieving very close to equality of odds (Hardt, et al., 2016). The method is flexible and applicable to multiple definitions of fairness as well as a wide range of gradient-based learning models, including both regression and classification tasks.
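A minimal numerical sketch of the combined objective follows. All data, weights, and the trade-off coefficient are toy assumptions, and the paper's actual update additionally projects out the adversary's gradient direction, which is omitted here; the sketch only shows how the predictor's objective rewards accuracy on Y while penalizing the adversary's success on Z.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    # Binary cross-entropy between predicted probabilities p and labels y.
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Toy data: X features, y target label, z protected attribute (all hypothetical).
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
z = (X[:, 1] > 0).astype(float)   # protected variable the adversary tries to recover

w_pred = rng.normal(size=3)       # predictor weights
w_adv = rng.normal(size=1)        # adversary sees only the predictor's output

y_hat = sigmoid(X @ w_pred)               # prediction of Y
z_hat = sigmoid(y_hat * w_adv[0])         # adversary's guess of Z from y_hat

alpha = 1.0                               # fairness/accuracy trade-off
loss_pred = bce(y_hat, y)
loss_adv = bce(z_hat, z)
# The predictor descends this combined objective: do well on Y while pushing
# the adversary's Z-loss up (hence the minus sign); the adversary separately
# descends loss_adv alone.
combined = loss_pred - alpha * loss_adv
```

In a full training loop both players take alternating gradient steps, and alpha controls how strongly stereotyping of Z is suppressed relative to accuracy on Y.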

945 citations


Proceedings ArticleDOI
28 Jul 2019
TL;DR: This work provides novel support for the possibility that BERT networks capture structural information about language by performing a series of experiments to unpack the elements of English language structure learned by BERT.
Abstract: BERT is a recent language representation model that has surprisingly performed well in diverse language understanding benchmarks. This result indicates the possibility that BERT networks capture structural information about language. In this work, we provide novel support for this claim by performing a series of experiments to unpack the elements of English language structure learned by BERT. We first show that BERT's phrasal representation captures phrase-level information in the lower layers. We also show that BERT's intermediate layers encode a rich hierarchy of linguistic information, with surface features at the bottom, syntactic features in the middle and semantic features at the top. BERT turns out to require deeper layers when long-distance dependency information is required, e.g., to track subject-verb agreement. Finally, we show that BERT representations capture linguistic information in a compositional way that mimics classical, tree-like structures.

945 citations


Journal ArticleDOI
TL;DR: The coagulation function in patients with SARS-CoV-2 is significantly deranged compared with healthy people, and monitoring D-dimer and FDP values may be helpful for the early identification of severe cases.
Abstract: Background As the number of patients increases, there is a growing understanding of the form of pneumonia sustained by the 2019 novel coronavirus (SARS-CoV-2), which has caused an outbreak in China. Up to now, clinical features and treatment of patients infected with SARS-CoV-2 have been reported in detail. However, the relationship between SARS-CoV-2 and coagulation has been scarcely addressed. Our aim is to investigate the blood coagulation function of patients with SARS-CoV-2 infection. Methods In our study, 94 patients with confirmed SARS-CoV-2 infection were admitted in Renmin Hospital of Wuhan University. We prospectively collected blood coagulation data in these patients and in 40 healthy controls during the same period. Results Antithrombin values in patients were lower than those in the control group (p < 0.001). The values of D-dimer, fibrin/fibrinogen degradation products (FDP), and fibrinogen (FIB) in all SARS-CoV-2 cases were substantially higher than those in healthy controls. Moreover, D-dimer and FDP values in patients with severe SARS-CoV-2 infection were higher than those in patients with milder forms. Compared with healthy controls, prothrombin time activity (PT-act) was lower in SARS-CoV-2 patients. Thrombin time in critical SARS-CoV-2 patients was also shorter than that in controls. Conclusions The coagulation function in patients with SARS-CoV-2 is significantly deranged compared with healthy people, and monitoring D-dimer and FDP values may be helpful for the early identification of severe cases.

945 citations


Journal ArticleDOI
TL;DR: The synthesis of quantum-confined all-inorganic cesium lead halide nanoplates in the perovskite crystal structure that are also highly luminescent (PLQY 84%), and the controllable self-assembly of nanoplates either into stacked columnar phases or crystallographically oriented thin-sheet structures, is demonstrated.
Abstract: Anisotropic colloidal quasi-two-dimensional nanoplates (NPLs) hold great promise as functional materials due to their combination of low dimensional optoelectronic properties and versatility through colloidal synthesis. Recently, lead-halide perovskites have emerged as important optoelectronic materials with excellent efficiencies in photovoltaic and light-emitting applications. Here we report the synthesis of quantum confined all inorganic cesium lead halide nanoplates in the perovskite crystal structure that are also highly luminescent (PLQY 84%). The controllable self-assembly of nanoplates either into stacked columnar phases or crystallographic-oriented thin-sheet structures is demonstrated. The broad accessible emission range, high native quantum yields, and ease of self-assembly make perovskite NPLs an ideal platform for fundamental optoelectronic studies and the investigation of future devices.

945 citations


Journal ArticleDOI
TL;DR: The differential impact of autophagy on distinct phases of tumorigenesis and the implications of this concept for the use of autophagy modulators in cancer therapy are discussed.
Abstract: Autophagy plays a key role in the maintenance of cellular homeostasis. In healthy cells, such a homeostatic activity constitutes a robust barrier against malignant transformation. Accordingly, many oncoproteins inhibit, and several oncosuppressor proteins promote, autophagy. Moreover, autophagy is required for optimal anticancer immunosurveillance. In neoplastic cells, however, autophagic responses constitute a means to cope with intracellular and environmental stress, thus favoring tumor progression. This implies that at least in some cases, oncogenesis proceeds along with a temporary inhibition of autophagy or a gain of molecular functions that antagonize its oncosuppressive activity. Here, we discuss the differential impact of autophagy on distinct phases of tumorigenesis and the implications of this concept for the use of autophagy modulators in cancer therapy.

945 citations


Journal ArticleDOI
TL;DR: The World Health Organization has declared Covid-19 a pandemic that poses a contemporary threat to humanity, as discussed by the authors; this pandemic has forced a global shutdown of several activities.
Abstract: The World Health Organization has declared Covid-19 as a pandemic that has posed a contemporary threat to humanity. This pandemic has successfully forced global shutdown of several activities, incl...

Posted Content
TL;DR: Relational Graph Convolutional Networks (R-GCNs) as discussed by the authors are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases.
Abstract: Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
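The per-relation message passing at the core of an R-GCN layer can be sketched as below. The shapes, the ReLU, and the normalization by per-relation neighbor count follow the standard R-GCN layer formulation; all variable names and the toy graph are illustrative.

```python
import numpy as np

def rgcn_layer(H, adj_per_rel, W_rel, W_self):
    # H: (nodes, d_in) node features; adj_per_rel: one 0/1 adjacency matrix per
    # relation type; W_rel: one (d_in, d_out) weight matrix per relation;
    # W_self: (d_in, d_out) self-loop weights. Each relation aggregates its own
    # neighbors with its own weights, which is what makes the layer relational.
    out = H @ W_self
    for A, W in zip(adj_per_rel, W_rel):
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # c_{i,r}: neighbor count
        out = out + ((A / deg) @ H) @ W                    # normalized aggregation
    return np.maximum(out, 0.0)                            # ReLU

H = np.eye(3)                                              # 3 nodes, one-hot features
A_r = [np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]], dtype=float)]                 # a single relation type
W_r = [np.ones((3, 2))]
W_s = np.ones((3, 2))
H_next = rgcn_layer(H, A_r, W_r, W_s)
```

Stacking such layers lets evidence accumulate over multiple inference steps in the relational graph, which is how the encoder enriches a factorization decoder like DistMult for link prediction.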

Proceedings ArticleDOI
Han Hu1, Jiayuan Gu2, Zheng Zhang1, Jifeng Dai1, Yichen Wei1 
01 Jun 2018
TL;DR: In this article, the authors propose an object relation module to model relations between objects, which is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline.
Abstract: Although it has long been believed that modeling relations between objects would help object recognition, there has been no evidence that the idea works in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.

Journal ArticleDOI
TL;DR: This review summarized the known antimicrobial resistance mechanisms of ESKAPE pathogens to aid in the prediction of underlying or even unknown mechanisms of resistance, which could be applied to other emerging multidrug resistant pathogens.
Abstract: The ESKAPE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species) are the leading cause of nosocomial infections throughout the world. Most of them are multidrug resistant isolates, which is one of the greatest challenges in clinical practice. Multidrug resistance is amongst the top three threats to global public health and is usually caused by excessive drug usage or prescription, inappropriate use of antimicrobials, and substandard pharmaceuticals. Understanding the resistance mechanisms of these bacteria is crucial for the development of novel antimicrobial agents or other alternative tools to combat these public health challenges. Greater mechanistic understanding would also aid in the prediction of underlying or even unknown mechanisms of resistance, which could be applied to other emerging multidrug resistant pathogens. In this review, we summarize the known antimicrobial resistance mechanisms of ESKAPE pathogens.

Posted Content
Tim Salimans1, Diederik P. Kingma1
TL;DR: Weight normalization as mentioned in this paper reparameterizes the weight vectors in a neural network to decouple the length of those weight vectors from their direction, improving the conditioning of the optimization problem and speeding up convergence of stochastic gradient descent.
Abstract: We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
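The reparameterization itself is a single line, w = g · v/‖v‖; a sketch with toy numbers (the vector and scale below are arbitrary illustrations):

```python
import numpy as np

def weight_norm(v, g):
    # w = g * v / ||v||: the scalar g carries the weight vector's length and
    # v only its direction, so gradient descent can adjust the two independently.
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])   # any direction vector (||v|| = 5 here)
g = 2.0                    # learned length parameter
w = weight_norm(v, g)      # ||w|| == g by construction
```

Because no minibatch statistics are involved, the same trick applies unchanged to recurrent models and noise-sensitive settings where batch normalization is awkward.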

Journal ArticleDOI
TL;DR: This paper describes a recently created image database, TID2013, intended for evaluation of full-reference visual quality assessment metrics, and methodology for determining drawbacks of existing visual quality metrics is described.
Abstract: This paper describes a recently created image database, TID2013, intended for evaluation of full-reference visual quality assessment metrics. With respect to TID2008, the new database contains a larger number (3000) of test images obtained from 25 reference images, 24 types of distortions for each reference image, and 5 levels for each type of distortion. Motivations for introducing 7 new types of distortions and one additional level of distortion are given; examples of distorted images are presented. Mean opinion scores (MOS) for the new database have been collected by performing 985 subjective experiments with volunteers (observers) from five countries (Finland, France, Italy, Ukraine, and USA). The availability of MOS allows the use of the designed database as a fundamental tool for assessing the effectiveness of visual quality metrics. Furthermore, existing visual quality metrics have been tested with the proposed database and the collected results have been analyzed using rank order correlation coefficients between MOS and considered metrics. These correlation indices have been obtained both considering the full set of distorted images and specific image subsets, for highlighting advantages and drawbacks of existing state-of-the-art quality metrics. Approaches to thorough performance analysis for a given metric are presented to detect practical situations or distortion types for which this metric is not adequate to human perception. The created image database and the collected MOS values are freely available for downloading and utilization for scientific purposes. Highlights: We have created a new large database. This database contains a larger number of distorted images and distortion types. MOS values for all images are obtained and provided. An analysis of the correlation between MOS and a wide set of existing metrics is carried out. A methodology for determining drawbacks of existing visual quality metrics is described.

Journal ArticleDOI
TL;DR: Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies are compared over the 0.5–100 GHz range.
Abstract: This paper provides an overview of the features of fifth generation (5G) wireless communication systems now being developed for use in the millimeter wave (mmWave) frequency bands. Early results and key concepts of 5G networks are presented, and the channel modeling efforts of many international groups for both licensed and unlicensed applications are described here. Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies, are compared over the 0.5–100 GHz range.


Journal ArticleDOI
TL;DR: In this paper, it was shown that the implicit null of the cross-sectional dependence (CD) test depends on the relative expansion rates of N and T, and that the CD test has the correct size for values of α in the range [0, 1/4] for all combinations of N and T, irrespective of whether the panel contains lagged values of the dependent variables.
Abstract: This article considers testing the hypothesis that errors in a panel data model are weakly cross-sectionally dependent, using the exponent of cross-sectional dependence α, introduced recently in Bailey, Kapetanios, and Pesaran (2012). It is shown that the implicit null of the cross-sectional dependence (CD) test depends on the relative expansion rates of N and T. When T = O(N^ε), for some 0 < ε ≤ 1, the implicit null of the CD test is given by 0 ≤ α < (2 − ε)/4, which gives 0 ≤ α < 1/4 when N and T tend to infinity at the same rate such that T/N → κ, with κ being a finite positive constant. It is argued that in the case of large N panels, the null of weak dependence is more appropriate than the null of independence, which could be quite restrictive for large panels. Using Monte Carlo experiments, it is shown that the CD test has the correct size for values of α in the range [0, 1/4], for all combinations of N and T, and irrespective of whether the panel contains lagged values of the dependent variables...
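For reference, the CD statistic under discussion (not restated in the abstract) is Pesaran's pairwise-correlation statistic, built from the sample correlations $\hat{\rho}_{ij}$ of the estimated residuals of cross-section units $i$ and $j$:

$$\mathrm{CD} = \sqrt{\frac{2T}{N(N-1)}}\,\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat{\rho}_{ij},$$

which is asymptotically standard normal under the null; the article's contribution is characterizing which values of the dependence exponent α that null implicitly covers as N and T grow.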

Posted Content
TL;DR: This work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
Abstract: Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
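The RN module has a compact composite form, RN(O) = f(Σ_{i,j} g(o_i, o_j)): sum a pairwise function g over all ordered object pairs, then apply f to the pooled result. In the paper both g and f are small MLPs; the scalar stand-ins below are illustrative only.

```python
def relation_network(objects, g, f):
    # Apply g to every ordered pair of objects, sum the results, then apply f.
    # The sum makes the module invariant to object ordering, which is what
    # gives it a built-in capacity for relational reasoning.
    pair_sum = sum(g(oi, oj) for oi in objects for oj in objects)
    return f(pair_sum)

# Toy stand-ins: g multiplies a pair, f is the identity.
out = relation_network([1.0, 2.0], g=lambda a, b: a * b, f=lambda s: s)
```

Plugged into a CNN or LSTM front end, the "objects" are simply feature vectors (e.g., spatial cells of a convolutional feature map), which is what makes the module plug-and-play.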

Journal ArticleDOI
TL;DR: Pneumonia-screening CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.
Abstract: Background There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task. Methods and findings A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855–0.866) on the joint MSH–NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). 
The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927–0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745–0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH–NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system–specific biases. Conclusion Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

Proceedings Article
10 Jun 2016
TL;DR: The authors proposed a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments, and showed that a certain instantiation of their framework draws an analogy between imitation learning and generative adversarial networks.
Abstract: Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.
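The GAN analogy can be sketched as a discriminator loss over expert versus policy state-action pairs. The probabilities below are toy inputs, and the surrogate reward shown is one common GAIL-style choice; the actual method trains the discriminator jointly with a policy-gradient step on the imitating policy.

```python
import numpy as np

def discriminator_loss(d_expert, d_policy):
    # D should score expert (state, action) pairs near 1 and the imitating
    # policy's pairs near 0, exactly as a GAN discriminator scores real vs fake.
    return float(-np.mean(np.log(d_expert)) - np.mean(np.log(1.0 - d_policy)))

def imitation_reward(d_policy):
    # The policy is rewarded for pairs the discriminator mistakes for expert
    # data (one common GAIL-style surrogate reward), so no handcrafted cost
    # function or reinforcement signal is needed.
    return float(-np.log(1.0 - d_policy))
```

Alternating these two updates is what lets the policy be extracted directly from expert data, skipping the explicit inverse-reinforcement-learning step.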

Journal ArticleDOI
TL;DR: The rate of complete remission was higher with inotuzumab ozogamicin than with standard therapy, and a higher percentage of patients in the inotuzumab ozogamicin group had results below the threshold for minimal residual disease.
Abstract: Background The prognosis for adults with relapsed acute lymphoblastic leukemia is poor. We sought to determine whether inotuzumab ozogamicin, an anti-CD22 antibody conjugated to calicheamicin, results in better outcomes in patients with relapsed or refractory acute lymphoblastic leukemia than does standard therapy. Methods In this phase 3 trial, we randomly assigned adults with relapsed or refractory acute lymphoblastic leukemia to receive either inotuzumab ozogamicin (inotuzumab ozogamicin group) or standard intensive chemotherapy (standard-therapy group). The primary end points were complete remission (including complete remission with incomplete hematologic recovery) and overall survival. Results Of the 326 patients who underwent randomization, the first 218 (109 in each group) were included in the primary intention-to-treat analysis of complete remission. The rate of complete remission was significantly higher in the inotuzumab ozogamicin group than in the standard-therapy group (80.7% [95% confidence in...

Book ChapterDOI
TL;DR: In this article, the authors consider two general theories of distributional equality: the first holds that a distributional scheme treats people as equals when it distributes or transfers resources among them until no further transfer would leave them more equal in welfare.
Abstract: Equality is a popular but mysterious political ideal. This chapter discusses one aspect of that ideal, which might be called the problem of distributional equality. It considers two general theories of distributional equality. The first holds that a distributional scheme treats people as equals when it distributes or transfers resources among them until no further transfer would leave them more equal in welfare. The second holds that it treats them as equals when it distributes or transfers so that no further transfer would leave their shares of the total resources more equal. Equality of welfare linked to that sort of theory holds that distribution should attempt to leave people as equal as possible in some aspect or quality of their conscious life. The chapter notices a threshold difficulty in applying this conception of equality in a community in which some people themselves hold, as a matter of their own political preferences, exactly the same theory.

Journal ArticleDOI
TL;DR: This review covers technical aspects of tES, as well as applications like exploration of brain physiology, modelling approaches, tES in cognitive neurosciences, and interventional approaches to help the reader to appropriately design and conduct studies involving these brain stimulation techniques.

Journal ArticleDOI
TL;DR: This survey paper systematically summarizes the current state of the art of Internet of Things architectures in various domains and proposes solving real-life problems through the building and deployment of powerful Internet of Things notions.

Journal ArticleDOI
02 Oct 2020-Science
TL;DR: A range of preexisting memory CD4+ T cells that are cross-reactive with comparable affinity to SARS-CoV-2 and the common cold coronaviruses human coronavirus (HCoV)-OC43, HCoV-229E, HCoV-NL63, and HCoV-HKU1 are demonstrated.
Abstract: Many unknowns exist about human immune responses to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus. SARS-CoV-2-reactive CD4+ T cells have been reported in unexposed individuals, suggesting preexisting cross-reactive T cell memory in 20 to 50% of people. However, the source of those T cells has been speculative. Using human blood samples derived before the SARS-CoV-2 virus was discovered in 2019, we mapped 142 T cell epitopes across the SARS-CoV-2 genome to facilitate precise interrogation of the SARS-CoV-2-specific CD4+ T cell repertoire. We demonstrate a range of preexisting memory CD4+ T cells that are cross-reactive with comparable affinity to SARS-CoV-2 and the common cold coronaviruses HCoV-OC43, HCoV-229E, HCoV-NL63, and HCoV-HKU1. Thus, variegated T cell memory to coronaviruses that cause the common cold may underlie at least some of the extensive heterogeneity observed in coronavirus disease 2019 (COVID-19).

Journal ArticleDOI
TL;DR: The explosive pandemic of Zika virus infection in South and Central America is the most recent of four unexpected arrivals of important arthropod-borne viral diseases in the Western Hemisphere over the past 20 years.
Abstract: The explosive pandemic of Zika virus infection in South and Central America is the most recent of four unexpected arrivals of important arthropod-borne viral diseases in the Western Hemisphere over the past 20 years. Is this an important new disease-emergence pattern?

Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors decompose questions into their linguistic substructures, and use these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained.
Abstract: Visual question answering is fundamentally compositional in nature—a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.
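The compositional idea behind neural module networks can be illustrated with a toy, non-neural sketch: a question's parsed structure selects which reusable modules to chain together over a scene. This is only an illustration of the composition pattern; the module names (`find`, `describe_color`, `locate`), the hard-coded layouts, and the dictionary-based scene are assumptions for the example, not the paper's actual learned modules or parser.

```python
# Toy sketch of module composition in the spirit of neural module networks.
# Real NMNs use learned neural modules and a linguistic parser; here each
# "module" is a plain function and each question maps to a fixed layout.

def find(regions, target):
    """find[target]: attend to scene regions matching a label."""
    return [r for r in regions if r["label"] == target]

def describe_color(regions):
    """describe[color]: read out the color of the attended region."""
    return regions[0]["color"] if regions else "unknown"

def locate(regions):
    """locate: read out the position of the attended region."""
    return regions[0]["pos"] if regions else "unknown"

def answer(question, scene):
    """Instantiate a module layout from a (pre-parsed) question.

    Note how 'where is the dog?' and 'where is the cat?' share the
    locate module, while 'what color is the dog?' shares the find[dog]
    module with 'where is the dog?' -- the substructure reuse the
    abstract describes.
    """
    layouts = {
        "what color is the dog?": lambda s: describe_color(find(s, "dog")),
        "where is the dog?":      lambda s: locate(find(s, "dog")),
        "where is the cat?":      lambda s: locate(find(s, "cat")),
    }
    return layouts[question](scene)

scene = [
    {"label": "dog", "color": "brown", "pos": "on the rug"},
    {"label": "cat", "color": "gray",  "pos": "on the sofa"},
]

print(answer("what color is the dog?", scene))  # brown
print(answer("where is the cat?", scene))       # on the sofa
```

In the actual approach, the layouts are derived automatically from each question's linguistic substructure and the modules are jointly trained neural components rather than hand-written functions.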

Journal ArticleDOI
22 Sep 2017
TL;DR: There are concerns about using synthetic phenolic antioxidants as food additives because of the reported negative effects on human health, so a replacement of these synthetics by antioxidant extractions from various foods has been proposed.
Abstract: There are concerns about using synthetic phenolic antioxidants such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) as food additives because of the reported negative effects on human health. Thus, a replacement of these synthetics by antioxidant extractions from various foods has been proposed. More than 8000 different phenolic compounds have been characterized; fruits and vegetables are the prime sources of natural antioxidants. In order to extract, measure, and identify bioactive compounds from a wide variety of fruits and vegetables, researchers use multiple techniques and methods. This review includes a brief description of a wide range of different assays. The antioxidant, antimicrobial, and anticancer properties of phenolic natural products from fruits and vegetables are also discussed.