Journal ArticleDOI
TL;DR: The addition of ixazomib to a regimen of lenalidomide and dexamethasone was associated with significantly longer progression-free survival; the additional toxic effects with this all-oral regimen were limited.
Abstract: Background: Ixazomib is an oral proteasome inhibitor that is currently being studied for the treatment of multiple myeloma. Methods: In this double-blind, placebo-controlled, phase 3 trial, we randomly assigned 722 patients who had relapsed, refractory, or relapsed and refractory multiple myeloma to receive ixazomib plus lenalidomide–dexamethasone (ixazomib group) or placebo plus lenalidomide–dexamethasone (placebo group). The primary end point was progression-free survival. Results: Progression-free survival was significantly longer in the ixazomib group than in the placebo group at a median follow-up of 14.7 months (median progression-free survival, 20.6 months vs. 14.7 months; hazard ratio for disease progression or death in the ixazomib group, 0.74; P=0.01); a benefit with respect to progression-free survival was observed with the ixazomib regimen, as compared with the placebo regimen, in all prespecified patient subgroups, including in patients with high-risk cytogenetic abnormalities. The overall rates of...

821 citations


Journal ArticleDOI
TL;DR: Evidence-based practice includes, in part, implementation of the findings of well-conducted quality research studies, so being able to critique quantitative research is an important skill for nurses.
Abstract: Evidence-based practice includes, in part, implementation of the findings of well-conducted quality research studies. So being able to critique quantitative research is an important skill for nurses. Consideration must be given not only to the results of the study but also the rigour of the research. Rigour refers to the extent to which the researchers worked to enhance the quality of the studies. In quantitative research, this is achieved through measurement of the validity and reliability.1 Validity is defined as the extent to which a concept is accurately measured in a quantitative study. For example, a survey designed to explore depression but which actually measures anxiety would not be considered valid. The second measure of quality in a quantitative study is reliability, or the accuracy of an instrument. In other words, the extent to which a research instrument consistently has the same results if it is used in the same situation on repeated occasions. A simple example of validity and reliability is an alarm clock that rings at 7:00 each morning, but is set for 6:30. It is very reliable (it consistently rings the same time each day), but is not valid (it is not ringing at the desired time). It's important to consider validity and reliability of the data collection tools (instruments) when either conducting or critiquing research. There are three major types of validity; these are described in Table 1 (Types of validity). The first category is content validity. This category looks at whether the instrument adequately covers …

821 citations


Journal ArticleDOI
TL;DR: What is known about the dynamics of the ER, what questions remain, and how coordinated responses add to the layers of regulation in this dynamic organelle are discussed.
Abstract: The endoplasmic reticulum (ER) is a large, dynamic structure that serves many roles in the cell, including calcium storage, protein synthesis and lipid metabolism. The diverse functions of the ER are performed by distinct domains, consisting of tubules, sheets and the nuclear envelope. Several proteins that contribute to the overall architecture and dynamics of the ER have been identified, but many questions remain as to how the ER changes shape in response to cellular cues, cell type, cell cycle state and during development of the organism. Here we discuss what is known about the dynamics of the ER, what questions remain, and how coordinated responses add to the layers of regulation in this dynamic organelle.

821 citations


Journal ArticleDOI
21 Jun 2016-JAMA
TL;DR: Colonoscopy, flexible sigmoidoscopy, CT colonography, and stool tests for colorectal cancer screening in average-risk adults have differing levels of evidence supporting their use, differing ability to detect cancer and precursor lesions, and differing risks of serious adverse events.
Abstract: Importance Colorectal cancer (CRC) remains a significant cause of morbidity and mortality in the United States. Objective To systematically review the effectiveness, diagnostic accuracy, and harms of screening for CRC. Data Sources Searches of MEDLINE, PubMed, and the Cochrane Central Register of Controlled Trials for relevant studies published from January 1, 2008, through December 31, 2014, with surveillance through February 23, 2016. Study Selection English-language studies conducted in asymptomatic populations at general risk of CRC. Data Extraction and Synthesis Two reviewers independently appraised the articles and extracted relevant study data from fair- or good-quality studies. Random-effects meta-analyses were conducted. Main Outcomes and Measures Colorectal cancer incidence and mortality, test accuracy in detecting CRC or adenomas, and serious adverse events. Results Four pragmatic randomized clinical trials (RCTs) evaluating 1-time or 2-time flexible sigmoidoscopy (n = 458 002) were associated with decreased CRC-specific mortality compared with no screening (incidence rate ratio, 0.73; 95% CI, 0.66-0.82). Five RCTs with multiple rounds of biennial screening with guaiac-based fecal occult blood testing (n = 419 966) showed reduced CRC-specific mortality (relative risk [RR], 0.91; 95% CI, 0.84-0.98, at 19.5 years to RR, 0.78; 95% CI, 0.65-0.93, at 30 years). Seven studies of computed tomographic colonography (CTC) with bowel preparation demonstrated per-person sensitivity and specificity to detect adenomas 6 mm and larger comparable with colonoscopy (sensitivity from 73% [95% CI, 58%-84%] to 98% [95% CI, 91%-100%]; specificity from 89% [95% CI, 84%-93%] to 91% [95% CI, 88%-93%]); variability and imprecision may be due to differences in study designs or CTC protocols. Sensitivity of colonoscopy to detect adenomas 6 mm or larger ranged from 75% (95% CI, 63%-84%) to 93% (95% CI, 88%-96%). On the basis of a single stool specimen, the most commonly evaluated families of fecal immunochemical tests (FITs) demonstrated good sensitivity (range, 73%-88%) and specificity (range, 90%-96%). One study (n = 9989) found that FIT plus stool DNA test had better sensitivity in detecting CRC than FIT alone (92%) but lower specificity (84%). Serious adverse events from colonoscopy in asymptomatic persons included perforations (4/10 000 procedures, 95% CI, 2-5 in 10 000) and major bleeds (8/10 000 procedures, 95% CI, 5-14 in 10 000). Computed tomographic colonography may have harms resulting from low-dose ionizing radiation exposure or identification of extracolonic findings. Conclusions and Relevance Colonoscopy, flexible sigmoidoscopy, CTC, and stool tests have differing levels of evidence to support their use, ability to detect cancer and precursor lesions, and risk of serious adverse events in average-risk adults. Although CRC screening has a large body of supporting evidence, additional research is still needed.

821 citations


Journal ArticleDOI
16 Feb 2018-Science
TL;DR: Clinical benefit was associated with loss-of-function mutations in the PBRM1 gene, which encodes a subunit of the PBAF switch-sucrose nonfermentable (SWI/SNF) chromatin remodeling complex, and may alter global tumor-cell expression profiles to influence responsiveness to immune checkpoint therapy.
Abstract: Immune checkpoint inhibitors targeting the programmed cell death 1 receptor (PD-1) improve survival in a subset of patients with clear cell renal cell carcinoma (ccRCC). To identify genomic alterations in ccRCC that correlate with response to anti–PD-1 monotherapy, we performed whole-exome sequencing of metastatic ccRCC from 35 patients. We found that clinical benefit was associated with loss-of-function mutations in the PBRM1 gene ( P = 0.012), which encodes a subunit of the PBAF switch-sucrose nonfermentable (SWI/SNF) chromatin remodeling complex. We confirmed this finding in an independent validation cohort of 63 ccRCC patients treated with PD-1 or PD-L1 (PD-1 ligand) blockade therapy alone or in combination with anti–CTLA-4 (cytotoxic T lymphocyte-associated protein 4) therapies ( P = 0.0071). Gene-expression analysis of PBAF-deficient ccRCC cell lines and PBRM1 -deficient tumors revealed altered transcriptional output in JAK-STAT (Janus kinase–signal transducers and activators of transcription), hypoxia, and immune signaling pathways. PBRM1 loss in ccRCC may alter global tumor-cell expression profiles to influence responsiveness to immune checkpoint therapy.

821 citations


Posted Content
TL;DR: A simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters and shows that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows.
Abstract: Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer-based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%).
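
As a concrete illustration of the intra-layer split described above, the sketch below partitions a two-layer MLP block column-wise (first GEMM) and row-wise (second GEMM) across simulated workers in NumPy, with the final sum standing in for the single all-reduce; this is a toy reconstruction of the idea, not the paper's PyTorch implementation, and all sizes and names are illustrative.

```python
# Illustrative sketch (NumPy) of intra-layer (tensor) model parallelism for an
# MLP block: the first weight matrix is split by columns, the second by rows;
# each "partition" computes a partial result and the final sum stands in for
# the all-reduce across GPUs.
import numpy as np

rng = np.random.default_rng(0)
batch, d_model, d_ff, n_parts = 4, 8, 32, 2   # toy sizes, illustrative only

X = rng.standard_normal((batch, d_model))
W1 = rng.standard_normal((d_model, d_ff))     # first GEMM, split by columns
W2 = rng.standard_normal((d_ff, d_model))     # second GEMM, split by rows

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

# Serial reference computation.
Y_ref = gelu(X @ W1) @ W2

# "Model parallel" version: each partition holds one column block of W1 and the
# matching row block of W2; the partial outputs are summed (the all-reduce).
W1_cols = np.split(W1, n_parts, axis=1)
W2_rows = np.split(W2, n_parts, axis=0)
Y_mp = sum(gelu(X @ W1_cols[p]) @ W2_rows[p] for p in range(n_parts))

print(np.allclose(Y_ref, Y_mp))  # True: the split is mathematically exact
```

Because GeLU is applied element-wise, the column/row split needs no synchronization between the two matrix multiplications, which is the property this kind of split exploits.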

821 citations


Posted Content
TL;DR: A quadruplet deep network using margin-based online hard negative mining is proposed based on the quadruplet loss for person ReID, which can lead to model outputs with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss.
Abstract: Person re-identification (ReID) is an important task in wide-area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss have become a common framework for person ReID. However, the triplet loss focuses mainly on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to model outputs with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has better generalization ability and can achieve higher performance on the testing set. In particular, a quadruplet deep network using margin-based online hard negative mining is proposed based on the quadruplet loss for person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets, which clearly demonstrates the effectiveness of our proposed method.
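
A minimal sketch of a quadruplet-style margin loss is given below, assuming the common formulation with a standard triplet term plus a second term that pushes the anchor–positive distance below the distance between two negatives from different identities; the margins, distance measure and names are illustrative and may differ from the paper's exact loss and its margin-based hard-negative mining.

```python
import numpy as np

def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    d = a - b
    return float(d @ d)

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """One common form of the quadruplet loss: the usual triplet term plus a
    second term comparing the positive pair against two samples (neg1, neg2)
    drawn from two other identities."""
    d_ap = sq_dist(anchor, positive)
    triplet_term = max(0.0, d_ap - sq_dist(anchor, neg1) + margin1)
    cross_term = max(0.0, d_ap - sq_dist(neg1, neg2) + margin2)
    return triplet_term + cross_term

# Toy usage with random 128-d embeddings standing in for network outputs.
rng = np.random.default_rng(0)
a, p, n1, n2 = (rng.standard_normal(128) for _ in range(4))
print(quadruplet_loss(a, p, n1, n2))
```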

821 citations


Posted Content
TL;DR: The hierarchical-DQN framework as discussed by the authors integrates hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning, allowing for flexible goal specifications, such as functions over entities and relations.
Abstract: Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'.
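
The two-level control loop can be sketched as below; the learned Q-functions are replaced by random choices and the environment and goal predicate are dummies, so this only shows how the meta-controller's goal selection wraps the controller's action loop, not the paper's actual learning algorithm. All names (GOALS, env_step, goal_reached) are illustrative.

```python
import random

# Skeleton of the hierarchical loop: the meta-controller picks an intrinsic
# goal, the controller takes atomic actions until the goal is reached or the
# episode ends. Q-functions are replaced by random choices here.
GOALS = ["key", "door", "ladder"]
ACTIONS = ["left", "right", "up", "down"]

def env_step(state, action):
    """Dummy environment: returns (next_state, extrinsic_reward, done)."""
    next_state = state + 1
    return next_state, 0.0, next_state >= 50

def goal_reached(state, goal):
    """Dummy critic deciding whether the intrinsic goal was attained."""
    return random.random() < 0.1

random.seed(0)
state, done, extrinsic_return = 0, False, 0.0
while not done:
    goal = random.choice(GOALS)              # meta-controller: policy over goals
    reached = False
    while not done and not reached:
        action = random.choice(ACTIONS)      # controller: policy over actions
        state, r_ext, done = env_step(state, action)
        extrinsic_return += r_ext            # signal for the meta-controller
        reached = goal_reached(state, goal)
        intrinsic_reward = 1.0 if reached else 0.0   # signal for the controller
print("episode finished, extrinsic return:", extrinsic_return)
```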

820 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the potential for the eLISA space-based interferometer to detect the stochastic gravitational wave background produced by strong first-order cosmological phase transitions.
Abstract: We investigate the potential for the eLISA space-based interferometer to detect the stochastic gravitational wave background produced by strong first-order cosmological phase transitions. We discuss the resulting contributions from bubble collisions, magnetohydrodynamic turbulence, and sound waves to the stochastic background, and estimate the total corresponding signal predicted in gravitational waves. The projected sensitivity of eLISA to cosmological phase transitions is computed in a model-independent way for various detector designs and configurations. By applying these results to several specific models, we demonstrate that eLISA is able to probe many well-motivated scenarios beyond the Standard Model of particle physics predicting strong first-order cosmological phase transitions in the early Universe.
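
For context, detectability of such a background against a detector's effective sensitivity curve is usually quantified by a signal-to-noise ratio of the form below (the paper's exact conventions, e.g. channel-counting factors, may differ), where $\mathcal{T}$ is the observation time, $h^2\Omega_{\mathrm{GW}}(f)$ the predicted background and $h^2\Omega_{\mathrm{Sens}}(f)$ the sensitivity of the chosen eLISA configuration expressed as an energy density:

$$\mathrm{SNR} = \sqrt{\,\mathcal{T} \int_{f_{\min}}^{f_{\max}} \left[\frac{h^{2}\Omega_{\mathrm{GW}}(f)}{h^{2}\Omega_{\mathrm{Sens}}(f)}\right]^{2} \mathrm{d}f\,}.$$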

820 citations


Proceedings Article
03 Jul 2018
TL;DR: A Mutual Information Neural Estimator (MINE) is presented that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent, and applied to improve adversarially trained generative models.
Abstract: We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.
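
The bound MINE optimizes is the Donsker–Varadhan representation of the KL divergence; the NumPy sketch below evaluates that lower bound on correlated Gaussians with a fixed, hand-picked statistics function in place of the trained network, so the estimate is loose by construction and the setup (sample size, the tanh test function) is purely illustrative.

```python
import numpy as np

# Donsker–Varadhan lower bound that MINE maximizes:
#   I(X;Z) >= E_joint[T(x,z)] - log E_marginals[exp(T(x,z))].
# In MINE, T is a neural network trained by gradient ascent; here it is fixed.
rng = np.random.default_rng(0)
n, rho = 200_000, 0.8
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def T(x, z):
    return np.tanh(x * z)                    # bounded statistics function (toy)

joint_term = np.mean(T(x, z))
z_shuffled = rng.permutation(z)              # samples from the product of marginals
marginal_term = np.log(np.mean(np.exp(T(x, z_shuffled))))

print("DV lower bound:", joint_term - marginal_term)
print("true MI       :", -0.5 * np.log(1 - rho**2))   # Gaussian closed form
```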

820 citations


Journal ArticleDOI
TL;DR: The percentage of children with elevated blood lead levels increased after the water source change, particularly in socioeconomically disadvantaged neighborhoods; geospatial analysis identified these neighborhoods as having the greatest increases in elevated blood lead levels and informed response prioritization during the now-declared public health emergency.
Abstract: Objectives. We analyzed differences in pediatric elevated blood lead level incidence before and after Flint, Michigan, introduced a more corrosive water source into an aging water system without adequate corrosion control. Methods. We reviewed blood lead levels for children younger than 5 years before (2013) and after (2015) water source change in Greater Flint, Michigan. We assessed the percentage of elevated blood lead levels in both time periods, and identified geographical locations through spatial analysis. Results. Incidence of elevated blood lead levels increased from 2.4% to 4.9% (P<.05) after water source change, and neighborhoods with the highest water lead levels experienced a 6.6% increase. No significant change was seen outside the city. Geospatial analysis identified disadvantaged neighborhoods as having the greatest elevated blood lead level increases and informed response prioritization during the now-declared public health emergency. Conclusions. The percentage of children with elevated blood lead levels increased after water source change, particularly in socioeconomically disadvantaged neighborhoods. Water is a growing source of childhood lead exposure because of aging infrastructure. (Am J Public Health. 2016;106:283–290. doi:10.2105/AJPH.2015.303003)

Posted Content
TL;DR: In this paper, a combination of exhaustive and reinforcement-learning-based search is used to discover multiple novel activation functions; the best discovered function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, named Swish, tends to work better than ReLU on deeper models across a number of challenging datasets.
Abstract: The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9\% for Mobile NASNet-A and 0.6\% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
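
The discovered function is simple to drop into existing code; a minimal NumPy version of Swish alongside ReLU is shown below, using the formula from the abstract (β = 1 is used as the default here, a common choice).

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: f(x) = x * sigmoid(beta * x)."""
    return x / (1.0 + np.exp(-beta * x))

def relu(x):
    return np.maximum(0.0, x)

x = np.linspace(-4, 4, 9)
print(np.round(swish(x), 3))   # smooth, with a small negative dip for x < 0
print(relu(x))                 # hard zero for all negative inputs
```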

Journal ArticleDOI
TL;DR: In this experience, approximately one-third of ipilimumab-treated patients required systemic corticosteroids, and almost one-third of those required further immune suppression with anti-TNFα therapy, neither of which affected OS or TTF.
Abstract: Purpose Ipilimumab is a standard treatment for metastatic melanoma, but immune-related adverse events (irAEs) are common and can be severe. We reviewed our large, contemporary experience with ipilimumab treatment outside of clinical trials to determine the frequency of use of systemic corticosteroid or anti-tumor necrosis factor α (anti-TNFα) therapy and the effect of these therapies on overall survival (OS) and time to treatment failure (TTF). Patients and Methods We reviewed retrospectively the medical records of patients with melanoma who had received treatment between April 2011 and July 2013 with ipilimumab at the standard dose of 3 mg/kg. We collected data on patient demographics, previous and subsequent treatments, number of ipilimumab doses, irAEs and how they were treated, and overall survival. Results Of the 298 patients, 254 (85%) experienced an irAE of any grade. Fifty-six patients (19%) discontinued therapy because of an irAE, most commonly diarrhea. Overall, 103 patients (35%) required syste...

Journal ArticleDOI
01 Mar 2018
TL;DR: The development of wearable sweat sensors is examined, considering the challenges and opportunities for such technology in the context of personalized healthcare and the requirements of the underlying components.
Abstract: Sweat potentially contains a wealth of physiologically relevant information, but has traditionally been an underutilized resource for non-invasive health monitoring. Recent advances in wearable sweat sensors have overcome many of the historic drawbacks of sweat sensing and such sensors now offer methods of gleaning molecular-level insight into the dynamics of our bodies. Here we review key developments in sweat sensing technology. We highlight the potential value of sweat-based wearable sensors, examine state-of-the-art devices and the requirements of the underlying components, and consider ways to tackle data integrity issues within these systems. We also discuss challenges and opportunities for wearable sweat sensors in the development of personalized healthcare.

Journal ArticleDOI
TL;DR: A polynomial-time algorithm with successive MBS placement is proposed, in which the MBSs are placed sequentially, starting on the perimeter of the uncovered GTs and following a spiral path toward the center, until all GTs are covered.
Abstract: In terrestrial communication networks without fixed infrastructure, unmanned aerial vehicle-mounted mobile base stations (MBSs) provide an efficient solution to achieve wireless connectivity. This letter aims to minimize the number of MBSs needed to provide wireless coverage for a group of distributed ground terminals (GTs), ensuring that each GT is within the communication range of at least one MBS. We propose a polynomial-time algorithm with successive MBS placement, where the MBSs are placed sequentially starting on the area perimeter of the uncovered GTs along a spiral path toward the center, until all GTs are covered. Numerical results show that the proposed algorithm performs favorably compared with other schemes in terms of the number of required MBSs as well as time complexity.
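
A loose sketch of the sequential-placement idea is shown below: it repeatedly places one MBS at the outermost uncovered terminal and removes everything within range, which mimics "start at the perimeter and work inward" but is not the paper's exact spiral algorithm; the coordinates, coverage radius and outermost-point heuristic are all illustrative.

```python
import numpy as np

# Simplified greedy stand-in for sequential MBS placement: pick the uncovered
# ground terminal furthest from the centroid of the uncovered set, place an
# MBS there, and drop every terminal within the coverage radius. Repeat until
# all terminals are covered.
rng = np.random.default_rng(0)
gts = rng.uniform(0, 1000, size=(60, 2))     # ground terminal positions (metres)
coverage_radius = 200.0

uncovered = gts.copy()
mbs_positions = []
while len(uncovered) > 0:
    centroid = uncovered.mean(axis=0)
    outer_idx = np.argmax(np.linalg.norm(uncovered - centroid, axis=1))
    mbs = uncovered[outer_idx]               # place an MBS at the outermost GT
    mbs_positions.append(mbs)
    dists = np.linalg.norm(uncovered - mbs, axis=1)
    uncovered = uncovered[dists > coverage_radius]   # remove newly covered GTs

print("MBSs used:", len(mbs_positions))
```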

Journal ArticleDOI
TL;DR: The impact of chloroplast genome sequences on understanding the origins of economically important cultivated species and changes that have taken place during domestication are discussed.
Abstract: Chloroplasts play a crucial role in sustaining life on earth. The availability of over 800 sequenced chloroplast genomes from a variety of land plants has enhanced our understanding of chloroplast biology, intracellular gene transfer, conservation, diversity, and the genetic basis by which chloroplast transgenes can be engineered to enhance plant agronomic traits or to produce high-value agricultural or biomedical products. In this review, we discuss the impact of chloroplast genome sequences on understanding the origins of economically important cultivated species and changes that have taken place during domestication. We also discuss the potential biotechnological applications of chloroplast genomes.

Journal ArticleDOI
TL;DR: In this paper, a global expert response study was conducted to elicit projections for the proportion of elective surgery that would be cancelled or postponed during the 12 weeks of peak disruption due to the COVID-19 pandemic.
Abstract: Background: The COVID-19 pandemic has disrupted routine hospital services globally. This study estimated the total number of adult elective operations that would be cancelled worldwide during the 12 weeks of peak disruption due to COVID-19. Methods: A global expert response study was conducted to elicit projections for the proportion of elective surgery that would be cancelled or postponed during the 12 weeks of peak disruption. A Bayesian β-regression model was used to estimate 12-week cancellation rates for 190 countries. Elective surgical case-mix data, stratified by specialty and indication (surgery for cancer versus benign disease), were determined. This case mix was applied to country-level surgical volumes. The 12-week cancellation rates were then applied to these figures to calculate the total number of cancelled operations. Results: The best estimate was that 28 404 603 operations would be cancelled or postponed during the peak 12 weeks of disruption due to COVID-19 (2 367 050 operations per week). Most would be operations for benign disease (90·2 per cent, 25 638 922 of 28 404 603). The overall 12-week cancellation rate would be 72·3 per cent. Globally, 81·7 per cent of operations for benign conditions (25 638 922 of 31 378 062), 37·7 per cent of cancer operations (2 324 070 of 6 162 311) and 25·4 per cent of elective caesarean sections (441 611 of 1 735 483) would be cancelled or postponed. If countries increased their normal surgical volume by 20 per cent after the pandemic, it would take a median of 45 weeks to clear the backlog of operations resulting from COVID-19 disruption. Conclusion: A very large number of operations will be cancelled or postponed owing to disruption caused by COVID-19. Governments should mitigate against this major burden on patients by developing recovery plans and implementing strategies to restore surgical activity safely.

Journal ArticleDOI
TL;DR: Contrary to current dogma, this study suggests that steatosis can progress to NASH and clinically significant fibrosis.

Journal ArticleDOI
TL;DR: A frequentist analogue to SUCRA is proposed, which is based solely on the point estimates and standard errors of the frequentist network meta-analysis estimates under a normality assumption and can easily be calculated as means of one-sided p-values.
Abstract: Network meta-analysis is used to compare three or more treatments for the same condition. Within a Bayesian framework, for each treatment the probability of being best, or, more generally, the probability that it has a certain rank can be derived from the posterior distributions of all treatments. The treatments can then be ranked by the surface under the cumulative ranking curve (SUCRA). For comparing treatments in a network meta-analysis, we propose a frequentist analogue to SUCRA, which we call the P-score and which works without resampling. P-scores are based solely on the point estimates and standard errors of the frequentist network meta-analysis estimates under a normality assumption and can easily be calculated as means of one-sided p-values. They measure the mean extent of certainty that a treatment is better than the competing treatments. Using case studies of network meta-analysis in diabetes and depression, we demonstrate that the numerical values of SUCRA and the P-score are nearly identical. Ranking treatments in frequentist network meta-analysis works without resampling. Like the SUCRA values, P-scores induce a ranking of all treatments that mostly follows that of the point estimates, but takes precision into account. However, neither SUCRA nor the P-score offers a major advantage compared to looking at credible or confidence intervals.
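
A minimal sketch of the P-score computation is given below, assuming larger effect estimates are better and that the standard error of each pairwise difference is available; the treatments, effects and standard errors are made-up toy numbers, not data from the paper.

```python
import numpy as np
from math import erf, sqrt

# P-score idea: for each treatment, average the one-sided probabilities (under
# a normal approximation) that it is better than each competitor, using only
# point estimates and standard errors of the pairwise differences.
def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

treatments = ["A", "B", "C"]
effect = np.array([0.50, 0.30, 0.10])        # estimated effects vs. a common reference
se_diff = np.array([[0.00, 0.15, 0.20],      # SE of each pairwise difference
                    [0.15, 0.00, 0.18],
                    [0.20, 0.18, 0.00]])

n = len(treatments)
p_scores = []
for i in range(n):
    probs = [std_normal_cdf((effect[i] - effect[j]) / se_diff[i, j])
             for j in range(n) if j != i]    # one-sided p-value: i better than j
    p_scores.append(np.mean(probs))

for t, p in zip(treatments, p_scores):
    print(f"{t}: P-score = {p:.3f}")
```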

Journal ArticleDOI
15 Nov 2019-Science
TL;DR: One-dimensional bunched platinum-nickel alloy nanocages with a Pt-skin structure for the oxygen reduction reaction display high mass activity and specific activity, nearly 17 and 14 times higher, respectively, than those of a commercial platinum-on-carbon (Pt/C) catalyst.
Abstract: Development of efficient and robust electrocatalysts is critical for practical fuel cells. We report one-dimensional bunched platinum-nickel (Pt-Ni) alloy nanocages with a Pt-skin structure for the oxygen reduction reaction that display high mass activity (3.52 amperes per milligram platinum) and specific activity (5.16 milliamperes per square centimeter platinum), or nearly 17 and 14 times higher as compared with a commercial platinum on carbon (Pt/C) catalyst. The catalyst exhibits high stability with negligible activity decay after 50,000 cycles. Both the experimental results and theoretical calculations reveal the existence of fewer strongly bonded platinum-oxygen (Pt-O) sites induced by the strain and ligand effects. Moreover, the fuel cell assembled by this catalyst delivers a current density of 1.5 amperes per square centimeter at 0.6 volts and can operate steadily for at least 180 hours.

Book ChapterDOI
08 Oct 2016
TL;DR: The Connectionist Text Proposal Network (CTPN) as mentioned in this paper detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps, and develops a vertical anchor mechanism that jointly predicts location and text/non-text score of each fixed-width proposal.
Abstract: We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural images. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of the image, making it powerful in detecting extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post-filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s/image, by using the very deep VGG16 model [27]. Online demo is available: http://textdet.com/.

Proceedings Article
01 Jan 2016
TL;DR: In this article, the Dynamic Filter Network (DFN) is proposed, where filters are generated dynamically conditioned on an input; a wide variety of filtering operations can be learned this way, including local spatial transformations, selective (de)blurring or adaptive feature extraction.
Abstract: In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation.
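
The sketch below illustrates the dynamic-filter idea in NumPy: a stand-in "filter-generating network" maps pooled statistics of the input to a 3×3 filter that is then applied to that same input, so the filter changes per sample. The generator, pooling and normalization are illustrative simplifications of the paper's learned filter-generating path.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(img, kernel):
    """Plain 2-D correlation with zero padding (single channel)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

W_gen = rng.standard_normal((9, 4)) * 0.1    # stand-in filter-generating network

def dynamic_filter_layer(img):
    feats = np.array([img.mean(), img.std(), img.max(), img.min()])  # pooled stats
    kernel = (W_gen @ feats).reshape(3, 3)   # input-conditioned 3x3 filter
    kernel /= np.abs(kernel).sum() + 1e-8    # simple normalization
    return conv2d_same(img, kernel)

img = rng.standard_normal((16, 16))
print(dynamic_filter_layer(img).shape)       # (16, 16); the filter differs per input
```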

Journal ArticleDOI
01 Jul 2018-Allergy
TL;DR: In this paper, an evidence-and consensus-based guideline was developed following the methods recommended by Cochrane and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group.
Abstract: This evidence- and consensus-based guideline was developed following the methods recommended by Cochrane and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group. The conference was held on 1 December 2016. It is a joint initiative of the Dermatology Section of the European Academy of Allergology and Clinical Immunology (EAACI), the EU-funded network of excellence, the Global Allergy and Asthma European Network (GA²LEN), the European Dermatology Forum (EDF) and the World Allergy Organization (WAO) with the participation of 48 delegates of 42 national and international societies. This guideline was acknowledged and accepted by the European Union of Medical Specialists (UEMS). Urticaria is a frequent, mast cell-driven disease, presenting with wheals, angioedema, or both. The lifetime prevalence for acute urticaria is approximately 20%. Chronic spontaneous urticaria and other chronic forms of urticaria are disabling, impair quality of life and affect performance at work and school. This guideline covers the definition and classification of urticaria, taking into account the recent progress in identifying its causes, eliciting factors and pathomechanisms. In addition, it outlines evidence-based diagnostic and therapeutic approaches for the different subtypes of urticaria.

Posted Content
TL;DR: Two approaches to explaining predictions of deep learning models are presented, one method which computes the sensitivity of the prediction with respect to changes in the input and one approach which meaningfully decomposes the decision in terms of the input variables.
Abstract: With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. Furthermore, it presents two approaches to explaining predictions of deep learning models, one method which computes the sensitivity of the prediction with respect to changes in the input and one approach which meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.
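
Of the two approaches mentioned, the first (sensitivity analysis) can be sketched as below: estimate how strongly a model's prediction reacts to each input feature via finite-difference gradients on a tiny, made-up network; the decomposition-based approach (e.g. layer-wise relevance propagation) is not shown, and the network weights are illustrative only.

```python
import numpy as np

# Sensitivity analysis: |d prediction / d x_i| estimated by central differences
# on a small two-layer network with random (illustrative) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal(4), 0.1

def predict(x):
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

def sensitivity(x, eps=1e-5):
    grads = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grads[i] = (predict(xp) - predict(xm)) / (2 * eps)
    return np.abs(grads)

x = np.array([0.5, -1.0, 2.0])
print("prediction :", predict(x))
print("sensitivity:", sensitivity(x))   # larger value = more influential feature
```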

Proceedings Article
30 Apr 2020
TL;DR: This article proposed BERTScore, an automatic evaluation metric for text generation, which computes a similarity score for each token in the candidate sentence with each token from the reference sentence. But instead of exact matches, they compute token similarity using contextual embeddings.
Abstract: We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
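
The core computation can be sketched as follows, with random vectors standing in for the contextual embeddings that the real metric obtains from a BERT-style encoder: build the pairwise cosine-similarity matrix and greedily match each token to its most similar counterpart to obtain precision, recall and F1.

```python
import numpy as np

# BERTScore-style scoring with placeholder embeddings (random here; contextual
# BERT embeddings in the real metric).
rng = np.random.default_rng(0)
cand = rng.standard_normal((5, 16))   # candidate sentence: 5 token embeddings
ref = rng.standard_normal((6, 16))    # reference sentence: 6 token embeddings

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sim = normalize(cand) @ normalize(ref).T      # pairwise cosine similarities

precision = sim.max(axis=1).mean()   # each candidate token -> best reference token
recall = sim.max(axis=0).mean()      # each reference token -> best candidate token
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.3f}  R={recall:.3f}  F={f1:.3f}")
```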

Journal ArticleDOI
TL;DR: Spironolactone was the most effective blood pressure-lowering treatment, throughout the distribution of baseline plasma renin; but its margin of superiority and likelihood of being the best drug for the individual patient were many-fold greater in the lower than higher ends of the distribution.

Journal ArticleDOI
TL;DR: This systematic review is an update examining the relationships between objectively and subjectively measured sedentary behaviour and health indicators in children and youth aged 5-17 years; it found that higher durations/frequencies of screen time and television viewing were associated with unfavourable body composition.
Abstract: Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, the meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01), indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.

Journal ArticleDOI
TL;DR: In this paper, the membrane potentials of spiking neurons are treated as differentiable signals, where discontinuities at spike times are considered as noise, which enables an error backpropagation mechanism for deep spiking neural networks.
Abstract: Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
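
A loose illustration of treating the membrane potential as the differentiable signal is sketched below: the forward pass uses a hard spike threshold, while the derivative recorded for backpropagation comes from a smooth function of the membrane potential, effectively ignoring the discontinuity. This is a simplified stand-in, not the paper's exact training scheme, and the neuron model and constants are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron driven by an input sequence; records a
    smooth surrogate for d(spike)/d(membrane potential) at every step."""
    v, spikes, surrogate_grads = 0.0, [], []
    for i in inputs:
        v = decay * v + i                     # membrane potential update
        s = 1.0 if v >= threshold else 0.0    # non-differentiable spike
        # Smooth bump centred at the threshold used in place of the true
        # (undefined) derivative of the spike function.
        g = sigmoid(v - threshold) * (1 - sigmoid(v - threshold))
        spikes.append(s)
        surrogate_grads.append(g)
        if s:                                 # reset after a spike
            v = 0.0
    return np.array(spikes), np.array(surrogate_grads)

inputs = np.array([0.3, 0.6, 0.4, 0.9, 0.1, 0.8])
spikes, grads = lif_forward(inputs)
print("spikes          :", spikes)
print("surrogate dS/dV :", np.round(grads, 3))
```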

Journal Article
TL;DR: This article found that the more trustworthy the host is perceived to be from her photo, the higher the price of the listing and the probability of its being chosen, and that a host's reputation, communicated by her online review scores, has no effect on listing price or likelihood of consumer booking.
Abstract: ‘Sharing economy’ platforms such as Airbnb have recently flourished in the tourism industry. The prominent appearance of sellers' photos on these platforms motivated our study. We suggest that the presence of these photos can have a significant impact on guests' decision making. Specifically, we contend that guests infer the host's trustworthiness from these photos, and that their choice is affected by this inference. In an empirical analysis of Airbnb's data and a controlled experiment, we found that the more trustworthy the host is perceived to be from her photo, the higher the price of the listing and the probability of its being chosen. We also find that a host's reputation, communicated by her online review scores, has no effect on listing price or likelihood of consumer booking. We further demonstrate that if review scores are varied experimentally, they affect guests' decisions, but the role of the host's photo remains significant.

Journal ArticleDOI
15 Mar 2018-Nature
TL;DR: Measurements of a phononic quadrupole topological insulator are reported and topological corner states are found that are an important stepping stone to the experimental realization of topologically protected wave guides in higher dimensions, and thereby open up a new path for the design of metamaterials.
Abstract: The modern theory of charge polarization in solids is based on a generalization of Berry’s phase. The possibility of the quantization of this phase arising from parallel transport in momentum space is essential to our understanding of systems with topological band structures. Although based on the concept of charge polarization, this same theory can also be used to characterize the Bloch bands of neutral bosonic systems such as photonic or phononic crystals. The theory of this quantized polarization has recently been extended from the dipole moment to higher multipole moments. In particular, a two-dimensional quantized quadrupole insulator is predicted to have gapped yet topological one-dimensional edge modes, which stabilize zero-dimensional in-gap corner states. However, such a state of matter has not previously been observed experimentally. Here we report measurements of a phononic quadrupole topological insulator. We experimentally characterize the bulk, edge and corner physics of a mechanical metamaterial (a material with tailored mechanical properties) and find the predicted gapped edge and in-gap corner states. We corroborate our findings by comparing the mechanical properties of a topologically non-trivial system to samples in other phases that are predicted by the quadrupole theory. These topological corner states are an important stepping stone to the experimental realization of topologically protected wave guides in higher dimensions, and thereby open up a new path for the design of metamaterials.