
Journal ArticleDOI
TL;DR: In this article, the authors considered a multi-user MEC network powered by the WPT, where each energy-harvesting WD follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole either locally or remotely at the MEC server via task offloading.
Abstract: Finite battery lifetime and low computing capability of size-constrained wireless devices (WDs) have been longstanding performance limitations of many low-power wireless networks, e.g., wireless sensor networks and Internet of Things. The recent development of radio frequency-based wireless power transfer (WPT) and mobile edge computing (MEC) technologies provides a promising solution to fully remove these limitations so as to achieve sustainable device operation and enhanced computational capability. In this paper, we consider a multi-user MEC network powered by WPT, where each energy-harvesting WD follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole either locally or remotely at the MEC server via task offloading. In particular, we are interested in maximizing the (weighted) sum computation rate of all the WDs in the network by jointly optimizing the individual computing mode selection (i.e., local computing or offloading) and the system transmission time allocation (on WPT and task offloading). The major difficulty lies in the combinatorial nature of the multi-user computing mode selection and its strong coupling with the transmission time allocation. To tackle this problem, we first consider a decoupled optimization, where we assume that the mode selection is given and propose a simple bisection search algorithm to obtain the conditional optimal time allocation. On top of that, a coordinate descent method is devised to optimize the mode selection. The method is simple to implement but may suffer from high computational complexity in a large-size network. To address this problem, we further propose a joint optimization method based on the alternating direction method of multipliers (ADMM) decomposition technique, whose computational complexity grows much more slowly as the network size increases. Extensive simulations show that both of the proposed methods can efficiently achieve near-optimal performance under various network setups, and significantly outperform the other representative benchmark methods considered.
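For the decoupled step described above, once the computing-mode selection is fixed the problem reduces to a one-dimensional search over the transmission-time split. The sketch below illustrates that idea with a bisection on the derivative of a toy concave rate function; the objective, gains, and variable names are illustrative stand-ins, not the paper's actual expressions.

```python
# Bisection on a single scalar: the fraction of the frame allocated to WPT.
# The rate function below is a hypothetical concave stand-in, not the paper's model.
import numpy as np

def weighted_sum_rate(a, offload_gains, local_gains):
    """Toy concave objective in the WPT time fraction a in (0, 1)."""
    wpt = a                       # harvested energy grows with WPT time
    tx = 1.0 - a                  # remaining time shared by offloading users
    off = np.sum(offload_gains * tx * np.log2(1.0 + wpt / max(tx, 1e-9)))
    loc = np.sum(local_gains * wpt ** (1.0 / 3.0))
    return off + loc

def bisection_time_allocation(offload_gains, local_gains, tol=1e-6):
    """Bisection on the derivative of the (assumed) concave single-variable objective."""
    lo, hi = 1e-6, 1.0 - 1e-6
    grad = lambda a, h=1e-7: (weighted_sum_rate(a + h, offload_gains, local_gains)
                              - weighted_sum_rate(a - h, offload_gains, local_gains)) / (2 * h)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if grad(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star = bisection_time_allocation(np.array([1.0, 0.5]), np.array([0.3]))
print(f"optimal WPT fraction ~ {a_star:.4f}")
```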

563 citations


Journal ArticleDOI
TL;DR: A phase-locked loop (PLL) is a nonlinear negative-feedback control system that synchronizes its output in frequency as well as in phase with its input. PLLs are now widely used for the synchronization of power-electronics-based converters and also for monitoring and control purposes in different engineering fields, as mentioned in this paper.
Abstract: A phase-locked loop (PLL) is a nonlinear negative-feedback control system that synchronizes its output in frequency as well as in phase with its input. PLLs are now widely used for the synchronization of power-electronics-based converters and also for monitoring and control purposes in different engineering fields. In recent years, there have been many attempts to design more advanced PLLs for three-phase applications. The aim of this paper is to provide overviews of these attempts, which can be very useful for engineers and academic researchers.
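As a concrete illustration of the feedback structure described above, the following is a minimal discrete-time sketch of a single-phase PLL (phase detector, PI loop filter, and an integrator acting as the voltage-controlled oscillator). The gains and signal parameters are illustrative assumptions, not values tuned for any particular converter application.

```python
# Minimal single-phase PLL simulation: phase detector -> PI loop filter -> integrator.
# All numeric values are illustrative; the double-frequency ripple of the simple
# multiplier phase detector is left unfiltered on purpose.
import numpy as np

fs = 10_000.0                     # sample rate [Hz]
dt = 1.0 / fs
f_in = 51.0                       # actual input frequency [Hz] (off-nominal)
t = np.arange(0, 0.5, dt)
v_in = np.cos(2 * np.pi * f_in * t)

kp, ki = 200.0, 20_000.0          # PI loop-filter gains (illustrative)
theta_hat, omega_hat, integ = 0.0, 2 * np.pi * 50.0, 0.0

freq_log = []
for v in v_in:
    err = -v * np.sin(theta_hat)          # phase detector (~0.5*sin(phase error) + ripple)
    integ += ki * err * dt                # integral part of the PI loop filter
    omega_hat = 2 * np.pi * 50.0 + kp * err + integ
    theta_hat = (theta_hat + omega_hat * dt) % (2 * np.pi)   # integrator / "VCO"
    freq_log.append(omega_hat / (2 * np.pi))

print(f"estimated frequency after lock ~ {np.mean(freq_log[-500:]):.2f} Hz")
```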

563 citations


Journal ArticleDOI
TL;DR: Significant enhancements to MGD are described, including two new graphical user interfaces: the Multi Genome Viewer for exploring the genomes of multiple mouse strains and the Phenotype-Gene Expression matrix which was developed in collaboration with the Gene Expression Database (GXD) and allows researchers to compare gene expression and phenotype annotations for mouse genes.
Abstract: The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the community model organism genetic and genome resource for the laboratory mouse. MGD is the authoritative source for biological reference data sets related to mouse genes, gene functions, phenotypes, and mouse models of human disease. MGD is the primary outlet for official gene, allele and mouse strain nomenclature based on the guidelines set by the International Committee on Standardized Nomenclature for Mice. In this report we describe significant enhancements to MGD, including two new graphical user interfaces: (i) the Multi Genome Viewer for exploring the genomes of multiple mouse strains and (ii) the Phenotype-Gene Expression matrix which was developed in collaboration with the Gene Expression Database (GXD) and allows researchers to compare gene expression and phenotype annotations for mouse genes. Other recent improvements include enhanced efficiency of our literature curation processes and the incorporation of Transcriptional Start Site (TSS) annotations from RIKEN's FANTOM 5 initiative.

563 citations


Journal ArticleDOI
31 Mar 2017-Science
TL;DR: This approach provides a general framework to decipher differences between classes of human tumors by decoupling cancer cell genotypes, phenotypes, and the composition of the TME.
Abstract: Tumor subclasses differ according to the genotypes and phenotypes of malignant cells as well as the composition of the tumor microenvironment (TME). We dissected these influences in isocitrate dehydrogenase (IDH)-mutant gliomas by combining 14,226 single-cell RNA sequencing (RNA-seq) profiles from 16 patient samples with bulk RNA-seq profiles from 165 patient samples. Differences in bulk profiles between IDH-mutant astrocytoma and oligodendroglioma can be primarily explained by distinct TME and signature genetic events, whereas both tumor types share similar developmental hierarchies and lineages of glial differentiation. As tumor grade increases, we find enhanced proliferation of malignant cells, larger pools of undifferentiated glioma cells, and an increase in macrophage over microglia expression programs in TME. Our work provides a unifying model for IDH-mutant gliomas and a general framework for dissecting the differences among human tumor subclasses.

563 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the major novel strategies for achieving high-performance thermoelectric (TE) materials and survey their applications.
Abstract: Thermoelectric (TE) materials can convert heat into electricity, which can improve fuel efficiency and provide a robust alternative energy supply in multiple applications by collecting waste heat, thereby assisting in the search for new energy solutions. In order to construct high-performance TE devices, superior TE materials have to be targeted via various strategies. The development of high-performance TE devices can broaden the market for TE applications and eventually boost enthusiasm for TE materials research. This review focuses on major novel strategies to achieve high-performance TE materials and their applications. Manipulating the carrier concentration and band structures of materials is effective in optimizing the electrical transport properties, while nanostructure engineering and defect engineering can greatly reduce the thermal conductivity towards the amorphous limit. Currently, TE devices are utilized to generate power in remote missions, solar-thermal systems, implantable or wearable devices, the automotive industry, and many other fields; they also serve as temperature sensors and controllers, or even gas sensors. The future trend is to synergistically optimize and integrate all of these factors to further improve TE performance, so that highly efficient TE materials and devices can bring more benefit to daily life.

563 citations


BookDOI
TL;DR: Graycar et al. as mentioned in this paper provide an introduction to intelligence-led policing and discuss some of the related limitations and opportunities, but there is still a lack of clarity among many in law enforcement as to what intelligence-led policing is, what it aims to achieve, and how it is supposed to operate.
Abstract: This paper is timely, given that policing is currently going through a period of significant change in both operational tactics and organisational structures. New ideas in crime reduction and changes to short- and long-term policing strategies are underway. Intelligence-led policing represents a recent approach and is one of the more prevalent of the current "shifts in crime control philosophy and policing practice" (Maguire 2000). Surprisingly, given the wide distribution of the term "intelligence-led policing", considerable confusion remains in regard to its actual meaning to both front-line officers and police management. This paper provides an introduction to intelligence-led policing and discusses some of the related limitations and opportunities. Adam Graycar, Director. Since the 1990s, "intelligence-led policing" (also known as "intelligence-driven policing") has entered the lexicon of modern policing, especially in the UK and more recently Australia. Yet even with the ability of new ideas and innovation to spread throughout the policing world at the click of a mouse, there is still a lack of clarity among many in law enforcement as to what intelligence-led policing is, what it aims to achieve, and how it is supposed to operate. This can be seen in recent inspection reports of Her Majesty's Inspectorate of Constabulary (HMIC) in the UK (HMIC 2001, 2002), and in the lack of clarity regarding intelligence-led policing in the United States. A recent summit in March 2002 of over 120 criminal intelligence experts from across the US, funded by the US government and organised by the International Association of Chiefs of Police, may become a turning point in policing within the US. The participants called for a National Intelligence Plan, with one of the core recommendations being to "promote intelligence-led policing through a common understanding of criminal intelligence and its usefulness" (IACP 2002, p. v). The aspirations of the summit are considerable, but what is unclear from the summit report is a sound understanding of the aims of intelligence-led policing and its relationship to crime reduction. As intelligence-led policing is now a term in common usage within Australian law enforcement (a search of web pages and media releases found the term "intelligence-led" in all Australian police sites and the web site of the new Australian Crime Commission), it is timely to consider the origins of intelligence-led policing, the crime reduction levers it aims to pull, and the limitations and possibilities for this type of operational practice. Origins of Intelligence-led Policing: Intelligence-led policing entered the police lexicon at some time around the early 1990s. As Gill (1998) has noted, the origins of intelligence-led policing are a little indistinct, but the earliest references to it originate in the UK, where a seemingly inexorable rise in crime during the late 1980s and early 1990s coincided with increasing calls for police to be more effective and to be more cost-efficient. The driving forces for this move to a new strategy were both external and internal to policing. External drivers included an inability of the traditional, reactive model of policing to cope with the rapid changes in globalisation, which have increased opportunities for transnational organised crime and removed physical and technological barriers across the policing domain.
In the new "risk society" (Ericson & Haggerty 1997) the police were seen as the source of risk management data for a range of external institutions. With such an influence beyond the boundaries of law enforcement, it was never going to be long before the "new public management" drive to increase efficiency in public agencies reached the police. At the same time there was an internal recognition that changes were taking place in the dynamic relationship between the private security industry and the public police. …

562 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare Bayesian and frequentist approaches to hypothesis testing and estimation with confidence or credible intervals, and explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods.
Abstract: In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
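As a small worked illustration of the contrast drawn above, the sketch below computes a frequentist confidence interval and a Bayesian credible interval for the same made-up binomial data, using SciPy's Beta distribution for the posterior; all numbers are purely illustrative.

```python
# Frequentist Wald confidence interval vs. Bayesian credible interval
# for a toy binomial outcome (27 successes out of 40 trials).
import numpy as np
from scipy import stats

successes, n = 27, 40
p_hat = successes / n

# Frequentist 95% Wald confidence interval
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval from a Beta posterior with a uniform Beta(1, 1) prior
posterior = stats.beta(1 + successes, 1 + n - successes)
cri = posterior.ppf([0.025, 0.975])

print(f"95% confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible interval:   ({cri[0]:.3f}, {cri[1]:.3f})")
```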

562 citations


Journal ArticleDOI
Orestis Georgiou1, Usman Raza1
TL;DR: In this paper, the authors provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology, and show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence.
Abstract: Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today's commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than, for instance, spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enable rigorous scalability analysis of such networks.
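The scaling effect described above can be illustrated with a crude Monte Carlo sketch (not the letter's analytical stochastic-geometry model): devices are dropped uniformly around a single gateway, transmit under a small ALOHA duty cycle, and an uplink is counted as covered only if no concurrent same-spreading-factor transmission arrives within a simple capture margin. The radius, path-loss exponent, duty cycle, and capture threshold below are all assumptions for illustration.

```python
# Crude Monte Carlo of co-spreading-factor interference around one gateway.
# Only path loss and a 6 dB capture margin are modelled; noise, fading, and
# multiple spreading factors are ignored on purpose.
import numpy as np

rng = np.random.default_rng(0)
R, eta, duty, capture_db, trials = 3000.0, 3.5, 0.01, 6.0, 20_000

def coverage(n_devices):
    ok = 0
    margin = 10 ** (capture_db / 10)
    for _ in range(trials):
        r_tag = R * np.sqrt(rng.random())                 # tagged device distance
        r_int = R * np.sqrt(rng.random(n_devices - 1))    # other devices' distances
        active = rng.random(n_devices - 1) < 2 * duty     # ALOHA vulnerability window
        p_tag = r_tag ** (-eta)                           # received power (path loss only)
        p_int = r_int[active] ** (-eta)
        if p_int.size == 0 or p_tag > margin * p_int.max():
            ok += 1
    return ok / trials

for n in (100, 500, 1000, 2000):
    print(f"{n:5d} devices -> coverage ~ {coverage(n):.3f}")
```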

562 citations


Journal ArticleDOI
TL;DR: Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data, and provides unbiased and valid estimates of associations based on information from the available data.
Abstract: Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data.
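A minimal sketch of the multiple-imputation workflow described above is given below, using scikit-learn's IterativeImputer (a chained-equations, MICE-style implementation) on made-up data; a full analysis would pool the estimates and their variances with Rubin's rules, which is only gestured at here by averaging the coefficients.

```python
# Multiple imputation on a toy dataset: impute BMI from age several times,
# refit the analysis model on each completed dataset, then combine the results.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500
age = rng.normal(60, 10, n)
bmi = 22 + 0.1 * age + rng.normal(0, 2, n)
outcome = 0.5 * age + 0.8 * bmi + rng.normal(0, 5, n)

X = np.column_stack([age, bmi])
X[rng.random(n) < 0.3, 1] = np.nan          # ~30% of BMI values set missing (MCAR here, for simplicity)

estimates = []
for m in range(5):                           # m = 5 imputed datasets
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    X_imp = imputer.fit_transform(X)
    coef = LinearRegression().fit(X_imp, outcome).coef_
    estimates.append(coef)

print("pooled coefficients (age, bmi):", np.mean(estimates, axis=0).round(3))
```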

562 citations


Journal ArticleDOI
TL;DR: The molecular mechanisms involved in plaque vulnerability and the development of atherothrombosis are discussed and plaques with reduced collagen content are thought to be more vulnerable than those with a thick collagen cap.
Abstract: Atherosclerosis is a maladaptive, nonresolving chronic inflammatory disease that occurs at sites of blood flow disturbance. The disease usually remains silent until a breakdown of integrity at the arterial surface triggers the formation of a thrombus. By occluding the lumen, the thrombus or emboli detaching from it elicits ischaemic symptoms that may be life-threatening. Two types of surface damage can cause atherothrombosis: plaque rupture and endothelial erosion. Plaque rupture is thought to be caused by loss of mechanical stability, often due to reduced tensile strength of the collagen cap surrounding the plaque. Therefore, plaques with reduced collagen content are thought to be more vulnerable than those with a thick collagen cap. Endothelial erosion, on the other hand, may occur after injurious insults to the endothelium instigated by metabolic disturbance or immune insults. This review discusses the molecular mechanisms involved in plaque vulnerability and the development of atherothrombosis.

562 citations


Journal ArticleDOI
06 Nov 2018
TL;DR: The Common International Classification of Ecosystem Services (CICES) as discussed by the authors is widely used for mapping, ecosystem assessment, and natural capital ecosystem accounting, and it has been updated for version 5.1.
Abstract: The Common International Classification of Ecosystem Services (CICES) is widely used for mapping, ecosystem assessment, and natural capital ecosystem accounting. On the basis of the experience gained in using it since the first version was published in 2013, it has been updated for version 5.1. This policy brief summarises what has been done and how the classification can be used.

Journal ArticleDOI
TL;DR: RTJ receives salary support from the Peter Brojde Lung Cancer Centre and the Backler Foundation, Jewish General Hospital Foundation.
Abstract: RTJ receives salary support from the Peter Brojde Lung Cancer Centre and the Backler Foundation, Jewish General Hospital Foundation.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work presents flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection that improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy.
Abstract: Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on the box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on the feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID [33], especially for more challenging fast-moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells and whistles. The code will be released.
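The aggregation idea summarized above can be sketched as follows: neighbor-frame feature maps are warped to the reference frame along an (externally estimated) optical flow field and combined with similarity-based weights. The PyTorch sketch below is a simplified illustration under assumed tensor shapes, not the authors' released implementation; the cosine-similarity weighting is one plausible choice for the adaptive weights.

```python
# Warp neighbor-frame features along flow, then average them with
# similarity-based weights relative to the reference frame's features.
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Warp feat (N,C,H,W) towards the reference frame using flow (N,2,H,W) in pixels."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)       # (1,2,H,W) pixel grid
    coords = base + flow                                           # where to sample from
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                    # normalize to [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                   # (N,H,W,2)
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)

def aggregate(ref_feat, neighbor_feats, flows):
    """Similarity-weighted aggregation of warped neighbor features."""
    warped = [warp(f, fl) for f, fl in zip(neighbor_feats, flows)]
    warped.append(ref_feat)                                        # include the reference frame itself
    sims = [F.cosine_similarity(w, ref_feat, dim=1, eps=1e-6) for w in warped]
    weights = torch.softmax(torch.stack(sims, dim=0), dim=0)       # (K,N,H,W)
    return sum(wt.unsqueeze(1) * f for wt, f in zip(weights, warped))

ref = torch.randn(1, 8, 16, 16)
nbrs = [torch.randn(1, 8, 16, 16) for _ in range(2)]
flows = [torch.zeros(1, 2, 16, 16) for _ in range(2)]
print(aggregate(ref, nbrs, flows).shape)        # torch.Size([1, 8, 16, 16])
```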

Posted Content
TL;DR: The first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image is described, showing superior pose accuracy with respect to the state of the art.
Abstract: We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.
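A minimal sketch of the reprojection term in such a fitting objective is shown below, with a pinhole camera and a Geman-McClure robustifier weighting each joint by its 2D detection confidence. The SMPL body model, pose and shape priors, and the interpenetration term are omitted, and the joint positions here are random stand-ins.

```python
# Confidence-weighted, robust 2D reprojection error between projected 3D joints
# and detected 2D joints; camera intrinsics and joints are illustrative.
import numpy as np

def project(joints3d, focal=500.0, cx=128.0, cy=128.0):
    """Pinhole projection of (J, 3) camera-frame joints to (J, 2) pixel coordinates."""
    x, y, z = joints3d.T
    return np.stack([focal * x / z + cx, focal * y / z + cy], axis=1)

def reprojection_loss(joints3d, joints2d, conf, sigma=100.0):
    """Geman-McClure robustified squared error, weighted by detection confidence."""
    r2 = ((project(joints3d) - joints2d) ** 2).sum(axis=1)
    robust = (sigma ** 2 * r2) / (sigma ** 2 + r2)     # down-weights large outliers
    return float((conf * robust).sum())

rng = np.random.default_rng(0)
joints3d = rng.normal([0.0, 0.0, 3.0], 0.3, size=(24, 3))   # stand-in model joints
joints2d = project(joints3d) + rng.normal(0, 2.0, (24, 2))  # noisy "detections"
conf = rng.uniform(0.5, 1.0, 24)                            # 2D detector confidences
print(f"reprojection loss: {reprojection_loss(joints3d, joints2d, conf):.2f}")
```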

Journal ArticleDOI
TL;DR: A simple theorem in quantum information theory is proved, which implies that bulk operators in the anti-de Sitter/conformal field theory (AdS/CFT) correspondence can be reconstructed as CFT operators in a spatial subregion A, provided that they lie in its entanglement wedge.
Abstract: In this Letter we prove a simple theorem in quantum information theory, which implies that bulk operators in the anti-de Sitter/conformal field theory (AdS/CFT) correspondence can be reconstructed as CFT operators in a spatial subregion A, provided that they lie in its entanglement wedge. This is an improvement on existing reconstruction methods, which have at most succeeded in the smaller causal wedge. The proof is a combination of the recent work of Jafferis, Lewkowycz, Maldacena, and Suh on the quantum relative entropy of a CFT subregion with earlier ideas interpreting the correspondence as a quantum error correcting code.

Posted Content
TL;DR: In this article, the authors introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts, and use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user defined concept is important to a classification result.
Abstract: The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result--for example, how sensitive a prediction of "zebra" is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
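The procedure summarized above can be sketched in a few lines: a linear classifier separates layer activations of concept examples from those of random counterexamples, its normalized weight vector serves as the CAV, and the TCAV score is the fraction of class examples whose directional derivative along the CAV is positive. The activations and gradients below are random stand-ins for what a real network would provide.

```python
# CAV: linear classifier separating concept activations from random activations.
# TCAV score: fraction of positive directional derivatives along the CAV.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 128                                        # width of the chosen layer

# Stand-ins: activations of "striped" concept images vs. random images
concept_acts = rng.normal(0.5, 1.0, size=(200, d))
random_acts = rng.normal(0.0, 1.0, size=(200, d))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_.ravel()
cav /= np.linalg.norm(cav)                     # concept activation vector

# Stand-in for d(logit of "zebra") / d(activations), one row per zebra image
grads = rng.normal(0.2, 1.0, size=(100, d))

directional_derivs = grads @ cav
tcav_score = float(np.mean(directional_derivs > 0))
print(f"TCAV score for the concept: {tcav_score:.2f}")
```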

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes an end-to-end learning approach to address ordinal regression problems using a deep convolutional neural network, which can simultaneously conduct feature learning and regression modeling, and achieves state-of-the-art performance on both the MORPH and AFAD datasets.
Abstract: To address the non-stationary property of aging patterns, age estimation can be cast as an ordinal regression problem. However, the processes of extracting features and learning a regression model are often separated and optimized independently in previous work. In this paper, we propose an end-to-end learning approach to address ordinal regression problems using a deep convolutional neural network (CNN), which can simultaneously conduct feature learning and regression modeling. In particular, an ordinal regression problem is transformed into a series of binary classification sub-problems, and we propose a multiple-output CNN learning algorithm to collectively solve these classification sub-problems, so that the correlation between these tasks can be explored. In addition, we publish an Asian Face Age Dataset (AFAD) containing more than 160K facial images with precise age ground-truths, which is the largest public age dataset to date. To the best of our knowledge, this is the first work to address ordinal regression problems by using a CNN, and it achieves state-of-the-art performance on both the MORPH and AFAD datasets.
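The label transformation described above (casting ordinal regression as a series of binary sub-problems) can be sketched as follows; the age range and decoding rule are illustrative assumptions, and the CNN that would produce the per-threshold predictions is out of scope here.

```python
# Expand an age label into K-1 binary "is age greater than k?" targets and
# recover a predicted age by counting thresholds predicted as exceeded.
import numpy as np

def age_to_binary_targets(age, min_age=15, max_age=72):
    """One binary target per threshold k: 1 if age > k, else 0."""
    thresholds = np.arange(min_age, max_age)          # K-1 thresholds
    return (age > thresholds).astype(np.float32)

def decode_age(probs, min_age=15):
    """Predicted age = min_age + number of thresholds predicted as exceeded."""
    return min_age + float(np.sum(probs > 0.5))

targets = age_to_binary_targets(34)
print(targets[:25])            # 1s for thresholds below 34, then 0s
print(decode_age(targets))     # 34.0 recovered from noiseless targets
```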

Journal ArticleDOI
TL;DR: In this paper, the authors search for excess gamma-ray emission coincident with the positions of confirmed and candidate Milky Way satellite galaxies using six years of data from the Fermi Large Area Telescope (LAT).
Abstract: We search for excess gamma-ray emission coincident with the positions of confirmed and candidate Milky Way satellite galaxies using six years of data from the Fermi Large Area Telescope (LAT). Our sample of 45 stellar systems includes 28 kinematically confirmed dark-matter-dominated dwarf spheroidal galaxies (dSphs) and 17 recently discovered systems that have photometric characteristics consistent with the population of known dSphs. For each of these targets, the relative predicted gamma-ray flux due to dark matter annihilation is taken from kinematic analysis if available, and estimated from a distance-based scaling relation otherwise, assuming that the stellar systems are DM-dominated dSphs. LAT data coincident with four of the newly discovered targets show a slight preference (each ~2 sigma local) for gamma-ray emission in excess of the background. However, the ensemble of derived gamma-ray flux upper limits for individual targets is consistent with the expectation from analyzing random blank-sky regions, and a combined analysis of the population of stellar systems yields no globally significant excess (global significance < 1 sigma). Our analysis has increased sensitivity compared to the analysis of 15 confirmed dSphs by Ackermann et al. The observed constraints on the DM annihilation cross section are statistically consistent with the background expectation, improving by a factor of ~2 for large DM masses (m_DM ≳ 1 TeV for annihilation to b b̄ and m_DM ≳ 70 GeV for annihilation to τ+τ−) and weakening by a factor of ~1.5 at lower masses relative to previously observed limits.

Journal ArticleDOI
TL;DR: In this article, the authors argue that family firms invest less in innovation but have an increased conversion rate of innovation input into output and, ultimately, a higher innovation output than non-family firms.
Abstract: Family firms are often portrayed as an important yet conservative form of organization that is reluctant to invest in innovation; however, simultaneously, evidence has shown that family firms are flourishing and in fact constitute many of the world’s most innovative firms. Our study contributes to disentangling this puzzling effect. We argue that family firms—owing to the family’s high level of control over the firm, wealth concentration, and importance of nonfinancial goals—invest less in innovation but have an increased conversion rate of innovation input into output and, ultimately, a higher innovation output than nonfamily firms. Empirical evidence from a meta-analysis based on 108 primary studies from 42 countries supports our hypotheses. We further argue and empirically show that the observed effects are even stronger when the CEO of the family firm is a later-generation family member. However, when the CEO of the family firm is the firm’s founder, innovation input is higher and, contrary to our initial expectations, innovation output is lower than that in other firms. We further show that the family firm–innovation input–output relationships depend on country-level factors; namely, the level of minority shareholder protection and the education level of the workforce in the country.

Journal ArticleDOI
TL;DR: A comprehensive review of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models, is conducted; several open challenges are presented and potential directions for future research are discussed.
Abstract: Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture the structural relations among data makes it possible to harvest more insights than when data are analyzed in isolation. However, it is often very challenging to solve learning problems on graphs, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. Thereby, a potential solution is to learn the representation of graphs in a low-dimensional Euclidean space such that the graph properties can be preserved. Although tremendous efforts have been made to address the graph representation learning problem, many of them still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and have demonstrated superior performance on various problems. In this survey, among the numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, which is one of the most prominent graph deep learning models. First, we group the existing graph convolutional network models into two categories based on the types of convolutions and highlight some graph convolutional network models in detail. Then, we categorize different graph convolutional networks according to the areas of their applications. Finally, we present several open challenges in this area and discuss potential directions for future research.
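As a concrete reference point for the models surveyed above, the following is a minimal NumPy sketch of one widely used graph-convolution propagation rule (the symmetric-normalized form H' = sigma(D^-1/2 (A + I) D^-1/2 H W)) on a toy four-node graph; the sizes and weights are arbitrary stand-ins.

```python
# One graph-convolution layer with self-loops, symmetric normalization, and ReLU.
import numpy as np

def gcn_layer(A, H, W):
    """A: (n,n) adjacency, H: (n,f_in) node features, W: (f_in,f_out) weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU nonlinearity

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                        # 3 input features per node
W = rng.normal(size=(3, 2))                        # project to 2 hidden features
print(gcn_layer(A, H, W))                          # shape (4, 2)
```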

Proceedings Article
23 May 2016
TL;DR: This paper proves a conjecture published in 1989 and partially addresses an open problem announced at the Conference on Learning Theory (COLT) 2015, and presents an instance for which it can answer the following question: how difficult is it to directly train a deep model in theory?
Abstract: In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths, and no unrealistic assumptions. As a result, we present an instance for which we can answer the following question: how difficult is it to directly train a deep model in theory? It is more difficult than for classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.

Journal ArticleDOI
TL;DR: In patients with rheumatoid arthritis who had had an inadequate response to methotrexate, baricitinib was associated with significant clinical improvements as compared with placebo and adalimumab.
Abstract: BackgroundBaricitinib is an oral, reversible inhibitor of the Janus kinases JAK1 and JAK2 that may have therapeutic value in patients with rheumatoid arthritis. MethodsWe conducted a 52-week, phase 3, double-blind, placebo- and active-controlled trial in which 1307 patients with active rheumatoid arthritis who were receiving background therapy with methotrexate were randomly assigned to one of three regimens in a 3:3:2 ratio: placebo (switched to baricitinib after 24 weeks), 4 mg of baricitinib once daily, or 40 mg of adalimumab (an anti–tumor necrosis factor α monoclonal antibody) every other week. End-point measures evaluated after adjustment for multiplicity included 20% improvement according to the criteria of the American College of Rheumatology (ACR20 response) (the primary end point), the Disease Activity Score for 28 joints (DAS28), the Health Assessment Questionnaire–Disability Index, and the Simplified Disease Activity Index at week 12, as well as radiographic progression of joint damage as meas...

Book ChapterDOI
08 Oct 2016
TL;DR: A new model is proposed that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image, and generates sentences that realize a global sentence property, such as class specificity.
Abstract: Clearly explaining a rationale for a classification decision to an end user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. Through a novel loss function based on sampling and reinforcement learning, our model learns to generate sentences that realize a global sentence property, such as class specificity. Our results on the CUB dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods.

Journal ArticleDOI
TL;DR: This study provides the first insights into smartphone use, smartphone addiction, and predictors of smartphone addiction in young people from a European country and should be extended in further studies.
Abstract: Background and AimsSmartphone addiction, its association with smartphone use, and its predictors have not yet been studied in a European sample. This study investigated indicators of smartphone use, smartphone addiction, and their associations with demographic and health behaviour-related variables in young people.MethodsA convenience sample of 1,519 students from 127 Swiss vocational school classes participated in a survey assessing demographic and health-related characteristics as well as indicators of smartphone use and addiction. Smartphone addiction was assessed using a short version of the Smartphone Addiction Scale for Adolescents (SAS-SV). Logistic regression analyses were conducted to investigate demographic and health-related predictors of smartphone addiction.ResultsSmartphone addiction occurred in 256 (16.9%) of the 1,519 students. Longer duration of smartphone use on a typical day, a shorter time period until first smartphone use in the morning, and reporting that social networking was the mo...

Journal ArticleDOI
TL;DR: A convolutional neural network architecture is developed that is trainable in an end-to-end manner directly for the place recognition task and significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks.
Abstract: We tackle the problem of large-scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
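The aggregation step that the NetVLAD layer makes differentiable can be sketched as follows: local descriptors are softly assigned to K cluster centres, per-cluster residuals are accumulated, and the result is intra-normalized and then L2-normalized. In the actual layer the soft-assignment logits come from a learned 1x1 convolution; here they are derived from distances to the centres, and all values are random stand-ins.

```python
# VLAD-style soft-assignment aggregation of local descriptors into one global vector.
import numpy as np

def netvlad_aggregate(descriptors, centres, alpha=10.0):
    """descriptors: (N, D) local features; centres: (K, D); returns a (K*D,) vector."""
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)   # (N, K)
    logits = -alpha * d2
    logits -= logits.max(axis=1, keepdims=True)                            # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)                            # soft assignment
    residuals = descriptors[:, None, :] - centres[None, :, :]              # (N, K, D)
    vlad = (assign[:, :, None] * residuals).sum(axis=0)                    # (K, D)
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12            # intra-normalization
    vlad = vlad.ravel()
    return vlad / (np.linalg.norm(vlad) + 1e-12)                           # final L2 normalization

rng = np.random.default_rng(0)
desc = rng.normal(size=(100, 64))       # e.g. conv features at 100 spatial locations
centres = rng.normal(size=(8, 64))      # K = 8 clusters
print(netvlad_aggregate(desc, centres).shape)   # (512,)
```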

Journal ArticleDOI
TL;DR: The authors implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants, using the publicly available Theano neural network toolkit, and completed experiments on the well-known airline travel information system (ATIS) benchmark.
Abstract: Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and completed experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the Entertainment domain, and 6.7% for the movies domain.
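For reference, the Elman recurrence underlying the architectures compared above amounts to the few lines below: the hidden state at each word position is a nonlinear function of the current word embedding and the previous hidden state, and a slot label is scored from the hidden state. The dimensions and parameters are illustrative stand-ins, not a trained model.

```python
# Minimal Elman-style recurrence for slot tagging over one utterance.
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hid_dim, n_slots, seq_len = 50, 64, 10, 7

Wx = rng.normal(0, 0.1, (hid_dim, emb_dim))    # input-to-hidden weights
Wh = rng.normal(0, 0.1, (hid_dim, hid_dim))    # hidden-to-hidden weights (the Elman recurrence)
Wy = rng.normal(0, 0.1, (n_slots, hid_dim))    # hidden-to-slot-label weights

def elman_tag(word_embeddings):
    h = np.zeros(hid_dim)
    labels = []
    for x in word_embeddings:                  # left-to-right over the utterance
        h = np.tanh(Wx @ x + Wh @ h)
        labels.append(int(np.argmax(Wy @ h)))  # predicted slot label for this word
    return labels

utterance = rng.normal(size=(seq_len, emb_dim))    # stand-in word embeddings
print(elman_tag(utterance))
```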

Proceedings ArticleDOI
01 Jun 2018
TL;DR: A data-augmentation approach is demonstrated that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by rule-based, feature-rich, and neural coreference systems in WinoBias without significantly affecting their performance on existing datasets.
Abstract: In this paper, we introduce a new benchmark for co-reference resolution focused on gender bias, WinoBias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing datasets.
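The data-augmentation idea mentioned above can be sketched as a simple gender-swapping pass over the training text; the word list below is a tiny illustrative subset, and a real implementation would need part-of-speech information to disambiguate words such as "her" (possessive vs. objective) and would also anonymize or swap gendered names.

```python
# Toy gender-swapping augmentation: replace gendered words with their counterparts.
swap = {"he": "she", "she": "he", "him": "her", "her": "him",
        "his": "her", "himself": "herself", "herself": "himself"}
# Note: "her" is ambiguous (possessive vs. objective); a real pipeline uses POS tags.

def gender_swap(sentence):
    words = sentence.lower().split()
    return " ".join(swap.get(w, w) for w in words)

original = "the physician hired the secretary because he was overwhelmed"
print(gender_swap(original))   # "the physician hired the secretary because she was overwhelmed"
```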

Journal ArticleDOI
TL;DR: Determining the effective setup of parameters, developing improved biocompatible/bioactive materials, and improving the mechanical/biological properties of laser-sintered and 3D-printed tissues are the three main concerns investigated in this article.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach was used to define credible sets for the T1D-associated SNPs localized to enhancer sequences active in thymus, T and B cells, and CD34(+) stem cells.
Abstract: Genetic studies of type 1 diabetes (T1D) have identified 50 susceptibility regions, finding major pathways contributing to risk, with some loci shared across immune disorders. To make genetic comparisons across autoimmune disorders as informative as possible, a dense genotyping array, the Immunochip, was developed, from which we identified four new T1D-associated regions (P < 5 × 10(-8)). A comparative analysis with 15 immune diseases showed that T1D is more similar genetically to other autoantibody-positive diseases, significantly most similar to juvenile idiopathic arthritis and significantly least similar to ulcerative colitis, and provided support for three additional new T1D risk loci. Using a Bayesian approach, we defined credible sets for the T1D-associated SNPs. The associated SNPs localized to enhancer sequences active in thymus, T and B cells, and CD34(+) stem cells. Enhancer-promoter interactions can now be analyzed in these cell types to identify which particular genes and regulatory sequences are causal.
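The credible-set construction mentioned above can be sketched as follows, in the spirit of standard single-causal-variant fine-mapping: each SNP in a region receives a posterior probability proportional to its approximate Bayes factor, and the smallest set of SNPs whose probabilities sum to 99% forms the credible set. The Bayes factors and SNP identifiers below are made-up numbers for illustration.

```python
# 99% credible set from approximate Bayes factors, assuming one causal SNP per region.
import numpy as np

log10_bf = np.array([5.2, 4.9, 3.1, 2.0, 1.2, 0.4, 0.1])     # one value per SNP in the region
snp_ids = np.array([f"rs{i}" for i in range(1, 8)])           # hypothetical identifiers

bf = 10.0 ** (log10_bf - log10_bf.max())       # rescale for numerical stability
posterior = bf / bf.sum()                       # posterior probability per SNP (flat prior)

order = np.argsort(posterior)[::-1]             # most probable SNPs first
cum = np.cumsum(posterior[order])
credible = snp_ids[order][: int(np.searchsorted(cum, 0.99)) + 1]
print("99% credible set:", list(credible))
```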

Journal ArticleDOI
TL;DR: A survey of the state of the art in natural language generation can be found in this article, with an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organized.
Abstract: This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of NLP, with an emphasis on different evaluation methods and the relationships between them.