
Showing papers by "French Institute for Research in Computer Science and Automation published in 2018"


Journal ArticleDOI
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, opening the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean value of 0.97 correlation score for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
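The segmentation outputs are turned into clinical indices such as ventricular volumes and ejection fraction before being correlated with the expert values. Below is a minimal, purely illustrative sketch of that kind of index computation from binary cavity masks; the shapes, voxel spacing and thresholds are invented and are not the challenge's evaluation code.

```python
# Hypothetical ejection-fraction computation from toy end-diastolic/end-systolic masks.
import numpy as np

def lv_volume_ml(cavity_mask, voxel_volume_mm3):
    return cavity_mask.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> mL

voxel_mm3 = 1.5 * 1.5 * 8.0                     # in-plane spacing x slice thickness (toy values)
ed_mask = np.random.rand(10, 256, 256) > 0.8    # toy end-diastolic cavity segmentation
es_mask = np.random.rand(10, 256, 256) > 0.9    # toy end-systolic cavity segmentation

edv = lv_volume_ml(ed_mask, voxel_mm3)
esv = lv_volume_ml(es_mask, voxel_mm3)
ef = 100.0 * (edv - esv) / edv                  # ejection fraction in %
print(f"EDV={edv:.0f} mL, ESV={esv:.0f} mL, EF={ef:.1f} %")
```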

1,056 citations


Proceedings ArticleDOI
01 Feb 2018
TL;DR: This work establishes dense correspondences between an RGB image and a surface-based representation of the human body, a task referred to as dense human pose estimation, and improves accuracy through cascading, obtaining a system that delivers highly-accurate results at multiple frames per second on a single GPU.
Abstract: In this work we establish dense correspondences between an RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline. We then use our dataset to train CNN-based systems that deliver dense correspondence 'in the wild', namely in the presence of background, occlusions and scale variations. We improve our training set's effectiveness by training an inpainting network that can fill in missing ground truth values and report improvements with respect to the best results that would be achievable in the past. We experiment with fully-convolutional networks and region-based models and observe a superiority of the latter. We further improve accuracy through cascading, obtaining a system that delivers highly-accurate results at multiple frames per second on a single GPU. Supplementary materials, data, code, and videos are provided on the project page http://densepose.org.

987 citations


Journal ArticleDOI
TL;DR: In this paper, neural networks with long-term temporal convolutions (LTC) were used to learn action representations, demonstrating the importance of high-quality optical flow estimation and achieving state-of-the-art results on two challenging benchmarks for action recognition.
Abstract: Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7%) and HMDB51 (67.2%).
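As a rough illustration of the long-term temporal convolution idea, the sketch below applies ordinary 3D convolutions to clips of 60 frames instead of a handful; the layer sizes and clip resolution are arbitrary placeholders, not the architecture evaluated in the paper.

```python
# Minimal sketch: 3D convolutions over long clips (long-term temporal extent).
import torch
import torch.nn as nn

class LTCBlock(nn.Module):
    def __init__(self, n_classes=101):                 # UCF101-sized output, illustrative only
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, clip):                            # clip: (B, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

clip = torch.randn(2, 3, 60, 58, 58)                    # a long clip of 60 frames
print(LTCBlock()(clip).shape)                           # torch.Size([2, 101])
```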

853 citations


Journal ArticleDOI
TL;DR: This primer aims to introduce clinicians and researchers to the opportunities and challenges in bringing machine intelligence into psychiatric practice.

434 citations


Journal ArticleDOI
14 Feb 2018-Nature
TL;DR: A glia–neuron interaction at the perisomatic space of LHb is involved in setting the neuronal firing mode in models of a major psychiatric disease, and Kir4.1 in the LHb might have potential as a target for treating clinical depression.
Abstract: Enhanced bursting activity of neurons in the lateral habenula (LHb) is essential in driving depression-like behaviours, but the cause of this increase has been unknown. Here, using a high-throughput quantitative proteomic screen, we show that an astroglial potassium channel (Kir4.1) is upregulated in the LHb in rat models of depression. Kir4.1 in the LHb shows a distinct pattern of expression on astrocytic membrane processes that wrap tightly around the neuronal soma. Electrophysiology and modelling data show that the level of Kir4.1 on astrocytes tightly regulates the degree of membrane hyperpolarization and the amount of bursting activity of LHb neurons. Astrocyte-specific gain and loss of Kir4.1 in the LHb bidirectionally regulates neuronal bursting and depression-like symptoms. Together, these results show that a glia-neuron interaction at the perisomatic space of LHb is involved in setting the neuronal firing mode in models of a major psychiatric disease. Kir4.1 in the LHb might have potential as a target for treating clinical depression.

343 citations


Journal ArticleDOI
18 Oct 2018-Cell
TL;DR: A light-sheet microscope is developed that adapts itself to the dramatic changes in size, shape, and optical properties of the post-implantation mouse embryo and captures its development from gastrulation to early organogenesis at the cellular level.

326 citations


Journal ArticleDOI
TL;DR: This article provides an overview of containerization, a new technological trend in lightweight virtualization, and provides a taxonomy of elasticity mechanisms according to the identified works and key properties.
Abstract: Elasticity is a fundamental property in cloud computing that has recently witnessed major developments. This article reviews both classical and recent elasticity solutions and provides an overview of containerization, a new technological trend in lightweight virtualization. It also discusses major issues and research challenges related to elasticity in cloud computing. We comprehensively review and analyze the proposals developed in this field. We provide a taxonomy of elasticity mechanisms according to the identified works and key properties. Compared to other works in the literature, this article presents a broader and more detailed analysis of elasticity approaches and is, to the best of our knowledge, the first survey addressing the elasticity of containers.

272 citations


Journal ArticleDOI
TL;DR: This work presents a new deep learning approach to blending for IBR, in which held-out real image data is used to learn blending weights to combine input photo contributions, and designs the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts.
Abstract: Free-viewpoint image-based rendering (IBR) is a standing challenge. IBR methods combine warped versions of input photos to synthesize a novel view. The image quality of this combination is directly affected by geometric inaccuracies of multi-view stereo (MVS) reconstruction and by view- and image-dependent effects that produce artifacts when contributions from different input views are blended. We present a new deep learning approach to blending for IBR, in which we use held-out real image data to learn blending weights to combine input photo contributions. Our Deep Blending method requires us to address several challenges to achieve our goal of interactive free-viewpoint IBR navigation. We first need to provide sufficiently accurate geometry so the Convolutional Neural Network (CNN) can succeed in finding correct blending weights. We do this by combining two different MVS reconstructions with complementary accuracy vs. completeness tradeoffs. To tightly integrate learning in an interactive IBR system, we need to adapt our rendering algorithm to produce a fixed number of input layers that can then be blended by the CNN. We generate training data with a variety of captured scenes, using each input photo as ground truth in a held-out approach. We also design the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts. Our results demonstrate free-viewpoint IBR in a wide variety of scenes, clearly surpassing previous methods in visual quality, especially when moving far from the input cameras.
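The core operation, blending a fixed number of warped input layers with CNN-predicted per-pixel weights, can be sketched as follows; the tiny one-layer "weight network", tensor shapes and layer count are invented stand-ins rather than the authors' architecture.

```python
# Illustrative softmax-weighted blending of a fixed number of warped input layers.
import torch
import torch.nn as nn

n_layers, H, W = 4, 120, 160
layers = torch.rand(1, n_layers, 3, H, W)             # warped contributions from 4 views (toy data)

# Hypothetical weight network: takes the stacked layers, outputs one weight map per layer.
weight_net = nn.Conv2d(n_layers * 3, n_layers, kernel_size=3, padding=1)
logits = weight_net(layers.flatten(1, 2))             # (1, n_layers, H, W)
weights = torch.softmax(logits, dim=1).unsqueeze(2)   # normalise across layers

novel_view = (weights * layers).sum(dim=1)            # (1, 3, H, W) blended output
print(novel_view.shape)
```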

265 citations


Journal ArticleDOI
TL;DR: In recent years, airborne and spaceborne hyperspectral imaging systems have advanced in terms of spectral and spatial resolution, which makes the data sets they produce a valuable source for land cover classification.
Abstract: In recent years, airborne and spaceborne hyperspectral imaging systems have advanced in terms of spectral and spatial resolution, which makes the data sets they produce a valuable source for land cover classification. The availability of hyperspectral data with fine spatial resolution has revolutionized hyperspectral image (HSI) classification techniques by taking advantage of both spectral and spatial information in a single classification framework.

257 citations


Journal ArticleDOI
TL;DR: A differential between-subject effect of competition on mu (8–12 Hz) oscillatory activity during aiming is found; compared to training, the more the subject was able to desynchronize his mu rhythm during competition, the better was his shooting performance.
Abstract: Competition changes the environment for athletes. The difficulty of training for such stressful events can lead to the well-known effect of 'choking' under pressure, which prevents athletes from performing at their best level. To study the effect of competition on the human brain, we recorded pilot electroencephalography (EEG) data while novice shooters were immersed in a realistic virtual environment representing a shooting range. We found a differential between-subject effect of competition on mu [8-12 Hz] oscillatory activity during aiming; compared to training, the more the subject was able to desynchronize his mu rhythm during competition, the better was his shooting performance. Because this differential effect could not be explained by differences in simple measures of the kinematics and muscular activity, nor by the effect of competition or shooting performance per se, we interpret our results as evidence that mu desynchronization has a positive effect on performance during competition.
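For readers unfamiliar with the measure, the snippet below shows one conventional way to quantify mu-band (8-12 Hz) desynchronization from a single EEG channel using Welch power estimates; the sampling rate, window lengths and toy signals are arbitrary and not those of the study.

```python
# Toy mu-band event-related desynchronization (ERD) computation.
import numpy as np
from scipy.signal import welch

def mu_band_power(x, fs, fmin=8.0, fmax=12.0):
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    band = (f >= fmin) & (f <= fmax)
    return pxx[band].mean()

fs = 250.0
rest = np.random.randn(int(10 * fs))      # 10 s of baseline EEG (toy signal)
aiming = np.random.randn(int(10 * fs))    # 10 s recorded while aiming (toy signal)

# Relative change of mu power vs. baseline; negative values indicate desynchronization.
erd = 100.0 * (mu_band_power(aiming, fs) - mu_band_power(rest, fs)) / mu_band_power(rest, fs)
print(f"mu ERD: {erd:.1f} %")
```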

241 citations


Book
28 Mar 2018
TL;DR: This monograph provides a treatment in which contracts are precisely defined and characterized so that they can be used in design methodologies with no ambiguity, and establishes links between interfaces and contracts to show similarities and correspondences.
Abstract: Recently, contract-based design has been proposed as an “orthogonal” approach that complements system design methodologies proposed so far to cope with the complexity of system design. Contract-based design provides a rigorous scaffolding for verification, analysis, abstraction/refinement, and even synthesis. A number of results have been obtained in this domain but a unified treatment of the topic that can help put contract-based design in perspective was missing. This monograph intends to provide such a treatment where contracts are precisely defined and characterized so that they can be used in design methodologies with no ambiguity. In particular, this monograph identifies the essence of complex system design using contracts through a mathematical “meta-theory”, where all the properties of the methodology are derived from a very abstract and generic notion of contract. We show that the meta-theory provides deep and illuminating links with existing contract and interface theories, as well as guidelines for designing new theories. Our study encompasses contracts for both software and systems, with emphasis on the latter. We illustrate the use of contracts with two examples: requirement engineering for a parking garage management, and the development of contracts for timing and scheduling in the context of the AUTOSAR methodology in use in the automotive sector.

Journal ArticleDOI
TL;DR: It is proposed that, although these preferences are non-instrumental and can on occasion interfere with external goals, they are important heuristics that allow organisms to cope with the high complexity of both sampling and search, and generate curiosity-driven investigations in large, open environments in which rewards are sparse and ex ante unknown.
Abstract: In natural behaviour, animals actively interrogate their environments using endogenously generated 'question-and-answer' strategies. However, in laboratory settings participants typically engage with externally imposed stimuli and tasks, and the mechanisms of active sampling remain poorly understood. We review a nascent neuroscientific literature that examines active-sampling policies and their relation to attention and curiosity. We distinguish between information sampling, in which organisms reduce uncertainty relevant to a familiar task, and information search, in which they investigate in an open-ended fashion to discover new tasks. We review evidence that both sampling and search depend on individual preferences over cognitive states, including attitudes towards uncertainty, learning progress and types of information. We propose that, although these preferences are non-instrumental and can on occasion interfere with external goals, they are important heuristics that allow organisms to cope with the high complexity of both sampling and search, and generate curiosity-driven investigations in large, open environments in which rewards are sparse and ex ante unknown.

Journal ArticleDOI
01 Nov 2018-Brain
TL;DR: The findings demonstrate that EEG markers of consciousness can be reliably, economically and automatically identified with machine learning in various clinical and acquisition contexts; the generalization performance from Paris to Liège remains stable even if up to 20% of the diagnostic labels are randomly flipped.
Abstract: Determining the state of consciousness in patients with disorders of consciousness is a challenging practical and theoretical problem. Recent findings suggest that multiple markers of brain activity extracted from the EEG may index the state of consciousness in the human brain. Furthermore, machine learning has been found to optimize their capacity to discriminate different states of consciousness in clinical practice. However, it is unknown how dependable these EEG markers are in the face of signal variability because of different EEG configurations, EEG protocols and subpopulations from different centres encountered in practice. In this study we analysed 327 recordings of patients with disorders of consciousness (148 unresponsive wakefulness syndrome and 179 minimally conscious state) and 66 healthy controls obtained in two independent research centres (Paris Pitié-Salpêtrière and Liège). We first show that a non-parametric classifier based on ensembles of decision trees provides robust out-of-sample performance on unseen data with a predictive area under the curve (AUC) of ~0.77 that was only marginally affected when using alternative EEG configurations (different numbers and positions of sensors, numbers of epochs, average AUC = 0.750 ± 0.014). In a second step, we observed that classifiers based on multiple as well as single EEG features generalize to recordings obtained from different patient cohorts, EEG protocols and different centres. However, the multivariate model always performed best with a predictive AUC of 0.73 for generalization from Paris 1 to Paris 2 datasets, and an AUC of 0.78 from Paris to Liège datasets. Using simulations, we subsequently demonstrate that multivariate pattern classification has a decisive performance advantage over univariate classification as the stability of EEG features decreases, as different EEG configurations are used for feature-extraction or as noise is added. Moreover, we show that the generalization performance from Paris to Liège remains stable even if up to 20% of the diagnostic labels are randomly flipped. Finally, consistent with recent literature, analysis of the learned decision rules of our classifier suggested that markers related to dynamic fluctuations in theta and alpha frequency bands carried independent information and were most influential. Our findings demonstrate that EEG markers of consciousness can be reliably, economically and automatically identified with machine learning in various clinical and acquisition contexts.
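As a hedged sketch of the kind of non-parametric, tree-ensemble classifier and AUC-based evaluation described above (not the study's code), one could write something like the following; the feature matrix, labels and sizes are synthetic placeholders.

```python
# Toy tree-ensemble classifier evaluated with cross-validated AUC.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(393, 28))     # toy matrix: 393 recordings x 28 EEG markers
y = rng.integers(0, 2, size=393)   # toy labels: 0 = UWS, 1 = MCS

clf = ExtraTreesClassifier(n_estimators=200, class_weight="balanced", random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```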

Journal ArticleDOI
TL;DR: This paper provides the first comprehensive overview of RIOT, covering the key components of interest to potential developers and users: the kernel, hardware abstraction, and software modularity, both conceptually and in practice for various example configurations.
Abstract: As the Internet of Things (IoT) emerges, compact operating systems (OSs) are required on low-end devices to ease development and portability of IoT applications. RIOT is a prominent free and open source OS in this space. In this paper, we provide the first comprehensive overview of RIOT. We cover the key components of interest to potential developers and users: the kernel, hardware abstraction, and software modularity, both conceptually and in practice for various example configurations. We explain operational aspects like system boot-up, timers, power management, and the use of networking. Finally, the relevant APIs as exposed by the OS are discussed along with the larger ecosystem around RIOT, including development and open source community aspects.


Proceedings Article
01 Jan 2018
TL;DR: In this article, the Lipschitz constant of deep neural networks is estimated using a power method with automatic differentiation, and an improved algorithm named SeqLip is proposed for sequential neural networks that takes advantage of the linear computation graph to split the computation per pair of consecutive layers.
Abstract: Deep neural networks are notorious for being sensitive to small well-chosen perturbations, and estimating the regularity of such architectures is of utmost importance for safe and robust practical applications. In this paper, we investigate one of the key characteristics to assess the regularity of such methods: the Lipschitz constant of deep learning architectures. First, we show that, even for two-layer neural networks, the exact computation of this quantity is NP-hard and state-of-the-art methods may significantly overestimate it. Then, we both extend and improve previous estimation methods by providing AutoLip, the first generic algorithm for upper bounding the Lipschitz constant of any automatically differentiable function. We provide a power method algorithm working with automatic differentiation, allowing efficient computations even on large convolutions. Second, for sequential neural networks, we propose an improved algorithm named SeqLip that takes advantage of the linear computation graph to split the computation per pair of consecutive layers. Third, we propose heuristics on SeqLip in order to tackle very large networks. Our experiments show that SeqLip can significantly improve on the existing upper bounds. Finally, we provide an implementation of AutoLip in the PyTorch environment that may be used to better estimate the robustness of a given neural network to small perturbations or regularize it using more precise Lipschitz estimations. These results also hint at the difficulty of estimating the Lipschitz constant of deep networks.
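The power-method idea can be sketched in a few lines of PyTorch: automatic differentiation supplies the transpose operator J^T, so the largest singular value of a bias-free linear or convolutional layer can be estimated without materializing its matrix. This is only an illustrative sketch under that linearity assumption, not the AutoLip or SeqLip implementation.

```python
# Power iteration for the spectral norm of a linear, bias-free layer via autograd.
import torch

def lipschitz_power_iteration(layer, input_shape, n_iter=50):
    """Estimate the largest singular value of `layer`, assumed linear with no bias."""
    x = torch.randn(1, *input_shape, requires_grad=True)
    u = torch.randn_like(layer(x))
    u = (u / u.norm()).detach()
    sigma = torch.tensor(0.0)
    for _ in range(n_iter):
        out = layer(x)
        # v = J^T u, obtained by differentiating <layer(x), u> with respect to x
        v = torch.autograd.grad((out * u).sum(), x)[0]
        v = v / (v.norm() + 1e-12)
        Jv = layer(v).detach()          # equals J v because the layer is linear and bias-free
        sigma = Jv.norm()
        u = Jv / (sigma + 1e-12)
    return sigma.item()

conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)
print(lipschitz_power_iteration(conv, (3, 32, 32)))
```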

Journal ArticleDOI
TL;DR: This paper presents a new method to fabricate 3D models on a robotic printing system equipped with multi-axis motion that successfully generates tool-paths for 3D printing models with large overhangs and high-genus topology.
Abstract: This paper presents a new method to fabricate 3D models on a robotic printing system equipped with multi-axis motion. Materials are accumulated inside the volume along curved tool-paths so that the need for supporting structures can be tremendously reduced - if not completely eliminated - on all models. Our strategy to tackle the challenge of tool-path planning for multi-axis 3D printing is to perform two successive decompositions, first volume-to-surfaces and then surfaces-to-curves. The volume-to-surfaces decomposition is achieved by optimizing a scalar field within the volume that represents the fabrication sequence. The field is constrained such that its iso-values represent curved layers that are supported from below, and present a convex surface affording collision-free navigation of the printer head. After extracting all curved layers, the surfaces-to-curves decomposition covers them with tool-paths while taking into account constraints from the robotic printing system. Our method successfully generates tool-paths for 3D printing models with large overhangs and high-genus topology. We fabricated several challenging cases on our robotic platform to verify and demonstrate its capabilities.
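The "iso-values of the scalar field are curved layers" step can be illustrated with an off-the-shelf iso-surface extraction; in the sketch below the field is a toy analytic function rather than the optimized fabrication-sequence field, and no support or collision constraints are enforced.

```python
# Extracting curved layers as iso-surfaces of a toy fabrication-sequence field.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[0:64, 0:64, 0:64]
field = z + 8.0 * np.sin(x / 10.0) * np.sin(y / 10.0)   # toy scalar field over the volume

curved_layers = []
for level in np.linspace(field.min() + 1, field.max() - 1, 20):
    verts, faces, _, _ = measure.marching_cubes(field, level=level)
    curved_layers.append((verts, faces))                # one curved layer mesh per iso-value

print(len(curved_layers), "curved layers extracted")
```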

Book ChapterDOI
19 Aug 2018
TL;DR: The Generic Group Model (GGM) is one of the most important and successful tools for assessing hardness assumptions in cryptography and has been used extensively in the past two decades.
Abstract: One of the most important and successful tools for assessing hardness assumptions in cryptography is the Generic Group Model (GGM). Over the past two decades, numerous assumptions and protocols have been analyzed within this model. While a proof in the GGM can certainly provide some measure of confidence in an assumption, its scope is rather limited since it does not capture group-specific algorithms that make use of the representation of the group.

Journal ArticleDOI
TL;DR: The proposed methodology offers the possibility to dramatically reduce the size and the online computation time of a finite element model (FEM) of a soft robot and provides a generic way to control soft robots.
Abstract: Obtaining an accurate mechanical model of a soft deformable robot compatible with the computation time imposed by robotic applications is often considered an unattainable goal. This paper challenges that idea. The proposed methodology offers the possibility to dramatically reduce the size and the online computation time of a finite element model (FEM) of a soft robot. After a set of expensive offline simulations based on the whole model, we apply snapshot proper orthogonal decomposition to sharply reduce the number of state variables of the soft-robot model. To keep the computational efficiency, hyperreduction is used to perform the integration on a reduced domain. The method allows the error to be tuned during the two main steps of complexity reduction. The method handles external loads (contact, friction, gravity, etc.) with precision as long as they are tested during the offline simulations. The method is validated on two very different examples of FEMs of soft robots and on one real soft robot. It enables acceleration factors of more than 100 while preserving accuracy, in particular compared to coarsely meshed FEMs, and provides a generic way to control soft robots.
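The first reduction step, snapshot proper orthogonal decomposition, amounts to an SVD of the matrix of offline FEM states followed by truncation to the modes that capture most of the energy. The sketch below is a generic illustration with random data, not the paper's pipeline (which additionally applies hyperreduction).

```python
# Snapshot POD: build a reduced basis from offline full-order states.
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_dof, n_snapshots) array of offline FEM states."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank capturing `energy`
    return U[:, :r]                              # reduced basis Phi of shape (n_dof, r)

# Toy usage: 10 000 degrees of freedom, 200 offline snapshots.
S = np.random.rand(10_000, 200)
Phi = pod_basis(S)
q = Phi.T @ S[:, 0]          # reduced coordinates of one snapshot
x_approx = Phi @ q           # back-projection to the full state
print(Phi.shape, np.linalg.norm(S[:, 0] - x_approx))
```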

Journal ArticleDOI
TL;DR: A contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car is presented, and the first interactive implementation of a contact planner (open source) is presented.
Abstract: We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from their prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold, which is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. The hard contact planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction breaks the algorithm complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically show the interest of our framework: in a few seconds, with high success rates, we generate complex contact plans for various scenarios and two robots: HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.

Proceedings ArticleDOI
21 May 2018
TL;DR: The newly proposed reward and learning strategies together lead to faster convergence and more robust driving using only RGB images from a forward-facing camera, and some domain adaptation capability of the latest reinforcement learning algorithm is shown.
Abstract: We present research using the latest reinforcement learning algorithm for end-to-end driving without any mediated perception (object recognition, scene understanding). The newly proposed reward and learning strategies together lead to faster convergence and more robust driving using only RGB images from a forward-facing camera. An Asynchronous Advantage Actor-Critic (A3C) framework is used to learn the car control in a physically and graphically realistic rally game, with the agents evolving simultaneously on tracks with a variety of road structures (turns, hills), graphics (seasons, location) and physics (road adherence). A thorough evaluation is conducted and generalization is proven on unseen tracks and using legal speed limits. Open-loop tests on real sequences of images show some domain adaptation capability of our method.
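Reward design is central to such end-to-end agents. The function below is a generic, hedged example of the kind of shaping commonly used for driving tasks (forward progress, lane centering, off-track penalty); it is not the reward proposed in the paper, whose exact form is not reproduced here.

```python
# Generic driving-reward sketch (illustrative only).
import numpy as np

def driving_reward(speed_mps, heading_error_rad, dist_from_center_m, track_half_width_m=4.0):
    if abs(dist_from_center_m) > track_half_width_m:
        return -1.0                                         # strong penalty for leaving the road
    progress = speed_mps * np.cos(heading_error_rad)        # reward forward progress
    centering = 1.0 - abs(dist_from_center_m) / track_half_width_m
    return 0.1 * progress * centering

print(driving_reward(20.0, 0.05, 1.0))
```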

Journal ArticleDOI
TL;DR: Results of the challenge highlighted that automatic algorithms, including recent machine learning methods, are still trailing human expertise on both detection and delineation criteria; it is also demonstrated that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation), although still trailing on detection scores.
Abstract: We present a study of multiple sclerosis segmentation algorithms conducted at the international MICCAI 2016 challenge. This challenge was operated using a new open-science computing infrastructure. This allowed for the automatic and independent evaluation of a large range of algorithms in a fair and completely automatic manner. This computing infrastructure was used to evaluate thirteen methods of MS lesion segmentation, exploring a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases coming from four centers following a common definition of the acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. Results of the challenge highlighted that automatic algorithms, including the recent machine learning methods (random forests, deep learning, …), are still trailing human expertise on both detection and delineation criteria. In addition, we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation) although still trailing on detection scores.
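As a very simplified stand-in for the consensus idea (the challenge used a more elaborate statistically robust procedure), the sketch below fuses several candidate binary lesion masks by majority vote and scores the result with a Dice overlap against a reference annotation; all data are random toys.

```python
# Majority-vote consensus of candidate segmentations, scored with Dice overlap.
import numpy as np

def majority_consensus(masks):
    """masks: (n_algorithms, ...) boolean array of candidate segmentations."""
    votes = np.sum(masks, axis=0)
    return votes > (masks.shape[0] / 2)

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

rng = np.random.default_rng(0)
algos = rng.random((13, 64, 64, 64)) > 0.7      # 13 toy candidate masks
expert = rng.random((64, 64, 64)) > 0.7         # toy reference annotation
print(f"Dice of consensus vs. expert: {dice(majority_consensus(algos), expert):.3f}")
```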

Journal ArticleDOI
TL;DR: The proposed nonlinear dynamic observers guarantee convergence of the observer states to the original system state in finite time and in fixed (defined a priori) time.
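For reference, the standard definitions behind the finite-time/fixed-time distinction can be stated as follows (a hedged paraphrase of common usage in the observer literature, not text from the paper), with e(t) denoting the observation error:

```latex
\begin{align*}
  &\text{finite-time convergence:}
    && \forall\, e(0)\ \exists\, T(e(0)) < \infty \ \text{such that}\ e(t) = 0 \ \ \forall t \ge T(e(0)),\\
  &\text{fixed-time convergence:}
    && \exists\, T_{\max} < \infty \ \text{independent of } e(0) \ \text{such that}\ e(t) = 0 \ \ \forall t \ge T_{\max}.
\end{align*}
```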

Journal ArticleDOI
TL;DR: A method based on deep learning to perform cardiac segmentation on short-axis magnetic resonance imaging stacks iteratively from the top slice to the bottom slice, using a novel variant of the U-net.
Abstract: We propose a method based on deep learning to perform cardiac segmentation on short-axis magnetic resonance imaging stacks iteratively from the top slice (around the base) to the bottom slice (around the apex). At each iteration, a novel variant of the U-net is applied to propagate the segmentation of a slice to the adjacent slice below it. In other words, the prediction of a segmentation of a slice is dependent upon the already existing segmentation of an adjacent slice. The 3-D consistency is hence explicitly enforced. The method is trained on a large database of 3078 cases from the U.K. Biobank. It is then tested on 756 different cases from the U.K. Biobank and three other state-of-the-art cohorts (ACDC with 100 cases, Sunnybrook with 30 cases, and RVSC with 16 cases). Results comparable to or even better than the state of the art in terms of distance measures are achieved. They also emphasize the assets of our method, namely, enhanced spatial consistency (currently neither considered nor achieved by the state of the art), and the generalization ability to unseen cases even from other databases.
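The propagation loop itself is simple to express: each slice is segmented conditioned on the mask of the slice above it. The sketch below shows only that control flow with a hypothetical `model` callable and a dummy thresholding stand-in; it is not the authors' U-net variant.

```python
# Slice-by-slice propagation: each prediction is conditioned on the previous mask.
import numpy as np

def propagate_segmentation(volume, model, initial_mask):
    """volume: (n_slices, H, W); initial_mask: segmentation of the top slice."""
    masks = [initial_mask]
    for z in range(1, volume.shape[0]):
        x = np.stack([volume[z], masks[-1]], axis=0)   # 2-channel input: image + previous mask
        masks.append(model(x))                         # predict the slice below
    return np.stack(masks)

# Toy usage with a dummy "model" that just thresholds the image channel.
dummy_model = lambda x: (x[0] > 0.5).astype(np.float32)
vol = np.random.rand(10, 128, 128).astype(np.float32)
top_mask = (vol[0] > 0.5).astype(np.float32)
seg = propagate_segmentation(vol, dummy_model, top_mask)   # (10, 128, 128)
print(seg.shape)
```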

Journal ArticleDOI
20 Jul 2018-Science
TL;DR: A fault-tolerant error-detection scheme that suppresses spreading of ancilla errors by a factor of 5, while maintaining the assignment fidelity is demonstrated and the results demonstrate that hardware-efficient approaches that exploit system-specific error models can yield advances toward fault-Tolerant quantum computation.
Abstract: A critical component of any quantum error–correcting scheme is detection of errors by using an ancilla system. However, errors occurring in the ancilla can propagate onto the logical qubit, irreversibly corrupting the encoded information. We demonstrate a fault-tolerant error-detection scheme that suppresses spreading of ancilla errors by a factor of 5, while maintaining the assignment fidelity. The same method is used to prevent propagation of ancilla excitations, increasing the logical qubit dephasing time by an order of magnitude. Our approach is hardware-efficient, as it uses a single multilevel transmon ancilla and a cavity-encoded logical qubit, whose interaction is engineered in situ by using an off-resonant sideband drive. The results demonstrate that hardware-efficient approaches that exploit system-specific error models can yield advances toward fault-tolerant quantum computation.

Journal ArticleDOI
TL;DR: A robust 23-gene expression-based predictor of PFS, applicable to routinely available FFPE biopsies from FL patients at diagnosis, is developed, which may allow individualizing therapy for patients with FL according to the patient risk category.
Abstract:
Background: Patients with follicular lymphoma have heterogeneous outcomes. Predictor models to distinguish, at diagnosis, between patients at high and low risk of progression are needed. The objective of this study was to use gene-expression profiling data to build and validate a predictive model of outcome for patients treated in the rituximab era.
Methods: A training set of fresh-frozen tumour biopsies was prospectively obtained from 160 untreated patients with high-tumour-burden follicular lymphoma enrolled in the phase 3 randomised PRIMA trial, in which rituximab maintenance was evaluated after rituximab plus chemotherapy induction (median follow-up 6·6 years [IQR 6·0–7·0]). RNA of sufficient quality was obtained for 149 of 160 cases, and Affymetrix U133 Plus 2.0 microarrays were used for gene-expression profiling. We did a multivariate Cox regression analysis to identify genes with expression levels associated with progression-free survival independently of maintenance treatment in a subgroup of 134 randomised patients. Expression levels from 95 curated genes were then determined by digital expression profiling (NanoString technology) in 53 formalin-fixed paraffin-embedded samples of the training set to compare the technical reproducibility of expression levels for each gene between technologies. Genes with high correlation (>0·75) were included in an L2-penalised Cox model adjusted on rituximab maintenance to build a predictive score for progression-free survival. The model was validated using NanoString technology to digitally quantify gene expression in 488 formalin-fixed, paraffin-embedded samples from three independent international patient cohorts from the PRIMA trial (n=178; distinct from the training cohort), the University of Iowa/Mayo Clinic Lymphoma SPORE project (n=201), and the Barcelona Hospital Clinic (n=109). All tissue samples consisted of pretreatment diagnostic biopsies and were confirmed as follicular lymphoma grade 1–3a. The patients were all treated with regimens containing rituximab and chemotherapy, possibly followed by either rituximab maintenance or ibritumomab–tiuxetan consolidation. We determined an optimum threshold on the score to predict patients at low risk and high risk of progression. The model, including the multigene score and the threshold, was initially evaluated in the three validation cohorts separately. The sensitivity and specificity of the score for the prediction of the risk of lymphoma progression at 2 years were assessed on the combined validation cohorts.
Findings: In the training cohort, the expression levels of 395 genes were associated with a risk of progression. 23 genes reflecting both B-cell biology and tumour microenvironment with correlation coefficients greater than 0·75 between the two technologies and sample types were retained to build a predictive model that identified a population at an increased risk of progression (p
Interpretation: We developed and validated a robust 23-gene expression-based predictor of progression-free survival that is applicable to routinely available formalin-fixed, paraffin-embedded tumour biopsies from patients with follicular lymphoma at time of diagnosis. Applying this score could allow individualised therapy for patients according to their risk category.
Funding: Roche, SIRIC Lyric, LYSARC, National Institutes of Health, the Henry J Predolin Foundation, and the Spanish Plan Nacional de Investigacion.
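An L2-penalised Cox model with a score threshold, as described above, can be sketched with the lifelines package; the synthetic data frame, gene names, penaliser value and 75th-percentile cut-off below are all illustrative assumptions, not the published model.

```python
# Ridge-penalised Cox regression on synthetic gene-expression features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(23)]
df = pd.DataFrame(rng.normal(size=(149, 23)), columns=genes)
df["pfs_years"] = rng.exponential(5.0, size=149)     # synthetic progression-free survival times
df["progressed"] = rng.integers(0, 2, size=149)      # synthetic event indicator

cph = CoxPHFitter(penalizer=0.5, l1_ratio=0.0)       # l1_ratio=0 -> pure L2 (ridge) penalty
cph.fit(df, duration_col="pfs_years", event_col="progressed")

risk = np.asarray(cph.predict_partial_hazard(df[genes])).ravel()   # multigene risk score
high_risk = risk > np.quantile(risk, 0.75)                          # illustrative threshold
print(high_risk.sum(), "patients flagged as high risk")
```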

Journal ArticleDOI
TL;DR: The case studies showed that SINCERITIES could provide accurate GRN predictions, significantly better than other GRN inference algorithms such as TSNI, GENIE3 and JUMP3, and has a low computational complexity and is amenable to problems of extremely large dimensionality.
Abstract:
Motivation: Single cell transcriptional profiling opens up a new avenue in studying the functional role of cell-to-cell variability in physiological processes. The analysis of single cell expression profiles creates new challenges due to the distributive nature of the data and the stochastic dynamics of the gene transcription process. The reconstruction of gene regulatory networks (GRNs) using single cell transcriptional profiles is particularly challenging, especially when directed gene-gene relationships are desired.
Results: We developed SINCERITIES (SINgle CEll Regularized Inference using TIme-stamped Expression profileS) for the inference of GRNs from single cell transcriptional profiles. We focused on time-stamped cross-sectional expression data, commonly generated from transcriptional profiling of single cells collected at multiple time points after cell stimulation. SINCERITIES recovers directed regulatory relationships among genes by employing regularized linear regression (ridge regression), using temporal changes in the distributions of gene expressions. Meanwhile, the modes of the gene regulations (activation and repression) come from partial correlation analyses between pairs of genes. We demonstrated the efficacy of SINCERITIES in inferring GRNs using in silico time-stamped single cell expression data and single cell transcriptional profiles of THP-1 monocytic human leukemia cells. The case studies showed that SINCERITIES could provide accurate GRN predictions, significantly better than other GRN inference algorithms such as TSNI, GENIE3 and JUMP3. Moreover, SINCERITIES has a low computational complexity and is amenable to problems of extremely large dimensionality. Finally, an application of SINCERITIES to single cell expression data of T2EC chicken erythrocytes pointed to BATF as a candidate novel regulator of erythroid development.
Availability and implementation: MATLAB and R versions of SINCERITIES are freely available from the following websites: http://www.cabsel.ethz.ch/tools/sincerities.html and https://github.com/CABSEL/SINCERITIES. The single cell THP-1 and T2EC transcriptional profiles are available from the original publications (Kouno et al., 2013; Richard et al., 2016). The in silico single cell data are available on the SINCERITIES websites.
Supplementary information: Supplementary data are available at Bioinformatics online.
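The core idea, ridge regression on temporal changes in expression distributions, can be sketched as follows; the Kolmogorov-Smirnov distance, window layout and toy data below are illustrative assumptions and omit the partial-correlation step that assigns activation/repression signs.

```python
# Rough sketch of the SINCERITIES idea (not the released MATLAB/R tool).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import Ridge

def ks_change_matrix(expr_by_time):
    """expr_by_time: list of (n_cells_t, n_genes) arrays, one per time point."""
    n_genes = expr_by_time[0].shape[1]
    D = np.zeros((len(expr_by_time) - 1, n_genes))
    for t in range(len(expr_by_time) - 1):
        for g in range(n_genes):
            D[t, g] = ks_2samp(expr_by_time[t][:, g],
                               expr_by_time[t + 1][:, g]).statistic
    return D                                             # (n_windows, n_genes)

def infer_edges(D, alpha=1.0):
    """Regress each gene's change at window t+1 on all genes' changes at window t."""
    X = D[:-1]
    W = np.zeros((D.shape[1], D.shape[1]))
    for j in range(D.shape[1]):
        W[:, j] = Ridge(alpha=alpha).fit(X, D[1:, j]).coef_   # candidate regulators of gene j
    return W

# Toy usage with 4 time points, 30 cells each, 5 genes.
rng = np.random.default_rng(0)
data = [rng.random((30, 5)) for _ in range(4)]
print(infer_edges(ks_change_matrix(data)))
```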

Journal ArticleDOI
TL;DR: DBGWAS, an extended k-mer-based GWAS method, uses compacted De Bruijn graphs (cDBG) to gather nodes identified by the association model into subgraphs defined from their neighbourhood in the initial cDBG.
Abstract: Genome-wide association study (GWAS) methods applied to bacterial genomes have shown promising results for genetic marker discovery or detailed assessment of marker effect. Recently, alignment-free methods based on k-mer composition have proven their ability to explore the accessory genome. However, they lead to redundant descriptions and results which are sometimes hard to interpret. Here we introduce DBGWAS, an extended k-mer-based GWAS method producing interpretable genetic variants associated with distinct phenotypes. Relying on compacted De Bruijn graphs (cDBG), our method gathers cDBG nodes, identified by the association model, into subgraphs defined from their neighbourhood in the initial cDBG. DBGWAS is alignment-free and only requires a set of contigs and phenotypes. In particular, it does not require prior annotation or reference genomes. It produces subgraphs representing phenotype-associated genetic variants such as local polymorphisms and mobile genetic elements (MGE). It offers a graphical framework which helps interpret GWAS results. Importantly, it is also computationally efficient: experiments took one and a half hours on average. We validated our method using antibiotic resistance phenotypes for three bacterial species. DBGWAS recovered known resistance determinants such as mutations in core genes in Mycobacterium tuberculosis, and genes acquired by horizontal transfer in Staphylococcus aureus and Pseudomonas aeruginosa, along with their MGE context. It also enabled us to formulate new hypotheses involving genetic variants not yet described in the antibiotic resistance literature. An open-source tool implementing DBGWAS is available at https://gitlab.com/leoisl/dbgwas.
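To make the k-mer-association idea concrete, the toy below tests presence/absence of short k-mers against a binary resistance phenotype with Fisher's exact test; DBGWAS itself works on unitigs of a compacted De Bruijn graph with a different association model, so this is only the general flavour, with invented sequences and labels.

```python
# Toy k-mer presence/absence association with a binary phenotype.
from collections import defaultdict
from scipy.stats import fisher_exact

def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

genomes = {"g1": "ACGTACGTGGA", "g2": "ACGTTTCGGGA", "g3": "TTGTACGTGGA", "g4": "ACGTTTCGAAA"}
resistant = {"g1": 1, "g2": 0, "g3": 1, "g4": 0}     # toy phenotype labels

presence = defaultdict(set)
for name, seq in genomes.items():
    for km in kmers(seq):
        presence[km].add(name)

for km, carriers in presence.items():
    a = sum(1 for g in carriers if resistant[g])     # k-mer present & resistant
    b = len(carriers) - a                            # present & susceptible
    c = sum(resistant.values()) - a                  # absent & resistant
    d = len(genomes) - len(carriers) - c             # absent & susceptible
    _, p = fisher_exact([[a, b], [c, d]])
    if p < 0.2:                                      # deliberately loose toy threshold
        print(km, p)
```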

Posted Content
TL;DR: In this paper, the authors establish dense correspondences between an RGB image and a surface-based representation of the human body, a task referred to as dense human pose estimation, and train CNN-based systems that deliver dense correspondence 'in the wild', namely in the presence of background, occlusions and scale variations.
Abstract: In this work, we establish dense correspondences between an RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We first gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline. We then use our dataset to train CNN-based systems that deliver dense correspondence 'in the wild', namely in the presence of background, occlusions and scale variations. We improve our training set's effectiveness by training an 'inpainting' network that can fill in missing ground truth values and report clear improvements with respect to the best results that would be achievable in the past. We experiment with fully-convolutional networks and region-based models and observe a superiority of the latter; we further improve accuracy through cascading, obtaining a system that delivers highly-accurate results in real time. Supplementary materials and videos are provided on the project page http://densepose.org.

Book ChapterDOI
08 Sep 2018
TL;DR: A Multi-Task Convolution Neural Network (MTCNN) employing joint dynamic loss weight adjustment towards classification of the named soft biometrics, as well as towards mitigation of soft-biometrics-related bias, is proposed.
Abstract: This work explores joint classification of gender, age and race. Specifically, we propose a Multi-Task Convolution Neural Network (MTCNN) employing joint dynamic loss weight adjustment towards classification of the named soft biometrics, as well as towards mitigation of soft-biometrics-related bias. The proposed algorithm achieves promising results on the UTKFace and the Bias Estimation in Face Analytics (BEFA) datasets and was ranked first in the BEFA Challenge of the European Conference on Computer Vision (ECCV) 2018.
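A generic sketch of joint gender/age/race classification with dynamically adjusted loss weights is given below; the tiny backbone, class counts and the particular weighting rule (inverse of a running average of each task loss, used to keep the three terms on a comparable scale) are illustrative assumptions rather than the scheme used in the paper.

```python
# Multi-task classification with dynamically adjusted loss weights (illustrative).
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=512, n_gender=2, n_age=9, n_race=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "gender": nn.Linear(feat_dim, n_gender),
            "age": nn.Linear(feat_dim, n_age),
            "race": nn.Linear(feat_dim, n_race),
        })

    def forward(self, x):
        f = self.backbone(x)
        return {k: head(f) for k, head in self.heads.items()}

model, ce = MultiTaskHead(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
running = {k: 1.0 for k in ["gender", "age", "race"]}     # running averages of task losses

x = torch.randn(8, 3, 64, 64)                             # toy batch of face crops
y = {"gender": torch.randint(0, 2, (8,)),
     "age": torch.randint(0, 9, (8,)),
     "race": torch.randint(0, 5, (8,))}

out = model(x)
losses = {k: ce(out[k], y[k]) for k in out}
for k, loss in losses.items():
    running[k] = 0.9 * running[k] + 0.1 * loss.item()
weights = {k: 1.0 / running[k] for k in running}          # normalise loss scales across tasks
total = sum(weights[k] * losses[k] for k in losses)

opt.zero_grad()
total.backward()
opt.step()
```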