
Showing papers by "French Institute for Research in Computer Science and Automation published in 2015"


Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) as mentioned in this paper was organized in conjunction with the MICCAI 2012 and 2013 conferences, and twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low and high grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

3,699 citations
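To make the evaluation metric and the fusion scheme concrete, here is a minimal Python sketch of the Dice overlap score and of a plain majority vote over binary segmentation masks, on toy data; the benchmark's actual fusion is hierarchical over nested tumor sub-regions, which this sketch does not reproduce.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def majority_vote(masks):
    """Fuse binary segmentations: a voxel is foreground if more than
    half of the input segmentations label it as foreground."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > len(masks) / 2

# toy 1-D "volumes" standing in for the outputs of three algorithms
m1 = np.array([0, 1, 1, 1, 0, 0])
m2 = np.array([0, 1, 1, 0, 0, 0])
m3 = np.array([0, 0, 1, 1, 1, 0])
fused = majority_vote([m1, m2, m3])
print(fused.astype(int), dice(fused, m1))
```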


Proceedings ArticleDOI
07 Jun 2015
TL;DR: A weakly supervised convolutional neural network is described for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects.
Abstract: Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.

1,020 citations
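As a rough illustration of the weakly supervised setup described above, the following PyTorch sketch trains per-class score maps with global max pooling and image-level labels only; the toy backbone, layer sizes and loss are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class WeaklySupervisedCNN(nn.Module):
    """Sketch: a fully convolutional backbone produces one score map per
    class; global max pooling turns the maps into image-level scores that
    can be trained with image-level labels only, while the maps themselves
    indicate approximate object locations."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.score_maps = nn.Conv2d(128, num_classes, 1)  # per-class score maps

    def forward(self, x):
        maps = self.score_maps(self.backbone(x))   # (B, C, H', W')
        scores = maps.amax(dim=(-2, -1))           # global max pooling
        return scores, maps

model = WeaklySupervisedCNN()
images = torch.randn(2, 3, 224, 224)               # dummy batch
labels = torch.zeros(2, 20)                        # multi-label image-level targets
labels[0, 3] = 1
labels[1, 7] = 1
scores, maps = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(scores, labels)
loss.backward()
```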


Proceedings ArticleDOI
13 Jun 2015
TL;DR: This paper proposes an accelerator which is 60x more energy efficient than the previous state-of-the-art neural network accelerator, designed down to the layout at 65 nm, with a modest footprint and consuming only 320 mW, but still about 30x faster than high-end GPUs.
Abstract: In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks which are state-of-the-art for these applications are Convolutional Neural Networks (CNN), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows a CNN to be mapped entirely within an SRAM, eliminating all DRAM accesses for weights. By further moving this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86 mm² and consuming only 320 mW, but still about 30× faster than high-end GPUs.

1,005 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, an edge-aware geodesic distance is used to handle occlusions and motion boundaries for optical flow estimation in large displacements with significant occlusion.
Abstract: We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.

804 citations
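A minimal sketch of the edge-aware geodesic distance at the core of this interpolation: distances are computed on the image grid with Dijkstra's algorithm, with a step cost inflated by edge strength so that matches lying across a contour contribute little. The sparse matches would then be interpolated with a kernel of this distance before the variational step. The edge map, weighting and 4-connectivity below are illustrative; the paper additionally proposes a faster approximation.

```python
import heapq
import numpy as np

def geodesic_distance(edge_cost, seed):
    """Edge-aware geodesic distance from a seed pixel on a 4-connected grid:
    stepping onto a pixel costs 1 plus its edge strength, so distances grow
    quickly across image contours (a sketch of the idea)."""
    h, w = edge_cost.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + edge_cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

edges = np.zeros((20, 20))
edges[:, 10] = 50.0                     # a strong vertical contour
d = geodesic_distance(edges, (10, 2))
print(d[10, 8], d[10, 12])              # the contour makes the right side much "farther"
```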


Book ChapterDOI
TL;DR: In this article, a review of global sensitivity analysis methods for model output is presented within a complete methodological framework, distinguishing three kinds of methods: screening (coarse sorting of the most influential inputs among a large number), measures of importance (quantitative sensitivity indices), and deep exploration of the model behaviour (measuring the effects of inputs over their whole variation range).
Abstract: This chapter reviews, within a complete methodological framework, various global sensitivity analysis methods for model output. Numerous statistical and probabilistic tools (regression, smoothing, tests, statistical learning, Monte Carlo, …) aim at determining the model input variables which contribute most to a quantity of interest depending on the model output. This quantity can be, for instance, the variance of an output variable. Three kinds of methods are distinguished: screening (coarse sorting of the most influential inputs among a large number), measures of importance (quantitative sensitivity indices), and deep exploration of the model behaviour (measuring the effects of inputs over their whole variation range). A progressive application methodology is illustrated on a pedagogical example. A synthesis is given to place each method along several axes, mainly the cost in number of model evaluations, the model complexity and the nature of the information provided.

744 citations
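As an example of the "measures of importance" family mentioned above, here is a minimal Monte Carlo (pick-freeze) estimator of first-order Sobol indices for a model with independent uniform inputs; the test model and sample size are illustrative.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Monte Carlo (pick-freeze) estimate of first-order Sobol indices
    for a model f: R^d -> R with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" input i, resample the others
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# additive toy model: for y = 2*x1 + x2 + 0.1*x3 with i.i.d. uniform inputs,
# S_i = a_i**2 / sum(a_j**2) ~ [0.80, 0.20, 0.002]
f = lambda X: 2.0 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2]
print(first_order_sobol(f, d=3))
```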


Posted Content
TL;DR: An online visual tracking algorithm is proposed that learns a discriminative saliency map using a Convolutional Neural Network (CNN), taking outputs from hidden layers of the network as feature descriptors since they show excellent representation performance in various general visual recognition problems.
Abstract: We propose an online visual tracking algorithm by learning a discriminative saliency map using a Convolutional Neural Network (CNN). Given a CNN pre-trained offline on a large-scale image repository, our algorithm takes outputs from hidden layers of the network as feature descriptors since they show excellent representation performance in various general visual recognition problems. The features are used to learn discriminative target appearance models using an online Support Vector Machine (SVM). In addition, we construct a target-specific saliency map by backpropagating CNN features with guidance of the SVM, and obtain the final tracking result in each frame based on the appearance model generatively constructed with the saliency map. Since the saliency map visualizes the spatial configuration of the target effectively, it improves target localization accuracy and enables us to achieve pixel-level target segmentation. We verify the effectiveness of our tracking algorithm through extensive experiments on a challenging benchmark, where our method shows outstanding performance compared to state-of-the-art tracking algorithms.

665 citations


Journal ArticleDOI
Damian Smedley1, Syed Haider2, Steffen Durinck3, Luca Pandini4, Paolo Provero4, Paolo Provero5, James E. Allen6, Olivier Arnaiz7, Mohammad Awedh8, Richard Baldock9, Giulia Barbiera4, Philippe Bardou10, Tim Beck11, Andrew Blake, Merideth Bonierbale12, Anthony J. Brookes11, Gabriele Bucci4, Iwan Buetti4, Sarah W. Burge6, Cédric Cabau10, Joseph W. Carlson13, Claude Chelala14, Charalambos Chrysostomou11, Davide Cittaro4, Olivier Collin15, Raul Cordova12, Rosalind J. Cutts14, Erik Dassi16, Alex Di Genova17, Anis Djari10, Anthony Esposito18, Heather Estrella18, Eduardo Eyras19, Eduardo Eyras20, Julio Fernandez-Banet18, Simon A. Forbes1, Robert C. Free11, Takatomo Fujisawa, Emanuela Gadaleta14, Jose Manuel Garcia-Manteiga4, David Goodstein13, Kristian Gray6, José Afonso Guerra-Assunção14, Bernard Haggarty9, Dong Jin Han21, Byung Woo Han21, Todd W. Harris22, Jayson Harshbarger, Robert K. Hastings11, Richard D. Hayes13, Claire Hoede10, Shen Hu23, Zhi-Liang Hu24, Lucie N. Hutchins, Zhengyan Kan18, Hideya Kawaji, Aminah Keliet10, Arnaud Kerhornou6, Sunghoon Kim21, Rhoda Kinsella6, Christophe Klopp10, Lei Kong25, Daniel Lawson6, Dejan Lazarevic4, Ji Hyun Lee21, Thomas Letellier10, Chuan-Yun Li25, Pietro Liò26, Chu Jun Liu25, Jie Luo6, Alejandro Maass17, Jérôme Mariette10, Thomas Maurel6, Stefania Merella4, Azza M. Mohamed8, François Moreews10, Ibounyamine Nabihoudine10, Nelson Ndegwa27, Céline Noirot10, Cristian Perez-Llamas20, Michael Primig28, Alessandro Quattrone16, Hadi Quesneville10, Davide Rambaldi4, James M. Reecy24, Michela Riba4, Steven Rosanoff6, Amna A. Saddiq8, Elisa Salas12, Olivier Sallou15, Rebecca Shepherd1, Reinhard Simon12, Linda Sperling7, William Spooner29, Daniel M. Staines6, Delphine Steinbach10, Kevin R. Stone, Elia Stupka4, Jon W. Teague1, Abu Z. Dayem Ullah14, Jun Wang25, Doreen Ware29, Marie Wong-Erasmus, Ken Youens-Clark29, Amonida Zadissa6, Shi Jian Zhang25, Arek Kasprzyk4, Arek Kasprzyk8 
TL;DR: The latest version of the BioMart Community Portal comes with many new databases that have been created by the ever-growing community and comes with better support and extensibility for data analysis and visualization tools.
Abstract: The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biological datasets spanning genomics, proteomics, model organisms, cancer data, ontology information and more. All resources available through the portal are independently administered and funded by their host organizations. The BioMart data federation technology provides a unified interface to all the available data. The latest version of the portal comes with many new databases that have been created by our ever-growing community. It also comes with better support and extensibility for data analysis and visualization tools. A new addition to our toolbox, the enrichment analysis tool is now accessible through graphical and web service interface. The BioMart community portal averages over one million requests per day. Building on this level of service and the wealth of information that has become available, the BioMart Community Portal has introduced a new, more scalable and cheaper alternative to the large data stores maintained by specialized organizations.

664 citations


Book ChapterDOI
25 Aug 2015
TL;DR: It is demonstrated that LSTM speech enhancement, even when used 'naively' as front-end processing, delivers competitive results on the CHiME-2 speech recognition task.
Abstract: We evaluate some recent developments in recurrent neural network (RNN) based speech enhancement in the light of noise-robust automatic speech recognition (ASR). The proposed framework is based on Long Short-Term Memory (LSTM) RNNs which are discriminatively trained according to an optimal speech reconstruction objective. We demonstrate that LSTM speech enhancement, even when used 'naively' as front-end processing, delivers competitive results on the CHiME-2 speech recognition task. Furthermore, simple, feature-level fusion based extensions to the framework are proposed to improve the integration with the ASR back-end. These yield a best result of 13.76% average word error rate, which is, to our knowledge, the best score to date.

603 citations


Posted Content
TL;DR: This paper converts the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved.
Abstract: Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.

588 citations
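A quick parameter-count sketch of the idea, assuming a TT-matrix with equal internal ranks: a 1024 × 1024 dense weight matrix reshaped over five modes of size 4 shrinks by roughly two orders of magnitude at TT-rank 8. The mode sizes and rank are illustrative, not the settings reported in the paper.

```python
import numpy as np

def tt_matrix_params(modes_m, modes_n, rank):
    """Parameter count of a TT-matrix: cores of shape
    (r_{k-1}, m_k, n_k, r_k) with boundary ranks equal to 1."""
    d = len(modes_m)
    ranks = [1] + [rank] * (d - 1) + [1]
    return sum(ranks[k] * modes_m[k] * modes_n[k] * ranks[k + 1] for k in range(d))

m = n = [4, 4, 4, 4, 4]            # 4**5 = 1024 rows and 1024 columns
dense = int(np.prod(m)) * int(np.prod(n))
tt = tt_matrix_params(m, n, rank=8)
print(dense, tt, dense / tt)       # ~315x fewer parameters at TT-rank 8
```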


Journal ArticleDOI
TL;DR: New nonlinear control laws are designed for robust stabilization of a chain of integrators using Implicit Lyapunov Functions for finite-time and fixed-time stability analysis of nonlinear systems.

547 citations


Journal ArticleDOI
TL;DR: NeuroVault as discussed by the authors is a web-based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain without the need to install additional software.
Abstract: Here we present NeuroVault — a web based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API enabling other services and tools to take advantage of it. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses.

Journal ArticleDOI
TL;DR: In this paper, a randomized comparison-based adaptive search algorithm is proposed to optimize a linear function with a linear constraint, where resampling is used to handle the linear constraint.
Abstract: This paper analyzes a (1, $\lambda$)-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behaviour of the algorithm. Stability of the chain implies, by applying a law of large numbers, either convergence or divergence of the algorithm. Divergence is the desired behaviour. In the constant step-size case, we show stability of the Markov chain and prove the divergence of the algorithm. In the cumulative step-size adaptation case, we prove stability of the Markov chain in the simplified case where the cumulation parameter equals 1, and discuss steps to obtain similar results for the full (default) algorithm where the cumulation parameter is smaller than 1. The stability of the Markov chain allows us to deduce geometric divergence or convergence, depending on the dimension, constraint angle, population size and damping parameter, at a rate that we estimate. Our results complement previous studies where stability was assumed.
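A minimal simulation of the constant step-size case analysed above: a (1, $\lambda$)-ES minimizing a linear function, with a linear constraint handled by resampling infeasible offspring. The constraint angle, population size and step size are illustrative, and cumulative step-size adaptation is not modelled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_lambda_es(lam=10, sigma=1.0, dim=2, iters=200, theta=np.pi / 4):
    """(1, lambda)-ES with constant step size minimizing f(x) = x[0]
    under the linear constraint x . n <= 0, handled by resampling
    infeasible offspring (a sketch of the analysed setting)."""
    n = np.zeros(dim)
    n[0], n[1] = np.cos(theta), np.sin(theta)   # constraint normal at angle theta
    x = np.zeros(dim)                           # feasible starting point
    hist = []
    for _ in range(iters):
        offspring = []
        while len(offspring) < lam:
            y = x + sigma * rng.standard_normal(dim)
            if y @ n <= 0:                      # resample until feasible
                offspring.append(y)
        x = min(offspring, key=lambda y: y[0])  # keep the best of the lambda offspring
        hist.append(x[0])
    return np.array(hist)

fit = one_lambda_es()
print(fit[-1])   # f keeps decreasing: divergence, the desired behaviour here
```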

Proceedings ArticleDOI
08 Jun 2015
TL;DR: Numerical results for three different coverage scenarios show that the optimal policy significantly increases the chances of a hit in the high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough.
Abstract: In this work we consider the problem of an optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for the typical user requesting some particular content whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and-noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy, to maximize the user's hit probability. The result dictates that it is not always optimal to follow the standard policy "cache the most popular content, everywhere". In fact, our numerical results for three different coverage scenarios show that the optimal policy significantly increases the chances of a hit in the high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough.
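A small sketch of how the hit probability is evaluated for a given randomized placement, assuming independent placement across stations and a known distribution of the number of covering stations; the popularity exponent, coverage distribution and cache size below are illustrative, and the paper's contribution is optimizing the placement probabilities rather than evaluating a fixed one.

```python
import numpy as np

def hit_probability(b, p, cover_pmf):
    """Hit probability for a randomized content placement.
    b[j]: probability that content j is cached at a given base station,
    p[j]: request (popularity) probability of content j (e.g. Zipf),
    cover_pmf[n]: probability that the typical user is covered by n stations.
    Placement is assumed independent across stations (simplifying sketch)."""
    ns = np.arange(len(cover_pmf))
    miss_given_n = (1.0 - b[:, None]) ** ns[None, :]   # all covering stations miss
    return float(p @ (1.0 - miss_given_n) @ cover_pmf)

J, K = 100, 10                                 # catalogue size, per-station cache size
p = 1.0 / np.arange(1, J + 1) ** 0.8
p /= p.sum()                                   # Zipf(0.8) popularity (assumed)
cover = np.array([0.05, 0.35, 0.40, 0.20])     # P(covered by 0,1,2,3 stations), assumed

# the baseline policy "cache the K most popular contents, everywhere";
# the paper instead optimizes b subject to sum(b) <= K and 0 <= b <= 1
most_popular = np.zeros(J)
most_popular[:K] = 1.0
print(hit_probability(most_popular, p, cover))
```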

Proceedings ArticleDOI
12 Oct 2015
TL;DR: Logjam, a novel flaw in TLS that lets a man-in-the-middle downgrade connections to "export-grade" Diffie-Hellman, is presented and a close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved a break.
Abstract: We investigate the security of Diffie-Hellman key exchange as used in popular Internet protocols and find it to be less secure than widely believed. First, we present Logjam, a novel flaw in TLS that lets a man-in-the-middle downgrade connections to "export-grade" Diffie-Hellman. To carry out this attack, we implement the number field sieve discrete log algorithm. After a week-long precomputation for a specified 512-bit group, we can compute arbitrary discrete logs in that group in about a minute. We find that 82% of vulnerable servers use a single 512-bit group, allowing us to compromise connections to 7% of Alexa Top Million HTTPS sites. In response, major browsers are being changed to reject short groups. We go on to consider Diffie-Hellman with 768- and 1024-bit groups. We estimate that even in the 1024-bit case, the computations are plausible given nation-state resources. A small number of fixed or standardized groups are used by millions of servers; performing precomputation for a single 1024-bit group would allow passive eavesdropping on 18% of popular HTTPS sites, and a second group would allow decryption of traffic to 66% of IPsec VPNs and 26% of SSH servers. A close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved such a break. We conclude that moving to stronger key exchange methods should be a priority for the Internet community.

Journal ArticleDOI
01 Jun 2015
TL;DR: A quick introduction to scikit-learn as well as to machine-learning basics are given.
Abstract: Machine learning is a pervasive development at the intersection of statistics and computer science. While it can benefit many data-related applications, the technical nature of the research literature and the corresponding algorithms slows down its adoption. Scikit-learn is an open-source software project that aims at making machine learning accessible to all, whether it be in academia or in industry. It benefits from the general-purpose Python language, which is both broadly adopted in the scientific world, and supported by a thriving ecosystem of contributors. Here we give a quick introduction to scikit-learn as well as to machine-learning basics.
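A minimal example of the estimator workflow the paper describes (load data, split, fit, evaluate), written against the current scikit-learn interface; the dataset and estimator choice are arbitrary.

```python
# Minimal scikit-learn workflow: load data, split, fit, evaluate
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)   # every estimator exposes fit/predict/score
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))          # held-out accuracy
```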

Journal ArticleDOI
20 Feb 2015-Science
TL;DR: In this article, the state of a superconducting resonator is confined to a manifold of coherent superpositions of multiple stable steady states, and a Schrödinger cat state spontaneously squeezes out of vacuum before decaying into a classical mixture.
Abstract: Physical systems usually exhibit quantum behavior, such as superpositions and entanglement, only when they are sufficiently decoupled from a lossy environment. Paradoxically, a specially engineered interaction with the environment can become a resource for the generation and protection of quantum states. This notion can be generalized to the confinement of a system into a manifold of quantum states, consisting of all coherent superpositions of multiple stable steady states. We have confined the state of a superconducting resonator to the quantum manifold spanned by two coherent states of opposite phases and have observed a Schrödinger cat state spontaneously squeeze out of vacuum before decaying into a classical mixture. This experiment points toward robustly encoding quantum information in multidimensional steady-state manifolds.
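For reference, the two-component Schrödinger cat states referred to above (superpositions of two coherent states of opposite phases) are conventionally written as

$$ |\mathcal{C}_\alpha^{\pm}\rangle = \frac{|\alpha\rangle \pm |{-\alpha}\rangle}{\sqrt{2\,\bigl(1 \pm e^{-2|\alpha|^2}\bigr)}}, $$

using $\langle \alpha | -\alpha \rangle = e^{-2|\alpha|^2}$; the confined manifold in the experiment is spanned by two such coherent states of opposite phases.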

Journal ArticleDOI
TL;DR: The modeling features of the Uppaal SMC tool, new verification algorithms and ways of applying them to potentially complex case studies are demonstrated.
Abstract: This tutorial paper surveys the main features of Uppaal SMC, a model checking approach in the Uppaal family that allows us to reason about networks of complex real-time systems with stochastic semantics. We demonstrate the modeling features of the tool, new verification algorithms and ways of applying them to potentially complex case studies.

Proceedings ArticleDOI
18 Apr 2015
TL;DR: This article goes beyond the focused research questions addressed so far by delineating the research area, synthesizing its open challenges and laying out a research agenda.
Abstract: Physical representations of data have existed for thousands of years. Yet it is now that advances in digital fabrication, actuated tangible interfaces, and shape-changing displays are spurring an emerging area of research that we call Data Physicalization. It aims to help people explore, understand, and communicate data using computer-supported physical data representations. We call these representations physicalizations, analogously to visualizations -- their purely visual counterpart. In this article, we go beyond the focused research questions addressed so far by delineating the research area, synthesizing its open challenges and laying out a research agenda.

Journal ArticleDOI
TL;DR: A survey of optical flow estimation is proposed, classifying the main principles elaborated during the evolution of the field, with particular attention to recent developments.

Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors converted the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved.
Abstract: Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train [17] format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks [21] we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.

Proceedings Article
06 Jul 2015
TL;DR: An online visual tracking algorithm is proposed that learns a discriminative saliency map using a Convolutional Neural Network, exploiting hidden layers of the network to improve target localization accuracy and achieve pixel-level target segmentation.
Abstract: We propose an online visual tracking algorithm by learning a discriminative saliency map using a Convolutional Neural Network (CNN). Given a CNN pre-trained offline on a large-scale image repository, our algorithm takes outputs from hidden layers of the network as feature descriptors since they show excellent representation performance in various general visual recognition problems. The features are used to learn discriminative target appearance models using an online Support Vector Machine (SVM). In addition, we construct a target-specific saliency map by backpropagating CNN features with guidance of the SVM, and obtain the final tracking result in each frame based on the appearance model generatively constructed with the saliency map. Since the saliency map reveals the spatial configuration of the target effectively, it improves target localization accuracy and enables us to achieve pixel-level target segmentation. We verify the effectiveness of our tracking algorithm through extensive experiments on a challenging benchmark, where our method shows outstanding performance compared to state-of-the-art tracking algorithms.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper addresses the challenge of bringing TSCH (Time Slotted Channel Hopping MAC) to dynamic networks, focusing on low-power IPv6 and RPL networks, and introduces Orchestra, in which nodes autonomously compute and maintain their own local schedules without any central or distributed scheduler.
Abstract: Time slotted operation is a well-proven approach to achieve highly reliable low-power networking through scheduling and channel hopping. It is, however, difficult to apply time slotting to dynamic networks as envisioned in the Internet of Things. Commonly, these applications do not have pre-defined periodic traffic patterns and nodes can be added or removed dynamically. This paper addresses the challenge of bringing TSCH (Time Slotted Channel Hopping MAC) to such dynamic networks. We focus on low-power IPv6 and RPL networks, and introduce Orchestra. In Orchestra, nodes autonomously compute their own, local schedules. They maintain multiple schedules, each allocated to a particular traffic plane (application, routing, MAC), and updated automatically as the topology evolves. Orchestra (re)computes local schedules without signaling overhead, and does not require any central or distributed scheduler. Instead, it relies on the existing network stack information to maintain the schedules. This scheme allows Orchestra to build non-deterministic networks while exploiting the robustness of TSCH. We demonstrate the practicality of Orchestra and quantify its benefits through extensive evaluation in two testbeds, on two hardware platforms. Orchestra reduces, or even eliminates, network contention. In long running experiments of up to 72 h we show that Orchestra achieves end-to-end delivery ratios of over 99.99%. Compared to RPL in asynchronous low-power listening networks, Orchestra improves reliability by two orders of magnitude, while achieving a similar latency-energy balance.
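A heavily simplified sketch of the autonomous scheduling idea (receiver-based unicast slots): each node derives its own receive slot from a hash of its identifier, and transmits to a neighbour in the slot derived from that neighbour's identifier, so no signalling is needed. The hash function, slotframe length and identifiers below are illustrative assumptions, not the actual Contiki implementation.

```python
# Orchestra-style autonomous slot assignment (sketch, assumed parameters)
SLOTFRAME_LEN = 47                  # slotframe length; a prime is often chosen
                                    # to limit recurring collisions (assumption)

def node_hash(node_id: bytes) -> int:
    """Stand-in for the real hash of a node identifier (e.g. its MAC address)."""
    return sum(node_id) % SLOTFRAME_LEN

def rx_slot(my_id: bytes) -> int:
    """Slot in which this node listens."""
    return node_hash(my_id)

def tx_slot(neighbour_id: bytes) -> int:
    """Slot in which this node transmits to a given neighbour,
    i.e. that neighbour's receive slot."""
    return node_hash(neighbour_id)

print(rx_slot(b"\x00\x12\x4b\x00\x01"), tx_slot(b"\x00\x12\x4b\x00\x02"))
```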

Proceedings ArticleDOI
14 Dec 2015
TL;DR: This paper introduces the FIT IoT-LAB testbed, an open testbed composed of 2728 low-power wireless nodes and 117 mobile robots available for experimenting with large-scale wireless IoT technologies, ranging from low-level protocols to advanced Internet services.
Abstract: This paper introduces the FIT IoT-LAB testbed, an open testbed composed of 2728 low-power wireless nodes and 117 mobile robots available for experimenting with large-scale wireless IoT technologies, ranging from low-level protocols to advanced Internet services. IoT-LAB is built to accelerate the development of tomorrow's IoT technology by offering an accurate open-access and open-source multi-user scientific tool. The IoT-LAB testbed is deployed in 6 sites across France. Each site features different node and hardware capabilities, but all sites are interconnected and available through the same web portal, common REST interfaces and consistent CLI tools. The result is a heterogeneous testing environment, which covers a large spectrum of IoT use cases and applications. IoT-LAB is a one-of-a-kind facility, allowing anyone to test their solutions at scale, and to experiment with and fine-tune new networking concepts.

Proceedings ArticleDOI
14 Mar 2015
TL;DR: An ML accelerator called PuDianNao is presented, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network, and can perform up to 1056 GOP/s, and consumes 596 mW only.
Abstract: Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy efficiencies are limited by their excessive support for flexibility. Hardware accelerators may achieve better energy efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even setting learning accuracy aside, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques: k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and reduces energy consumption by 128.41x.

Journal ArticleDOI
TL;DR: The additional set of four control inputs actuating the propeller tilting angles is shown to yield full actuation to the quadrotor position/orientation in space, thus allowing it to behave as a fully actuated flying vehicle.
Abstract: Standard quadrotor unmanned aerial vehicles (UAVs) possess a limited mobility because of their inherent underactuation, that is, availability of four independent control inputs (the four propeller spinning velocities) versus the 6 degrees of freedom parameterizing the quadrotor position/orientation in space. Thus, the quadrotor pose cannot track arbitrary trajectories in space (e.g., it can hover on the spot only when horizontal). Because UAVs are more and more employed as service robots for interaction with the environment, this loss of mobility due to their underactuation can constitute a limiting factor. In this paper, we present a novel design for a quadrotor UAV with tilting propellers which is able to overcome these limitations. Indeed, the additional set of four control inputs actuating the propeller tilting angles is shown to yield full actuation to the quadrotor position/orientation in space, thus allowing it to behave as a fully actuated flying vehicle. We then develop a comprehensive modeling and control framework for the proposed quadrotor, and subsequently illustrate the hardware and software specifications of an experimental prototype. Finally, the results of several simulations and real experiments are reported to illustrate the capabilities of the proposed novel UAV design.

Journal ArticleDOI
TL;DR: The different definitions of the early repolarization pattern were reviewed to delineate the electrocardiographic measures to be used when defining this pattern, and an agreed definition has been established, which requires the peak of an end-QRS notch and/or the onset of an end-QRS slur as a measure to be determined when an interpretation of early repolarization is being considered.

Journal ArticleDOI
TL;DR: A review from sociological concepts to social robotics and human-aware navigation, and recent robotic experiments focusing on the way social conventions and robotics must be linked are presented.
Abstract: In the context of a growing interest in modelling human behavior to increase robots' social abilities, this article presents a survey related to socially-aware robot navigation. It presents a review from sociological concepts to social robotics and human-aware navigation. Social cues, signals and proxemics are discussed, and socially aware behavior in terms of navigation is also addressed. Finally, recent robotic experiments focusing on how social conventions and robotics must be linked are presented.

Journal ArticleDOI
TL;DR: This work gives the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks and shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the Composable security framework.
Abstract: We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
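For context, the asymptotic secret key rate against collective attacks that the abstract says the finite-size composable rate converges to is usually written (for reverse reconciliation) as

$$ K_{\infty} = \beta\, I(A\!:\!B) - \chi(B\!:\!E), $$

where $\beta$ is the reconciliation efficiency, $I(A\!:\!B)$ the Alice–Bob mutual information, and $\chi(B\!:\!E)$ the Holevo bound on the eavesdropper's information about Bob's measurement data; the paper's contribution is establishing a rate of this form in the composable, finite-size security framework.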

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, a part-based region matching approach was proposed to solve the unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes, which is far more general than typical colocalization, cosegmentation or weakly-supervised localization tasks.
Abstract: This paper addresses unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes. The setting of this problem is fully unsupervised, without even image-level annotations or any assumption of a single dominant class. This is far more general than typical colocalization, cosegmentation, or weakly-supervised localization tasks. We tackle the discovery and localization problem using a part-based region matching approach: We use off-the-shelf region proposals to form a set of candidate bounding boxes for objects and object parts. These regions are efficiently matched across images using a probabilistic Hough transform that evaluates the confidence for each candidate correspondence considering both appearance and spatial consistency. Dominant objects are discovered and localized by comparing the scores of candidate regions and selecting those that stand out over other regions containing them. Extensive experimental evaluations on standard benchmarks demonstrate that the proposed approach significantly outperforms the current state of the art in colocalization, and achieves robust object discovery in challenging mixed-class datasets.

Journal ArticleDOI
TL;DR: In this article, the recovery properties of the support of the measure (i.e., the location of the Dirac masses) using total variation of measures (TV) regularization was studied.
Abstract: This paper studies sparse spikes deconvolution over the space of measures. We focus on the recovery properties of the support of the measure (i.e., the location of the Dirac masses) using total variation of measures (TV) regularization. This regularization is the natural extension of the $\ell^1$ norm of vectors to the setting of measures. We show that support identification is governed by a specific solution of the dual problem (a so-called dual certificate) having minimum $L^2$ norm. Our main result shows that if this certificate is non-degenerate (see the definition below), when the signal-to-noise ratio is large enough TV regularization recovers the exact same number of Diracs. We show that both the locations and the amplitudes of these Diracs converge toward those of the input measure when the noise drops to zero. Moreover, the non-degeneracy of this certificate can be checked by computing a so-called vanishing derivative pre-certificate. This proxy can be computed in closed form by solving a linear system. Lastly, we draw connections between the support of the recovered measure on a continuous domain and on a discretized grid. We show that when the signal-to-noise level is large enough, and provided the aforementioned dual certificate is non-degenerate, the solution of the discretized problem is supported on pairs of Diracs which are neighbors of the Diracs of the input measure. This gives a precise description of the convergence of the solution of the discretized problem toward the solution of the continuous grid-free problem, as the grid size tends to zero.
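A standard way to write the TV-regularized problem studied here (often called the BLASSO) is

$$ \min_{m \in \mathcal{M}(\mathbb{X})} \; \frac{1}{2}\, \| y - \Phi m \|^2 + \lambda\, |m|(\mathbb{X}), $$

where $m$ is a Radon measure on the domain $\mathbb{X}$, $\Phi$ the measurement operator, $|m|(\mathbb{X})$ the total variation norm of the measure, and $\lambda > 0$ the regularization parameter; for a discrete measure $m = \sum_i a_i \delta_{x_i}$ one has $|m|(\mathbb{X}) = \sum_i |a_i|$, which recovers the $\ell^1$ norm mentioned in the abstract.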