
Showing papers by "French Institute for Research in Computer Science and Automation" published in 2016


Journal ArticleDOI
TL;DR: A review of motion planning techniques implemented in the intelligent vehicles literature, with a description of the techniques used by research teams, their contributions to motion planning, and a comparison among these techniques.
Abstract: Intelligent vehicles have increased their capabilities for highly, and even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to drive autonomously in complex environments. The main goal is to execute strategies that improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRUs) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still require further effort before real-world deployment. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the techniques used by research teams, their contributions to motion planning, and a comparison among these techniques are also presented. Relevant works on overtaking and obstacle avoidance maneuvers are presented, clarifying the gaps and challenges to be addressed in the coming years. Finally, an overview of future research directions and applications is given.

1,162 citations


Posted Content
TL;DR: In this paper, a membership inference attack is proposed: given a data record and black-box access to a machine learning model, determine whether the record was in the model's training dataset.
Abstract: We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
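To make the attack concrete, here is a minimal sketch of the shadow-model training the abstract alludes to: models with known membership are trained, their prediction vectors are labeled member/non-member, and an attack classifier learns the difference. Everything here (synthetic data, model choices, sizes) is an illustrative assumption, not the paper's exact setup.

```python
# Shadow-model membership inference, heavily simplified.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic classification task

attack_X, attack_y = [], []
for s in range(4):                    # a few shadow models with known membership
    idx = rng.permutation(len(X))
    train, out = idx[:500], idx[500:1000]
    shadow = RandomForestClassifier(n_estimators=50, random_state=s)
    shadow.fit(X[train], y[train])
    for rows, member in ((train, 1), (out, 0)):
        attack_X.append(shadow.predict_proba(X[rows]))   # prediction vectors
        attack_y.append(np.full(len(rows), member))

# The attack model classifies "member" vs "non-member" from prediction vectors.
attack = LogisticRegression().fit(np.vstack(attack_X), np.concatenate(attack_y))
print("attack accuracy on shadow data:",
      attack.score(np.vstack(attack_X), np.concatenate(attack_y)))
```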

1,030 citations


Journal ArticleDOI
25 Aug 2016-Nature
TL;DR: A QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator is demonstrated.
Abstract: Quantum error correction (QEC) can overcome the errors experienced by qubits [1] and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations [2]. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach [3]. Although previous works have demonstrated elements of QEC [4–16], they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states [17] of a superconducting resonator [18–21]. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0⟩f and |1⟩f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
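The encoding rests on an invariant that is easy to check numerically: an even cat state has definite photon-number parity, and losing a single photon flips that parity, which is the kind of error syndrome the experiment monitors. Below is a small numpy sketch in a truncated Fock basis; the amplitude alpha and the truncation level are arbitrary illustrative choices, not the experiment's parameters.

```python
# Photon-number parity as the error syndrome for a cat-state qubit.
import numpy as np
from math import factorial

N, alpha = 30, 2.0
n = np.arange(N)
coh = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(int(k)) for k in n])

even_cat = coh + coh * (-1.0)**n     # |alpha> + |-alpha>: even photon numbers only
even_cat /= np.linalg.norm(even_cat)

def parity(psi):
    return float(np.sum((-1.0)**n * np.abs(psi)**2))

# Single photon loss: apply the annihilation operator a|n> = sqrt(n)|n-1>.
lost = np.zeros(N)
lost[:-1] = np.sqrt(n[1:]) * even_cat[1:]
lost /= np.linalg.norm(lost)

print("parity before loss:", round(parity(even_cat), 3))   # +1
print("parity after loss: ", round(parity(lost), 3))       # -1 -> error detected
```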

844 citations


Journal ArticleDOI
TL;DR: The obitools package satisfies this requirement thanks to a set of programs specifically designed for analysing NGS data in a DNA metabarcoding context, helping to set up tailor-made analysis pipelines for a broad range of DNA metabarcoding applications, including biodiversity surveys or diet analyses.
Abstract: DNA metabarcoding offers new perspectives in biodiversity research. This recently developed approach to ecosystem study relies heavily on the use of next-generation sequencing (NGS) and thus calls upon the ability to deal with huge sequence data sets. The obitools package satisfies this requirement thanks to a set of programs specifically designed for analysing NGS data in a DNA metabarcoding context. Their capacity to filter and edit sequences while taking into account taxonomic annotation helps to set up tailor-made analysis pipelines for a broad range of DNA metabarcoding applications, including biodiversity surveys or diet analyses. The obitools package is distributed as an open source software available on the following website: http://metabarcoding.org/obitools. A Galaxy wrapper is available on the GenOuest core facility toolshed: http://toolshed.genouest.org.

711 citations


Journal ArticleDOI
TL;DR: This work proposes to view attribute-based image classification as a label-embedding problem in which each class is embedded in the space of attribute vectors, and introduces a function that measures the compatibility between an image and a label embedding.
Abstract: Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.
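The compatibility function at the heart of this framework is a bilinear form between image features and class attribute vectors, which is what enables zero-shot prediction: any class with a known attribute vector can be scored, seen or not. A minimal numpy sketch, where the dimensions and random matrices stand in for quantities learned with the ranking objective:

```python
# Label-embedding compatibility F(x, y) = f(x)^T W phi(y), illustrative shapes.
import numpy as np

rng = np.random.default_rng(0)
D, E, C = 512, 85, 10                 # image features, attributes, classes
W = rng.normal(size=(D, E)) * 0.01    # learned from ranked training pairs in practice
phi = rng.random((C, E))              # one attribute vector per class (given)
x = rng.normal(size=D)                # image feature, e.g. a CNN descriptor

scores = x @ W @ phi.T                # compatibility with every class at once
print("predicted class:", int(np.argmax(scores)))
```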

699 citations


Journal ArticleDOI
TL;DR: A comprehensive survey on UAVs and the related issues is presented, along with an envisioned UAV-based architecture for the delivery of UAV-based value-added IoT services from the sky and the relevant key challenges and requirements.
Abstract: Recently, unmanned aerial vehicles (UAVs), or drones, have attracted a lot of attention, since they represent a new potential market. Along with the maturity of the technology and relevant regulations, a worldwide deployment of these UAVs is expected. Thanks to their high mobility, drones can be used in many applications, such as service delivery, pollution mitigation, farming, and rescue operations. Due to their ubiquitous usability, UAVs will play an important role in the Internet of Things (IoT) vision, and they may become its main key enabler. While these UAVs would be deployed for specific objectives (e.g., service delivery), they can, at the same time, be used to offer new IoT value-added services when equipped with suitable and remotely controllable machine-type communication (MTC) devices (i.e., sensors, cameras, and actuators). However, deploying UAVs for the envisioned purposes cannot be done before overcoming the relevant challenging issues. These challenges comprise not only technical issues, such as physical collision, but also regulatory issues, as this nascent technology could be associated with problems such as violating people's privacy or even being used for illegal operations such as drug smuggling. Providing communication to UAVs is another challenging issue facing the deployment of this technology. In this paper, a comprehensive survey on UAVs and the related issues is presented. In addition, our envisioned UAV-based architecture for the delivery of UAV-based value-added IoT services from the sky is introduced, and the relevant key challenges and requirements are presented.

693 citations


Proceedings ArticleDOI
24 Oct 2016
TL;DR: This paper outlines a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification.
Abstract: Ethereum is a framework for cryptocurrencies which uses blockchain technology to provide an open global computing platform, called the Ethereum Virtual Machine (EVM). EVM executes bytecode on a simple stack machine. Programmers do not usually write EVM code; instead, they can program in a JavaScript-like language, called Solidity, that compiles to bytecode. Since the main purpose of EVM is to execute smart contracts that manage and transfer digital assets (called Ether), security is of paramount importance. However, writing secure smart contracts can be extremely difficult: due to the openness of Ethereum, both programs and pseudonymous users can call into the public methods of other programs, leading to potentially dangerous compositions of trusted and untrusted code. This risk was recently illustrated by an attack on TheDAO contract that exploited subtle details of the EVM semantics to transfer roughly $50M worth of Ether into the control of an attacker. In this paper, we outline a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification.
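The TheDAO exploit mentioned above is widely reported to hinge on reentrancy: the contract pays out before updating its ledger, and the untrusted recipient calls back into the withdrawal method. The toy Python model below reproduces that control flow only; it is not Solidity and not the paper's F* framework, and all names and amounts are invented.

```python
# Toy reentrancy: state is updated only after the external call.
class Contract:
    def __init__(self):
        self.balances = {"attacker": 10}
        self.pot = 100                        # funds held for all users

    def withdraw(self, who, receive):
        amount = self.balances[who]
        if amount > 0 and self.pot >= amount:
            receive(amount)                   # external call runs untrusted code...
            self.balances[who] = 0            # ...the state update comes too late

c = Contract()
stolen = []

def attacker_receive(amount):
    stolen.append(amount)
    c.pot -= amount
    if c.pot >= amount:                       # re-enter before the balance is zeroed
        c.withdraw("attacker", attacker_receive)

c.withdraw("attacker", attacker_receive)
print("deposited 10, drained:", sum(stolen))  # drains the whole pot, not just 10
```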

551 citations


Journal ArticleDOI
TL;DR: This paper aims at presenting a brief but almost self-contained introduction to the most important approaches dedicated to vision-based camera localization, along with a survey of several extensions proposed in recent years.
Abstract: Augmented reality (AR) allows virtual objects to be seamlessly inserted into an image sequence. In order to accomplish this goal, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. The solution of this problem can be related to a pose estimation or, equivalently, a camera localization process. This paper aims at presenting a brief but almost self-contained introduction to the most important approaches dedicated to vision-based camera localization, along with a survey of several extensions proposed in recent years. For most of the presented approaches, we also provide links to the code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practical implementations.
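In the model-based setting, the camera localization problem the tutorial surveys reduces to the Perspective-n-Point (PnP) problem: recover the camera rotation and translation from known 3D points and their 2D projections. A minimal OpenCV sketch, with made-up correspondences and intrinsics:

```python
# Pose estimation from 2D-3D correspondences with solvePnP.
import numpy as np
import cv2

object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0.5, 0.5, 1]], dtype=np.float64)       # model points
image_pts = np.array([[320, 240], [420, 238], [424, 340], [318, 344],
                      [372, 290]], dtype=np.float64)           # their projections
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)        # rotation matrix + translation = camera pose
print(ok, "\nR =\n", np.round(R, 3), "\nt =", np.round(tvec.ravel(), 3))
```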

506 citations


Book
17 Oct 2016
TL;DR: This book is a comprehensive treatment of the theory of persistence modules over the real line, presenting a set of mathematical tools to analyse the structure and to establish the stability of such modules, providing a sound mathematical framework for the study of persistence diagrams.
Abstract: This book is a comprehensive treatment of the theory of persistence modules over the real line. It presents a set of mathematical tools to analyse the structure and to establish the stability of such modules, providing a sound mathematical framework for the study of persistence diagrams. Completely self-contained, this brief introduces the notion of persistence measure and makes extensive use of a new calculus of quiver representations to facilitate explicit computations. Appealing to both beginners and experts in the subject, The Structure and Stability of Persistence Modules provides a purely algebraic presentation of persistence, and thus complements the existing literature, which focuses mainly on topological and algorithmic aspects.

457 citations


Journal ArticleDOI
01 Jul 2016
TL;DR: This work surveys VANETs focusing on their communication and application challenges, discusses the protocol stack of this type of network, and provides a qualitative comparison among the most common protocols in the literature.
Abstract: VANETs have emerged as an exciting research and application area. Increasingly, vehicles are being equipped with embedded sensors, processing, and wireless communication capabilities. This has opened a myriad of possibilities for powerful and potentially life-changing applications on safety, efficiency, comfort, public collaboration and participation, while drivers are on the road. Although VANETs can be considered a special case of mobile ad hoc networks, the high but constrained mobility of vehicles brings new challenges to data communication and application design. This is due to their highly dynamic and intermittently connected topology and to the different QoS requirements of applications. In this work, we survey VANETs focusing on their communication and application challenges. In particular, we discuss the protocol stack of this type of network and provide a qualitative comparison among the most common protocols in the literature. We then present a detailed discussion of different categories of VANET applications. Finally, we discuss open research problems to encourage the design of new VANET solutions.

362 citations


Journal ArticleDOI
TL;DR: An overview of soft biometrics is provided, some of the techniques that have been proposed to extract them from image and video data are discussed, a taxonomy for organizing and classifying soft biometric attributes is introduced, and their strengths and limitations are enumerated.
Abstract: Recent research has explored the possibility of extracting ancillary information from primary biometric traits, viz., face, fingerprints, hand geometry, and iris. This ancillary information includes personal attributes, such as gender, age, ethnicity, hair color, height, weight, and so on. Such attributes are known as soft biometrics and have applications in surveillance and in indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., young Asian female with dark eyes and brown hair). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from image and video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.

Proceedings ArticleDOI
03 Oct 2016
TL;DR: The potential of the srsLTE library is shown by extending the baseline code to allow LTE transmissions in the unlicensed bands and coexistence with WiFi, and by showing how different vendor-specific mechanisms in WiFi cards might affect coexistence.
Abstract: Testbeds are essential for experimental evaluation as well as for product development. In the context of LTE networks, existing testbed platforms are limited either in functionality and/or extensibility or are too complex to modify and customise. In this work we present srsLTE, an open-source platform for LTE experimentation designed for maximum modularity and code reuse and fully compliant with LTE Release 8. We show the potential of the srsLTE library by extending the baseline code to allow LTE transmissions in the unlicensed bands and coexistence with WiFi. We also expand previous results on this emerging research area by showing how different vendor-specific mechanisms in WiFi cards might affect coexistence.

Journal ArticleDOI
27 May 2016-Science
TL;DR: It is shown that the cat can be in two separate locations at the same time, and that the ability to manipulate such multicavity quantum states paves the way for logical operations between redundantly encoded qubits for fault-tolerant quantum computation and communication.
Abstract: Quantum superpositions of distinct coherent states in a single-mode harmonic oscillator, known as “cat states,” have been an elegant demonstration of Schrödinger’s famous cat paradox. Here, we realize a two-mode cat state of electromagnetic fields in two microwave cavities bridged by a superconducting artificial atom, which can also be viewed as an entangled pair of single-cavity cat states. We present full quantum state tomography of this complex cat state over a Hilbert space exceeding 100 dimensions via quantum nondemolition measurements of the joint photon number parity. The ability to manipulate such multicavity quantum states paves the way for logical operations between redundantly encoded qubits for fault-tolerant quantum computation and communication.

Journal ArticleDOI
TL;DR: This article proposes a framework where deep neural networks are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information and presents its application to a speech enhancement problem.
Abstract: This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura–Saito divergence, and also Kullback–Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.
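The Wiener filtering step has a compact closed form under this model: with DNN-estimated source power spectra v_j and spatial covariance matrices R_j, source j is recovered per time-frequency bin as W_j x with W_j = v_j R_j (sum_k v_k R_k)^(-1). A numpy sketch for a single bin, with random placeholders for the learned quantities:

```python
# Multichannel Wiener filter for one (frequency, time) bin.
import numpy as np

rng = np.random.default_rng(0)
I, J = 2, 2                                  # channels, sources
x = rng.normal(size=I) + 1j * rng.normal(size=I)            # mixture STFT bin
v = np.array([0.8, 0.3])                                    # source spectra (DNN output)
R = np.stack([np.eye(I, dtype=complex) for _ in range(J)])  # spatial covariances

mix_cov = sum(v[j] * R[j] for j in range(J))                # mixture covariance
for j in range(J):
    W = v[j] * R[j] @ np.linalg.inv(mix_cov)                # Wiener filter, source j
    print(f"source {j} estimate:", np.round(W @ x, 3))
```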

Proceedings ArticleDOI
11 Jan 2016
TL;DR: A new, completely redesigned version of F*, a language that works both as a proof assistant and as a general-purpose, verification-oriented, effectful programming language; the authors' experience confirms F*'s pay-as-you-go cost model.
Abstract: We present a new, completely redesigned version of F*, a language that works both as a proof assistant and as a general-purpose, verification-oriented, effectful programming language. In support of these complementary roles, F* is a dependently typed, higher-order, call-by-value language with primitive effects including state, exceptions, divergence and IO. Although primitive, programmers choose the granularity at which to specify effects by equipping each effect with a monadic, predicate transformer semantics. F* uses this to efficiently compute weakest preconditions and discharges the resulting proof obligations using a combination of SMT solving and manual proofs. Isolated from the effects, the core of F* is a language of pure functions used to write specifications and proof terms---its consistency is maintained by a semantic termination check based on a well-founded order. We evaluate our design on more than 55,000 lines of F* we have authored in the last year, focusing on three main case studies. Showcasing its use as a general-purpose programming language, F* is programmed (but not verified) in F*, and bootstraps in both OCaml and F#. Our experience confirms F*'s pay-as-you-go cost model: writing idiomatic ML-like code with no finer specifications imposes no user burden. As a verification-oriented language, our most significant evaluation of F* is in verifying several key modules in an implementation of the TLS-1.2 protocol standard. For the modules we considered, we are able to prove more properties, with fewer annotations, using F* than in a prior verified implementation of TLS-1.2. Finally, as a proof assistant, we discuss our use of F* in mechanizing the metatheory of a range of lambda calculi, starting from the simply typed lambda calculus to System F-omega and even micro-F*, a sizeable fragment of F* itself---these proofs make essential use of F*'s flexible combination of SMT automation and constructive proofs, enabling a tactic-free style of programming and proving at a relatively large scale.
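The predicate-transformer idea underlying F*'s weakest-precondition computation can be shown in miniature: each command maps a postcondition to the weakest precondition that guarantees it. The Python toy below implements wp for assignment, sequencing, and conditionals over a state dictionary; it is a textbook wp calculus for illustration, not F*'s actual machinery.

```python
# Each command is a function: postcondition -> weakest precondition.
def assign(var, expr):
    return lambda post: (lambda s: post({**s, var: expr(s)}))  # wp(x:=e, Q) = Q[x:=e]

def seq(c1, c2):
    return lambda post: c1(c2(post))          # wp(c1;c2, Q) = wp(c1, wp(c2, Q))

def ifelse(cond, c1, c2):
    return lambda post: (lambda s: c1(post)(s) if cond(s) else c2(post)(s))

# Program: y := x; if y < 0 then y := -y else y := y.  Claim: afterwards y >= 0.
prog = seq(assign("y", lambda s: s["x"]),
           ifelse(lambda s: s["y"] < 0,
                  assign("y", lambda s: -s["y"]),
                  assign("y", lambda s: s["y"])))
pre = prog(lambda s: s["y"] >= 0)                  # computed weakest precondition
print(all(pre({"x": x}) for x in range(-5, 6)))    # True: it holds for every input
```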

Journal ArticleDOI
TL;DR: In this paper, the authors extract feature point matches between frames using SURF descriptors and dense optical flow, and use the matches to estimate a homography with RANSAC.
Abstract: This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body as human motion is not constrained by the camera. Trajectories consistent with the homography are considered as due to camera motion, and thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in significant improvement on motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words (BOW) histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to BOW encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.
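The camera-motion compensation step is easy to prototype with OpenCV: match keypoints between consecutive frames, fit a homography with RANSAC, and treat trajectories consistent with it as background. The sketch below substitutes ORB for SURF (SURF sits in opencv-contrib) and simulates a camera pan on a synthetic frame:

```python
# Homography + RANSAC camera-motion estimation between two frames.
import numpy as np
import cv2

rng = np.random.default_rng(0)
prev = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 0)
H_true = np.array([[1, 0, 6], [0, 1, -4], [0, 0, 1]], dtype=np.float64)
curr = cv2.warpPerspective(prev, H_true, (320, 240))    # simulated camera pan

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Matches consistent with H follow the global (camera) motion; in the paper,
# trajectories agreeing with it are removed before computing HOF/MBH.
print("estimated camera translation:", np.round(H[:2, 2], 1))   # ~ [6, -4]
```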

Journal ArticleDOI
TL;DR: This paper analyzes in detail the specific requirements that an OS should satisfy to run on low-end IoT devices, and surveys applicable OSs, focusing on candidates that could become an equivalent of Linux for such devices, i.e., a one-size-fits-most, open source OS for low- end IoT devices.
Abstract: The Internet of Things (IoT) is projected to soon interconnect tens of billions of new devices, in large part also connected to the Internet. IoT devices include both high-end devices which can use traditional go-to operating systems (OSs) such as Linux, and low-end devices which cannot, due to stringent resource constraints, e.g., very limited memory, computational power, and power supply. However, large-scale IoT software development, deployment, and maintenance requires an appropriate OS to build upon. In this paper, we thus analyze in detail the specific requirements that an OS should satisfy to run on low-end IoT devices, and we survey applicable OSs, focusing on candidates that could become an equivalent of Linux for such devices, i.e., a one-size-fits-most, open source OS for low-end IoT devices.

Book ChapterDOI
14 Aug 2016
TL;DR: This paper considers attacks where an adversary can query an oracle implementing a cryptographic primitive in a quantum superposition of different states, and shows that the most widely used modes of operation for authentication and authenticated encryption are completely broken in this security model.
Abstract: Due to Shor's algorithm, quantum computers are a severe threat for public-key cryptography. This motivated the cryptographic community to search for quantum-safe solutions. On the other hand, the impact of quantum computing on secret-key cryptography is much less understood. In this paper, we consider attacks where an adversary can query an oracle implementing a cryptographic primitive in a quantum superposition of different states. This model gives a lot of power to the adversary, but recent results show that it is nonetheless possible to build secure cryptosystems in it. We study applications of a quantum procedure called Simon's algorithm (the simplest quantum period-finding algorithm) in order to attack symmetric cryptosystems in this model. Following previous works in this direction, we show that several classical attacks based on finding collisions can be dramatically sped up using Simon's algorithm: finding a collision requires $$\Omega(2^{n/2})$$ queries in the classical setting, but when collisions happen with some hidden periodicity, they can be found with only $$O(n)$$ queries in the quantum model. We obtain attacks with very strong implications. First, we show that the most widely used modes of operation for authentication and authenticated encryption (e.g., CBC-MAC, PMAC, GMAC, GCM, and OCB) are completely broken in this security model. Our attacks are also applicable to many CAESAR candidates: CLOC, AEZ, COPA, OTR, POET, OMD, and Minalpher. This is quite surprising compared to the situation with encryption modes: Anand et al. show that standard modes are secure with a quantum-secure PRF. Second, we show that Simon's algorithm can also be applied to slide attacks, leading to an exponential speed-up of a classical symmetric cryptanalysis technique in the quantum model.
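The speed-up rests on the classical half of Simon's algorithm: each quantum query returns a uniformly random vector y with y·s = 0 over GF(2), so roughly n independent vectors pin down the hidden period s by linear algebra. The sketch below simulates those measurement outcomes for a made-up secret and recovers it by Gaussian elimination over GF(2):

```python
# Classical post-processing of Simon's algorithm: solve for the period s.
import numpy as np

n = 6
secret = np.array([1, 0, 1, 1, 0, 0])                 # hidden period (made up)
rng = np.random.default_rng(1)
A = np.array([y for y in rng.integers(0, 2, size=(60, n))
              if y @ secret % 2 == 0])                # simulated measurements

# Reduced row echelon form over GF(2).
R, pivots, r = A.copy(), [], 0
for c in range(n):
    hits = [i for i in range(r, len(R)) if R[i, c]]
    if not hits:
        continue
    R[[r, hits[0]]] = R[[hits[0], r]]                 # move a pivot into row r
    for i in range(len(R)):
        if i != r and R[i, c]:
            R[i] ^= R[r]                              # clear column c elsewhere
    pivots.append(c)
    r += 1

# A's nullspace is one-dimensional and spanned by the secret itself.
free = [c for c in range(n) if c not in pivots][0]
s = np.zeros(n, dtype=int)
s[free] = 1
for i, c in enumerate(pivots):
    s[c] = R[i, free]
print("recovered period:", s, "correct:", bool((s == secret).all()))
```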

Proceedings ArticleDOI
19 Mar 2016
TL;DR: Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand.
Abstract: How do people appropriate their virtual hand representation when interacting in virtual environments? In order to answer this question, we conducted an experiment studying the sense of embodiment when interacting with three different virtual hand representations, each one providing a different degree of visual realism but keeping the same control mechanism. The main experimental task was a Pick-and-Place task in which participants had to grasp a virtual cube and place it to an indicated position while avoiding an obstacle (brick, barbed wire or fire). An additional task was considered in which participants had to perform a potentially dangerous operation towards their virtual hand: place their virtual hand close to a virtual spinning saw. Both qualitative measures and questionnaire data were gathered in order to assess the sense of agency and ownership towards each virtual hand. Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand which provides a direct mapping between the degrees of freedom of the real and virtual hand.

Journal ArticleDOI
TL;DR: A novel matching algorithm, called DeepMatching, to compute dense correspondences between images, which outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures.
Abstract: We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.
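The correlational primitive at the bottom of such an architecture fits in a few lines: score one patch against every location of the other image with normalized cross-correlation, producing the response map that DeepMatching then aggregates over a hierarchy of patch sizes. Synthetic data below; this is the bottom layer only, not the full multi-layer pipeline.

```python
# Normalized cross-correlation of one patch against a whole image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
patch = img[20:28, 30:38].copy()          # pretend this patch came from image 1
H, W = patch.shape

p = (patch - patch.mean()) / patch.std()
scores = np.zeros((img.shape[0] - H + 1, img.shape[1] - W + 1))
for y in range(scores.shape[0]):
    for x in range(scores.shape[1]):
        w = img[y:y + H, x:x + W]
        scores[y, x] = np.mean(p * (w - w.mean()) / (w.std() + 1e-9))

print("best match at:", np.unravel_index(np.argmax(scores), scores.shape))  # (20, 30)
```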

Posted Content
TL;DR: This work introduces two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization of objects in images using image-level supervision only.
Abstract: We aim to localize objects in images using image-level supervision only. Previous approaches to this problem mainly focus on discriminative object regions and often fail to locate precise object boundaries. We address this problem by introducing two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization. The additive model encourages the predicted object region to be supported by its surrounding context region. The contrastive model encourages the predicted object region to be outstanding from its surrounding context region. Our approach benefits from the recent success of convolutional neural networks for object recognition and extends Fast R-CNN to weakly supervised object localization. Extensive experimental evaluation on the PASCAL VOC 2007 and 2012 benchmarks shows that our context-aware approach significantly improves weakly supervised localization and detection.

Journal ArticleDOI
TL;DR: SPOON enables Java developers to write a large range of domain-specific analyses and transformations in an easy and concise manner; developers do not need to dive into parsing, to hack a compiler infrastructure, or to master a new formalism.
Abstract: This paper presents SPOON, a library for the analysis and transformation of Java source code. SPOON enables Java developers to write a large range of domain-specific analyses and transformations in an easy and concise manner. SPOON analyses and transformations are written in plain Java. With SPOON, developers do not need to dive into parsing, to hack a compiler infrastructure, or to master a new formalism.

Proceedings ArticleDOI
27 Feb 2016
TL;DR: The Surrey Face Model is presented, a multi-resolution 3D Morphable Model made available to the public for non-commercial purposes, along with a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals.
Abstract: 3D Morphable Face Models are a powerful tool in computer vision. They consist of a PCA model of face shape and colour information and make it possible to reconstruct a 3D face from a single 2D image. 3D Morphable Face Models are used for 3D head pose estimation, face analysis, face recognition, and, more recently, facial landmark detection and tracking. However, they are not as widely used as 2D methods - the process of building and using a 3D model is much more involved. In this paper, we present the Surrey Face Model, a multi-resolution 3D Morphable Model that we make available to the public for non-commercial purposes. The model contains different mesh resolution levels and landmark point annotations as well as metadata for texture remapping. Accompanying the model is a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals. In addition to basic functionality, it contains pose estimation and face frontalisation algorithms. With the tools presented in this paper, we aim to close two gaps. First, by offering different model resolution levels and fast fitting functionality, we enable the use of a 3D Morphable Model in time-critical applications like tracking. Second, the software library makes it easy for the community to adopt the 3D Morphable Face Model in their research, and it offers a public place for collaboration.
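At its core, a 3D Morphable Model is a PCA model: a face shape is the mean shape plus a linear combination of principal components. The numpy sketch below shows just that reconstruction step, with random placeholder dimensions and basis; the Surrey model ships real ones, along with fitting code in its C++ library.

```python
# PCA shape model: shape = mean + basis @ coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_components = 3448, 63                # placeholder sizes
mean = rng.normal(size=3 * n_vertices)             # stacked (x, y, z) coordinates
basis = rng.normal(size=(3 * n_vertices, n_components))

coeffs = rng.normal(size=n_components) * 0.1       # fitted to an image in practice
shape = (mean + basis @ coeffs).reshape(-1, 3)     # one reconstructed 3D face
print("reconstructed vertices:", shape.shape)
```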

Book
31 Oct 2016
TL;DR: This survey presents an overview of the research on ProVerif, an automatic symbolic protocol verifier that automatically translates this protocol description into Horn clauses and determines whether the desired security properties hold by resolution on these clauses.
Abstract: ProVerif is an automatic symbolic protocol verifier. It supports a wide range of cryptographic primitives, defined by rewrite rules or by equations. It can prove various security properties: secrecy, authentication, and process equivalences, for an unbounded message space and an unbounded number of sessions. It takes as input a description of the protocol to verify in a dialect of the applied pi calculus, an extension of the pi calculus with cryptography. It automatically translates this protocol description into Horn clauses and determines whether the desired security properties hold by resolution on these clauses. This survey presents an overview of the research on ProVerif.

Proceedings ArticleDOI
14 Jun 2016
TL;DR: A large-scale, unbiased study of social clicks, gathering a month of web visits to online resources that are located in 5 leading news domains and that are mentioned in the third largest social media platform by web referral (Twitter).
Abstract: Online news domains increasingly rely on social media to drive traffic to their websites. Yet we know surprisingly little about how a social media conversation mentioning an online article actually generates clicks. Sharing behaviors, in contrast, have been fully or partially available and scrutinized over the years. While this has led to multiple assumptions on the diffusion of information, each assumption was designed or validated while ignoring actual clicks. We present a large-scale, unbiased study of social clicks, the first dataset of its kind, gathering a month of web visits to online resources that are located in 5 leading news domains and that are mentioned in the third largest social media platform by web referral (Twitter). Our dataset amounts to 2.8 million shares, together responsible for 75 billion potential views on this social media platform, and 9.6 million actual clicks to 59,088 unique resources. We design a reproducible methodology and carefully correct its biases. As we prove, properties of clicks impact multiple aspects of information diffusion, all previously unknown: (i) secondary resources, which are not promoted through headlines and are responsible for the long tail of content popularity, generate more clicks both in absolute and relative terms; (ii) social media attention is actually long-lived, in contrast with temporal evolution estimated from shares or receptions; (iii) the actual influence of an intermediary or a resource is poorly predicted by their share count, but we show how that prediction can be made more precise.

Journal ArticleDOI
TL;DR: Evidence is reviewed and discussed that points to a possible non-neuronal, glial candidate for such orchestration: the regulation of synaptic plasticity by astrocytes.

Journal ArticleDOI
TL;DR: When analyzed at the single-cell level, the differentiation process looks very different from its classical population-average view, and new observables can be computed whose behavior is fully compatible with the idea that differentiation is not a "simple" program that all cells execute identically but results from the dynamical behavior of the underlying molecular network.
Abstract: In some recent studies, a view emerged that the stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We have tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability, which was masked by averaging. We were able to show that the correlation network was a very dynamical entity and that a subgroup of genes tends to follow the predictions from the dynamical network biomarker (DNB) theory. In addition, we also identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. In order to assess quantitatively the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of the heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h, before decreasing to significantly lower values. Moreover, we observed that this point of maximum entropy precedes two key events: an irreversible commitment to differentiation between 24 and 48 h, followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single-cell level, the differentiation process looks very different from its classical population-average view. New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a "simple" program that all cells execute identically but results from the dynamical behavior of the underlying molecular network.
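The entropy observable used here is straightforward to reproduce on any set of single-cell measurements: histogram the expression values and take the Shannon entropy of the bin frequencies. A sketch on simulated data (distributions and bin edges are arbitrary illustrative choices); a population spread over many expression levels scores higher than a homogeneous one.

```python
# Shannon entropy of single-cell expression values as a heterogeneity measure.
import numpy as np

def shannon_entropy(values, bins):
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
bins = np.linspace(0, 10, 11)                      # shared bins for both samples
heterogeneous = rng.uniform(0, 10, size=200)       # high cell-to-cell variability
homogeneous = rng.normal(5, 0.3, size=200)         # tight, uniform-state population

print("entropy (heterogeneous):", round(shannon_entropy(heterogeneous, bins), 2))
print("entropy (homogeneous):  ", round(shannon_entropy(homogeneous, bins), 2))
```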

Book ChapterDOI
08 Oct 2016
TL;DR: Two types of context-aware guidance models, additive and contrastive, are introduced that leverage surrounding context regions to improve localization, significantly improving weakly supervised object localization and detection.
Abstract: We aim to localize objects in images using image-level supervision only. Previous approaches to this problem mainly focus on discriminative object regions and often fail to locate precise object boundaries. We address this problem by introducing two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization. The additive model encourages the predicted object region to be supported by its surrounding context region. The contrastive model encourages the predicted object region to be outstanding from its surrounding context region. Our approach benefits from the recent success of convolutional neural networks for object recognition and extends Fast R-CNN to weakly supervised object localization. Extensive experimental evaluation on the PASCAL VOC 2007 and 2012 benchmarks shows that our context-aware approach significantly improves weakly supervised localization and detection.

Proceedings ArticleDOI
20 Jun 2016
TL;DR: ReCon leverages machine learning to reveal potential PII leaks by inspecting network traffic, and provides a visualization tool to empower users with the ability to control these leaks via blocking or substitution of PII.
Abstract: It is well known that apps running on mobile devices extensively track and leak users' personally identifiable information (PII) [1] and that users are generally unaware of and unable to stop them [2]; however, these users have little visibility into PII leaked through the network traffic generated by their devices, and have poor control over how, when and where that traffic is sent and handled by third parties. This demo presents the design, implementation, and evaluation of ReCon: a cross-platform (iOS, Android, Windows) system that reveals PII leaks and gives users control over them without requiring any special privileges or custom OSes. ReCon, which has been accepted as a paper to the Mobisys conference, leverages machine learning to reveal potential PII leaks by inspecting network traffic, and provides a visualization tool to empower users with the ability to control these leaks via blocking or substitution of PII. For this demo, we will show how ReCon detects PII in real time, allows users to provide feedback on detected PII, and provides users with controls for blocking or modifying PII leaks. The system is publicly available and we will also help any interested users install it on their mobile devices.

Journal ArticleDOI
01 Oct 2016
TL;DR: The underlying measurement principles of time-of-flight cameras are described, including pulsed-light cameras, which directly measure the time taken for a light pulse to travel from the device to the object and back again, and continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals and hence obtain the travel time indirectly.
Abstract: Time-of-flight (TOF) cameras are sensors that can measure the depths of scene points, by illuminating the scene with a controlled laser or LED source and then analyzing the reflected light. In this paper, we will first describe the underlying measurement principles of time-of-flight cameras, including: (1) pulsed-light cameras, which measure directly the time taken for a light pulse to travel from the device to the object and back again, and (2) continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals, and hence obtain the travel time indirectly. We review the main existing designs, including prototypes as well as commercially available devices. We also review the relevant camera calibration principles, and how they are applied to TOF devices. Finally, we discuss the benefits and challenges of combined TOF and color camera systems.
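The continuous-wave principle comes with a simple worked formula: a measured phase shift Δφ at modulation frequency f corresponds to depth d = c·Δφ/(4πf), where the 4π (rather than 2π) accounts for the round trip, and depths repeat every c/(2f). A quick numeric check, with illustrative values:

```python
# Continuous-wave time-of-flight: phase shift to depth.
import math

c = 299_792_458.0            # speed of light, m/s
f_mod = 20e6                 # 20 MHz modulation frequency
dphi = math.pi / 2           # measured phase difference, radians

depth = c * dphi / (4 * math.pi * f_mod)
ambiguity = c / (2 * f_mod)  # unambiguous range before depths wrap around
print(f"depth = {depth:.3f} m, unambiguous range = {ambiguity:.2f} m")  # ~1.87 m, ~7.49 m
```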