
Showing papers by "French Institute for Research in Computer Science and Automation" published in 2010


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work proposes a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation, and shows how to jointly optimize the dimension reduction and the indexing algorithm.
Abstract: We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.
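A minimal sketch of the kind of aggregation described above: accumulate, for each codebook centroid, the residuals of the local descriptors assigned to it, then L2-normalize the concatenation. The k-means codebook is assumed to be computed beforehand, and the paper's exact formulation and normalization may differ.

import numpy as np

def aggregate_descriptors(descriptors, centroids):
    """Aggregate local image descriptors into one fixed-size vector by
    accumulating, per centroid, the residuals of the descriptors assigned
    to it (VLAD-style sketch), then L2-normalizing the concatenation."""
    k, d = centroids.shape
    agg = np.zeros((k, d))
    # nearest-centroid assignment
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = dists.argmin(axis=1)
    for i, c in enumerate(assign):
        agg[c] += descriptors[i] - centroids[c]
    v = agg.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# toy usage: 100 local descriptors of dimension 64, codebook of size 16
rng = np.random.default_rng(0)
desc = rng.standard_normal((100, 64))
cent = rng.standard_normal((16, 64))
print(aggregate_descriptors(desc, cent).shape)  # (1024,)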

2,782 citations


Proceedings Article
21 Jun 2010
TL;DR: It is shown that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted.
Abstract: Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.
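For concreteness, the two pooling operators analyzed in the paper, sketched over a pool of local feature responses; the toy run also hints at the confounding factor mentioned above, since the statistics of the max depend on how many samples fall in the pool.

import numpy as np

def average_pool(responses):
    """Average pooling over the feature responses falling in one spatial pool."""
    return responses.mean(axis=0)

def max_pool(responses):
    """Max pooling over the same pool."""
    return responses.max(axis=0)

# toy pools of different cardinality: the mean is stable, but the expected
# max grows with the number of samples in the pool
rng = np.random.default_rng(0)
for n in (10, 1000):
    pool = rng.random((n, 50))  # n local responses to 50 codewords
    print(n, average_pool(pool).mean().round(3), max_pool(pool).mean().round(3))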

1,239 citations


Book ChapterDOI
01 Jun 2010
TL;DR: Encryption-decryption is the most ancient cryptographic activity, but its nature has deeply changed with the invention of computers, because cryptanalysis (the activity of the third party, the eavesdropper, who aims at recovering the message) can now exploit their computing power.
Abstract: A fundamental objective of cryptography is to enable two persons to communicate over an insecure channel (a public channel such as the internet) in such a way that any other person is unable to recover their message (called the plaintext) from what is sent in its place over the channel (the ciphertext). The transformation of the plaintext into the ciphertext is called encryption, or enciphering. Encryption-decryption is the most ancient cryptographic activity (ciphers already existed four centuries B.C.), but its nature has deeply changed with the invention of computers, because cryptanalysis (the activity of the third party, the eavesdropper, who aims at recovering the message) can now exploit their computing power. The encryption algorithm takes as input the plaintext and an encryption key K_E, and it outputs the ciphertext. If the encryption key is secret, then we speak of conventional cryptography, of private key cryptography, or of symmetric cryptography. In practice, the principle of conventional cryptography relies on the sharing of a private key between the sender of a message (often called Alice in cryptography) and its receiver (often called Bob). If, on the contrary, the encryption key is public, then we speak of public key cryptography. Public key cryptography appeared in the literature in the late 1970s.

943 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages, showing that it represents the state of the art for forward computations.
Abstract: Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or with a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified into three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others. As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
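The two accuracy measures named above are standard in forward-model validation; a sketch under their usual definitions (the norm of the difference of the unit-normalized topographies, and the ratio of norms), which may differ in detail from the conventions used in the paper.

import numpy as np

def rdm(g_num, g_ana):
    """Relative Difference Measure between a numerical and an analytical
    forward solution (sensor topographies for one source): the norm of the
    difference of the two unit-normalized vectors (0 is best, 2 is worst)."""
    a = np.asarray(g_num, float); b = np.asarray(g_ana, float)
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

def mag(g_num, g_ana):
    """Magnitude ratio between the numerical and analytical solutions
    (1 is best)."""
    return np.linalg.norm(g_num) / np.linalg.norm(g_ana)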

914 citations


Journal ArticleDOI
TL;DR: A more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC) is derived and this approach is shown to outperform the state-of-the-art on the three datasets.
Abstract: This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets.
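An illustrative sketch of the Hamming-embedding matching rule described above: two local features vote for an image only when they are quantized to the same visual word and their binary signatures are within a Hamming distance threshold. The data layout and threshold value here are assumptions, not the paper's settings, and the inverted-file machinery is omitted.

import numpy as np

def he_score(query_feats, db_feats, ht=24):
    """query_feats / db_feats: lists of (visual_word, binary_signature) pairs,
    with signatures stored as 0/1 numpy arrays.  A pair of features votes only
    if the visual words agree and the Hamming distance is at most `ht`."""
    score = 0
    for w_q, s_q in query_feats:
        for w_d, s_d in db_feats:
            if w_q == w_d and int(np.count_nonzero(s_q != s_d)) <= ht:
                score += 1
    return score

# toy usage with random 64-bit signatures
rng = np.random.default_rng(0)
sig = lambda: rng.integers(0, 2, 64)
query = [(3, sig()), (7, sig())]
database = [(3, sig()), (9, sig())]
print(he_score(query, database))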

795 citations


Journal ArticleDOI
TL;DR: DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, is proposed, together with DETEX, a detection technique that instantiates this method, and an empirical validation of DETEX in terms of precision and recall.
Abstract: Code and design smells are poor solutions to recurring implementation and design problems. They may hinder the evolution of a system by making it hard for software engineers to carry out changes. We propose three contributions to the research field related to code and design smells: (1) DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, (2) DETEX, a detection technique that instantiates this method, and (3) an empirical validation in terms of precision and recall of DETEX. The originality of DETEX stems from the ability for software engineers to specify smells at a high level of abstraction using a consistent vocabulary and domain-specific language for automatically generating detection algorithms. Using DETEX, we specify four well-known design smells: the antipatterns Blob, Functional Decomposition, Spaghetti Code, and Swiss Army Knife, and their 15 underlying code smells, and we automatically generate their detection algorithms. We apply and validate the detection algorithms in terms of precision and recall on XERCES v2.7.0, and discuss the precision of these algorithms on 11 open-source systems.
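DETEX generates detection algorithms from high-level smell specifications; the snippet below is only a hypothetical illustration of what such a rule amounts to once generated (a Blob-like check combining class size, cohesion and coupling to data classes), with made-up metric names and thresholds rather than DETEX's actual vocabulary.

def looks_like_blob(cls, size_threshold=30, lcom_threshold=0.8):
    """Hypothetical rule in the spirit of a declarative smell specification:
    flag a class with many methods and attributes, low cohesion (LCOM), and
    associations to data classes.  Metric names and thresholds are
    illustrative, not those of DETEX."""
    large = cls["n_methods"] + cls["n_attributes"] >= size_threshold
    low_cohesion = cls["lcom"] >= lcom_threshold
    uses_data_classes = cls["associated_data_classes"] >= 1
    return large and low_cohesion and uses_data_classes

print(looks_like_blob({"n_methods": 40, "n_attributes": 25, "lcom": 0.9,
                       "associated_data_classes": 3}))  # True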

710 citations


Journal ArticleDOI
TL;DR: A novel method for accurate spectral-spatial classification of hyperspectral images by means of a Markov random field regularization is presented, which improves classification accuracies when compared to other classification approaches.
Abstract: The high number of spectral bands acquired by hyperspectral sensors increases the capability to distinguish physical materials and objects, presenting new challenges to image analysis and classification. This letter presents a novel method for accurate spectral-spatial classification of hyperspectral images. The proposed technique consists of two steps. In the first step, a probabilistic support vector machine pixelwise classification of the hyperspectral image is applied. In the second step, spatial contextual information is used for refining the classification results obtained in the first step. This is achieved by means of a Markov random field regularization. Experimental results are presented for three hyperspectral airborne images and compared with those obtained by recently proposed advanced spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
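A sketch of the second, spatial-regularization step, using a simple iterated-conditional-modes pass over the per-pixel class probabilities produced by the first step; the Potts-style energy and the optimizer below are stand-ins and do not reproduce the paper's exact MRF formulation.

import numpy as np

def mrf_regularize(prob, beta=1.5, n_iter=5):
    """prob: H x W x C array of per-pixel class probabilities (e.g. from a
    probabilistic SVM).  Each pixel is repeatedly relabelled to maximize its
    log-probability plus a bonus `beta` for each 4-neighbour with the same
    label (a crude iterated-conditional-modes scheme)."""
    H, W, C = prob.shape
    labels = prob.argmax(axis=2)
    logp = np.log(prob + 1e-12)
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                scores = logp[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        scores[labels[ni, nj]] += beta
                labels[i, j] = scores.argmax()
    return labels

# toy usage on a fake 32 x 32 probability map with 4 classes
prob = np.random.default_rng(0).dirichlet(np.ones(4), size=(32, 32))
labels = mrf_regularize(prob)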

697 citations


Journal ArticleDOI
TL;DR: The OpenViBE software platform, which enables researchers to design, test, and use brain-computer interfaces (BCIs), is described, and its suitability for the design of VR applications controlled with a BCI is shown.
Abstract: This paper describes the OpenViBE software platform which enables researchers to design, test, and use brain-computer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest among the virtual reality (VR) community since they have appeared as promising interaction devices for virtual environments (VEs). The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made available to non-programmers thanks to visual programming, and (4) various tools offered to the different types of users. The platform features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications together with the evaluation of the platform computational performances showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is free software distributed under an open-source license.

687 citations


Journal ArticleDOI
TL;DR: A method was developed that predicts coronary flow and pressure in three-dimensional epicardial coronary arteries by considering models of the heart and the arterial system and the interactions between the two models.
Abstract: Coronary flow is different from the flow in other parts of the arterial system because it is influenced by the contraction and relaxation of the heart. To model coronary flow realistically, the compressive force of the heart acting on the coronary vessels needs to be included. In this study, we developed a method that predicts coronary flow and pressure of three-dimensional epicardial coronary arteries by considering models of the heart and arterial system and the interactions between the two models. For each coronary outlet, a lumped parameter coronary vascular bed model was assigned to represent the impedance of the downstream coronary vascular networks absent in the computational domain. The intramyocardial pressure was represented with either the left or right ventricular pressure depending on the location of the coronary arteries. The left and right ventricular pressure were solved from the lumped parameter heart models coupled to a closed loop system comprising a three-dimensional model of the aorta, three-element Windkessel models of the rest of the systemic circulation and the pulmonary circulation, and lumped parameter models for the left and right sides of the heart. The computed coronary flow and pressure and the aortic flow and pressure waveforms were realistic as compared to literature data.
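A toy sketch of the lumped-parameter idea: a three-element (RCR) Windkessel outlet integrated with forward Euler, with an optional externally imposed pressure as a crude stand-in for the intramyocardial pressure discussed above. Parameter values, units and the coupling to a 3D solver are illustrative assumptions, not the paper's model.

import numpy as np

def windkessel_pressure(q, dt, Rp=0.1, C=0.1, Rd=1.0, P_v=5.0, P_ext=None):
    """Integrate an RCR Windkessel with forward Euler:
        C dPc/dt = Q - (Pc - P_v) / Rd,   P_out = Pc + Rp * Q
    `q` is the outlet flow waveform, `P_ext` (optional) an external pressure
    added to the outlet pressure as a crude intramyocardial-compression
    stand-in.  All parameter values are illustrative."""
    Pc = P_v
    P = np.empty_like(q, dtype=float)
    for n, Qn in enumerate(q):
        ext = 0.0 if P_ext is None else P_ext[n]
        P[n] = Pc + Rp * Qn + ext
        Pc += dt * (Qn - (Pc - P_v) / Rd) / C
    return P

# toy usage: one second of a pulsatile flow waveform sampled every 1 ms
t = np.arange(0.0, 1.0, 0.001)
q = np.maximum(np.sin(2 * np.pi * t), 0.0)
p = windkessel_pressure(q, dt=0.001)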

522 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper combines existing tools for bottom-up image segmentation, such as normalized cuts, with kernel methods commonly used in object recognition, within a discriminative clustering framework; the resulting combinatorial optimization problem is relaxed to a continuous convex optimization problem that can be solved efficiently for up to dozens of images.
Abstract: Purely bottom-up, unsupervised segmentation of a single image into foreground and background regions remains a challenging task for computer vision. Co-segmentation is the problem of simultaneously dividing multiple images into regions (segments) corresponding to different object classes. In this paper, we combine existing tools for bottom-up image segmentation such as normalized cuts, with kernel methods commonly used in object recognition. These two sets of techniques are used within a discriminative clustering framework: the goal is to assign foreground/background labels jointly to all images, so that a supervised classifier trained with these labels leads to maximal separation of the two classes. In practice, we obtain a combinatorial optimization problem which is relaxed to a continuous convex optimization problem, that can itself be solved efficiently for up to dozens of images. We illustrate the proposed method on images with very similar foreground objects, as well as on more challenging problems with objects with higher intra-class variations.

504 citations


Book ChapterDOI
01 Nov 2010
TL;DR: The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach [31,8,35,22,21,5] that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas as discussed by the authors.
Abstract: Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties with thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach [31,8,35,22,21,5] that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solve the model checking problem is to simulate the system for finitely many executions, and use hypothesis testing to infer whether the samples provide statistical evidence for the satisfaction or violation of the specification. In this tutorial, we survey the statistical approach, and outline its main advantages in terms of efficiency, uniformity, and simplicity.
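As one concrete instance of the statistical approach surveyed here, a sketch of Wald's sequential probability ratio test driven by Monte Carlo runs of the system; actual statistical model checkers differ in how they frame the hypotheses (indifference regions, Bayesian variants, fixed-size sampling plans).

import math, random

def sprt(simulate, p0, p1, alpha=0.01, beta=0.01, max_runs=100000):
    """Sequential probability ratio test.  `simulate()` runs the system once
    and returns True if the sampled execution satisfies the (bounded)
    property.  Tests H0: p >= p0 against H1: p <= p1 (with p1 < p0) with
    error bounds alpha and beta; returns "accept" (H0), "reject" (H1) or
    "undecided"."""
    A = math.log((1 - beta) / alpha)
    B = math.log(beta / (1 - alpha))
    llr = 0.0
    for _ in range(max_runs):
        x = 1 if simulate() else 0
        # log-likelihood ratio of H1 versus H0 for one Bernoulli observation
        llr += math.log(p1 if x else 1 - p1) - math.log(p0 if x else 1 - p0)
        if llr >= A:
            return "reject"   # evidence that the probability is below p1
        if llr <= B:
            return "accept"   # evidence that the probability is above p0
    return "undecided"

# toy usage: the "system" satisfies the property with probability 0.8
print(sprt(lambda: random.random() < 0.8, p0=0.75, p1=0.65))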

Proceedings ArticleDOI
07 Jul 2010
TL;DR: Results of the BBOB-2009 benchmarking of 31 search algorithms on 24 noiseless functions in a black-box optimization scenario in the continuous domain are presented; the choice of the best algorithm depends remarkably on the available budget of function evaluations.
Abstract: This paper presents results of the BBOB-2009 benchmarking of 31 search algorithms on 24 noiseless functions in a black-box optimization scenario in continuous domain. The runtime of the algorithms, measured in number of function evaluations, is investigated and a connection between a single convergence graph and the runtime distribution is uncovered. Performance is investigated for different dimensions up to 40-D, for different target precision values, and in different subgroups of functions. Searching in larger dimension and multi-modal functions appears to be more difficult. The choice of the best algorithm also depends remarkably on the available budget of function evaluations.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work considers a scenario where keywords are associated with the training images, e.g. as found on photo sharing websites, and learns a strong Multiple Kernel Learning (MKL) classifier using both the image content and keywords, and uses it to score unlabeled images.
Abstract: In image categorization the goal is to decide if an image belongs to a certain category or not. A binary classifier can be learned from manually labeled images; while using more labeled examples improves performance, obtaining the image labels is a time consuming process. We are interested in how other sources of information can aid the learning process given a fixed amount of labeled images. In particular, we consider a scenario where keywords are associated with the training images, e.g. as found on photo sharing websites. The goal is to learn a classifier for images alone, but we will use the keywords associated with labeled and unlabeled images to improve the classifier using semi-supervised learning. We first learn a strong Multiple Kernel Learning (MKL) classifier using both the image content and keywords, and use it to score unlabeled images. We then learn classifiers on visual features only, either support vector machines (SVM) or least-squares regression (LSR), from the MKL output values on both the labeled and unlabeled images. In our experiments on 20 classes from the PASCAL VOC'07 set and 38 from the MIR Flickr set, we demonstrate the benefit of our semi-supervised approach over only using the labeled images. We also present results for a scenario where we do not use any manual labeling but directly learn classifiers from the image tags. The semi-supervised approach also improves classification accuracy in this case.
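A rough sketch of the two-stage procedure with scikit-learn components as stand-ins: the paper trains a Multiple Kernel Learning classifier rather than the plain SVM used here, and the data, feature dimensions and models below are illustrative only.

import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_lab, n_unlab, d_vis, d_tag = 50, 200, 128, 300
X_vis_lab = rng.random((n_lab, d_vis)); X_tag_lab = rng.random((n_lab, d_tag))
y_lab = rng.integers(0, 2, n_lab)
X_vis_unlab = rng.random((n_unlab, d_vis)); X_tag_unlab = rng.random((n_unlab, d_tag))

# stage 1: classifier on visual + tag features, trained on the labeled images
clf = SVC(probability=True).fit(np.hstack([X_vis_lab, X_tag_lab]), y_lab)
scores_unlab = clf.predict_proba(np.hstack([X_vis_unlab, X_tag_unlab]))[:, 1]

# stage 2: visual-only least-squares regressor fitted to the labels of the
# labeled images and to the soft scores of the unlabeled ones, so the final
# model needs no tags at test time
X_all = np.vstack([X_vis_lab, X_vis_unlab])
t_all = np.concatenate([y_lab.astype(float), scores_unlab])
visual_only = Ridge(alpha=1.0).fit(X_all, t_all)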

Journal ArticleDOI
TL;DR: Mayavi as discussed by the authors is an open-source, general-purpose, 3D scientific visualization package that provides easy and interactive tools for data visualization that fit with the scientific user's workflow.
Abstract: Mayavi is an open-source, general-purpose, 3D scientific visualization package. It seeks to provide easy and interactive tools for data visualization that fit with the scientific user's workflow. For this purpose, Mayavi provides several entry points: a full-blown interactive application; a Python library with both a MATLAB-like interface focused on easy scripting and a feature-rich object hierarchy; widgets associated with these objects for assembling in a domain-specific application, and plugins that work with a general purpose application-building framework. In this article, we present an overview of the various features of Mayavi, we then provide insight on the design and engineering decisions made in implementing Mayavi, and finally discuss a few novel applications.
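A small example of the MATLAB-like scripting entry point mentioned above; in releases contemporary with the article the import path was enthought.mayavi.mlab rather than mayavi.mlab.

import numpy as np
from mayavi import mlab

# interactive 3D surface plot of a toy function
x, y = np.mgrid[-3:3:100j, -3:3:100j]
z = np.sin(x * y) * np.exp(-(x ** 2 + y ** 2) / 4)

mlab.surf(x, y, z, warp_scale="auto")
mlab.colorbar(title="amplitude")
mlab.show()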

Book ChapterDOI
15 Aug 2010
TL;DR: In this paper, the authors report on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discuss some implications for RSA.
Abstract: This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.

Journal ArticleDOI
TL;DR: A model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation is presented to make visual representations more visually scalable and less cluttered.
Abstract: We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.
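An illustrative example of the basic idea (not the paper's model): aggregate the items of a scatterplot into a quadtree-like grid hierarchy, so that at coarser levels each non-empty cell is drawn as a single aggregate (centroid plus count) instead of thousands of overplotted points.

import numpy as np

def grid_aggregate(points, level):
    """Aggregate 2D points in the unit square into 2^level x 2^level cells;
    each non-empty cell is summarized by its centroid and item count.
    Varying `level` gives the multiscale sequence of representations."""
    n = 2 ** level
    cells = {}
    for p in points:
        key = (min(int(p[0] * n), n - 1), min(int(p[1] * n), n - 1))
        cells.setdefault(key, []).append(p)
    return {k: (np.mean(v, axis=0), len(v)) for k, v in cells.items()}

pts = np.random.default_rng(2).random((10000, 2))
print(len(grid_aggregate(pts, 1)), len(grid_aggregate(pts, 4)))  # 4 and up to 256 aggregates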

Journal ArticleDOI
TL;DR: In this article, the contribution of each source to all mixture channels in the time-frequency domain was modeled as a zero-mean Gaussian random variable whose covariance encodes the spatial characteristics of the source.
Abstract: This paper addresses the modeling of reverberant recording environments in the context of under-determined convolutive blind source separation. We model the contribution of each source to all mixture channels in the time-frequency domain as a zero-mean Gaussian random variable whose covariance encodes the spatial characteristics of the source. We then consider four specific covariance models, including a full-rank unconstrained model. We derive a family of iterative expectation-maximization (EM) algorithms to estimate the parameters of each model and propose suitable procedures adapted from the state-of-the-art to initialize the parameters and to align the order of the estimated sources across all frequency bins. Experimental results over reverberant synthetic mixtures and live recordings of speech data show the effectiveness of the proposed approach.
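In standard notation, the local Gaussian model described above can be sketched as follows (the exact parameterization of the four covariance models and the M-step updates are not reproduced here). Each source image c_j at time-frequency point (n, f) is zero-mean Gaussian with a scalar variance times a spatial covariance matrix, the mixture is their sum, and the sources are recovered by Wiener filtering within EM:

$$\mathbf{c}_j(n,f) \sim \mathcal{N}\!\bigl(\mathbf{0},\, v_j(n,f)\,\mathbf{R}_j(f)\bigr), \qquad \mathbf{x}(n,f) = \sum_{j} \mathbf{c}_j(n,f),$$

$$\hat{\mathbf{c}}_j(n,f) = v_j(n,f)\,\mathbf{R}_j(f)\,\Bigl(\sum_{k} v_k(n,f)\,\mathbf{R}_k(f)\Bigr)^{-1}\mathbf{x}(n,f),$$

where the full-rank unconstrained model places no structural constraint on R_j(f).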

Proceedings ArticleDOI
01 May 2010
TL;DR: This paper first extends transition systems with features in order to describe the combined behaviour of an entire system family, and then defines and implements a model checking technique that makes it possible to verify such transition systems against temporal properties.
Abstract: In product line engineering, systems are developed in families and differences between family members are expressed in terms of features. Formal modelling and verification is an important issue in this context as more and more critical systems are developed this way. Since the number of systems in a family can be exponential in the number of features, two major challenges are the scalable modelling and the efficient verification of system behaviour. Currently, the few attempts to address them fail to recognise the importance of features as a unit of difference, or do not offer means for automated verification. In this paper, we tackle those challenges at a fundamental level. We first extend transition systems with features in order to describe the combined behaviour of an entire system family. We then define and implement a model checking technique that makes it possible to verify such transition systems against temporal properties. An empirical evaluation shows substantial gains over classical approaches.
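To make the setting concrete, here is a toy sketch (not the paper's formalism or tool): transitions carry guards over features, and a reachability property is checked product by product by enumerating feature combinations; this is exactly the exponential enumeration that the family-based model checking described above is designed to avoid.

from itertools import product

FEATURES = ["FreeDrinks", "CancelPurchase"]

# (source state, guard over a feature assignment, target state)
TRANSITIONS = [
    ("idle", lambda f: True, "paid"),
    ("idle", lambda f: f["FreeDrinks"], "serving"),
    ("paid", lambda f: True, "serving"),
    ("paid", lambda f: f["CancelPurchase"], "idle"),
    ("serving", lambda f: True, "idle"),
    ("serving", lambda f: f["FreeDrinks"] and f["CancelPurchase"], "error"),
]

def reachable(config, start="idle"):
    """States reachable in the product obtained by fixing `config`."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for src, guard, dst in TRANSITIONS:
            if src == s and guard(config) and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

# brute-force, product-by-product check that "error" is unreachable
for values in product([False, True], repeat=len(FEATURES)):
    config = dict(zip(FEATURES, values))
    print(config, "safe" if "error" not in reachable(config) else "unsafe")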

Journal ArticleDOI
TL;DR: The Hex spherical polar Fourier protein docking algorithm has been implemented on Nvidia graphics processor units (GPUs) and for the first time, exhaustive FFT-based protein docking calculations may now be performed in a matter of seconds on a contemporary GPU.
Abstract: Motivation: Modelling protein–protein interactions (PPIs) is an increasingly important aspect of structural bioinformatics. However, predicting PPIs using in silico docking techniques is computationally very expensive. Developing very fast protein docking tools will be useful for studying large-scale PPI networks, and could contribute to the rational design of new drugs. Results: The Hex spherical polar Fourier protein docking algorithm has been implemented on Nvidia graphics processor units (GPUs). On a GTX 285 GPU, an exhaustive and densely sampled 6D docking search can be calculated in just 15 s using multiple 1D fast Fourier transforms (FFTs). This represents a 45-fold speed-up over the corresponding calculation on a single CPU, being at least two orders of magnitude faster than a similar CPU calculation using ZDOCK 3.0.1, and estimated to be at least three orders of magnitude faster than the GPU-accelerated version of PIPER on comparable hardware. Hence, for the first time, exhaustive FFT-based protein docking calculations may now be performed in a matter of seconds on a contemporary GPU. Three-dimensional Hex FFT correlations are also accelerated by the GPU, but the speed-up factor of only 2.5 is much less than that obtained with 1D FFTs. Thus, the Hex algorithm appears to be especially well suited to exploit GPUs compared to conventional 3D FFT docking approaches. Availability: http://hex.loria.fr/ and http://hexserver.loria.fr/ Contact: dave.ritchie@loria.fr Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: An object class detection approach is presented that fully integrates the complementary strengths offered by shape matchers; it can localize object boundaries accurately and does not need segmented examples for training (only bounding-boxes).
Abstract: We present an object class detection approach which fully integrates the complementary strengths offered by shape matchers. Like an object detector, it can learn class models directly from images, and can localize novel instances in the presence of intra-class variations, clutter, and scale changes. Like a shape matcher, it finds the boundaries of objects, rather than just their bounding-boxes. This is achieved by a novel technique for learning a shape model of an object class given images of example instances. Furthermore, we also integrate Hough-style voting with a non-rigid point matching algorithm to localize the model in cluttered images. As demonstrated by an extensive evaluation, our method can localize object boundaries accurately and does not need segmented examples for training (only bounding-boxes).

Journal ArticleDOI
TL;DR: This paper compares several families of space hashing functions in a real setup and reveals that an unstructured quantizer significantly improves the accuracy of LSH, as it closely fits the data in the feature space.

Proceedings ArticleDOI
04 Feb 2010
TL;DR: It is believed that corroboration can serve in a wide range of applications such as source selection in the semantic Web, data quality assessment or semantic annotation cleaning in social networks, and this work sets the bases for a wide range of techniques for solving these more complex problems.
Abstract: We consider a set of views stating possibly conflicting facts. Negative facts in the views may come, e.g., from functional dependencies in the underlying database schema. We want to predict the truth values of the facts. Beyond simple methods such as voting (typically rather accurate), we explore techniques based on "corroboration", i.e., taking into account trust in the views. We introduce three fixpoint algorithms corresponding to different levels of complexity of an underlying probabilistic model. They all estimate both truth values of facts and trust in the views. We present experimental studies on synthetic and real-world data. This analysis illustrates how and in which context these methods improve corroboration results over baseline methods. We believe that corroboration can serve in a wide range of applications such as source selection in the semantic Web, data quality assessment or semantic annotation cleaning in social networks. This work sets the bases for a wide range of techniques for solving these more complex problems.
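A generic illustration of the corroboration idea: alternately re-estimate fact truth from trust-weighted votes and view trust from agreement with the current truth estimates, until a fixpoint. This simplified scheme is in the same spirit as, but is not one of, the paper's three algorithms.

def corroborate(views, n_iter=20):
    """`views` maps a view name to a dict of {fact: True/False} statements.
    Returns estimated probabilities that each fact is true and an estimated
    trust score per view."""
    facts = {f for stmts in views.values() for f in stmts}
    trust = {v: 0.8 for v in views}      # initial trust in every view
    belief = {f: 0.5 for f in facts}     # initial probability of truth
    for _ in range(n_iter):
        # fact beliefs as trust-weighted votes of the views stating them
        for f in facts:
            num = den = 0.0
            for v, stmts in views.items():
                if f in stmts:
                    num += trust[v] * (1.0 if stmts[f] else 0.0)
                    den += trust[v]
            belief[f] = num / den if den else 0.5
        # view trust as average agreement with the current beliefs
        for v, stmts in views.items():
            agree = [belief[f] if val else 1.0 - belief[f]
                     for f, val in stmts.items()]
            trust[v] = sum(agree) / len(agree) if agree else 0.5
    return belief, trust

views = {"v1": {"a": True, "b": True},
         "v2": {"a": True, "b": False},
         "v3": {"b": False}}
print(corroborate(views))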

Journal ArticleDOI
TL;DR: Two stability results for Lipschitz functions on triangulable, compact metric spaces are proved and applications of both to problems in systems biology are considered.
Abstract: We prove two stability results for Lipschitz functions on triangulable, compact metric spaces and consider applications of both to problems in systems biology. Given two functions, the first result is formulated in terms of the Wasserstein distance between their persistence diagrams and the second in terms of their total persistence.

Journal ArticleDOI
TL;DR: A new method for the estimation of multiple concurrent pitches in piano recordings is presented, which addresses the issue of overlapping overtones by modeling the spectral envelope of the overtones of each note with a smooth autoregressive model.
Abstract: A new method for the estimation of multiple concurrent pitches in piano recordings is presented. It addresses the issue of overlapping overtones by modeling the spectral envelope of the overtones of each note with a smooth autoregressive model. For the background noise, a moving-average model is used and the combination of both tends to eliminate harmonic and sub-harmonic erroneous pitch estimations. This leads to a complete generative spectral model for simultaneous piano notes, which also explicitly includes the typical deviation from exact harmonicity in a piano overtone series. The pitch set that maximizes an approximate likelihood is selected from among a restricted number of possible pitch combinations. Tests have been conducted on a large homemade database called MAPS, composed of piano recordings from a real upright piano and from high-quality samples.
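The "typical deviation from exact harmonicity" mentioned above is commonly modeled with the stiff-string inharmonicity law; whether the paper uses exactly this parameterization is not stated here, so take it as the standard reference form:

$$f_k = k\, f_0 \sqrt{1 + B k^2}, \qquad k = 1, 2, \ldots$$

where f_0 is the fundamental frequency of the note and B >= 0 is the inharmonicity coefficient of the string, so that higher partials are progressively sharpened.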

Journal ArticleDOI
26 Jul 2010
TL;DR: A novel vision-based approach to collision avoidance between walkers that fits the requirements of interactive crowd simulation is explored, and several examples of the simulation results show that the emergence of self-organized patterns of walkers is reinforced using this approach.
Abstract: In the everyday exercise of controlling their locomotion, humans rely on their optic flow of the perceived environment to achieve collision-free navigation. In crowds, in spite of the complexity of the environment made of numerous obstacles, humans demonstrate remarkable capacities in avoiding collisions. Cognitive science work on human locomotion states that relatively succinct information is extracted from the optic flow to achieve safe locomotion. In this paper, we explore a novel vision-based approach of collision avoidance between walkers that fits the requirements of interactive crowd simulation. By simulating humans based on cognitive science results, we detect future collisions as well as the level of danger from visual stimuli. The motor response is twofold: a reorientation strategy prevents future collisions, whereas a deceleration strategy prevents imminent collisions. Several examples of our simulation results show that the emergence of self-organized patterns of walkers is reinforced using our approach. The emergent phenomena are visually appealing. More importantly, they improve the overall efficiency of the walkers' traffic and avoid improbable locking situations.

Journal ArticleDOI
TL;DR: The Spherical Demons algorithm as mentioned in this paper can also be modified to register a given spherical image to a probabilistic atlas by warping the atlas or warping a subject.
Abstract: We present the Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizors for the modified Demons objective function can be efficiently approximated on the sphere using iterative smoothing. Based on one-parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast. The Spherical Demons algorithm can also be modified to register a given spherical image to a probabilistic atlas. We demonstrate two variants of the algorithm corresponding to warping the atlas or warping the subject. Registration of a cortical surface mesh to an atlas mesh, both with more than 160k nodes, requires less than 5 min when warping the atlas and less than 3 min when warping the subject on a Xeon 3.2 GHz single processor machine. This is comparable to the fastest nondiffeomorphic landmark-free surface registration algorithms. Furthermore, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different applications that use registration to transfer segmentation labels onto a new image: (1) parcellation of in vivo cortical surfaces and (2) Brodmann area localization in ex vivo cortical surfaces.

Journal ArticleDOI
TL;DR: A nonparametric regression method for denoising 3-D image sequences acquired via fluorescence microscopy and an original statistical patch-based framework for noise reduction and preservation of space-time discontinuities are presented.
Abstract: We present a nonparametric regression method for denoising 3-D image sequences acquired via fluorescence microscopy. The proposed method exploits the redundancy of the 3-D+time information to improve the signal-to-noise ratio of images corrupted by Poisson-Gaussian noise. A variance stabilization transform is first applied to the image-data to remove the dependence between the mean and variance of intensity values. This preprocessing requires the knowledge of parameters related to the acquisition system, also estimated in our approach. In a second step, we propose an original statistical patch-based framework for noise reduction and preservation of space-time discontinuities. In our study, discontinuities are related to small moving spots with high velocity observed in fluorescence video-microscopy. The idea is to minimize an objective nonlocal energy functional involving spatio-temporal image patches. The minimizer has a simple form and is defined as the weighted average of input data taken in spatially-varying neighborhoods. The size of each neighborhood is optimized to improve the performance of the pointwise estimator. The performance of the algorithm (which requires no motion estimation) is then evaluated on both synthetic and real image sequences using qualitative and quantitative criteria.
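A 2D, single-frame sketch of the two ingredients described above: a variance-stabilizing transform followed by patch-based weighted averaging. The plain Anscombe root below handles only the Poisson part (the paper uses a stabilization suited to mixed Poisson-Gaussian noise, with acquisition parameters estimated from the data), and the fixed-size 2D patches stand in for the paper's spatio-temporal patches with locally optimized neighbourhood sizes.

import numpy as np

def anscombe(z):
    """Variance-stabilizing transform for Poisson noise (stand-in for the
    mixed Poisson-Gaussian stabilization used in the paper)."""
    return 2.0 * np.sqrt(np.maximum(z + 3.0 / 8.0, 0.0))

def patch_denoise(img, patch=3, search=5, h=2.0):
    """Replace each pixel by a weighted mean of the pixels in a search window
    whose surrounding patches are similar (non-local-means-like averaging)."""
    pad = patch // 2
    padded = np.pad(img, pad + search, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    w = np.exp(-((ref - cand) ** 2).sum() / (h ** 2))
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out

# toy usage on a small Poisson-noisy frame
noisy = np.random.default_rng(0).poisson(20, size=(32, 32)).astype(float)
denoised = patch_denoise(anscombe(noisy))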

Book ChapterDOI
30 May 2010
TL;DR: This work revisits lattice enumeration algorithms and shows that surprising exponential speedups can be achieved both in theory and in practice by using a new technique, which is called extreme pruning.
Abstract: Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and are often used in public-key cryptanalysis either as standalone algorithms, or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achieved both in theory and in practice by using a new technique, which we call extreme pruning. We also provide what is arguably the first sound analysis of pruning, which was introduced in the 1990s by Schnorr et al.

Journal ArticleDOI
TL;DR: An approach for prescribing lumped parameter outflow boundary conditions that accommodate transient phenomena is presented and applied to compute haemodynamic quantities in different physiologically relevant cardiovascular models to study non-periodic flow phenomena often observed in normal subjects and in patients with acquired or congenital cardiovascular disease.
Abstract: The simulation of blood flow and pressure in arteries requires outflow boundary conditions that incorporate models of downstream domains. We previously described a coupled multidomain method to couple analytical models of the downstream domains with 3D numerical models of the upstream vasculature. This prior work either included pure resistance boundary conditions or impedance boundary conditions based on assumed periodicity of the solution. However, flow and pressure in arteries are not necessarily periodic in time due to heart rate variability, respiration, complex transitional flow or acute physiological changes. We present herein an approach for prescribing lumped parameter outflow boundary conditions that accommodate transient phenomena. We have applied this method to compute haemodynamic quantities in different physiologically relevant cardiovascular models, including patient-specific examples, to study non-periodic flow phenomena often observed in normal subjects and in patients with acquired or congenital cardiovascular disease. The relevance of using boundary conditions that accommodate transient phenomena compared with boundary conditions that assume periodicity of the solution is discussed.

Journal ArticleDOI
TL;DR: An NMF-like algorithm is derived that performs similarly to supervised NMF using pre-trained piano spectra but improves pitch estimation performance by 6% to 10% compared to alternative unsupervised NMF algorithms.
Abstract: Multiple pitch estimation consists of estimating the fundamental frequencies and saliences of pitched sounds over short time frames of an audio signal. This task forms the basis of several applications in the particular context of musical audio. One approach is to decompose the short-term magnitude spectrum of the signal into a sum of basis spectra representing individual pitches scaled by time-varying amplitudes, using algorithms such as nonnegative matrix factorization (NMF). Prior training of the basis spectra is often infeasible due to the wide range of possible musical instruments. Appropriate spectra must then be adaptively estimated from the data, which may result in limited performance due to overfitting issues. In this paper, we model each basis spectrum as a weighted sum of narrowband spectra representing a few adjacent harmonic partials, thus enforcing harmonicity and spectral smoothness while adapting the spectral envelope to each instrument. We derive an NMF-like algorithm to estimate the model parameters and evaluate it on a database of piano recordings, considering several choices for the narrowband spectra. The proposed algorithm performs similarly to supervised NMF using pre-trained piano spectra but improves pitch estimation performance by 6% to 10% compared to alternative unsupervised NMF algorithms.
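A rough sketch of the constrained decomposition described above, using heuristic multiplicative updates for a Euclidean cost: the dictionary is parameterized as E G, where the columns of E are fixed nonnegative narrowband spectra and G holds the nonnegative per-pitch weights that shape each basis spectrum. The paper's actual cost function, the harmonic structure of E, and the update rules may all differ from this sketch.

import numpy as np

def constrained_nmf(V, E, n_pitches, n_iter=200, eps=1e-9):
    """Approximate the magnitude spectrogram V (F x N) by (E G) H, where
    E (F x K) is a fixed nonnegative narrowband dictionary, G (K x P) the
    per-pitch atom weights and H (P x N) the time-varying activations.
    Heuristic multiplicative updates for the Euclidean cost."""
    F, N = V.shape
    K = E.shape[1]
    rng = np.random.default_rng(0)
    G = rng.random((K, n_pitches)) + eps
    H = rng.random((n_pitches, N)) + eps
    for _ in range(n_iter):
        W = E @ G                                       # current basis spectra
        H *= (W.T @ V) / (W.T @ W @ H + eps)            # update activations
        G *= (E.T @ V @ H.T) / (E.T @ E @ G @ (H @ H.T) + eps)  # update weights
    return G, H

# toy usage: 257 frequency bins, 88 pitches, 4 narrowband atoms per pitch
rng = np.random.default_rng(1)
E = rng.random((257, 88 * 4))
V = rng.random((257, 100))
G, H = constrained_nmf(V, E, n_pitches=88, n_iter=50)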