Author

Adrian Basarab

Bio: Adrian Basarab is an academic researcher from the University of Toulouse. The author has contributed to research in the topics of motion estimation and deconvolution, has an h-index of 26, and has co-authored 159 publications receiving 2,125 citations. Previous affiliations of Adrian Basarab include Paul Sabatier University and the Centre national de la recherche scientifique.


Papers
Journal ArticleDOI
TL;DR: This work reviews and brings together recent work on automatic stress detection, surveying the measurements taken across the three main modalities, namely psychological, physiological, and behavioural, in order to identify the most appropriate techniques and thereby facilitate the development of a holistic detection system.

329 citations

Journal ArticleDOI
TL;DR: An unobtrusive and transparent AD detection system should be multimodal, so that it can take full advantage of all kinds of symptoms, detect even the smallest changes, combine them, and thereby detect AD as early as possible.

140 citations

Journal ArticleDOI
TL;DR: In the case of non-Gaussian priors, it is shown how the analytical solution derived for the Gaussian case can be embedded into traditional splitting frameworks, allowing the computational cost of existing algorithms to be decreased significantly.
Abstract: This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an $\ell _{2}$ -regularized quadratic model, i.e., an $\ell _{2}$ – $\ell _{2}$ optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
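As a rough illustration of the closed-form idea (not the paper's full method, which also folds the decimation operator into the frequency-domain solution), the following NumPy sketch solves the blur-only l2-l2 problem analytically with FFTs. The box kernel, image size, and regularization weight are arbitrary choices for the toy example.

```python
import numpy as np

def l2_deconv_fft(y, h, lam=1e-2):
    """Closed-form solution of min_x ||y - h*x||^2 + lam*||x||^2
    under circular convolution: X = conj(H) * Y / (|H|^2 + lam)."""
    H = np.fft.fft2(h, s=y.shape)        # transfer function of the blur
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# toy example: circularly blur a random image, then restore it
rng = np.random.default_rng(0)
x = rng.random((32, 32))
h = np.zeros((32, 32))
h[:3, :3] = 1.0 / 9.0                    # 3x3 box blur, origin convention
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))
x_hat = l2_deconv_fft(y, h, lam=1e-4)
```

The point of the analytical route is that this solve costs only a few FFTs per image, whereas a first-order gradient method would iterate many such operations.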

104 citations

Journal ArticleDOI
TL;DR: The goal of this paper is to evaluate the possibility of using unobtrusively collected activity-aware smart home behavior data to detect the multimodal symptoms that are often found to be impaired in AD.
Abstract: As members of an increasingly aging society, one of our major priorities is to develop tools to detect the earliest stage of age-related disorders such as Alzheimer's Disease (AD). The goal of this paper is to evaluate the possibility of using unobtrusively collected activity-aware smart home behavior data to detect the multimodal symptoms that are often found to be impaired in AD. After gathering longitudinal smart home data for 29 older adults over an average duration of more than two years, we automatically labeled the data with corresponding activity classes and extracted time-series statistics containing ten behavioral features. Mobility, cognition, and mood were evaluated every six months. Using these data, we created regression models to predict symptoms as measured by the tests, and a feature selection analysis was performed. Classification models were built to detect reliable absolute changes in the scores predicting symptoms, and the SmoteBOOST and wRACOG algorithms were used to overcome class imbalance where needed. Results show that all mobility, cognition, and depression symptoms can be predicted from activity-aware smart home data. Similarly, these data can be effectively used to predict reliable changes in mobility and memory skills. Results also suggest that not all behavioral features contribute equally to the prediction of every symptom. Future work therefore can improve model sensitivity by including additional longitudinal data and by further improving strategies to extract relevant features and address class imbalance. The results presented herein contribute toward the development of an early change detection system based on smart home technology.
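To make the modelling step concrete, here is a deliberately minimal sketch of the regression stage: ridge regression mapping weekly behavioural features to a symptom score. The feature names and data below are synthetic stand-ins invented for illustration; the study's actual features, clinical test scores, and the SmoteBOOST/wRACOG classification models are not reproduced here.

```python
import numpy as np

# Hypothetical illustration: predict a symptom score from weekly
# activity statistics. Feature names and data are synthetic.
rng = np.random.default_rng(42)
features = ["sleep_duration", "cook_time", "leave_home_count",
            "activity_transitions", "night_activity"]
X = rng.normal(size=(200, len(features)))        # weekly feature vectors
w_true = np.array([0.8, -0.5, 1.2, 0.0, -0.9])   # synthetic ground truth
y = X @ w_true + 0.1 * rng.normal(size=200)      # observed symptom scores

# ridge regression: w = (X^T X + a I)^{-1} X^T y
a = 1.0
w = np.linalg.solve(X.T @ X + a * np.eye(len(features)), X.T @ y)
mse = np.mean((y - X @ w) ** 2)
```

Inspecting the fitted weights is the analogue of the paper's feature-selection finding: coefficients near zero mark behavioural features that contribute little to predicting that particular symptom.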

97 citations

Journal ArticleDOI
TL;DR: This paper evaluates the method's feasibility on two emblematic cases: cardiac tagged magnetic resonance and cardiac ultrasound, and finds that the proposed framework provides, along with higher accuracy, superior robustness to noise and a considerably shorter computation time.
Abstract: We present a method for the analysis of heart motion from medical images. The algorithm exploits monogenic signal theory, recently introduced as an N-dimensional generalization of the analytic signal. The displacement is computed locally by assuming the conservation of the monogenic phase over time. A local affine displacement model is considered to account for typical heart motions such as contraction/expansion and shear. A coarse-to-fine B-spline scheme allows a robust and effective computation of the model's parameters, and a pyramidal refinement scheme helps to handle large motions. Robustness against noise is increased by replacing the standard point-wise computation of the monogenic orientation with a robust least-squares orientation estimate. Given its general formulation, the algorithm is well suited for images from different modalities, in particular for those cases where time-variant changes of local intensity invalidate the standard brightness constancy assumption. This paper evaluates the method's feasibility on two emblematic cases: cardiac tagged magnetic resonance and cardiac ultrasound. In order to quantify the performance of the proposed method, we made use of realistic synthetic sequences from both modalities for which the benchmark motion is known. A comparison is presented with state-of-the-art methods for cardiac motion analysis. On the data considered, these conventional approaches are outperformed by the proposed algorithm. A recent global optical-flow estimation algorithm based on the monogenic curvature tensor is also considered in the comparison. With respect to the latter, the proposed framework provides, along with higher accuracy, superior robustness to noise and a considerably shorter computation time.
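The monogenic signal at the heart of this method can be computed with a few FFTs. The sketch below is my own minimal version of the signal model only; it omits the band-pass filtering, affine displacement model, least-squares orientation estimate, and coarse-to-fine scheme that the paper builds on top of it.

```python
import numpy as np

def monogenic_phase(img):
    """Local amplitude, phase, and orientation of a 2-D image via the
    Riesz transform, computed in the frequency domain."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    r = np.sqrt(u**2 + v**2)
    r[0, 0] = 1.0                                  # avoid divide-by-zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * u / r * F))    # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / r * F))    # second Riesz component
    amp = np.sqrt(img**2 + r1**2 + r2**2)          # local amplitude
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)  # local phase
    orient = np.arctan2(r2, r1)                    # local orientation
    return amp, phase, orient

# example: a horizontal sinusoid
img = np.sin(2 * np.pi * 4 * np.arange(64) / 64)[None, :] * np.ones((64, 1))
amp, phase, orient = monogenic_phase(img)
```

Tracking the conservation of this phase over time, rather than raw intensity, is what makes the approach tolerant to the intensity changes that break the brightness constancy assumption.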

91 citations


Cited by
21 Jun 2010

1,966 citations

01 Jan 2016
Regularization of Inverse Problems (book).

1,097 citations

Journal Article
TL;DR: The methodology proposed automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density, and substantial improvements in the time‐normalized effective sample size are reported when compared with alternative sampling approaches.
Abstract: The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The methodology proposed exploits the Riemann geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from http://www.ucl.ac.uk/statistics/research/rmhmc allows replication of all the results reported.
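For orientation, a plain Euclidean MALA sampler is sketched below; the paper's Riemann-manifold variants replace the identity preconditioner in these steps with a position-dependent metric, which is what removes the hand-tuning of the step size. This is my own illustrative sketch, not the authors' MATLAB code, shown here on a standard 2-D Gaussian target.

```python
import numpy as np

def mala(grad_logp, logp, x0, eps=0.1, n=5000, rng=None):
    """Metropolis-adjusted Langevin algorithm: a Langevin drift
    proposal corrected by a Metropolis-Hastings accept/reject step."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n):
        # Langevin proposal: half gradient step plus Gaussian noise
        mean_x = x + 0.5 * eps**2 * grad_logp(x)
        prop = mean_x + eps * rng.normal(size=x.shape)
        mean_p = prop + 0.5 * eps**2 * grad_logp(prop)
        # MH correction keeps the target distribution exact
        log_a = (logp(prop) - logp(x)
                 - np.sum((x - mean_p)**2) / (2 * eps**2)
                 + np.sum((prop - mean_x)**2) / (2 * eps**2))
        if np.log(rng.uniform()) < log_a:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# target: standard 2-D Gaussian, log p(x) = -||x||^2 / 2
s = mala(lambda x: -x, lambda x: -0.5 * np.sum(x**2), np.zeros(2), eps=0.9)
```

Here eps must be tuned by hand for the target's scale; the manifold methods of the paper derive that scaling automatically from the local geometry.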

1,031 citations

Journal ArticleDOI
01 Aug 2014
TL;DR: The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy, and discusses common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases.
Abstract: Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, facial image analysis, text image analysis, sign and number plate reading, and biometrics recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy. For each of the groups in the taxonomy, the basic concepts of the algorithms are first explained, and then the paths through which each of these groups has evolved are given in detail, by mentioning the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases, are discussed.

602 citations

Journal ArticleDOI
Cengage Learning, 2000 (book).

554 citations