Author
Robert Laganiere
Other affiliations: Institut national de la recherche scientifique, Ottawa University, École Polytechnique de Montréal, and others
Bio: Robert Laganiere is an academic researcher from the University of Ottawa. The author has contributed to research in topics including object detection and video tracking. The author has an h-index of 29 and has co-authored 164 publications receiving 4055 citations. Previous affiliations of Robert Laganiere include Institut national de la recherche scientifique and Ottawa University.
[Chart: papers published per year]
Papers
University of Ljubljana, University of Birmingham, Czech Technical University in Prague, Linköping University, Austrian Institute of Technology, Carnegie Mellon University, Parthenope University of Naples, University of Isfahan, Autonomous University of Madrid, University of Ottawa, University of Oxford, Hong Kong Baptist University, Kyiv Polytechnic Institute, Middle East Technical University, Hacettepe University, King Abdullah University of Science and Technology, Pohang University of Science and Technology, University of Nottingham, University at Albany, SUNY, Chinese Academy of Sciences, Dalian University of Technology, Xi'an Jiaotong University, Indian Institute of Space Science and Technology, Hong Kong University of Science and Technology, ASELSAN, Australian National University, Commonwealth Scientific and Industrial Research Organisation, University of Missouri, University of Verona, Universidade Federal de Itajubá, United States Naval Research Laboratory, Marquette University, Graz University of Technology, Naver Corporation, Imperial College London, Electronics and Telecommunications Research Institute, Zhejiang University, University of Surrey, Harbin Institute of Technology, Lehigh University
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Abstract: The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, many of which were published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit, and the results are publicly available at the challenge website (http://votchallenge.net).
744 citations
TL;DR: This paper conducts a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion and relates the results to an image quality measurement.
Abstract: Comparison of image processing techniques is critically important in deciding which algorithm, method, or metric to use for enhanced image assessment. Image fusion is a popular choice for various image enhancement applications such as overlay of two image products, refinement of image resolutions for alignment, and image combination for feature extraction and target recognition. Since image fusion is used in many geospatial and night vision applications, it is important to understand these techniques and provide a comparative study of the methods. In this paper, we conduct a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion. The analysis can be applied to different image combination algorithms, image processing methods, and over a different choice of metrics that are of use to an image processing expert. The paper relates the results to an image quality measurement based on power spectrum and correlation analysis and serves as a summary of many contemporary techniques for objective assessment of image fusion algorithms.
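To make the kind of computation involved concrete, here is a minimal reference-free fusion score based on correlation analysis. It is a generic illustration in Python, not necessarily one of the 12 metrics the paper compares:

```python
import numpy as np

def correlation_fusion_score(src_a, src_b, fused):
    """Toy fusion metric: mean Pearson correlation between the fused
    image and each source image. Higher means the fused result
    preserves more of both inputs (a simplistic assumption)."""
    def pearson(x, y):
        x = x.astype(np.float64).ravel()
        y = y.astype(np.float64).ravel()
        x -= x.mean()
        y -= y.mean()
        return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    return 0.5 * (pearson(src_a, fused) + pearson(src_b, fused))
```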
563 citations
Book
23 May 2011
TL;DR: This book is a comprehensive reference guide that exposes you to practical and fundamental computer vision concepts, illustrated by extensive examples, and shows you how to install and deploy the OpenCV library to write an effective computer vision application.
Abstract: Over 50 recipes to help you build computer vision applications in C++ using the OpenCV library. About this book: master OpenCV, the open-source library of the computer vision community; master fundamental concepts in computer vision and image processing; learn the important classes and functions of OpenCV through complete working examples applied to real images. Who this book is for: the book is appropriate for novice C++ programmers who want to learn how to use the OpenCV library to build computer vision applications, and it is also suitable for professional software developers wishing to be introduced to the concepts of computer vision programming. It can be used as a companion book in university-level computer vision courses and constitutes an excellent reference for graduate students and researchers in image processing and computer vision. In detail: the book shows you how to install and deploy the OpenCV library and write effective computer vision applications. Different techniques for image enhancement, pixel manipulation, and shape analysis are presented. You will also learn how to process video from files or cameras and how to detect and track moving objects, and you will be introduced to recent approaches in machine learning and object classification. This book is a comprehensive reference guide that exposes you to practical and fundamental computer vision concepts, illustrated by extensive examples.
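The book's recipes are in C++; as a rough Python sketch of the kind of moving-object detection pipeline it covers (video capture, background subtraction, contour-based detection), one might write:

```python
import cv2

cap = cv2.VideoCapture("input.avi")  # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # foreground mask
    mask = cv2.medianBlur(mask, 5)      # suppress salt-and-pepper noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:    # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) == 27:           # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```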
287 citations
19 Jun 2001
TL;DR: In this paper, the authors proposed an algorithm that detects planar homographies in uncalibrated image pairs using a RANSAC scheme based on the linear computation of the homography matrix elements using four points.
Abstract: Because of their abundance and simplicity, planes are used in several computer vision tasks. A consequence of their simplicity is that, under perspective projection, the transformation between a world plane and its image is projective linear, that is, a homography. The same relation holds between perspective views of a plane in different images. This paper proposes an algorithm that detects planar homographies in uncalibrated image pairs. It then demonstrates how this plane identification method can be used as a first step in an image analysis process when point matching between images is unreliable. The detection is performed using a RANSAC scheme based on the linear computation of the homography matrix elements from four points. Results are shown on real image pairs.
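A minimal OpenCV sketch of this approach, assuming a SIFT + ratio-test matching front end (the paper's own matching step may differ); cv2.findHomography with the RANSAC flag performs exactly the 4-point hypothesize-and-verify loop described:

```python
import cv2
import numpy as np

def detect_plane(img1, img2, ransac_thresh=3.0):
    """Find one dominant planar homography between two uncalibrated
    images: putative feature matches, then a RANSAC loop that solves
    the homography from minimal 4-point samples and keeps the
    hypothesis with the most inliers."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching to get putative correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransac_thresh)
    return H, inliers.ravel().astype(bool)  # inliers lie on one plane
```

Points flagged as inliers belong to the detected plane; running the scheme again on the outliers can reveal further planes.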
222 citations
01 Jan 2018
TL;DR: The experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries only by offering poor model utility, while exhibiting moderate vulnerability to the membership inference attack when they offer acceptable utility.
Abstract: The unprecedented success of deep learning is largely dependent on the availability of massive amounts of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, and therefore pose privacy concerns. As a result, privacy-preserving deep learning has been attracting increasing attention. One of the promising approaches for privacy-preserving deep learning is to employ differential privacy during model training, which aims to prevent the leakage of sensitive information about the training data via the trained model. While these models are considered to be immune to privacy attacks, with the advent of recent and sophisticated attack models it is not clear how well they trade off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine-learning-based privacy attack, the membership inference attack, on a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much we can infer about the model's training data. Our experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries only by offering poor model utility, while exhibiting moderate vulnerability to the membership inference attack when they offer acceptable utility. For our evaluation, we use the CIFAR-10 and MNIST datasets and the corresponding classification tasks.
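The paper studies a trained, machine-learning-based attack; for intuition only, here is the simplest confidence-thresholding variant of membership inference, sketched against a hypothetical scikit-learn-style model (the predict_proba interface is an assumption of this sketch):

```python
import numpy as np

def membership_scores(model, x):
    """Score each example by the model's top predicted probability.
    Training-set members tend to receive more confident predictions,
    especially from overfit models; differential privacy is meant to
    suppress exactly this gap."""
    return model.predict_proba(x).max(axis=1)

def infer_membership(model, x, threshold=0.9):
    # Flag examples whose confidence exceeds the threshold as
    # probable training-set members (threshold chosen arbitrarily
    # here; in practice it is tuned on held-out data).
    return membership_scores(model, x) >= threshold
```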
182 citations
Cited by
TL;DR: A novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features.
Abstract: Image quality assessment (IQA) aims to use computational models to measure image quality consistently with subjective evaluations. The well-known structural similarity index brought IQA from the pixel-based to the structure-based stage. In this paper, a novel feature similarity (FSIM) index for full-reference IQA is proposed, based on the fact that the human visual system (HVS) understands an image mainly according to its low-level features. Specifically, phase congruency (PC), a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while contrast information does affect the HVS's perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature. PC and GM play complementary roles in characterizing local image quality. After obtaining the local quality map, PC is used again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM achieves much higher consistency with subjective evaluations than state-of-the-art IQA metrics.
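A compact sketch of the similarity-and-pooling stage, taking precomputed phase-congruency maps as given (computing PC itself is considerably more involved). The similarity and pooling formulas follow the paper; the gradient operator here and the constants are plausible defaults, not a verified reimplementation:

```python
import numpy as np

def fsim_like(img1, img2, pc1, pc2, T1=0.85, T2=160.0):
    """FSIM-style score. pc1/pc2 are precomputed phase-congruency maps
    (their computation is omitted); T1/T2 are stabilizing constants
    for 8-bit images. np.gradient stands in for the paper's gradient
    operator."""
    gy1, gx1 = np.gradient(img1.astype(float))
    gy2, gx2 = np.gradient(img2.astype(float))
    g1 = np.hypot(gx1, gy1)
    g2 = np.hypot(gx2, gy2)

    s_pc = (2 * pc1 * pc2 + T1) / (pc1**2 + pc2**2 + T1)  # PC similarity
    s_g  = (2 * g1 * g2 + T2) / (g1**2 + g2**2 + T2)      # GM similarity
    s_l  = s_pc * s_g                                     # local quality map

    pc_m = np.maximum(pc1, pc2)          # PC reused as pooling weight
    return float((s_l * pc_m).sum() / pc_m.sum())
```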
4,028 citations
01 Dec 2013
TL;DR: Dense trajectories, an efficient video representation for action recognition that has achieved state-of-the-art results on a variety of datasets, are improved by taking camera motion into account to correct them.
Abstract: Recently, dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking camera motion into account to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are then used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches; to improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow, which significantly improves motion-based descriptors such as HOF and MBH. Experimental results on four challenging action datasets (Hollywood2, HMDB51, Olympic Sports, and UCF50) significantly outperform the current state of the art.
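A rough Python/OpenCV sketch of the camera-motion cancellation step, with ORB substituted for the patented SURF and without the paper's dense-flow matches or human-detector masking:

```python
import cv2
import numpy as np

def cancel_camera_motion(prev, curr):
    """prev, curr: consecutive grayscale uint8 frames. Estimate a
    frame-to-frame homography from feature matches with RANSAC, then
    subtract the motion it induces from the dense optical flow,
    leaving mostly non-camera (e.g., human) motion."""
    # Putative matches (ORB swapped in for SURF in this sketch).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(prev, None)
    k2, d2 = orb.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)

    # Dense flow, then subtract the homography-induced displacement.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float32)
    warped = cv2.perspectiveTransform(pts, H).reshape(h, w, 2)
    camera_flow = warped - np.stack([xs, ys], axis=-1)
    return flow - camera_flow   # residual flow after compensation
```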
3,487 citations
2,687 citations
The Perception of the Visual World [no abstract available; the indexed text was download-site boilerplate]
2,250 citations
18 Jun 2018
TL;DR: The Siamese region proposal network (Siamese-RPN), trained end-to-end offline on large-scale image pairs, is proposed for visual object tracking; it consists of a Siamese subnetwork for feature extraction and a region proposal subnetwork with classification and regression branches.
Abstract: Visual object tracking has been a fundamental topic in recent years, and many deep-learning-based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly reach top performance at real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN), which is trained end-to-end offline with large-scale image pairs. Specifically, it consists of a Siamese subnetwork for feature extraction and a region proposal subnetwork with a classification branch and a regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task: we can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. Benefiting from the proposal refinement, traditional multi-scale testing and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in the VOT2015, VOT2016, and VOT2017 real-time challenges.
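A minimal PyTorch sketch of the core correlation-as-convolution idea: the template branch emits kernels (pre-computable once per target) that are slid over the search features as ordinary convolutions. Channel counts and layer shapes here are illustrative, not the paper's exact architecture:

```python
import torch.nn as nn
import torch.nn.functional as F

class RPNHead(nn.Module):
    """Sketch of a Siamese-RPN head. With k anchors per position, the
    template features are mapped to 2k classification and 4k
    regression correlation kernels."""

    def __init__(self, c=256, k=5):
        super().__init__()
        self.k = k
        self.cls_z = nn.Conv2d(c, 2 * k * c, 3)  # template -> cls kernels
        self.reg_z = nn.Conv2d(c, 4 * k * c, 3)  # template -> reg kernels
        self.cls_x = nn.Conv2d(c, c, 3)          # search-branch adjust
        self.reg_x = nn.Conv2d(c, c, 3)

    def forward(self, z_feat, x_feat):
        # z_feat: (1, C, Hz, Wz) exemplar features (batch 1, as when
        # tracking one object); x_feat: (1, C, Hx, Wx) search features.
        kc = self.cls_z(z_feat)
        kr = self.reg_z(z_feat)
        hz, wz = kc.shape[-2:]
        kc = kc.view(2 * self.k, -1, hz, wz)     # (2k, C, hz, wz) kernels
        kr = kr.view(4 * self.k, -1, hz, wz)     # (4k, C, hz, wz) kernels
        cls = F.conv2d(self.cls_x(x_feat), kc)   # (1, 2k, H', W') scores
        reg = F.conv2d(self.reg_x(x_feat), kr)   # (1, 4k, H', W') offsets
        return cls, reg
```

During tracking, kc and kr can be computed once from the first-frame template and reused, which is what makes the per-frame work a pair of plain convolutions.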
2,016 citations