Proceedings ArticleDOI

Object recognition from local scale-invariant features

20 Sep 1999 - Vol. 2, pp. 1150-1157
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
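The staged filtering the abstract describes — detecting stable points as extrema in scale space via differences of Gaussians — can be illustrated on a 1-D signal. The sketch below is a toy reduction under stated assumptions: real SIFT operates on 2-D images with octaves and subsampling, and the function names and sigma schedule here are illustrative, not from the paper.

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def smooth(signal, sigma):
    """Convolve with a Gaussian, clamping indices at the borders."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_extrema(signal, sigmas):
    """Candidate keypoints: difference-of-Gaussian samples that are strict
    extrema over their 8 neighbours in (position, scale)."""
    levels = [smooth(signal, s) for s in sigmas]
    dogs = [[a - b for a, b in zip(levels[i + 1], levels[i])]
            for i in range(len(levels) - 1)]
    keys = []
    for s in range(1, len(dogs) - 1):
        for x in range(1, len(signal) - 1):
            v = dogs[s][x]
            neigh = [dogs[ds][dx]
                     for ds in (s - 1, s, s + 1)
                     for dx in (x - 1, x, x + 1)
                     if (ds, dx) != (s, x)]
            if v > max(neigh) or v < min(neigh):
                keys.append((x, s))
    return keys
```

In the paper the analogous comparison runs over 2-D position plus scale (26 neighbours), and surviving points are then turned into orientation-plane image keys.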


Citations
01 Jan 2014
TL;DR: A current view is presented on the use of natriuretic peptide levels (BNP, NT-proBNP) as a criterion for assessing the efficacy of heart failure therapy.
Abstract: Natriuretic peptides (NPs) are important biomarkers for diagnosis and for determining prognosis in patients with heart failure (HF). The dynamics of NP concentrations (BNP, NT-proBNP) can be used as a criterion of treatment success: reaching target NP levels predicts a favorable disease outcome. NP-guided management is currently part of the US guidelines on treating HF (class IIa) and improving its outcome (class IIb), but this approach is not used in Russian clinics. Aim: to present a current view on the possibility of using NPs to assess the efficacy of therapy in patients with HF. Key words: natriuretic peptides, heart failure, assessment of therapy efficacy.

167 citations

Journal ArticleDOI
TL;DR: This work analyzes the existing challenges in video-based surveillance systems for the vehicle and presents a general architecture for video surveillance systems, i.e., the hierarchical and networked vehicle surveillance, to survey the different existing and potential techniques.
Abstract: Traffic surveillance has become an important topic in intelligent transportation systems (ITSs), which are aimed at monitoring and managing traffic flow. With the progress in computer vision, video-based surveillance systems have made great advances in traffic surveillance for ITSs. However, the performance of most existing surveillance systems is susceptible to challenging, complex traffic scenes (e.g., object occlusion, pose variation, and cluttered background). Moreover, existing research focuses mainly on a single video sensor node, which is incapable of addressing the surveillance of traffic road networks. Accordingly, we present a review of the literature on video-based vehicle surveillance systems in ITSs. We analyze the existing challenges in video-based vehicle surveillance and present a general architecture for video surveillance systems, i.e., hierarchical and networked vehicle surveillance, to survey the different existing and potential techniques. Different methods are then reviewed and discussed with respect to each module. Applications and future developments are discussed to address the future needs of ITS services.

167 citations

Patent
02 Sep 2011
TL;DR: In this paper, a method and system for text-based searches of images using a visual signature associated with each image is described, where a measure of string similarity between a query and an annotation associated with the image is computed, and based upon the computed string similarity measures, a set of entries from the first database is selected.
Abstract: A method and system are disclosed for conducting text-based searches of images using a visual signature associated with each image. A measure of string similarity between a query and an annotation associated with each entry in a first database is computed, and based upon the computed string similarity measures, a set of entries from the first database is selected. Each entry of the first database also includes an associated visual signature. At least one entry is then retrieved from a second database based upon a measure of visual similarity between a visual signature of each of the entries in the second database and the visual signatures of the entries in the selected set. Information corresponding to the retrieved entries from the second database is then generated.
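The patent's two-stage retrieval can be sketched with stand-ins: difflib's sequence ratio for the string-similarity measure and cosine similarity over toy visual signatures. The record schema (`annotation`/`signature` fields), the particular similarity measures, and the max-over-seeds scoring are all assumptions for illustration; the patent does not prescribe them.

```python
from difflib import SequenceMatcher
import math

def string_sim(a, b):
    """String similarity in [0, 1], case-insensitive (illustrative choice)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cosine(u, v):
    """Cosine similarity between two signature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, annotated_db, signature_db, k=2, top=1):
    """Text query -> annotated first database -> visual second database."""
    # Stage 1: pick the k entries whose annotations best match the query text.
    scored = sorted(annotated_db,
                    key=lambda e: string_sim(query, e["annotation"]),
                    reverse=True)
    seeds = scored[:k]
    # Stage 2: rank second-database entries by visual similarity
    # to the seeds' visual signatures.
    def vis_score(entry):
        return max(cosine(entry["signature"], s["signature"]) for s in seeds)
    return sorted(signature_db, key=vis_score, reverse=True)[:top]
```

The point of the design is that stage 2 can surface relevant but unannotated images: only the first database needs text annotations.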

166 citations

Journal ArticleDOI
TL;DR: The historical connections between neuroscience and computer science are reviewed, looking forward to a new era of potential collaboration enabled by recent rapid advances both in biologically inspired computer vision and in experimental neuroscience methods.

165 citations


Cites background from "Object recognition from local scale..."

  • ...For instance, David Lowe's widely influential Scale Invariant Feature Transform (SIFT) was originally described in analogy to the primate ventral visual pathway [31]; although it quickly became a ubiquitous component of conventional computer vision systems, its biological inspiration is rarely mentioned....


Journal ArticleDOI
TL;DR: Zhang et al. propose a novel framework, multi-query expansions, that retrieves semantically robust landmarks in two steps: the top-k photos matching the latent topics of a query landmark are identified to construct a multi-query set, remedying the query's possibly low-quality shape.
Abstract: Given a query photo issued by a user (q-user), landmark retrieval returns a set of photos whose landmarks are similar to those of the query. Existing studies on landmark retrieval focus on exploiting the geometry of landmarks for similarity matching between candidate photos and a query photo. We observe that the same landmark photographed by different users in a social media community may convey different geometric information depending on viewpoint and/or angle and may, subsequently, yield very different results. In fact, dealing with landmarks whose shapes are of low quality because of the q-user's photography is often nontrivial and has seldom been studied. In this paper, we propose a novel framework, multi-query expansions, to retrieve semantically robust landmarks in two steps. First, we identify the top-k photos matching the latent topics of a query landmark to construct a multi-query set that remedies the query's possibly low-quality shape; for this purpose, we significantly extend the techniques of Latent Dirichlet Allocation. Then, motivated by typical collaborative filtering methods, we learn a collaborative deep network that produces semantic, nonlinear, high-level features over the latent factors of landmark photos; the training set is formed by matrix factorization of the collaborative user-photo matrix for the multi-query set. The learned deep network is then applied to generate features for all other photos, which also yields a compact multi-query set in this feature space. Final ranking scores are computed in the high-level feature space between the multi-query set and all other photos, producing the final ranking list for landmark retrieval.
Extensive experiments are conducted on real-world social media data, with landmark photos and their associated user information, showing superior performance over existing methods, especially our recently proposed multi-query-based mid-level pattern representation method [1].
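The core multi-query idea — expand a single query into a set, then rank by the best match against any member of the set — can be reduced to a few lines. This is a stripped-down sketch: Euclidean distance on toy feature vectors stands in for the paper's LDA topics and learned deep features, and the function names are illustrative.

```python
def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def expand_query(query_vec, gallery, k):
    """Form a multi-query set: the query plus its k nearest gallery photos
    (a toy stand-in for the paper's LDA-based topic matching)."""
    ranked = sorted(gallery, key=lambda v: dist(query_vec, v))
    return [query_vec] + ranked[:k]

def rank_by_multiquery(multi_query, gallery):
    """Rank photos by their best match against any member of the set,
    so one low-quality query view cannot dominate the result."""
    return sorted(gallery, key=lambda v: min(dist(v, q) for q in multi_query))
```

Scoring against the minimum distance over the whole set is what makes the retrieval robust to a single poor query viewpoint.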

165 citations

References
Journal ArticleDOI
TL;DR: Color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models, and they can differentiate among a large number of objects.
Abstract: Computer vision is moving into a new era in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, unconstrained environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This dissertation demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection which allows real-time indexing into a large database of stored models. It demonstrates techniques for dealing with crowded scenes and with models with similar color signatures. For solving the location problem it introduces an algorithm called Histogram Backprojection which performs this task efficiently in crowded scenes.
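The Histogram Intersection technique the abstract introduces has a compact closed form: sum the bin-wise minima of the image and model histograms, normalized by the model's total count. A minimal sketch (the fast incremental variant and Histogram Backprojection are omitted):

```python
def histogram_intersection(image_hist, model_hist):
    """Swain-Ballard histogram intersection: the fraction of the model's
    pixel mass that is matched bin-by-bin in the image histogram."""
    matched = sum(min(i, m) for i, m in zip(image_hist, model_hist))
    return matched / sum(model_hist)

def identify(image_hist, models):
    """Index into a model database: return the best-matching model name."""
    return max(models,
               key=lambda name: histogram_intersection(image_hist, models[name]))
```

Because min() simply ignores image mass with no model counterpart, clutter adds nothing to the score, which is one reason the measure holds up in crowded, partially occluded scenes.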

5,672 citations

Journal ArticleDOI
TL;DR: It is shown how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space, which makes the generalized Hough transform a kind of universal transform that can be used to find arbitrarily complex shapes.
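The mapping the TL;DR describes is the R-table: for each boundary point, the gradient angle indexes a stored displacement to a reference point, and at detection time every edge point casts votes for candidate reference locations. The sketch below assumes quantized gradient-angle bins are already given; edge detection, orientation estimation, and the rotation/scale dimensions of the accumulator are omitted.

```python
from collections import defaultdict

def build_r_table(boundary):
    """boundary: list of ((x, y), angle_bin) pairs on the model shape.
    Uses the boundary centroid as the reference point and maps each
    angle bin to displacement vectors toward that reference."""
    cx = sum(p[0] for p, _ in boundary) / len(boundary)
    cy = sum(p[1] for p, _ in boundary) / len(boundary)
    table = defaultdict(list)
    for (x, y), ang in boundary:
        table[ang].append((cx - x, cy - y))
    return table

def hough_votes(edge_points, r_table):
    """Each edge point votes for every reference location its angle bin
    allows; peaks in the accumulator locate shape instances."""
    acc = defaultdict(int)
    for (x, y), ang in edge_points:
        for dx, dy in r_table.get(ang, []):
            acc[(round(x + dx), round(y + dy))] += 1
    return acc
```

A translated copy of the model shape sends all of its votes to a single cell, which is what makes the accumulator peak easy to detect.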

4,310 citations

Journal ArticleDOI
TL;DR: A near real-time recognition system with 20 complex objects in the database has been developed and a compact representation of object appearance is proposed that is parametrized by pose and illumination.
Abstract: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
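The recognition step the abstract describes — project an input image into the eigenspace, then find the nearest stored manifold point — reduces to a few lines once an orthonormal basis has been computed (e.g., by PCA/SVD, which is omitted here). The flattened-vector representation and the function names are illustrative assumptions.

```python
def project(vec, mean, basis):
    """Project a flattened image onto a precomputed orthonormal eigenspace
    basis (rows of `basis` are the principal directions)."""
    centered = [v - m for v, m in zip(vec, mean)]
    return [sum(c * b for c, b in zip(centered, axis)) for axis in basis]

def recognize(vec, mean, basis, stored_coords, labels):
    """Identity comes from the nearest stored eigenspace point; in the
    paper, the position along that object's manifold also gives pose."""
    q = project(vec, mean, basis)
    def d2(p):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best = min(range(len(stored_coords)), key=lambda i: d2(stored_coords[i]))
    return labels[best]
```

Keeping `stored_coords` low-dimensional (under 20 dimensions in the paper's experiments) is what makes this nearest-point search cheap enough for near real-time use.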

2,037 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants computed at automatically detected interest points; indexing allows for efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.
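The voting idea in this abstract — each local descriptor from the query votes for the database image that owns its nearest stored descriptor — can be sketched directly. This is only the core exhaustive vote under illustrative names; the paper's method additionally uses semilocal constraints and indexing to avoid the brute-force nearest-neighbour search shown here.

```python
def vote_retrieval(query_descs, database):
    """database: {image_id: [descriptor, ...]}. Each query descriptor casts
    one vote for the image holding its globally nearest database descriptor;
    the image with the most votes is returned."""
    votes = {img: 0 for img in database}
    for q in query_descs:
        best_img, best_d = None, float("inf")
        for img, descs in database.items():
            for d in descs:
                dist = sum((a - b) ** 2 for a, b in zip(q, d))
                if dist < best_d:
                    best_img, best_d = img, dist
        votes[best_img] += 1
    return max(votes, key=votes.get)
```

Because each vote is independent, occluded or spurious query features only dilute the tally rather than break the match — the property the experiments on partial visibility rely on.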

1,756 citations


"Object recognition from local scale..." refers background or methods in this paper

  • ...This allows for the use of more distinctive image descriptors than the rotation-invariant ones used by Schmid and Mohr, and the descriptor is further modified to improve its stability to changes in affine projection and illumination....


  • ...For the object recognition problem, Schmid & Mohr [19] also used the Harris corner detector to identify interest points, and then created a local image descriptor at each interest point from an orientation-invariant vector of derivative-of-Gaussian image measurements....



  • ...However, recent research on the use of dense local features (e.g., Schmid & Mohr [19]) has shown that efficient recognition can often be achieved by using local image descriptors sampled at a large number of repeatable locations....


Journal ArticleDOI
TL;DR: A robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint, is proposed and a new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity.
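The epipolar constraint this TL;DR exploits is the algebraic test x2ᵀ F x1 = 0 for a corresponding point pair under fundamental matrix F. The sketch below shows only that consistency test with F assumed known; the paper's contribution is the robust estimation of F and the match-update strategy around it, which are omitted.

```python
def epipolar_residual(F, p1, p2):
    """|p2^T F p1| for homogeneous points p1, p2 = (x, y, 1);
    zero for an exact correspondence."""
    Fp = [sum(F[i][j] * p1[j] for j in range(3)) for i in range(3)]
    return abs(sum(p2[i] * Fp[i] for i in range(3)))

def filter_matches(F, matches, tol=1e-6):
    """Keep only candidate matches consistent with the epipolar geometry."""
    return [(p1, p2) for p1, p2 in matches
            if epipolar_residual(F, p1, p2) < tol]
```

For a camera translating purely along x, F = [[0,0,0],[0,0,-1],[0,1,0]] reduces the residual to |y1 - y2|, so the test keeps only matches on the same scanline.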

1,574 citations


"Object recognition from local scale..." refers methods in this paper

  • ...[23] used the Harris corner detector to identify feature locations for epipolar alignment of images taken from differing viewpoints....
