Topic

Smart camera

About: Smart camera is a research topic. Over its lifetime, 5571 publications have been published within this topic, receiving 93054 citations. The topic is also known as: intelligent camera.


Papers
Proceedings ArticleDOI
15 Apr 1995
TL;DR: Through this encapsulation, camera modules can be programmed and sequenced, and thus can be used as the underlying framework for controlling the virtual camera in widely disparate types of graphical environments.
Abstract: In this paper, a method of encapsulating camera tasks into well-defined units called “camera modules” is described. Through this encapsulation, camera modules can be programmed and sequenced, and thus can serve as the underlying framework for controlling the virtual camera in widely disparate types of graphical environments. Two examples of the camera framework are shown: an agent which can film a conversation between two virtual actors, and a visual programming language for filming a virtual football game.

135 citations
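The paper does not specify its camera-module interface here; as a minimal sketch of the idea of programmable, sequenceable camera tasks (the `CameraModule` class, `run_sequence`, and both toy modules are illustrative assumptions, not the paper's API):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class CameraModule:
    """One encapsulated camera task: maps scene state to a camera pose."""
    name: str
    controller: Callable[[dict], Vec3]  # returns a camera position per frame

def run_sequence(modules: List[CameraModule], scene: dict, frames_each: int) -> List[Vec3]:
    """Sequence modules: each controls the camera for a fixed number of frames."""
    shots: List[Vec3] = []
    for m in modules:
        for _ in range(frames_each):
            shots.append(m.controller(scene))
    return shots

# Two toy modules: a fixed establishing shot and one tracking an actor.
establish = CameraModule("establish", lambda s: (0.0, 5.0, -10.0))
track = CameraModule("track", lambda s: (s["actor"][0], 2.0, s["actor"][2] - 4.0))

poses = run_sequence([establish, track], {"actor": (3.0, 0.0, 7.0)}, frames_each=2)
print(poses)  # two establishing poses, then two tracking poses
```

Because each module is just a named controller function, higher-level tools (the conversation-filming agent, the visual programming language) can compose them without knowing their internals.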

Patent
Robert Grover Baker1
30 Jun 1995
TL;DR: In this article, an automatic voice-directional video camera image steering system for teleconferencing is presented. It automatically selects segmented images from a panoramic video scene, typically around a conference table, so that the participant currently speaking becomes the selected segmented image in the proper viewing aspect ratio, eliminating the need for manual or automated mechanical camera movement.
Abstract: An automatic, voice-directional video camera image steering system, specifically for teleconferencing, that electronically selects segmented images from a selected panoramic video scene, typically around a conference table, so that the participant currently speaking becomes the selected segmented image in the proper viewing aspect ratio, eliminating the need for manual camera movement or automated mechanical camera movement. The system includes an audio detection circuit fed by an array of microphones that can instantaneously determine the direction of a particular speaker and provide directional signals to a video camera and lens system. That system produces a panoramic display from which portions of the image can be electronically selected and, through warping techniques, freed of distortion in the most significant portions of the image, which lie from the horizon up to approximately 30 degrees in a hemispheric viewing area.

134 citations
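The patent does not give its direction-finding math here; a common two-microphone sketch estimates the speaker bearing from the time difference of arrival (the function name, the 0.3 m spacing, and the example delay are illustrative assumptions, not from the patent):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def bearing_from_tdoa(delta_t: float, mic_spacing: float) -> float:
    """Bearing of a sound source, in degrees from broadside, estimated from
    the arrival-time difference between two microphones a known distance apart."""
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical overshoot
    return math.degrees(math.asin(ratio))

# A 0.2 ms lead at one microphone of a 0.3 m pair is about 13 degrees off-centre.
print(round(bearing_from_tdoa(0.0002, 0.3), 1))
```

A real array would use more microphones and cross-correlation to estimate the delay, but the geometry reduces to this arcsine relation per microphone pair.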

Journal ArticleDOI
TL;DR: It is demonstrated that a high rate of accuracy can be achieved in source camera identification by exploiting the intrinsic lens radial distortion of each camera.
Abstract: Source camera identification refers to the task of matching digital images with the cameras responsible for producing them. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by exploiting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. For each image, we extract parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. We also present results on how the error rates may change with images captured at various optical zoom levels, as zooming is commonly available in digital cameras.

134 citations
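The paper trains a support vector machine on aberration parameters; as a standard-library stand-in, the sketch below pairs a common polynomial radial-distortion model with a nearest-centroid classifier (the coefficient values, camera names, and the centroid classifier itself are invented for illustration and are not the paper's method):

```python
import math

def distort(r: float, k1: float, k2: float) -> float:
    """Polynomial radial distortion model: distorted radius as a function of
    the undistorted radius r, with per-lens coefficients k1 and k2."""
    return r * (1 + k1 * r**2 + k2 * r**4)

def classify(features, centroids):
    """Nearest-centroid stand-in for the paper's SVM: assign a measured
    (k1, k2) pair to the camera whose training centroid is closest."""
    return min(centroids, key=lambda cam: math.dist(features, centroids[cam]))

# Hypothetical per-camera coefficient centroids learned from training images.
centroids = {"camA": (0.10, -0.02), "camB": (-0.05, 0.01)}
print(classify((0.09, -0.015), centroids))  # → camA
```

The key idea survives the simplification: each lens leaves a stable (k1, k2) fingerprint, so distortion parameters estimated from an image cluster around the source camera's centroid.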

Proceedings ArticleDOI
17 May 2011
TL;DR: A novel method to select camera sensors from an arbitrary deployment to form a camera barrier is proposed, and redundancy reduction techniques to effectively reduce the number of cameras used are presented.
Abstract: Barrier coverage has attracted much attention in the past few years. However, most of the previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensors is that cameras from different positions can form quite different views of the object. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily form an effective camera barrier, since the face image (or the interested aspect) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if there is always a camera to cover it no matter which direction it faces, and the camera's viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this specific deployment under various parameters.

133 citations
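The full-view coverage condition can be sketched as an angle test. This is a simplified 2-D illustration with an assumed effective angle of 60 degrees, omnidirectional unbounded-range cameras, and a sampled set of facing directions; it is not the paper's barrier-construction algorithm:

```python
import math

def angle_between(v1, v2) -> float:
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def full_view_covered(point, cameras, theta=60.0, step=15) -> bool:
    """True if, whichever direction the object at `point` faces (sampled every
    `step` degrees), some camera lies within `theta` degrees of that facing
    direction, so the face is always seen sufficiently head-on."""
    for facing in range(0, 360, step):
        f = (math.cos(math.radians(facing)), math.sin(math.radians(facing)))
        to_cams = [(cx - point[0], cy - point[1]) for cx, cy in cameras]
        if not any(angle_between(f, v) <= theta for v in to_cams):
            return False
    return True

# Six cameras in a ring around the origin cover every facing direction;
# keeping only two of them leaves facing directions uncovered.
ring = [(math.cos(math.radians(a)), math.sin(math.radians(a))) for a in range(0, 360, 60)]
print(full_view_covered((0.0, 0.0), ring))      # → True
print(full_view_covered((0.0, 0.0), ring[:2]))  # → False
```

This makes the abstract's point concrete: combining sensing ranges is not enough, because coverage must hold for every possible facing direction, not just for presence.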

Patent
12 Feb 2010
TL;DR: In this paper, a method is presented for determining the pose of a camera with respect to at least one object of a real environment, for use in authoring/augmented reality applications. It generates a first image by the camera capturing a real object, generates first orientation data from at least one orientation sensor associated with the camera or from an algorithm that analyses the first image to find features indicative of the camera's orientation, allocates a distance of the camera to the real object, and determines the pose from the distance and orientation data.
Abstract: A method for determining the pose of a camera with respect to at least one object of a real environment, for use in authoring/augmented reality applications. The method includes generating a first image by the camera capturing a real object of a real environment; generating first orientation data from at least one orientation sensor associated with the camera, or from an algorithm which analyses the first image to find and determine features indicative of an orientation of the camera; allocating a distance of the camera to the real object; generating distance data indicative of the allocated distance; and determining the pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data. The method may be performed with reduced processing requirements and/or higher processing speed in mobile devices, such as mobile phones, having a display, camera, and orientation sensor.

132 citations
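The core geometric step, recovering a camera position from orientation plus a known distance to the object, can be illustrated in a simplified form (assuming the camera's viewing axis passes through the object; `camera_pose` and the yaw/pitch parameterization are illustrative assumptions, not the patent's method):

```python
import math

def camera_pose(obj_pos, yaw_deg, pitch_deg, distance):
    """Camera position in the object's coordinate system: step back from the
    object along the viewing direction given by the camera's yaw and pitch."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    view = (math.cos(pitch) * math.cos(yaw),   # unit viewing direction
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))
    return tuple(p - distance * v for p, v in zip(obj_pos, view))

# A camera 2 m from the origin, looking along +x with zero pitch,
# sits 2 m behind the object on the x axis.
print(camera_pose((0.0, 0.0, 0.0), 0.0, 0.0, 2.0))
```

This is why the method suits mobile phones: the orientation sensor supplies yaw and pitch cheaply, so only the distance needs to be allocated before the pose follows in closed form.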


Network Information
Related Topics (5)
Feature (computer vision)
128.2K papers, 1.7M citations
88% related
Feature extraction
111.8K papers, 2.1M citations
88% related
Image processing
229.9K papers, 3.5M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
87% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    23
2022    62
2021    73
2020    142
2019    161
2018    158