scispace - formally typeset
Topic

Smart camera

About: Smart camera is a research topic. Over its lifetime, 5,571 publications have been published within this topic, receiving 93,054 citations. The topic is also known as: intelligent camera.


Papers
PatentDOI
07 Jul 2005
TL;DR: In this paper, the authors describe a video surveillance system composed of three key components: (1) smart cameras, (2) servers, and (3) clients, connected through IP networks in wired or wireless configurations.
Abstract: This invention describes a video surveillance system composed of three key components: (1) smart cameras, (2) servers, and (3) clients, connected through IP networks in wired or wireless configurations. The system is designed to protect the privacy of people and goods under surveillance. The smart cameras are based on JPEG 2000 compression, where an analysis module allows efficient use of security tools for scrambling and event detection. The analysis is also used to provide better quality in regions of interest in the scene. Compressed video streams leaving the cameras are scrambled and signed for privacy and data-integrity verification using JPSEC-compliant methods. The same bit stream is also protected with JPWL-compliant methods for robustness to transmission errors. The operations of the smart camera are optimized to provide the best trade-off between perceived visual quality of the decoded video and power consumption. The smart cameras can be wireless in both power and communication connections. The servers receive, store, manage and dispatch the video sequences over wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Seamless scalable coding of the video sequences removes any need for transcoding at any point in the system.

104 citations
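The region-of-interest scrambling described in the abstract above can be sketched as a reversible keyed transform. This is an illustrative stand-in, not the JPSEC codec: real JPSEC scrambling operates inside the JPEG 2000 codestream, whereas the XOR-keystream approach, the frame representation, and all parameter names here are assumptions for demonstration.

```python
import random

def scramble_roi(frame, roi, key):
    """Reversibly scramble pixel values inside a region of interest.

    `frame` is a list of rows of 8-bit pixel values, `roi` is
    (x0, y0, x1, y1), and `key` seeds a PRNG keystream. XORing each
    ROI pixel with the keystream is its own inverse, so applying the
    function twice with the same key restores the original frame.
    """
    x0, y0, x1, y1 = roi
    rng = random.Random(key)          # keyed, reproducible keystream
    out = [row[:] for row in frame]   # leave the input frame untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] ^= rng.randrange(256)
    return out
```

Because the transform is self-inverse under the same key, an authorized client holding the key can restore the privacy-protected region, while everyone else sees scrambled pixels only in the ROI.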

Proceedings ArticleDOI
30 Oct 2000
TL;DR: A graphical interface that enables 3D visual artists or developers of interactive 3D virtual environments to efficiently define sophisticated camera compositions by creating storyboard frames, indicating how a desired shot should appear.
Abstract: We have designed a graphical interface that enables 3D visual artists or developers of interactive 3D virtual environments to efficiently define sophisticated camera compositions by creating storyboard frames indicating how a desired shot should appear. These storyboard frames are then automatically encoded into an extensive set of virtual camera constraints that capture the key visual composition elements of the storyboard frame. Visual composition elements include the size and position of a subject in a camera shot. A recursive heuristic constraint solver then searches the space of a given 3D virtual environment to determine camera parameter values that produce a shot closely matching the one in the given storyboard frame. The search method uses the ranges of allowable parameter values expressed by each constraint to reduce the size of the 7-degree-of-freedom search space of possible camera positions, aim direction vectors, and field-of-view angles. In contrast, some existing methods of automatically positioning cameras in 3D virtual environments rely on pre-defined camera placements that cannot account for unanticipated configurations and movement of objects, or use program-like scripts to define constraint-based camera shots. For example, it is more intuitive to directly manipulate an object's size in the frame than to edit a constraint script specifying that the object should cover 10% of the frame's area.

102 citations
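The constraint search in the abstract above can be illustrated with a heavily reduced example. The paper's solver explores a 7-degree-of-freedom space (position, aim, field of view); the sketch below searches only one degree of freedom, camera distance, so that the subject's apparent size matches a storyboard's target fraction of the frame width. The function name, parameters, and the grid-search strategy are assumptions, not the paper's recursive heuristic solver.

```python
import math

def solve_camera_distance(subject_size, target_fraction, fov_deg,
                          d_min=0.5, d_max=100.0, steps=1000):
    """Find the camera distance at which a subject of real-world width
    `subject_size` fills roughly `target_fraction` of the frame.

    At distance d with horizontal field of view `fov_deg`, the visible
    frame width is 2 * d * tan(fov/2); we grid-search d to minimize the
    deviation of the subject's apparent size from the target.
    """
    half_fov = math.radians(fov_deg) / 2.0
    best_d, best_err = None, float("inf")
    for i in range(steps + 1):
        d = d_min + (d_max - d_min) * i / steps
        frame_width = 2.0 * d * math.tan(half_fov)
        err = abs(subject_size / frame_width - target_fraction)
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

A storyboard constraint like "the subject covers 10% of the frame" thus becomes a numeric target the solver can search against, which is the intuition behind encoding composition elements as constraints.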

Proceedings ArticleDOI
24 Apr 2004
TL;DR: Usability issues encountered in using a camera phone as an image annotation device immediately after image capture and users' responses to use of such a system are presented.
Abstract: In this paper we describe a system that allows users to annotate digital photos at the time of capture. The system uses camera phones running a lightweight client application and a server that stores the images and metadata; it assists the user in annotating on the camera phone by providing guesses about the location and content of the photos. Through user-interface testing, surveys, and focus groups we evaluated the usability of this system and uncovered usage patterns and motivations that will inform our development of future mobile media annotation applications. We present the usability issues encountered in using a camera phone as an image annotation device immediately after image capture, along with users' responses to use of such a system.

101 citations

Proceedings ArticleDOI
26 Sep 2010
TL;DR: Through sensor fusion, the method largely bypasses the motion correspondence problem from computer vision and is able to track people across large spatial or temporal gaps in sensing.
Abstract: We present a method to identify and localize people by leveraging existing CCTV camera infrastructure along with inertial sensors (accelerometer and magnetometer) within each person's mobile phones. Since a person's motion path, as observed by the camera, must match the local motion measurements from their phone, we are able to uniquely identify people with the phones' IDs by detecting the statistical dependence between the phone and camera measurements. For this, we express the problem as consisting of a two-measurement HMM for each person, with one camera measurement and one phone measurement. Then we use a maximum a posteriori formulation to find the most likely ID assignments. Through sensor fusion, our method largely bypasses the motion correspondence problem from computer vision and is able to track people across large spatial or temporal gaps in sensing. We evaluate the system through simulations and experiments in a real camera network testbed.

100 citations
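The identification step in the abstract above can be sketched as an assignment problem: pair each camera track with the phone whose inertial trace it statistically matches best. The sketch below replaces the paper's two-measurement HMM and MAP formulation with a plain Pearson-correlation score and a brute-force search over pairings; both simplifications, along with all names and the toy trace format, are illustrative assumptions.

```python
import itertools
import math

def correlation(a, b):
    """Pearson correlation between two equal-length motion traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def map_assignment(camera_tracks, phone_traces):
    """Pick the phone-to-track pairing with the highest total motion
    correlation, standing in for the paper's maximum a posteriori
    ID assignment. Brute force is fine for a handful of people."""
    ids = list(range(len(phone_traces)))
    best, best_score = None, -float("inf")
    for perm in itertools.permutations(ids):
        score = sum(correlation(camera_tracks[i], phone_traces[perm[i]])
                    for i in ids)
        if score > best_score:
            best, best_score = perm, score
    return best  # best[i] = phone ID assigned to camera track i
```

Because identity comes from the dependence between the two measurement streams rather than from visual continuity, the method can re-identify a person after they leave and re-enter camera coverage, which is the "large spatial or temporal gaps" property the abstract highlights.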

Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed approach can reliably pre-alarm security risk events, substantially reduce storage space of recorded video and significantly speed up the evidence video retrieval associated with specific suspects.
Abstract: Video surveillance systems have become a critical part of the security and protection systems of modern cities, since smart monitoring cameras equipped with intelligent video analytics can monitor and pre-alarm abnormal behaviors or events. However, with the expansion of surveillance networks, massive surveillance video data poses huge challenges to analytics, storage and retrieval in the Big Data era. This paper presents a novel intelligent processing and utilization solution for big surveillance video data, based on event detection and alarm messages from front-end smart cameras. The method comprises three parts: intelligent pre-alarming for abnormal events, smart storage of surveillance video, and rapid retrieval of evidence videos, which fully exploits temporal-spatial association analysis of abnormal events across different monitoring sites. Experimental results show that the proposed approach can reliably pre-alarm security-risk events, substantially reduce the storage space of recorded video, and significantly speed up the retrieval of evidence video associated with specific suspects.

100 citations
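The "smart storage" part of the abstract above can be sketched as event-triggered retention: keep every frame near an alarm event and only sparse samples elsewhere. The function, its parameters, and the frame-index representation are illustrative assumptions, not the paper's actual storage scheme.

```python
def select_frames_to_store(num_frames, alarm_frames, window=30, keep_every=50):
    """Return the sorted frame indices worth storing: all frames within
    `window` frames of an alarm event, plus every `keep_every`-th frame
    as a low-rate background record."""
    keep = set()
    for f in alarm_frames:
        # Full-rate retention around each alarm event.
        keep.update(range(max(0, f - window), min(num_frames, f + window + 1)))
    # Sparse sampling everywhere else.
    keep.update(range(0, num_frames, keep_every))
    return sorted(keep)
```

Retention driven by front-end alarm messages is what lets a system like this cut storage substantially while still preserving full-rate video around the events that matter for evidence retrieval.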


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 88% related
Feature extraction: 111.8K papers, 2.1M citations, 88% related
Image processing: 229.9K papers, 3.5M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    23
2022    62
2021    73
2020    142
2019    161
2018    158