scispace - formally typeset
Topic

Smart camera

About: Smart camera is a research topic. Over its lifetime, 5,571 publications on this topic have been published, receiving 93,054 citations. The topic is also known as: intelligent camera.


Papers
Patent
22 Nov 2004
TL;DR: In this patent, an outdoor, battery-powered digital camera uses a passive infrared motion detector to sense a moving animal and automatically take a picture, and the camera goes into a power-saving sleep mode between pictures to prolong battery life.
Abstract: An outdoor, battery-powered digital camera includes a passive infrared motion detector that allows the camera to be left unattended, as the detector automatically triggers the camera to take a picture upon sensing the presence of a moving animal. To prolong the life of the battery, the camera goes into a power-saving sleep mode between pictures. To enable the camera to take a picture instantly upon suddenly being awakened by the motion detector, the exposure setting of the camera is periodically checked, adjusted and stored so that the camera can use that fairly recent exposure setting, or one near it, to take an instant picture rather than wasting excessive time adjusting the exposure when the animal first appears. In some cases, the camera is used in conjunction with picture management software.
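The exposure-caching behaviour described in the abstract can be sketched roughly as follows. This is an illustrative Python sketch only; the class, method names, and exposure values are assumptions, not taken from the patent.

```python
class TrailCamera:
    """Sketch of the wake-on-motion idea described above: periodically
    refresh a stored exposure setting while the camera sleeps, so a
    picture can be taken immediately when motion wakes the camera."""

    def __init__(self, refresh_interval_s=60.0):
        self.refresh_interval_s = refresh_interval_s
        self.stored_exposure = None
        self.last_refresh = float("-inf")  # force a refresh on first check

    def measure_exposure(self):
        # Stand-in for a real light-meter reading.
        return {"shutter_ms": 10, "iso": 400}

    def maybe_refresh_exposure(self, now):
        # Wake briefly on a timer (independently of motion) and cache
        # the current exposure so the stored value stays fairly recent.
        if now - self.last_refresh >= self.refresh_interval_s:
            self.stored_exposure = self.measure_exposure()
            self.last_refresh = now

    def on_motion(self, now):
        # Use the cached setting instead of re-metering, so the shot
        # is not delayed while the animal is in frame.
        if self.stored_exposure is None:
            self.maybe_refresh_exposure(now)
        return {"action": "capture", "exposure": self.stored_exposure}
```

The key design point in the abstract is that metering happens on a timer, not at capture time, trading a small amount of standby power for instant response.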

70 citations

Patent
26 Apr 2011
TL;DR: In this patent, an imaging system of a mobile communications device includes a first camera with a control interface and a data interface to a controller, a second camera with a data interface to the first camera, and a processor that combines an image from the second camera, received through the second camera's data interface, with an image from the first camera and sends the combined image to the controller through the data interface.
Abstract: Image overlay in a mobile device is described. In one embodiment an imaging system of a mobile communications device includes a first camera having a control interface to a controller and a data interface to the controller, a second camera having a data interface to the first camera, and a processor to combine an image from the second camera received through the second camera data interface with an image from the first camera and to send the combined image to the controller through the data interface.
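The picture-in-picture composite the abstract describes can be illustrated with a minimal sketch. This assumes 8-bit grayscale frames stored as nested lists; none of the names or dimensions come from the patent.

```python
def overlay(base, inset, top, left):
    """Paste a small frame from a second camera onto the frame from
    the first camera at (top, left), clipping at the frame edges."""
    out = [row[:] for row in base]            # copy the base frame
    for r, row in enumerate(inset):
        for c, px in enumerate(row):
            rr, cc = top + r, left + c
            if 0 <= rr < len(out) and 0 <= cc < len(out[0]):
                out[rr][cc] = px
    return out

base = [[0] * 6 for _ in range(4)]            # main (first) camera frame
inset = [[255, 255], [255, 255]]              # second-camera thumbnail
combined = overlay(base, inset, top=1, left=3)
```

In the patent's arrangement the combination happens before the controller ever sees the data, which is why only one combined image travels over the first camera's data interface.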

70 citations

Journal ArticleDOI
27 Jul 2015
TL;DR: This work introduces the Toric space, a novel and compact representation for intuitive and efficient virtual camera control, and derives a novel screen-space manipulation technique that provides intuitive and real-time control of visual properties.
Abstract: A large range of computer graphics applications such as data visualization or virtual movie production require users to position and move viewpoints in 3D scenes to effectively convey visual information or tell stories. The desired viewpoints and camera paths are required to satisfy a number of visual properties (e.g. size, vantage angle, visibility, and on-screen position of targets). Yet, existing camera manipulation tools only provide limited interaction methods and automated techniques remain computationally expensive. In this work, we introduce the Toric space, a novel and compact representation for intuitive and efficient virtual camera control. We first show how visual properties are expressed in this Toric space and propose an efficient interval-based search technique for automated viewpoint computation. We then derive a novel screen-space manipulation technique that provides intuitive and real-time control of visual properties. Finally, we propose an effective viewpoint interpolation technique which ensures the continuity of visual properties along the generated paths. The proposed approach (i) performs better than existing automated viewpoint computation techniques in terms of speed and precision, (ii) provides a screen-space manipulation tool that is more efficient than classical manipulators and easier to use for beginners, and (iii) enables the creation of complex camera motions such as long takes in a very short time and in a controllable way. As a result, the approach should quickly find its place in a number of applications that require interactive or automated camera control such as 3D modelers, navigation tools or 3D games.
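The paper's Toric parametrization is too involved for a short sketch, but the underlying idea of interpolating viewpoints while preserving a visual property can be illustrated in 2-D: keep the camera at a fixed distance from the target (so the target's on-screen size is steady) and blend only the viewing angle. This toy example is an assumption-laden illustration, not the paper's method.

```python
import math

def interpolate_viewpoint(target, cam_a, cam_b, t):
    """Toy 2-D property-preserving interpolation: hold the camera at a
    constant distance from the target while blending the view angle.
    NOT the paper's Toric space, only the general flavour."""
    ax, ay = cam_a[0] - target[0], cam_a[1] - target[1]
    bx, by = cam_b[0] - target[0], cam_b[1] - target[1]
    r = math.hypot(ax, ay)                     # assumes equal radii
    theta_a = math.atan2(ay, ax)
    theta_b = math.atan2(by, bx)
    theta = theta_a + t * (theta_b - theta_a)  # blend the angle only
    return (target[0] + r * math.cos(theta),
            target[1] + r * math.sin(theta))

# Midpoint between a view from the east and a view from the north:
mid = interpolate_viewpoint((0.0, 0.0), (2.0, 0.0), (0.0, 2.0), 0.5)
```

A straight-line blend of the two camera positions would cut inside the circle and make the target grow then shrink on screen; interpolating in a target-centred coordinate system avoids that, which is the same motivation the paper gives for its representation.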

70 citations

Book ChapterDOI
TL;DR: This chapter presents a review of the most developed paradigms aimed at bringing computational, storage and control capabilities closer to where data is generated in the IoT, namely fog and edge computing, contrasted with the cloud computing paradigm.
Abstract: The main postulate of the Internet of things (IoT) is that everything can be connected to the Internet, at anytime, anywhere. This means a plethora of objects (e.g. smart cameras, wearables, environmental sensors, home appliances, and vehicles) are ‘connected’ and generating massive amounts of data. The collection, integration, processing and analytics of these data enable the realisation of smart cities, infrastructures and services for enhancing the quality of life of humans. Nowadays, existing IoT architectures are highly centralised and heavily rely on transferring data processing, analytics, and decision-making processes to cloud solutions. This approach of managing and processing data at the cloud may lead to inefficiencies in terms of latency, network traffic management, computational processing, and power consumption. Furthermore, in many applications, such as health monitoring and emergency response services, which require low latency, delay caused by transferring data to the cloud and then back to the application can seriously impact their performance. The idea of allowing data processing closer to where data is generated, with techniques such as data fusion, trending of data, and some decision making, can help reduce the amount of data sent to the cloud, reducing network traffic, bandwidth and energy consumption. Also, a more agile response, closer to real-time, will be achieved, which is necessary in applications such as smart health, security and traffic control for smart cities. Therefore, this chapter presents a review of the more developed paradigms aimed at bringing computational, storage and control capabilities closer to where data is generated in the IoT: fog and edge computing, contrasted with the cloud computing paradigm. Also, an overview of some practical use cases is presented to exemplify each of these paradigms and their main differences.
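The "data fusion and trending near the source" idea in the abstract can be sketched as a simple edge-side pre-processor: instead of forwarding every raw reading to the cloud, the edge node forwards a compact summary plus any readings that cross an alert threshold. The function name, payload shape, and threshold are illustrative assumptions, not from the chapter.

```python
def edge_summarize(readings, threshold):
    """Fog/edge-style pre-processing: condense raw sensor readings
    into a small summary, keeping only above-threshold readings in
    full, so far less data travels upstream to the cloud."""
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alerts": alerts,
    }

raw = [21.0, 21.5, 22.0, 35.5, 21.2]          # e.g. temperature samples
payload = edge_summarize(raw, threshold=30.0) # only this goes to the cloud
```

Here five raw samples shrink to one summary record, which is exactly the bandwidth-and-latency trade the chapter attributes to fog and edge computing.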

70 citations

Patent
13 Jan 2011
TL;DR: In this patent, a camera system generates video data for an object from the viewpoint of the camera system at the object's location, and information identified about the object is displayed on images in the video data on display systems at a number of locations, from that same viewpoint.
Abstract: A method and apparatus for displaying information. A camera system generates video data for an object from a viewpoint of the camera system at a location of the object. Information is identified about the object. The information is displayed on images in the video data on a display system at a number of locations. The display of the images, with the information overlaid, at the number of locations is from the viewpoint of the camera system.
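The fan-out described in the abstract, one annotated camera feed shown at several locations, can be sketched in a few lines. Every name here (the function, the frame placeholders, the display identifiers) is a hypothetical illustration, not from the patent.

```python
def annotate_frames(frames, info):
    """Attach the identified object information to every frame so each
    display system renders the same camera viewpoint with the same
    overlay."""
    return [{"frame": f, "info": info} for f in frames]

# One camera feed, annotated once, fanned out to several displays.
annotated = annotate_frames(frames=["f0", "f1", "f2"],
                            info={"label": "object A"})
feeds = {name: annotated for name in ("display_1", "display_2")}
```

The point the abstract stresses is that every display shows the camera system's own viewpoint, so annotation happens on the shared feed rather than per display.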

70 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations (88% related)
Feature extraction: 111.8K papers, 2.1M citations (88% related)
Image processing: 229.9K papers, 3.5M citations (87% related)
Image segmentation: 79.6K papers, 1.8M citations (87% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    23
2022    62
2021    73
2020    142
2019    161
2018    158