Topic

Smart camera

About: Smart camera is a research topic. Over the lifetime, 5571 publications have been published within this topic, receiving 93054 citations. The topic is also known as: intelligent camera.


Papers
Patent
13 Jul 2012
TL;DR: In this patent, a system that allows a camera-enabled augmented reality application to run in a protected area may include a first device with a camera that has a secure mode of operation and a display, an image processing module configured to convert image data from the camera to encoded data in the secure mode and to protect image data stored in the system, and a protected audiovisual path mechanism configured to securely send augmented encoded data to the display.
Abstract: An example system that allows a camera-enabled application, such as an augmented reality application, to run in a protected area may include a first device including a camera, the camera including a secure mode of operation and a display, an image processing module configured to convert image data from the camera to encoded data when the camera is in the secure mode and to protect image data stored in the system, an encryption module configured to encrypt encoded data from the image processing module, and a protected audiovisual path mechanism configured to securely send augmented encoded data to the display.

24 citations
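The patent above describes a data flow in which frames captured in a secure camera mode are encoded, encrypted, and then delivered to the display over a protected path. The following Python sketch illustrates that general flow only; it is not the patented implementation, and capture_encoded_frame() and send_over_protected_path() are hypothetical placeholders, with AES-GCM from the cryptography package standing in for the encryption module.

# Rough sketch of the encode -> encrypt -> protected-display flow described
# in the patent abstract above. Not the patented implementation; the frame
# capture and protected-path functions are placeholders.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def capture_encoded_frame() -> bytes:
    # Placeholder for the secure-mode camera plus the image processing
    # module that converts raw image data into encoded bytes.
    return b"\x00" * 1024  # stand-in for one encoded frame

def send_over_protected_path(ciphertext: bytes, nonce: bytes) -> None:
    # Placeholder for the protected audiovisual path to the display.
    print(f"sent {len(ciphertext)} encrypted bytes (nonce={nonce.hex()})")

def run_protected_pipeline(num_frames: int = 3) -> None:
    key = AESGCM.generate_key(bit_length=256)  # device-local session key
    aead = AESGCM(key)
    for _ in range(num_frames):
        encoded = capture_encoded_frame()
        nonce = os.urandom(12)                 # unique nonce per frame
        # Authenticated encryption keeps the encoded frame confidential
        # on its way to the display path.
        ciphertext = aead.encrypt(nonce, encoded, None)
        send_over_protected_path(ciphertext, nonce)

if __name__ == "__main__":
    run_protected_pipeline()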

Journal ArticleDOI
TL;DR: A novel application of constraint satisfaction in the design of a camera system that addresses the unique and difficult challenges of IDE is described, demonstrating a specialized constraint solver that exploits the spatial structure of the problem, enabling the real-time use of the camera system.
Abstract: Camera control techniques for interactive digital entertainment (IDE) are reaching their limits in terms of capabilities. To enable future growth, new methods must be derived to address these new challenges. Existing academic research into camera control is typically devoted to cinematography and guided exploration tasks, and is not directly applicable to IDE. This paper describes a novel application of constraint satisfaction in the design of a camera system that addresses the unique and difficult challenges of IDE. It demonstrates a specialized constraint solver that exploits the spatial structure of the problem, enabling the real-time use of the camera system. The merit of our solution is highlighted by demonstrating its computational efficiency and the ability to extend the camera's capabilities in a simple and effective manner.

24 citations
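As a toy illustration of constraint-based camera placement of the kind discussed above (not the authors' specialized solver), the Python sketch below enumerates candidate camera positions around a target and picks the one with the lowest soft-constraint cost; the constraint set and its parameters are invented for the example.

# Toy constraint-based camera placement: enumerate discretised candidates
# and keep the one that best satisfies two soft constraints. A real-time
# solver, like the paper's, would exploit spatial structure rather than
# brute-force enumeration; preferred values below are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Camera:
    x: float
    y: float
    z: float

def constraint_cost(cam: Camera, target, preferred_distance=6.0,
                    preferred_height=2.0) -> float:
    # Sum of squared violations: keep a preferred horizontal distance
    # from the target and a preferred camera height.
    distance = math.hypot(cam.x - target[0], cam.y - target[1])
    return (distance - preferred_distance) ** 2 + (cam.z - preferred_height) ** 2

def best_camera(target, radii=(4.0, 6.0, 8.0), heights=(1.0, 2.0, 3.0),
                angular_steps=36) -> Camera:
    best, best_cost = None, float("inf")
    for radius in radii:
        for height in heights:
            for step in range(angular_steps):
                angle = 2.0 * math.pi * step / angular_steps
                cam = Camera(target[0] + radius * math.cos(angle),
                             target[1] + radius * math.sin(angle),
                             height)
                cost = constraint_cost(cam, target)
                if cost < best_cost:
                    best, best_cost = cam, cost
    return best

if __name__ == "__main__":
    print(best_camera(target=(0.0, 0.0)))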

Journal ArticleDOI
TL;DR: This paper surveys the available literature on multi-camera systems' physical arrangements, calibrations, algorithms, and their advantages and disadvantages, and reviews recent developments in four application areas: surveillance, sports, education, and mobile phones.
Abstract: A multi-camera system combines features from different cameras to capture a scene or event and increase the output image quality. The combination of two or more cameras requires prior settings in terms of calibration and architecture. Therefore, this paper surveys the available literature on multi-camera systems' physical arrangements, calibrations, algorithms, and their advantages and disadvantages. We also survey the recent developments and advancements in four areas of multi-camera system applications: surveillance, sports, education, and mobile phones. In surveillance, the combination of multiple heterogeneous cameras and the advent of Pan-Tilt-Zoom (PTZ) and smart cameras have brought tremendous achievements in multi-camera control and coordination. Different approaches have been proposed to facilitate effective collaboration and monitoring within the camera network. Furthermore, the application of multi-cameras in sports has made games more interesting in terms of analysis and transparency. The application of the multi-camera system in education has taken education beyond the four walls of the classroom. The method of teaching, student attendance enrollment, students' attention, and teacher and student assessment can now be determined with ease, and all forms of proxy and manipulation in education can be reduced by using a multi-camera system. Besides, the number of cameras featured on smartphones is gaining noticeable recognition. However, most of these cameras serve different purposes, such as zoom, telephoto, and wider Field of View (FOV). Therefore, future smartphones can be expected to include more cameras, or development will move in a different direction.

24 citations
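A minimal example of the calibrated multi-camera geometry that underlies the systems surveyed above is two-view triangulation; the sketch below (an illustration, not taken from the paper, with made-up camera matrices) recovers a 3D point from its pixel coordinates in two calibrated views via the direct linear transform.

# Two-view triangulation with the direct linear transform (DLT). Shows why
# multi-camera systems need calibrated projection matrices; the intrinsics,
# poses and test point below are illustrative values.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    # P1, P2 are 3x4 projection matrices; uv1, uv2 are (u, v) pixel coords.
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise

if __name__ == "__main__":
    # Two toy cameras: identity pose, and a 1-unit baseline along x.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    point = np.array([0.2, -0.1, 5.0, 1.0])  # ground-truth homogeneous point
    uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
    uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
    print(triangulate(P1, P2, uv1, uv2))     # approximately [0.2, -0.1, 5.0]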

Journal ArticleDOI
09 Jun 2012
TL;DR: In this paper, a high-speed visible-light camera based on a commercial CMOS sensor with embedded processing implemented in an FPGA is proposed for hard X-ray micro-imaging.
Abstract: X-ray computed tomography (CT) is a method for non-destructive investigation. Three-dimensional images of internal structure can be reconstructed using a two-dimensional detector. The poly-chromatic, high-density photon flux in modern synchrotron light sources enables hard X-ray imaging with a spatio-temporal resolution up to the µm-µs range. Existing indirect X-ray image detectors can be adapted for fast image acquisition by employing a CMOS-based digital high-speed camera. In this paper, we propose a high-speed visible-light camera based on a commercial CMOS sensor with embedded processing implemented in an FPGA. This platform has been used to develop a novel architecture for a self-event trigger, a feature that is able to increase the original frame rate of the CMOS sensor and to reduce the amount of received data. Thanks to a low-noise design, a high frame rate (kilohertz range), and high-speed data transfer, this camera can be employed in modern synchrotron ultra-fast X-ray radiography and computed tomography. The camera setup is complemented by high-throughput Linux drivers and seamless integration into our GPU computing framework. Selected applications from the life sciences and materials research underline the high potential of this high-speed camera for hard X-ray micro-imaging.

24 citations
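The self-event trigger described above keeps only frames in which something has changed, which is what raises the effective frame rate and cuts the data volume. The Python sketch below is a simplified software analogue of that idea (the paper's version is implemented in the camera's FPGA; the change metric, threshold and synthetic frames here are assumptions for illustration).

# Simplified software analogue of a self-event trigger: forward a frame only
# when it differs enough from the last forwarded frame, reducing the output
# data rate. Threshold and synthetic frame stream are illustrative.
import numpy as np

def event_triggered_stream(frames, threshold=5.0):
    # Yield only frames whose mean absolute difference from the last
    # forwarded frame exceeds `threshold` (in grey levels).
    reference = None
    for frame in frames:
        if reference is None:
            reference = frame
            yield frame  # always keep the first frame
            continue
        change = np.mean(np.abs(frame.astype(np.int32) - reference.astype(np.int32)))
        if change > threshold:
            reference = frame
            yield frame  # an "event": forward the frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 100 mostly static 64x64 frames with a burst of activity in the middle.
    frames = [np.full((64, 64), 128, dtype=np.uint8) for _ in range(100)]
    for i in range(40, 50):
        frames[i] = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    kept = list(event_triggered_stream(frames))
    print(f"kept {len(kept)} of {len(frames)} frames")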

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This paper investigates the use of a photo-realistic simulation tool to address challenges of robust place recognition, visual SLAM and object recognition, and provides a multi-domain demonstration of the beneficial properties of using simulation to characterise and analyse a wide range of robotic vision algorithms.
Abstract: Robotic vision, unlike computer vision, typically involves processing a stream of images from a camera with time-varying pose operating in an environment with time-varying lighting conditions and moving objects. Repeating robotic vision experiments under identical conditions is often impossible, making it difficult to compare different algorithms. For machine learning applications a critical bottleneck is the limited amount of real world image data that can be captured and labelled for both training and testing purposes. In this paper we investigate the use of a photo-realistic simulation tool to address these challenges, in three specific domains: robust place recognition, visual SLAM and object recognition. For the first two problems we generate images from a complex 3D environment with systematically varying camera paths, camera viewpoints and lighting conditions. For the first time we are able to systematically characterise the performance of these algorithms as paths and lighting conditions change. In particular, we are able to systematically generate varying camera viewpoint datasets that would be difficult or impossible to generate in the real world. We also compare algorithm results for a camera in a real environment and a simulated camera in a simulation model of that real environment. Finally, for the object recognition domain, we generate labelled image data and characterise the viewpoint dependency of a current convolutional neural network in performing object recognition. Together these results provide a multi-domain demonstration of the beneficial properties of using simulation to characterise and analyse a wide range of robotic vision algorithms.

24 citations
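One capability highlighted above is generating camera viewpoints systematically in simulation, something that is hard to do in the real world. The sketch below illustrates that idea independently of the authors' simulation tool: it builds a regular azimuth/elevation/radius grid of camera positions around a target, one per rendered, labelled image; the grid spacing is an assumption.

# Systematic camera-viewpoint sweep for a simulated dataset. Independent of
# the paper's simulation tool; the sampling grid is an illustrative choice.
import itertools
import math

def viewpoint_grid(azimuths_deg, elevations_deg, radii, target=(0.0, 0.0, 0.0)):
    # Yield (camera_position, target) pairs on a regular spherical grid
    # around `target`, e.g. one pair per image to render and label.
    for az, el, r in itertools.product(azimuths_deg, elevations_deg, radii):
        az_rad, el_rad = math.radians(az), math.radians(el)
        position = (
            target[0] + r * math.cos(el_rad) * math.cos(az_rad),
            target[1] + r * math.cos(el_rad) * math.sin(az_rad),
            target[2] + r * math.sin(el_rad),
        )
        yield position, target

if __name__ == "__main__":
    grid = list(viewpoint_grid(azimuths_deg=range(0, 360, 30),
                               elevations_deg=(10, 30, 50),
                               radii=(2.0, 4.0)))
    print(f"{len(grid)} viewpoints")  # 12 azimuths * 3 elevations * 2 radii = 72
    print(grid[0])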


Network Information
Related Topics (5)
Feature (computer vision)
128.2K papers, 1.7M citations
88% related
Feature extraction
111.8K papers, 2.1M citations
88% related
Image processing
229.9K papers, 3.5M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
87% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    23
2022    62
2021    73
2020    142
2019    161
2018    158