scispace - formally typeset
Topic

Smart camera

About: Smart camera is a research topic. Over the lifetime, 5571 publications have been published within this topic receiving 93054 citations. The topic is also known as: intelligent camera.


Papers
Proceedings ArticleDOI
01 Jan 2000
TL;DR: The authors propose using a laser spot array and a TV camera as the vision sensor, which recovers the three-dimensional position of each spot on the pipe surface by triangulation; a method is also presented to determine whether the pipe section is straight.
Abstract: This research work contributes an active vision system for a gas-pipe inspection robot moving inside the pipe. The robot must be able to determine the shape of the pipe section it is moving through; the possible shapes are straight, L, and T. When moving through a straight section, the robot must also detect its relative angle with respect to the pipe's principal axis. The authors propose using a laser spot array and a TV camera as the vision sensor, which recovers the three-dimensional position of each spot on the pipe surface by triangulation. The interior shape of the pipe can be reconstructed from the camera data. A method is also presented to determine whether the section is straight.

24 citations
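The triangulation and straightness check described above can be sketched in software. This is a minimal illustration, not the authors' implementation: the pinhole geometry, the function names, and the PCA-based straightness test are assumptions.

```python
import numpy as np

def spot_depth(disparity_px, focal_px, baseline_m):
    """Depth of one laser spot by triangulation.

    Assumed geometry: the laser emitter is offset from the camera by
    `baseline_m` and the beam is parallel to the optical axis, so by
    similar triangles the depth is Z = f * b / disparity.
    """
    return focal_px * baseline_m / disparity_px

def is_straight(spots, radius_tol=0.05):
    """Crude straightness test for the reconstructed spot cloud.

    For a straight pipe the spots lie on a cylinder: estimating the
    axis as the principal direction of the cloud, every spot sits at
    roughly the same radial distance from that axis. An elbow (L) or
    branch (T) breaks this invariant.
    """
    pts = np.asarray(spots, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)          # row 0 = principal direction
    axis = vt[0]
    radial = centred - np.outer(centred @ axis, axis)
    radii = np.linalg.norm(radial, axis=1)
    return radii.std() / radii.mean() < radius_tol
```

With a 500 px focal length and a 0.1 m baseline, a 50 px disparity corresponds to a spot 1 m away; the straightness test then operates on the cloud of such reconstructed points.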

Patent
15 Feb 2008
TL;DR: In this article, a dual camera module includes image-shifting optics configured to shift images from a second camera module onto a portion of a common image sensor such that images from the main camera module may be received by a first or main portion of the common sensor.
Abstract: A portable communication device is equipped with a dual camera module having a first or main camera module, a second or video telephony camera module and a common image sensor element configured to receive images from the first and second camera modules. The dual camera module includes image-shifting optics configured to shift images from the second camera module onto a portion of a common image sensor such that images from the main camera module may be received by a first or main portion of the common image sensor and images from the second camera module may be received by a secondary portion of the common image sensor.

24 citations

Journal ArticleDOI
12 Feb 2014-Sensors
TL;DR: An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner, achieving a data throughput of 175 Mpixels/s and making real-time processing of a Full HD video stream possible.
Abstract: This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 Mpixels/s and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module, the pre- and post-filtering blocks, and a flow reliability computation unit are described in detail. Three versions of the optical flow module are proposed, differing in numerical precision, working frequency, and accuracy of the obtained results. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury optical flow dataset and achieves state-of-the-art results among hardware implementations of single-scale methods. The fixed-point architecture achieves 418 GOPS with a power efficiency of 34 GOPS/W; the floating-point module achieves 103 GFLOPS with a power efficiency of 24 GFLOPS/W. Moreover, a 100× speedup compared to a modern CPU with SIMD support is reported. A complete working vision system realised on a Xilinx VC707 evaluation board is also presented; it computes optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results show that FPGA devices are well suited as a platform for embedded vision systems.

24 citations
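The iterative scheme that the hardware pipelines can be written down as a short software reference. The sketch below is a plain single-scale Horn-Schunck solver in NumPy under assumed derivative kernels and function names, not the FPGA architecture itself.

```python
import numpy as np

def horn_schunck(frame1, frame2, alpha=1.0, n_iter=200):
    """Single-scale Horn-Schunck optical flow (software reference).

    Each iteration replaces the flow with its local neighbour average
    corrected by the brightness-constancy residual -- the classic
    Jacobi-style update that a pipelined architecture unrolls.
    """
    I1, I2 = frame1.astype(float), frame2.astype(float)
    Ix = np.gradient((I1 + I2) / 2, axis=1)    # spatial derivatives
    Iy = np.gradient((I1 + I2) / 2, axis=0)
    It = I2 - I1                                # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    neighbour_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4
    for _ in range(n_iter):
        ubar, vbar = neighbour_avg(u), neighbour_avg(v)
        residual = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ubar - Ix * residual
        v = vbar - Iy * residual
    return u, v
```

Because every pixel's update depends only on its neighbours and fixed per-pixel derivatives, each iteration maps naturally onto a hardware pipeline stage, which is what makes the throughput figures above achievable.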

Journal ArticleDOI
TL;DR: Compared to other localization solutions that use opportunistically found visual data, this solution is more suitable to battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and for passing transform data.
Abstract: For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera-network localization solution in which a feature-point-rich 3-D target is successively shown to all cameras; using the known geometry of the target, each camera estimates and decomposes a projection matrix to compute its position and orientation relative to the coordinate frame of the target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one target position compute the translations and rotations relating the different frames and share these transforms with neighbors to realign all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is better suited to battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and to pass transform data. Additionally, our solution requires only pairwise view overlaps large enough to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1" when the 3-D target's feature points fill only 2.9% of the frame area.

24 citations
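The per-camera step, decomposing an estimated projection matrix into position and orientation, can be sketched as below. This is a minimal illustration assuming the standard pinhole model P = K[R | t]; the function name and the RQ-via-QR trick are not from the paper.

```python
import numpy as np

def camera_pose_from_projection(P):
    """Decompose a 3x4 projection matrix P = K [R | t] into the camera
    centre C, rotation R, and intrinsics K, all expressed in the
    3-D target's coordinate frame."""
    M, p4 = P[:, :3], P[:, 3]
    # The camera centre is the null space of P:  M C + p4 = 0.
    C = -np.linalg.solve(M, p4)
    # RQ decomposition of M = K R via QR of a row/column-reversed matrix
    # (NumPy has no direct RQ routine).
    rev = np.flipud(np.eye(3))
    Q, Rtri = np.linalg.qr((rev @ M).T)
    K = rev @ Rtri.T @ rev           # upper triangular intrinsics
    R = rev @ Q.T                    # orthogonal rotation
    # Fix signs so K has a positive diagonal (S * S = I keeps M = K R).
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    return C, R, K / K[2, 2]
```

Once each camera knows its pose relative to a target placement, the pairwise frame-to-frame transforms mentioned in the abstract are just compositions of these (R, C) pairs.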

Proceedings ArticleDOI
06 Feb 1997
TL;DR: Progress in CMOS-based image sensors is creating opportunities for a low-cost, low-power one-chip video camera with digitizing, signal processing, and image compression; such a smart camera head acquires compressed digital moving pictures directly into portable multimedia computers.
Abstract: Progress in CMOS-based image sensors is creating opportunities for a low-cost, low-power one-chip video camera with digitizing, signal processing, and image compression. Such a smart camera head acquires compressed digital moving pictures directly into portable multimedia computers. Video encoders using moving-picture coding standards such as MPEG and H.26x are not always suitable for integrating image encoding on the image sensor because of their complexity and power dissipation. On-sensor image compression has been reported, e.g. a CCD image sensor for lossless image compression and a CMOS image sensor with pixel-level interframe coding. A one-chip digital camera with on-sensor video compression is shown in the block diagram. The chip contains a 128 × 128-pixel sensor, 8-channel parallel read-out circuits, an analog two-dimensional discrete cosine transform (2-D DCT) processor, and a variable-quantization-level ADC (ADC/Q).

24 citations
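The signal chain on the chip, a 2-D DCT followed by quantization in the ADC, can be modelled numerically. The sketch below is a digital stand-in for the analog pipeline; the 8 × 8 block size, quantization step, and function names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def compress_block(block, q_step=16):
    """2-D DCT of an 8x8 pixel block, then uniform quantization
    (modelling the chip's analog DCT followed by the ADC/Q stage)."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T          # separable row/column transform
    return np.round(coeffs / q_step).astype(int)
```

A flat 8 × 8 block compresses to a single nonzero DC coefficient, which is why a DCT-plus-quantizer stage achieves large data reduction right at sensor read-out.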


Network Information
Related Topics (5)
Feature (computer vision)
128.2K papers, 1.7M citations
88% related
Feature extraction
111.8K papers, 2.1M citations
88% related
Image processing
229.9K papers, 3.5M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
87% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    23
2022    62
2021    73
2020    142
2019    161
2018    158