Journal ISSN: 0737-6553

Electronic Imaging

SPIE
About: Electronic Imaging is an academic journal published by SPIE. The journal publishes mainly in the areas of image processing and pixel-level analysis. It has the ISSN identifier 0737-6553. Over its lifetime, 6,428 publications have been published, receiving 73,457 citations.


Papers
Proceedings Article
TL;DR: Details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework are presented, and the new concept is compared with the classical approach of "stereoscopic" video.
Abstract: This paper presents details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project “Advanced Three-Dimensional Television System Technologies” (ATTEST), an activity in which industries, research centers and universities have joined forces to design a backwards-compatible, flexible and modular broadcast 3D-TV system. At the very heart of the described new concept is the generation and distribution of a novel data representation format, which consists of monoscopic color video and associated per-pixel depth information. From these data, one or more “virtual” views of a real-world scene can be synthesized in real-time at the receiver side (i.e., a 3D-TV set-top box) by means of so-called depth-image-based rendering (DIBR) techniques. This publication provides: (1) a detailed description of the fundamentals of this new approach to 3D-TV; (2) a comparison with the classical approach of “stereoscopic” video; (3) a short introduction to DIBR techniques in general; (4) the development of a specific DIBR algorithm that can be used for the efficient generation of high-quality “virtual” stereoscopic views; (5) a number of implementation details that are specific to the current state of the development; (6) research on the backwards-compatible compression and transmission of 3D imagery using state-of-the-art MPEG (Moving Pictures Expert Group) tools.
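The per-pixel-depth warping at the core of DIBR can be sketched in a few lines. The following is a minimal illustration, not the ATTEST algorithm: the function name, baseline and focal-length values are illustrative assumptions, and disocclusion filling is left out.

```python
import numpy as np

def render_virtual_view(color, depth, baseline=0.05, focal=500.0):
    """Sketch of depth-image-based rendering (DIBR): synthesize a virtual
    camera view by shifting each pixel of a monoscopic color image
    horizontally by a disparity computed from its per-pixel depth.

    `baseline` (virtual inter-camera distance) and `focal` (focal length
    in pixels) are illustrative values, not taken from the ATTEST system.
    """
    h, w = depth.shape
    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    # warp far-to-near so nearer pixels overwrite farther ones (z-ordering)
    order = np.argsort(-depth, axis=None)
    for idx in order:
        y, x = divmod(idx, w)
        disparity = int(round(baseline * focal / depth[y, x]))
        xv = x - disparity
        if 0 <= xv < w:
            virtual[y, xv] = color[y, x]
            filled[y, xv] = True
    # disocclusions ("holes") remain where `filled` is False; real DIBR
    # systems fill them by interpolation or depth-aware inpainting
    return virtual, filled
```

Applying the same warp twice with baselines of opposite sign yields the left/right pair needed for a stereoscopic display, which is how a receiver-side set-top box would use such a routine.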

1,560 citations

Proceedings Article
TL;DR: A new dataset, UCID (pronounced "use it") - an Uncompressed Colour Image Dataset which tries to bridge the gap between standardised image databases and objective evaluation of image retrieval algorithms that operate in the compressed domain.
Abstract: Standardised image databases, or rather the lack of them, are one of the main weaknesses in the field of content based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets. Naturally, this makes comparison of results somewhat difficult. While a first approach towards a common colour image set has been taken by the MPEG-7 committee, their database does not cater for all strands of research in the CBIR community. In particular, as the MPEG-7 images only exist in compressed form, it does not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain, or for judging the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it") - an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images with corresponding models that an ideal CBIR algorithm would retrieve. While its initial intention was to provide a dataset for the evaluation of compressed domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method, as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.
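A ground truth of query images with corresponding models lends itself to standard rank-based scoring. The sketch below shows one common way such a benchmark can score a retrieval ranking; the `ground_truth` mapping and the image identifiers are hypothetical, not the actual UCID data or its file format.

```python
def average_precision(retrieved, relevant):
    """Average precision of a ranked retrieval list against a relevant set:
    the mean of the precision values at each rank where a relevant item
    appears, normalised by the number of relevant items."""
    hits, precisions = 0, []
    for rank, item in enumerate(retrieved, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

# hypothetical ground truth: query id -> set of model image ids an ideal
# CBIR algorithm would retrieve (UCID ships such query/model pairs)
ground_truth = {"q1": {"m1", "m2"}}
ranking = ["m1", "x", "m2", "y"]   # output of some CBIR ranking function
score = average_precision(ranking, ground_truth["q1"])
```

Averaging this score over all ground-truth queries gives a single mean-average-precision figure, which makes two CBIR methods directly comparable on the same image set.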

1,117 citations

Journal Article
TL;DR: The proposed framework for autonomous driving using deep reinforcement learning incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios, and integrates recent work on attention models to focus on relevant information, reducing the computational complexity for deployment on embedded hardware.
Abstract: Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment, including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction with other vehicles.

753 citations

Proceedings Article
TL;DR: Gummy fingers - artificial fingers easily made of cheap and readily available gelatin - were accepted at extremely high rates by 11 particular fingerprint devices with optical or capacitive sensors.
Abstract: Potential threats posed by fake or artificial fingers - objects that behave like real fingers - are a crucial concern for fingerprint-based authentication systems. Security evaluations against attacks using such artificial fingers have rarely been disclosed. Only in the patent literature have measures against fake fingers, such as "live and well" detection, been proposed. However, the providers of fingerprint systems usually do not mention whether or not these measures are actually implemented in emerging fingerprint systems for PCs, smart cards or portable terminals, which are expected to enhance the grade of personal authentication necessary for digital transactions. As researchers who are pursuing secure systems, we would like to discuss attacks using artificial fingers and conduct experimental research to clarify the reality. This paper reports that gummy fingers, namely artificial fingers that are easily made of cheap and readily available gelatin, were accepted at extremely high rates by 11 particular fingerprint devices with optical or capacitive sensors. We used molds, which we made by pressing our live fingers against them or by processing fingerprint images from prints on glass surfaces, etc. We describe how to make the molds, and then show that the gummy fingers made with these molds can fool the fingerprint devices.

752 citations

Proceedings Article
TL;DR: This work presents a novel approach to still image denoising based on effective filtering in the 3D transform domain, combining sliding-window transform processing with block-matching, and shows that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
Abstract: We present a novel approach to still image denoising based on effective filtering in the 3D transform domain, combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks that are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and, due to the similarity between them, the data in the array exhibit a high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and effectively attenuate the noise by shrinkage of the transform coefficients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in a sliding manner, the final estimate is computed as a weighted average of all overlapping block estimates. A fast and efficient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
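The pipeline described above - group similar blocks, decorrelate the 3D stack, shrink coefficients, invert, aggregate - can be sketched compactly. This is a minimal illustration of the idea, not the paper's algorithm: a unitary 3D FFT with hard thresholding stands in for its transform and weighting details, uniform aggregation weights are used, and all parameter values are chosen for illustration only.

```python
import numpy as np

def denoise_block_matching_3d(img, block=8, step=4, search=12, n_match=8, thr=0.3):
    """Simplified sketch of block-matching 3D-transform denoising.

    For each reference block, similar blocks from a local search window are
    stacked into a 3D group, a unitary 3D FFT decorrelates the group, small
    coefficients are hard-thresholded, and the inverse transform's block
    estimates are averaged into the output (uniform aggregation weights).
    """
    h, w = img.shape
    acc = np.zeros((h, w), dtype=float)
    cnt = np.zeros((h, w), dtype=float)
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            ref = img[i:i + block, j:j + block]
            # rank candidate blocks in the search window by L2 distance
            cands = []
            for y in range(max(0, i - search), min(h - block, i + search) + 1, step):
                for x in range(max(0, j - search), min(w - block, j + search) + 1, step):
                    blk = img[y:y + block, x:x + block]
                    cands.append((np.sum((blk - ref) ** 2), y, x, blk))
            cands.sort(key=lambda t: (t[0], t[1], t[2]))
            matched = cands[:n_match]
            group = np.stack([c[3] for c in matched])   # 3D array of similar blocks
            # unitary 3D transform, hard thresholding, inverse transform
            spec = np.fft.fftn(group, norm="ortho")
            spec[np.abs(spec) < thr] = 0
            est = np.real(np.fft.ifftn(spec, norm="ortho"))
            # aggregate overlapping block estimates
            for k, (_, y, x, _) in enumerate(matched):
                acc[y:y + block, x:x + block] += est[k]
                cnt[y:y + block, x:x + block] += 1
    cnt[cnt == 0] = 1
    return acc / cnt
```

The correlation along the third (stacking) dimension is what the sliding-window transform alone cannot exploit; thresholding the joint 3D spectrum suppresses noise while the shared structure of the matched blocks survives.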

672 citations

Performance Metrics

No. of papers from the Journal in previous years:

Year    Papers
2023    6
2022    22
2021    36
2020    128
2019    149
2018    197