scispace - formally typeset

Toshikazu Wada

Researcher at Wakayama University

Publications -  104
Citations -  1302

Toshikazu Wada is an academic researcher from Wakayama University. The author has contributed to research on topics including Video tracking and Object detection. The author has an h-index of 15 and has co-authored 104 publications receiving 1247 citations. Previous affiliations of Toshikazu Wada include Kyoto University and Okayama University.

Papers
Proceedings ArticleDOI

Background subtraction based on cooccurrence of image variations

TL;DR: A novel background subtraction method is proposed for detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags, exploiting the property that image variations at neighboring image blocks are strongly correlated, known as "cooccurrence".
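For context, the sketch below shows a plain per-pixel background-subtraction baseline (running-average model plus thresholding), not the cooccurrence-based method of the paper; it illustrates the problem setting that the paper improves on. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update (per-pixel baseline,
    not the cooccurrence-based method described above)."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    """Flag pixels whose absolute difference from the background
    model exceeds the threshold as foreground."""
    return np.abs(frame - bg) > thresh

# Synthetic demo: a static background with one bright 4x4 block.
rng = np.random.default_rng(0)
bg_true = rng.uniform(50, 60, size=(32, 32))
bg_model = bg_true.copy()
frame = bg_true.copy()
frame[10:14, 10:14] += 100.0  # "foreground object"

mask = foreground_mask(bg_model, frame)
print(mask.sum())  # 16 pixels flagged
```

Such per-pixel models break down precisely in the dynamic scenes the paper targets (swaying trees, fluttering flags), because legitimate background motion also exceeds the per-pixel threshold.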
Book ChapterDOI

Camera Calibration with Two Arbitrary Coplanar Circles

TL;DR: A novel camera calibration method to estimate the extrinsic parameters and the focal length of a camera by using only one single image of two coplanar circles with arbitrary radius is described.
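The geometric fact the method builds on is that a circle on a world plane projects to a conic (ellipse) in the image. The sketch below only synthesizes such a projection and recovers the conic by least squares; the camera pose, focal length, and circle are invented for illustration, and the paper's actual recovery of extrinsics and focal length from two coplanar circles is not reproduced here.

```python
import numpy as np

# A circle on the world plane z=0 projects to a conic in the image.
# Fit the general conic a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0.
f_len = 800.0                        # assumed focal length (pixels)
K = np.array([[f_len, 0, 320.0],
              [0, f_len, 240.0],
              [0, 0, 1.0]])
theta = np.deg2rad(30)               # assumed camera tilt
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta), np.cos(theta)]])
t = np.array([0.0, 0.0, 5.0])        # assumed translation

ang = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(ang), np.sin(ang), np.zeros_like(ang)], axis=1)
cam = circle @ R.T + t               # world -> camera coordinates
img = cam @ K.T
img = img[:, :2] / img[:, 2:3]       # perspective division

x, y = img[:, 0], img[:, 1]
A = np.stack([x**2, x * y, y**2, x, y, np.ones_like(x)], axis=1)
# The singular vector for the smallest singular value gives the conic.
_, _, Vt = np.linalg.svd(A)
conic = Vt[-1]
residual = np.abs(A @ conic).max()
print(residual < 1e-6)  # the projected points lie on one conic
```

Two such fitted conics, plus the coplanarity constraint, are what the paper exploits to solve for the camera parameters.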
Journal ArticleDOI

Shape from Shading with Interreflections Under a Proximal Light Source: Distortion-Free Copying of an Unfolded Book

TL;DR: In this article, the shape-from-shading problem is formulated as an iterative, non-linear optimization problem and piecewise polynomial models of the 3D shape and albedo distribution are introduced to efficiently and stably compute the shape in practice.
Proceedings ArticleDOI

Homography based parallel volume intersection: toward real-time volume reconstruction using active cameras

TL;DR: From the preliminary experimental results, it is estimated that near frame-rate volume reconstruction for a life-sized mannequin can be achieved at 3 cm spatial resolution on the authors' PC cluster system.
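Volume intersection (shape from silhouette) keeps exactly the voxels that project inside every camera's silhouette. The toy sketch below carves a voxel grid with axis-aligned orthographic views, a deliberately simplified stand-in for the paper's homography-based perspective formulation; the grid size and object are assumptions.

```python
import numpy as np

N = 16
occ_true = np.zeros((N, N, N), dtype=bool)
occ_true[4:12, 4:12, 4:12] = True   # ground-truth cube to reconstruct

# Orthographic silhouettes along each axis.
sil_x = occ_true.any(axis=0)        # view along x: a (y, z) image
sil_y = occ_true.any(axis=1)        # view along y: an (x, z) image
sil_z = occ_true.any(axis=2)        # view along z: an (x, y) image

# Volume intersection: a voxel survives only if it projects
# inside every silhouette.
hull = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]
print(hull.sum())  # 512 voxels: the 8x8x8 cube is recovered exactly
```

A convex axis-aligned cube is recovered exactly from three views; general shapes yield only the visual hull, and the paper's contribution is doing the intersection in parallel via homographies toward real-time rates.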
Proceedings ArticleDOI

Appearance sphere: background model for pan-tilt-zoom camera

TL;DR: The proposed method consists of an omnidirectional background model called the appearance sphere and parallax-free sensing; a background image can be generated and background subtraction performed for any combination of pan-tilt-zoom parameters without restoring 3D scene information.
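The core idea of a pan-tilt-zoom background model is to store one wide background representation and look up the predicted view for each camera setting instead of re-learning it. The sketch below is a loose, flat-mosaic analogue of that idea (horizontal pan only, 1 degree per column); the sizes, mapping, and threshold are all assumptions, not the appearance-sphere construction itself.

```python
import numpy as np

PANO_W, VIEW_W, H = 360, 60, 40
rng = np.random.default_rng(1)
panorama = rng.uniform(0, 255, size=(H, PANO_W))  # stored background

def predicted_background(pan_deg):
    """Look up the background window for a pan angle
    (toy mapping: 1 degree per mosaic column)."""
    start = int(pan_deg) % (PANO_W - VIEW_W)
    return panorama[:, start:start + VIEW_W]

def subtract(frame, pan_deg, thresh=30.0):
    """Subtract the predicted background for this pan setting."""
    return np.abs(frame - predicted_background(pan_deg)) > thresh

# A frame at pan=100 matching the background except one 4x4 blob.
frame = predicted_background(100).copy()
frame[10:14, 20:24] += 300.0
print(subtract(frame, 100).sum())  # 16 foreground pixels
```

The point of parallax-free sensing in the paper is that such a lookup stays valid for every pan-tilt-zoom combination without any 3D scene reconstruction.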