Author
Maiya Hori
Other affiliations: Nara Institute of Science and Technology, Tottori University
Bio: Maiya Hori is an academic researcher from Kyushu University. The author has contributed to research in topics including facial expression and rendering (computer graphics), has an h-index of 4, and has co-authored 35 publications receiving 81 citations. Previous affiliations of Maiya Hori include Nara Institute of Science and Technology and Tottori University.
Papers
10 Jun 2007
TL;DR: In the proposed method, stereoscopic images are generated considering depth values estimated by dynamic programming (DP) matching using the images that are observed from different points and contain the same ray information in the real world.
Abstract: This paper describes a method of stereoscopic view generation by image-based rendering in wide outdoor environments. The stereoscopic view can be generated from an omnidirectional image sequence by a light field rendering approach, which generates a novel view image from a set of images. Conventional methods of novel view generation suffer from distortion because the generated image is composed of parts of several omnidirectional images captured at different points. To overcome this problem, the distances between the novel viewpoint and the observed real objects must be considered in the rendering process. In the proposed method, in order to reduce the image distortion, stereoscopic images are generated considering depth values estimated by dynamic programming (DP) matching using images that are observed from different points and contain the same ray information in the real world. In experiments, stereoscopic images in wide outdoor environments are generated and displayed.
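The depth estimation step above relies on dynamic programming (DP) matching between images that share ray information. The following is a minimal sketch of scanline DP matching under simplified assumptions; the disparity range, occlusion cost, and greedy read-out are illustrative choices, not the parameters used in the paper.

```python
# Minimal sketch of scanline matching by dynamic programming (DP).
# All parameter values below are illustrative assumptions.
import numpy as np

def dp_scanline_disparity(left, right, max_disp=32, occlusion_cost=10.0):
    """Estimate per-pixel disparity along one scanline with DP matching.

    left, right: 1-D arrays of pixel intensities from corresponding scanlines.
    Returns an integer disparity for each pixel of the left scanline.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    n = len(left)
    # cost[i, d] = best accumulated cost of matching left[i] at disparity d
    cost = np.full((n, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        j = 0 - d
        cost[0, d] = abs(left[0] - right[j]) if j >= 0 else occlusion_cost
    for i in range(1, n):
        for d in range(max_disp + 1):
            j = i - d
            match = abs(left[i] - right[j]) if j >= 0 else occlusion_cost
            # allow disparity to stay, grow, or shrink by one (smoothness)
            prev = cost[i - 1, max(d - 1, 0):min(d + 2, max_disp + 1)].min()
            cost[i, d] = match + prev
    # greedy per-pixel read-out, kept simple for brevity
    return cost.argmin(axis=1)
```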
14 citations
23 Aug 2010
TL;DR: In this paper, appropriate ray information is selected from a number of omnidirectional images using a penalty function expressed as ray similarity, and the validity of this penalty function is shown by generating stereoscopic views from multiple real image sequences.
Abstract: This paper proposes a novel method for generating arbitrary stereoscopic views from multiple omnidirectional image sequences. Although conventional methods for arbitrary view generation with an image-based rendering approach can create binocular views, the positions and directions of viewpoints for stereoscopic vision are limited to a small range. In this research, we attempt to generate arbitrary stereoscopic views from omnidirectional image sequences that are captured along various multiple paths. To generate a high-quality stereoscopic view from a number of images captured at various viewpoints, appropriate ray information needs to be selected. In this paper, appropriate ray information is selected from a number of omnidirectional images using a penalty function expressed as ray similarity. In experiments, we show the validity of this penalty function by generating stereoscopic views from multiple real image sequences.
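The ray selection step can be pictured as minimizing a penalty between the desired viewing ray and each candidate ray stored in the omnidirectional images. The sketch below assumes a simple penalty combining angular difference and the capture position's distance from the desired ray; the weights and the exact form of the paper's ray-similarity penalty are assumptions made for illustration.

```python
# Minimal sketch of ray selection by penalty minimization.
# The penalty form and weights are assumptions, not the paper's definition.
import numpy as np

def ray_penalty(target_origin, target_dir, cam_pos, ray_dir,
                w_angle=1.0, w_dist=0.5):
    """Penalty between a desired ray and a ray captured from cam_pos."""
    target_dir = target_dir / np.linalg.norm(target_dir)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    # angular difference between the two ray directions
    angle = np.arccos(np.clip(np.dot(target_dir, ray_dir), -1.0, 1.0))
    # distance from the capture position to the desired ray
    offset = cam_pos - target_origin
    dist = np.linalg.norm(offset - np.dot(offset, target_dir) * target_dir)
    return w_angle * angle + w_dist * dist

def select_ray(target_origin, target_dir, candidates):
    """candidates: list of (camera_position, ray_direction) pairs."""
    penalties = [ray_penalty(target_origin, target_dir, p, d)
                 for p, d in candidates]
    return int(np.argmin(penalties))
```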
9 citations
01 Jan 2010
TL;DR: This work proposes a method that simultaneously removes pedestrians using background subtraction and generates location metadata from manual input on maps, achieving an underground panoramic view system that displays no pedestrians.
Abstract: Toward a truly useful navigation system, combining spherical panoramic photos with maps, as in Google Street View, is effective. Users expect such a system to be available in every area they visit. Conventional shooting methods obtain the capture position from a GPS sensor; however, indoor areas are out of GPS range. Furthermore, most urban public indoor areas are crowded with pedestrians, and even if the pedestrians in a photo are blurred, the blurred photos are of little use as scenic information. We therefore propose a method that simultaneously removes pedestrians using background subtraction and generates location metadata from manual input on maps. Using these methods, we built an underground panoramic view system that displays no pedestrians.
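A simple way to picture the pedestrian removal idea is background estimation over several frames captured at the same spot. The sketch below uses a per-pixel temporal median, which is an illustrative substitute for the authors' background subtraction procedure rather than their actual method.

```python
# Minimal sketch of removing transient pedestrians by background estimation.
# A temporal median is an assumed stand-in for the paper's procedure.
import numpy as np

def pedestrian_free_panorama(frames):
    """frames: list of H x W x 3 images captured at the same position.

    Moving pedestrians occupy each pixel only briefly, so the per-pixel
    median over time approximates the static background.
    """
    stack = np.stack(frames, axis=0).astype(np.float32)
    background = np.median(stack, axis=0)
    return background.astype(np.uint8)
```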
9 citations
21 Jul 2013
TL;DR: Facial expressions are generated with Elfoid's head-mounted mobile projector, overcoming the constraints of compactness and the lack of sufficiently small actuator motors, and are emphasized using cartoon techniques.
Abstract: We propose a method for generating facial expressions emphasized with cartoon techniques using a cellular-phone-type teleoperated android with a mobile projector. Elfoid is designed to transmit the speaker’s presence to their communication partner using a camera and microphone, and has a soft exterior that provides the look and feel of human skin. To transmit the speaker’s presence, Elfoid sends not only the voice of the speaker but also emotional information captured by the camera and microphone. Elfoid cannot, however, display facial expressions because of its compactness and a lack of sufficiently small actuator motors. In this research, facial expressions are generated using Elfoid’s head-mounted mobile projector to overcome the problem. Additionally, facial expressions are emphasized using cartoon techniques: movements around the mouth and eyes are emphasized, the silhouette of the face and shapes of the eyes are varied by projection effects, and color stimuli that induce a particular emotion are added. In an experiment, representative facial expressions are generated with Elfoid and the emotions conveyed to users are investigated by subjective evaluation.
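One way to illustrate the emphasis idea is to amplify the displacement of facial landmarks around the mouth and eyes relative to a neutral face before rendering the projected expression. The landmark indices and gain below are hypothetical and are not taken from the Elfoid system.

```python
# Minimal sketch of cartoon-style emphasis of an expression by exaggerating
# landmark motion. Landmark indices and the gain are hypothetical values.
import numpy as np

MOUTH_AND_EYE_IDS = set(range(36, 68))  # hypothetical landmark indices

def emphasize_expression(neutral, current, gain=1.8):
    """neutral, current: (N, 2) arrays of facial landmark coordinates."""
    emphasized = current.astype(float).copy()
    for i in range(len(current)):
        if i in MOUTH_AND_EYE_IDS:
            # amplify the motion away from the neutral pose
            emphasized[i] = neutral[i] + gain * (current[i] - neutral[i])
    return emphasized
```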
5 citations
29 Oct 2009
TL;DR: An MR telepresence system that presents a realistic image and an inertial force sensation using an immersive display and a motion base with limited degrees of freedom is proposed.
Abstract: This paper describes a mixed reality (MR) telepresence system for a ride to provide users with a highly realistic sensation. To make a realistic scene in a virtual environment, it is necessary to combine visual information with a reproduction of the forces which a user experiences in the real environment. This paper proposes an MR telepresence system that presents a realistic image and an inertial force sensation using an immersive display and a motion base with limited degrees of freedom. In our approach, the realistic image is acquired with an omnidirectional camera and the inertial force is generated virtually by a combination of the acceleration of gravity and a video effect. In experiments, a prototype system has been proven to produce a highly realistic sensation in various environments.
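The inertial force substitution can be illustrated by computing the tilt angle whose gravity component along the seat matches a desired longitudinal acceleration. The sketch below assumes a simple arcsine relation and an arbitrary tilt limit; the actual motion base parameters are not given in the abstract.

```python
# Minimal sketch of simulating inertial force with gravity by tilting a
# motion base. The tilt limit is an assumed value, not a system spec.
import math

G = 9.81  # gravitational acceleration [m/s^2]

def tilt_angle_for_acceleration(accel, max_tilt_deg=20.0):
    """Return the pitch angle (degrees) that simulates 'accel' [m/s^2]."""
    ratio = max(-1.0, min(1.0, accel / G))
    theta = math.degrees(math.asin(ratio))
    # respect the limited degrees of freedom of the motion base
    return max(-max_tilt_deg, min(max_tilt_deg, theta))
```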
4 citations
Cited by
Journal Article
TL;DR: In this article, the authors show that the seismicity rate increase in the Kanto region around Tokyo following the 2011 Tohoku-Oki earthquake (Mw 9.0) was well correlated with the static increases in the Coulomb failure function (∆CFF) transferred from the Tohoku-Oki earthquake sequence.
Abstract: We show that the seismicity rate increase in the Kanto region around Tokyo following the 2011 Tohoku-Oki earthquake (Mw 9.0) was well correlated with the static increases in the Coulomb failure function (∆CFF) transferred from the Tohoku-Oki earthquake sequence. Because earthquakes in the Kanto region exhibit various focal mechanisms, the receiver faults for the ∆CFF were assumed to be reliable focal mechanism solutions of approximately 3,000 earthquakes compiled from three networks (F-net, JMA network, and MeSO-net). The histograms of ∆CFF showed that more events in the postseismic period had positive ∆CFF values than in the preseismic period (2008 April 1 to 2011 March 10). Among the 928 receiver faults showing significant ∆CFF with absolute values ≥ 0.1 bars in the preseismic period, 717 receiver faults (77.3%) indicated positive ∆CFF. In contrast, 1,334 (88.2%) out of 1,513 receiver faults indicated positive ∆CFF in the postseismic period. We confirmed that the result is similar for the longer preseismic period, between 1997 October 1 and 2011 March 10. To test the significance of the difference in the distribution of ∆CFF between the preseismic and postseismic periods, we used a Monte Carlo method with bootstrap resampling. As a result, the ratio of positive ∆CFF randomly resampled from ∆CFF values in the preseismic period never exceeded 83.1%, even after 10,000 iterations. This supports the findings of Toda & Stein [2013]; however, our calculation is more reliable than theirs because we used a much larger number of focal mechanisms compiled from the three networks. It also proves that the static stress changes transferred from the Tohoku-Oki earthquake sequence are responsible for the changes in the seismicity rate in the Kanto region. Earthquakes with focal mechanisms having positive ∆CFF values increased drastically, while those with negative ∆CFF showed no obvious changes except immediately after the mainshock. This fault-dependent seismicity rate change strongly supports the contribution of the Coulomb stress transferred from the Tohoku-Oki sequence to the seismicity rate change in the Kanto region. Immediately following the mainshock, earthquakes of all types of focal mechanisms were activated, but the increased seismicity rate of earthquakes with negative ∆CFF returned to the background level within a few months. This suggests that there might be other contributing factors to the seismicity rate change, such as dynamic stress triggering or pore-fluid pressure changes.
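The significance test described above can be illustrated with a small bootstrap sketch: resample the preseismic ∆CFF values and see how often the fraction of positive values reaches the postseismic fraction. The input arrays and sample sizes below are placeholders, not the catalogues used in the study.

```python
# Minimal sketch of a Monte Carlo bootstrap test on the fraction of positive
# ∆CFF values. Inputs are placeholders, not the catalogues from the paper.
import numpy as np

def bootstrap_positive_ratio(pre_dcff, post_dcff, n_iter=10_000, seed=0):
    """pre_dcff, post_dcff: 1-D arrays of ∆CFF values (bars)."""
    rng = np.random.default_rng(seed)
    post_ratio = np.mean(post_dcff > 0)
    n_post = len(post_dcff)
    # resample preseismic values with replacement at the postseismic sample size
    ratios = np.array([
        np.mean(rng.choice(pre_dcff, size=n_post, replace=True) > 0)
        for _ in range(n_iter)
    ])
    # fraction of bootstrap samples that reach the postseismic positive ratio
    exceed = np.mean(ratios >= post_ratio)
    return post_ratio, ratios.max(), exceed
```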
32 citations
TL;DR: A comparative study of the different cameras and methods to create stereoscopic panoramas of a scene, highlighting those that can be used for the real-time acquisition of imagery and video, is presented.
Abstract: Different camera configurations to capture panoramic images and videos are commercially available today. However, capturing omnistereoscopic snapshots and videos of dynamic scenes is still an open problem. Several methods to produce stereoscopic panoramas have been proposed in the last decade, some of which were conceived in the realm of robot navigation and three-dimensional (3-D) structure acquisition. Even though some of these methods can estimate omnidirectional depth in real time, they were not conceived to render panoramic images for binocular human viewing. Alternatively, sequential acquisition methods, such as rotating image sensors, can produce remarkable stereoscopic panoramas, but they are unable to capture real-time events. Hence, there is a need for a panoramic camera to enable the consistent and correct stereoscopic rendering of the scene in every direction. Potential uses for a stereo panoramic camera with such characteristics are free-viewpoint 3-D TV and image-based stereoscopic telepresence, among others. A comparative study of the different cameras and methods to create stereoscopic panoramas of a scene, highlighting those that can be used for the real-time acquisition of imagery and video, is presented.
24 citations
Patent
08 Feb 2011
TL;DR: In this paper, a method for controlling a mobile terminal image includes providing a first image and a second image via a controller on the mobile terminal, the first and second images reflecting a binocular disparity to form a three-dimensional image, identifying an editing target from the 3D image, editing a first image of the identified editing target, and applying the edited first image and a second image corresponding to the edited first image to the three-dimensional image.
Abstract: A method for controlling a mobile terminal image includes providing a first image and a second image via a controller on the mobile terminal, the first and second images reflecting a binocular disparity to form a three dimensional image, identifying an editing target from the three dimensional image, editing a first image of the identified editing target, and applying the edited first image and a second image corresponding to the edited first image to the three dimensional image.
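The claim can be illustrated by propagating an edit made in the first image to the corresponding region of the second image using the binocular disparity. The sketch below assumes a single constant disparity per edited region and omits bounds checking; it is not the patented implementation.

```python
# Minimal sketch of applying an edit to both stereo images using disparity.
# A constant per-region disparity is an assumption made for brevity.
import numpy as np

def apply_edit_to_pair(left, right, edited_patch, top, left_x, disparity):
    """Paste edited_patch into both stereo images.

    left, right: H x W x 3 images; edited_patch: h x w x 3 edit result;
    (top, left_x): patch position in the left image; disparity: horizontal
    shift (pixels) of the same object in the right image.
    """
    h, w = edited_patch.shape[:2]
    left = left.copy()
    right = right.copy()
    left[top:top + h, left_x:left_x + w] = edited_patch
    x2 = left_x - disparity  # corresponding position in the right image
    right[top:top + h, x2:x2 + w] = edited_patch
    return left, right
```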
22 citations