Patent

Method for editing three-dimensional image and mobile terminal using the same

Jonghwan Kim
08 Feb 2011
TL;DR: In this paper, a method for controlling a mobile terminal image includes providing a first image and a second image via a controller on the mobile terminal, the first and second images reflecting a binocular disparity to form a three-dimensional image, identifying an editing target from the three-dimensional image, editing a first image of the identified editing target, and applying the edited first image and a second image corresponding to the edited first image to the three-dimensional image.
Abstract: A method for controlling a mobile terminal image includes providing a first image and a second image via a controller on the mobile terminal, the first and second images reflecting a binocular disparity to form a three dimensional image, identifying an editing target from the three dimensional image, editing a first image of the identified editing target, and applying the edited first image and a second image corresponding to the edited first image to the three dimensional image.
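The core idea — edit one eye's image and propagate the edit to the other eye's image using the binocular disparity — can be sketched as below. This is a minimal, hypothetical illustration, not the patent's implementation: the function name is invented, and a single uniform disparity value is assumed, whereas real stereo pairs have per-pixel disparity.

```python
import numpy as np

def apply_edit_stereo(left, right, edit_mask, edit_value, disparity):
    """Apply an edit to the left image and propagate it to the right image
    by shifting the edited region horizontally by the (uniform) disparity."""
    left, right = left.copy(), right.copy()
    left[edit_mask] = edit_value
    rows, cols = np.nonzero(edit_mask)
    shifted = cols - disparity                       # horizontal parallax shift
    valid = (shifted >= 0) & (shifted < right.shape[1])
    right[rows[valid], shifted[valid]] = edit_value  # mirror the edit
    return left, right
```

Keeping the two edits consistent in this way is what preserves the 3D effect when the pair is recombined.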
Citations
Patent
Minhun Kang
27 May 2011
TL;DR: An electronic device having a display; a communication unit configured to communicate with a plurality of external electronic devices on a network; and a controller configured to display a graphical user interface (GUI) having a plurality of areas, activate the GUI in response to a predetermined user input, identify the connection state of each external electronic device connected to the electronic device, associate each area with a respective external electronic device, and display content relating to each external electronic device in its respective area.
Abstract: An electronic device having a display; a communication unit configured to communicate with a plurality of external electronic devices on a network; and a controller configured to cause displaying of a graphical user interface (GUI) on the display, the GUI having a plurality of areas, activate the GUI responsive to receiving a predetermined user input, identify a connection state of each of the plurality of external electronic devices having a connection to the electronic device, correspond each of the plurality of areas with a respective one of the plurality of external electronic devices, and cause displaying of content relating to each of the plurality of external electronic devices in their respective one of the plurality of areas.

46 citations

Patent
12 Nov 2014
TL;DR: In this paper, a first disparity map of the first image and a second disparity map, computed over regions of interest in the second image, are merged to estimate an optimized depth map of the scene.
Abstract: METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR DISPARITY ESTIMATION. In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating access of a first image and a second image associated with a scene. The first image and the second image include depth information and at least one non-redundant portion. A first disparity map of the first image is computed based on the depth information associated with the first image. At least one region of interest (ROI) associated with the at least one non-redundant portion is determined in the first image based on the depth information associated with the first image. A second disparity map of at least one region in the second image corresponding to the at least one ROI of the first image is computed. The first disparity map and the second disparity map are merged to estimate an optimized depth map of the scene (FIGURE 5).

16 citations
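The merge step described in the abstract can be sketched as follows. This is a minimal toy, not the patent's method: the function name, the rectangular ROI layout, and the focal-length/baseline constants are all assumptions. Refined ROI disparities overwrite the coarse full-frame map, and depth follows from the standard stereo relation depth = f·B / disparity.

```python
import numpy as np

def merge_disparity_maps(d1, d2_roi, roi, focal_length=700.0, baseline=0.1):
    """Merge a coarse full-frame disparity map with a refined ROI disparity
    map, then convert the merged disparities to depth (f * B / disparity)."""
    merged = d1.copy()
    r0, r1, c0, c1 = roi
    merged[r0:r1, c0:c1] = d2_roi          # refined estimate wins inside the ROI
    merged = np.clip(merged, 1e-3, None)   # guard against division by zero
    return focal_length * baseline / merged

# toy example: 4x4 coarse map, refined 2x2 patch in the center
d1 = np.full((4, 4), 10.0)
d2 = np.full((2, 2), 20.0)
depth = merge_disparity_maps(d1, d2, (1, 3, 1, 3))
```

Larger disparity maps to smaller depth, so the refined patch here is reported as closer to the camera than its surroundings.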

Patent
Maruyama Ayako, Kunio Nobori
15 May 2014
TL;DR: In this article, a comment information generation device includes: a video input section, to which a video is input; an information input section, to which positional information is input to display a comment that tracks an object in the video; an initial trajectory acquisition section that acquires an initial trajectory of the object corresponding to the positional information; a trajectory extending section that acquires an extended trajectory by finding a following trajectory whose starting point comes after the ending point of the initial trajectory; and an output section that outputs the extended trajectory as comment information.
Abstract: A comment information generation device includes: a video input section, to which a video is input; an information input section, to which positional information is input to display a comment to track an object in the video; an initial trajectory acquisition section that acquires an initial trajectory that is a trajectory of the object corresponding to the positional information; a trajectory extending section that acquires an extended trajectory by acquiring a following trajectory that is a trajectory having a starting point after an ending point of the initial trajectory, collecting a first comment assigned in a vicinity of the initial trajectory and a second comment assigned in a vicinity of the following trajectory, and connecting the following trajectory to the initial trajectory on a basis of the first comment and the second comment; and an output section that outputs the extended trajectory as comment information.

13 citations
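The trajectory-extending step can be sketched as below. This is a simplified, hypothetical illustration: the patent connects trajectories on the basis of the comments assigned near each trajectory, which is approximated here by a toy word-overlap similarity; trajectories are modeled as lists of (frame, x, y) tuples, and all names are invented.

```python
def comment_similarity(c1, c2):
    """Toy similarity: Jaccard overlap of words between two comment strings
    (a stand-in for the patent's comment-based matching)."""
    w1, w2 = set(c1.lower().split()), set(c2.lower().split())
    return len(w1 & w2) / max(len(w1 | w2), 1)

def extend_trajectory(initial, initial_comment, candidates, min_sim=0.3):
    """Extend an initial trajectory with a following trajectory that starts
    after it ends and whose nearby comment matches the initial comment.
    `candidates` pairs each following trajectory with its nearby comment."""
    end_frame = initial[-1][0]
    for traj, comment in candidates:
        if traj[0][0] > end_frame and \
           comment_similarity(initial_comment, comment) >= min_sim:
            return initial + traj   # connect the following trajectory
    return initial                  # nothing matched; keep the original
```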

Patent
13 Aug 2014
TL;DR: In this paper, a user interface control for an image product creation application is used for adding user-supplied text or graphic elements to an image product, wherein the user interface control is responsive to position relative to a user-supplied image, a recognized object within the user-supplied image, or an image-product-related feature.
Abstract: A user interface control for an image product creation application is used for adding user supplied text or graphic elements to an image product, wherein the user interface control is responsive to the position relative to a user supplied image, a recognized object within the user supplied image, or an image product related feature, wherein the user interface control provides an indication when the text or graphic elements are positioned proximal to the user supplied image, the recognized object, or the image product related feature, and wherein the user interface control modifies an attribute of the text or graphic elements when placed proximal to the user supplied image, the recognized object, or the image product related feature.

10 citations

Patent
Amit Bleiweiss, Dagan Eshar
17 Feb 2017
TL;DR: In this paper, techniques are provided for image modification and enhancement based on recognition of objects in a scene image, where the object recognition classifier is trained on image variations rendered from a 3D model of the object.
Abstract: Techniques are provided for image modification and enhancement based on recognition of objects in a scene image. An example system may include an image rendering circuit to render a number of image variations of an object based on a 3D model of the object. The 3D model may be generated by a computer-aided design tool or a 3D scanning tool. The system may also include a classifier generation circuit to generate an object recognition classifier based on the rendered image variations. The system may further include an object recognition circuit to recognize the object from an image of a scene containing the object. The recognition is performed by the generated object recognition classifier. The system may still further include an image modification circuit to create a mask to segment the recognized object from the image of the scene and modify the masked segment of the image of the scene.

10 citations
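The render-variations-then-classify pipeline can be sketched as below. This is a deliberately tiny stand-in, not the patent's system: rendering is simulated by jittering a feature vector, and the "classifier" is a nearest-centroid lookup; every name here is hypothetical.

```python
import numpy as np

def render_variations(model_feature, n=16, rng=None):
    """Stand-in for the rendering circuit: jitter a feature vector derived
    from a 3D model to simulate pose/lighting variations."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return model_feature + 0.05 * rng.standard_normal((n, model_feature.size))

def build_classifier(variations_by_label):
    """Stand-in for the classifier generation circuit: one centroid per
    object, computed from its rendered variations."""
    return {label: v.mean(axis=0) for label, v in variations_by_label.items()}

def recognize(classifier, scene_feature):
    """Stand-in for the object recognition circuit: nearest centroid."""
    return min(classifier,
               key=lambda lbl: np.linalg.norm(classifier[lbl] - scene_feature))
```

A real system would replace the jittered vectors with actual renders of the 3D model and the centroid lookup with a trained classifier, but the data flow — render, train, recognize — is the same.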

References
Journal ArticleDOI
Ronald Azuma
TL;DR: The characteristics of augmented reality systems are described, including a detailed discussion of the tradeoffs between optical and video blending approaches, and current efforts to overcome these problems are summarized.
Abstract: This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.

8,053 citations

Journal ArticleDOI
TL;DR: In this article, a large collection of images with ground truth labels is built to be used for object detection and recognition research; such data is useful for supervised learning and quantitative evaluation.
Abstract: We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.

3,501 citations

Journal ArticleDOI
01 Jul 2006
TL;DR: This work presents a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface, consisting of an image-based modeling front end that automatically computes the viewpoint of each photograph along with a sparse 3D model of the scene and image-to-model correspondences.
Abstract: We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.

3,398 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations and works by prewarping two images prior to computing a morph and then postwarping the interpolated images.
Abstract: Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.

872 citations
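The prewarp/morph/postwarp pipeline can be sketched on point correspondences, as below. This is a hypothetical, stripped-down illustration: a full view-morphing implementation derives the prewarp homographies from the fundamental matrix and warps whole images; here the homographies are taken as given and only points are transformed.

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an Nx2 array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]   # homogeneous normalization

def view_morph_points(p0, p1, H0, H1, Hs, s=0.5):
    """View morphing on point correspondences: prewarp both point sets into
    a common parallel view (H0, H1), linearly interpolate, then postwarp
    into the desired output view (Hs)."""
    q0 = apply_h(H0, p0)            # prewarp image-0 points
    q1 = apply_h(H1, p1)            # prewarp image-1 points
    qs = (1.0 - s) * q0 + s * q1    # shape-preserving linear morph
    return apply_h(Hs, qs)          # postwarp to the output view
```

The prewarp is what makes the middle interpolation physically valid: once both views are rectified to parallel cameras, linear interpolation corresponds to moving the camera along the baseline.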

Patent
03 Dec 2001
TL;DR: In this article, an image-based telepresence system forward warps video images selected from a plurality of fixed imagers using local depth maps and merges the warped images to form high-quality images that appear as seen from a virtual position.
Abstract: An image-based tele-presence system forward warps video images selected from a plurality of fixed imagers using local depth maps and merges the warped images to form high quality images that appear as seen from a virtual position. At least two images, from the images produced by the imagers, are selected for creating a virtual image (103). Depth maps are generated corresponding to each of the selected images (104). Selected images are warped to the virtual viewpoint using warp parameters calculated using corresponding depth maps (105, 106). Finally the warped images are merged to create the high quality virtual image as seen from the selected viewpoint (107). The system employs a video blanket of imagers, which helps both optimize the number of imagers and attain higher resolution. In an exemplary video blanket, cameras are deployed in a geometric pattern on a surface.

329 citations
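The warp-then-merge steps (105–107) can be sketched in one dimension, as below. This is a toy illustration under strong assumptions, not the patent's algorithm: the image is a single row, the warp shifts each pixel horizontally in proportion to its inverse depth (closer pixels move more), and a z-buffer resolves collisions; merging two warped views then keeps the nearer sample at each pixel.

```python
import numpy as np

def forward_warp(img, depth, shift_scale):
    """Forward-warp a 1D image row toward a virtual viewpoint: each pixel
    moves by shift_scale / depth, and a z-buffer keeps the nearest
    contribution when several source pixels land on the same target."""
    w = len(img)
    out = np.zeros(w)
    zbuf = np.full(w, np.inf)
    for x in range(w):
        nx = int(round(x + shift_scale / depth[x]))   # depth-dependent shift
        if 0 <= nx < w and depth[x] < zbuf[nx]:
            out[nx] = img[x]
            zbuf[nx] = depth[x]
    return out, zbuf

def merge_warped(img_a, z_a, img_b, z_b):
    """Merge two warped views, taking the nearer (smaller-depth) sample."""
    return np.where(z_a <= z_b, img_a, img_b)
```

The z-buffer is what handles occlusion: when a near pixel and a far pixel warp to the same location, the near one wins, just as it would hide the far one from the virtual viewpoint.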