
Showing papers on "Zoom" published in 2000


Proceedings ArticleDOI
01 Nov 2000
TL;DR: Jazz, a general-purpose 2D scene graph toolkit that runs on all platforms that support Java 2, is described, along with the lessons learned using Jazz for ZUIs.
Abstract: In this paper the authors investigate the use of scene graphs as a general approach for implementing two-dimensional (2D) graphical applications, and in particular Zoomable User Interfaces (ZUIs). Scene graphs are typically found in three-dimensional (3D) graphics packages such as Sun's Java3D and SGI's OpenInventor; they have not been widely adopted by 2D graphical user interface toolkits. To explore the effectiveness of scene graph techniques, the authors have developed Jazz, a general-purpose 2D scene graph toolkit. Jazz is implemented in Java using Java2D, and runs on all platforms that support Java 2. This paper describes Jazz and the lessons learned using Jazz for ZUIs. It also discusses how 2D scene graphs can be applied to other application areas.
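The core idea, composing local translate and scale transforms down a tree of nodes, can be sketched in a few lines of Python (the names here are illustrative, not Jazz's actual API):

```python
class Node:
    """Minimal 2D scene-graph node: a local translate + uniform scale
    and a child list. Illustrative only; not Jazz's actual classes."""
    def __init__(self, tx=0.0, ty=0.0, scale=1.0):
        self.tx, self.ty, self.scale = tx, ty, scale
        self.children = []

def compose(parent, node):
    """Compose a parent transform (tx, ty, scale) with a node's local one."""
    ptx, pty, ps = parent
    return (ptx + ps * node.tx, pty + ps * node.ty, ps * node.scale)

def to_world(node, point, parent=(0.0, 0.0, 1.0)):
    """Map a point in the node's local space to world (screen) space."""
    tx, ty, s = compose(parent, node)
    x, y = point
    return (tx + s * x, ty + s * y)
```

Zooming a ZUI then reduces to animating the root (camera) scale; every descendant picks up the change through transform composition.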

313 citations


Proceedings ArticleDOI
01 Nov 2000
TL;DR: A navigation technique for browsing large documents that integrates rate-based scrolling with automatic zooming so that the perceptual scrolling speed in screen space remains constant, letting the user efficiently and smoothly navigate through a large document without becoming disoriented by extremely fast visual flow.
Abstract: We propose a navigation technique for browsing large documents that integrates rate-based scrolling with automatic zooming. The view automatically zooms out when the user scrolls rapidly so that the perceptual scrolling speed in screen space remains constant. As a result, the user can efficiently and smoothly navigate through a large document without becoming disoriented by extremely fast visual flow. By incorporating semantic zooming techniques, the user can smoothly access a global overview of the document during rate-based scrolling. We implemented several prototype systems, including a web browser, map viewer, image browser, and dictionary viewer. An informal usability study suggests that for a document browsing task, most subjects prefer automatic zooming and the technique exhibits approximately equal performance time to scroll bars, suggesting that automatic zooming is a helpful alternative to traditional scrolling when the zoomed out view provides appropriate visual cues.
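The constant-perceptual-speed rule can be sketched as follows; the constants and function name are illustrative, not taken from the paper:

```python
def auto_zoom_scale(scroll_rate, max_screen_speed=1000.0, max_scale=1.0):
    """Zoom scale (screen px per document px) for a given scroll rate
    (document px/s), chosen so the on-screen speed scroll_rate * scale
    never exceeds max_screen_speed. Constants are illustrative."""
    if scroll_rate <= 0.0:
        return max_scale
    return min(max_scale, max_screen_speed / scroll_rate)
```

Slow scrolling keeps the document at full size; scrolling four times faster than the cap zooms out to 25%, so the visual flow on screen stays at the cap.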

305 citations


Patent
22 Mar 2000
TL;DR: In this article, a spreadsheet and a browser on a portable computer accept data from an input recognizer, including a non-cursive handwriting recognizer or a speech recognizer and communicate data directly with another computer or over the Internet using wireless media such as radio and infrared frequencies or over a landline.
Abstract: A spreadsheet and a browser on a portable computer accept data from an input recognizer, including a non-cursive handwriting recognizer or a speech recognizer, and communicate data directly with another computer or over the Internet using wireless media such as radio and infrared frequencies, or over a landline. The computer is endowed with a plurality of built-in or snap-on expansion accessories to enhance the data capture capability as well as the ease of reading data from the limited screen of the present invention. These accessories include a camera, a scanner, a voice recorder or voice capture unit, and a remote large-screen television. The camera and scanner allow visual data to be captured; the voice recorder allows the user to make quick verbal annotations into a solid-state memory to minimize the main memory requirements, while the voice capture unit allows the voice to be captured into memory for subsequent transmission over the Internet or for voice recognition purposes. The spreadsheet or database receives data from the Internet or from the accessories and further can graph or manipulate the data entered into the spreadsheet as necessary. Furthermore, the database has a smart search engine interface which performs fuzzy search such that inexact queries can still result in matches. The smart search engine thus allows users to locate information even though the exact spelling or concept is not known. To minimize the user's work in locating information to analyze, the spreadsheet and database can spawn and train an intelligent agent to capture data from a suitable remote source such as the Internet and transmit the data to the spreadsheet or browser for further analysis. Alternatively, the user can capture data directly by scanning or dictating the information into the spreadsheet or browser. In another aspect of the invention, a pan and zoom capability provides the user with an appropriately scaled view of the data for ease of reading.
Alternatively, when the portable computer is within range of a larger display device, such as an appropriately equipped television display or a personal computer with a larger display, the present invention's wireless link transmits the video information to the larger display to allow the user to view data on the larger display unit. Similarly, the present invention provides a remote stereo receiver adapted to receive a sound data stream from the portable computer and drive high-quality speakers to support multimedia applications on the portable computer.

225 citations


Patent
Jan van Ee1
19 Jul 2000
TL;DR: In this article, a mobile phone has a display with a touch screen, which is capable of retrieving a web page from the Internet and displaying it in its entirety on the display.
Abstract: A mobile phone has a display with a touch screen. The device has a browser and is capable of retrieving a Web page from the Internet. The page is first displayed in its entirety. The user can recognize the page's general lay-out and presence of hyperlinks. When the user touches a particular location on the touch screen that corresponds to a portion of the page's image, the portion gets displayed so as to fill the display's area. Thus, the user can browse the Web with a display of limited size.

222 citations


Proceedings ArticleDOI
12 Jun 2000
TL;DR: It is illustrated that by judiciously choosing the system modules and performing a careful analysis of the influence of various tuning parameters on the system, it is possible to perform proper statistical inference, automatically set control parameters, and quantify the limits of a dual-camera real-time video surveillance system.
Abstract: The engineering of computer vision systems that meet application specific computational and accuracy requirements is crucial to the deployment of real-life computer vision systems. This paper illustrates how past work on a systematic engineering methodology for vision systems performance characterization can be used to develop a real-time people detection and zooming system to meet given application requirements. We illustrate that by judiciously choosing the system modules and performing a careful analysis of the influence of various tuning parameters on the system, it is possible to: perform proper statistical inference, automatically set control parameters and quantify limits of a dual-camera real-time video surveillance system. The goal of the system is to continuously provide a high resolution zoomed-in image of a person's head at any location of the monitored area. An omni-directional camera video is processed to detect people and to precisely control a high resolution foveal camera, which has pan, tilt and zoom capabilities. The pan and tilt parameters of the foveal camera and their uncertainties are shown to be functions of the underlying geometry, lighting conditions, background color/contrast, relative position of the person with respect to both cameras as well as sensor noise and calibration errors. The uncertainty in the estimates is used to adaptively estimate the zoom parameter that guarantees with a user specified probability, /spl alpha/, that the detected person's face is contained and zoomed within the image.
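The final step, picking a zoom that contains the face with probability α, can be sketched under a simplified assumption of Gaussian pointing error (the paper's actual uncertainty model is richer; names and constants here are illustrative):

```python
from statistics import NormalDist

def required_half_fov(sigma_deg, head_extent_deg, alpha=0.95):
    """Half field-of-view (deg) containing the head with probability
    >= alpha on one axis, assuming a Gaussian pointing error with
    standard deviation sigma_deg (an assumption for this sketch)."""
    z = NormalDist().inv_cdf((1.0 + alpha) / 2.0)  # two-sided quantile
    return z * sigma_deg + head_extent_deg / 2.0

def zoom_factor(wide_fov_deg, sigma_deg, head_extent_deg, alpha=0.95):
    """Magnification relative to the wide view: larger uncertainty
    forces a wider (less zoomed) framing to keep the alpha guarantee."""
    return wide_fov_deg / (2.0 * required_half_fov(sigma_deg,
                                                   head_extent_deg, alpha))
```

The trade-off the paper quantifies is visible directly: as calibration or detection uncertainty grows, the guaranteed-containment zoom shrinks.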

190 citations


Patent
Ephraim Feig1, Jeane Shu-Chun Chen1
30 Nov 2000
TL;DR: In this article, a graphical user interface displays a coarse control scrollbar to provide a user with coarse resolution sequential data control and a magnified view scrollbar proximate to the coarse control scrollbar.
Abstract: A graphical user interface displays a coarse control scrollbar to provide a user with coarse resolution sequential data control and a magnified view scrollbar proximate to the coarse control scrollbar. The magnified view scrollbar provides the user with fine resolution sequential data control. When the cursor is on the scrollbar, an overlay is opened which is a zoomed version of the scrollbar. The zoom range of the overlay is adjustable and can either be preset by the user or set during the zooming operation. When operating the overlay, a menu is available which allows the user to choose between zooming up or down to select the desired position.

162 citations


Proceedings ArticleDOI
01 Nov 2000
TL;DR: The design principles of input devices that effectively use a human’s physical manipulation skills are discussed, and the system architecture and applications of the ToolStone input device are described.
Abstract: The ToolStone is a cordless, multiple degree-of-freedom (MDOF) input device that senses physical manipulation of itself, such as rotating, flipping, or tilting. As an input device for the non-dominant hand when a bimanual interface is used, the ToolStone provides several interaction techniques including a toolpalette selector, and MDOF interactors such as zooming, 3D rotation, and virtual camera control. In this paper, we discuss the design principles of input devices that effectively use a human’s physical manipulation skills, and describe the system architecture and applications of the ToolStone input device.

154 citations


Patent
21 Nov 2000
TL;DR: In this article, a user interface for a medical informatics system, which permits a physician to work with digitized medical images in a manner that the physician is accustomed to working with traditional analog film, is disclosed.
Abstract: A user interface for a medical informatics system, which permits a physician to work with digitized medical images in a manner that the physician is accustomed to working with traditional analog film, is disclosed. The user interface includes a patient browser view that provides the ability to select studies, which consist of medical images and series, for patients. After selecting the studies, the user, through a patient canvas view, may then organize the studies as well as the images/series within the studies, including resizing the studies and the images/series within a study. The user may also pan and zoom images to view portions of an image at various resolutions. Furthermore, the user of the user interface may analyze the image by selecting to view the image in detail in a large floating window.

149 citations


Patent
28 Jan 2000
TL;DR: In this paper, a method for transforming a video file provided to a computer into an object in a zooming universe established in that computer is described; the zooming object may be enlarged and panned by a user via a computer input device.
Abstract: A method for transforming a video file provided to a computer into an object in a zooming universe established in such computer (120), which zooming object may be enlarged and panned by manipulation by a user via a computer input device. At the time a video file is opened in a video player library on such a computer, a zooming engine (222) is enabled on the computer and a zooming universe is enabled therefrom. Frames of the video file being played on the computer video player library are copied to a video object in the zooming universe and displayed (130) there. By manipulation of the parameters of the bounding box enclosing the zooming video object, through use of a computer input device, the user is able to scale and pan the video image in the zooming universe display up (or down) to a desired viewing size and perspective.

75 citations


Patent
18 Sep 2000
TL;DR: In this article, a handheld device has a display for presenting an image to a user, a processor electrically connected to the display, memory electrically connected to the processor, and an input panel electrically connected to the processor.
Abstract: A handheld device has a display for presenting an image to a user, a processor electrically connected to the display, memory electrically connected to the processor, and an input panel electrically connected to the processor. The input panel has a number of keys for generating key signals, and a zoom control device for generating a zoom control signal. The display is used to present both text and iconic information to the user. A display program, held in the memory, will change the font size of displayed text or icons according to the zoom control signal. When doing so, the display program selects a proper amount of text or icons to be displayed within the boundary of the display, and arranges the selected text or icons within the display.

74 citations


Patent
26 Oct 2000
TL;DR: In this paper, a map navigation and display system which emphasizes the use of physical layout and location to identify and select areas to zoom in on is presented; intended to assist users in locating stores and businesses, its central concept is the visual presentation of a shopping center showing the layout of the buildings and stores within the center.
Abstract: A map navigation and display system which emphasizes the use of physical layout and location to identify and select areas to zoom in on. Intended primarily to assist users in locating stores and businesses, the system centers on the visual presentation of a shopping center showing the layout of the buildings and stores within the center. Each store is then linked to its own page with details about the business. Higher level maps may also show the layout and location of the shopping centers within a neighborhood or district and within a region. Optional density indicators at the regional level assist users in locating areas with a large number of stores. Optional text search capability supplements the visual methods.

Journal Article
TL;DR: Details of a software tool written specifically to provide facilities to perform image processing required in research and development of gel dosimetry are presented.
Abstract: Gel dosimetry using magnetic resonance imaging is a technique which allows measurement of three-dimensional absorbed dose distributions in radiation therapy. This paper presents details of a software tool written specifically to provide facilities to perform image processing required in research and development of gel dosimetry. Collections of magnetic resonance images can be converted into either longitudinal or transverse nuclear magnetic resonance relaxation images. The conversions are accomplished by means of a pixel-by-pixel non-linear least squares fitting algorithm. Adjustments can be made to the number of parameters used in the fitting algorithm. Fundamental image manipulation tools such as window width/level display adjustment, zooming, profile and region of interest tools are provided. The software has been developed using MATLAB (The MathWorks Inc., Natick, MA) running on Windows 95. User interaction is via a windows graphical user interface (GUI). Data such as statistics from regions of interest can be exported to other windows applications for further processing. Flexibility is incorporated in the GUI design by taking advantage of the developmental aspects of the MATLAB environment. Although originally designed for gel dosimetry, the software can be used in any application of MRI which requires production and manipulation of relaxation time images.
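The per-pixel conversion to relaxation images can be illustrated for a transverse (T2) map; the log-linear least-squares shortcut below is a simplification of the tool's nonlinear fitting algorithm and assumes noiseless positive signals:

```python
import math

def fit_t2(echo_times, signals):
    """Estimate (S0, T2) for one pixel from S(TE) = S0 * exp(-TE / T2)
    by a log-linear least-squares fit. With noisy data, the
    pixel-by-pixel nonlinear fit described in the paper is preferred."""
    ys = [math.log(s) for s in signals]   # ln S = ln S0 - TE / T2
    n = len(echo_times)
    mx = sum(echo_times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(echo_times, ys))
             / sum((x - mx) ** 2 for x in echo_times))
    return math.exp(my - slope * mx), -1.0 / slope
```

A relaxation image is then just this fit repeated over every pixel's signal series across the image collection.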

Patent
17 Feb 2000
TL;DR: In this article, a three-dimensional object may be imaged from several viewpoints distributed about the object, and the image obtained at each viewpoint may be stored in conjunction with the viewpoint's coordinates about the object.
Abstract: A three-dimensional object may be imaged from several viewpoints distributed about the object, and the image obtained at each viewpoint may be stored in conjunction with the viewpoint's coordinates about the object. The object's image can then be transmitted for display over a client-server computer network, and the user may issue commands to manipulate the object, so as to very accurately simulate manipulation of the actual three-dimensional object. The client computer may display the object's image from one of the viewpoints. If the user then wishes to manipulate the object, the user will issue a command to the server to index from the coordinates of the first viewpoint to the coordinates of some adjacent viewpoint(s). The images of the adjacent viewpoints will then be displayed in a sequence corresponding to the order in which the coordinates of the viewpoints are indexed. Zooming (enlargement and reduction of views) and other features are also discussed, as well as various procedures for enhancing transmission time (and thus display speed).
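The viewpoint-indexing scheme amounts to stepping through a ring of stored images; a minimal sketch, with illustrative names, for viewpoints spaced around a single rotation axis:

```python
def step_viewpoint(index, command, n_viewpoints):
    """Index of the adjacent stored viewpoint after a rotation command,
    wrapping around a ring of n_viewpoints camera positions
    (an illustrative scheme for a single rotation axis)."""
    delta = {"left": -1, "right": 1}[command]
    return (index + delta) % n_viewpoints

def drag_sequence(index, commands, n_viewpoints):
    """Viewpoint indices displayed, in order, for a series of commands,
    simulating a smooth rotation of the object."""
    frames = []
    for command in commands:
        index = step_viewpoint(index, command, n_viewpoints)
        frames.append(index)
    return frames
```

Displaying the images in the order the indices are traversed is what produces the illusion of manipulating the physical object.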

Patent
26 Sep 2000
TL;DR: In this article, the authors present an interactive geographic information system on a personal digital assistant (PDA) that enables the viewing and interaction with geographic information on a PDA (102) while the PDA is connected to a network (118) and while disconnected (i.e., offline).
Abstract: One or more embodiments of the invention provide for an interactive geographic information system on a personal digital assistant (PDA) (102). The system enables the viewing and interaction with geographic information on a PDA (102). Such information is available while the PDA (102) is connected to a network (118) (i.e., online) and while disconnected (i.e., offline). Embodiments provide the PDA (102) with an application (130) that provides the functionality commonly available in a standard client (104) comprised of a complete computer system. For example, embodiments provide raster maps for multiple zoom levels, with each zoom level comprising multiple tiles allowing for 'virtual roaming' across a map. One or more embodiments also provide raster zooms (by scaling existing raster tiles), selectable vector geometry (for interacting and highlighting with user objects), geo-referencing information for map navigation, meta-data in the form of layer definitions (visibility, display attributes, etc.), links to object attributes in databases, links to object reports generated by corporate web servers, uploadable, sharable redlining data (created from scribbles on the field), offline access on a PDA (102), and a compact PDA (102) database, and parallel processing of map data for use on a PDA (102). Thus, one or more embodiments of the invention provide interactive maps and business objects that can be viewed and queried on a PDA (102), both in an online and offline mode.
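Serving multiple tiles per zoom level implies mapping a map point to a tile index at each level; a common quadtree layout (not necessarily the patent's exact scheme) can be sketched as:

```python
def tile_for_point(x, y, zoom, extent=1.0):
    """Tile (column, row) containing map point (x, y) at a zoom level,
    assuming 2**zoom tiles per axis over a square map of side `extent`.
    A common quadtree layout; the patent does not specify this exact
    scheme."""
    n = 2 ** zoom
    col = min(n - 1, int(x / extent * n))
    row = min(n - 1, int(y / extent * n))
    return col, row
```

Virtual roaming then means fetching (or reading from the offline cache) only the handful of tiles that intersect the current viewport.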

Patent
07 Jul 2000
TL;DR: The panoramic video viewer as mentioned in this paper allows the user to pan through the scene to the left, right, up or down, and the user can zoom in or out within the portion of the scene being viewed.
Abstract: The primary components of the panoramic video viewer include a decoder module. The purpose of the decoder module is to input incoming encoded panoramic video data and to output a decoded version thereof. The incoming data may be provided over a network and originate from a server, or it may simply be read from a storage media, such as a hard drive, CD or DVD. Once decoded, the data associated with each video frame is preferably stored in a storage module and made available to a 3D rendering module. The 3D rendering module is essentially a texture mapper that takes the frame data and maps the desired views onto a prescribed environment model. The output of the 3D rendering module is provided to a display module where the panoramic video is viewed by a user of the system. Typically, the user will be viewing just a portion of the scene depicted in the panoramic video at any one time, and will be able to control what portion is viewed. Preferably, the panoramic video viewer will allow the user to pan through the scene to the left, right, up or down. In addition, the user would preferably be able to zoom in or out within the portion of the scene being viewed. The user could also be allowed to select what video should be played, choose when to play or pause the video, and to specify what temporal part of the video should be played.

Book
07 Mar 2000

Patent
John Hincks Duke1
03 Aug 2000
TL;DR: In this article, the system alternates between zooming in and zooming out at preset rates in response to successive user actuations of a unique button set on the pointing device.
Abstract: Method and apparatus for simultaneously scrolling and zooming graphic data in a display device in response to pointing device action by user. The system alternates between zooming in and zooming out at preset rates in response to successive user actuations of a unique button set on the pointing device. While the button set remains actuated the pointing device acts to pan the viewport.

Patent
14 Sep 2000
TL;DR: A 3D camera comprises at least two detector heads 15 and 16 which are moveable laterally with respect to each other but whose optical axes 17 and 18 are maintained parallel as discussed by the authors.
Abstract: A 3D camera comprises at least two detector heads 15 and 16 which are moveable laterally with respect to each other but whose optical axes 17 and 18 are maintained parallel. Each of the detector heads 15, 16 comprises a zoom lens (19, 20) and a detector (21, 22). A user selects the separation between the detector heads 15, 16 and the camera electronics 24 automatically select the field of view by controlling the zoom lenses (19, 20) as a function of the detector head separation.

Patent
09 Nov 2000
TL;DR: In this article, a scalable geospatial information management system and method for the assembly, packaging and online distribution of worldwide geospatial or geographic images and related information is presented.
Abstract: A scalable geospatial information management system and method for the assembly, packaging and online distribution of worldwide geospatial or geographic images and related information. The system includes a cluster computing architecture, capable of metering media and derivative product delivery streams. At the heart of this system is custom content processing, load balancing, caching, and delivery software. The dynamic rendering of geospatial images and related information permits a user to interact with the system to pan, zoom and navigate in real time or near real time. The system includes a content management database for storing a worldwide collection of spatially indexed information and supporting metadata. The system also supports the real time generation of derivative products from the geospatial information.

Patent
Srinivas Gutta1
03 May 2000
TL;DR: In this article, the authors described a method for tracking an object of interest in a video processing system using clustering techniques, where an area is partitioned into approximate regions, referred to as clusters, each associated with an object.
Abstract: Methods and apparatus are disclosed for tracking an object of interest in a video processing system, using clustering techniques. An area is partitioned into approximate regions, referred to as clusters, each associated with an object of interest. Each cluster has associated average pan, tilt and zoom values. Audio or video information, or both, are used to identify the cluster associated with a speaker (or another object of interest). Once the cluster of interest is identified, the camera is focused on the cluster, using the recorded pan, tilt and zoom values, if available. An event accumulator initially accumulates audio (and optionally video) events for a specified time, to allow several speakers to speak. The accumulated audio events are then used by a cluster generator to generate clusters associated with the various objects of interest. After initialization of the clusters, the illustrative event accumulator gathers events at periodic intervals. The mean of the pan and tilt values (and zoom value, if available) occurring in each time interval are then used to compute the distance between the various clusters in the database by a similarity estimator, based on an empirically-set threshold. If the distance is greater than the established threshold, then a new cluster is formed, corresponding to a new speaker, and indexed into the database. Fuzzy clustering techniques allow the camera to be focused on more than one cluster at a given time, when the object of interest may be located in one or more clusters.
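The cluster-formation rule (join the nearest cluster if within an empirically set threshold, otherwise start a new one) can be sketched as follows; this is a simplification of the patent's similarity-estimator scheme, with illustrative names:

```python
import math

def cluster_events(events, threshold):
    """Greedy clustering of (pan, tilt) events: each event joins the
    nearest existing cluster if the distance to its mean is below
    `threshold`, otherwise it seeds a new cluster. Returns a list of
    (mean_pan, mean_tilt, count) tuples, one per object of interest."""
    clusters = []  # each entry: [sum_pan, sum_tilt, count]
    for pan, tilt in events:
        best, best_d = None, None
        for c in clusters:
            mp, mt = c[0] / c[2], c[1] / c[2]
            d = math.hypot(pan - mp, tilt - mt)
            if best_d is None or d < best_d:
                best, best_d = c, d
        if best is not None and best_d < threshold:
            best[0] += pan; best[1] += tilt; best[2] += 1
        else:
            clusters.append([pan, tilt, 1])
    return [(c[0] / c[2], c[1] / c[2], c[2]) for c in clusters]
```

Focusing the camera on a speaker then means slewing to the matched cluster's mean pan/tilt (and zoom, when recorded).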

Patent
18 Aug 2000
TL;DR: In this paper, a lens unit and a camera capable of achieving stereoscopic television function and zoom function at the same time is presented, where the camera is equipped with a light quantity adjusting device.
Abstract: A lens unit and a camera capable of achieving stereoscopic television function and zoom function at the same time. More specifically, a lens unit (2) and a camera (1) each including at least a zoom lens (4), a light quantity adjusting device (6 or 20), an electronic optical shutter (6) provided on a stage of the zoom lens (4), and an optical shutter driving portion for controlling the electronic optical shutter (6) to open (6A, 6B) in a predetermined pattern.

Patent
01 Jun 2000
TL;DR: In this article, the stereoscopic microscope includes a common close-up optical system that faces an object, a pair of zoom optical systems that form a pair of primary images, a pair of field stops, a pair of relay optical systems, an inter-axis distance reducing element, an image taking device and an illuminating optical system.
Abstract: The stereoscopic microscope includes a common close-up optical system that faces an object, a pair of zoom optical systems that form a pair of primary images, a pair of field stops, a pair of relay optical systems that relay the primary images to form a pair of secondary images, an inter-axis distance reducing element, an image taking device and an illuminating optical system. The object light rays incident on the close-up optical system form the primary images having predetermined parallax at the field stops through the zoom optical systems. The inter-axis distance reducing element reduces the inter-axis distance of the right and left light rays. The primary images are re-imaged by the relay optical systems as the secondary images on the adjacent regions on the single image taking surface of the image taking device, respectively.

Proceedings ArticleDOI
01 Apr 2000
TL;DR: In this article, a group of people, seated around a table, interact with objects in a virtual scene using real bricks, and a plan view of the scene is projected onto the table, where object manipulation takes place.
Abstract: BUILD-IT is a planning tool based on computer vision technology, supporting complex planning and composition tasks. A group of people, seated around a table, interact with objects in a virtual scene using real bricks. A plan view of the scene is projected onto the table, where object manipulation takes place. A perspective view is projected on the wall. The views are set by virtual cameras, having spatial attributes like shift, rotation and zoom. However, planar interaction with bricks provides only position and rotation information. This paper explores two alternative methods to bridge the gap between planar interaction and three-dimensional navigation.

Book ChapterDOI
01 Jan 2000
TL;DR: It is shown that a similar display can be obtained by using hierarchical feature maps to represent the contents of a document archive while still having the general maps available for global orientation.
Abstract: Text collections may be regarded as an almost perfect application arena for unsupervised neural networks. This is because many operations computers have to perform on text documents are classification tasks based on noisy patterns. In particular we rely on self-organizing maps which produce a map of the document space after their training process. From geography, however, it is known that maps are not always the best way to represent information spaces. For most applications it is better to provide a hierarchical view of the underlying data collection in the form of an atlas where, starting from a map representing the complete data collection, different regions are shown at finer levels of granularity. Using an atlas, the user can easily “zoom” into regions of particular interest while still having general maps for overall orientation. We show that a similar display can be obtained by using hierarchical feature maps to represent the contents of a document archive. These neural networks have a layered architecture where each layer consists of a number of individual self-organizing maps. In this way, the contents of the text archive may be represented at arbitrary detail while still having the general maps available for global orientation.

Patent
30 Nov 2000
TL;DR: In this article, a zoom lens system is described that improves the operability of zooming by allowing either manual or motor-driven zooming, selected to match the situation and the user's preference, as a uniaxial two-operation type operation rod is pushed or drawn.
Abstract: PROBLEM TO BE SOLVED: To provide a zoom lens system which improves the operability of zooming by enabling zooming that matches the situation and the user's preference, using either manual or motor-driven zooming as a uniaxial two-operation type operation rod is pushed or drawn. SOLUTION: An operation changeover switch 130 selects whether the operation rod 16 or a zoom rate demand 26 is used for the zooming operation. When the operation rod 16 is selectively made effective, a drive changeover switch 132 selects whether manual driving or servo driving is used for zooming in response to the pushing/drawing operation of the operation rod 16. For manual driving, a control circuit 36 sets a clutch part 34 to the manual side so that zooming can be driven by the operation force of the operation rod 16. For servo driving, the clutch part 34 is set to the servo side and zooming is carried out by the servo driving of a zoom servo module 92.

01 Jan 2000
TL;DR: The ExploraGraph Navigator makes it possible to navigate through conceptual graphs with automatic arrangement of elements, zoom and "fish eye" effects.
Abstract: The ExploraGraph interface was designed to facilitate interaction in the context of distant learning. It was developed as an alternative to simple web interaction, in order to increase flexibility, visibility, and structure in the learning environment. It may be used as a front end to existing courses on the web. The ExploraGraph Navigator makes it possible to navigate through conceptual graphs with automatic arrangement of elements, zoom and "fish eye" effects. Each node of the graph may have a description attached to it and may give direct access to an application, a document or an Internet site. Graphic structures may thus be used to represent the organization of tools, activities, concepts or documents. The Navigator offers each user a tool to specify his goals and the system can support him, using multiple modalities: Hypertext, graphical cues (Lee & Lehman, 1993), Ms Agents avatars, voice, visual demonstrations and force feedback guiding.

Patent
Masahiko Kikuzawa1
20 Jan 2000
TL;DR: An image sensing apparatus and method which controls a noise reduction process under various photographing conditions and various functions of the apparatus, including a system control unit that controls a noise reduction unit by setting the noise reduction control mode to a zoom stop mode under conditions that an electronic zoom unit and a zoom lens are stopped, and to a zoom operation mode under conditions that either the electronic zoom unit or the zoom lens is operated.
Abstract: An image sensing apparatus and method which controls a noise reduction process under various photographing conditions and various functions of the apparatus, including a system control unit that controls a noise reduction unit by setting the noise reduction control mode to a zoom stop mode under conditions that an electronic zoom unit and a zoom lens are stopped, and controls the noise reduction unit by setting the noise reduction control mode to a zoom operation mode under conditions that either the electronic zoom unit or the zoom lens is operated.
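The rationale for switching modes is that temporal noise reduction averages consecutive frames, which smears detail while the image is changing during a zoom. A minimal sketch of a recursive temporal filter whose strength follows the zoom state (the coefficients and names here are illustrative assumptions, not from the patent):

```python
import numpy as np

def temporal_nr(prev: np.ndarray, frame: np.ndarray, zooming: bool) -> np.ndarray:
    """Recursive (IIR) temporal noise reduction: blend the new frame with
    the filtered history. Strong averaging when the zoom is stopped,
    weak averaging while zooming so moving detail is not smeared."""
    alpha = 0.85 if zooming else 0.25  # weight given to the current frame
    return alpha * frame + (1.0 - alpha) * prev
```

In "zoom stop" mode the filter leans on the history (alpha = 0.25) for maximum noise suppression; in "zoom operation" mode it tracks the incoming frame (alpha = 0.85).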

Journal ArticleDOI
TL;DR: An overall algorithm for real-time camera parameter extraction, one of the key elements in implementing a virtual studio, is presented, together with a new method for calculating the lens distortion parameter in real time.
Abstract: In this paper, we present an overall algorithm for real-time camera parameter extraction, which is one of the key elements in implementing a virtual studio, and we also present a new method for calculating the lens distortion parameter in real time. In a virtual studio, the motion of the virtual camera generating the graphic studio must follow the motion of the real camera in order to produce a realistic video product. This requires calculating the camera parameters in real time by analyzing the positions of feature points in the input video. Towards this goal, we first design a special calibration pattern utilizing the concept of the cross-ratio, which makes it easy to extract and identify feature points, so that we can calculate the camera parameters from the visible portion of the pattern in real time. It is important to consider lens distortion when zoom lenses are used, because it causes non-negligible errors in the computation of the camera parameters. However, the Tsai algorithm, adopted for camera calibration, calculates the lens distortion through nonlinear optimization in a triple parameter space, which is inappropriate for our real-time system. Thus, we propose a new linear method that calculates the lens distortion parameter independently and can be computed fast enough for our real-time application. We implement the whole algorithm using a Pentium PC and Matrox Genesis boards with five processing nodes in order to obtain a processing rate of 30 frames per second, the minimum requirement for TV broadcasting. Experimental results show that this system can be used practically for realizing a virtual studio.
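The cross-ratio the calibration pattern relies on is the classic projective invariant of four collinear points: it survives any perspective projection, which is what lets feature points be identified from a partial, distorted view of the pattern. A minimal sketch of the quantity (the exact ordering convention used in the paper is not stated, so this is one common convention):

```python
def cross_ratio(a: float, b: float, c: float, d: float) -> float:
    """Cross-ratio (a, b; c, d) of four collinear points given by their
    1-D coordinates along the line. Invariant under projective maps
    x -> (p*x + q) / (r*x + s), so it can label pattern features even
    in a perspective-distorted camera view."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))
```

Spacing the pattern's marks so that each group of four has a distinct cross-ratio makes the visible marks self-identifying in the video frame.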

Patent
04 Feb 2000
TL;DR: A force feedback interface with isotonic and isometric control capability coupled to a host computer that displays a graphical environment such as a GUI, built around a user-manipulatable physical object, such as a mouse or puck, movable in physical space.
Abstract: A force feedback interface having isotonic and isometric control capability coupled to a host computer that displays a graphical environment such as a GUI. The interface includes a user-manipulatable physical object movable in physical space, such as a mouse or puck. A sensor detects the object's movement and an actuator applies output force on the physical object. A mode selector selects the isotonic or isometric control mode of the interface from an input device such as a physical button or from an interaction between graphical objects. Isotonic mode provides input to the host computer based on the position of the physical object and updates the position of a cursor, and force sensations can be applied to the physical object based on movement of the cursor. Isometric mode provides input to the host computer based on an input force applied by the user to the physical object, where the input force is determined from a sensed deviation of the physical object in space. The input force opposes an output force applied by the actuator and is used to control a function of an application program, such as scrolling a document or panning or zooming a displayed view. An overlay force, such as a jolt or vibration, can be added to the output force in isometric mode to indicate an event or condition in the graphical environment.
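The key distinction in the abstract is position control (isotonic) versus rate control driven by sensed force (isometric). A minimal sketch of how a host might interpret the two modes (all names and the gain value are illustrative assumptions, not from the patent):

```python
def interpret_input(mode: str, position, force_deviation, scroll_gain: float = 0.5):
    """Sketch of the two control modes:
    - isotonic: the object's position maps directly to a cursor position;
    - isometric: the deviation sensed against the actuator's opposing
      force is read as an input force and drives a rate-controlled
      function such as scrolling, panning, or zooming."""
    if mode == "isotonic":
        x, y = position
        return ("cursor", x, y)                      # position control
    dx, dy = force_deviation
    return ("scroll_rate", scroll_gain * dx, scroll_gain * dy)  # rate control
```

Rate control is what makes isometric mode suit open-ended tasks like scrolling a long document: holding a constant force yields a constant scroll speed.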

Patent
25 Jul 2000
TL;DR: In this paper, a camera array is used to capture a set of contiguous images covering parts of a scene; the individual images are stitched into a seamless image of the scene, from which a view of any selected area can be obtained by zooming or panning and presented to a user.
Abstract: PROBLEM TO BE SOLVED: To use a fixed camera array, capable of zooming into and panning across any selected area of a scene, to capture the scene as images and video. SOLUTION: A camera array captures a set of contiguous images covering parts of a scene. The individual images are joined, using at least warping and fading techniques, into a seamless image of the scene. A portion of the seamless image is captured by zooming into or panning across a selected area of the scene, and the captured image is presented to a user.
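The "fading" step mentioned here typically means cross-fading the overlap between adjacent (already warped and aligned) images so the seam disappears. A minimal sketch of such a feathered join for two horizontally adjacent images (a simplified stand-in, not the patent's actual method):

```python
import numpy as np

def feather_blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Join two horizontally adjacent, aligned images by cross-fading
    over their shared `overlap` columns: weight ramps from fully-left
    to fully-right across the overlap, hiding the seam."""
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap) + left.shape[2:], dtype=np.float64)
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(0.0, 1.0, overlap)           # 0 -> left, 1 -> right
    alpha = alpha[None, :, None] if left.ndim == 3 else alpha[None, :]
    out[:, wl - overlap:wl] = (1 - alpha) * left[:, wl - overlap:] \
                              + alpha * right[:, :overlap]
    return out
```

Chaining this blend across all cameras in the array yields the seamless composite from which zoomed or panned sub-views are cropped.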