
Showing papers on "Zoom" published in 2003


Proceedings ArticleDOI
05 Apr 2003
TL;DR: A user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks finds that when using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks.
Abstract: As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.

374 citations
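
The ring geometry behind Halo is easy to reproduce: each ring is centered on the off-screen object, with a radius chosen so that the arc just intrudes into a border band of the viewport. A minimal Python sketch, assuming an axis-aligned viewport and a fixed intrusion depth (both parameter names are ours, not from the paper):

```python
def halo_radius(obj_x, obj_y, view_w, view_h, intrusion=20.0):
    """Radius of the halo ring for an off-screen object.

    The ring is centered on the object and sized so its arc reaches
    `intrusion` pixels inside the nearest viewport edge -- just enough
    to be visible without cluttering the display.
    """
    # Distance from the object to the nearest point of the viewport
    # rectangle (0, 0, view_w, view_h).
    dx = max(-obj_x, 0.0, obj_x - view_w)
    dy = max(-obj_y, 0.0, obj_y - view_h)
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0:
        return None  # object is on-screen; no halo needed
    return dist + intrusion
```

Rendering then clips the circle of that radius to the window; the curvature of the visible arc lets the viewer infer how far away the object at its center lies.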




Patent
30 Jun 2003
TL;DR: A zoom lens system has, from an object side, a first lens unit that is overall negative and includes a reflecting surface bending a luminous flux substantially 90 degrees, and a second lens unit disposed at a variable air distance from the first unit and having an optical power; at least one lens element in the entire lens system is made of resin, as discussed by the authors.
Abstract: An imaging device has a zoom lens system having a plurality of lens units and forming an optical image of an object so as to continuously optically zoom by varying distances between the lens units; and an image sensor converting the optical image formed by the zoom lens system to an electric signal. The zoom lens system has, from an object side, a first lens unit being overall negative and including a reflecting surface that bends a luminous flux substantially 90 degrees; and a second lens unit disposed with a variable air distance from the first lens unit, and having an optical power, and wherein at least one lens element made of resin is included in the entire lens system.

163 citations


Journal ArticleDOI
TL;DR: A formalism for describing multiscale visualizations of data cubes with both data and visual abstraction and a method for independently zooming along one or more dimensions by traversing a zoom graph with nodes at different levels of detail are presented.
Abstract: Most analysts start with an overview of the data before gradually refining their view to be more focused and detailed. Multiscale pan-and-zoom systems are effective because they directly support this approach. However, generating abstract overviews of large data sets is difficult and most systems take advantage of only one type of abstraction: visual abstraction. Furthermore, these existing systems limit the analyst to a single zooming path on their data and thus to a single set of abstract views. This paper presents: 1) a formalism for describing multiscale visualizations of data cubes with both data and visual abstraction and 2) a method for independently zooming along one or more dimensions by traversing a zoom graph with nodes at different levels of detail. As an example of how to design multiscale visualizations using our system, we describe four design patterns using our formalism. These design patterns show the effectiveness of multiscale visualization of general relational databases.

124 citations
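
The zoom-graph idea can be made concrete with a toy structure: a node is a view at one level of detail per dimension, and zooming along a single dimension follows an edge that refines only that dimension. A Python sketch under our own naming (the paper's formalism additionally attaches data and visual abstraction operators to each node):

```python
from itertools import product

# Levels of detail per dimension, coarse to fine (names are ours).
DIMS = {"time": ["year", "month", "day"],
        "geo":  ["country", "state", "city"]}

# One zoom-graph node per combination of per-dimension levels.
nodes = list(product(*DIMS.values()))  # e.g. ('year', 'country'), ...

def zoom_in(node, dim):
    """Traverse the edge that refines `dim` by one level, if any."""
    names = list(DIMS)
    i = names.index(dim)
    levels = DIMS[dim]
    j = levels.index(node[i])
    if j + 1 == len(levels):
        return node  # already at the finest level for this dimension
    return node[:i] + (levels[j + 1],) + node[i + 1:]

view = ("year", "country")
view = zoom_in(view, "time")   # -> ('month', 'country'); geo untouched
```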


Patent
05 Aug 2003
TL;DR: In this paper, the Play Rectangle (PR) and the Blackspace Timeline (BTL) allow users to navigate and operate media without the use of external tools, e.g., a zoom tool, a play cursor, which must be entered as a separate mode, etc.
Abstract: Methods and controls for playing media include a Blackspace Timeline (BTL), and the Play Rectangle (PR). Both of these structures exist as graphic objects and they permit users to both navigate and operate (e.g., edit, scrub, assemble, combine, etc.) media without the use of external tools, e.g., a zoom tool, a play cursor, which must be entered as a separate mode, etc. Media objects are dragged to the timeline, which is rescalable, and re-ranged by click and drag techniques. BTL and PR may function as a time scale device, or as a length measurement device.

122 citations


Proceedings ArticleDOI
19 Oct 2003
TL;DR: How smooth animations from one view to another can be defined is discussed, and a metric on the effect of simultaneous zooming and panning is defined, based on an estimate of the perceived velocity.
Abstract: Large 2D information spaces, such as maps, images, or abstract visualizations, require views at various levels of detail: close-ups to inspect details, overviews to maintain (literally) an overview. Users often switch between these views. We discuss how smooth animations from one view to another can be defined. To this end, a metric on the effect of simultaneous zooming and panning is defined, based on an estimate of the perceived velocity. Optimal is defined as smooth and efficient. Given the metric, these terms can be translated into a computational model, which is used to calculate an analytic solution for optimal animations. The model has two free parameters: animation speed and zoom/pan trade-off. A user experiment to find good values for these is described.

119 citations
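
The analytic solution is compact enough to sketch. The following Python is our reading of the closed-form optimal path for a straight-line pan of length u > 0 between viewport widths w0 and w1, with rho the zoom/pan trade-off (the user experiment reportedly favored rho near 1.4); variable names and the exact formulas should be checked against the paper before reuse:

```python
import math

def zoom_pan_path(u, w0, w1, rho=1.42):
    """Closed-form optimal zoom/pan path (our reading of the model).

    Returns (S, w, x): total path length S in the perceptual metric,
    plus functions giving viewport width w(s) and pan position x(s)
    for s in [0, S].  u is the pan distance, w0/w1 the start/end widths.
    """
    if abs(u) < 1e-9:  # pure zoom: exponential width change
        S = abs(math.log(w1 / w0)) / rho
        k = 1.0 if w1 > w0 else -1.0
        return S, (lambda s: w0 * math.exp(k * rho * s)), (lambda s: 0.0)
    b0 = (w1**2 - w0**2 + rho**4 * u**2) / (2 * w0 * rho**2 * u)
    b1 = (w1**2 - w0**2 - rho**4 * u**2) / (2 * w1 * rho**2 * u)
    r0 = math.asinh(-b0)
    r1 = math.asinh(-b1)
    S = (r1 - r0) / rho
    w = lambda s: w0 * math.cosh(r0) / math.cosh(rho * s + r0)
    x = lambda s: (w0 / rho**2) * (math.cosh(r0) * math.tanh(rho * s + r0)
                                   - math.sinh(r0))
    return S, w, x
```

Animating the arc-length parameter s linearly in time, s = V·t, then produces the smooth trajectory; V and rho are exactly the two free parameters the user experiment calibrates.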


Patent
27 Mar 2003
TL;DR: In this paper, a zoom lens with an easily bendable optical path was proposed, which has high optical specification performance such as high zoom ratio, a wide-angle arrangement, a small F-number and reduced aberrations.
Abstract: The invention relates to a zoom lens with an easily bendable optical path, which has high optical specification performance such as a high zoom ratio, a wide-angle arrangement, a small F-number and reduced aberrations. The zoom lens comprises a first lens group G1 that remains fixed during zooming, a second lens group G2 that has negative refracting power and moves during zooming, a third lens group G3 that has positive refracting power and moves during zooming, and a fourth lens group G4 that has positive refracting power and moves during zooming and focusing. The first lens group is composed of, in order from an object side, a negative meniscus lens convex on an object side thereof, a reflecting optical element for bending an optical path and a positive lens. In a state in focus at an infinite-distance object point, the fourth lens group G4 moves in a locus opposite to that of movement of the third lens group G3 during zooming.

118 citations


Patent
15 Oct 2003
TL;DR: The pan-zoom tool as discussed by the authors is a semitransparent, bull's eye type tracking menu that tracks the position of the pen and allows the user to select pan and zoom functions located in concentric rings of the tool graphic.
Abstract: The present invention is a system that provides a user with a pan-zoom tool that is controlled by a limited input device, such as a pen or stylus, of a pen-based computer. The pan-zoom tool is a semitransparent, bull's-eye-type tracking menu that tracks the position of the pen. A pen cursor or tracking symbol that corresponds to the location of the pen is allowed to move about within the pan-zoom tool graphic. The tool is moved when the location of the pen encounters a tracking boundary of the tool at an exterior edge of the menu. While moving within the menu, the pen can select pan and zoom functions located in concentric rings of the tool graphic as the active function of the tool. Once one of the pan or zoom functions is activated, motion of the pen on the surface of the display is interpreted as corresponding pan or zoom control commands, the tool becomes transparent, and the tracking symbol is replaced by a corresponding pan or zoom icon. The concentric-ring menu can have additional button-type controls, for functions in addition to pan and zoom, located on a boundary between the rings, forming access lanes for movement of the tracking menu during function selection. The function or control of the center ring can be the most recently selected function.

111 citations
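
Per pen event, the tracking-menu behavior reduces to two tests: which concentric ring the pen is in (that ring's function becomes active), and whether the pen has reached the outer tracking boundary (then the menu is dragged along). A minimal sketch, with ring radii and function names invented for illustration:

```python
import math

# Hypothetical radii: inner ring = zoom, outer ring = pan (ours, not
# from the patent).  The outer ring edge doubles as the tracking
# boundary that drags the menu.
RINGS = [(40.0, "zoom"), (80.0, "pan")]
BOUNDARY = 80.0

def on_pen_move(menu_cx, menu_cy, pen_x, pen_y):
    """Return (new_menu_center, function_under_pen)."""
    d = math.hypot(pen_x - menu_cx, pen_y - menu_cy)
    if d >= BOUNDARY:
        # Pen hit the tracking boundary: slide the menu so the
        # boundary stays under the pen.
        k = (d - BOUNDARY) / d
        menu_cx += (pen_x - menu_cx) * k
        menu_cy += (pen_y - menu_cy) * k
        return (menu_cx, menu_cy), None
    for radius, func in RINGS:
        if d < radius:
            return (menu_cx, menu_cy), func
    return (menu_cx, menu_cy), None
```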


Proceedings ArticleDOI
02 Nov 2003
TL;DR: A novel approach allows users to overcome display constraints by zooming into video frames while browsing; an automatic method for detecting focus regions is introduced to minimize the amount of user interaction.
Abstract: With the growing popularity of personal digital assistants and smart phones, people have become enthusiastic about watching videos on these mobile devices. However, a crucial challenge is to provide a better user experience for browsing videos on the limited and heterogeneous screen sizes. In this paper, we present a novel approach which allows users to overcome the display constraints by zooming into video frames while browsing. An automatic approach for detecting the focus regions is introduced to minimize the amount of user interaction. In order to improve the quality of the output stream, virtual camera control is employed in the system. Preliminary evaluation shows that this approach is an effective way for video browsing on small displays.

105 citations
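
The basic operation, cropping the frame to a detected focus region while preserving the display's aspect ratio, can be sketched in a few lines; the focus detector itself is the paper's contribution and is only assumed here:

```python
def crop_to_focus(frame_w, frame_h, focus, disp_w, disp_h):
    """Expand a focus rectangle to the display's aspect ratio.

    `focus` is (x, y, w, h) from some attention/saliency detector
    (assumed, not reproduced).  Returns the source rectangle to
    scale down to the small display.
    """
    fx, fy, fw, fh = focus
    target = disp_w / disp_h
    if fw / fh < target:          # region too narrow: widen the crop
        cw, ch = fh * target, fh
    else:                         # region too flat: heighten the crop
        cw, ch = fw, fw / target
    s = min(frame_w / cw, frame_h / ch, 1.0)   # keep crop inside frame
    cw, ch = cw * s, ch * s
    cx, cy = fx + fw / 2, fy + fh / 2          # focus center
    # Center the crop on the focus region, clamped to the frame.
    x = min(max(cx - cw / 2, 0.0), frame_w - cw)
    y = min(max(cy - ch / 2, 0.0), frame_h - ch)
    return x, y, cw, ch
```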


Patent
22 Dec 2003
TL;DR: A hierarchy of presentation information is laid out in a zoomable space based on its structure, and a path, a sequence of the presentation information for the slide show, may be created from this hierarchy.
Abstract: Methods and systems for supporting presentation using a zoomable space. In an exemplary method, a structure, such as a hierarchy, of presentation information is provided. The presentation information may include slides, text labels and graphical elements. The presentation information is laid out in the zoomable space based on the structure. A path may be created based on the hierarchy and may be a sequence of the presentation information for the slide show. When a modification is received in at least one of the hierarchy and the layout, the path may be automatically updated based on the modification. During a presentation, the presentation information is displayed based on the path.

103 citations
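
Deriving the default path from the hierarchy is essentially a depth-first traversal, regenerated whenever the hierarchy or layout changes. A toy sketch (the node structure is ours):

```python
def build_path(node, path=None):
    """Depth-first sequence of slides for the presentation path.

    `node` is a dict {"slide": ..., "children": [...]} -- a stand-in
    for the patent's hierarchy of slides, labels and graphics.
    """
    if path is None:
        path = []
    path.append(node["slide"])
    for child in node.get("children", []):
        build_path(child, path)
    return path

deck = {"slide": "overview",
        "children": [{"slide": "part-1", "children": []},
                     {"slide": "part-2",
                      "children": [{"slide": "detail", "children": []}]}]}
print(build_path(deck))  # ['overview', 'part-1', 'part-2', 'detail']
```

On any edit to the hierarchy, re-running the traversal yields the automatically updated path the patent describes.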


Patent
20 Oct 2003
TL;DR: A force feedback interface with isotonic and isometric control capability, coupled to a host computer that displays a graphical environment such as a GUI, is presented; the interface includes a user-manipulatable physical object, such as a mouse or puck, movable in physical space.
Abstract: A force feedback interface having isotonic and isometric control capability coupled to a host computer that displays a graphical environment such as a GUI. The interface includes a user manipulatable physical object movable in physical space, such as a mouse or puck. A sensor detects the object's movement and an actuator applies output force on the physical object. A mode selector selects isotonic and isometric control modes of the interface from an input device such as a physical button or from an interaction between graphical objects. Isotonic mode provides input to the host computer based on a position of the physical object and updates a position of a cursor, and force sensations can be applied to the physical object based on movement of the cursor. Isometric mode provides input to the host computer based on an input force applied by the user to the physical object, where the input force is determined from a sensed deviation of the physical object in space. The input force opposes an output force applied by the actuator and is used to control a function of an application program, such as scrolling a document or panning or zooming a displayed view. An overlay force, such as a jolt or vibration, can be added to the output force in isometric mode to indicate an event or condition in the graphical environment.
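
Since the behavior is hardware-bound, only the control mapping can be sketched: in isometric mode the sensed deviation of the held object maps to a rate, such as a scroll or zoom speed. A hypothetical mapping, with all constants illustrative rather than from the patent:

```python
def isometric_scroll_rate(deviation_mm, k=3.0, deadband_mm=0.5, vmax=40.0):
    """Map sensed deviation of the held object to a scroll rate.

    Rate control for isometric mode: the harder the user pushes
    against the actuator's restoring force, the faster the view
    scrolls or zooms.  Constants are illustrative only.
    """
    mag = abs(deviation_mm)
    if mag < deadband_mm:
        return 0.0                          # ignore hand tremor
    rate = k * (mag - deadband_mm) ** 1.5   # nonlinear gain feels smoother
    return min(rate, vmax) * (1 if deviation_mm > 0 else -1)
```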

01 Jan 2003
TL;DR: In this article, the authors present a visual servoing approach which allows a camera equipped with a motorized zoom lens to be positioned with respect to an object, regardless of whether the object is planar.
Abstract: This paper concerns visual servoing with a zooming camera. It presents a new visual servoing approach which allows a camera, equipped with a motorized zoom lens, to be positioned with respect to an object regardless of whether the object is planar. Indeed, planar objects are singular cases which cannot be handled by previous intrinsics-free visual servoing. The proposed method makes it possible to bring a camera back to its reference position while zooming. The focal length is initially controlled in order to keep the object in the field of view of the camera during the servoing. Then, the control of the focal length allows the singularity for planar objects to be resolved.

Patent
01 Dec 2003
TL;DR: In this article, a system and method for controlling the scaling of 3D computer models in a 3D display system include activating a zoom mode, selecting a model zoom point and setting a zoom scale factor.
Abstract: A system and method for controlling the scaling of a 3D computer model in a 3D display system are presented; they include activating a zoom mode, selecting a model zoom point, and setting a zoom scale factor. In exemplary embodiments according to the present invention, a system, in response to the selected model zoom point and the set scale factor, can implement a zoom operation and automatically move a model zoom point from its original position towards an optimum viewing point. In exemplary embodiments according to the present invention, upon a user's activating a zoom mode, selecting a model zoom point and setting a zoom scale factor, a system can simultaneously move a model zoom point to an optimum viewing point. In preferred exemplary embodiments according to the present invention, a system can automatically identify a model zoom point by applying defined rules to visible points of a displayed model that lie in a central viewing area. If no such visible points are available, the system can prompt a user to move the model until such points become available, or can select a model and a zoom point on that model by an automatic scheme.

Book ChapterDOI
01 Jan 2003
TL;DR: PadPrints as mentioned in this paper is a browser companion that dynamically builds a graphical history-map of visited web pages using a zooming user interface (ZUI) development substrate to display the history map using minimal screen space.
Abstract: We have implemented a browser companion called PadPrints that dynamically builds a graphical history-map of visited web pages. PadPrints relies on Pad++, a zooming user interface (ZUI) development substrate, to display the history-map using minimal screen space. PadPrints functions in conjunction with a traditional web browser but without requiring any browser modifications. We performed two usability studies of PadPrints. The first addressed general navigation effectiveness. The second focused on history-related aspects of navigation. In tasks requiring returns to prior pages, users of PadPrints completed tasks in 61.2% of the time required by users of the same browser without PadPrints. We also observed significant decreases in the number of pages accessed when using PadPrints. Users found browsing with PadPrints more satisfying than using Netscape alone.

Patent
27 Feb 2003
TL;DR: In this paper, a composite camera system includes a control section for performing positional control and magnification ratio control of at least one zoom camera for omnidirectional image data centered on a prescribed portion thereof.
Abstract: A composite camera system includes a control section for performing positional control and magnification ratio control of at least one zoom camera for omnidirectional image data centered on a prescribed portion thereof, the omnidirectional image data being obtained by an omnidirectional camera capable of taking an omnidirectional image over a viewing angle of a maximum of 360 degrees; and a display section for displaying an omnidirectional image taken by the omnidirectional camera and a zoom image taken by the zoom camera.

Journal ArticleDOI
TL;DR: A new approach, space-optimized tree, is described, for the visualization and navigation of tree-structured relational data, that uses a new hybrid viewing technique that combines two viewing methods, the modified semantic zooming and a focus+context technique.
Abstract: This paper describes a new approach, space-optimized tree, for the visualization and navigation of tree-structured relational data. This technique can be used especially for the display of very large hierarchies in a two-dimensional space. We discuss the advantages and limitations of current techniques of tree visualization. Our strategy is to optimize the drawing of trees in a geometrical plane and maximize the utilization of display space by allowing more nodes and links to be displayed at a limited screen resolution. Space-optimized tree is a connection+enclosure visualization approach that recursively positions children of a subtree into polygon areas and still uses a node-link diagram to present the entire hierarchical structure. To be able to handle the navigation of large hierarchies, we use a new hybrid viewing technique that combines two viewing methods, the modified semantic zooming and a focus+context technique. While the semantic zooming technique can enlarge a particular viewing area by filtering out the rest of the tree structure from the visualization, the focus+context technique allows the user to interactively focus, view and browse the entire visual structure with a reasonably high-density display.
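
One ingredient of the approach can be isolated in a few lines: give each child a share of the parent's display area proportional to its subtree size, then recurse. A simplified one-dimensional sketch of that allocation (the paper's actual recursive polygon geometry is considerably more elaborate):

```python
def subtree_size(node):
    """Total number of nodes under (and including) `node`."""
    return 1 + sum(subtree_size(c) for c in node.get("children", []))

def allocate(node, x, w, depth=0, out=None):
    """Partition a horizontal strip [x, x+w) among children in
    proportion to subtree size, recursing one level per generation.
    Returns a list of (name, x, width, depth) cells."""
    if out is None:
        out = []
    out.append((node["name"], x, w, depth))
    kids = node.get("children", [])
    total = sum(subtree_size(k) for k in kids)
    for k in kids:
        share = w * subtree_size(k) / total
        allocate(k, x, share, depth + 1, out)
        x += share
    return out
```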

Proceedings ArticleDOI
27 Apr 2003
TL;DR: An interactive 3D browser for large topographic maps using a visual display augmented by a haptic, or force feedback, display using a new haptic contact model and a collision detection algorithm optimized for the heightfield dataset.
Abstract: In this paper we develop an interactive 3D browser for large topographic maps using a visual display augmented by a haptic, or force feedback, display. The extreme size of our data files (over 100 million triangles) requires us to develop the "proxy graph algorithm", a new haptic contact model. The proxy graph algorithm approximates proven virtual proxy methods but enhances the performance significantly by restricting the proxy location to the edges and vertices of the object. The resulting algorithm requires less computation and reduces the average number of collision detection operations per triangle that the proxy crosses during each haptic update cycle. We also develop a collision detection algorithm optimized for our heightfield dataset. Our "MarsView" software enables hands-on interactive display of visual and geologic data with polygon counts in excess of 100 million triangles using a standard PC computer and a commercial haptic interface. MarsView's haptic user interface allows the user to physically interact with the surface as they pan it around and zoom in on details. The hybrid system renders complex scenes at full visual and haptic rates resulting in a more immersive user experience than a visual display alone.
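
The underlying contact test against a heightfield is cheap enough to sketch: sample the interpolated terrain height under the device and, on penetration, keep a proxy on the surface and render a spring force. A basic virtual-proxy sketch; the paper's proxy graph algorithm additionally restricts the proxy to mesh edges and vertices, which this simplification omits:

```python
import numpy as np

H = np.random.rand(512, 512) * 50.0    # toy heightfield (metres)

def height_at(x, y):
    """Bilinearly interpolated height; assumes 0 <= x, y < 511."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * H[i, j] + fx * (1 - fy) * H[i + 1, j]
            + (1 - fx) * fy * H[i, j + 1] + fx * fy * H[i + 1, j + 1])

def haptic_force(device_pos, k=800.0):
    """Virtual-proxy spring force for one haptic update cycle."""
    x, y, z = device_pos
    surface_z = height_at(x, y)
    if z >= surface_z:
        return (0.0, 0.0, 0.0)          # free space: no contact
    # Proxy stays on the surface; a spring pushes the device out.
    return (0.0, 0.0, k * (surface_z - z))
```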

Patent
Kenji Konno
29 Jul 2003
TL;DR: A taking lens apparatus has, from the object side: a first lens unit disposed at the object-side end of the zoom lens system, with a negative optical power as a whole, including a reflective member that bends the optical axis of the lens system substantially 90°, and remaining stationary relative to the image sensor during zooming; a second lens unit disposed on the image-sensor side of the first unit with a variable aerial distance in between; and a third lens unit that moves toward the object side during the zooming of the zoom lens system.
Abstract: A taking lens apparatus has a zoom lens system that is composed of a plurality of lens units and that achieves zooming by varying the distances between the lens units and an image sensor that converts the optical image formed by the zoom lens system into an electrical signal. The zoom lens system has a first lens unit that is disposed at the object-side end of the zoom lens system, that has a negative optical power as a whole, that includes a reflective member for bending the optical axis of the zoom lens system as a whole at substantially 90°, and that remains stationary relative to the image sensor during the zooming of the zoom lens system, a second lens unit that is disposed on the image-sensor side of the first lens unit with a variable aerial distance secured in between, that has a positive optical power as a whole, and that moves toward the object side during the zooming of the zoom lens system from the wide-angle end to the telephoto end, and a third lens unit that is disposed on the image-sensor side of the second lens unit with a variable aerial distance secured in between, that has a positive optical power as a whole, and that moves toward the object side during the zooming of the zoom lens system.

Journal ArticleDOI
TL;DR: Very large gigapixel images of tissue whole-mounts and tissue arrays with high quality and morphologic detail are now being generated for teaching, publication, research, and morphometric analysis.
Abstract: A standard microscope was reconfigured as a virtual slide generator by adding a Prior Scientific H101 robotic stage with H29 controller and 0.1 μm linear scales and a Hitachi HV-C20 3CCD camera. Media Cybernetics Image Pro Plus version 4 (IP4) software controlled stage movement in the X-, Y-, and Z-axis, whereas a Media Cybernetics Pro-Series Capture Kit captured images at 640 x 480 pixels. Stage calibration, scanning algorithms, storage requirements, and viewing modes were standardized. IP4 was used to montage the captured images into a large virtual slide image that was subsequently saved in TIF or JPEG format. Virtual slides were viewed at the workstation using the IP4 viewer as well as Adobe Photoshop and Kodak Imaging. MGI Zoom Server delivered the virtual slides to the Internet, and MicroBrightField's Neuroinformatica viewing software provided a browser-based virtual microscope interface together with labeling tools for annotating virtual slides. The images were served from a Windows 2000 platform with 2 GB RAM, 500 GB of disk storage, and a 1.0 GHz P4 processor. To conserve disk space on the image server, TIF files were converted to the FlashPix (FPX) file format using a compression ratio of 10:1. By using 4x, 10x, 20x, and 40x objectives, very large gigapixel images of tissue whole-mounts and tissue arrays with high quality and morphologic detail are now being generated for teaching, publication, research, and morphometric analysis. Technical details and a demonstration of our system can be found on the Web at http://virtualmicroscope.osu.edu.
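
The montage step itself is straightforward once the stage grid is calibrated: each captured field is pasted at its grid offset into one large canvas. A toy sketch with Pillow; the tile naming scheme and grid size are ours:

```python
from PIL import Image

TILE_W, TILE_H = 640, 480      # capture size reported in the paper
COLS, ROWS = 8, 6              # stage grid -- illustrative only

def montage(tile_path_fmt="tile_r{r}_c{c}.tif"):
    """Stitch a COLS x ROWS grid of captured fields into one image.

    Assumes the robotic stage steps exactly one field per move, so
    tiles abut without overlap (real systems correct for overlap).
    """
    canvas = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))
    for r in range(ROWS):
        for c in range(COLS):
            tile = Image.open(tile_path_fmt.format(r=r, c=c))
            canvas.paste(tile, (c * TILE_W, r * TILE_H))
    return canvas

# montage().save("virtual_slide.tif")  # then compress ~10:1 for serving
```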

Patent
17 Jun 2003
TL;DR: A combination mobile communication device and camera that combines a hand-held mobile terminal and camera in the same physical package is described in this article, where a user selectively places the device in either a communication mode for engaging in wireless communication with a remote device, or in a camera mode for capturing and/or viewing images.
Abstract: A combination mobile communication device and camera that combines a hand-held mobile terminal and camera in the same physical package is described herein. The combination mobile communication device and camera uses one or more multi-function controls to control communication functions and camera functions. A user selectively places the device in either a communication mode for engaging in wireless communication with a remote device, or in a camera mode for capturing and/or viewing images. A multi-function control disposed on a side of the combination mobile communication device and camera comprises a multi-directional button that controls a communication function, such as the volume of the speaker, when the device is in a communication mode. When the device is in a camera mode, the multi-function control is used as a zoom control.

01 Jan 2003
TL;DR: Results from tests performed to investigate different designs of a scrolling function used to make it possible for the user to navigate a virtual environment larger than the limited workspace of the haptic device are reported.
Abstract: The present article reports results from tests performed to investigate different designs of a scrolling function used to make it possible for the user to navigate a virtual environment larger than the limited workspace of the haptic device. A preliminary

Patent
28 May 2003
TL;DR: In this article, the authors describe a system for developing RTN-compatible grammatical models within one integrated development software environment (IDE) that is efficient, controllable and overviewable, because editing is done through a 2D/3D graphical user interface (GUI) with a pointing device (e.g. a mouse).
Abstract: The described methods and systems make it possible to develop RTN-compatible grammatical models, for syntax or other linguistic levels, within one integrated development software environment (IDE) that is efficient, controllable and overviewable, because editing is done through a 2D/3D graphical user interface (GUI) with a pointing device (e.g. a mouse). Instantaneous evaluation of any fresh change to the grammar is carried out by re-parsing a test corpus of typically thousands of sentences, functioning as language examples to try to comply with, yielding immediate statistical feedback on parsability, thanks to an integrated new fast parsing method. Problematic corpus sentences can automatically be brought to the user's attention, and the user can visually zoom in on a problem spot in the model or the sentence without losing the overview. All this increases efficiency and quality in grammar model development. The integrated parser can also be used separately, for instance for machine translation, and is based on efficient concatenation of pre-calculated partial pathways through the RTN grammar. These are exhaustively calculated within reasonable preset limits.

Patent
28 Mar 2003
TL;DR: In this article, a low-vision viewer magnifies the face-up source material in the visual field of a camera and displays the magnified image on a VDU or other display means.
Abstract: A low-vision viewer magnifies the face-up source material in the visual field of a camera and displays the magnified image on a VDU or other display means. In a static mode, the camera captures and stores a high-resolution image of the source material. This high-resolution image can be manipulated and subsequently displayed on the VDU. In a live mode, the camera captures a low resolution image of the source material or a high resolution image of a section of the source material to provide a high frame rate for full motion video. In the live capture mode, the low-vision user can move their view around the source material and zoom in on a desired section of interest. The same camera is used in either static or live modes.

01 Jan 2003
TL;DR: This work has developed new interaction techniques for digital video based on semantic zooming and lenses that provide multiple lenses on the same timeline, so the user can see more than one location simultaneously.
Abstract: Digital video is becoming increasingly prevalent. Unfortunately, editing video remains difficult for several reasons: it is a time-based medium, it has dual tracks of audio and video, and current tools force users to work at the smallest level of detail. Based on interviews with professional video editors and observations of the use of our own editor, we have developed new interaction techniques for digital video based on semantic zooming and lenses. When copying or cutting a piece of video, it is desirable to select both ends precisely. However, although many video editing tools allow zooming into a fine level of detail, they do not allow zooming at more than one location simultaneously. Our system provides multiple lenses on the same timeline, so the user can see more than one location simultaneously.
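
A timeline lens is, at bottom, a nonuniform mapping from time to screen position: inside each lens interval the scale is multiplied by a magnification factor, and disjoint lenses compose naturally. A sketch of that mapping (the data layout is ours):

```python
# Each lens magnifies one time interval: (start_s, end_s, magnification).
# Lenses are assumed disjoint and sorted -- illustrative values.
LENSES = [(10.0, 12.0, 8.0), (55.0, 56.0, 8.0)]
BASE_PX_PER_S = 2.0            # scale of the unmagnified timeline

def time_to_x(t):
    """Screen x-coordinate of time t on a lensed timeline.

    Time inside a lens advances at mag * base pixels per second, so
    both ends of a cut can be placed frame-precisely while the rest
    of the timeline stays compressed.
    """
    x, prev = 0.0, 0.0
    for start, end, mag in LENSES:
        x += max(min(t, start) - prev, 0.0) * BASE_PX_PER_S       # before lens
        x += max(min(t, end) - start, 0.0) * BASE_PX_PER_S * mag  # inside lens
        prev = end
    return x + max(t - prev, 0.0) * BASE_PX_PER_S                 # after lenses
```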

Patent
14 Jul 2003
TL;DR: In this article, a monitor device for a moving body such as a vehicle, aircraft or vessel, moving at a certain speed is described, where the area of image to be displayed is selected in accordance with the zoom ratio and the image of the selected area is displayed on a display screen in an enlarged form.
Abstract: Disclosed is a monitor device for a moving body, such as a vehicle, aircraft or vessel, moving at a certain speed. The monitor device displays an image of the scene in front of the moving body with the central area enlarged in accordance with the running speed of the moving body, such that the condition of a faraway portion can be recognized accurately. A zoom ratio calculating section determines a zoom ratio in accordance with the running speed of the moving body. The area of the image to be displayed is selected in accordance with the zoom ratio, and the image of the selected area is displayed on a display screen in an enlarged form. A specially designed distortion lens may be used to take the picture of the front scene to form an image of the scene with its central area optically enlarged.
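
The control loop amounts to a speed-to-zoom function followed by a center crop. A minimal sketch; the gain and limits are illustrative, as the patent does not commit to specific values:

```python
def zoom_ratio(speed_kmh, k=0.02, z_min=1.0, z_max=3.0):
    """Zoom ratio grows with speed so the distant road is magnified."""
    return max(z_min, min(z_max, 1.0 + k * speed_kmh))

def crop_for_zoom(frame_w, frame_h, z):
    """Central region to enlarge to full screen for zoom ratio z."""
    w, h = frame_w / z, frame_h / z
    return ((frame_w - w) / 2, (frame_h - h) / 2, w, h)

z = zoom_ratio(120.0)   # 1 + 0.02 * 120 = 3.4, clamped to z_max = 3.0
```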

Patent
10 Jun 2003
TL;DR: A zoom optical system includes a deformable element having a focusing function and two lens groups movable in a magnification change and having a magnification varying function or a compensating function for compensating for a shift of an image surface as discussed by the authors.
Abstract: A zoom optical system includes a deformable element having a focusing function and two lens groups movable in a magnification change and having a magnification varying function or a compensating function for compensating for a shift of an image surface. Alternatively, a zoom optical system includes, in order from the object side, a first group having a negative power and being fixed in a magnification change, a second group having a positive power and being movable in a magnification change, and a third group movable in a magnification change. The first group has a deformable element having a focusing function. An imaging apparatus is provided with either zoom optical system. In this way, a high-performance zoom optical system with small fluctuation of aberrations despite the use of a deformable element, and a photographing apparatus using the same zoom optical system, are provided.

Proceedings ArticleDOI
03 Dec 2003
TL;DR: The ShareCam system, online experiments, and results with two frame selection models based on user "satisfaction", one memoryless and the second based on satisfaction over multiple motion cycles, are described.
Abstract: ShareCam is a robotic pan, tilt, and zoom web-based camera controlled by simultaneous frame requests from online users. Part II describes algorithms; this paper, part I, focuses on the system. Robotic webcameras are commercially available but currently restrict control to only one user at a time. ShareCam introduces a new interface that allows simultaneous control by many users. In this Java-based interface, participating users interact through remotely located browsers, drawing desired frames over a fixed panoramic image. User inputs are transmitted back to a pair of PC servers that compute optimal camera parameters, servo the camera, and provide a video stream to all users. We describe the system and online experiments, and compare results with two frame selection models based on user "satisfaction", one memoryless and the second based on satisfaction over multiple motion cycles.
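
A plausible reading of the memoryless model scores each request by how much of it the chosen frame covers, discounted when the frame is so much larger than the request that the region is seen at reduced resolution; the paper's exact formula may differ. A sketch:

```python
def satisfaction(req, frame):
    """Score one user's requested rectangle against a camera frame.

    Rectangles are (x, y, w, h).  Coverage is the fraction of the
    request inside the frame; the resolution term penalizes frames
    much larger than the request.  This is our reading of a
    'satisfaction'-style metric, not the paper's exact definition.
    """
    rx, ry, rw, rh = req
    fx, fy, fw, fh = frame
    ix = max(0.0, min(rx + rw, fx + fw) - max(rx, fx))
    iy = max(0.0, min(ry + rh, fy + fh) - max(ry, fy))
    coverage = (ix * iy) / (rw * rh)
    resolution = min(1.0, (rw * rh) / (fw * fh))
    return coverage * resolution

def total_satisfaction(requests, frame):
    """Objective the servers maximize over candidate frames."""
    return sum(satisfaction(r, frame) for r in requests)
```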

Patent
07 Nov 2003
TL;DR: In this paper, a method and system for generating an image display plan is presented, which allows a user to create a display plan that specifies a sequence of images that are to be displayed and how the images are displayed.
Abstract: A method and system for generating an image display plan is provided. In one embodiment, a planning system allows a user to create a display plan that specifies a sequence of images that are to be displayed and how the images are to be displayed. The planning system allows a user to specify different versions of the plan for different aspect ratios. When displaying the image, the planning system may display multiple viewports simultaneously on the image, one for each of the different aspect ratios. The planning system may allow the multiple viewports to be moved around and resized as a unit maintaining a common center point for the viewports.
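
The common-center behavior is simple to express: each aspect ratio gets its own viewport derived from one shared center and size, so moving or resizing the unit means changing only those two values. A tiny sketch (the aspect list and parameterization are ours):

```python
def viewports(center, height, aspects=(4 / 3, 16 / 9)):
    """Axis-aligned viewports sharing one center, one per aspect ratio.

    Moving or resizing the unit means changing only `center` or
    `height`; each viewport derives its own width.  Returns a list
    of (x, y, w, h) rectangles.
    """
    cx, cy = center
    return [(cx - a * height / 2, cy - height / 2, a * height, height)
            for a in aspects]
```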

Proceedings Article
01 Jan 2003
TL;DR: This work presents a method to animate the background canvas for non-photorealistic rendering animations and walkthroughs, which greatly improves the sensation of motion and 3D "immersion".
Abstract: The static background paper or canvas texture usually used for non-photorealistic animation greatly impedes the sensation of motion and results in a disturbing "shower door" effect. We present a method to animate the background canvas for non-photorealistic rendering animations and walkthroughs, which greatly improves the sensation of motion and 3D "immersion". The complex motion field induced by the 3D displacement is matched using purely 2D transformations. The motion field of forward translations is approximated using a 2D zoom in the texture, and camera rotation is approximated using 2D translation and rotation. A rolling-ball metaphor is introduced to match the instantaneous 3D motion with a 2D transformation. An infinite zoom in the texture is made possible by using a paper model based on multifrequency solid turbulence. Our results indicate a dramatic improvement over a static background.
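
The infinite zoom works because the turbulence is self-similar across octaves: as the camera zooms in, each octave drifts one slot coarser and a new fine octave fades in from zero weight, so the texture can rescale forever without popping. A sketch of that cyclic octave blend, with the noise function stubbed and details that may differ from the paper's construction:

```python
import math

def noise(x, y):
    """Stub for band-limited 2D gradient noise (e.g. Perlin)."""
    return math.sin(x * 12.9898 + y * 78.233) * 0.43 + 0.5

def envelope(t, n):
    """Smooth weight for relative octave t in [0, n]: fades out the
    coarsest octave (t -> 0) and fades in the finest (t -> n)."""
    if t <= 0.0 or t >= n:
        return 0.0
    edge = min(t, n - t, 1.0)          # linear ramp at both ends
    return edge * 0.5 ** t             # 1/f-style amplitude falloff

def paper_texture(x, y, zoom, n=5):
    """Self-similar turbulence that supports an infinite 2D zoom.

    When `zoom` doubles, each world-space octave slides one slot
    coarser and a new fine one enters at zero weight, so the image
    changes smoothly frame to frame, forever.
    """
    level = math.log2(zoom)
    j0 = math.floor(level)
    val = wsum = 0.0
    for j in range(j0, j0 + n + 1):    # world-space octaves in play
        w = envelope(j - level, n)
        if w == 0.0:
            continue
        f = 2.0 ** (j - level)         # screen-space frequency
        val += w * noise(x * f, y * f)
        wsum += w
    return val / wsum
```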

Proceedings ArticleDOI
08 Jun 2003
TL;DR: A system that allows n networked users to share control over a robotic webcamera to best satisfy the user requests, by solving a geometric optimization problem that requires fitting one rectangle to many, is considered.
Abstract: We consider a system that allows n networked users to share control over a robotic webcamera. Each user guides the camera pan, tilt and zoom by drawing a rectangle in the user interface. The server adjusts the camera to best satisfy the user requests by solving a geometric optimization problem that requires fitting one rectangle to many. We improve upon previous results with an O(n^(3/2) log^3 n) time exact algorithm for this problem. We also present a simple near-linear time ε-approximation algorithm. We have implemented the latter and report on experimental results.
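
The optimization can be imitated crudely by sweeping a lattice of candidate frames and scoring each against all n requests; this toy grid search conveys the objective, not the paper's exact or ε-approximation algorithms. It reuses total_satisfaction from the ShareCam sketch above:

```python
def best_frame(requests, pan_range, zoom_levels, step=20.0, aspect=4 / 3):
    """Brute-force the frame maximizing total user satisfaction.

    Candidate frames sweep pan positions at `step` spacing for each
    zoom level (frame width) within pan_range = (x_min, x_max,
    y_min, y_max).  O(candidates * n) -- illustrative only.
    """
    best, best_score = None, -1.0
    x_min, x_max, y_min, y_max = pan_range
    for fw in zoom_levels:
        fh = fw / aspect
        x = x_min
        while x + fw <= x_max:
            y = y_min
            while y + fh <= y_max:
                frame = (x, y, fw, fh)
                score = total_satisfaction(requests, frame)
                if score > best_score:
                    best, best_score = frame, score
                y += step
            x += step
    return best, best_score
```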