
Showing papers on "Zoom published in 2006"


Patent
14 Feb 2006
TL;DR: In this paper, the authors present a control framework for organizing, selecting and launching media items, including graphical user interfaces coupled with an optional 3D control device for a collection of the basic control primitives of point, click, scroll, hover and zoom, which permit easy and rapid selection of media items, e.g., movies, songs, etc., from large or small collections.
Abstract: Systems and methods according to the present invention provide a control framework for organizing, selecting and launching media items, including graphical user interfaces coupled with an optional 3D control device for a collection of the basic control primitives of point, click, scroll, hover and zoom, which permit easy and rapid selection of media items, e.g., movies, songs, etc., from large or small collections. The remote control maps natural hand movements and gestures into relevant commands while the graphical display uses images, zooming for increased/decreased levels of detail and continuity of GUI objects to provide easy organization, selection and navigation among the media items by a user.

201 citations


Patent
06 Feb 2006
TL;DR: In this paper, a group of photo thumbnails is presented to the user, and when a user selects one of the thumbnails, a transition is provided replacing the group of thumbnails with the photo represented by the selected thumbnail.
Abstract: Groups of photo thumbnails are presented to the user, and when a user selects one of the thumbnails, a transition is provided replacing the group of thumbnails with the photo represented by the selected thumbnail. The photo may be displayed without cropping or stretching. In addition, a zoom/enlargement animation of the selected thumbnail is provided, and also possibly of the remaining thumbnails in the group, which then transitions into the represented photo. In addition, after or during the zooming animation, a cross-fading may occur such that the thumbnails fade out and the represented photo fades in. These types of transitions and user inputs are provided both while the user is manually browsing thumbnails and while the user is viewing an automated slideshow of the thumbnails.

177 citations


Patent
11 Jul 2006
TL;DR: In this article, a fly over user interface (FOUI) is presented for navigating a display screen to search for a desired item of information stored in an electronic device, such as a portable computer, a personal computer, a cellular telephone, a digital watch, etc.
Abstract: A method and a system for navigating a display screen to search for a desired item of information stored in an electronic device. The electronic device includes a novel fly over user interface (FOUI) capable of receiving commands from a user to provide a zoom out view of the display screen. A user may commence a navigation session by touching the display screen in a non active area or by clicking on a specifically designated icon to activate the user interface. During the navigation session, the display screen is zoomed out and a magnifying area may appear on the display screen. The user interface enables the user to scroll the zoomed-out display screen by dragging the magnifying area towards an edge of the display screen to find a desired item of information. The display screen may be a display screen of a digital device (e.g., a portable computer, a personal computer, a cellular telephone, a digital watch, etc.). The user may terminate a navigation session by removing the pointer from the display screen.

176 citations


Proceedings ArticleDOI
Shumeet Baluja1
23 May 2006
TL;DR: This work casts the web page segmentation problem into a machine learning framework, re-examining the task through the lens of entropy reduction and decision tree learning; the result is an efficient and effective page segmentation algorithm.
Abstract: Fitting enough information from webpages to make browsing on small screens compelling is a challenging task. One approach is to present the user with a thumbnail image of the full web page and allow the user to simply press a single key to zoom into a region (which may then be transcoded into wml/xhtml, summarized, etc). However, if regions for zooming are presented naively, this yields a frustrating experience because of the number of coherent regions, sentences, images, and words that may be inadvertently separated. Here, we cast the web page segmentation problem into a machine learning framework, where we re-examine this task through the lens of entropy reduction and decision tree learning. This yields an efficient and effective page segmentation algorithm. We demonstrate how simple techniques from computer vision can be used to fine-tune the results. The resulting segmentation keeps coherent regions together when tested on a broad set of complex webpages.

153 citations


Journal ArticleDOI
TL;DR: A model that makes predictions about user performance on comparison tasks with different interface options is presented, and a design heuristic is proposed: extra windows are needed when visual comparisons must be made involving patterns of a greater complexity than can be held in visual working memory.
Abstract: In order to investigate large information spaces effectively, it is often necessary to employ navigation mechanisms that allow users to view information at different scales. Some tasks require frequent movements and scale changes to search for details and compare them. We present a model that makes predictions about user performance on such comparison tasks with different interface options. A critical factor embodied in this model is the limited capacity of visual working memory, allowing for the cost of visits via fixating eye movements to be compared to the cost of visits that require user interaction with the mouse. This model is tested with an experiment that compares a zooming user interface with a multi-window interface for a multiscale pattern matching task. The results closely matched predictions in task performance times; however error rates were much higher with zooming than with multiple windows. We hypothesized that subjects made more visits in the multi-window condition, and ran a second experiment using an eye tracker to record the pattern of fixations. This revealed that subjects made far more visits back and forth between pattern locations when able to use eye movements than they made with the zooming interface. The results suggest that only a single graphical object was held in visual working memory for comparisons mediated by eye movements, reducing errors by reducing the load on visual working memory. Finally we propose a design heuristic: extra windows are needed when visual comparisons must be made involving patterns of a greater complexity than can be held in visual working memory.

150 citations


Patent
03 Jan 2006
TL;DR: In this paper, the authors present a data processing tool for the viewing of real-time, critical patient data on remote and/or mobile devices, which is based on the latest GDI+, GAPI and PDA drawing techniques.
Abstract: A data processing tool for the viewing of real-time, critical patient data on remote and/or mobile devices. The tool efficiently renders graphical data on the screen of the remote device in a manner that makes it practical for the health care provider to accurately and timely review the data for the purpose of making an informed decision about the condition of the patient. Charting control is established and implemented using the latest GDI+, GAPI and PDA drawing techniques. The charting components provide landscape support, an ability to overlay patient data and patient images, zoom in/zoom out, custom variable speed scrolling, split screen support, and formatting control. The methodology operates as an asynchronous application, without sacrificing crucial processing time in the mobile/handheld device. The methodology allows the critical patient data to be streamed in real-time to the handheld device while conserving enough CPU power to simultaneously allow the end user to interact at will with the responsive display application. The methodology is structured using object oriented concepts and design patterns. Each logical tier of the methodology, from the data access objects and the charting control objects, to the user interface objects, is structured with precise interfaces. Finally, the methodology implements an IT management console that allows system managers to monitor the exchange of data between hospital systems and the primary database, including all patient data packets, notifications and alerts, connected remote devices, etc.

142 citations


Journal ArticleDOI
TL;DR: It is demonstrated that it is possible to achieve a high rate of accuracy in source camera identification by noting the intrinsic lens radial distortion of each camera.
Abstract: Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.

134 citations


Patent
29 Jun 2006
TL;DR: In this paper, a 3D GUI provides an interface for a variety of applications including games, web browsers and operating systems, and a user initiates a browser application by entering a URL (402) into a client device, which is forwarded to the content distribution system.
Abstract: A 3D GUI provides an interface for a variety of applications including games, web browsers and operating systems. A user initiates a browser application by entering a URL (402) into a client device, which is forwarded to the content distribution system. The content distribution system retrieves the associated web page (404) which forms one interior surface of the cell (400). The remaining interior surfaces (406), (408), (410), (412), (414), (416) and (418) include the preceding seven web pages visited by the user. The interior view of the cell is controlled by the user with a walkthrough interface, and an exterior view of the cell is controlled by a user with rotation and zoom functions.

127 citations


Patent
Blaise Aguera y Arcas1
29 Sep 2006
TL;DR: In this article, a non-physically proportional scaling of an image having at least one object is discussed, where at least some elements of the at least one object are scaled up and/or down in a way that is not physically proportional to one or more zoom levels associated with the zooming.
Abstract: Methods and apparatus are contemplated to perform various actions, including: zooming into or out of an image having at least one object, wherein at least some elements of the at least one object are scaled up and/or down in a way that is non-physically proportional to one or more zoom levels associated with the zooming, and wherein, for example, the non-physically proportional scaling may be expressed by the following formula: p=d′·z a , where p is a linear size in pixels of one or more elements of the object at the zoom level, d′ is an imputed linear size of the one or more elements of the object in physical units, z is the zoom level in units of physical linear size/pixel, and a is a power law where a≠−1.

126 citations
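The patent's power-law formula p = d′·z^a is compact enough to sketch directly (the function name and sample values below are illustrative, not from the patent):

```python
def scaled_pixel_size(d_prime: float, z: float, a: float) -> float:
    """Non-physically proportional scaling: p = d' * z**a.

    d_prime -- imputed linear size of the element in physical units
    a       -- power-law exponent; a == -1 would give ordinary
               physically proportional scaling (p = d'/z)
    z       -- zoom level, in units of physical linear size per pixel
    """
    return d_prime * z ** a
```

With a = 0 the element keeps a constant on-screen size at every zoom level (as map labels often do); exponents between -1 and 0 make it shrink more slowly than the surrounding, physically scaled geometry.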


Proceedings ArticleDOI
22 Apr 2006
TL;DR: An evaluation of this Windows XP ABC system, based on a multi-method approach, showed that users found the ABC XP extension easy to use and likely to be useful in their own work.
Abstract: Research has shown that computers are notoriously bad at supporting the management of parallel activities and interruptions, and that mobility increases the severity of these problems. This paper presents activity-based computing (ABC) which supplements the prevalent data- and application-oriented computing paradigm with technologies for handling multiple, parallel and mobile work activities. We present the design and implementation of ABC support embedded in the Windows XP operating system. This includes replacing the Windows Taskbar with an Activity Bar, support for handling Windows applications, a zoomable user interface, and support for moving activities across different computers. We report an evaluation of this Windows XP ABC system which is based on a multi-method approach, where perceived ease-of-use and usefulness was evaluated together with rich interview material. This evaluation showed that users found the ABC XP extension easy to use and likely to be useful in their own work.

122 citations


Proceedings ArticleDOI
12 Sep 2006
TL;DR: This paper presents a user study comparing the Halo [2] approach with two other techniques based on arrows, and investigates the effectiveness of the three techniques with respect to the number of off-screen objects.
Abstract: Browsing large information spaces such as maps on the limited screen of mobile devices often requires people to perform panning and zooming operations that move relevant display content off-screen. This makes it difficult to perform spatial tasks such as finding the location of Points Of Interest (POIs) in a city. Visualizing the location of off-screen objects can mitigate this problem: in this paper, we present a user study comparing the Halo [2] approach with two other techniques based on arrows. Halo surrounds off-screen objects with circles that reach the display window, so that users can derive the location and distance of objects by observing the visible portion of the corresponding circles. In the two arrow-based techniques, arrows point at objects, and their size and body length, respectively, convey the distance of objects. Our study involved four tasks requiring users to identify and compare the locations of off-screen objects, and also investigated the effectiveness of the three techniques with respect to the number of off-screen objects. Arrows allowed users to order off-screen objects faster and more accurately according to their distance, while Halo allowed users to better identify the correct location of off-screen objects. Implications of these results for mobile map-based applications are also discussed.
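A minimal sketch of the Halo geometry: the ring is centered on the off-screen object and sized so that a fixed-depth arc intrudes into the viewport. The intrusion depth and the clamped-distance computation below are simplifying assumptions, not the published implementation.

```python
def halo_radius(obj_x: float, obj_y: float,
                view_w: float, view_h: float,
                intrusion: float = 20.0) -> float:
    """Radius of a Halo ring for an object outside the viewport
    (0, 0)-(view_w, view_h).  The curvature of the visible arc lets
    the user estimate how far off-screen the object lies.
    """
    # Distance from the object to the nearest point of the viewport.
    dx = max(-obj_x, 0.0, obj_x - view_w)
    dy = max(-obj_y, 0.0, obj_y - view_h)
    dist = (dx * dx + dy * dy) ** 0.5
    return dist + intrusion
```

A nearby object produces a small, sharply curved arc at the screen edge; a distant one produces a large, almost flat arc, which is the distance cue the study compares against the arrow-based techniques.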

Patent
04 Dec 2006
TL;DR: In this paper, the authors leverage programming language extensions, e.g., for SVG, to create zoomable user interfaces.
Abstract: Systems and methods according to the present invention leverage programming language extensions, e.g., for SVG, to create zoomable user interfaces.

Proceedings ArticleDOI
22 Apr 2006
TL;DR: The OrthoZoom Scroller is introduced, a novel interaction technique that improves target acquisition in very large one-dimensional spaces and is about twice as fast as Speed-Dependent Automatic Zooming to perform pointing tasks whose index of difficulty is in the 10-30 bits range.
Abstract: This article introduces the OrthoZoom Scroller, a novel interaction technique that improves target acquisition in very large one-dimensional spaces. The OrthoZoom Scroller requires only a mouse to perform panning and zooming in a 1D space. Panning is performed along the slider dimension while zooming is performed along the orthogonal one. We present a controlled experiment showing that the OrthoZoom Scroller is about twice as fast as Speed-Dependent Automatic Zooming to perform pointing tasks whose index of difficulty is in the 10-30 bits range. We also present an application to browse large textual documents with the OrthoZoom Scroller that uses semantic zooming and snapping on the structure.
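The core mapping can be sketched as follows; the axis assignment (vertical slider, horizontal zoom) and the exponential zoom law are assumptions for illustration, not taken from the paper.

```python
def orthozoom_step(drag_x: float, drag_y: float,
                   pan_gain: float = 1.0,
                   zoom_base: float = 1.01):
    """Map one 2D mouse drag to a (pan, zoom-factor) pair in the
    spirit of the OrthoZoom Scroller: motion along the slider axis
    pans the 1D space, motion along the orthogonal axis zooms.
    """
    pan = drag_y * pan_gain          # slider axis (assumed vertical)
    zoom = zoom_base ** drag_x       # orthogonal axis (assumed horizontal)
    return pan, zoom
```

Separating the two controls onto orthogonal axes is what lets a plain mouse drive both at once: the user coarsely approaches a target while zoomed out, then slides sideways to zoom in and refine, without ever changing input device or mode.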

Journal ArticleDOI
TL;DR: A scatterplot tool for personal digital assistants that allows the handling of many thousands of items is presented by incorporating two alternative interaction techniques: a geometric-semantic zoom that provides smooth transition between overview and detail, and a fisheye distortion that displays the focus and context regions of the scatterplot in a single view.
Abstract: Existing information-visualization techniques that target small screens are usually limited to exploring a few hundred items. In this article we present a scatterplot tool for personal digital assistants that allows the handling of many thousands of items. The application's scalability is achieved by incorporating two alternative interaction techniques: a geometric-semantic zoom that provides smooth transition between overview and detail, and a fisheye distortion that displays the focus and context regions of the scatterplot in a single view. A user study with 24 participants was conducted to compare the usability and efficiency of both techniques when searching a book database containing 7500 items. The study was run on a pen-driven Wacom board simulating a PDA interface. While the results showed no significant difference in task-completion times, a clear majority of 20 users preferred the fisheye view over the zoom interaction. In addition, other dependent variables such as user satisfaction and subjective rating of orientation and navigation support revealed a preference for the fisheye distortion. These findings partly contradict related research and indicate that, when using a small screen, users place higher value on the ability to preserve navigational context than they do on the ease of use of a simplistic, metaphor-based interaction style.

Patent
31 Aug 2006
TL;DR: In this article, the authors present a system that uses a zooming effect to provide additional display space in a display device for application management, such that the user can access application-management items easily and efficiently without losing the context of the computer desktop.
Abstract: One embodiment of the present invention provides a system that uses a zooming effect to provide additional display space to manage applications. In one mode of operation, the system presents an image of a computer desktop to the user on a display device. When the system receives a request from a user to provide additional display space in a display device for application management purposes, the system decreases the size of the computer desktop in the display device to provide an extended display area. The system then facilitates application management by displaying items useful for application management in this extended display area. By providing the extended display area, the system allows the user to access such items easily and efficiently without losing the context of the computer desktop.

Journal ArticleDOI
Beth Yost1, Chris North1
TL;DR: A controlled experiment on user performance time, accuracy, and subjective workload when scaling up data quantity with different space-time-attribute visualizations using a large, tiled display showed that current designs are perceptually scalable because they result in a decrease in task completion time when normalized per number of data attributes along with no decrease in accuracy.
Abstract: Larger, higher resolution displays can be used to increase the scalability of information visualizations. But just how much can scalability increase using larger displays before hitting human perceptual or cognitive limits? Are the same visualization techniques that are good on a single monitor also the techniques that are best when they are scaled up using large, high-resolution displays? To answer these questions we performed a controlled experiment on user performance time, accuracy, and subjective workload when scaling up data quantity with different space-time-attribute visualizations using a large, tiled display. Twelve college students used small multiples, embedded bar matrices, and embedded time-series graphs either on a 2 megapixel (Mp) display or with data scaled up using a 32 Mp tiled display. Participants performed various overview and detail tasks on geospatially-referenced multidimensional time-series data. Results showed that current designs are perceptually scalable because they result in a decrease in task completion time when normalized per number of data attributes along with no decrease in accuracy. It appears that, for the visualizations selected for this study, the relative comparison between designs is generally consistent between display sizes. However, results also suggest that encoding is more important on a smaller display while spatial grouping is more important on a larger display. Some suggestions for designers are provided based on our experience designing visualizations for large displays.

Journal ArticleDOI
TL;DR: The estimated camera intrinsics model along with the cube-maps provides a calibration reference for images captured on the fly by the active pan-tilt-zoom camera under operation making the approach promising for active camera network calibration.

Patent
28 Jun 2006
TL;DR: In this paper, a skin testing and imaging station and corresponding method for capturing, displaying and analyzing images of a person and for testing the skin using a variety of probes is presented; the station includes a digital camera, a light source capable of providing at least two different wavelengths of light, a plurality of probes for conducting skin tests, a touch-screen display and a computer for controlling the components of the station.
Abstract: A skin testing and imaging station and corresponding method for capturing, displaying and analyzing images of a person and for testing the skin using a variety of probes includes a digital camera, a light source capable of providing at least two different wavelengths of light, a plurality of probes for conducting skin tests, a touch-screen display and a computer for controlling the components of the station. The apparatus selectively captures and displays a plurality of digital images using different wavelengths of illuminating light, e.g., using a plurality of flashes and filters, some of which may be adjustable to adjust the angle of incidence of the illuminating light on the subject. In video mode, the camera displays a real time image on the display, enabling a user to position a probe for testing any specific area of the skin. Preferably, the apparatus is self-serve, allowing any person to capture, review and analyze the images and skin data. Verbal and/or graphic instructions to a user aid in use of the station. An intuitive graphic user interface with thumbnail images is employed. Focus control, zoom and synchronized side-by-side comparison of images are available.

Patent
01 Mar 2006
TL;DR: In this article, a digital camera has a first image capturing optical system having a lens and a first sensor, and a second image capturing system with a second sensor and a clock driver.
Abstract: In a digital camera having multiple optical systems, multiple image capturing elements are effectively driven to reduce power consumption. A digital camera has a first image capturing optical system having a lens and a first image sensor and a second image capturing optical system having a lens and a second image sensor. A controller and timing generator selects the image signal from the first image capturing optical system while controlling an operation or power of the second image sensor and a clock driver to be OFF when the zoom position falls within a first zoom range. When the zoom position falls within a second zoom range, the image signal from the second image capturing optical system is selected while an operation or power of the first image sensor and a clock driver is controlled to be OFF. An operation or power of the image capturing optical system which is not selected is stopped so that power consumption is reduced.

Patent
01 Feb 2006
TL;DR: In this article, a system and method for dynamically zooming and rearranging display items through a series of output displays is presented, where visual components making up the display are rearranged and scaled.
Abstract: The invention relates to a system and method for dynamically zooming and rearranging display items through a series of output displays. In each subsequent output display, visual components making up the display are rearranged and scaled. The visual components in a first layout are displayed at a first rendered size. In response to a zoom input to change the first rendered size of the plurality of visual components to a second rendered size, an intermediate visual display of the plurality of visual components is generated by calculating an intermediate zoom factor between unity and the ratio of second to first rendered sizes, calculating a second layout of the plurality of visual components dependent on the intermediate zoom factor, and scaling the plurality of visual components by a magnification level. The generated intermediate visual display is displayed in a display area; and the visual components at the second rendered size are displayed in a third layout.

Patent
29 Jun 2006
TL;DR: A graphical user interface (GUI) as mentioned in this paper provides a plurality of views of a network and its elements in the same viewing engine, each view showing relationship or interconnection information, allowing a user to view inter-related objects at the same level, and to view at a lower level sub-objects that make up each of those objects.
Abstract: A graphical user interface (GUI) provides a plurality of views of a network and its elements in the same viewing engine. A user can switch between the plurality of views in a context-sensitive manner, each view showing relationship or interconnection information. The GUI allows a user to view inter-related objects at the same level, and to view at a lower level sub-objects that make up each of those objects. Different functional views can be provided at the same hierarchical or logical level based on the stored relationship information. A user can navigate between a network level view, a site level view, a shelf level view, and a schematic level view, via element selection or by zooming. A network element data set provides context-sensitive data and images to each level and view for that network element and enables automatic generation of a network topology.

Proceedings ArticleDOI
22 Apr 2006
TL;DR: The results show that Pan and Zoom Navigation was significantly faster and required less mental effort than Rubber Sheet Navigation, independent of the presence or absence of an overview.
Abstract: We present a study that evaluates conventional Pan and Zoom Navigation and Rubber Sheet Navigation, a rectilinear Focus+Context technique. Each of the two navigation techniques was evaluated both with and without an overview. All interfaces guaranteed that regions of interest would remain visible, at least as a compressed landmark, independent of navigation actions. Interfaces implementing these techniques were used by 40 subjects to perform a task that involved navigating a large hierarchical tree dataset and making topological comparisons between nodes in the tree. Our results show that Pan and Zoom Navigation was significantly faster and required less mental effort than Rubber Sheet Navigation, independent of the presence or absence of an overview. Also, overviews did not appear to improve performance, but were still perceived as beneficial by users. We discuss the implications of our task and guaranteed visibility on the results and the limitations of our study, and we propose preliminary design guidelines and recommendations for future work.

Journal ArticleDOI
TL;DR: This work forms the multi-camera control strategy as an online scheduling problem and proposes a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area.
Abstract: We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable as our research would be more or less infeasible in the real world given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces.
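The weighted round-robin strategy above can be sketched with a small generator; the weights here are illustrative, whereas the paper derives them from the wide-FOV trackers' estimates of each pedestrian's situation.

```python
def weighted_round_robin(weights):
    """Yield pedestrian ids in weighted round-robin order.

    weights: dict mapping pedestrian id -> integer weight; within each
    cycle a pedestrian with weight w is offered to a PTZ camera w times.
    """
    while True:
        for pid, w in weights.items():
            for _ in range(w):
                yield pid

schedule = weighted_round_robin({"ped_A": 2, "ped_B": 1})
first_six = [next(schedule) for _ in range(6)]
# first_six == ["ped_A", "ped_A", "ped_B", "ped_A", "ped_A", "ped_B"]
```

Because every pedestrian with a nonzero weight appears in every cycle, the scheduler guarantees that each one is eventually assigned a PTZ camera, which is the property the system needs while a pedestrian remains in the designated area.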

Journal ArticleDOI
TL;DR: A zoom-dependent calibration process is proposed whereby the image coordinate correction model for interior orientation and lens distortion is expressed as a function of the focal length written to the EXIF header of the image file.
Abstract: One of the well-known constraints applying to the adoption of consumer-grade digital cameras for photogrammetric measurement is the requirement to record imagery at fixed zoom and focus settings. The camera is then calibrated for the lens setting employed. This requirement arises because calibration parameters vary significantly with zoom/focus setting. In this paper, a zoom-dependent calibration process is proposed whereby the image coordinate correction model for interior orientation and lens distortion is expressed as a function of the focal length written to the EXIF header of the image file. The proposed approach frees the practitioner from the requirement to utilize fixed zoom/focus settings for the images forming the photogrammetric network. Following a review of the behavior of camera calibration parameters with varying zoom settings, an account of the newly developed zoom-dependent calibration model is presented. Experimental results of its application to four digital cameras are analysed. These show that the proposed approach is suited to numerous applications of medium-accuracy, digital, close-range photogrammetry.

Patent
26 Sep 2006
TL;DR: In this paper, a digital mirror system is provided that emulates a traditional mirror by displaying real-time video imagery of a user who stands before it, providing a plurality of digital mirroring features including an image freeze feature, an image zoom feature, and an image buffering feature.
Abstract: A digital mirror system is provided that emulates a traditional mirror by displaying real-time video imagery of a user who stands before it. The digital mirror system provides a plurality of digital mirror modes, including a traditional mirror mode and a third person mirror mode. The digital mirror system provides a plurality of digital mirroring features including an image freeze feature, an image zoom feature, and an image buffering feature. The digital mirror system provides a plurality of operational states including a digital mirroring state and an alternate state, the alternate state including a power-conservation state and/or a digital picture frame state. The digital mirror system provides a user sensor that automatically transitions between operational states in response to whether or not a user is detected before the digital mirror display screen for a period of time. The digital mirror system provides for hands-free user control using speech recognition, the speech recognition being employed to enable a user to selectively access one or more of the digital mirror modes or features in response to verbal commands relationally associated with those modes or features.
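The sensor-driven transition between the digital mirroring state and the alternate state could be sketched as a small state machine; the state names and the idle timeout below are illustrative assumptions, not values from the patent.

```python
class DigitalMirror:
    IDLE_TIMEOUT = 30.0  # seconds with no user detected before leaving mirror mode

    def __init__(self):
        self.state = "picture_frame"  # the alternate (power-conserving) state
        self._last_seen = None

    def on_sensor(self, user_present, now):
        """Feed periodic readings from the user-presence sensor."""
        if user_present:
            self._last_seen = now
            self.state = "mirroring"
        elif (self.state == "mirroring" and self._last_seen is not None
              and now - self._last_seen >= self.IDLE_TIMEOUT):
            self.state = "picture_frame"
```

A user stepping in front of the display flips the system into mirroring immediately, while the return to the picture-frame state happens only after the sensor has reported an empty scene for the full timeout period.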

Patent
28 Nov 2006
TL;DR: In this article, a transducer is used to detect motion of a mobile phone and a control circuit is responsive to detected motion to perform at least one pan or zoom of information provided to the display, wherein the pan and/or zoom correspond to a direction and velocity of the detected motion.
Abstract: An electronic equipment, such as a mobile phone (10), includes a display (22) for viewing content and/or information, a transducer (40) operable to detect motion of the electronic equipment, and a control circuit (42) for providing information to the display (22). The control circuit (42) is responsive to detected motion to perform at least one of a pan or zoom of information provided to the display, wherein the pan and/or zoom correspond to a direction and velocity of the detected motion.
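A sketch of the direction-and-velocity mapping the control circuit performs; the axis conventions and gain values below are illustrative assumptions.

```python
def motion_to_view(view, velocity, dt, pan_gain=1.0, zoom_gain=0.05):
    """Map a detected device motion (vx, vy, vz) to a pan/zoom update.

    Motion in the display plane pans the view; motion along the viewing
    axis zooms it. Both follow the direction and speed of the motion.
    """
    x, y, zoom = view
    vx, vy, vz = velocity
    x += pan_gain * vx * dt
    y += pan_gain * vy * dt
    zoom *= (1.0 + zoom_gain * vz * dt)   # push toward the user to zoom in
    return (x, y, max(zoom, 0.1))         # clamp so the view never inverts
```

Because the update scales with both velocity and elapsed time, a quick flick produces a large jump while a slow tilt nudges the view gently, matching the "direction and velocity" behavior described above.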

Patent
18 Sep 2006
TL;DR: In this article, an endoscopic surgical navigation system comprises a multi-dimensional video generation module that enables a user to visually navigate captured endoscopic video with six degrees of freedom, providing control of a virtual camera (point of view) that can be translated along three orthogonal axes in 3-D space as well as control of vertical panning (pitch), horizontal panning (yaw), and tilt (roll).
Abstract: An endoscopic surgical navigation system comprises a multi-dimensional video generation module that enables a user to visually navigate captured endoscopic video with six degrees of freedom. This capability provides the user with control of a virtual camera (point of view) that can be translated in three orthogonal axes in 3-D space as well as allowing control of vertical panning (pitch), horizontal panning (yaw) and tilt (roll) of the virtual camera, as well as zoom.
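A six-degree-of-freedom virtual camera of the kind described can be sketched as a translation plus a pitch/yaw/roll rotation applied to world points; the roll-yaw-pitch composition order and the zoom-as-scale treatment below are illustrative assumptions.

```python
import math

def rot_x(a):  # pitch
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # yaw
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # roll
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def world_to_camera(point, position, yaw, pitch, roll, zoom=1.0):
    """Express a world point in the virtual camera's frame.

    Translate into the camera frame, rotate by the inverse orientation
    (the transpose, since rotations are orthogonal), then scale by zoom.
    """
    r = matmul(rot_z(roll), matmul(rot_y(yaw), rot_x(pitch)))
    rt = [[r[j][i] for j in range(3)] for i in range(3)]
    d = [point[i] - position[i] for i in range(3)]
    return [zoom * c for c in apply(rt, d)]
```

The three translation components plus pitch, yaw, and roll give the six degrees of freedom; zoom is a seventh, independent control.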

Patent
22 Sep 2006
TL;DR: In this article, a remote controlled robot system that includes a mobile robot and a remote control station is described, where a user can control movement of the robot from the remote controller.
Abstract: A remote controlled robot system that includes a mobile robot and a remote control station. A user can control movement of the robot from the remote control station. The mobile robot includes a camera system that can capture and transmit to the remote station a zoom image and a non-zoom image. The remote control station includes a monitor that displays a robot view field. The robot view field can display the non-zoom image. The zoom image can be displayed in the robot view field by highlighting an area of the non-zoom field. The remote control station may also store camera locations that allow a user to move the camera system to preset locations.
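Selecting the zoom image by highlighting an area of the non-zoom view amounts to converting a rectangle in the wide image into camera pan/tilt/zoom commands. The linear pixel-to-angle mapping below is an illustrative assumption.

```python
def region_to_ptz(rect, frame_size, hfov_deg, vfov_deg):
    """Convert a highlighted rectangle (x, y, w, h) in the non-zoom frame
    into pan/tilt offsets (degrees from center) and a zoom factor."""
    fw, fh = frame_size
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    pan = (cx / fw - 0.5) * hfov_deg    # positive = right of center
    tilt = (0.5 - cy / fh) * vfov_deg   # positive = above center (image y grows down)
    zoom = min(fw / w, fh / h)          # magnify until the region fills the view
    return pan, tilt, zoom
```

Centering the highlighted region at the frame center yields zero pan and tilt, and the zoom factor is chosen so the region just fills the narrower dimension of the zoomed view.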

Patent
04 May 2006
TL;DR: An optical feedback mechanism corresponding to a variation in input by a user's digit on an input element is proposed in this paper, where the variation can be movement by the user's finger, or a change in the amount of pressure or force applied to a button.
Abstract: An optical feedback mechanism corresponding to a variation in input by a user's digit on an input element. The variation in input can be movement by the user's finger, or a change in the amount of pressure or force applied to a button. In one embodiment, the optical feedback is a linear light array adjacent to a solid-state scroll/zoom sensor, with the light corresponding to the finger position. Alternatively, a solid-state button could provide feedback corresponding to the amount of pressure in the form of a change in intensity, color, or blinking. In one embodiment, the input signal from an input element alternates between scroll, zoom, and/or other functions depending on the current application.
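The linear light array behavior could be sketched as follows; the LED count is an illustrative assumption, and finger position and pressure are taken as normalized values in [0, 1].

```python
def feedback_leds(finger_pos, pressure, n_leds=8):
    """Return per-LED brightness: the LED nearest the finger lights up,
    and its intensity tracks the applied pressure (clamped to [0, 1])."""
    idx = min(int(finger_pos * n_leds), n_leds - 1)
    brightness = max(0.0, min(pressure, 1.0))
    return [brightness if i == idx else 0.0 for i in range(n_leds)]
```

Sliding the finger moves the lit LED along the array, while pressing harder brightens it, giving the two feedback channels the abstract describes.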

Proceedings ArticleDOI
23 May 2006
TL;DR: Results suggest that participants with higher spatial ability were slowed by the overview more than low-spatial-ability users, and indicate that, on small screens, a larger detail view can outweigh the benefits gained from an overview window.
Abstract: While zoomable user interfaces can improve the usability of applications by easing data access, a drawback is that some users tend to become lost after they have zoomed in. Previous studies indicate that this effect could be related to individual differences in spatial ability. To overcome such orientation problems, many desktop applications feature an additional overview window showing a miniature of the entire information space. Small devices, however, have a very limited screen real estate and incorporating an overview window often means pruning the size of the detail view considerably. Given this context, we report the results of a user study in which 24 participants solved search tasks by using two zoomable scatterplot applications on a PDA - one of the applications featured an overview, the other relied solely on the detail view. In contrast to similar studies for desktop applications, there was no significant difference in user preference between the interfaces. On the other hand, participants solved search tasks faster without the overview. This indicates that, on small screens, a larger detail view can outweigh the benefits gained from an overview window. Individual differences in spatial ability did not have a significant effect on task-completion times although results suggest that participants with higher spatial ability were slowed down by the overview more than low spatial-ability users.