
Showing papers on "Zoom published in 2002"


Journal ArticleDOI
TL;DR: No difference between interfaces in subjects' ability to solve tasks correctly is found, and subjects who switched between the overview and the detail windows used more time, suggesting that integration of overview and detail windows adds complexity and requires additional mental and motor effort.
Abstract: The literature on information visualization establishes the usability of interfaces with an overview of the information space, but for zoomable user interfaces, results are mixed. We compare zoomable user interfaces with and without an overview to understand the navigation patterns and usability of these interfaces. Thirty-two subjects solved navigation and browsing tasks on two maps. We found no difference between interfaces in subjects' ability to solve tasks correctly. Eighty percent of the subjects preferred the interface with an overview, stating that it supported navigation and helped keep track of their position on the map. However, subjects were faster with the interface without an overview when using one of the two maps. We conjecture that this difference was due to the organization of that map in multiple levels, which rendered the overview unnecessary by providing richer navigation cues through semantic zooming. The combination of that map and the interface without an overview also improved subjects' recall of objects on the map. Subjects who switched between the overview and the detail windows used more time, suggesting that integration of overview and detail windows adds complexity and requires additional mental and motor effort.

282 citations


Patent
05 Jul 2002
TL;DR: In this article, a method of automatically selecting regions of interest within an image in response to a selection signal, and panning across an image so as to keep the selected region in view is disclosed, together with an image processing system employing the method.
Abstract: A method of automatically selecting regions of interest within an image in response to a selection signal, and panning across an image so as to keep the selected region in view is disclosed, together with an image processing system employing the method.

161 citations
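
The core behaviour the patent describes, keeping a selected region in view while panning, can be illustrated with a few lines of viewport arithmetic. The sketch below is a generic interpretation, not the patented method; the Rect type and pan_to_keep_in_view function are invented for the example.

```python
# A minimal sketch (not the patented method): pan a fixed-size viewport so that a
# selected region of interest stays centred and fully in view.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # left edge
    y: int  # top edge
    w: int  # width
    h: int  # height

def pan_to_keep_in_view(image_w: int, image_h: int,
                        view_w: int, view_h: int, roi: Rect) -> Rect:
    """Return a viewport that centres the ROI, clamped to the image bounds."""
    vx = roi.x + roi.w // 2 - view_w // 2
    vy = roi.y + roi.h // 2 - view_h // 2
    vx = max(0, min(vx, image_w - view_w))
    vy = max(0, min(vy, image_h - view_h))
    return Rect(vx, vy, view_w, view_h)

# Keep a selected 150x150 region in view inside a 640x480 window of a large image.
print(pan_to_keep_in_view(4000, 3000, 640, 480, Rect(3800, 100, 150, 150)))
```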


Journal ArticleDOI
TL;DR: This first interface combines three technologies: augmented reality (AR), immersive virtual reality (VR), and computer vision-based hand and object tracking and explores alternative interface techniques, including a zoomable user interface, paddle interactions, and pen annotations.
Abstract: In this paper, we describe two explorations in the use of hybrid user interfaces for collaborative geographic data visualization. Our first interface combines three technologies: augmented reality (AR), immersive virtual reality (VR), and computer vision-based hand and object tracking. Wearing a lightweight display with an attached camera, users can look at a real map and see three-dimensional virtual terrain models overlaid on the map. From this AR interface, they can fly in and experience the model immersively, or use free hand gestures or physical markers to change the data representation. Building on this work, our second interface explores alternative interface techniques, including a zoomable user interface, paddle interactions, and pen annotations. We describe the system hardware and software and the implications for GIS and spatial science applications.

131 citations


Journal ArticleDOI
TL;DR: InfoSky is a system enabling users to explore large, hierarchically structured document collections using a planar graphical representation with variable magnification, and can map metadata such as document size or age to attributes of the visualisation such as colour and luminance.
Abstract: InfoSky is a system enabling users to explore large, hierarchically structured document collections. Similar to a real-world telescope, InfoSky employs a planar graphical representation with variable magnification. Documents of similar content are placed close to each other and are visualised as stars, forming clusters with distinct shapes. For greater performance, the hierarchical structure is exploited and force-directed placement is applied recursively at each level on much fewer objects, rather than on the whole corpus. Collections of documents at a particular level in the hierarchy are visualised with bounding polygons using a modified weighted Voronoi diagram. Their area is related to the number of documents contained. Textual labels are displayed dynamically during navigation, adjusting to the visualisation content. Navigation is animated and provides a seamless zooming transition between summary and detail view. Users can map metadata such as document size or age to attributes of the visualisation such as colour and luminance. Queries can be made and matching documents or collections are highlighted. Formative usability testing is ongoing; a small baseline experiment comparing the telescope browser to a tree browser is discussed.

128 citations


Patent
22 Feb 2002
TL;DR: In this article, a workstation-user interface for evaluating computer assisted diagnosis (CAD) methods for digital mammography is disclosed, which enables multiple, large-size images to be handled at high speeds.
Abstract: A workstation-user interface for evaluating computer assisted diagnosis (CAD) methods for digital mammography is disclosed. Implementation of such an interface enables multiple, large-size images to be handled at high speeds. Furthermore, controls such as contrast, pan, and zoom, and tools such as reporting forms, case information, and analysis of results are included. The software and hardware used to develop such a workstation and interface were based on Sun platforms and the Unix operating system. The software is user friendly, and comparable to standard mammography film reading in terms of display layout and speed. The software, as designed, will work on entry-level workstations as well as high-end workstations with specialized hardware, thus being usable in an educational, training, or clinical environment for annotation purposes using CAD techniques as well as primary diagnosis.

95 citations


Patent
12 Jun 2002
TL;DR: In this paper, a graphical user interface (GUI) is provided for manipulating a presentation of a region of interest within visual information displayed on a display screen of a computer display system.
Abstract: A graphical user interface (GUI) is provided for manipulating a presentation of a region of interest within visual information displayed on a display screen of a computer display system. The GUI includes: a first bounding shape surrounding the focal region; a second bounding shape surrounding the shoulder region; a base outline; a pickup point; a slide bar; a move area within the region of interest; at least one zoom area; and, a zoom button.

89 citations


Proceedings ArticleDOI
28 Oct 2002
TL;DR: A formalism for describing multiscale visualizations of data cubes with both data and visual abstraction and a method for independently zooming along one or more dimensions by traversing a zoom graph with nodes at different levels of detail are presented.
Abstract: Most analysts start with an overview of the data before gradually refining their view to be more focused and detailed. Multiscale pan-and-zoom systems are effective because they directly support this approach. However, generating abstract overviews of large data sets is difficult, and most systems take advantage of only one type of abstraction: visual abstraction. Furthermore, these existing systems limit the analyst to a single zooming path on their data and thus a single set of abstract views. This paper presents: (1) a formalism for describing multiscale visualizations of data cubes with both data and visual abstraction, and (2) a method for independently zooming along one or more dimensions by traversing a zoom graph with nodes at different levels of detail. As an example of how to design multiscale visualizations using our system, we describe four design patterns using our formalism. These design patterns show the effectiveness of multiscale visualization of general relational databases.

86 citations
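
The zoom-graph idea from this paper, nodes at different levels of detail with independent zooming per dimension, can be sketched concretely. The dimensions, level hierarchies and function names below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of a zoom graph: nodes are combinations of per-dimension
# levels of detail, and zooming in along one dimension moves to a neighbouring node.
from itertools import product

LEVELS = {
    "time":     ["year", "quarter", "month"],       # coarse -> fine
    "location": ["country", "state", "city"],
}

def zoom(node: dict, dimension: str, direction: int) -> dict:
    """Move one step along a single dimension (+1 = zoom in, -1 = zoom out)."""
    levels = LEVELS[dimension]
    i = levels.index(node[dimension]) + direction
    if not 0 <= i < len(levels):
        return node                     # already at the end of this hierarchy
    return {**node, dimension: levels[i]}

# Every node of the zoom graph is one combination of levels.
all_nodes = [dict(zip(LEVELS, combo)) for combo in product(*LEVELS.values())]
print(len(all_nodes), "nodes in the zoom graph")

view = {"time": "year", "location": "country"}   # abstract overview
view = zoom(view, "time", +1)                    # refine the time dimension only
print(view)   # {'time': 'quarter', 'location': 'country'}
```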


Patent
15 Mar 2002
TL;DR: In this article, computer vision algorithms are used to detect, locate, and track people in the field of view of a wide-angle, stationary camera, and the estimated acoustic delay obtained from a microphone array, consisting of only two horizontally spaced microphones, is used to select the person speaking.
Abstract: A method and apparatus for a video conferencing system using an array of two microphones and a stationary camera to automatically locate a speaker and electronically manipulate the video image to produce the effect of a movable pan tilt zoom ('PTZ') camera. Computer vision algorithms are used to detect, locate, and track people in the field of view of a wide-angle, stationary camera. The estimated acoustic delay obtained from a microphone array, consisting of only two horizontally spaced microphones, is used to select the person speaking. This system can also detect any possible ambiguities, in which case it can respond in a fail-safe way; for example, it can zoom out to include all the speakers located at the same horizontal position.

84 citations
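
The speaker-selection step the abstract describes, matching an acoustic delay from two microphones against visually tracked positions, can be sketched roughly as follows. Real systems use more robust delay estimators (e.g. GCC-PHAT); the microphone spacing, sample rate and function names here are assumptions for the example.

```python
# Rough sketch: estimate the inter-microphone delay by cross-correlation, convert
# it to an azimuth, and pick the visually tracked person closest to that angle.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.20       # metres between the two microphones (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def estimate_delay(left: np.ndarray, right: np.ndarray) -> float:
    """Delay in seconds, positive when `right` lags `left` (cross-correlation peak)."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / SAMPLE_RATE

def delay_to_angle(delay: float) -> float:
    """Map a delay to a horizontal angle in radians (0 = straight ahead)."""
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.arcsin(sin_theta))

def select_speaker(delay: float, people_angles: list) -> int:
    """Index of the tracked person whose azimuth best matches the acoustic angle."""
    theta = delay_to_angle(delay)
    return int(np.argmin([abs(theta - a) for a in people_angles]))

# Synthetic check: make the right channel lag the left by 8 samples.
sig = np.random.default_rng(0).standard_normal(4000)
left, right = sig[8:], sig[:-8]
d = estimate_delay(left, right)
print(d, "s -> person", select_speaker(d, people_angles=[0.2, 1.0]))
```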


Patent
11 Jun 2002
TL;DR: In this article, the authors present a method and computer-readable medium for executing a method of placing an order for a subcomponent (part) of a product, including generally three steps: displaying a diagram depicting sub-components in an exploded view on a user screen; receiving a user selection of a sub-component to be ordered; and processing the selection of the subcomponent.
Abstract: The present invention provides a method and computer-readable medium for executing a method of placing an order for a sub-component (part) of a product, including generally three steps: displaying a diagram depicting sub-components of a product in an exploded view on a user screen; receiving a user selection of a sub-component to be ordered; and processing the selection of a sub-component to be ordered. Optionally, the method may allow the user to selectively view further information pertinent to displayed sub-components, such as their prices and specifications, or to selectively zoom in on and zoom out of the exploded view or to scroll the exploded view. The method thus assists the user when attempting to identify and/or order a sub-component for the product. The method may also be implemented in a stand-alone computer system.

77 citations


Proceedings ArticleDOI
01 Dec 2002
TL;DR: The system architecture, an information-theoretic approach to combining panoramic and zoomed images to optimally satisfy user requests, and experimental results showing that the FlySPEC system significantly assists users in a remote inspection task are presented.
Abstract: FlySPEC is a video camera system designed for real-time remote operation. A hybrid design combines the high resolution of an optomechanical video camera with the wide field of view always available from a panoramic camera. The control system integrates requests from multiple users so that each controls a virtual camera. The control system seamlessly integrates manual and fully automatic control. It supports a range of options from untended automatic to full manual control. The system can also learn control strategies from user requests. Additionally, the panoramic view is always available for an intuitive interface, and objects are never out of view regardless of the zoom factor. We present the system architecture, an information-theoretic approach to combining panoramic and zoomed images to optimally satisfy user requests, and experimental results that show the FlySPEC system significantly assists users in a remote inspection task.

73 citations


Proceedings ArticleDOI
28 Oct 2002
TL;DR: A new method for the visualization of tree structured relational data using the concept of enclosure to partition the entire display space into a collection of local regions that are assigned to all nodes in tree T for the display of their sub-trees and themselves is described.
Abstract: We describe a new method for the visualization of tree-structured relational data. It can be used especially for the display of very large hierarchies in a 2-dimensional space. We discuss the advantages and limitations of current techniques of tree visualization. Our strategy is to optimize the drawing of trees in a geometrical plane and maximize the utilization of display space by allowing more nodes and links to be displayed at a limited screen resolution. We use the concept of enclosure to partition the entire display space into a collection of local regions that are assigned to all nodes in tree T for the display of their sub-trees and themselves. To enable the exploration of large hierarchies, we use a modified semantic zooming technique to view the detail of a particular part of the hierarchy at a time, based on the user's interest. Layout animation is also provided to preserve the mental map while the user is exploring the hierarchy by changing zoomed views.
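
The enclosure-based partitioning the abstract describes can be illustrated with the simplest possible scheme: recursively splitting a rectangle among children in proportion to subtree size, as in a slice-and-dice treemap. The paper's actual layout optimisation is more sophisticated; the sketch below only shows how enclosure assigns a local region to every node.

```python
# Enclosure illustrated with a slice-and-dice split; names are illustrative.
def subtree_size(node: dict) -> int:
    return 1 + sum(subtree_size(c) for c in node.get("children", []))

def layout(node: dict, x: float, y: float, w: float, h: float, depth: int = 0) -> list:
    """Return (name, rect) pairs; each child's region is nested inside its parent's."""
    regions = [(node["name"], (x, y, w, h))]
    children = node.get("children", [])
    total = sum(subtree_size(c) for c in children) or 1
    offset = 0.0
    for child in children:
        frac = subtree_size(child) / total
        if depth % 2 == 0:   # alternate the split direction at each level
            regions += layout(child, x + offset * w, y, frac * w, h, depth + 1)
        else:
            regions += layout(child, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac
    return regions

tree = {"name": "root", "children": [
    {"name": "a", "children": [{"name": "a1"}, {"name": "a2"}]},
    {"name": "b"}]}
for name, rect in layout(tree, 0, 0, 1000, 800):
    print(name, rect)
```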

Patent
Walter Keller1
27 Mar 2002
TL;DR: In this paper, a method for the display of standardised internet pages (for example, in HTML or XML standard), generated for display on large-size screens, on a small display on hand-held devices (mini-computer, PDA or mobile radio device) is disclosed, in which the handheld device can be provided with a mobile radio connection to the internet.
Abstract: A method is disclosed for displaying standardised internet pages (for example, in the HTML or XML standard), generated for display on large-size screens, on the small display of hand-held devices (mini-computer, PDA or mobile radio device), in which the hand-held device can be provided with a mobile radio connection to the internet. A large virtual image memory is maintained in the device, and within this large virtual image the device display can be freely moved as a readable image section. A zoom function permits an overview and coarse positioning of the detailed representation. The detailed representation can be moved continuously within the virtual image, as a screen section, by means of a pointer device (mouse pointer), by moving the mouse pointer to the display edge (and beyond). Switching between the display modes can be performed at any time.

Patent
12 Aug 2002
TL;DR: In this paper, a system and method for displaying a geographical map using a single stylus movement is presented, where a user contacts a display with the stylus and selects an area on the geographical map in which the user wishes to view.
Abstract: A system and method for displaying a geographical map using a single stylus movement is presented. A user contacts a display with the stylus and selects an area of the geographical map that the user wishes to view. The user moves the stylus into a zoom zone, where processing interprets stylus movements as commands to zoom in or out of the map. When the user is satisfied with the magnification level corresponding to the zoom commands, the user enters a pan zone. The user moves the stylus around the selected area to view other parts of the map, and processing displays different map views corresponding to the stylus movement.

Patent
25 Jul 2002
TL;DR: In this paper, a camera control apparatus comprises a control device for controlling the zoom pan and tilt conditions of a camera data relating to the positioning of the camera in pan, tilt and zoom is transmitted to the control means and the control mean converts the data into a value in a coordinate system, for example (3D) polar co-ordinates.
Abstract: A camera control apparatus (10) comprises a control device (14) for controlling the zoom, pan and tilt conditions of a camera. Data relating to the positioning of the camera in pan, tilt and zoom is transmitted to the control means, and the control means converts the data into a value in a co-ordinate system, for example (3D) polar co-ordinates. The camera may be controlled and directed by pointing a pointer to an area in the image displayed, whereby in response to selection of a point on a display the control means pans and/or tilts the camera so that the image viewed by the camera is centred substantially on the point selected. Still further, an area of the screen can be selected, for example by dragging and dropping a box using a mouse pointer on a computer screen, and the control means is arranged to pan and tilt the camera so the image is centred on the centre of the selected area and zoomed so that the selected area becomes substantially the entire image viewed by the camera. In a further aspect, a multiple camera control apparatus is provided in which a plurality of cameras may be controlled using the aforesaid control apparatus; the multiple camera control apparatus includes data relating to the location of the cameras with reference to the site plan, so that multiple cameras can be co-ordinated to provide better image data, blind-spot illumination and "hand over" functionality. Still further, a security apparatus is provided in which a camera views an image, and the security apparatus includes image processing means and data relating to the site viewed by the camera so as to determine the location and size of an object viewed.
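
The "drag a box and the camera centres and zooms on it" behaviour can be approximated with small-angle geometry, as sketched below. This is a hedged illustration, not the patent's co-ordinate conversion; the CameraState fields and zoom_to_box function are invented for the example.

```python
# Hedged illustration of box-select-to-zoom using small-angle geometry.
from dataclasses import dataclass

@dataclass
class CameraState:
    pan_deg: float
    tilt_deg: float
    hfov_deg: float   # current horizontal field of view

def zoom_to_box(cam: CameraState, img_w: int, img_h: int, box: tuple) -> CameraState:
    """box = (x, y, w, h) in pixels; return the new pan/tilt/field of view."""
    x, y, w, h = box
    vfov_deg = cam.hfov_deg * img_h / img_w             # assume square pixels
    # Offset of the box centre from the image centre, as a fraction of the frame.
    dx = (x + w / 2 - img_w / 2) / img_w
    dy = (y + h / 2 - img_h / 2) / img_h
    new_pan = cam.pan_deg + dx * cam.hfov_deg
    new_tilt = cam.tilt_deg - dy * vfov_deg             # image y grows downward
    # Narrow the field of view so the box fills the frame (aspect ratio preserved).
    new_hfov = cam.hfov_deg * max(w / img_w, h / img_h)
    return CameraState(new_pan, new_tilt, new_hfov)

print(zoom_to_box(CameraState(0.0, 0.0, 60.0), 1920, 1080, (960, 270, 480, 270)))
```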

Patent
16 Sep 2002
TL;DR: In this article, the authors present an automatic zoom lens that is controlled by a processor that is linked to a gaze tracking system, which collects data relating to the position of each eye of the user.
Abstract: The present invention relates to a device containing an automatic zoom lens, and more particularly to a zoom lens that is controlled by a processor linked to a gaze-tracking system. As the user looks at an object through the device, the gaze-tracking system collects data relating to the position of each eye of the user. This eye-position data is input into the processor, where the focal point of the user is determined. The processor then adjusts the zoom lens to zoom in or out on the object based on either a predetermined or user-input zoom factor.
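
One way to picture the control loop is to estimate the fixation distance from eye vergence and map it to a zoom setting. The patent describes the gaze-tracker-driven zoom only in general terms, so the geometry, the interpupillary-distance constant and the zoom law below are assumptions for illustration.

```python
# Illustration only: fixation distance from eye vergence, mapped to a zoom setting.
import math

IPD = 0.063  # interpupillary distance in metres (typical adult value, assumed)

def fixation_distance(left_angle_rad: float, right_angle_rad: float) -> float:
    """Distance to the fixated object from the inward rotation of each eye."""
    convergence = (left_angle_rad + right_angle_rad) / 2
    return (IPD / 2) / math.tan(max(convergence, 1e-6))

def zoom_factor(distance_m: float, user_factor: float = 2.0,
                reference_m: float = 1.0) -> float:
    """Zoom proportionally to distance so the object keeps a similar apparent size."""
    return user_factor * distance_m / reference_m

d = fixation_distance(math.radians(0.9), math.radians(0.9))
print(round(d, 2), "m ->", round(zoom_factor(d), 1), "x zoom")
```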

Proceedings ArticleDOI
14 Oct 2002
TL;DR: An interface to textual information for the visually impaired that uses video, image processing, optical-character-recognition (OCR) and text-to-speech (TTS) is described.
Abstract: We describe the development of an interface to textual information for the visually impaired that uses video, image processing, optical character recognition (OCR) and text-to-speech (TTS). The video provides a sequence of low-resolution images in which text must be detected, rectified and converted into high-resolution rectangular blocks that are capable of being analyzed via off-the-shelf OCR. To achieve this, various problems related to feature detection, mosaicing, auto-focus, zoom, and systems integration were solved in the development of the system, and these are described.

Journal ArticleDOI
TL;DR: The nesting order between the Galois lattices corresponding to various languages and extensions is exploited in the interactive system ZooM, which aims to give a general view of concepts addressing a large data set.
Abstract: This paper deals with the representation of multi-valued data by clustering them into a small number of classes organized in a hierarchy and described at an appropriate level of abstraction. The contribution of this paper is threefold. First, we investigate a partial order, namely nesting, relating Galois lattices. A nested Galois lattice is obtained by reducing (through projections) the original lattice; as a consequence, it makes coarser the equivalence relations defined on extents and intents. Second, we investigate the intensional and extensional aspects of the languages used in our system ZooM. In particular, we discuss the notion of α-extension of terms of a class language ℒ. We also present our most expressive language ℒ3, close to a description logic, which expresses optionality and/or multi-valuation of attributes. Finally, the nesting order between the Galois lattices corresponding to various languages and extensions is exploited in the interactive system ZooM. Typically a ZooM session starts fr...

Journal ArticleDOI
TL;DR: A new generator, called generator of time-evolving regional data (G-TERD), for this class of data is presented and it is easy for the user to manipulate the generator according to specific application requirements and at the same time to examine the reliability of the underlying generalized data model.
Abstract: Benchmarking of spatio-temporal databases is an issue of growing importance. In case large real data sets are not available, benchmarking requires the generation of artificial data sets following the real-world behavior of spatial objects that change their locations, shapes and sizes over time. Only a few innovative papers have recently addressed the topic of spatio-temporal data generators. However, all existing approaches fail to consider several important aspects of continuously changing regional data. In this report, a new generator, called generator of time-evolving regional data (G-TERD), for this class of data is presented. The basic concepts that determine the function of G-TERD are the structure of complex 2-D regional objects, their color, maximum speed, zoom and rotation-angle per time slot, the influence of other moving or static objects on the speed and on the moving direction of an object, the position and movement of the scene-observer, the statistical distribution of each changing factor and, finally, time. Apart from these concepts, the operation and basic algorithmic issues of G-TERD are presented. In the framework developed, the user can control the generator response by setting several parameter values. To demonstrate the use of G-TERD, the generation of a number of sample data sets is presented and commented on. The source code and a visualization tool for using and testing the new generator are available on the Web. Thus, it is easy for the user to manipulate the generator according to specific application requirements and at the same time to examine the reliability of the underlying generalized data model.

Patent
04 Dec 2002
TL;DR: In this paper, an object is detected by comparing an input image from an image pick-up device having a zoom mechanism with a stored template image: a first image within the view field of the image pick-up device is stored as the template image, the changed power of the zoom mechanism is recorded, and a second image to be detected is picked up from the image pick-up device.
Abstract: An object is detected by comparing an input image from an image pick-up device having a zoom mechanism with a stored template image: a first image within the view field of the image pick-up device is stored as the template image, the changed power of the zoom mechanism is recorded, and a second image to be detected is picked up from the image pick-up device. Then, the size of either the template image or the second image is changed on the basis of the changed power of the zoom mechanism, and template matching is performed between the template image and the second image to detect the object. This process makes it possible to track an object within the view field.
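
The matching step described in the abstract, rescaling the stored template by the zoom ratio before template matching, can be sketched with OpenCV as below. The function name and zoom bookkeeping are illustrative, not taken from the patent.

```python
# Sketch: rescale the stored template by the zoom ratio, then locate it in the frame.
import cv2

def track_after_zoom(template, frame, old_zoom: float, new_zoom: float):
    """Rescale `template` by the zoom ratio and find it in `frame`.

    Both images are grayscale arrays; returns (top_left_xy, match_score).
    """
    ratio = new_zoom / old_zoom
    scaled = cv2.resize(template, None, fx=ratio, fy=ratio,
                        interpolation=cv2.INTER_LINEAR)
    result = cv2.matchTemplate(frame, scaled, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

# Usage (assuming `tmpl` was stored at 2x zoom and `frame` captured at 3x zoom):
# loc, score = track_after_zoom(tmpl, frame, old_zoom=2.0, new_zoom=3.0)
```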

Patent
21 Nov 2002
TL;DR: In this paper, clickpath visualization software is presented to enable the user to easily analyze and evaluate clickpaths by focusing only on subpaths of interest; it provides the user with various functions including zoom, sort, expand, reverse, preview, and activate.
Abstract: A funnel from clickstream data is displayed as a hierarchy to a user for analysis, wherein the funnel represents an ordered path of web pages successively viewed by the user. The invention includes clickpath visualization software to enable the user to easily analyze and evaluate clickpaths by focusing only on subpaths of interest. The invention software provides the user with various functions including, but not limited to, zoom, sort, expand, reverse, preview, and activate.

Patent
13 Sep 2002
TL;DR: In this paper, a look-up table is provided with control information for controlling an optimum variable optical-property optical element in accordance with a distance to an object, a zoom state, or a combination of the distance to the object with the zoom state.
Abstract: An optical apparatus has a look-up table provided with control information for controlling an optimum variable optical-property optical element in accordance with a distance to an object, a zoom state, or a combination of the distance to the object with the zoom state. A drive of the variable optical-property optical element is either controlled directly on the basis of the control information obtained from the look-up table, or a predetermined calculation process is executed on the control information obtained from the look-up table and the information obtained from the calculation process is used to control the drive of the variable optical-property optical element.
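
A minimal sketch of the look-up-table control described in the abstract: drive values stored on a (distance, zoom) grid, with bilinear interpolation standing in for the "predetermined calculation process". The grid values and names below are invented.

```python
# Drive values on a (distance, zoom) grid, interpolated bilinearly. Values invented.
import numpy as np

DISTANCES = np.array([0.5, 1.0, 2.0, 5.0])   # object distances in metres
ZOOMS = np.array([1.0, 2.0, 4.0])            # zoom states
DRIVE_LUT = np.array([[10.0, 14.0, 20.0],    # drive signal at each grid point
                      [ 8.0, 11.0, 16.0],
                      [ 6.0,  8.0, 12.0],
                      [ 4.0,  5.0,  7.0]])

def drive_value(distance: float, zoom: float) -> float:
    """Bilinearly interpolate the look-up table at (distance, zoom)."""
    i = int(np.clip(np.searchsorted(DISTANCES, distance) - 1, 0, len(DISTANCES) - 2))
    j = int(np.clip(np.searchsorted(ZOOMS, zoom) - 1, 0, len(ZOOMS) - 2))
    tx = np.clip((distance - DISTANCES[i]) / (DISTANCES[i + 1] - DISTANCES[i]), 0, 1)
    ty = np.clip((zoom - ZOOMS[j]) / (ZOOMS[j + 1] - ZOOMS[j]), 0, 1)
    top = DRIVE_LUT[i, j] * (1 - ty) + DRIVE_LUT[i, j + 1] * ty
    bottom = DRIVE_LUT[i + 1, j] * (1 - ty) + DRIVE_LUT[i + 1, j + 1] * ty
    return float(top * (1 - tx) + bottom * tx)

print(drive_value(1.5, 3.0))   # a value between the stored grid points
```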

Patent
09 Apr 2002
TL;DR: In this article, a zoom lens is provided with a lens group G1 nearest to an object side, which is fixed on an optical axis at the time of varying power and at time of focusing operation.
Abstract: PROBLEM TO BE SOLVED: To provide a zoom lens that eliminates the start-up time to a usable condition seen in collapsible-mount lens barrels, is desirable in terms of water-proof and dust-proof effects, easily adopts a configuration in which the optical path of the optical system is bent by a catoptric element so that the camera can be made extremely thin in the depth direction, has high optical specifications concerning zoom ratio, angle of view and F-value with little aberration, and is further capable of shortening the length obtained after bending the optical path; and to provide an electronic image pickup device having the zoom lens. SOLUTION: The zoom lens is provided with a lens group G1 nearest to the object side, which is fixed on the optical axis at the time of varying power and at the time of focusing; a lens group G5 nearest to the image side, which is fixed on the optical axis at least at the time of focusing; and lens groups G2 to G4 positioned between the lens groups G1 and G5 and moving on the optical axis at the time of varying power. The lens group G5 consists of, in order from the object side, a negative lens component L11, the catoptric element R1 having a reflection surface for bending the optical path, and a positive lens component L12, and is provided with at least one aspherical surface.

Journal ArticleDOI
TL;DR: This paper presents an advanced video camera system with robust automatic focus (AF), automatic white-balance (AWB), and enhanced zoom tracking that can achieve accurate zoom tracking with significantly reduced system memory.
Abstract: This paper presents an advanced video camera system with robust automatic focus (AF), automatic white-balance (AWB), and enhanced zoom tracking. The proposed system can achieve accurate zoom tracking with significantly reduced system memory. It can also find accurate in-focus state even when the camera shoots at a CRT monitor or a light source. The proposed AWB technique compensates the luminance intensity of color components without degrading the image.
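
A common memory-saving approach to zoom tracking, sketched below and not necessarily identical to the paper's method, stores only a few focus-position curves at reference object distances and interpolates between them for intermediate distances.

```python
# Memory-saving zoom tracking sketched generically; curve values are invented.
import numpy as np

ZOOM_STEPS = np.linspace(1.0, 10.0, 10)   # zoom positions
CURVE_NEAR = np.linspace(100, 400, 10)    # focus motor position, object at 1 m
CURVE_FAR = np.linspace(80, 250, 10)      # focus motor position, object at infinity

def focus_position(zoom: float, nearness: float) -> float:
    """Interpolated focus position; nearness=1 -> 1 m curve, 0 -> infinity curve."""
    near = np.interp(zoom, ZOOM_STEPS, CURVE_NEAR)
    far = np.interp(zoom, ZOOM_STEPS, CURVE_FAR)
    return float(nearness * near + (1.0 - nearness) * far)

print(focus_position(zoom=5.5, nearness=0.3))
```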

Patent
08 Aug 2002
TL;DR: In this article, an airport display device including a display including at least one window, a database including data related to an airport, a selector configured to select a degree of zoom for the airport to be displayed from a plurality of different degrees of zoom, and a control unit connected to the display, the database and the selector and configured to control the display to display in at least 1 window the airport according to a scale value representative of the degree selected by the selector.
Abstract: An airport display device including a display including at least one window, a database including data related to an airport, a selector configured to select a degree of zoom for the airport to be displayed from a plurality of different degrees of zoom, a control unit connected to the display, the database and the selector and configured to control the display to display in the at least one window the airport according to a scale value representative of the degree of zoom selected by the selector, and a changing unit configured to change the scale value representative of the degree of zoom.

Patent
15 Nov 2002
TL;DR: In this paper, a method and apparatus for the manipulation of thumbnail images as used in image-based browsing file management systems is presented, whereby zooming in and out of thumbnail image can be performed without a continued need to decompress a true image thus providing for faster operation.
Abstract: Disclosed are a method and apparatus for the manipulation of thumbnail images as used in image-based browsing file management systems. Disclosed are arrangements whereby zooming in and out of thumbnail images can be performed without a continued need to decompress the true image, thus providing for faster operation. Pixel interpolation and/or replication are used to generate intermediate images that are displayed to deliver to the user a perception of a transitory zoom, yet are of sufficient detail to maintain user orientation. Aspect-ratio zooming of thumbnail containment areas is also disclosed, which facilitates ease of browsing. The compression of thumbnail-type images using a discrete wavelet transform facilitates the fast zoom of thumbnails and their associated containment areas.
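
The fast transitory zoom described in the abstract can be approximated by simple pixel replication on the already-decoded thumbnail, with no decode of the full image. This is a generic sketch, not the patented wavelet-based scheme.

```python
# Generic sketch: zoom a decoded thumbnail by pixel replication.
import numpy as np

def replicate_zoom(thumbnail: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour (pixel replication) zoom of an H x W [x C] thumbnail."""
    return thumbnail.repeat(factor, axis=0).repeat(factor, axis=1)

thumb = np.arange(12, dtype=np.uint8).reshape(3, 4)   # tiny stand-in thumbnail
print(replicate_zoom(thumb, 2).shape)                 # (6, 8)
```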

Proceedings ArticleDOI
20 Apr 2002
TL;DR: This work confirms previous work showing that multiscale pointing performance strongly depends on the degree of pan-zoom parallelism and finds that two-handed input and a constant zoom speed allow more input parallelism, thereby increasing performance speed.
Abstract: In a laboratory experiment on multiscale pointing, we compared one-handed vs. two-handed input for two zoom-control devices, a wheel vs. a mini-joystick with an all-or-none response. Using a recent method of quantifying multiple degree-of-freedom (DOF) input coordination to evaluate pan-zoom parallelism, we confirm previous work [1] showing that multiscale pointing performance strongly depends on the degree of pan-zoom parallelism. The new finding is that two-handed input and a constant zoom speed allow more input parallelism, thereby increasing performance speed.
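
One simple proxy for pan-zoom parallelism, not the specific coordination metric used in the paper, is the fraction of active input samples during which pan and zoom are driven simultaneously:

```python
# A simple parallelism proxy (illustrative, not the paper's coordination metric).
def parallelism(pan_speeds, zoom_speeds, eps: float = 1e-6) -> float:
    both = sum(1 for p, z in zip(pan_speeds, zoom_speeds)
               if abs(p) > eps and abs(z) > eps)
    active = sum(1 for p, z in zip(pan_speeds, zoom_speeds)
                 if abs(p) > eps or abs(z) > eps)
    return both / active if active else 0.0

print(parallelism([0, 1, 1, 1, 0], [1, 1, 0, 1, 0]))   # 0.5
```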

Patent
Satoshi Ejima1, Tomoaki Kawamura1
15 Aug 2002
TL;DR: In this paper, an electronic camera includes a zoom changing unit that changes a focal length of a zoom lens, an image capturing unit that executes photoelectric conversion for a subject image projected by the zoom lens onto an image-capturing area, a range finding unit that detects a distance to a subject, a photographic range setting unit that sets a size of photographic range at a subject position, and a zoom control unit that controls the zoom controlling unit based upon the photographic range that has been set and the subject distance.
Abstract: An electronic camera includes a zoom changing unit that changes a focal length of a zoom lens, an image-capturing unit that executes photoelectric conversion for a subject image projected by the zoom lens onto an image-capturing area, a range finding unit that detects a distance to a subject, a photographic range setting unit that sets a size of a photographic range at a subject position, and a zoom control unit that controls the zoom changing unit based upon the photographic range that has been set and the subject distance so that the subject within the photographic range is projected almost over the entirety of the image-capturing area.
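
The control rule in the abstract reduces to pinhole geometry: to make a photographic range of width W at distance D fill a sensor of width w, the focal length must be roughly f = w·D / W. The sensor width and example values below are assumptions.

```python
# Pinhole geometry behind the control rule: f = sensor_width * distance / range.
SENSOR_WIDTH_MM = 7.2   # small consumer sensor (assumed)

def focal_length_mm(subject_distance_m: float, range_width_m: float) -> float:
    """Focal length that projects the chosen photographic range across the sensor."""
    return SENSOR_WIDTH_MM * subject_distance_m / range_width_m

# A 1.8 m-wide framing of a subject 5 m away:
print(round(focal_length_mm(5.0, 1.8), 1), "mm")   # 20.0 mm
```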

Patent
06 Jun 2002
TL;DR: In this paper, a tunnel-type automatic package identification system for automatically identifying packages transported along a conveyor belt structure is presented, which consists of a data communications network, a package dimensioning subsystem and a package identification unit.
Abstract: A tunnel-type automatic package identification system for automatically identifying packages transported along a conveyor belt structure. The tunnel-type automatic package identification system comprises a data communications network, a package dimensioning subsystem and a package identification subsystem. The package identification subsystem is mounted above a conveyor belt structure in a work environment, and includes: a linear imaging subsystem having a field of view (FOV) with variable zoom and variable focus characteristics, for producing linear images of packages as said packages are transported along said conveyor belt structure and beneath said package identification subsystem; and a camera control computer operably connected to the linear imaging subsystem, the package dimensioning subsystem, and the communication medium of the data communications network. The camera control computer receives the package dimension data and produces zoom and focus control signals for automatically and dynamically controlling the variable zoom and variable focus characteristics of the linear imaging subsystem as the package is transported beneath the package identification subsystem. An image keying station, located remotely from the work environment, is connected to the communications medium by way of an Ethernet-over-fiber-optic data communication link, and enables the operator of the image keying station to visually display images of packages which cannot be identified by computer-based image processing, so that the operator may read such images and manually key into a database information which identifies the package associated with the operator-read image. By virtue of the present invention, images of packages can be captured with constant dpi resolution and, if necessary, manually identified at a highly remote image keying station, thereby providing increased flexibility in setting up tunnel-type automatic package identification systems in industrial environments where high noise levels and other distractions to carrying out manual image reading operations are predominant.
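
The "constant dpi" behaviour follows from keeping the optical magnification proportional to the object distance. Below is a hedged sketch with made-up sensor and geometry parameters, not the system's actual control law.

```python
# Constant-dpi zoom: keep magnification proportional to object distance.
PIXELS_PER_MM_ON_SENSOR = 200.0   # line-scan sensor pixel density (assumed)
CAMERA_HEIGHT_M = 2.0             # camera height above the conveyor belt (assumed)

def focal_length_for_dpi(package_height_m: float, target_dpi: float) -> float:
    """Focal length (mm) that keeps the package top surface at `target_dpi`."""
    distance_mm = (CAMERA_HEIGHT_M - package_height_m) * 1000.0
    magnification = target_dpi / (PIXELS_PER_MM_ON_SENSOR * 25.4)
    return magnification * distance_mm

# Taller packages are closer to the camera, so the required focal length shrinks:
for h in (0.1, 0.5, 1.0):
    print(h, "m tall ->", round(focal_length_for_dpi(h, 200), 1), "mm")
```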

Patent
Adam Yeh1
16 Jan 2002
TL;DR: In this paper, the authors propose a database architecture and method for processing data in a multidimensional database, including performing Web usage analysis, using XML metadata, using zoom in/zoom out events to navigate between information in the summary cube and information in detail cubes.
Abstract: A database architecture and method for processing data in a multidimensional database, including performing Web usage analysis. A summary cube contains the members of an upper level of a dimension and a detail cube contains a subset of the members of a lower level of the dimension partitioned therefrom based on a selected member of the upper level of the dimension. The detail cube also includes one or more sub-cubes containing aggregations of the first subset of the lower level members. An XML template implements a workflow to automatically create a second detail cube partitioned from the dimension based on another selected member of the upper level. Using XML metadata, the invention implements zoom in/zoom out events to navigate between information in the summary cube and information in the detail cubes.

Patent
23 Apr 2002
TL;DR: In this article, a system for sensing and displaying lens data for a cinematography zoom lens and camera in real time is presented, where a plurality of sensors are connected to the lens for producing signals continually representing the present positions of focus, zoom and T-stop setting rings of the lens.
Abstract: A system for sensing and displaying lens data for a cinematography zoom lens and camera in real time. A plurality of sensors are connected to the lens for producing signals continually representing the present positions of focus, zoom and T-stop setting rings of the lens. A range finder is positioned adjacent the lens for producing a signal representing the distance from the lens to an object located in front of the lens. A printed circuit board with a microprocessor receives and processes the signals and has a memory with data representing the focus, zoom and T-stop characteristics of that lens. A display device is positioned adjacent the lens and selectively displays indicia representing the positions of the focus, zoom and T-stop settings, the distance to the object and the depth of field.
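
The displayed depth of field can be derived from the sensed focus distance, focal length (zoom) and aperture using the standard optics formulas below; the circle-of-confusion value is an assumption, and the patent does not specify how its display computes depth of field.

```python
# Standard depth-of-field formulas from focus distance, focal length and T-stop.
import math

COC_MM = 0.025   # circle of confusion, roughly Super 35 (assumed)

def depth_of_field(focal_mm: float, t_stop: float, focus_m: float):
    """Return (near_m, far_m) limits of acceptable focus."""
    s = focus_m * 1000.0                                # focus distance in mm
    hyperfocal = focal_mm ** 2 / (t_stop * COC_MM) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = (s * (hyperfocal - focal_mm) / (hyperfocal - s)
           if s < hyperfocal else math.inf)
    return near / 1000.0, far / 1000.0

near, far = depth_of_field(focal_mm=50.0, t_stop=2.8, focus_m=3.0)
print(round(near, 2), "m to", round(far, 2), "m")
```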