scispace - formally typeset

Showing papers on "Zoom published in 2011"


Patent
Avi Bar-Zeev1, John R. Lewis1
02 Nov 2011
TL;DR: In this article, a microdisplay assembly attached to a see-through display device generates a virtual object for display in the user's current focal region by adjusting its focal region, and the variable focus lens may also be adjusted to provide one or more zoom features.
Abstract: An augmented reality system provides improved focus of real and virtual objects. A see-through display device includes a variable focus lens a user looks through. A focal region adjustment unit automatically focuses the variable focus lens in a current user focal region. A microdisplay assembly attached to the see-through display device generates a virtual object for display in the user's current focal region by adjusting its focal region. The variable focus lens may also be adjusted to provide one or more zoom features. Visual enhancement of an object may also be provided to improve a user's perception of an object.

356 citations


Journal ArticleDOI
TL;DR: A class of digital imaging device capable of reversible deformation into hemispherical shapes with radii of curvature that can be adjusted dynamically, via hydraulics is described, useful for night-vision surveillance, endoscopic imaging, and other areas that require compact cameras with simple zoom optics and wide-angle fields of view.
Abstract: Imaging systems that exploit arrays of photodetectors in curvilinear layouts are attractive due to their ability to match the strongly nonplanar image surfaces (i.e., Petzval surfaces) that form with simple lenses, thereby creating new design options. Recent work has yielded significant progress in the realization of such “eyeball” cameras, including examples of fully functional silicon devices capable of collecting realistic images. Although these systems provide advantages compared to those with conventional, planar designs, their fixed detector curvature renders them incompatible with changes in the Petzval surface that accompany variable zoom achieved with simple lenses. This paper describes a class of digital imaging device that overcomes this limitation, through the use of photodetector arrays on thin elastomeric membranes, capable of reversible deformation into hemispherical shapes with radii of curvature that can be adjusted dynamically, via hydraulics. Combining this type of detector with a similarly tunable, fluidic plano-convex lens yields a hemispherical camera with variable zoom and excellent imaging characteristics. Systematic experimental and theoretical studies of the mechanics and optics reveal all underlying principles of operation. This type of technology could be useful for night-vision surveillance, endoscopic imaging, and other areas that require compact cameras with simple zoom optics and wide-angle fields of view.

239 citations


Patent
30 Sep 2011
TL;DR: In this article, the authors present an environment in which user interface software interacts with a software application to provide gesture operations for a display of a device, such as performing a scaling transform (a zoom in or zoom out) in response to a user input having two or more input points.
Abstract: At least certain embodiments of the present disclosure include an environment with user interface software interacting with a software application to provide gesture operations for a display of a device. A method for operating through an application programming interface (API) in this environment includes transferring a scaling transform call. The gesture operations include performing a scaling transform such as a zoom in or zoom out in response to a user input having two or more input points. The gesture operations also include performing a rotation transform to rotate an image or view in response to a user input having two or more input points.

183 citations
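The scaling transform in such gesture systems is typically driven by the change in distance between the two input points. A minimal sketch of that idea (the function and parameter names are illustrative assumptions, not taken from the patent):

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Return a scale factor from a two-point gesture: >1 zooms in, <1 zooms out."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    if start == 0:
        return 1.0  # degenerate gesture; leave the view unchanged
    return end / start

# Fingers move apart: distance grows from 100 to 200 pixels -> 2x zoom in.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```

In an API like the one described, this factor would be passed along in the scaling transform call; rotation transforms can be derived analogously from the angle between the two points.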


Patent
23 Jun 2011
TL;DR: In this article, the effects of a "zoom" operation within a scene are presented on the visual element in a manner other than adjusting its visual dimensions and resolution.
Abstract: A scene comprising a set of visual elements may allow a user to perform "zoom" operations in order to navigate the depth of the scene. The "zoom" semantic is often applied to simulate optical visual depth, wherein the visual elements are presented with different visual dimensions and visual resolution to simulate physical proximity or distance. However, the "zoom" semantic may be alternatively applied to other aspects of the visual elements of a scene, such as a user selection of a zoomed-in visual element, a "drill-down" operation on a data set, or navigation through a portal in a first data set to view a second data set. These alternative "zoom" semantics may be achieved by presenting the effects of a "zoom" operation within the scene on the visual presentation of the visual element in a manner other than an adjustment of the visual dimensions and resolution of the visual element.

124 citations


Patent
16 May 2011
TL;DR: In this article, a strategy for annotating a digital map is described, where the user can link a single uploaded object to multiple locations within a map (or maps) without requiring separate uploading and storing operations.
Abstract: A strategy is described for annotating a digital map. According to one exemplary aspect, the user can link a single uploaded object to multiple locations within a map (or maps) without requiring separate uploading and storing operations. According to another exemplary aspect, the user can specify a range of zoom levels in which an object is made visible on the map. According to another exemplary aspect, the user can instruct map processing functionality (MPF) to automatically extract objects from a data source (such as an RSS data source) and annotate the map with the objects. Still further aspects are described.

121 citations


Patent
03 May 2011
TL;DR: In this article, an imaging system and method are provided, comprising a first image sensor array, a first optical system to project a first image on the first image sensor array, and a second optical system, having a greater second zoom level, to project a second image on a second image sensor array.
Abstract: In various example embodiments, an imaging system and method are provided. In an embodiment, the system comprises a first image sensor array and a first optical system to project a first image on the first image sensor array, the first optical system having a first zoom level. A second optical system projects a second image on a second image sensor array, the second optical system having a second zoom level. The second image sensor array and the second optical system are pointed in the same direction as the first image sensor array and the first optical system. The second zoom level is greater than the first zoom level, such that the second image projected onto the second image sensor array is a zoomed-in portion of the first image projected on the first image sensor array. The first image sensor array includes at least four megapixels, and the second image sensor array includes one-half or less the number of pixels in the first image sensor array.

111 citations
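Because both optical systems point in the same direction, the tele image corresponds to a sub-region of the wide image whose size shrinks with the ratio of the zoom levels. A hedged sketch of that mapping, assuming the two cameras are centered and aligned (which the abstract implies but does not guarantee):

```python
def tele_region_in_wide(wide_w, wide_h, zoom1, zoom2):
    """Return (x, y, w, h): the region of the wide image that the
    higher-zoom (tele) camera sees, assuming centered, aligned optics."""
    ratio = zoom2 / zoom1  # > 1 per the abstract: second zoom level is greater
    w = wide_w / ratio
    h = wide_h / ratio
    x = (wide_w - w) / 2
    y = (wide_h - h) / 2
    return x, y, w, h

# A 2x tele camera sees the central half of the wide frame in each dimension.
region = tele_region_in_wide(4000, 3000, zoom1=1.0, zoom2=2.0)
```

A real dual-camera system would additionally calibrate for parallax and lens distortion, which this sketch ignores.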


Journal ArticleDOI
TL;DR: An electrically tunable-focusing optical zoom system using two composite LC lenses with a large zoom ratio is demonstrated and the optical principle is investigated.
Abstract: An electrically tunable-focusing optical zoom system using two composite LC lenses with a large zoom ratio is demonstrated. The optical principle is investigated. To enhance the electrically tunable focusing range of the negative lens power of the LC lens for a large zoom ratio, we adopted two composite LC lenses. Each composite LC lens consists of a sub-LC lens and a planar polymeric lens. The zoom ratio of the optical zoom system reaches ~7.9:1, and the object can be zoomed in or zoomed out continuously at object distances from infinity to 10 cm. The potential applications are cell phones, cameras, telescopes and pico projectors.

101 citations


Patent
15 Feb 2011
TL;DR: In this article, a user interface of an electronic device is configured to display user interface elements along a timeline, where each displayed user interface element is associated with an event that is characterized by one or more event attributes.
Abstract: An embodiment of a user interface of an electronic device is configured to display user interface elements along a timeline. Each displayed user interface element is associated with an event that is characterized by one or more event attributes. The event attributes include a temporal attribute (e.g., a date and/or time). Each user interface element is relatively positioned along the timeline based on its temporal attribute, and each user interface element is displayed with a visual representation of a set of its associated event attributes. The displayed set of event attributes for a particular user interface element is determined based on a position of the user interface element along the timeline and/or a spatial zoom level at the time. The spatial zoom level and/or position along the timeline of each user interface element may be modified based on user inputs.

89 citations


Journal ArticleDOI
TL;DR: In this article, a monoview and multiple scale 2-D visual control scheme is implemented for this purpose, where the relation between the focal length and the zoom factor is explicitly established.
Abstract: This paper investigates sequential robotic micromanipulation and microassembly in order to build 3-D microsystems and devices. A monoview and multiple scale 2-D visual control scheme is implemented for this purpose. The imaging system used is a photon video microscope endowed with an active zoom enabling work at multiple scales. It is modeled by a nonlinear projective method, where the relation between the focal length and the zoom factor is explicitly established. A distributed robotic system (xyθ system and φz system) with a two-finger gripping system is used in conjunction with the imaging system. The results of experiments demonstrate the relevance of the proposed approaches. The tasks were performed with the following accuracy: 1.4 μm for the positioning error and 0.5° for the orientation error.

85 citations


Patent
10 Jan 2011
TL;DR: In this article, the main scene for the game is displayed in the HMD and not the smart phone screen, and the game can take advantage of the processing power in such a smart HMD, to implement functions such as side-by-side video processing to provide 3D video to the user.
Abstract: Connections, software programming and interaction between a smart phone and a Head Mounted Display (HMD) or other video eyewear to improve user experience. The signal from an accelerometer and/or a touch screen in a smart phone is used only for certain control of an application program, such as to steer a racing car or a plane or to move a game persona character within a virtual space. The main scene for the game is displayed in the HMD and not the smart phone screen. One or more inputs from the HMD such as a head tracker or camera, are connected to the smart phone either via a wire or wirelessly such as via WiFi or Bluetooth. The head tracking and/or camera inputs are used as another input to the game, such as to pan/zoom or change the viewpoint of the user. In a still further implementation, the HMD also can have an integrated processor to make it a “smart” HMD. The game can take advantage of the processing power in such a smart HMD, to implement functions such as side-by-side video processing to provide 3D video to the user.

84 citations


Journal ArticleDOI
TL;DR: A cross-correlation based iterative procedure is developed to find both the zoom factor and the zoom centre between two EBSD diffraction patterns acquired at two camera positions, with an accuracy better than 1/100th of a pixel.

Journal ArticleDOI
TL;DR: The proposed system gives an inspector the ability to compare the current (visual) situation of a structure with its former condition and allows an inspector to evaluate the evolution of changes by simultaneously comparing the structure's condition at different time periods.
Abstract: It is well-recognized that civil infrastructure monitoring approaches that rely on visual inspection will continue to be an important methodology for condition assessment of such systems. Current inspection standards for structures such as bridges require an inspector to travel to a target structure site and visually assess the structure's condition. A less time-consuming and inexpensive alternative to current visual monitoring methods is to use a system that could inspect structures remotely and also more frequently. This article presents and evaluates the underlying technical elements for the development of an integrated inspection software tool that is based on the use of inexpensive digital cameras. For this purpose, digital cameras are appropriately mounted on a structure (e.g., a bridge) and can zoom or rotate in three directions (similar to traffic cameras). They are remotely controlled by an inspector, which allows the visual assessment of the structure's condition by looking at images captured by...

Patent
11 Oct 2011
TL;DR: Semantic zoom techniques are described in this article, which may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs, as well as semantic swaps and zooming in and out.
Abstract: Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming in and out. These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures.

Book
29 Dec 2011
TL;DR: The interaction library Squidy is introduced, which eases the design of natural user interfaces by unifying relevant frameworks and toolkits in a common library and allows users to adjust the complexity of the user interface to their current need and knowledge.
Abstract: We introduce the interaction library Squidy, which eases the design of natural user interfaces by unifying relevant frameworks and toolkits in a common library. Squidy provides a central design environment based on high-level visual data flow programming combined with zoomable user interface concepts. The user interface offers a simple visual language and a collection of ready-to-use devices, filters and interaction techniques. The concept of semantic zooming nevertheless enables access to more advanced functionality on demand. Thus, users are able to adjust the complexity of the user interface to their current need and knowledge.

Patent
28 Jul 2011
TL;DR: In this article, a diagram of a system may include a plurality of icons representing physical components of the system, and user input to zoom on a first physical component in the diagram may be received.
Abstract: Providing zooming within a system diagram. Initially, a diagram of a system may be displayed. The diagram may include a plurality of icons representing physical components of the system. These plurality of icons may be initially displayed at a first level of magnification. User input to zoom on a first physical component in the diagram may be received. Accordingly, the first physical component may be displayed at a second level of magnification and other ones of the physical components may be displayed at a third level of magnification. The second level of magnification may be greater than the first level of magnification and the third level of magnification may be less than the first level of magnification. Alternatively, or additionally, different representations for various components of the system may be displayed in the diagram during or after the zoom.

Journal Article
TL;DR: The best leaders can zoom in to examine problems and then zoom out to look for patterns and causes, says Harvard Business School's Kanter, and learn to move across a continuum of perspectives.
Abstract: Zoom buttons on digital devices let us examine images from many viewpoints. They also provide an apt metaphor for modes of strategic thinking. Some people prefer to see things up close, others from afar. Both perspectives have virtues. But they should not be fixed positions, says Harvard Business School's Kanter. To get a complete picture, leaders need to zoom in and zoom out. A close-in perspective is often found in relationship-intensive settings. It brings details into sharp focus and makes opportunities look large and compelling. But it can have significant downsides. Leaders who prefer to zoom in tend to create policies and systems that depend too much on politics and favors. They can focus too closely on personal status and on turf protection. And they often miss the big picture. When leaders zoom out, they can see events in context and as examples of general trends. They are able to make decisions based on principles. Yet a far-out perspective also has traps. Leaders can be so high above the fray that they don't recognize emerging threats. Having zoomed out to examine all possible routes, they may fail to notice when the moment is right for action on one path. They may also seem too remote and aloof to their staffs. The best leaders can zoom in to examine problems and then zoom out to look for patterns and causes. They don't divide the world into extremes: idiosyncratic or structural, situational or strategic, emotional or contextual. The point is not to choose one over the other but to learn to move across a continuum of perspectives.

Patent
11 Oct 2011
TL;DR: Semantic zoom techniques are described in this paper, which may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs.
Abstract: Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming “in” and “out.” These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures.

Patent
16 Mar 2011
TL;DR: In this article, the authors propose a method for realizing multi-screen video playback: a plurality of video cards and display devices are connected and managed, the computer reads the total display-resolution data and the number of display devices, and the relative position number of each display device is recorded.
Abstract: The invention provides a method for realizing multi-screen video playback, comprising the following steps: connecting and managing a plurality of video cards and display devices; the computer reading the total display-resolution data and the number of display devices, and recording the relative position number of each display device; the CPU calculating how to divide each full video image frame based on the number of display devices and their relative position numbers, determining for each original-resolution video image data block the corresponding display device number and zoom ratio; the CPU dividing each frame of video image data in the multi-screen video file into a plurality of original-resolution blocks accordingly, and transmitting each block to the corresponding video card over a bus; and each video card scaling its original-resolution block according to the zoom ratio and displaying it on its display device. The invention improves the refresh rate of multi-screen playback, reduces CPU resource usage and achieves real-time playback.
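The dividing step can be sketched as simple grid arithmetic: each display receives one block of the source frame plus the zoom ratio its video card applies to reach native resolution. A minimal sketch, assuming a rectangular grid of identical displays (neither assumption comes from the patent):

```python
def divide_frame(frame_w, frame_h, grid_cols, grid_rows, display_w, display_h):
    """Split a source frame into one block per display in a grid layout and
    compute the zoom ratio each video card applies to reach native resolution."""
    block_w = frame_w // grid_cols
    block_h = frame_h // grid_rows
    blocks = []
    for row in range(grid_rows):
        for col in range(grid_cols):
            blocks.append({
                "position": (col, row),                       # relative position number
                "source": (col * block_w, row * block_h, block_w, block_h),
                "zoom": (display_w / block_w, display_h / block_h),
            })
    return blocks

# A 3840x1080 frame across a 2x1 wall of 1920x1080 displays: no scaling needed.
layout = divide_frame(3840, 1080, grid_cols=2, grid_rows=1,
                      display_w=1920, display_h=1080)
```

Offloading the per-block scaling to each video card, as the patent describes, is what keeps the CPU cost low: the CPU only partitions and transmits the blocks.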

Patent
28 Jan 2011
TL;DR: In this article, a GUI layout for displaying a main document image and multiple thumbnail images in a more space efficient manner is presented, where one unified pane displays both a selected page of the document in a main image area and several thumbnail images each corresponding to a document page.
Abstract: A GUI layout for displaying a main document image and multiple thumbnail images in a more space-efficient manner. One unified pane displays both a selected page of the document in a main image area and multiple thumbnail images each corresponding to a document page. The thumbnails and the main image area do not overlap. In some embodiments, the thumbnails include multiple groups of thumbnails having different sizes. When a user selects a thumbnail image, the corresponding document page is displayed in the main image area. The pane is provided with functions that allow the user to integrate the page viewing and selection process and to customize the pane, such as: scrolling of the thumbnail images, moving the document content displayed in the main image area, changing the physical size of the main image area, changing the zoom size of the document content in the main image area, etc.

Patent
09 May 2011
TL;DR: In this paper, the authors present a method of creating and presenting a user interface comprising a Dynamic Mosaic Extended Electronic Programming Guide (DMXEPG) using video, audio, special applications, and service dynamic metadata.
Abstract: The present invention teaches a method of creating and presenting a user interface comprising a Dynamic Mosaic Extended Electronic Programming Guide (DMXEPG) using video, audio, special applications, and service dynamic metadata. The system enables television or digital radio service subscribers to select and display various programs including video, interactive TV applications, or any combination of audio or visual components grouped and presented in accordance with the dynamic program/show metadata, business rules and objectives of service providers, broadcasters, and/or personal subscriber choices, collectively referred to as mosaic element presentation criteria.

Journal ArticleDOI
TL;DR: A novel actuation method for a smooth impact drive mechanism that positions dual-slider by a single piezo-element is introduced and applied to a compact zoom lens system.
Abstract: In this paper, a novel actuation method for a smooth impact drive mechanism that positions dual-slider by a single piezo-element is introduced and applied to a compact zoom lens system. A mode chart that determines the state of the slider at the expansion or shrinkage periods of the piezo-element is presented, and the design guide of a driving input profile is proposed. The motion of dual-slider holding lenses is analyzed at each mode, and proper modes for zoom functions are selected for the purpose of positioning two lenses. Because the proposed actuation method allows independent movement of two lenses by a single piezo-element, the zoom lens system can be designed to be compact. For a feasibility test, a lens system composed of an afocal zoom system and a focusing lens was developed, and the passive auto-focus method was implemented.

01 Jul 2011
TL;DR: This paper presents the first true vario-scale structure for geographic information: a delta in scale leads to a delta in the map (and smaller scale deltas lead to smaller map deltas, down to and including the infinitesimally small delta) for all scales.
Abstract: This paper presents the first true vario-scale structure for geographic information: a delta in scale leads to a delta in the map (and smaller scale deltas lead to smaller map deltas, down to and including the infinitesimally small delta) for all scales. The structure is called smooth tGAP, and its integrated 2d space and scale representation is stored as a single 3d data structure: the space-scale cube (ssc). The polygonal area objects are mapped to polyhedral representations in the smooth tGAP structure. The polyhedral primitive integrates all scale representations of a single 2d area object. Together, all polyhedral primitives form a partition of the space-scale cube: no gaps and no overlaps (in space or scale). Obtaining a single-scale map amounts to computing a horizontal slice through the structure. The structure can be used to implement smooth zoom in an animation or morphing style. It can also be used for mixed-scale representation: more detail near the user/viewer, less detail further away, obtained by taking non-horizontal slices. For all derived representations, slices and smooth-zoom animations, the 2d maps are always perfect planar partitions (even mixed-scale objects fit together and form a planar partition). Perhaps mixed-scale is not very useful for 2d maps, but for 3d computer graphics it is one of the key techniques. Our approach also works for 3d space and scale integrated in one 4d hypercube.

Patent
27 May 2011
TL;DR: In this paper, techniques receive a gesture made to a user interface displaying multiple content objects, determine which content object to zoom, determine an appropriate size for the content object based on the bounds of the object and the size of the user interface, and zoom the object to that size.
Abstract: This document describes techniques and apparatuses for gesture-based content-object zooming. In some embodiments, the techniques receive a gesture made to a user interface displaying multiple content objects, determine which content object to zoom, determine an appropriate size for the content object based on bounds of the object and the size of the user interface, and zoom the object to the appropriate size.
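The "appropriate size" computation described above amounts to fitting the object's bounds inside the user interface while preserving aspect ratio. A minimal sketch (the function, parameter names, and the zoom cap are assumptions for illustration, not taken from the document):

```python
def fit_zoom(object_w, object_h, ui_w, ui_h, max_zoom=4.0):
    """Zoom factor that fits a content object inside the UI, preserving
    aspect ratio and capping the zoom so small objects are not over-magnified."""
    scale = min(ui_w / object_w, ui_h / object_h)
    return min(scale, max_zoom)

# A 400x300 object in a 1200x800 UI is limited by height: 800/300 ~ 2.67x.
zoom = fit_zoom(400, 300, 1200, 800)
```

Taking the minimum over both axes guarantees the zoomed object never overflows the interface in either dimension.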

Proceedings ArticleDOI
07 Aug 2011
TL;DR: Google Body gives any user access to 3D anatomy information typically reserved for physicians and medical students, and it all works from a browser, smartphone, or tablet.
Abstract: Google Body gives any user access to 3D anatomy information typically reserved for physicians and medical students. The user can peel away and add back layers of anatomy, rotate and zoom, select entities such as muscles and nerves, and search. Direct links to any view of the male or female model -- with an optional user-supplied annotation -- can be forwarded to friends, family, or physicians. And it all works from a browser, smartphone, or tablet.

Journal Article
TL;DR: In this paper, a sparse local model of image appearance is proposed for image deblurring and digital zoom, where small image patches are represented as linear combinations of a few elements drawn from some large set (dictionary) of candidates.
Abstract: This paper proposes a novel approach to image deblurring and digital zooming using sparse local models of image appearance. These models, where small image patches are represented as linear combinations of a few elements drawn from some large set (dictionary) of candidates, have proven well adapted to several image restoration tasks. A key to their success has been to learn dictionaries adapted to the reconstruction of small image patches. In contrast, recent works have proposed instead to learn dictionaries which are not only adapted to data reconstruction, but also tuned for a specific task. We introduce here such an approach to deblurring and digital zoom, using pairs of blurry/sharp (or low-/high-resolution) images for training, as well as an effective stochastic gradient algorithm for solving the corresponding optimization task. Although this learning problem is not convex, once the dictionaries have been learned, the sharp/high-resolution image can be recovered via convex optimization at test time. Experiments with synthetic and real data demonstrate the effectiveness of the proposed approach, leading to state-of-the-art performance for non-blind image deblurring and digital zoom.

Journal ArticleDOI
TL;DR: A Web-based metabolic-map diagram, which can be interactively explored by the user, called the Cellular Overview, which is available as part of the Pathway Tools software that powers multiple metabolic databases including Biocyc.org.
Abstract: Background: Displaying complex metabolic-map diagrams in Web browsers, and allowing users to interact with them for querying and for overlaying expression data, is challenging.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: A hand gesture recognition method based on color marker detection is presented that provides more flexible, natural and intuitive interaction possibilities, and also offers an economic and practical way of interaction.
Abstract: In this paper, a hand gesture recognition method based on color marker detection is presented. In this case, we have used four types of colored markers (red, blue, yellow and green) mounted on the two hands. With these markers, the user can perform different gestures such as zoom, move, draw, and write on a virtual keyboard. The implemented system provides more flexible, natural and intuitive interaction possibilities, and also offers an economical and practical way of interacting.

Patent
09 Sep 2011
TL;DR: Semantic zoom techniques are described in this article, which may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs.
Abstract: Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming “in” and “out.” These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures.

Patent
Tero Rissa1, Kaj Kristian Gronholm1
27 Dec 2011
TL;DR: In this article, an apparatus, method, and computer program product for receiving a first input, initiating a zoom function in response to the first input; receiving a second input during the zoom function, wherein the second input and first input are independent of each other.
Abstract: In accordance with an example embodiment of the present invention, an apparatus, method, and computer program product for: receiving a first input; initiating a zoom function in response to the first input; receiving a second input during the zoom function, wherein the second input and the first input are independent of each other; and controlling the zoom function based on the second input.