
Showing papers on "Zoom" published in 2014


Journal ArticleDOI
TL;DR: A 3D reconstruction and visualization system for automatically producing clean and well-regularized texture-mapped 3D models for large indoor scenes, from ground-level photographs and 3D laser points, with a new algorithm called “inverse constructive solid geometry (CSG)” for reconstructing a scene with a CSG representation consisting of volumetric primitives.
Abstract: Virtual exploration tools for large indoor environments (e.g. museums) have so far been limited to either blueprint-style 2D maps that lack photo-realistic views of scenes, or ground-level image-to-image transitions, which are immersive but ill-suited for navigation. On the other hand, photorealistic aerial maps would be a useful navigational guide for large indoor environments, but it is impossible to directly acquire photographs covering a large indoor environment from aerial viewpoints. This paper presents a 3D reconstruction and visualization system for automatically producing clean and well-regularized texture-mapped 3D models for large indoor scenes, from ground-level photographs and 3D laser points. The key component is a new algorithm called "inverse constructive solid geometry (CSG)" for reconstructing a scene with a CSG representation consisting of volumetric primitives, which imposes powerful regularization constraints. We also propose several novel techniques to adjust the 3D model to make it suitable for rendering the 3D maps from aerial viewpoints. The visualization system enables users to easily browse a large-scale indoor environment from a bird's-eye view, locate specific room interiors, fly into a place of interest, view immersive ground-level panorama views, and zoom out again, all with seamless 3D transitions. We demonstrate our system on various museums, including the Metropolitan Museum of Art in New York City--one of the largest art galleries in the world.
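To make the CSG representation concrete, here is a minimal sketch of a scene built from volumetric primitives combined with boolean operations. It illustrates only the output representation described above, not the authors' inverse-CSG reconstruction algorithm; the Box/Union/Difference classes and the toy "museum wing" are invented for illustration.

```python
# Toy CSG scene from volumetric primitives (illustration only, not the
# authors' inverse-CSG algorithm). A scene is a tree of primitives
# combined with boolean operations; membership queries walk the tree.

class Box:
    """Axis-aligned box primitive."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

class Union:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def contains(self, p):
        return self.a.contains(p) or self.b.contains(p)

class Difference:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def contains(self, p):
        return self.a.contains(p) and not self.b.contains(p)

# A toy "museum wing": a large hall with a corridor volume carved out.
hall = Box((0, 0, 0), (20, 10, 4))
corridor = Box((8, -1, 0), (12, 3, 3))
scene = Difference(hall, corridor)

print(scene.contains((5, 5, 1)))   # True: inside the hall
print(scene.contains((10, 1, 1)))  # False: inside the carved-out corridor
```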

193 citations


Journal ArticleDOI
TL;DR: In this paper, two different types of cameras are used to monitor the response of a bridge to a train pass-by, and the acquired images are analyzed using three different image processing techniques (Pattern Matching, Edge Detection and Digital Image Correlation) and the results are compared with a reference measurement, obtained by a laser interferometer providing single point measurements.
Abstract: Bridge static and dynamic vibration monitoring is a key activity for both safety and maintenance purposes. The development of vision-based systems makes it possible to use this type of device for remote estimation of a bridge's vibration, simplifying the installation of the measuring system. The uncertainty of this type of measurement is strongly related to the experimental conditions (mainly the pixel-to-millimeter conversion, the target texture, the camera characteristics and the image processing technique). In this paper two different types of cameras are used to monitor the response of a bridge to a train pass-by. The acquired images are analyzed using three different image processing techniques (Pattern Matching, Edge Detection and Digital Image Correlation) and the results are compared with a reference measurement obtained by a laser interferometer providing single-point measurements. Tests with different zoom levels are shown and the corresponding uncertainty values are estimated. As the zoom level decreases it is possible not only to measure the displacement of one point of the bridge, but also to grab images of a wide portion of the structure in order to recover displacements of a large number of points in the field of view. The extreme final solution would be wide-area measurements with no targets, making measurements very easy, with clear advantages but also with some drawbacks in terms of uncertainty that remain to be fully understood.
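As a rough illustration of the Pattern Matching technique, the sketch below tracks a target's pixel displacement between two frames with OpenCV template matching and converts it to millimetres with a calibrated scale. The synthetic frames and the mm_per_px value are assumptions standing in for real bridge footage and calibration.

```python
# Sketch of displacement measurement by template matching (one of the
# three techniques above). Synthetic frames and the mm-per-pixel scale
# are stand-ins for real footage and calibration.
import numpy as np
import cv2

def make_frame(shift_px):
    """200x200 frame with a bright square target, shifted vertically."""
    frame = np.zeros((200, 200), dtype=np.uint8)
    y = 90 + shift_px
    frame[y:y + 20, 90:110] = 255
    return frame

reference = make_frame(0)
template = reference[85:115, 85:115]   # patch around the target
mm_per_px = 0.8                        # assumed calibration factor

# Locate the template in the reference and in a "deflected" frame.
res0 = cv2.matchTemplate(reference, template, cv2.TM_SQDIFF)
_, _, loc0, _ = cv2.minMaxLoc(res0)    # min location for TM_SQDIFF
res1 = cv2.matchTemplate(make_frame(3), template, cv2.TM_SQDIFF)
_, _, loc1, _ = cv2.minMaxLoc(res1)

dy_px = loc1[1] - loc0[1]
print(f"vertical displacement: {dy_px} px = {dy_px * mm_per_px:.1f} mm")
```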

165 citations


Patent
12 Jun 2014
TL;DR: In this paper, a dual-aperture zoom digital camera is presented, which includes Wide and Tele imaging sections with respective lens/sensor combinations and image signal processors and a camera controller operatively coupled to the wide and tele imaging sections.
Abstract: A dual-aperture zoom digital camera operable in both still and video modes. The camera includes Wide and Tele imaging sections with respective lens/sensor combinations and image signal processors and a camera controller operatively coupled to the Wide and Tele imaging sections. The Wide and Tele imaging sections provide respective image data. The controller is configured to combine in still mode at least some of the Wide and Tele image data to provide a fused output image from a particular point of view, and to provide without fusion continuous zoom video mode output images, each output image having a given output resolution, wherein the video mode output images are provided with a smooth transition when switching between a lower zoom factor (ZF) value and a higher ZF value or vice versa, and wherein at the lower ZF the output resolution is determined by the Wide sensor while at the higher ZF value the output resolution is determined by the Tele sensor.
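A hedged sketch of the video-mode behaviour described above: below a switch-over zoom factor (ZF) the Wide sensor defines the output, above it the Tele sensor does, with digital cropping bridging the gap. The TELE_NATIVE_ZF constant is illustrative, not a value from the patent, and the patent's smooth-transition logic at the switch-over point is omitted.

```python
# Illustrative constant (not from the patent): the Tele focal length is
# assumed to be 2.5x the Wide focal length.
TELE_NATIVE_ZF = 2.5

def video_source(zf):
    """Pick the sensor that defines output resolution at zoom factor zf."""
    if zf < TELE_NATIVE_ZF:
        return "wide", zf                 # digital crop on the Wide sensor
    return "tele", zf / TELE_NATIVE_ZF    # digital crop on the Tele sensor

for zf in (1.0, 2.0, 2.5, 4.0):
    sensor, crop = video_source(zf)
    print(f"ZF {zf}: {sensor} sensor, digital crop factor {crop:.2f}")
```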

115 citations


Journal ArticleDOI
TL;DR: Path analyses revealed the importance of users' assessment of the interface (perceived levels of natural mapping, intuitiveness, and ease of use), which can have significant consequences for user engagement as well as resulting attitudes and behavioral outcomes.
Abstract: From scrolling and clicking to dragging, flipping, sliding, hovering, and zooming, the wide array of interaction techniques has vastly expanded the range of user actions on an interface. Each of these interaction techniques affords a distinct action. But do these techniques differ in their ability to engage users and contribute to their user experience? Furthermore, do they affect how users view the content and how much they learn from it? We address these questions via two between-subjects laboratory experiments. Study 1 N = 128 investigated the relative effects of six on-screen interaction techniques click-to-download, drag, mouseover, slide, zoom, and 3D carousel on users' assessment of—as well as their engagement with—an informational website. The site for each condition was identical in content and design, except for the interaction technique used, so that we could isolate the effects of each technique on various cognitive, attitudinal and behavioral outcomes. Study 2 N = 127 examined the relative effects of four combinations of interaction techniques slide+click, slide+mouseover, drag+mouseover, and drag+zoom on the same dependent variables. Data from Study 1 suggest that although the 3D carousel generates more user action, the slide is better at aiding memory. The zoom-in/out tool was the least favored, whereas the mouseover feature fostered greater engagement among power users. Findings from Study 2, which was conducted with a different content domain, replicated the positive effects of slide and negative effects of drag in influencing user experience. Path analyses, using structural equation modeling, revealed the importance of users' assessment of the interface perceived levels of natural mapping, intuitiveness, and ease of use, which can have significant consequences for user engagement as well as resulting attitudes and behavioral outcomes. Design insights, theories, and techniques to test and capture user experience are discussed.

111 citations


Journal ArticleDOI
TL;DR: Unorganized collections of 3D models are analyzed to facilitate explorative shape synthesis by providing high‐level feedback of possible synthesizable shapes by jointly analyzing arrangements and shapes of parts across models.
Abstract: Recent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback of possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.
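The embed-and-lift idea can be sketched as follows, with PCA standing in for the paper's hierarchical embedding and synthetic vectors standing in for part-arrangement descriptors: a point clicked in the low-dimensional exploration space is inverted back to an abstract arrangement.

```python
# Embed part arrangements into 2-D and lift a clicked point back to an
# abstract arrangement. PCA and the synthetic 12-D descriptors are
# stand-ins for the paper's hierarchical embedding of real models.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
arrangements = rng.normal(size=(100, 12))   # 100 models, 12-D descriptors

pca = PCA(n_components=2).fit(arrangements)
embedded = pca.transform(arrangements)      # 2-D exploration space

click = np.array([[0.5, -1.0]])             # user clicks in the 2-D view
lifted = pca.inverse_transform(click)       # abstracted part arrangement
print("lifted arrangement descriptor:", np.round(lifted, 2))
```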

91 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: This work introduces a framework for a feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval, and presents an instantiation of the framework for exploration of Scatter Plot Spaces based on visual features.
Abstract: The extraction of relevant and meaningful information from multivariate or high-dimensional data is a challenging problem. One reason for this is that the number of possible representations, which might contain relevant information, grows exponentially with the amount of data dimensions. Also, not all views from a possibly large view space are potentially relevant to a given analysis task or user. Focus+Context or Semantic Zoom Interfaces can help to some extent to efficiently search for interesting views or data segments, yet they show scalability problems for very large data sets. Accordingly, users are confronted with the problem of identifying interesting views, yet the manual exploration of the entire view space becomes ineffective or even infeasible. While certain quality metrics have been proposed recently to identify potentially interesting views, these often are defined in a heuristic way and do not take into account the application or user context. We introduce a framework for feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval. Our basic idea is that users iteratively express their notion of interestingness when presented with candidate views. From that expression, a model representing the user's preferences is trained and used to recommend further interesting view candidates. A decision support system monitors the exploration process and assesses the relevance-driven search process for convergence and stability. We present an instantiation of our framework for exploration of Scatter Plot Spaces based on visual features. We demonstrate the effectiveness of this implementation by a case study on two real-world datasets. We also discuss our framework in light of design alternatives and point out its usefulness for development of user- and context-dependent visual exploration systems.
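A minimal sketch of such a relevance-feedback loop, assuming each candidate view is summarised by a feature vector: a preference model is refit on the user's accumulated feedback and used to rank the next candidates. The synthetic features and the simulated user_feedback function are assumptions; the paper's visual features for scatter plots are richer.

```python
# Relevance-feedback loop over candidate views (synthetic features and
# a simulated user; the paper scores scatter plots by visual features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
views = rng.normal(size=(200, 5))      # 200 candidate views, 5 features each

def user_feedback(v):
    """Simulated user who likes views with a high first feature."""
    return int(v[0] > 0.5)

labeled_X, labeled_y, shown = [], [], set()

for round_ in range(5):
    if len(set(labeled_y)) < 2:        # not enough feedback to train yet
        pick = [i for i in range(len(views)) if i not in shown][:5]
    else:                              # rank unseen views by learned preference
        model = LogisticRegression().fit(labeled_X, labeled_y)
        scores = model.predict_proba(views)[:, 1]
        pick = [int(i) for i in np.argsort(-scores) if int(i) not in shown][:5]
    for i in pick:                     # show candidates, collect feedback
        shown.add(i)
        labeled_X.append(views[i])
        labeled_y.append(user_feedback(views[i]))
    print(f"round {round_}: {sum(labeled_y)}/{len(labeled_y)} marked relevant")
```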

78 citations


Patent
09 Jul 2014
TL;DR: In this paper, the authors describe methods and apparatus for performing zoom in and zoom out operations using multiple optical chains in a camera device; at least one optical chain in the camera device includes a moveable light redirection device.
Abstract: Methods and apparatus for performing zoom in and zoom out operations are described using multiple optical chains in a camera device. At least one optical chain in the camera device includes a moveable light redirection device, said light redirection device being one of a substantially plane mirror or a prism. Different zoom focal length settings correspond to different scene capture areas for the optical chain with the moveable light redirection device. Overlap between scene areas captured by different optical chains increases during zoom in and decreases during zoom out. Images captured by different optical chains are combined and/or cropped to generate a composite image corresponding to a zoom focal length setting.

60 citations


Proceedings ArticleDOI
26 Apr 2014
TL;DR: This paper systematically compared the Pinch-Drag-Flick approach with a technique that relies on spatial manipulation, such as lifting a display up/down to zoom, and shows that spatial manipulation can significantly outperform traditional Pinch/Drag/Flick.
Abstract: The multi-touch-based pinch to zoom, drag and flick to pan metaphor has gained wide popularity on mobile displays, where it is the paradigm of choice for navigating 2D documents. But is finger-based navigation really the gold standard? In this paper, we present a comprehensive user study with 40 participants, in which we systematically compared the Pinch-Drag-Flick approach with a technique that relies on spatial manipulation, such as lifting a display up/down to zoom. While we solely considered known techniques, we put considerable effort into implementing both input strategies on popular consumer hardware (iPhone, iPad). Our results show that spatial manipulation can significantly outperform traditional Pinch-Drag-Flick. Given the carefully optimized prototypes, we are confident to have found strong arguments that future generations of mobile devices could rely much more on spatial interaction principles.

50 citations


Patent
Changyin Zhou
09 Jun 2014
TL;DR: In this article, a method is presented that includes operating a first camera to capture a first image stream and operating a second camera to capture a second image stream. In response to a zoom command received while the first stream is displayed in a live-view interface, the method transitions to displaying the second image stream.
Abstract: A method is provided that includes operating a first camera to capture a first image stream and operating a second camera to capture a second image stream. The method further includes initially using the first image stream to display a first field of view in a live-view interface of a graphic display and, while displaying the first image stream in the live-view interface, receiving an input corresponding to a zoom command. The method further includes, in response to receiving the input: (a) switching from using the first image stream to display the first field of view in the live-view interface to using a combination of the first image stream and the second stream to display a transitional field of view of the environment in the live-view interface and (b) subsequently switching to using the second image stream to display the second field of view in the live-view interface.

47 citations


Patent
19 Feb 2014
TL;DR: A zoom lens as mentioned in this paper includes an optical path extending between object and image ends, two or more zoom lens groups, an intermediate real image plane in the optical path, and all zoom lens groups on an image side or an object side of the intermediate real image plane.
Abstract: A zoom lens includes an optical path extending between object and image ends, two or more zoom lens groups, an intermediate real image plane in the optical path, and all zoom lens groups on an image side or an object side of the intermediate real image plane. The zoom lens may include at least one optical path fold in the optical path. Field optics in the vicinity of and associated with the intermediate real image plane may be in the optical path. The zoom lens may include a fixed rear optical group nearest to the image end in the optical path and a fixed aperture stop in the fixed rear optical group, wherein the aperture stop remains stationary during zooming. The zoom lens may have a magnification with an absolute value greater than 0.4 between the intermediate and final real image planes located at the image end. The zoom lens may be entirely within a housing of a digital camera or cellphone during its operation.

45 citations


Posted Content
TL;DR: This work introduces a purely feed-forward architecture for semantic segmentation that exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference.
Abstract: We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by "zooming out" from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves new state of the art performance in semantic segmentation, obtaining 64.4% average accuracy on the PASCAL VOC 2012 test set.
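The zoom-out feature construction can be sketched as follows, with mean-colour pooling over nested windows standing in for the paper's learned features, sampled points standing in for superpixels, and a small sklearn MLP as the feed-forward classifier.

```python
# Describe each sample point by pooling over nested regions of
# increasing extent ("zooming out"), then classify with a plain
# feed-forward network. Pooling, data, and labels are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

def zoom_out_features(img, cy, cx, radii=(2, 8, 32)):
    """Concatenate mean colour over nested windows around (cy, cx)."""
    h, w, _ = img.shape
    feats = []
    for r in radii:
        y0, y1 = max(0, cy - r), min(h, cy + r + 1)
        x0, x1 = max(0, cx - r), min(w, cx + r + 1)
        feats.append(img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
img = rng.random((128, 128, 3))
img[:, 64:] += 0.5                      # right half is "object", left "background"

centres = rng.integers(8, 120, size=(300, 2))
X = np.stack([zoom_out_features(img, cy, cx) for cy, cx in centres])
y = (centres[:, 1] >= 64).astype(int)   # label = which half the point is in

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```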

Patent
12 Mar 2014
TL;DR: In this paper, a remote detection device detects a control object associated with a user, and an attached computing device may use the detection information to estimate a maximum and minimum extension for the control object, and may match this with the maximum or minimum zoom amount available for a content displayed on a content surface.
Abstract: Methods, systems, computer-readable media, and apparatuses for implementation of a contactless zooming gesture are disclosed. In some embodiments, a remote detection device detects a control object associated with a user. An attached computing device may use the detection information to estimate a maximum and minimum extension for the control object, and may match this with the maximum and minimum zoom amount available for a content displayed on a content surface. Remotely detected movement of the control object may then be used to adjust a current zoom of the content.
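A minimal sketch of the mapping described above, assuming the estimated extension range and the content's zoom range are both known: the control object's extension is mapped linearly, with clamping, onto the zoom range. All numbers are example values.

```python
# Map the control object's estimated extension range onto the
# content's zoom range. Ranges below are assumed example values.

def extension_to_zoom(ext, ext_min, ext_max, zoom_min=1.0, zoom_max=4.0):
    """Linearly map control-object extension (e.g. arm reach) to zoom."""
    t = (ext - ext_min) / (ext_max - ext_min)
    t = min(max(t, 0.0), 1.0)           # clamp outside the estimated range
    return zoom_min + t * (zoom_max - zoom_min)

# User with an estimated 0.2 m..0.7 m reach moves a hand to 0.45 m:
print(extension_to_zoom(0.45, 0.2, 0.7))  # -> 2.5x
```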

Journal ArticleDOI
TL;DR: The proposed new structure has very significant advantages over existing multi-scale/multi-representation solutions (in addition to being truly vario-scale): due to tight integration of space and scale, there is guaranteed consistency between scales, it is relatively easy to implement smooth zoom, and compact, object-oriented encoding is provided for a complete scale range.
Abstract: The proposed new structure has very significant advantages over existing multi-scale/multi-representation solutions (in addition to being truly vario-scale): (1) due to tight integration of space and scale, there is guaranteed consistency between scales; (2) it is relatively easy to implement smooth zoom; and (3) compact, object-oriented encoding is provided for a complete scale range.

Journal ArticleDOI
TL;DR: An improved gesture control interface for 3D modeling manipulation tasks that possesses conventional interface level usability with low user fatigue while maintaining a high level of intuitiveness is proposed.
Abstract: Natural and intuitive interfaces for CAD modeling such as hand gesture controls have received a lot of attention recently. However, in spite of their high intuitiveness and familiarity, their use in actual applications has been found to be less comfortable than a conventional mouse interface because of user physical fatigue over long periods of operation. In this paper, we propose an improved gesture control interface for 3D modeling manipulation tasks that possesses conventional-interface-level usability with low user fatigue while maintaining a high level of intuitiveness. By analyzing problems associated with previous hand gesture controls in translation, rotation and zooming, we developed a multi-modal control interface, GaFinC: Gaze and Finger Control interface. GaFinC can track precise hand positions, recognizes several finger gestures, and utilizes an independent gaze pointing interface for setting the point of interest. To verify the performance of GaFinC, tests of manipulation accuracy and time were conducted and their results compared with those of a conventional mouse. Comfort and intuitiveness levels were also scored by means of user interviews. As a result, although the GaFinC interface was less accurate and slower than a mouse, it performs at an applicable level. Users also found it more intuitive than a mouse interface while maintaining a usable level of comfort.

Patent
24 Jun 2014
TL;DR: In this article, a methodology and user interface enabling a user to create and edit a product design from an electronic device are presented, with features that are particularly well suited to designing customized products from mobile devices by integrating form control with WYSIWYG presentation and including zoom and edit controls independently and directly associated with the visual presentation element itself.
Abstract: A methodology and user interface enabling a user to create and edit a product design from an electronic device include features that are particularly well suited to designing customized products from mobile devices. By integrating form control with WYSIWYG presentation and including zoom and edit controls independently and directly associated with the visual presentation element itself, users of mobile devices experience a cleaner, simplified design process, leading to more satisfied customers.

Patent
11 Aug 2014
TL;DR: In this paper, a map image rendering system receives map data associated with a set of zoom levels, where the map data includes style attribute data corresponding to various features of a map surface at corresponding zoom levels.
Abstract: A graphics or image rendering system, such as a map image rendering system, may receive map data associated with a set of zoom levels, where the map data includes style attribute data corresponding to various features of a map surface at corresponding zoom levels. The system may interpolate at least some of the style parameter values from the received map data to provide style parameter values over a range of zoom levels.
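A toy sketch of the interpolation step, assuming style parameter values are supplied at a few key zoom levels: np.interp fills in the range between them. The parameter name and values are illustrative.

```python
# Interpolate a style attribute between the zoom levels at which it is
# specified in the map data. Names and values are illustrative.
import numpy as np

key_zooms = [5, 10, 15]            # zoom levels carried in the map data
road_width_px = [1.0, 3.0, 8.0]    # style attribute at those levels

def road_width(zoom):
    return np.interp(zoom, key_zooms, road_width_px)

for z in (5, 7.5, 12, 15):
    print(f"zoom {z}: road width {road_width(z):.2f} px")
```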

Patent
21 May 2014
TL;DR: In this article, a user can select an object represented in video content in order to set a magnification level with respect to that object, such that a portion of the video frames containing a representation of the object is selected to maintain a presentation size of the representation corresponding to the magnification level.
Abstract: A user can select an object represented in video content in order to set a magnification level with respect to that object. A portion of the video frames containing a representation of the object is selected to maintain a presentation size of the representation corresponding to the magnification level. The selection provides for a “smart zoom” feature enabling an object of interest, such as a face of an actor, to be used in selecting an appropriate portion of each frame to magnify, such that the magnification results in a portion of the frame being selected that includes the one or more objects of interest to the user. Pre-generated tracking data can be provided for some objects, which can enable a user to select an object and then have predetermined portion selections and magnifications applied that can provide for a smoother user experience than for dynamically-determined data.
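The portion selection can be sketched as a crop computation: given a tracked object's centre and a magnification level, pick a crop that keeps the object's presentation size constant and stays inside the frame. The frame and object coordinates below are assumed values.

```python
# Select the frame portion that keeps a tracked object at a constant
# presentation size for a chosen magnification. Values are assumed.

def smart_zoom_crop(frame_w, frame_h, obj_cx, obj_cy, magnification):
    crop_w, crop_h = frame_w / magnification, frame_h / magnification
    x = min(max(obj_cx - crop_w / 2, 0), frame_w - crop_w)  # clamp to frame
    y = min(max(obj_cy - crop_h / 2, 0), frame_h - crop_h)
    return x, y, crop_w, crop_h

# 2x zoom centred on a face tracked near the frame's right edge:
print(smart_zoom_crop(1920, 1080, 1800, 500, 2.0))
# -> (960.0, 230.0, 960.0, 540.0): the crop slides left to stay in frame
```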

Patent
14 May 2014
TL;DR: In this article, a touch screen terminal and a multi-interface switching method for desktop devices including clock devices and weather devices is presented. But the interface selection is based on the user's selected user interface.
Abstract: The invention discloses a touch screen terminal and a multi-interface switching method thereof The method includes: monitoring a touch even from a user; if the touch event which triggers a command of switching a user interface is received, popping out a thumbnail of the user interface for the user to select the user interface from the thumbnail; dynamically loading image contents included by the user interface, according to the user-selected user interface; rendering the image contents in real time and displaying the contents to the user The image contents include icons of desktop devices including a clock device and a weather device; rendering treatments include window setting, matrix projection, lighting, rotation and zooming The touch screen terminal and the multi-interface switching method thereof have the advantages that the user interface can be rendered and displayed in real time and accordingly users can dynamically change the user interface according to needs

Journal ArticleDOI
TL;DR: The 3D reconstruction at variable zoom shows that higher lens magnification results in a more accurate 3D sensing system, as demonstrated in the experimental results of this research.

Patent
04 Mar 2014
TL;DR: Semantic zoom based navigation may be used to navigate content, such as content related to spreadsheets as mentioned in this paper, and different gestures (e.g. pinch/stretch, pan, swipe) may be used while navigating the content.
Abstract: Semantic zoom based navigation may be used to navigate content, such as content related to spreadsheets. Different gestures (e.g. pinch/stretch, pan, swipe) may be used while navigating the content. For example, while viewing data from a particular sheet in a workbook, a pinch gesture may be received that changes the displayed content to a thumbnail view showing thumbnails that each represent a different sheet within the workbook. A gesture may also be received to change a view of an object. For example, a user may perform a stretch gesture near an object (e.g. a chart or graph) that changes the current view to a view showing underlying data for the object. A user may also perform a gesture (e.g. a stretch gesture) on a portion of a displayed object that changes the current view to a view showing the underlying data for a specific portion of the object.

Journal ArticleDOI
01 Feb 2014
TL;DR: An extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera’s pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps as well as the rotation between pan-Tilt unit and camera.
Abstract: We present a method for active self-calibration of multi-camera systems consisting of pan-tilt zoom cameras. The main focus of this work is on extrinsic self-calibration using active camera control. Our novel probabilistic approach avoids multi-image point correspondences as far as possible. This allows an implicit treatment of ambiguities. The relative poses are optimized by actively rotating and zooming each camera pair in a way that significantly simplifies the problem of extracting correct point correspondences. In a final step we calibrate the entire system using a minimal number of relative poses. The selection of relative poses is based on their uncertainty. We exploit active camera control to estimate consistent translation scales for triplets of cameras. This allows us to estimate missing relative poses in the camera triplets. In addition to this active extrinsic self-calibration we present an extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera's pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps as well as the rotation between pan-tilt unit and camera. Quantitative experiments on real data demonstrate the robustness and high accuracy of our approach. We achieve a median reprojection error of 0.95 pixel.

Patent
Ronald Loren Kirkby, Hiro Mitsuji, Eden Sherry, Lawrence W. Neal, Yohannes Kifle
08 Oct 2014
TL;DR: In this article, an electronic device with a display, processor(s), and memory detects a first user input to zoom in on a respective portion of a first video feed displayed on the display, and, in response, performs a software zoom function on the corresponding portion of the first video to display the respective portion at a first resolution.
Abstract: An electronic device with a display, processor(s), and memory detects a first user input to zoom in on a respective portion of a first video feed displayed on the display, and, in response, performs a software zoom function on the respective portion of the first video feed to display the respective portion at a first resolution. The electronic device determines a current zoom magnification and coordinates of the respective portion of the first video feed, and sends a command to the camera to perform a hardware zoom function on the coordinates of the respective portion according to the current zoom magnification. The electronic device receives a second video feed from the camera with a field of view corresponding to the respective portion, and displays, on the display, the second video feed in the video monitoring user interface with a second resolution that is higher than the first resolution.

Patent
26 Oct 2014
TL;DR: In this article, a plurality of optical chain modules, e.g., camera modules, are used to support zoom operations, with zoom achieved by switching between groups of optical chains having different focal lengths.
Abstract: Methods and apparatus for supporting zoom operations using a plurality of optical chain modules, e.g., camera modules, are described. Switching between use of groups of optical chains with different focal lengths is used to support zoom operations. Digital zoom is used in some cases to support zoom levels between the zoom levels of different optical chain groups or discrete focal lengths to which optical chains may be switched. In some embodiments optical chains have adjustable focal lengths and are switched between different focal lengths. In other embodiments optical chains have fixed focal lengths, with different optical chain groups corresponding to different fixed focal lengths. Composite images are generated from images captured by multiple optical chains of the same group and/or different groups. The composite image is generated in accordance with a user zoom control setting. Individual composite images and/or a video sequence may be generated.
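A hedged sketch of the group-selection logic: pick the longest group zoom factor not exceeding the request and bridge the remainder digitally. The GROUP_ZF ratios are invented; the patent does not tie the scheme to particular focal lengths.

```python
# Zoom via discrete focal-length groups: choose the longest group zoom
# not exceeding the request, then bridge the remainder with digital
# zoom. Group zoom factors below are assumed example values.

GROUP_ZF = [1.0, 2.0, 4.0]   # zoom factors of the optical chain groups

def select_group(zf_request):
    optical = max(z for z in GROUP_ZF if z <= zf_request)
    digital = zf_request / optical
    return optical, digital

for zf in (1.0, 1.5, 2.0, 3.0, 4.0):
    optical, digital = select_group(zf)
    print(f"requested {zf}x -> group {optical}x, digital {digital:.2f}x")
```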

Proceedings ArticleDOI
19 Mar 2014
TL;DR: Compared to existing systems using perspective panoramas with cropping, this approach creates a cylindrical panorama, where the perspective is corrected in real-time, and the result is a better and more natural zoom.
Abstract: Panorama video is becoming increasingly popular, and we present an end-to-end real-time system to interactively zoom and pan into high-resolution panoramic videos. Compared to existing systems using perspective panoramas with cropping, our approach creates a cylindrical panorama. Here, the perspective is corrected in real-time, and the result is a better and more natural zoom. Our experimental results also indicate that such zoomed virtual views can be generated far below the frame-rate threshold. Taking into account recent trends in device development, our approach should be able to scale to a large number of concurrent users in the near future.
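The perspective correction can be sketched in one dimension: each column of the virtual view is looked up on the cylinder by its ray angle rather than by cropping a flat image, which is what produces the more natural zoom. The panorama and view parameters below are assumptions.

```python
# Column lookup for a perspective-corrected virtual view over a
# cylindrical panorama (1-D for brevity; rows are handled analogously).
import numpy as np

def virtual_view_columns(pano_width, fov_pano, pan, fov_view, out_width):
    """Panorama column index for every column of the virtual view."""
    px_per_rad = pano_width / fov_pano
    # ray angle of each output column through a pinhole on the cylinder axis
    x = np.linspace(-1, 1, out_width) * np.tan(fov_view / 2)
    angles = pan + np.arctan(x)
    return (angles * px_per_rad).astype(int) % pano_width

cols = virtual_view_columns(
    pano_width=4096, fov_pano=np.pi,    # 180-degree cylindrical panorama
    pan=np.pi / 2, fov_view=np.pi / 6,  # centre view, 30-degree zoomed FOV
    out_width=8)
print(cols)  # non-uniform column spacing = the perspective correction
```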

Patent
04 Jul 2014
TL;DR: In this paper, a dual-aperture zoom camera comprising a Wide camera with a respective Wide lens and a Tele camera with a respective Tele lens, the Wide and Tele cameras mounted directly on a single printed circuit board, is presented.
Abstract: A dual-aperture zoom camera comprising a Wide camera with a respective Wide lens and a Tele camera with a respective Tele lens, the Wide and Tele cameras mounted directly on a single printed circuit board, wherein the Wide and Tele lenses have respective effective focal lengths EFL_W and EFL_T and respective total track lengths TTL_W and TTL_T, and wherein TTL_W/EFL_W > 1.1 and TTL_T/EFL_T < 1.0. Optionally, the dual-aperture zoom camera may further comprise an optical image stabilization (OIS) controller configured to provide a compensation lens movement (LMV) according to a user-defined zoom factor (ZF) and a camera tilt (CT) through LMV = CT * EFL_ZF, where EFL_ZF is a zoom-factor-dependent effective focal length.
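As a worked example of the stated rule LMV = CT * EFL_ZF: the base focal length, its scaling with the zoom factor, and the tilt value below are assumptions for illustration only.

```python
# Worked example of LMV = CT * EFL_ZF. The base focal length, its zoom
# scaling, and the tilt value are assumptions for illustration.
import math

EFL_WIDE_MM = 4.0                          # assumed Wide focal length

def compensation_lens_movement(tilt_rad, zoom_factor):
    efl_zf = EFL_WIDE_MM * zoom_factor     # assumed ZF-dependent EFL
    return tilt_rad * efl_zf               # LMV = CT * EFL_ZF

tilt = math.radians(0.5)                   # 0.5 degree camera tilt
print(f"LMV at 2x zoom: {compensation_lens_movement(tilt, 2.0):.4f} mm")
```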

Journal ArticleDOI
29 Dec 2014-Sensors
TL;DR: A system for inferring the pinch-to-zoom gesture from surface EMG (electromyography) signals in real time is proposed; using a one-versus-one classification strategy, it yields 93.38% accuracy averaged over six subjects.
Abstract: In this paper, we propose a system for inferring the pinch-to-zoom gesture using surface EMG (Electromyography) signals in real time. Pinch-to-zoom, which is a common gesture on smart devices such as an iPhone or an Android phone, is used to control the size of images or web pages according to the distance between the thumb and index finger. To infer the finger motion, we recorded EMG signals obtained from the first dorsal interosseous muscle, which is highly related to the pinch-to-zoom gesture, and used a support vector machine for classification between four finger motion distances. The signal powers, estimated by Welch's method, were used as feature vectors. In order to solve the multiclass classification problem, we applied a one-versus-one strategy, since a support vector machine is basically a binary classifier. As a result, our system yields 93.38% classification accuracy averaged over six subjects. The classification accuracy was estimated using 10-fold cross validation. Through this system, we expect not only to develop practical prosthetic devices but also to construct a novel user experience (UX) for smart devices.
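A minimal runnable sketch of the described pipeline, with synthetic signals standing in for the recorded first-dorsal-interosseous EMG: Welch power spectra as features, sklearn's SVC (one-versus-one for multiclass) as the classifier, and 10-fold cross-validation. The sampling rate and window length are assumed.

```python
# Welch-power features + one-vs-one SVM + 10-fold CV, on synthetic
# EMG-like windows (four "pinch distance" classes differ in power).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 1000                              # assumed sampling rate, Hz
rng = np.random.default_rng(0)

def synth_window(distance_class):
    """EMG-like noise whose power grows with the pinch distance class."""
    amp = 1.0 + 0.5 * distance_class
    return amp * rng.normal(size=FS)   # 1-second window

X, y = [], []
for cls in range(4):                   # four finger-motion distances
    for _ in range(50):
        f, pxx = welch(synth_window(cls), fs=FS, nperseg=256)
        X.append(np.log(pxx))          # log band powers as features
        y.append(cls)

scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=10)
print(f"10-fold accuracy: {scores.mean():.3f}")
```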

Journal ArticleDOI
TL;DR: Virtual Petrographic Microscope (VPM) as discussed by the authors is a desktop software tool designed to aid geoscience researchers, students and educators in rock thin-section analysis without the need for a petrographic microscope.
Abstract: We present a free, standalone Windows and Mac OSX desktop software tool designed to aid geoscience researchers, students and educators in rock thin-section analysis without the need for a petrographic microscope. Virtual Petrographic Microscope (VPM) allows a user to analyse prepared high-resolution images of rock thin-sections on a computer using traditional features familiar to users of microscopes including stage rotation, objective zoom and switching between plane-polarised light and crossed-polarised light. VPM includes a range of ‘virtual’ features not possible when analysing physical thin-sections, including auto-scaling grid overlays, and annotation of thin-section images with the ability to save, export and import annotation files for collaboration and education. A case study involved a trial of the software by an intermediate undergraduate geology class. Analysis of the final examination results shows that incorporation of the VPM tool into the class program improved skill at recognising common ...

Journal ArticleDOI
Di Wang, Qiong-Hua Wang, Chuan Shen, Xin Zhou, Chun-Mei Liu
TL;DR: The zoom module of the system is formed by a liquid lens and a spatial light modulator (SLM) and can change the magnification of an image without mechanical moving parts and keep the output plane stationary.
Abstract: In this work, we propose an active optical zoom system. The zoom module of the system is formed by a liquid lens and a spatial light modulator (SLM). By controlling the focal lengths of the liquid lens and the encoded digital lens on the SLM panel, we can change the magnification of an image without mechanical moving parts and keep the output plane stationary. The magnification can change from 1/3 to 3/2 as the focal length of the encoded lens on the SLM changes from infinity to 24 cm. The proposed active zoom system is simple and flexible, and has widespread application in optical communications, imaging systems, and displays.

Journal ArticleDOI
TL;DR: The framework of reaction systems is extended by introducing (extended) zoom structures, which formalize a depository of knowledge of a discipline of science; this allows one to deal with the hierarchical nature of biology.
Abstract: In this paper we extend the framework of reaction systems by introducing (extended) zoom structures, which formalize a depository of knowledge of a discipline of science. The integrating structure of such a depository (which is a well-founded partial order) allows one to deal with the hierarchical nature of biology. This leads to the notion of an exploration system, which consists of (1) a static part, a depository of knowledge given by an extended zoom structure, and (2) a dynamic part, given by a family of reaction systems. In this setup the depository of knowledge is explored by computations/processes provided by reaction systems from this family, where this exploration can use/integrate knowledge present on different levels (e.g., atomic, cellular, organism, species, … levels).
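For readers unfamiliar with the dynamic part, here is a minimal sketch of standard reaction-system semantics (which the exploration systems above build on): a reaction fires on a state if its reactants are present and its inhibitors absent, and the next state is the union of products of enabled reactions. The example species are invented.

```python
# Standard reaction-system dynamics: a reaction (R, I, P) is enabled on
# state T iff R is a subset of T and I is disjoint from T; the result is
# the union of products of all enabled reactions (no permanence).

def result(reactions, state):
    out = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            out |= products
    return out

# Toy system at a "cellular" level of a zoom structure:
reactions = [
    ({"glucose"}, {"toxin"}, {"energy"}),
    ({"energy"}, set(), {"growth"}),
]
state = {"glucose"}
for step in range(3):
    state = result(reactions, state)
    print(step, sorted(state))
# step 0: ['energy']; step 1: ['growth']; step 2: [] (no sustained input)
```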

Journal ArticleDOI
TL;DR: This paper proposes a novel method for simultaneously estimating the intrinsic and extrinsic camera parameters based on an energy minimization framework, and it is confirmed experimentally that the proposed method achieves accurate camera parameter estimation during camera zooming.