scispace - formally typeset

Showing papers on "Orientation (computer vision) published in 2001"


Patent
06 Jun 2001
TL;DR: In this article, movement followed by an end of movement of a device is detected; the orientation of the device is then determined and used to set the orientation of images on the display.
Abstract: In a device having a display, a change in focus for an application is used with a requested usage of a context attribute to change the amount of information regarding the context attribute that is sent to another application. A method of changing the orientation of images on a device's display detects movement followed by an end of movement of the device. The orientation of the device is then determined and is used to set the orientation of images on the display. A method of setting the orientation of a display also includes storing information regarding an item displayed in a first orientation before changing the orientation. When the orientation is returned to the first orientation, the stored information is retrieved and is used to display the item in the first orientation. The stored information can include whether the item is to appear in the particular orientation.
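
The detect-motion-then-set-orientation step can be caricatured with a gravity-based decision rule; the axis conventions and the 90-degree buckets below are assumptions for illustration, not taken from the patent:

```python
# Toy sketch: pick a display rotation from a gravity reading taken once
# motion has ended. Axis conventions here are assumed, not the patent's.
def display_orientation(ax, ay):
    """Return the screen rotation (degrees) that keeps images upright."""
    if abs(ax) > abs(ay):               # gravity mostly along device x-axis
        return 90 if ax > 0 else 270
    return 0 if ay > 0 else 180         # gravity mostly along device y-axis
```

A reading of (0.0, 1.0) keeps the default upright orientation, while (1.0, 0.1) rotates the display by 90 degrees.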

565 citations


Patent
17 Sep 2001
TL;DR: In this article, a system comprising an imaging device is described for processing image data corresponding to a scene that contains an image reading instruction indicia; the image data is processed in a manner that depends on features of the indicia, such as its size, scaling, orientation, and distortion.
Abstract: The invention relates to a system, comprising an imaging device, for processing image data corresponding to a scene that contains an image reading instruction indicia. In accordance with the invention, image data corresponding to a scene comprising an image reading instruction indicia is processed in a manner that depends on features of the image reading instruction indicia. If the image reading instruction indicia is of a type whose size, scaling, orientation, and distortion can be determined, scaling, orientation, and distortion characteristics determined from the image reading instruction indicia can be used to improve the image reading process.

228 citations


Journal ArticleDOI
TL;DR: This paper describes a generalized formulation of optical flow estimation based on models of brightness variations that are caused by time-dependent physical processes, which simultaneously estimate the 2D image motion and the relevant physical parameters of the brightness change model.
Abstract: Although most optical flow techniques presume brightness constancy, it is well-known that this constraint is often violated, producing poor estimates of image motion. This paper describes a generalized formulation of optical flow estimation based on models of brightness variations that are caused by time-dependent physical processes. These include changing surface orientation with respect to a directional illuminant, motion of the illuminant, and physical models of heat transport in infrared images. With these models, we simultaneously estimate the 2D image motion and the relevant physical parameters of the brightness change model. The estimation problem is formulated using total least squares, with confidence bounds on the parameters. Experiments in four domains, with both synthetic and natural inputs, show how this formulation produces superior estimates of the 2D image motion.
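
The brightness-change formulation can be illustrated with a toy linear system. The sketch below uses plain least squares in place of the paper's total-least-squares estimator, on synthetic gradient data, with a single multiplicative brightness-change parameter (an assumed simplification):

```python
import numpy as np

# Per-pixel model (assumed simplification of the paper's formulation):
#   Ix*u + Iy*v + It = d * I,  with d a brightness-change rate.
# Synthetic window data; in practice Ix, Iy, It come from image derivatives.
rng = np.random.default_rng(0)
n = 100
Ix = rng.normal(size=n)              # spatial gradients
Iy = rng.normal(size=n)
I = rng.normal(size=n) + 5.0         # image intensities
u_true, v_true, d_true = 0.5, -0.2, 0.1
It = d_true * I - Ix * u_true - Iy * v_true   # noiseless temporal derivative

# Rearranged as [Ix, Iy, -I] @ [u, v, d]^T = -It and solved in one shot.
A = np.column_stack([Ix, Iy, -I])
params, *_ = np.linalg.lstsq(A, -It, rcond=None)
u, v, d = params
```

With noiseless synthetic data the estimate recovers (0.5, -0.2, 0.1); the paper additionally derives confidence bounds, which this sketch omits.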

209 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: A simple derivation is presented to show that RS generates the minimum mean-squared error (MMSE) estimate of the high-resolution image, given the low-resolution image.
Abstract: We introduce a new approach to optimal image scaling called resolution synthesis (RS). In RS, the pixel being interpolated is first classified in the context of a window of neighboring pixels; and then the corresponding high-resolution pixels are obtained by filtering with coefficients that depend upon the classification. RS is based on a stochastic model explicitly reflecting the fact that pixels fall into different classes such as edges of different orientation and smooth textures. We present a simple derivation to show that RS generates the minimum mean-squared error (MMSE) estimate of the high-resolution image, given the low-resolution image. The parameters that specify the stochastic model must be estimated beforehand in a training procedure that we have formulated as an instance of the well-known expectation-maximization (EM) algorithm. We demonstrate that the model parameters generated during the training may be used to obtain superior results even for input images that were not used during the training.
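
The classify-then-filter idea can be sketched in a few lines. The two classes, the contrast threshold, and the filter coefficients below are invented for illustration; the paper learns its classifier and filters via EM:

```python
import numpy as np

# Toy classification-based interpolation in the spirit of resolution
# synthesis. Threshold and coefficients are illustrative, not trained.
def interpolate_pixel(window):
    """Produce one output sample from a 3-pixel neighborhood."""
    contrast = window.max() - window.min()
    if contrast > 0.5:                        # "edge" class
        coeffs = np.array([0.0, 1.0, 0.0])    # replicate: keep edges sharp
    else:                                     # "smooth" class
        coeffs = np.array([0.25, 0.5, 0.25])  # average: suppress noise
    return float(coeffs @ window)

smooth = np.array([0.40, 0.50, 0.60])   # low contrast -> averaged
edge = np.array([0.0, 1.0, 1.0])        # high contrast -> replicated
```

The real method replaces this hand-made two-way split with a learned class posterior and per-class MMSE filters.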

195 citations


Book
01 Sep 2001
TL;DR: In this article, a hierarchy of novel, accurate and flexible techniques is developed to address a number of different situations, ranging from the absence of scene metric information to cases where some world distances are known but there is not sufficient information for a complete camera calibration.
Abstract: This monograph presents some research carried out by the author into three-dimensional visual reconstruction during his studies for a Doctor of Philosophy degree at the University of Oxford. This book constitutes the author's D.Phil. dissertation which, having been awarded the British Computer Society Distinguished Dissertation Award for the year 2000, has kindly been published by Springer-Verlag London Ltd. The work described in this book develops the theory of computing world measurements (e.g. distances, areas etc.) from photographs of scenes and reconstructing three-dimensional models of the scene. The main tool used is projective geometry, which forms the basis for accurate estimation algorithms. Novel methods are described for computing virtual reality-like environments from any kind of perspective image. The techniques presented employ uncalibrated images; no knowledge of the internal parameters of the camera (such as focal length and aspect ratio) or of its pose (position and orientation with respect to the viewed scene) is required at any time. Extensive use is made of geometric characteristics of the scene. Thus there is no need for specialized calibration devices. A hierarchy of novel, accurate and flexible techniques is developed to address a number of different situations ranging from the absence of scene metric information to cases where some world distances are known but there is not sufficient information for a complete camera calibration. The geometry of single views is explored and monocular vision is shown to be sufficient to obtain a partial or complete three-dimensional reconstruction of a scene. To achieve this, the properties of planar homographies and planar homologies are extensively exploited. The geometry of multiple views is also investigated, particularly the use of a parallax-based approach for structure and camera recovery. The duality between two-view and three-view configurations is described in detail.

167 citations


Journal ArticleDOI
TL;DR: This paper presents a procedure for the analysis of left-right (bilateral) asymmetry in mammograms based upon the detection of linear directional components by using a multiresolution representation based upon Gabor wavelets.
Abstract: This paper presents a procedure for the analysis of left-right (bilateral) asymmetry in mammograms. The procedure is based upon the detection of linear directional components by using a multiresolution representation based upon Gabor wavelets. A particular wavelet scheme with two-dimensional Gabor filters as elementary functions with varying tuning frequency and orientation, specifically designed in order to reduce the redundancy in the wavelet-based representation, is applied to the given image. The filter responses for different scales and orientations are analyzed by using the Karhunen-Loeve (KL) transform and Otsu's method of thresholding. The KL transform is applied to select the principal components of the filter responses, preserving only the most relevant directional elements appearing at all scales. The selected principal components, thresholded by using Otsu's method, are used to obtain the magnitude and phase of the directional components of the image. Rose diagrams computed from the phase images, and statistical measures computed therefrom, are used for quantitative and qualitative analysis of the oriented patterns. A total of 80 images from 20 normal cases, 14 asymmetric cases, and six architectural distortion cases from the Mini-MIAS (Mammographic Image Analysis Society, London, U.K.) database were used to evaluate the scheme using the leave-one-out methodology. Average classification accuracy rates of up to 74.4% were achieved.
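
A minimal Gabor-filter-bank sketch of the orientation-selection step follows; the kernel parameters and the synthetic stripe texture are illustrative and differ from the paper's redundancy-reducing wavelet design:

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, freq=0.25, size=9):
    """Complex Gabor kernel tuned to spatial frequency `freq` at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return gauss * np.exp(1j * 2 * np.pi * freq * xr)

# Synthetic texture: vertical stripes (intensity varies along x only).
img = np.cos(2 * np.pi * 0.25 * np.arange(32))[None, :] * np.ones((32, 1))

def response_energy(theta):
    """Magnitude of the filter response at the image centre."""
    patch = img[11:20, 11:20]                # 9x9 patch, no padding needed
    return float(abs(np.sum(patch * gabor_kernel(theta))))

angles = [0.0, np.pi / 4, np.pi / 2]
dominant = max(angles, key=response_energy)  # strongest response at theta = 0
```

The complex (quadrature) kernel makes the response magnitude insensitive to stripe phase, which is why the energy, not the raw filter output, selects the orientation.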

164 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for human face detection and facial feature extraction is devised using the genetic algorithm and the eigenface technique; the effects of lighting and of face orientation are considered and addressed.

163 citations


Patent
06 Jun 2001
TL;DR: In this article, a surgical operation image acquisition/display apparatus comprises an observation section, a specifying section, and an image display section that is adapted either to display any one of the images obtained by the observation sections or to synthetically combine them and display the combined image.
Abstract: A navigation apparatus comprises a navigation-related information generating section and a display section. The navigation-related information generating section measures the position and orientation of an object and a target in a three-dimensional space and generates navigation-related information to be used for navigating the object toward the target. The display section displays the navigation-related information generated by the navigation-related information generating section in any of different modes depending on the relationship between the position and orientation of the object and that of the target. A surgical operation image acquisition/display apparatus comprises an observation section, an image display section and a specifying section. The observation section includes a plurality of observation sections whose position and orientation are modifiable. The image display section is adapted either to display any one of the images obtained by the observation sections or to synthetically combine them and display the combined image. The specifying section specifies the image to be displayed on the image display section according to the position and orientation of the observation section.

162 citations


Patent
03 Jan 2001
TL;DR: In this article, a system that incorporates curvature sensors, a garment for sensing the body position of a person, and a method for registering a patient's body to a volumetric image data set in preparation for computer-assisted surgery or other therapeutic interventions is presented.
Abstract: There is provided a device for generating a frame of reference and tracking the position and orientation of a tool in a computer-assisted image guided surgery or therapy system. A first curvature sensor including fiducial markers is provided for positioning on a patient prior to volumetric imaging, and sensing the patient's body position during surgery. A second curvature sensor is coupled to the first curvature sensor at one end and to a tool at the other end to inform the computer-assisted image guided surgery or therapy system of the position and orientation of the tool with respect to the patient's body. A system is provided that incorporates curvature sensors, a garment for sensing the body position of a person, and a method for registering a patient's body to a volumetric image data set in preparation for computer-assisted surgery or other therapeutic interventions. This system can be adapted for remote applications as well.

152 citations


Proceedings ArticleDOI
08 Dec 2001
TL;DR: A novel, non-linear representation of edge structure is used to improve the performance of model matching algorithms and object verification/recognition tasks, and leads to better recognition/verification of faces in an access control task.
Abstract: We show how a novel, non-linear representation of edge structure can be used to improve the performance of model matching algorithms and object verification/recognition tasks. Rather than represent the image structure using intensity values or gradients, we use a measure which indicates the orientation of structures at each pixel, together with an indication of how reliable the orientation estimate is. Orientations in flat, noisy regions tend to be penalised whereas those near strong edges are favoured. We demonstrate that this representation leads to more accurate and reliable matching between models and new images, and leads to better recognition/verification of faces in an access control task.

151 citations


Journal ArticleDOI
TL;DR: In this paper, an approach for localization using geometric features from a 360° laser range finder and a monocular vision system is investigated in large-scale experiments; the results were obtained with a fully self-contained system in extensive tests covering an overall length of more than 6.4 km and 150,000 localization cycles.

Journal ArticleDOI
TL;DR: The variational method is applied to denoise and restore general nonflat image features and Riemannian objects such as metric, distance and Levi--Civita connection play important roles in the models.
Abstract: We develop both mathematical models and computational algorithms for variational denoising and restoration of nonflat image features. Nonflat image features are those that live on Riemannian manifolds, instead of on Euclidean spaces. Familiar examples include the orientation feature (from optical flows or gradient flows) that lives on the unit circle S1, the alignment feature (from fingerprint waves or certain texture images) that lives on the real projective line $\mathbb{RP}^1$, and the chromaticity feature (from color images) that lives on the unit sphere S2. In this paper, we apply the variational method to denoise and restore general nonflat image features. Mathematical models for both continuous image domains and discrete domains (or graphs) are constructed. Riemannian objects such as metric, distance and Levi--Civita connection play important roles in the models. Computational algorithms are also developed for the resulting nonlinear equations. The mathematical framework can be applied to res...
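
Why the circle S1 needs special treatment can be seen in a four-sample toy example: naive averaging of angles fails at the 2π wrap, while averaging unit vectors (a crude chordal stand-in for the paper's Riemannian machinery) does not:

```python
import numpy as np

# Four orientation samples, all near 0 (mod 2*pi). Toy data, not the
# paper's variational model.
angles = np.array([0.1, 6.2, 0.05, 6.25])

naive_mean = float(np.mean(angles))   # ~3.15: lands on the opposite side

# Map to unit vectors on S^1, average in the plane, map back.
vecs = np.column_stack([np.cos(angles), np.sin(angles)])
m = vecs.mean(axis=0)
circular_mean = float(np.arctan2(m[1], m[0]))   # near 0, as desired
```

The paper's models generalise this intuition: distances and averages are taken on the manifold (S1, RP1, S2), not in the embedding coordinates.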

Proceedings ArticleDOI
07 Jul 2001
TL;DR: In this article, the author presented a new velocity estimation algorithm, using orientation tensors and parametric motion models to provide both fast and accurate results; one tradeoff between accuracy and speed was that no attempt was made to obtain regions of coherent motion when estimating the parametric models.
Abstract: In a previous paper, the author presented a new velocity estimation algorithm, using orientation tensors and parametric motion models to provide both fast and accurate results. One of the tradeoffs between accuracy and speed was that no attempts were made to obtain regions of coherent motion when estimating the parametric models. In this paper we show how this can be improved by doing a simultaneous segmentation of the motion field. The resulting algorithm is slower than the previous one, but more accurate. This is shown by evaluation on the well-known Yosemite sequence, where already the previous algorithm showed an accuracy which was substantially better than for earlier published methods. This result has now been improved further.

Patent
10 Aug 2001
TL;DR: In this paper, an imaging system comprising a plurality of first image capture devices is presented, where overlapping rectilinear images are captured and halved, with the left halves being stitched and transformed into a first equirectangular image and the right halves being stitched and transformed into a second equirectangular image.
Abstract: An imaging system comprising a plurality of first image capture devices. Overlapping rectilinear images are captured and halved, with the left halves being stitched and transformed into a first equirectangular image and the right halves being stitched and transformed into a second equirectangular image. The first and second equirectangular images are displayed in a stereoscopic orientation to produce a stereoscopic equirectangular image. The imaging system may be utilized to capture a plurality of sequential images, to produce a full-motion stereoscopic equirectangular image.

Journal ArticleDOI
TL;DR: Results indicate that internal noise shows a primary dependence on texture density but that, counterintuitively, subjects rely on a sample size approximately equal to a fixed power of the number of samples present, regardless of their spatial arrangement.
Abstract: Channel-based models of human spatial vision require that the output of spatial filters be pooled across space. This pooling yields global estimates of local feature attributes such as orientation that are useful in situations in which that attribute may be locally variable, as is the case for visual texture. The spatial characteristics of orientation summation are considered in the study. By assessing the effect of orientation variability on observers' ability to estimate the mean orientation of spatially unstructured textures, one can determine both the internal noise on each orientation sample and the number of samples being pooled. By a combination of fixing and covarying the size of textured regions and the number of elements constituting them, one can then assess the effects of the texture's size, density, and numerosity (the number of elements present) on the internal noise and the sampling density. Results indicate that internal noise shows a primary dependence on texture density but that, counterintuitively, subjects rely on a sample size approximately equal to a fixed power of the number of samples present, regardless of their spatial arrangement. Orientation pooling is entirely flexible with respect to the position of input features.
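
The pooling idea, that more samples yield a lower-error estimate of the mean orientation, can be simulated directly. The noise level and sample counts below are invented, not the study's fitted values:

```python
import numpy as np

# Toy simulation: estimate a mean orientation from n noisy samples.
rng = np.random.default_rng(1)
true_orientation = 0.3      # radians; assumed value
internal_noise = 0.2        # per-sample noise s.d.; assumed value

def estimate(n_samples):
    samples = true_orientation + rng.normal(0.0, internal_noise, size=n_samples)
    return samples.mean()

# Average absolute error shrinks roughly as 1/sqrt(n) as samples are pooled.
err_4 = np.mean([abs(estimate(4) - true_orientation) for _ in range(500)])
err_64 = np.mean([abs(estimate(64) - true_orientation) for _ in range(500)])
```

The study's counterintuitive finding is that observers' effective n grows only as a fixed power of the number of elements present, rather than using every sample.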

Patent
13 Sep 2001
TL;DR: In this article, the authors disclosed a three-dimensional position/orientation sensing apparatus which can measure a broad region: an image obtained by capturing a marker having a known position/orientation relation with an object is analyzed, the relative position/orientation relation between the marker and the capturing apparatus is obtained, and from these the position and orientation of the object are determined. Also disclosed are an information presenting system in which the captured image of an actual object can easily be compared with object data, and a model error detecting system.
Abstract: Disclosed are: a three-dimensional position/orientation sensing apparatus which can measure a broad region, in which an image obtained by capturing a marker having a known position/orientation relation with an object is analyzed, the relative position/orientation relation between the marker and the capturing apparatus is obtained, and the position and orientation of the object are thereby determined; an information presenting system in which the captured image of an actual object can easily be compared with object data; and a model error detecting system.

Journal ArticleDOI
TL;DR: A fast calibration method for computing the position and orientation of 2-D ultrasound images in 3-D space where a position sensor is mounted on the US probe, based on a custom-built phantom.
Abstract: We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter “N”) embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane. (E-mail: ykim@u.washington.edu)
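
The core of the N-fiducial geometry is a one-line interpolation: the ratio of segment lengths observed in the US image locates the scan plane's intersection along the known 3-D diagonal of the "N". The coordinates and ratio below are invented for illustration:

```python
import numpy as np

# Sketch of the N-fiducial principle. A and B are the known endpoints of
# the diagonal wire in the phantom frame; t is the length ratio measured
# between the three wire cross-sections in the US image. All values are
# toy numbers, not the paper's phantom geometry.
A = np.array([0.0, 0.0, 0.0])     # diagonal wire start (mm, phantom frame)
B = np.array([40.0, 0.0, 40.0])   # diagonal wire end
t = 0.25                          # ratio recovered from the US image

P = A + t * (B - A)               # 3-D point imaged by the scan plane
```

Each such point gives one correspondence between image coordinates and phantom coordinates; with around ten N-fiducials in a single image, the rigid calibration transform can be solved.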

Journal ArticleDOI
TL;DR: The authors' proposed algorithm considers line fits in both FOV directions and gives a globally consistent set of expansion coefficients and an optimal image center, which provides more exhaustive FOV correction than previously proposed algorithms.
Abstract: Modern video-based endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive procedures. Unfortunately, inherent barrel distortion prevents accurate perception of range. This makes measurement and distance judgment difficult and causes difficulties in emerging applications, such as virtual guidance of endoscopic procedures. Such distortion also arises in other wide-FOV camera circumstances. This paper presents a distortion correction technique that can automatically calculate correction parameters, without precise knowledge of horizontal and vertical orientation. The method is applicable to any camera-distortion correction situation. Based on a least-squares estimation, the authors' proposed algorithm considers line fits in both FOV directions and gives a globally consistent set of expansion coefficients and an optimal image center. The method is insensitive to the initial orientation of the endoscope and provides more exhaustive FOV correction than previously proposed algorithms. The distortion-correction procedure is demonstrated for endoscopic video images of a calibration test pattern, a rubber bronchial training device, and real human circumstances. The distortion correction is also shown as a necessary component of an image-guided virtual-endoscopy system that matches endoscope images to corresponding rendered three-dimensional computed tomography views.
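
A single-coefficient radial model shows the shape of such a correction; the paper estimates a full polynomial expansion plus the image centre, which this sketch does not:

```python
import numpy as np

# Minimal radial distortion correction with one even polynomial term.
# k1 and the centre are illustrative; the paper fits a full expansion
# and the optimal centre from line fits in both FOV directions.
def undistort(points, k1, centre):
    """Map distorted image points (N, 2) toward rectilinear positions."""
    p = points - centre
    r2 = np.sum(p**2, axis=1, keepdims=True)   # squared radius per point
    return centre + p * (1.0 + k1 * r2)        # push points outward

centre = np.array([0.0, 0.0])
k1 = 0.05
pts = np.array([[1.0, 0.0], [0.0, 2.0]])
out = undistort(pts, k1, centre)
```

With a positive k1 the correction expands points proportionally to their squared distance from the centre, straightening barrel-bent lines.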

Journal ArticleDOI
TL;DR: In this paper, the authors review orientation estimation and camera calibration, including least-squares camera calibration with lens distortion, automatic editing of calibration points, and self-calibration of a stereo rig from unknown camera motions and point correspondences.
Abstract: Contents: 1. Introduction; 2. Minimum Solutions for Orientation; 3. Generic Estimation Procedures for Orientation with Minimum and Redundant Information; 4. Photogrammetric Camera Component Calibration: A Review of Analytical Techniques; 5. Least-Squares Camera Calibration Including Lens Distortion and Automatic Editing of Calibration Points; 6. Modeling and Calibration of Variable-Parameter Camera Systems; 7. System Calibration Through Self-Calibration; 8. Self-Calibration of a Stereo Rig from Unknown Camera Motions and Point Correspondences.

Journal ArticleDOI
TL;DR: It seems that this model, which had originally been designed not to find small, hidden military vehicles, but rather to find the few most obviously conspicuous objects in an image, performed as an efficient target detector on the Search–2 dataset.
Abstract: Rather than attempting to fully interpret visual scenes in a parallel fashion, biological systems appear to employ a serial strategy by which an attentional spotlight rapidly selects circumscribed regions in the scene for further analysis. The spatiotemporal deployment of attention has been shown to be controlled by both bottom-up (image-based) and top-down (volitional) cues. We describe a detailed neuromimetic computer implementation of a bottom-up scheme for the control of visual attention, focusing on the problem of combining information across modalities (orientation, intensity, and color information) in a purely stimulus-driven manner. We have applied this model to a wide range of target detection tasks, using synthetic and natural stimuli. Performance has, however, remained difficult to objectively evaluate on natural scenes, because no objective reference was available for comparison. We present predicted search times for our model on the Search–2 database of rural scenes containing a military vehicle. Overall, we found a poor correlation between human and model search times. Further analysis, however, revealed that in 75% of the images, the model appeared to detect the target faster than humans (for comparison, we calibrated the model's arbitrary internal time frame such that 2 to 4 image locations were visited per second). It seems that this model, which had originally been designed not to find small, hidden military vehicles, but rather to find the few most obviously conspicuous objects in an image, performed as an efficient target detector on the Search–2 dataset. Further developments of the model are finally explored, in particular through a more formal treatment of the difficult problem of extracting suitable low-level features to be fed into the saliency map.
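
The cross-modality combination at the heart of the saliency map can be sketched by normalising per-feature maps to a common range and summing them. The maps below are toy arrays; the model's centre-surround filtering and iterative normalisation are omitted:

```python
import numpy as np

# Toy saliency-style combination of three feature maps into one map,
# then selection of the most conspicuous location.
def normalise(m):
    """Rescale a map to [0, 1] so modalities are comparable."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else m * 0.0

intensity = np.zeros((4, 4)); intensity[1, 2] = 2.0    # bright blob
orientation = np.zeros((4, 4)); orientation[1, 2] = 0.5  # oriented edge
colour = np.zeros((4, 4)); colour[3, 0] = 0.3          # colour pop-out

saliency = (normalise(intensity) + normalise(orientation) + normalise(colour)) / 3
attended = np.unravel_index(np.argmax(saliency), saliency.shape)
```

The location supported by two modalities, (1, 2), wins over the single-modality colour location, mirroring the model's stimulus-driven competition.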

Patent
26 Mar 2001
TL;DR: In this paper, techniques are presented for determining the presence, orientation and location of features in an image of a two-dimensional optical code, for use in mapping data in a pixel plane with grid locations in a grid-based two-dimensional code to account for size, rotation, tilt, warping and distortion of the code symbol.
Abstract: The disclosure relates to techniques for determining the presence, orientation and location of features in an image of a two-dimensional optical code. The techniques are adapted for use in mapping data in an image pixel plane with grid locations in a grid-based two-dimensional code to account for size, rotation, tilt, warping and distortion of the code symbol. Where such a code is a MaxiCode, techniques are disclosed for determining the presence and location of the MaxiCode bulls-eye, orientation modules, primary data modules and secondary data modules.

01 Jan 2001
TL;DR: The Applanix Position and Orientation System for Airborne Vehicles (POS/AV™) has been used successfully since 1994 to georeference airborne data collected from multispectral and hyperspectral scanners, LIDARs, and film and digital cameras.
Abstract: This paper describes how position and orientation measurement systems are used to directly georeference airborne imagery data, and presents the accuracies that are attainable for the final mapping products. The Applanix Position and Orientation System for Airborne Vehicles (POS/AV™) has been used successfully since 1994 to georeference airborne data collected from multispectral and hyperspectral scanners, LIDARs, and film and digital cameras. The POS/AV™ uses integrated inertial/GPS technology to directly compute the position and orientation of the airborne sensor with respect to the local mapping frame. A description of the POS/AV™ system is given, along with an overview of the sensors used and the theory behind the integrated inertial/GPS processing. An error analysis for the airborne direct georeferencing technique is then presented. Firstly, theoretical analysis is used to determine the attainable positioning accuracy of ground objects using only camera position, attitude, and image data, without ground control. Besides the theoretical error analysis, a practical error analysis was done to present actual results using only the POS data plus digital imagery, without ground control except for QA/QC. The results show that the use of POS/AV enables a variety of mapping products to be generated from airborne navigation and imagery data without the use of ground control.
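
Direct georeferencing reduces, per ray, to rotating a camera-frame direction by the INS attitude, anchoring it at the GPS position, and intersecting with the terrain. All numbers below are invented, and only a yaw rotation over flat ground is used:

```python
import numpy as np

# Toy direct-georeferencing step: one image ray to one ground point.
def heading_rotation(yaw):
    """Rotation about the vertical axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

position = np.array([1000.0, 2000.0, 500.0])  # sensor position (mapping frame)
ray_cam = np.array([0.1, 0.0, -1.0])          # slightly off-nadir camera ray
R = heading_rotation(np.pi / 2)               # INS attitude: 90 deg yaw only
ray_map = R @ ray_cam                         # ray in the mapping frame

scale = position[2] / -ray_map[2]             # intersect flat ground at z = 0
ground = position + scale * ray_map
```

A full implementation composes roll, pitch, and heading, applies boresight and lever-arm corrections, and intersects with a terrain model rather than a plane.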

Proceedings ArticleDOI
29 Oct 2001
TL;DR: A registration method for outdoor wearable AR systems using a high precision gyroscope and a vision-based drift compensation algorithm, which tracks natural features in the outdoor environment as landmarks from images captured by a camera on an HMD.
Abstract: A registration method for outdoor wearable AR systems is described. Our approach is based on using a high precision gyroscope, which can measure 3DOF angle of head direction accurately, but with some drift error. We solved the drift problem with a vision-based drift compensation algorithm, which tracks natural features in the outdoor environment as landmarks from images captured by a camera on an HMD. The paper first describes the detail of the vision-based drift compensation method. Then, a calibration method for the orientation sensor is proposed. Finally, using results from an actual wearable AR system, a comparison of registration error with and without vision-based drift compensation demonstrates the feasibility of the proposed method.
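
The gyro-plus-vision fusion can be caricatured as a complementary filter: integrate the drifting gyro, then pull the heading toward an absolute vision fix. The gain and drift rate below are invented; the paper's vision term comes from tracked natural landmarks, not a scalar heading:

```python
# Toy complementary filter for drift compensation (pure Python).
def fuse(gyro_heading, vision_heading, gain=0.1):
    """Blend the integrated gyro heading toward the vision estimate."""
    return gyro_heading + gain * (vision_heading - gyro_heading)

heading = 0.0           # filtered heading estimate
true_heading = 0.0      # what the vision landmarks report
drift_per_step = 0.01   # gyro bias accumulated each step

for _ in range(100):
    heading += drift_per_step            # gyro integration adds drift
    heading = fuse(heading, true_heading)  # vision fix removes most of it
```

Without the correction the heading would have drifted by 1.0 radian over the loop; with it, the error settles near drift_per_step × (1 − gain)/gain ≈ 0.09.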

Patent
02 Oct 2001
TL;DR: In this article, a device for recording three-dimensional ultrasound images is described, including an ultrasound head which can be freely moved by hand, an ultrasound recording apparatus, an image processing system, and a position detection system.
Abstract: The present invention is related to a device for recording three-dimensional ultrasound images. The device includes an ultrasound head which can be freely moved by hand, an ultrasound recording apparatus, an image processing system, and a position detection system. The position detection system has an analyzing unit and at least two sensors for detecting electromagnetic waves so that the position and orientation of the ultrasound head and, thus, the position and orientation of the ultrasound section images in space can be determined.

Journal ArticleDOI
TL;DR: In this paper, the confocal technique for fibre-orientation distribution measurement is described and the associated errors are discussed in detail; fibres are followed for short distances into the bulk of the sample, enabling a complete description of the fibre-orientation distribution tensor.

Proceedings ArticleDOI
21 May 2001
TL;DR: It is demonstrated that the use of color and textures as discriminative properties of the image can improve, to a large extent, the accuracy of the constructed mosaic.
Abstract: Mosaics have been commonly used as visual maps for undersea exploration and navigation. The position and orientation of an underwater vehicle can be calculated by integrating the apparent motion of the images which form the mosaic. A feature-based mosaicking method is proposed in this paper. The creation of the mosaic is accomplished in four stages: feature selection and matching, detection of points describing the dominant motion, homography computation and mosaic construction. In this work we demonstrate that the use of color and textures as discriminative properties of the image can improve, to a large extent, the accuracy of the constructed mosaic. The system is able to provide 3D metric information concerning the vehicle motion using the knowledge of the intrinsic parameters of the camera while integrating the measurements of an ultrasonic sensor. The experimental results of real images have been tested on the GARBI underwater vehicle.
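
The homography-computation stage ultimately feeds a warp like the one below, the core operation when registering frames into a mosaic (the matrix values are made up):

```python
import numpy as np

# Applying a planar homography to image points.
def apply_homography(H, pts):
    """pts: (N, 2) pixel coordinates -> warped (N, 2) coordinates."""
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T    # lift to homogeneous, transform
    return ph[:, :2] / ph[:, 2:3]        # divide out the projective scale

# A pure translation by (5, -3) expressed as a homography.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 10.0], [0.0, 0.0]])
warped = apply_homography(H, pts)
```

In the paper's pipeline, H is estimated from colour- and texture-matched features describing the dominant motion, then composed frame-to-frame to place each image in the mosaic.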

Patent
03 May 2001
TL;DR: In this paper, the orientation of a camera associated with a first image is determined based on a shape of a perimeter of a corrected version of the first image, which has less perspective distortion relative to a reference image than the original image.
Abstract: The orientation of a camera associated with a first image is determined based on a shape of a perimeter of a corrected version of the first image. The corrected version of the first image has less perspective distortion relative to a reference image than the first image. The shape of the perimeter of the corrected version of the first image is also different from the shape of the perimeter of the first image. The first image is then projected onto a surface based on the orientation of the camera.

Journal Article
TL;DR: This paper presents a novel "differential snakes" approach that integrates object extraction with image-based geospatial change detection, updating pre-existing GIS object information (shape and corresponding accuracy) rather than extracting yet another version of each object from the new image.
Abstract: The automation of object extraction from digital imagery has been a key research issue in digital photogrammetry and computer vision. In the spatiotemporal context of modern GIS, with constantly changing environments and periodic database revisions, change detection is becoming increasingly important. In this paper, we present a novel approach for the integration of object extraction and image-based geospatial change detection. We extend the model of deformable contour models (snakes) to function in a differential mode, and introduce a new framework to differentiate change detection from the recording of numerous slightly different versions of objects that may remain unchanged. We assume the existence of prior information for an object (an older record of its shape available in a GIS) with accompanying accuracy estimates. This information becomes input for our "differential snakes" approach. In a departure from standard techniques, the objective of our object extraction is not to extract yet another version of an object from the new image, but instead to update the pre-existing GIS information (shape and corresponding accuracy). By incorporating accuracy information in our technique, we identify local or global changes to this prior information. Relevant work includes Suetens et al. (1992), Gruen et al. (1995b), Gruen et al. (1997), and Lukes (1998). The majority of current object extraction methodologies are semi-automatic, whereby a human operator provides manually some approximations (e.g., by selecting points on a monitor display) and an automated algorithm uses these points as input to extract a complete object outline. Considering roads and similar linear features, these approximations may be in the form of an initial point and an approximate direction. This information is used as input to automated algorithms that proceed by profile matching (Vosselman and de Knecht, 1995), edge analysis (Nevatia and Babu, 1980), or even combinations of both (McKeown and Denlinger, 1988). Alternatively, the human operator may provide a set of points that roughly approximate the road from start to end, e.g., a polygonic approximation of a long road segment. This information is used by automated methods like dynamic programming and deformable contour models, i.e., snakes (Gruen and Li, 1997; Li, 1997). Full automation is pursued by automating the selection of the above-mentioned necessary initial information (e.g., node locations, road orientation). Examples of substantial efforts towards full automation may be found in Baumgartner et al.
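The deformable contour (snake) model that the differential approach extends can be sketched via its classical internal-energy relaxation step, with tension and rigidity terms in the style of Kass et al. This toy version omits image forces and the paper's differential and accuracy machinery entirely:

```python
import numpy as np

def snake_step(contour, alpha=0.1, beta=0.05, gamma=1.0):
    """One implicit-Euler step of a closed snake's internal energy:
    tension alpha penalises length, rigidity beta penalises curvature.
    Image forces and GIS priors are deliberately left out of this toy."""
    n = len(contour)
    # Circulant pentadiagonal stiffness matrix from the discrete
    # Euler-Lagrange equations of the internal energy.
    a = beta
    b = -alpha - 4 * beta
    c = 2 * alpha + 6 * beta
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = c
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = b
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = a
    # Implicit Euler: x_new = (I + gamma * A)^-1 x_old.
    return np.linalg.inv(np.eye(n) + gamma * A) @ contour

# A noisy circle relaxes toward a smoother, shorter closed curve.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
noisy = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.sin(7 * theta)[:, None]
smoothed = noisy
for _ in range(5):
    smoothed = snake_step(smoothed)
```

A full implementation adds an external image-energy term pulling the contour toward edges; the differential variant described above would further weight updates by the accuracy estimates attached to the prior GIS shape.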

Patent
30 Nov 2001
TL;DR: In this paper, an electronic device is provided that includes a user interface feature, a detection mechanism and one or more internal components, each of which can select the orientation of the user interface based on the detected orientation information.
Abstract: An electronic device is provided that includes a user-interface feature, a detection mechanism and one or more internal components. The user-interface feature is configurable to have a selected orientation about one or more axes. The detection mechanism can detect orientation information about the electronic device. The one or more components may select the orientation of the user-interface feature based on the detected orientation information.
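One common way to implement such orientation selection, shown here as an illustrative sketch rather than the patent's actual mechanism, is to threshold a gravity-vector reading from an accelerometer (the axis conventions and orientation names below are assumptions):

```python
def screen_orientation(ax, ay, threshold=0.5):
    """Map a 2-axis accelerometer reading (gravity components in the
    device frame, in g) to a display orientation. Axis signs and
    orientation labels are illustrative, not from the patent."""
    if abs(ax) < threshold and abs(ay) < threshold:
        # Gravity mostly along z: device lies flat, keep current state.
        return "flat"
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait-inverted"
    return "landscape-left" if ax < 0 else "landscape-right"
```

The threshold creates a dead zone so that a device lying nearly flat does not flip orientation on small tilts; real devices typically add hysteresis as well, so the display does not oscillate near the 45-degree boundary.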

Book ChapterDOI
TL;DR: A scale-invariant dissimilarity measure is proposed for comparing scale-space features at different positions and scales, and the likelihood of hierarchical, parameterized models can be evaluated in such a way that maximization of the measure over different models and their parameters allows for both model selection and parameter estimation.
Abstract: This paper presents an approach for simultaneous tracking and recognition of hierarchical object representations in terms of multiscale image features. A scale-invariant dissimilarity measure is proposed for comparing scale-space features at different positions and scales. Based on this measure, the likelihood of hierarchical, parameterized models can be evaluated in such a way that maximization of the measure over different models and their parameters allows for both model selection and parameter estimation. Then, within the framework of particle filtering, we consider the area of hand gesture analysis, and present a method for simultaneous tracking and recognition of hand models under variations in the position, orientation, size and posture of the hand. In this way, qualitative hand states and quantitative hand motions can be captured, and be used for controlling different types of computerised equipment.
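The defining property of a scale-invariant dissimilarity measure is that rescaling both features, positions and scales alike, leaves the value unchanged. The toy measure below illustrates that idea only; the measure actually proposed in the paper is more elaborate:

```python
import math

def scale_invariant_dissimilarity(f1, f2):
    """Toy dissimilarity between two scale-space features (x, y, s):
    spatial offsets are normalised by the geometric mean of the two
    scales, and scale differences are taken logarithmically, so the
    result is unchanged when both features are rescaled together."""
    x1, y1, s1 = f1
    x2, y2, s2 = f2
    s = math.sqrt(s1 * s2)                      # geometric mean scale
    spatial = math.hypot(x1 - x2, y1 - y2) / s  # scale-normalised offset
    scale = abs(math.log(s1 / s2))              # relative scale change
    return spatial + scale
```

Because both terms depend only on ratios and scale-normalised distances, comparing a hand model against image features gives the same score whether the hand appears large or small in the frame, which is what makes tracking across size changes possible.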