
Showing papers on "Orientation (computer vision) published in 1996"


Journal ArticleDOI
TL;DR: In this paper, an information-theoretic approach to registering volumetric medical images of differing modalities is presented: registration is achieved by adjusting the relative position and orientation until the mutual information between the images is maximized.
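
As a sketch of the criterion, mutual information can be estimated from a joint intensity histogram. The helper below is hypothetical and uses a simple histogram estimate for illustration; the paper's own density estimation differs. Registration then searches over relative position and orientation for the pose that maximizes this score.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information between two images/volumes
    sampled at corresponding points (illustrative, not the paper's estimator)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```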

2,005 citations


Patent
26 Jul 1996
TL;DR: In this article, a system is provided for rapidly recognizing hand gestures for the control of computer graphics, in which image moment calculations are utilized to determine an overall equivalent rectangle corresponding to hand position, orientation and size, with size in one embodiment correlating to the width of the hand.
Abstract: A system is provided for rapidly recognizing hand gestures for the control of computer graphics, in which image moment calculations are utilized to determine an overall equivalent rectangle corresponding to hand position, orientation and size, with size in one embodiment correlating to the width of the hand. In a further embodiment, a hole generated through the utilization of the touching of the forefinger with the thumb provides a special trigger gesture recognized through the corresponding hole in the binary representation of the hand. In a further embodiment, image moments of images of other objects are detected for controlling or directing onscreen images.
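
A minimal sketch of the moment computation, assuming a binary hand mask as input; the size formula uses the fact that a uniform rectangle of length L has variance L²/12 along its axis, so the rectangle recovered here is the one whose second moments match the mask's.

```python
import numpy as np

def equivalent_rectangle(mask):
    """Centroid, orientation, and size of the rectangle whose image moments
    match those of a binary mask (a sketch, not the patent's procedure)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                      # centroid (position)
    mu20 = ((xs - cx) ** 2).mean()                     # central second moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)    # orientation
    root = np.hypot(2 * mu11, mu20 - mu02)
    lam1 = (mu20 + mu02 + root) / 2                    # principal variances
    lam2 = (mu20 + mu02 - root) / 2
    length, width = np.sqrt(12 * lam1), np.sqrt(12 * lam2)  # L^2/12 = variance
    return (cx, cy), theta, (length, width)
```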

371 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of contour inference from partial data, as obtained from state-of-the-art edge detectors, and argues that in order to obtain more perceptually salient contours, it is necessary to impose generic constraints such as continuity and co-curvilinearity.
Abstract: We address the problem of contour inference from partial data, as obtained from state-of-the-art edge detectors. We argue that in order to obtain more perceptually salient contours, it is necessary to impose generic constraints such as continuity and co-curvilinearity. The implementation is in the form of a convolution with a mask which encodes both the orientation and the strength of the possible continuations. We first show how the mask, called the “Extension field”, is derived, then how the contributions from different sites are collected to produce a saliency map. We show that the scheme can handle a variety of input data, from dot patterns to oriented edgels in a unified manner, and demonstrate results on a variety of input stimuli. We also present a similar approach to the problem of inferring contours formed by end points. In both cases, the scheme is non-linear, non-iterative, and unified in the sense that all types of input tokens are handled in the same manner.
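
A heavily simplified vote-accumulation sketch conveys the flavor of the scheme: each oriented edgel casts votes over the image whose strength decays with distance and favors sites along smooth continuations of its tangent. The field used below is a crude stand-in for the authors' extension field, which encodes the orientation and strength of continuations more carefully.

```python
import numpy as np

def saliency_map(edgels, shape, sigma=10.0):
    """Accumulate saliency votes from oriented edgels (x, y, theta, strength).
    Illustrative stand-in; not the authors' exact extension field."""
    sal = np.zeros(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for x, y, theta, strength in edgels:
        dx, dy = xx - x, yy - y
        r = np.hypot(dx, dy) + 1e-9            # distance to each site
        ang = np.arctan2(dy, dx) - theta       # angle off the edgel's tangent
        # proximity term times an alignment term that peaks along the tangent
        sal += strength * np.exp(-r / sigma) * np.cos(ang) ** 2
    return sal
```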

316 citations


Patent
Eric C. Anderson1
19 Jan 1996
TL;DR: In this paper, an apparatus comprising an image sensor, an orientation sensor, a memory and a processing unit is described, in which the image sensor generates captured image data that is selectively transferred to an image processing unit in response to the orientation sensor signals.
Abstract: The apparatus of the present invention preferably comprises an image sensor, an orientation sensor, a memory and a processing unit. The image sensor is used for generating captured image data. The orientation sensor is coupled to the image sensor, and is used for generating signals relating to the position of the image sensor. The memory has an auto-rotate unit comprising program instructions for transforming the captured image data into rotated image data in response to the orientation sensor signals. The processing unit executes program instructions stored in the memory, and is coupled to the image sensor, the orientation sensor and the memory. The method of the present invention preferably comprises the steps of: generating image data representative of an object with an image sensor; identifying an orientation of the image sensor relative to the object during the generating step; and selectively transferring the image data to an image processing unit in response to the identifying step.
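
The auto-rotate transform itself reduces to a quarter-turn rotation selected by the orientation signal. A minimal sketch, assuming the sensor reports roll in 90-degree steps; the actual signal encoding and sign convention are device-specific.

```python
import numpy as np

def auto_rotate(image, roll_degrees):
    """Undo camera roll in 90-degree steps (assumed convention:
    positive roll = clockwise camera rotation)."""
    k = (roll_degrees // 90) % 4
    return np.rot90(image, k=k)  # k counterclockwise quarter turns
```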

308 citations


Journal ArticleDOI
TL;DR: A computational model for binocular stereopsis is developed, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images.
Abstract: We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or “worlds,” where each world considers an additional complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry, i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.

289 citations


Journal ArticleDOI
TL;DR: This method iteratively refines up to two different pose estimates and provides an associated quality measure for each pose; two candidate estimates arise when the camera distance is large compared with the object depth, or when the accuracy of feature point extraction is low because of image noise.

249 citations


Patent
12 Apr 1996
TL;DR: In this paper, an electronic still camera is provided with an electronic image sensor for generating an image signal corresponding to a still image of a subject and an orientation determination section for sensing the orientation of the camera relative to the subject.
Abstract: An electronic still camera is provided with an electronic image sensor for generating an image signal corresponding to a still image of a subject and an orientation determination section for sensing the orientation of the camera relative to the subject. The orientation determination section provides an orientation signal recognizing either the vertical or the horizontal orientation of the camera relative to the subject. An image processor is responsive to the orientation signal for processing the image signal and correcting the orientation thereof so that the still image is output from the image processor in a predetermined orientation. In this way, the electronic still camera can be positioned in a variety of orientations relative to a subject, including both clockwise and counterclockwise vertical "portrait" orientations and a horizontal "landscape" orientation, without affecting the orientation of the images output by the camera.

238 citations


Journal ArticleDOI
TL;DR: A segmentation algorithm using deformable template models to segment a vehicle of interest both from the stationary complex background and from other moving vehicles in an image sequence is proposed and solved by the Metropolis algorithm.
Abstract: This paper proposes a segmentation algorithm using deformable template models to segment a vehicle of interest both from the stationary complex background and from other moving vehicles in an image sequence. We define a polygonal template to characterize a general model of a vehicle and derive a prior probability density function to constrain the template to be deformed within a set of allowed shapes. We propose a likelihood probability density function which combines motion information and edge directionality to ensure that the deformable template is contained within the moving areas in the image and its boundary coincides with strong edges with the same orientation in the image. The segmentation problem is reduced to a minimization problem and solved by the Metropolis algorithm. The system was successfully tested on 405 image sequences containing multiple moving vehicles on a highway.
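
The Metropolis minimization follows the standard accept/reject scheme. A schematic sketch, where `energy` (the negative log posterior combining the prior and likelihood densities) and `propose` (a random perturbation of the template parameters) are assumed to be supplied by the caller:

```python
import numpy as np

def metropolis(energy, x0, propose, n_iter=10000, temperature=1.0, rng=None):
    """Generic Metropolis sampler used as a minimizer (schematic sketch)."""
    rng = rng or np.random.default_rng()
    x, e = x0, energy(x0)
    for _ in range(n_iter):
        x_new = propose(x, rng)
        e_new = energy(x_new)
        # always accept downhill moves; accept uphill moves with
        # probability exp(-dE / T)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
    return x, e
```

For the vehicle template, the state would hold the polygon's deformation parameters and the proposal would jitter them within the set of allowed shapes.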

237 citations


Journal ArticleDOI
TL;DR: It is believed that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high-speed target detection, boundary identification, and region detection are important.
Abstract: A new method is presented for performing rapid and accurate numerical estimation. The method is derived from an area of human cognitive psychology called preattentive processing. Preattentive processing refers to an initial organization of the visual field based on cognitive operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, intensity, orientation, size, and motion. We believe that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high-speed target detection, boundary identification, and region detection are important. In our present study, we investigated two known preattentive features (hue and orientation) in the context of a new task (numerical estimation) in order to see whether preattentive estimation was possible. Our experiments tested displays that were designed to visualize data from salmon migration simulations. The results showed that rapid and accurate estimation was indeed possible using either hue or orientation. Furthermore, random variation in one of these features resulted in no interference when subjects estimated the percentage of the other. To test the generality of our results, we varied two important display parameters (display duration and feature difference) and found boundary conditions for each. Implications of our results for application to real-world data and tasks are discussed.

232 citations


Proceedings ArticleDOI
14 Oct 1996
TL;DR: Two algorithms are described, based on image moments and orientation histograms, which exploit the capabilities of the chip to provide interactive response to the player's hand or body positions at a 10 msec frame time and at low cost.
Abstract: The appeal of computer games may be enhanced by vision-based user inputs. The high speed and low cost requirements for near-term, mass-market game applications make system design challenging. The response time of the vision interface should be less than a video frame time and the interface should cost less than $50 U.S. We meet these constraints with algorithms tailored to particular hardware. We have developed a special detector, called the artificial retina chip, which allows for fast, on-chip image processing. We describe two algorithms, based on image moments and orientation histograms, which exploit the capabilities of the chip to provide interactive response to the player's hand or body positions at a 10 msec frame time and at low cost. We show several possible game interactions.
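
Of the two algorithms, the orientation histogram is easy to sketch in software. The helper below is a plain NumPy illustration, not the on-chip implementation: it bins gradient orientations weighted by gradient magnitude, giving a translation-invariant signature of hand or body pose.

```python
import numpy as np

def orientation_histogram(gray, n_bins=36):
    """Magnitude-weighted histogram of gradient orientations (illustrative)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)  # normalize for scale invariance
```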

217 citations


Journal ArticleDOI
TL;DR: A more general set of steerable filters is presented that alleviates the problem of orientation responses that are periodic with period π independent of image structure.
Abstract: Steerable filters have been used to analyze local orientation patterns in imagery. Such filters are typically based on directional derivatives, whose symmetry produces orientation responses that are periodic with period π, independent of image structure. We present a more general set of steerable filters that alleviate this problem.
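
For reference, classic first-derivative steering combines two basis responses with cos/sin weights; the symmetry of such derivative filters is exactly what makes their orientation response π-periodic. A minimal sketch of the classic construction (the paper's more general filter set is not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_response(img, theta, sigma=2.0):
    """First-order Gaussian-derivative steering: the response at any angle
    is a linear combination of two basis filter responses."""
    f = img.astype(float)
    gx = gaussian_filter(f, sigma, order=(0, 1))  # derivative along x
    gy = gaussian_filter(f, sigma, order=(1, 0))  # derivative along y
    return np.cos(theta) * gx + np.sin(theta) * gy
```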

Journal ArticleDOI
TL;DR: The paper presents the basic iconic notation for spatial orientation relations that exploits the structure of the spatial domain and explores a variety of ways in which these relations can be manipulated and combined for spatial reasoning.
Abstract: We give an overview of an approach to qualitative spatial reasoning based on directional orientation information as available through perception processes or natural language descriptions. Qualitative orientations in 2-dimensional space are given by the relation between a point and a vector. The paper presents our basic iconic notation for spatial orientation relations that exploits the structure of the spatial domain and explores a variety of ways in which these relations can be manipulated and combined for spatial reasoning. Using this notation, we explore a method for exploiting interactions between space and movement in this space for enhancing the inferential power. Finally, the orientation-based approach is augmented by distance information, which can be mapped into position constraints and vice versa.
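
The core primitive, the qualitative relation between a point and a vector, can be sketched with sign tests on a cross product and a dot product. The helper below is a simplified illustration of that primitive, not the paper's full iconic calculus:

```python
def point_vs_vector(a, b, p):
    """Qualitative position of point p relative to the vector from a to b
    (simplified sketch: left/right of the line, front/back of its head)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    bpx, bpy = p[0] - b[0], p[1] - b[1]
    cross = abx * bpy - aby * bpx          # sign gives side of the line
    dot = abx * bpx + aby * bpy            # sign gives ahead/behind b
    side = "left" if cross > 0 else "right" if cross < 0 else "on-line"
    depth = "front" if dot > 0 else "back" if dot < 0 else "neutral"
    return side, depth
```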

Proceedings ArticleDOI
14 Oct 1996
TL;DR: In this paper, an approach for estimating 3D head orientation in a monocular image sequence is proposed, which employs recently developed image-based parameterized tracking for face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed.
Abstract: An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking of five points (four at the eye corners and the fifth at the tip of the nose). The authors describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
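
The projective invariant at the heart of the approach is the cross-ratio of four collinear points. A minimal helper, assuming the points are given as scalar positions along their common line; comparing the image cross-ratio of the eye corners with the value expected from anthropometric statistics is what constrains the head orientation.

```python
def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear points -- invariant under projective
    transformations, hence comparable across views."""
    return ((p1 - p3) * (p2 - p4)) / ((p1 - p4) * (p2 - p3))
```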

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A real-time wide-baseline stereo person tracking system which can calibrate itself by watching a moving person and can subsequently track people's head and hands with RMS errors of 1-2 cm in translation and 2 degrees in rotation.
Abstract: We describe a method for estimation of 3D geometry from 2D blob features. Blob features are clusters of similar pixels in the image plane and can arise from similarity of color, texture, motion and other signal-based metrics. The motivation for considering such features comes from recent successes in real-time extraction and tracking of such blob features in complex cluttered scenes in which traditional feature finders fail, e.g. scenes containing moving people. We use nonlinear modeling and a combination of iterative and recursive estimation methods to recover 3D geometry from blob correspondences across multiple images. The 3D geometry includes the 3D shapes, translations, and orientations of blobs and the relative orientation of the cameras. Using this technique, we have developed a real-time wide-baseline stereo person tracking system which can calibrate itself by watching a moving person and can subsequently track people's head and hands with RMS errors of 1-2 cm in translation and 2 degrees in rotation. The blob formulation is efficient and reliable, running at 20-30 Hz on a pair of SGI Indy R4400 workstations with no special hardware.

Proceedings ArticleDOI
TL;DR: This paper considers the detection of areas of interest and edges in images compressed using the discrete cosine transform (DCT) and shows how a measure based on certain DCT coefficients of a block can provide an indication of underlying activity.
Abstract: This paper examines the issue of direct extraction of low level features from compressed images. Specifically, we consider the detection of areas of interest and edges in images compressed using the discrete cosine transform (DCT). For interest areas, we show how a measure based on certain DCT coefficients of a block can provide an indication of underlying activity. For edges, we show using an ideal edge model how the relative values of different DCT coefficients of a block can be used to estimate the strength and orientation of an edge. Our experimental results indicate that coarse edge information from compressed images can be extracted up to 20 times faster than conventional edge detectors.
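
A hedged sketch of both measures on a single 8x8 block: total AC energy as the activity measure, and the relative weight of first-row versus first-column coefficients as a coarse orientation cue. The coefficient groupings are illustrative; the paper derives its measures from an ideal edge model.

```python
import numpy as np
from scipy.fft import dct

def block_features(block):
    """Activity and coarse edge orientation from an 8x8 pixel block's DCT
    (illustrative groupings, not the paper's exact measures)."""
    c = dct(dct(block.astype(float), axis=0, norm='ortho'),
            axis=1, norm='ortho')
    activity = np.abs(c).sum() - abs(c[0, 0])  # AC energy, DC excluded
    v_edge = np.abs(c[0, 1:]).sum()  # horizontal variation -> vertical edges
    h_edge = np.abs(c[1:, 0]).sum()  # vertical variation -> horizontal edges
    angle = np.arctan2(h_edge, v_edge)  # crude orientation estimate
    return activity, angle
```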

Journal ArticleDOI
18 Jan 1996-Nature
TL;DR: A reverse-suturing experiment in which kittens were raised so that the two eyes were never able to see at the same time tests whether visual experience is responsible for the match; the results indicate that correlated visual input is not required for the alignment of orientation preference maps.
Abstract: In the mammalian visual cortex, many neurons are driven binocularly and response properties such as orientation preference or spatial frequency tuning are virtually identical for the two eyes [1]. A precise match of orientation is essential in order to detect disparity and is therefore a prerequisite for stereoscopic vision. It is not clear whether this match is accomplished by activity-dependent mechanisms together with the common visual experience normally received by the eyes [2,3], or whether the visual system relies on other, perhaps even innate, cues to achieve this task [4-7]. Here we test whether visual experience is responsible for the match in a reverse-suturing experiment in which kittens were raised so that both eyes were never able to see at the same time. A comparison of the layout of the two maps formed under these conditions showed them to be virtually identical. Considering that the two eyes never had common visual experience, this indicates that correlated visual input is not required for the alignment of orientation preference maps.

Journal ArticleDOI
TL;DR: This paper presents a structure adaptive anisotropic filtering technique with its application to processing magnetic resonance images that differs from other techniques in that, instead of using local gradients as a means of controlling the anisotropism of filters, it uses both a local intensity orientation and an anisotropic measure of level contours to control the shape and extent of the filter kernel.

Journal ArticleDOI
TL;DR: This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of either a single monocular image or a binocular image pair using a multi-scale descriptor of image structure called the windowed second moment matrix.
Abstract: This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of either a single monocular image or a binocular image pair. It is shown that starting from Gaussian derivatives of order up to two at a range of scales in scale-space, local estimates of (i) surface orientation from monocular texture foreshortening, (ii) surface orientation from monocular texture gradients, and (iii) surface orientation from the binocular disparity gradient can be computed without iteration or search, and by using essentially the same basic mechanism. The methodology is based on a multi-scale descriptor of image structure called the windowed second moment matrix, which is computed with adaptive selection of both scale levels and spatial positions. Notably, this descriptor comprises two scale parameters: a local scale parameter describing the amount of smoothing used in derivative computations, and an integration scale parameter determining over how large a region in space the statistics of regional descriptors are accumulated. Experimental results for both synthetic and natural images are presented, and the relation with models of biological vision is briefly discussed.
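
The descriptor itself is compact to state in code. A sketch with fixed scales; the adaptive selection of scale levels and spatial positions described above is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def windowed_second_moment(img, local_sigma=1.0, integration_sigma=4.0):
    """Entries of the windowed second moment matrix at every pixel, with the
    paper's two scale parameters: local (derivatives) and integration
    (windowed averaging). Fixed scales here for simplicity."""
    f = img.astype(float)
    lx = gaussian_filter(f, local_sigma, order=(0, 1))  # d/dx at local scale
    ly = gaussian_filter(f, local_sigma, order=(1, 0))  # d/dy at local scale
    mu_xx = gaussian_filter(lx * lx, integration_sigma)
    mu_xy = gaussian_filter(lx * ly, integration_sigma)
    mu_yy = gaussian_filter(ly * ly, integration_sigma)
    return mu_xx, mu_xy, mu_yy
```

Surface-orientation cues are then read off the eigenstructure of this matrix field.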

Journal ArticleDOI
TL;DR: A new method for automatic quantification of the patient setup in three dimensions (3D) using one set of computed tomography (CT) data and two transmission images was developed and clinically tested, and was found to be robust to imperfections in the delineation of bony structures in the transmission images.
Abstract: In external beam radiotherapy, conventional analysis of portal images in two dimensions (2D) is limited to verification of in-plane rotations and translations of the patient. We developed and clinically tested a new method for automatic quantification of the patient setup in three dimensions (3D) using one set of computed tomography (CT) data and two transmission images. These transmission images can be either a pair of simulator images or a pair of portal images. Our procedure adjusts the position and orientation of the CT data in order to maximize the distance through bone in the CT data along lines between the focus of the irradiation unit and bony structures in the transmission images. For this purpose, bony features are either automatically detected or manually delineated in the transmission images. The performance of the method was quantified by aligning randomly displaced CT data with transmission images simulated from digitally reconstructed radiographs. In addition, the clinical performance was assessed in a limited number of images of prostate cancer and parotid gland tumor treatments. The complete procedure takes less than 2 min on a 90-MHz Pentium PC. The alignment time is 50 s for portal images and 80 s for simulator images. The accuracy is about 1 mm and 1 degree. Application to clinical cases demonstrated that the procedure provides essential information for the correction of setup errors in cases of large rotations (typically larger than 2 degrees) in the setup. The 3D procedure was found to be robust to imperfections in the delineation of bony structures in the transmission images. Visual verification of the results remains, however, necessary. It can be concluded that our strategy for automatic analysis of patient setup in 3D is accurate and robust. The procedure is relatively fast and reduces the human workload compared with existing techniques for the quantification of patient setup in 3D. In addition, the procedure improves the accuracy of treatment verification in 2D in some cases where rotational deviations in the setup occur.

Patent
Thomas W. Karpen1
31 Jul 1996
TL;DR: In this article, an image sensor including a generally rectangular 2D array of photosensitive elements is secured to a mounting structure so that the plane of the array is generally parallel to and approximately in the focal plane of an associated 2D imaging optics assembly.
Abstract: An imaging assembly having a 2D image sensor so oriented in relation to its supporting structure that, when a reader including the imaging assembly is held in its normal operating position during the reading of a 1D bar code symbol, the image of that 1D symbol is aligned with a diagonal of the image sensor, thereby increasing the resolution with which the 1D symbol is read. An image sensor including a generally rectangular 2D array of photosensitive elements is secured to a mounting structure so that the plane of the array is generally parallel to and approximately in the focal plane of an associated 2D imaging optics assembly. The angular orientation of the image sensor with respect to its mounting structure is selected so that, when a reader including the imaging assembly is held in its normal reading position during the reading of a 1D symbol, the image of the 1D symbol is formed along a diagonal of the array.

Proceedings ArticleDOI
14 Oct 1996
TL;DR: An approach to the identification of skin-colored regions of the image that is robust in terms of variations in skin pigmentation in a single subject, differences in skin Pigmentation across a population of potential users, and subject clothing and image background is described.
Abstract: There are many applications where it is desirable to segment a video image into regions defined by color. Among these are the recognition of gesture from the image (as opposed to instrumented gloves), facial expression and orientation, and video teleconferencing. In these examples, the important elements of the images are human hands and face, which share common skin coloration of the subject. This paper describes an approach to the identification of skin-colored regions of the image that is robust in terms of variations in skin pigmentation in a single subject, differences in skin pigmentation across a population of potential users, and subject clothing and image background. The paper also discusses the potential for being robust over a wide range of illuminating conditions.
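
One common way to obtain brightness-invariant skin detection is a box test in normalized rg chromaticity. The sketch below assumes that scheme; the threshold ranges are placeholders that would be fit to training pixels in practice, and need not match the paper's color model.

```python
import numpy as np

def skin_mask(rgb, r_range=(0.35, 0.55), g_range=(0.25, 0.37)):
    """Threshold normalized rg chromaticity, which factors out brightness
    (placeholder ranges; fit to labeled skin pixels in practice)."""
    f = rgb.astype(float)
    s = f.sum(axis=-1) + 1e-9                 # per-pixel R+G+B
    r, g = f[..., 0] / s, f[..., 1] / s       # chromaticity coordinates
    return ((r_range[0] < r) & (r < r_range[1]) &
            (g_range[0] < g) & (g < g_range[1]))
```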

Journal ArticleDOI
TL;DR: Three new experiments found that DF is able to grasp everyday tools and utensils proficiently but has difficulty in visually selecting the correct part of the object to grasp for subsequent use of that object; this suggests limitations on the visual processing capacities of the human dorsal stream.

Journal ArticleDOI
M. Menke1, M.S. Atkins, K. Buckley1
01 Feb 1996
TL;DR: Preliminary results obtained with test data indicate that the methods have the potential to improve the resolution of PET images in cases where significant head motion has occurred, provided that the head position and orientation can be accurately measured.
Abstract: The authors describe two methods to correct for motion artifacts in head images obtained by positron emission tomography (PET). The methods are based on six-dimensional motion data of the head that have to be acquired simultaneously during scanning. The data are supposed to represent the rotational and translational deviations of the head as a function of time, with respect to the initial head position. The first compensation method is a rebinning procedure by which the lines of response are geometrically transformed according to the current values of the motion data, assuming a cylindrical scanner geometry. An approximation of the rebinning transformations by use of large look-up tables, having the potential of on-line event processing, is presented. The second method comprises post-processing of the reconstructed images by unconstrained or constrained deconvolution of the image or image segments with kernels that are generated from the motion data. The authors use motion data that were acquired with a volunteer in supine position, immobilized by a thermoplastic head holder, to demonstrate the effects of the compensation methods. Preliminary results obtained with test data indicate that the methods have the potential to improve the resolution of PET images in cases where significant head motion has occurred, provided that the head position and orientation can be accurately measured.

Journal ArticleDOI
TL;DR: In this article, a three-dimensional analysis of aggregate particles is performed by attaching aggregates to sample trays with two perpendicular faces; the trays are rotated 90 degrees so that the aggregates are perpendicular to their original orientation.
Abstract: Digital image analysis provides the capability for rapid measurement of particle characteristics. When an image is captured and digitized, numerous measurements can be made in near real time for each particle. Usually, image analysis techniques treat particles as two-dimensional objects since only the two-dimensional projection of the particles is captured. In this study, three-dimensional analysis of aggregate particles, performed by attaching aggregates to sample trays with two perpendicular faces, is described. After the initial projected image of the aggregates is captured and measured, the sample trays are rotated 90 degrees so that the aggregates are now perpendicular to their original orientation and the dimensions of the aggregates in the new projected image are captured and measured. The long, intermediate, and short particle dimensions (dL, dI, and dS, respectively) provide direct measures of the flatness and elongation of the particles. Some other shape indexes can also be derived from t...
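
For illustration, Zingg-style ratios are one plausible instance of such shape measures; whether these match the paper's exact definitions is an assumption.

```python
def shape_indices(d_long, d_inter, d_short):
    """Flatness and elongation ratios from the three particle dimensions
    (Zingg-style; assumed, not necessarily the paper's definitions)."""
    flatness = d_short / d_inter
    elongation = d_inter / d_long
    return flatness, elongation
```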


Journal ArticleDOI
TL;DR: A new spatially adaptive approach to the restoration of noisy blurred images, which is particularly effective at producing sharp deconvolution while suppressing the noise in the flat regions of an image.
Abstract: In this paper, we present a new spatially adaptive approach to the restoration of noisy blurred images, which is particularly effective at producing sharp deconvolution while suppressing the noise in the flat regions of an image. This is accomplished through a multiscale Kalman smoothing filter applied to a prefiltered observed image in the discrete, separable, 2-D wavelet domain. The prefiltering step involves constrained least-squares filtering based on optimal choices for the regularization parameter. This leads to a reduction in the support of the required state vectors of the multiscale restoration filter in the wavelet domain and improvement in the computational efficiency of the multiscale filter. The proposed method has the benefit that the majority of the regularization, or noise suppression, of the restoration is accomplished by the efficient multiscale filtering of wavelet detail coefficients ordered on quadtrees. Not only does this lead to potential parallel implementation schemes, but it permits adaptivity to the local edge information in the image. In particular, this method changes filter parameters depending on scale, local signal-to-noise ratio (SNR), and orientation. Because the wavelet detail coefficients are a manifestation of the multiscale edge information in an image, this algorithm may be viewed as an "edge-adaptive" multiscale restoration approach.

Patent
18 Dec 1996
TL;DR: In this article, a method for detecting an amount of rotation or magnification in a modified image was proposed, which is based on autocorrelation peaks corresponding to the location of the features of the marker image in the modified image.
Abstract: A method for detecting an amount of rotation or magnification in a modified image, includes the steps of: embedding a marker image having a pair of identical features separated by a distance d and oriented at an angle α in an original image to produce a marked image, the marked image having been rotated and/or magnified to produce the modified image; performing an autocorrelation on the modified image to produce a pair of autocorrelation peaks corresponding to the location of the features of the marker image in the modified image; and comparing the separation d' and orientation α' of the autocorrelation peaks with the separation d and orientation α of the features in the marker image to determine the amount of rotation and magnification in the modified image.
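
Given the two detected autocorrelation peak locations, the final comparison step is direct. A minimal sketch, assuming the peaks have already been located and angles are in radians:

```python
import numpy as np

def rotation_and_magnification(peak_a, peak_b, d, alpha):
    """Compare the peak separation/orientation (d', alpha') with the
    embedded marker's known separation d and orientation alpha."""
    dx, dy = peak_b[0] - peak_a[0], peak_b[1] - peak_a[1]
    d_prime = np.hypot(dx, dy)
    alpha_prime = np.arctan2(dy, dx)
    return alpha_prime - alpha, d_prime / d   # (rotation, magnification)
```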

PatentDOI
TL;DR: An imaging apparatus and method are presented for producing composite two-dimensional and three-dimensional images from individual ultrasonic frames, in which a cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center.
Abstract: An imaging apparatus and method for use in presenting composite two-dimensional and three-dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.

Proceedings ArticleDOI
07 May 1996
TL;DR: A spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex with a three-dimensional filter bank.
Abstract: This paper describes a spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex. The model simulates the behavior of the HVS with a three-dimensional filter bank which decomposes the data into perceptual channels, each one being tuned to a specific spatial frequency, orientation and temporal frequency. It further accounts for contrast sensitivity, inter-stimuli masking and spatio-temporal interaction. The free parameters of the model have been estimated by psychophysics. The model can then be used as the basis for many applications. As an example, a quality metric for coded video sequences is presented.

Patent
14 Aug 1996
TL;DR: In this paper, a user-selected set of image properties may be specified so that each image in the group of images is presented for viewing with a particular orientation, and the method is particularly adapted for viewing large numbers of financial document images.
Abstract: A method for providing user access to a selected group of document images. A user-selected set of image properties may be specified so that each image in the group of images is presented for viewing with a particular orientation. The method is particularly adapted for viewing large numbers of financial document images to include check images.