
Showing papers on "Feature (computer vision) published in 1994"


Proceedings ArticleDOI
21 Jun 1994
TL;DR: A feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world are proposed.
Abstract: No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.
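The selection rule in this paper (Shi and Tomasi's "Good Features to Track") is widely understood as keeping windows whose 2x2 gradient matrix has a large smaller eigenvalue. A minimal pure-Python sketch of that test on made-up patches, not the authors' code:

```python
import math

def min_eigenvalue(gxx, gxy, gyy):
    """Smaller eigenvalue of the symmetric 2x2 matrix [[gxx, gxy], [gxy, gyy]]."""
    tr = gxx + gyy
    disc = math.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    return 0.5 * (tr - disc)

def gradient_matrix(patch):
    """Sum of outer products of central-difference gradients over a patch."""
    gxx = gxy = gyy = 0.0
    for y in range(1, len(patch) - 1):
        for x in range(1, len(patch[0]) - 1):
            gx = (patch[y][x + 1] - patch[y][x - 1]) / 2.0
            gy = (patch[y + 1][x] - patch[y - 1][x]) / 2.0
            gxx += gx * gx
            gxy += gx * gy
            gyy += gy * gy
    return gxx, gxy, gyy

# A corner patch (two edges meeting) scores higher than a flat patch.
corner = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
flat = [[5, 5, 5, 5]] * 4
corner_score = min_eigenvalue(*gradient_matrix(corner))
flat_score = min_eigenvalue(*gradient_matrix(flat))
```

Windows whose score exceeds a threshold are accepted as trackable features; flat or edge-only windows score low.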

8,432 citations


Journal ArticleDOI
TL;DR: A general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing.
Abstract: Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example will be shown of results obtained from this method using data obtained from ...

430 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: The authors show that with this relatively simple feature set, effective texture discrimination can be achieved, and hope that the performance for texture discrimination of these simple energy-based features will allow images in a database to be efficiently and effectively indexed by contents of their textured regions.
Abstract: Proposes a method for classification and discrimination of textures based on the energies of image subbands. The authors show that with this relatively simple feature set, effective texture discrimination can be achieved. In the paper, subband-energy feature sets extracted from the following typical image decompositions are compared: wavelet subband, uniform subband, discrete cosine transform (DCT), and spatial partitioning. The authors report that over 90% correct classification was attained using the feature set in classifying the full Brodatz [1965] collection of 112 textures. Furthermore, the subband energy-based feature set can be readily applied to a system for indexing images by texture content in image databases, since the features can be extracted directly from spatial-frequency decomposed image data. The authors also show that to construct a suitable space for discrimination, Fisher discrimination analysis (Dillon and Goldstein, 1984) can be used to compact the original features into a set of uncorrelated linear discriminant functions. This procedure makes it easier to perform texture-based searches in a database by reducing the dimensionality of the discriminant space. The authors also examine the effects of varying training class size, the number of training classes, the dimension of the discriminant space and number of energy measures used for classification. The authors hope that the performance for texture discrimination of these simple energy-based features will allow images in a database to be efficiently and effectively indexed by contents of their textured regions.
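Subband-energy features can be sketched with a one-level 2D Haar decomposition. The paper compares several decompositions, so Haar is only one illustrative choice here, and the tiny "stripes" image is invented:

```python
def haar_subbands(img):
    """One level of a 2D Haar decomposition over non-overlapping 2x2 blocks.
    Returns (low, dy, dx, dd): block average, detail along y (rows),
    detail along x (columns), and diagonal detail."""
    h, w = len(img), len(img[0])
    low, dy, dx, dd = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            low[i // 2][j // 2] = (a + b + c + d) / 4.0
            dy[i // 2][j // 2] = (a + b - c - d) / 4.0   # varies down the rows
            dx[i // 2][j // 2] = (a - b + c - d) / 4.0   # varies along a row
            dd[i // 2][j // 2] = (a - b - c + d) / 4.0
    return low, dy, dx, dd

def energy(band):
    """Mean squared coefficient value: the per-subband texture feature."""
    vals = [v for row in band for v in row]
    return sum(v * v for v in vals) / len(vals)

# Vertical stripes concentrate energy in the band that is high-pass along x.
stripes = [[9, 0, 9, 0]] * 4
features = [energy(b) for b in haar_subbands(stripes)]
```

The resulting energy vector is the feature set fed to a classifier; different textures produce different energy distributions across subbands.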

415 citations


Journal ArticleDOI
TL;DR: It is demonstrated that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement and by improving the visualization of breast pathology, one can improve chances of early detection while requiring less time to evaluate mammograms for most patients.
Abstract: Introduces a novel approach for accomplishing mammographic feature analysis by overcomplete multiresolution representations. The authors show that efficient representations may be identified within a continuum of scale-space and used to enhance features of importance to mammography. Methods of contrast enhancement are described based on three overcomplete multiscale representations: 1) the dyadic wavelet transform (separable), 2) the φ-transform (nonseparable, nonorthogonal), and 3) the hexagonal wavelet transform (nonseparable). Multiscale edges identified within distinct levels of transform space provide local support for image enhancement. Mammograms are reconstructed from wavelet coefficients modified at one or more levels by local and global nonlinear operators. In each case, edges and gain parameters are identified adaptively by a measure of energy within each level of scale-space. The authors show quantitatively that transform coefficients, modified by adaptive nonlinear operators, can make more obvious unseen or barely seen features of mammography without requiring additional radiation. The authors' results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. They demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. By improving the visualization of breast pathology, one can improve chances of early detection while requiring less time to evaluate mammograms for most patients.

382 citations


Journal ArticleDOI
01 Apr 1994
TL;DR: To illustrate the effectiveness of the generalized Minkowski metrics, an approach to hierarchical conceptual clustering and a generalization of principal component analysis for mixed feature data are presented.
Abstract: This paper presents simple and convenient generalized Minkowski metrics on the multidimensional feature space in which coordinate axes are associated with not only quantitative features but also qualitative and structural features. The metrics are defined on a new mathematical model (U^(d), [+], [X]) which is called simply the Cartesian space model, where U^(d) is the feature space which permits mixed feature types, [+] is the Cartesian join operator which yields a generalized description for given descriptions on U^(d), and [X] is the Cartesian meet operator which extracts a common description from given descriptions on U^(d). To illustrate the effectiveness of our generalized Minkowski metrics, we present an approach to the hierarchical conceptual clustering, and a generalization of the principal component analysis for mixed feature data.
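A minimal sketch of a Minkowski-style metric over mixed feature types, covering only the quantitative and qualitative cases (the paper's structural features and the Cartesian join/meet operators are not modelled); the feature values and kind labels are invented:

```python
def mixed_minkowski(u, v, kinds, p=2):
    """Minkowski metric over a mixed feature space: quantitative axes use
    the absolute difference, qualitative (nominal) axes a 0/1 mismatch."""
    total = 0.0
    for a, b, kind in zip(u, v, kinds):
        if kind == "quantitative":
            d = abs(a - b)
        else:  # qualitative / nominal feature
            d = 0.0 if a == b else 1.0
        total += d ** p
    return total ** (1.0 / p)

# Two quantitative axes and one qualitative axis (color label).
x = (1.0, 4.0, "red")
y = (4.0, 0.0, "red")
kinds = ("quantitative", "quantitative", "qualitative")
d = mixed_minkowski(x, y, kinds)
```

With p=2 this reduces to a Euclidean distance when all axes are quantitative, which is why it supports clustering and PCA-style analyses on mixed data.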

331 citations


Proceedings ArticleDOI
11 Nov 1994
TL;DR: It is demonstrated that repetitive motion is such a strong cue that the moving actor can be segmented, normalized spatially and temporally, and recognized by matching against a spatiotemporal template of motion features.
Abstract: The recognition of human movements such as walking, running or climbing has been approached previously by tracking a number of feature points and either classifying the trajectories directly or matching them with a high-level model of the movement. A major difficulty with these methods is acquiring and tracking the requisite feature points, which are generally specific joints such as knees or ankles. This requires previous recognition and/or part segmentation of the actor. We show that the recognition of walking or any repetitive motion activity can be accomplished on the basis of bottom-up processing, which does not require the prior identification of specific parts, or classification of the actor. In particular, we demonstrate that repetitive motion is such a strong cue that the moving actor can be segmented, normalized spatially and temporally, and recognized by matching against a spatiotemporal template of motion features. We have implemented a real-time system that can recognize and classify repetitive motion activities in normal gray-scale image sequences.

324 citations


Journal ArticleDOI
TL;DR: To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions, resulting in an algorithm that is robust with respect to various image characteristics.
Abstract: Proposes a simple, yet general and powerful, region-growing framework for image segmentation. The region-growing process is guided by regional feature analysis; no parameter tuning or a priori knowledge about the image is required. To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions. This results in an algorithm that is robust with respect to various image characteristics. The merge criterion also minimizes the number of merge rejections and results in a fast region-growing process that is amenable to parallelization.
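The merge decision can be illustrated by comparing region feature distributions with a Welch-style t statistic instead of thresholding the raw difference of means. This is an assumed stand-in for the paper's actual homogeneity test, with invented pixel values:

```python
import math

def mean_var(xs):
    """Sample mean and (unbiased) variance of a region's feature values."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / max(len(xs) - 1, 1)
    return m, v

def should_merge(region_a, region_b, t_crit=2.0):
    """Merge two regions when their feature distributions are statistically
    indistinguishable (Welch-style t statistic below a critical value)."""
    ma, va = mean_var(region_a)
    mb, vb = mean_var(region_b)
    se = math.sqrt(va / len(region_a) + vb / len(region_b)) or 1e-12
    t = abs(ma - mb) / se
    return t < t_crit

# Noisy regions with the same underlying intensity merge; distinct ones do not.
similar = should_merge([10, 11, 9, 12, 10], [11, 10, 12, 9, 11])
different = should_merge([10, 11, 9, 12, 10], [30, 31, 29, 32, 30])
```

Because the decision scales with the observed variances, noisy but homogeneous regions still merge, which is the robustness property the abstract describes.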

321 citations


Book ChapterDOI
07 May 1994
TL;DR: The paper presents a framework for extracting low-level features, which leads to new techniques for deriving image parameters, to either the elimination or the elucidation of "buttons", like thresholds, and to interpretable quality measures for the results, which may be used in subsequent steps.
Abstract: The paper presents a framework for extracting low-level features. Its main goal is to explicitly exploit the information content of the image as far as possible. This leads to new techniques for deriving image parameters, to either the elimination or the elucidation of "buttons", like thresholds, and to interpretable quality measures for the results, which may be used in subsequent steps. Feature extraction is based on local statistics of the image function. Methods are available for blind estimation of a signal dependent noise variance, for feature preserving restoration, for feature detection and classification, and for the location of general edges and points. Their favorable scale space properties are discussed.

312 citations


Book ChapterDOI
02 May 1994
TL;DR: A paraperspective factorization method that can be applied to a much wider range of motion scenarios, such as image sequences containing significant translational motion toward the camera or across the image, is developed.
Abstract: The factorization method, first developed by Tomasi and Kanade, recovers both the shape of an object and its motion from a sequence of images, using many images and tracking many feature points to obtain highly redundant feature position information. The method robustly processes the feature trajectory information using singular value decomposition (SVD), taking advantage of the linear algebraic properties of orthographic projection. However, an orthographic formulation limits the range of motions the method can accommodate. Paraperspective projection, first introduced by Ohta, is a projection model that closely approximates perspective projection by modelling several effects not modelled under orthographic projection, while retaining linear algebraic properties. We have developed a paraperspective factorization method that can be applied to a much wider range of motion scenarios, such as image sequences containing significant translational motion toward the camera or across the image. We present the results of several experiments which illustrate the method's performance in a wide range of situations, including an aerial image sequence of terrain taken from a low-altitude airplane.

289 citations


Book ChapterDOI
TL;DR: In this paper, the authors compared the performance of the (l, r) search algorithm with the genetic approach to feature subset search in high-dimensional spaces and found that the properties inferred for these techniques from medium scale experiments involving up to a few tens of dimensions extend to dimensionalities of one order of magnitude higher.
Abstract: The combinatorial search problem arising in feature selection in high dimensional spaces is considered. Recently developed techniques based on the classical sequential methods and the (l, r) search called Floating search algorithms are compared against the Genetic approach to feature subset search. Both approaches have been designed with the view to give a good compromise between efficiency and effectiveness for large problems. The purpose of this paper is to investigate the applicability of these techniques to high dimensional problems of feature selection. The aim is to establish whether the properties inferred for these techniques from medium scale experiments involving up to a few tens of dimensions extend to dimensionalities of one order of magnitude higher. Further, relative merits of these techniques vis-a-vis such high dimensional problems are explored and the possibility of exploiting the best aspects of these methods to create a composite feature selection procedure with superior properties is considered.

252 citations


Patent
01 Mar 1994
TL;DR: In this paper, a handwriting signal processing front-end method and apparatus for a handwriting training and recognition system which includes non-uniform segmentation and feature extraction in combination with multiple vector quantization is presented.
Abstract: A handwriting signal processing front-end method and apparatus for a handwriting training and recognition system which includes non-uniform segmentation and feature extraction in combination with multiple vector quantization. In a training phase, digitized handwriting samples are partitioned into segments of unequal length. Features are extracted from the segments and are grouped to form feature vectors for each segment. Groups of adjacent feature vectors are then combined to form input frames. Feature-specific vectors are formed by grouping features of the same type from each of the feature vectors within a frame. Multiple vector quantization is then performed on each feature-specific vector to statistically model the distributions of the vectors for each feature by identifying clusters of the vectors and determining the mean locations of the vectors in the clusters. Each mean location is represented by a codebook symbol and this information is stored in a codebook for each feature. These codebooks are then used to train a recognition system. In the testing phase, where the recognition system is to identify handwriting, digitized test handwriting is first processed as in the training phase to generate feature-specific vectors from input frames. Multiple vector quantization is then performed on each feature-specific vector to represent the feature-specific vector using the codebook symbols that were generated for that feature during training. The resulting series of codebook symbols effects a reduced representation of the sampled handwriting data and is used for subsequent handwriting recognition.
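The clustering step behind each per-feature codebook can be illustrated with a tiny 1-D k-means. This is a generic stand-in for the patent's multiple vector quantization, and the sample feature values are invented:

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: cluster feature values and return sorted cluster
    means (one such codebook would be built per feature type)."""
    centers = sorted(values)[::max(len(values) // k, 1)][:k]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[i].append(v)
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

def quantize(v, codebook):
    """Replace a feature value by the index (symbol) of its nearest codeword."""
    return min(range(len(codebook)), key=lambda i: abs(v - codebook[i]))

# Invented samples of one feature type, falling into three clusters.
samples = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 2.0, 2.1, 1.9]
codebook = kmeans_1d(samples, k=3)
symbols = [quantize(v, codebook) for v in samples]
```

At recognition time only the symbol sequence is kept, which is the "reduced representation" the abstract refers to.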

Journal ArticleDOI
Yao Wang, Ouseb Lee
TL;DR: The proposed representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction, and can successfully track facial feature movements in head-and-shoulder type of sequences.
Abstract: This paper introduces a representation scheme for image sequences using nonuniform samples embedded in a deformable mesh structure. It describes a sequence by nodal positions and colors in a starting frame, followed by nodal displacements in the following frames. The nodal points in the mesh are more densely distributed in regions containing interesting features such as edges and corners; and are dynamically updated to follow the same features in successive frames. They are determined automatically by maximizing feature (e.g., gradient) magnitudes at nodal points, while minimizing interpolation errors within individual elements, and matching errors between corresponding elements. In order to avoid the mesh elements becoming overly deformed, a penalty term is also incorporated, which measures the irregularity of the mesh structure. The notions of shape functions and master elements commonly used in the finite element method have been applied to simplify the numerical calculation of the energy functions and their gradients. The proposed representation is motivated by the active contour or snake model proposed by Kass, Witkin, and Terzopoulos (1988). The current representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction. Our computer simulations have shown that the proposed scheme can successfully track facial feature movements in head-and-shoulder type of sequences, and more generally, interframe changes that can be modeled as elastic deformation. The treatment for the starting frame also constitutes an efficient representation of arbitrary still images.

Proceedings ArticleDOI
06 Oct 1994
TL;DR: This paper takes advantage of the ability of many active optical range sensors to record intensity or even color in addition to the range information to improve the registration procedure by constraining potential matches between pairs of points based on a similarity measure derived from the intensity information.
Abstract: The determination of relative pose between two range images, also called registration, is a ubiquitous problem in computer vision, for geometric model building as well as dimensional inspection. The method presented in this paper takes advantage of the ability of many active optical range sensors to record intensity or even color in addition to the range information. This information is used to improve the registration procedure by constraining potential matches between pairs of points based on a similarity measure derived from the intensity information. One difficulty in using the intensity information is its dependence on the measuring conditions such as distance and orientation. The intensity or color information must first be converted into a viewpoint-independent feature. This can be achieved by inverting an illumination model, by differential feature measurements or by simple clustering. Following that step, a robust iterative closest point method is then used to perform the pose determination. Using the intensity can help to speed up convergence or, in cases of remaining degrees of freedom (e.g. on images of a sphere), to additionally constrain the match. The paper will describe the algorithmic framework and provide examples using range-and-color images.
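The intensity-constrained matching step can be sketched as a nearest-neighbour search gated by an intensity similarity test. This illustrates only the match-constraint idea, not the full iterative closest point procedure or the viewpoint-independence correction the paper describes; all point values are invented:

```python
def match_points(src, dst, max_intensity_diff=0.2):
    """Closest-point matching with an intensity gate: a candidate pair is
    considered only if its (viewpoint-corrected) intensities also agree.
    Points are (x, y, intensity) tuples; returns (dst_index, sq_dist) pairs."""
    pairs = []
    for (sx, sy, si) in src:
        best, best_d = None, float("inf")
        for j, (dx, dy, di) in enumerate(dst):
            if abs(si - di) > max_intensity_diff:
                continue  # intensity rules this candidate out
            d = (sx - dx) ** 2 + (sy - dy) ** 2
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((best, best_d))
    return pairs

# Two geometrically close dst points; intensity disambiguates the match.
src = [(0.0, 0.0, 0.9)]
dst = [(0.1, 0.0, 0.1), (0.0, 0.1, 0.9)]
pairs = match_points(src, dst)
```

Without the gate the geometrically nearest point would win; the intensity constraint picks the photometrically consistent one, which is how the extra channel constrains remaining degrees of freedom.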

Patent
21 Jan 1994
Abstract: A signal processing arrangement uses a codebook of first vector quantized speech feature signals formed responsive to a large collection of speech feature signals. The codebook is altered by combining the first speech feature signals of the codebook with second speech feature signals generated responsive to later input speech patterns during normal speech processing. A speaker recognition template can be updated in this fashion to take account of change which may occur in the voice and speaking characteristics of a known speaker.


01 Jan 1994
TL;DR: Describes a set of feature selection algorithms that use case-based classifiers, empirically compares them, and introduces novel extensions of backward sequential selection that allow it to scale to this task.
Abstract: Accurate weather prediction is crucial for many activities, including Naval operations. Researchers within the meteorological division of the Naval Research Laboratory have developed and fielded several expert systems for problems such as fog and turbulence forecasting, and tropical storm movement. They are currently developing an automated system for satellite image interpretation, part of which involves cloud classification. Their cloud classification database contains 204 high-level features, but contains only a few thousand instances. The predictive accuracy of classifiers can be improved on this task by employing a feature selection algorithm. We explain why non-parametric case-based classifiers are excellent choices for use in feature selection algorithms. We then describe a set of such algorithms that use case-based classifiers, empirically compare them, and introduce novel extensions of backward sequential selection that allow it to scale to this task. Several of the approaches we tested located feature subsets that attain significantly higher accuracies than those found in previously published research, and some did so with fewer features.
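Backward sequential selection wrapped around a case-based (1-nearest-neighbour) classifier can be sketched as follows; the wrapper loop and toy data are illustrative, not the authors' scaled extensions:

```python
def loo_1nn_accuracy(data, labels, feats):
    """Leave-one-out 1-NN accuracy using only the columns listed in `feats`."""
    correct = 0
    for i, x in enumerate(data):
        best_j, best_d = None, float("inf")
        for j, y in enumerate(data):
            if j == i:
                continue
            d = sum((x[f] - y[f]) ** 2 for f in feats)
            if d < best_d:
                best_j, best_d = j, d
        correct += labels[best_j] == labels[i]
    return correct / len(data)

def backward_selection(data, labels):
    """Drop one feature at a time as long as accuracy does not decrease."""
    feats = list(range(len(data[0])))
    acc = loo_1nn_accuracy(data, labels, feats)
    improved = True
    while improved and len(feats) > 1:
        improved = False
        for f in list(feats):
            trial = [g for g in feats if g != f]
            trial_acc = loo_1nn_accuracy(data, labels, trial)
            if trial_acc >= acc:
                feats, acc = trial, trial_acc
                improved = True
                break
    return feats, acc

# Feature 0 separates the classes; feature 1 is pure noise.
data = [(0.0, 0.7), (0.1, 0.1), (0.2, 0.9), (1.0, 0.8), (1.1, 0.2), (1.2, 0.5)]
labels = [0, 0, 0, 1, 1, 1]
selected, acc = backward_selection(data, labels)
```

The wrapper discards the noise feature because accuracy does not drop without it, which is the basic mechanism the paper's extensions scale up to hundreds of features.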

Patent
07 Nov 1994
TL;DR: In this paper, the authors propose a method of operating an image recognition system in which a neural network has a plurality of input neurons, output neurons and an interconnection weight matrix; an image of a structure is digitized to define an input object space, which is transformed into a feature vector of n scale-, position- and rotation-invariant feature signals.
Abstract: A method of operating an image recognition system including providing a neural network including a plurality of input neurons, a plurality of output neurons and an interconnection weight matrix; providing a display including an indicator; initializing the indicator to an initialized state; obtaining an image of a structure; digitizing the image so as to obtain a plurality of input intensity cells and define an input object space; transforming the input object space to a feature vector including a set of n scale-, position- and rotation-invariant feature signals, where n is a positive integer not greater than the plurality of input neurons, by extracting the set of n scale-, position- and rotation-invariant feature signals from the input object space according to a set of relationships I_k = ∫∫_Ω I(x,y) h[k, I(x,y)] dx dy, where I_k is the set of n scale-, position- and rotation-invariant feature signals, k is a series of counting numbers from 1 to n inclusive, (x,y) are the coordinates of a given cell of the plurality of input intensity cells, I(x,y) is a function of an intensity of the given cell of the plurality of input intensity cells, Ω is an area of integration of input intensity cells, and h[k, I(x,y)] is a data-dependent kernel transform, from a set of orthogonal functions, of I(x,y) and k; transmitting the set of n scale-, position- and rotation-invariant feature signals to the plurality of input neurons; transforming the set of n scale-, position- and rotation-invariant feature signals at the plurality of input neurons to a set of structure recognition output signals at the plurality of output neurons according to a set of relationships defined at least in part by the interconnection weight matrix of the neural network; transforming the set of structure recognition output signals to a structure classification signal; and transmitting the structure classification signal to the display so as to perceptively alter the initialized state of the indicator and display the structure recognition signal for the structure.

Proceedings ArticleDOI
21 Jun 1994
TL;DR: A visual attention system that extracts regions of interest by integrating multiple image cues by introducing an alerting (motion-based) system able to explore and avoid obstacles is described.
Abstract: Active and selective perception seeks regions of interest in an image in order to reduce the computational complexity associated with time-consuming processes such as object recognition. We describe in this paper a visual attention system that extracts regions of interest by integrating multiple image cues. Bottom-up cues are detected by decomposing the image into a number of feature and conspicuity maps, while a-priori knowledge (i.e. models) about objects is used to generate top-down attention cues. Bottom-up and top-down information is combined through a non-linear relaxation process using energy minimization-like procedures. The functionality of the attention system is expanded by the introduction of an alerting (motion-based) system able to explore and avoid obstacles. Experimental results are reported, using cluttered and noisy scenes.
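Combining bottom-up cues can be illustrated by normalizing feature maps and summing them. The paper's actual combination is a non-linear relaxation process, so this weighted sum is only a simplified stand-in, with invented cue maps:

```python
def normalize(m):
    """Scale a feature map to [0, 1] so different cues are comparable."""
    lo = min(min(r) for r in m)
    hi = max(max(r) for r in m)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in r] for r in m]

def combine(maps, weights):
    """Weighted sum of normalized feature maps into one saliency map."""
    norm = [normalize(m) for m in maps]
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * nm[y][x] for wt, nm in zip(weights, norm))
             for x in range(w)] for y in range(h)]

# A color cue and a motion cue agree on the top-right cell of a 2x2 grid.
color = [[0, 8], [0, 0]]
motion = [[0, 4], [2, 0]]
saliency = combine([color, motion], weights=[0.5, 0.5])
peak = max((v, (y, x)) for y, r in enumerate(saliency) for x, v in enumerate(r))
```

The peak of the combined map is the candidate region of interest; cells supported by several cues dominate those supported by one.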

Journal ArticleDOI
Peter Bull
TL;DR: It has been claimed that the question and answer format is the defining feature of the news interview, although the definition of what constitute questions, replies, or non-replies is by no means s...
Abstract: It has been claimed that the question and answer format is the defining feature of the news interview, although the definition of what constitute questions, replies, or non-replies is by no means s...

01 Jan 1994
TL;DR: In this paper, the authors describe and analyze a number of feature interactions by using two independent classification schemes and provide a coherent industry-wide collection of illustrative features and their interactions.
Abstract: Rapid creation of new services for telecommunications systems is hindered by the feature interaction problem. This is an important issue for development of IN services, not only because of interactions among IN services themselves but because of interactions of IN services with switch-based services and potential interactions with services not yet developed. Furthermore, the problem is fundamental to services creation; it is not restricted to IN services. Any platform for telecommunication services requires a method for dealing with the feature interaction problem. A number of approaches for managing feature interactions have been proposed. However, lack of structured ways to categorize feature interactions makes it difficult to determine if a particular approach has addressed some, if not all, classes of interactions. We describe and analyze a number of feature interactions by using two independent classification schemes. This paper is a step to achieving the goal of a coherent industry-wide collection of illustrative features and their interactions. The collection will help convey the scope of the feature interaction problem. It will also serve as a benchmark for determining the coverage of various approaches, and as a guideline for identifying potential interactions in software architectures and platforms.

Journal ArticleDOI
TL;DR: The current algorithm is capable of recognizing certain classes of interacting prismatic depression features, such as slots, steps, blind slots, blind steps, pockets, and prismatic holes, and it provides multiple interpretations.
Abstract: A new feature recognition algorithm capable of recognizing interacting machining features and providing multiple interpretations is presented. For machined parts with interacting features, multiple equally valid sets of feature interpretations exist. The term ‘multiple interpretations’ is associated with identifying all the possible sets of machining features that can be recognized from the part. It represents multiple ways to decompose the total machinable volume into feature volumes. The feature recognition algorithm presented in the paper uses a boundary representation as input, and it is developed in two stages: (a) volume decomposition, and (b) reconstruction of features. In the first stage, the volume to be machined is identified and decomposed into small blocks by extending boundary faces of the part. In the second stage, feature volumes are reconstructed by systematically connecting the small blocks built in the previous stage. The current algorithm is capable of recognizing certain classes of interacting prismatic depression features, such as slots, steps, blind slots, blind steps, pockets, and prismatic holes, and it provides multiple interpretations. Test software is implemented and integrated with the i-deas solid modeller. Sample results demonstrating the algorithm are also presented.


Journal ArticleDOI
TL;DR: The degree of feature co-alignment in the output of oriented filters is the cue used by human vision to detect symmetry; the broader computational role of feature alignment detection in early vision, particularly for object detection and image segmentation, is discussed.
Abstract: When bilaterally symmetric images are spatially filtered and thresholded, a subset of the resultant 'blobs' cluster around the axis of symmetry. Consequently, a quantitative measure of blob alignment can be used to code the degree of symmetry and to locate the axis of symmetry. Four alternative models were tested to examine which components of this scheme might be involved in human detection of symmetry. Two used a blob-alignment measure, operating on the output of either isotropic or oriented filters. The other two used similar filtering schemes, but measured symmetry by calculating the correlation of one half of the pattern with a reflection of the other. Simulations compared the effect of spatial jitter, proportion of matched to unmatched dots and width or location of embedded symmetrical regions, on models' detection of symmetry. Only the performance of the oriented filter + blob-alignment model was consistent with human performance in all conditions. It is concluded that the degree of feature co-alignment in the output of oriented filters is the cue used by human vision to perform these tasks. The broader computational role that feature alignment detection could play in early vision is discussed, particularly for object detection and image segmentation. In this framework, symmetry is a consequence of a more general-purpose grouping scheme.

Journal ArticleDOI
TL;DR: A novel stereo matching algorithm which integrates learning, feature selection, and surface reconstruction, and a self-diagnostic method for determining when a priori knowledge is necessary for finding the correct match is presented.
Abstract: We present a novel stereo matching algorithm which integrates learning, feature selection, and surface reconstruction. First, a new instance-based learning (IBL) algorithm is used to generate an approximation to the optimal feature set for matching. In addition, the importance of two separate kinds of knowledge, image dependent knowledge and image independent knowledge, is discussed. Second, we develop an adaptive method for refining the feature set. This adaptive method analyzes the feature error to locate areas of the image that would lead to false matches. Then these areas are used to guide the search through feature space towards maximizing the class separation distance between the correct match and the false matches. Third, we introduce a self-diagnostic method for determining when a priori knowledge is necessary for finding the correct match. If the a priori knowledge is necessary then we use a surface reconstruction model to discriminate between match possibilities. Our algorithm is comprehensively tested against fixed feature set algorithms and against a traditional pyramid algorithm. Finally, we present and discuss extensive empirical results of our algorithm based on a large set of real images.

Journal ArticleDOI
TL;DR: These methods are reviewed from two points of view, feature projection and feature density equalization, and a systematic comparison of them is made based on the following criteria: recognition rate, processing speed, computational complexity, and degree of variation.

Journal ArticleDOI
TL;DR: The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms.

Journal ArticleDOI
TL;DR: A system architecture for feature-based modelling is presented, founded on an integration of design by features and feature recognition that is obtained through the definition of a common feature library and an intermediate model, which serves as the communication link between the geometric model and the feature-based model.
Abstract: Previous work on feature-based modelling has emphasized generating features either in the design phase (design by features) or in the later product-development phases (feature-recognition). Recently, some attempts have been made to integrate both strategies, with the major aim of combining the positive aspects while reducing the drawbacks. The paper presents a system architecture for feature-based modelling which is founded on this integration that is obtained through the definition of a common feature library and an intermediate model, which plays the role of communication link between the geometric model and the feature-based model.
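The intermediate model described above can be pictured as a thin bidirectional mapping between feature instances (drawn from the common feature library) and the geometric entities they correspond to. The sketch below is purely illustrative: all class and identifier names are assumptions, not the paper's data structures.

```python
class IntermediateModel:
    """Communication link between a feature-based model and a geometric
    model: records which geometric entities realize each feature, so
    both design-by-features and feature-recognition can populate it."""

    def __init__(self):
        self._links = {}  # feature id -> list of geometric entity ids

    def link(self, feature_id, entity_ids):
        """Associate a feature instance with the entities it generates
        (or, in recognition, the entities it was recognized from)."""
        self._links.setdefault(feature_id, []).extend(entity_ids)

    def entities_of(self, feature_id):
        """Geometric view of a feature."""
        return list(self._links.get(feature_id, []))

    def features_of(self, entity_id):
        """Feature view of a geometric entity (reverse lookup)."""
        return [f for f, ents in self._links.items() if entity_id in ents]
```

Keeping both directions queryable is what lets the two strategies (design by features and feature recognition) share one model rather than maintaining separate ones.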

Posted Content
TL;DR: In this article, the authors present a typed feature-based representation language and inference system called TDL that supports open- and closed-world reasoning over types and allows for partitions and incompatible types.
Abstract: This paper presents TDL, a typed feature-based representation language and inference system. Type definitions in TDL consist of type and feature constraints over the boolean connectives. TDL supports open- and closed-world reasoning over types and allows for partitions and incompatible types. Working with partially as well as with fully expanded types is possible. Efficient reasoning in TDL is accomplished through specialized modules.
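A toy sketch can convey the flavour of reasoning over a type hierarchy with declared incompatibilities, as the abstract describes. This is not the actual TDL implementation: the hierarchy, the incompatibility declaration, and the subsumption-based unification below are all illustrative assumptions.

```python
# A tiny single-inheritance type hierarchy and an explicit
# incompatibility declaration (closed-world style).
PARENTS = {"noun": "word", "verb": "word", "word": "top"}
INCOMPATIBLE = {frozenset({"noun", "verb"})}

def ancestors(t):
    """The type itself plus all its supertypes, most specific first."""
    out = [t]
    while t in PARENTS:
        t = PARENTS[t]
        out.append(t)
    return out

def unify(t1, t2):
    """Most specific type compatible with both inputs, or None when
    the types are declared incompatible or neither subsumes the other."""
    if frozenset({t1, t2}) in INCOMPATIBLE:
        return None
    a1, a2 = ancestors(t1), ancestors(t2)
    if t2 in a1:
        return t1  # t1 is at least as specific as t2
    if t1 in a2:
        return t2
    return None  # no subsumption relation: unification fails
```

In a full system like TDL this lookup would extend to feature constraints, partitions, and partially expanded types; the single-inheritance table here only shows the type-level skeleton.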

01 Jan 1994
TL;DR: Methods for implementing the Negotiating Agents approach are given, along with methods for defining IN features in terms of the Negotiating Agents approach in order to resolve conflicts between these features.
Abstract: This article describes how to use the Negotiating Agents approach on a telecommunications platform. Negotiation is used in this approach to resolve conflicts between features of one user and of different users. The theory behind the approach is discussed briefly. Methods for implementing the approach are given along with the methods for defining IN features in terms of the Negotiating Agents approach in order to resolve conflicts between these features.

Patent
Maureen Stone1, Anthony Derose1
30 May 1994
TL;DR: In this article, a method for operating on an object-based model data structure from which a first image has been produced in order to produce a second image for display in the spatial context of the first image is presented.
Abstract: A method for operating a processor-controlled machine and a machine having a processor are disclosed for operating on an object-based model data structure from which a first image has been produced in order to produce a second image for display in the spatial context of the first image. A viewing operation region (VOR) is displayed coextensively with a first image segment of the first image in the display area of the machine's display device. The first image segment includes a display object representing a model data item in the object-based model data structure. In response to the display of the VOR, a second image is produced using the model data item in the object-based model data structure. The second image is displayed in the VOR, in the spatial context of the first image, simultaneously with the display of the first image, replacing the first image segment in the display area. In one aspect of the invention, the display object in the first image includes a display feature, and the second image presented in the VOR, in the spatial context of the first image, shows the display object having a modified display feature. In one illustrated implementation, the method operates cooperatively with a graphical object editor application executing in a graphical user interface environment. A machine user moves the VOR over a portion of a graphical object image, and in response to the user's movement action, a viewing operation associated with the VOR operates on the editable object-based model data structure that produced the graphical object image to produce a second modified view of the portion of the graphical object image coextensively positioned with the VOR, displaying the second modified view in the VOR. The method also supports object selection, permitting second views to be produced according to the viewing operation for selected display objects in the first image.