
Showing papers on "Feature (computer vision) published in 1990"


Journal ArticleDOI
TL;DR: A new method for computing a transformation from a three-dimensional model coordinate frame to the two-dimensional image coordinate frame, using three pairs of model and image points, is developed, showing that this transformation always exists for three noncollinear points, and is unique up to a reflective ambiguity.
Abstract: In this paper we consider the problem of recognizing solid objects from a single two-dimensional image of a three-dimensional scene. We develop a new method for computing a transformation from a three-dimensional model coordinate frame to the two-dimensional image coordinate frame, using three pairs of model and image points. We show that this transformation always exists for three noncollinear points, and is unique up to a reflective ambiguity. The solution method is closed-form and only involves second-order equations. We have implemented a recognition system that uses this transformation method to determine possible alignments of a model with an image. Each of these hypothesized matches is verified by comparing the entire edge contours of the aligned object with the image edges. Using the entire edge contours for verification, rather than a few local feature points, reduces the chance of finding false matches. The system has been tested on partly occluded objects in highly cluttered scenes.
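The alignment idea, solving for a model-to-image mapping from three point pairs, can be illustrated in its simplest planar form. The sketch below solves for a 2-D affine transform (linear part plus translation) from three noncollinear correspondences; it is a simplified analogue for intuition, not the paper's closed-form 3-D-to-2-D solution, and the function name is invented here:

```python
import numpy as np

def affine_from_three_points(model_pts, image_pts):
    """Solve for the 2x2 linear part A and translation t mapping
    model points to image points: image = A @ model + t.
    Requires three noncollinear model points."""
    m = np.asarray(model_pts, dtype=float)   # shape (3, 2)
    i = np.asarray(image_pts, dtype=float)   # shape (3, 2)
    # Linear 6x6 system in the unknowns [a11, a12, a21, a22, t1, t2].
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for k in range(3):
        x, y = m[k]
        M[2 * k]     = [x, y, 0, 0, 1, 0]
        M[2 * k + 1] = [0, 0, x, y, 0, 1]
        b[2 * k], b[2 * k + 1] = i[k]
    p = np.linalg.solve(M, b)   # singular iff the model points are collinear
    return p[:4].reshape(2, 2), p[4:]
```

As in the paper, noncollinearity is exactly the condition that makes the system solvable.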

550 citations



Proceedings ArticleDOI
04 Nov 1990
TL;DR: A texture segmentation algorithm inspired by the multichannel filtering theory for visual information processing in the early stages of the human visual system is presented; it appears to perform as predicted by preattentive texture discrimination in humans.
Abstract: A texture segmentation algorithm inspired by the multichannel filtering theory for visual information processing in the early stages of the human visual system is presented. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain. A systematic filter selection scheme based on reconstruction of the input image from the filtered images is proposed. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of energy in a window around each pixel. An unsupervised square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial adjacency information in the clustering process is proposed. Experiments on images with natural textures as well as artificial textures with identical second- and third-order statistics are reported. The algorithm appears to perform as predicted by preattentive texture discrimination in humans.
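The per-channel feature extraction described above (Gabor filtering, a nonlinear transformation, then local energy in a window) can be sketched as follows. This is a minimal illustration with arbitrary filter parameters, not the paper's implementation or its filter-selection scheme:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Even-symmetric Gabor: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def _conv_same(img, kern):
    """FFT-based 'same'-size 2-D convolution (zero padding)."""
    s = (img.shape[0] + kern.shape[0] - 1, img.shape[1] + kern.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(kern, s), s)
    ph, pw = kern.shape[0] // 2, kern.shape[1] // 2
    return full[ph:ph + img.shape[0], pw:pw + img.shape[1]]

def gabor_energy(img, kern, win=9, alpha=0.25):
    """One texture feature image: filter, apply a tanh nonlinearity,
    then average the rectified response over a local window."""
    resp = _conv_same(img, kern)
    nonlinear = np.abs(np.tanh(alpha * resp))
    box = np.ones((win, win)) / win**2
    return _conv_same(nonlinear, box)
```

Stacking one such feature image per selected filter and running a square-error (k-means-style) clusterer over the per-pixel feature vectors yields the segmentation.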

426 citations


Patent
05 Nov 1990
TL;DR: In this article, a time series of successive relatively high-resolution frames of image data, any frame of which may or may not include a graphical representation of one or more predetermined specific members of a given generic class (e.g., human beings), is examined in order to recognize the identity of a specific member if that member's image is included in the time series.
Abstract: A time series of successive relatively high-resolution frames of image data, any frame of which may or may not include a graphical representation of one or more predetermined specific members (e.g., particular known persons) of a given generic class (e.g., human beings), is examined in order to recognize the identity of a specific member if that member's image is included in the time series. The frames of image data may be examined in real time at various resolutions, starting with a relatively low resolution, to detect whether some earlier-occurring frame includes any of a group of image features possessed by an image of a member of the given class. The image location of a detected image feature is stored and then used in a later-occurring, higher resolution frame to direct the examination only to the image region of the stored location in order to (1) verify the detection of the aforesaid image feature, and (2) detect one or more other of the group of image features, if any is present in that image region of the frame being examined. By repeating this type of examination for later and later occurring frames, the accumulated detected features can first reliably recognize the detected image region to be an image of a generic object of the given class, and later can reliably recognize the detected image region to be an image of a certain specific member of the given class.

308 citations


Journal ArticleDOI
Myrna Gopnik1
19 Apr 1990-Nature

294 citations


Journal ArticleDOI
TL;DR: Results obtained show that direct feature statistics such as the Bhattacharyya distance are not appropriate evaluation criteria if texture features are used for image segmentation, and that the Haralick, Laws and Unser methods gave best overall results.

228 citations


Patent
21 Dec 1990
TL;DR: In this article, the configuration of a path for motor vehicles is recognized on the basis of image data produced by a television camera or the like, and a group of straight lines approximating the array of the feature points are also determined.
Abstract: The configuration of a path for motor vehicles is recognized on the basis of image data produced by a television camera or the like. Feature points contained in original image data of the path are determined, and a group of straight lines approximating the array of the feature points are also determined. From the group of straight lines, there are extracted straight lines which are effective to determine boundaries of the path. The extracted straight lines are divided into a plurality of line segments by points of intersection between the straight lines. The line segments are then checked against the feature points of the original image data to determine whether the line segments correspond to the boundaries of the path. The original image data may be divided into a plurality of areas, and the above process may be carried out with respect to the image data in each of the areas.
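The line-approximation step, fitting a straight line to an array of feature points, reduces to ordinary least squares. A minimal sketch (the function name and the y = ax + b parameterization are illustrative choices, not from the patent, which must also handle near-vertical boundaries):

```python
import numpy as np

def fit_line(points):
    """Least-squares line y = a*x + b through (x, y) feature points,
    giving one candidate boundary line for the path."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    (a, b), *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    return a, b
```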

212 citations


Journal ArticleDOI
TL;DR: Modifications to an existing iterative optimization procedure for solving the formulation of the correspondence problem as an optimization problem are discussed and experimental results are presented to show the merits of the formulation.
Abstract: Occlusion and poor feature point detection are two of the main difficulties in the use of multiple frames for establishing correspondence of feature points. A formulation of the correspondence problem as an optimization problem is used to handle these difficulties. Modifications to an existing iterative optimization procedure for solving the formulation of the correspondence problem are discussed. Experimental results are presented to show the merits of the formulation.

191 citations


Journal ArticleDOI
01 Sep 1990
TL;DR: An adaptive method for visually tracking a known moving object with a single mobile camera is described: the locations of object features on the image plane are predicted from past observations and past control inputs, and an optimal control input is determined that moves the camera so that the image features align with their desired positions.
Abstract: An adaptive method for visually tracking a known moving object with a single mobile camera is described. The method differs from previous methods of motion estimation in that both the camera and the object are moving. The objective is to predict the location of features of the object on the image plane based on past observations and past control inputs and then to determine an optimal control input that will move the camera so that the image features align with their desired positions. A resolved motion rate control structure is used to control the relative position and orientation between the camera and the object. A geometric model of the camera is used to determine the linear differential transformation from image features to camera position and orientation. To adjust for modeling errors and system nonlinearities, a self-tuning adaptive controller is used to update the transformation and compute the optimal control. Computer simulations were conducted to verify the performance of the adaptive feature prediction and control.

171 citations


Journal ArticleDOI
TL;DR: It is shown that with uniform sampling in time, three noncollinear feature points in five consecutive binocular image pairs contain all the spatial and temporal information.
Abstract: A kinematic model-based approach for the estimation of 3-D motion parameters from a sequence of noisy stereo images is discussed. The approach is based on representing the constant acceleration translational motion and constant precession rotational motion in the form of a bilinear state-space model using standard rectilinear states for translation and quaternions for rotation. Closed-form solutions of the state transition equations are obtained to propagate the quaternions. The measurements are noisy perturbations of 3-D feature points represented in an inertial coordinate system. It is assumed that the 3-D feature points are extracted from the stereo images and matched over the frames. Owing to the nonlinearity in the state model, nonlinear filters are designed for the estimation of motion parameters. Simulation results are included. The Cramer-Rao performance bounds for motion parameter estimates are computed. A constructive proof for the uniqueness of motion parameters is given. It is shown that with uniform sampling in time, three noncollinear feature points in five consecutive binocular image pairs contain all the spatial and temporal information. Both nondegenerate and degenerate motions are analyzed. A deterministic algorithm to recover motion parameters from a stereo image sequence is summarized from the constructive proof.
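Closed-form quaternion propagation can be illustrated for the simplest case of a constant angular rate (the paper treats the more general constant-precession model). A sketch using the scalar-first [w, x, y, z] convention:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions, scalar-first [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, omega, dt):
    """Closed-form attitude update for a constant body rate omega
    (rad/s) over dt seconds: q_next = q * [cos(a/2), axis*sin(a/2)],
    where a = |omega| * dt. Exact, so no numerical integration drift."""
    rate = np.linalg.norm(omega)
    if rate * dt < 1e-12:
        return np.array(q, dtype=float)
    axis = np.asarray(omega, dtype=float) / rate
    half = rate * dt / 2.0
    dq = np.concatenate([[np.cos(half)], np.sin(half) * axis])
    return quat_mul(q, dq)
```

Because the update is an exact rotation, the quaternion norm is preserved step to step, which is one reason such closed-form transition equations are preferred over numerically integrating the kinematics.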

170 citations


Proceedings ArticleDOI
17 Jun 1990
TL;DR: The proposal of G. Cottrell et al. (1987) that their image compression network might be used to extract image features for pattern recognition automatically, is tested by training a neural network to compress 64 face images, spanning 11 subjects, and 13 nonface images.
Abstract: The proposal of G. Cottrell et al. (1987) that their image compression network might be used to extract image features for pattern recognition automatically, is tested by training a neural network to compress 64 face images, spanning 11 subjects, and 13 nonface images. Features extracted in this manner (the output of the hidden units) are given as input to a one-layer network trained to distinguish faces from nonfaces and to attach a name and sex to the face images. The network successfully recognizes new images of familiar faces, categorizes novel images as to their `faceness' and, to a great extent, gender, and exhibits continued accuracy over a considerable range of partial or shifted input.

Journal ArticleDOI
TL;DR: A rule-based, low-level segmentation system that can automatically identify the space occupied by different structures of the brain by magnetic resonance imaging (MRI) is described and is applied to several MR images.
Abstract: A rule-based, low-level segmentation system that can automatically identify the space occupied by different structures of the brain by magnetic resonance imaging (MRI) is described. Given three-dimensional image data as a stack of slices, it can extract brain parenchyma, cerebro-spinal fluid, and high-intensity abnormalities. The multiple feature environment of MR imaging is used to compute several low-level features to enhance the separability of voxels of different structures. The population distribution of each feature is considered and a confidence function is computed whose amplitude indicates the likelihood of a voxel, with a given feature value, being a member of a class of voxels. Confidence levels are divided into a set of ranges to define notions such as highly confident, moderately confident, and least confident. The rule-based system consists of a set of sequential stages in which partially segmented binary scenes of one stage guide the next stage. Some important low-level definitions and rules for a clinical imaging protocol are presented. The system is applied to several MR images.

Journal ArticleDOI
TL;DR: In this article, an automatic wake detection algorithm based on the Radon transform was developed and applied to the Seasat imagery, which can detect ship wakes and differentiate ship wakes from other linear ocean features produced by the underwater topography and existing sea conditions.
Abstract: A moving ship produces a set of waves in a characteristic linear "V" pattern. This pattern, or some of its components, can often be detected in ocean imagery produced by satellite-borne Synthetic Aperture Radar (SAR) sensors operating at L-band. Some wake components, notably the turbulent wake, may extend for 5-15 km behind the ship. As ship wake detection can provide information such as ship direction and speed, the detection of these wakes can play an important role in satellite surveillance of shipping. Described is research done on the use of the Radon transform to automatically detect ship wakes in Seasat ocean imagery. The objective of the research was twofold: to automatically detect ship wakes and to differentiate ship wakes from other linear ocean features produced by the underwater topography and existing sea conditions. An ADA (Automatic Detection Algorithm) based on the Radon transform was developed and applied to the Seasat imagery. The basic system performs the Radon transform of the SAR image, then detects bright and dark peaks produced in the transform by wakes (or other linear features) in the image. As the Radon transform essentially integrates the image intensity along every straight line through an image, each integral becomes one element in transform space. This integration process averages out the intensity fluctuations due to noise, thereby increasing the signal-to-noise ratio of the feature of interest in the transform space relative to that in the original image. A number of additional processing techniques were developed and tested to improve the PD (probability of detection) and reduce the PFA (probability of false alarm). To date, the use of an ADA, which combines a high-pass filter followed by a normalized Radon transform and a Wiener filter, has been shown to reliably distinguish wake peaks from false alarms. Keywords: Wake, Radon, radar.
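The core of the method, integrating image intensity along every straight line so that a linear wake collapses to a single bright peak in transform space, can be sketched with a crude discrete Radon transform (nearest-neighbour rotation plus column sums). This is an illustration only, not the paper's normalized transform or its filtering chain:

```python
import numpy as np

def radon(img, angles):
    """Crude discrete Radon transform: for each angle, rotate the
    image about its centre and sum intensity down each column,
    i.e. integrate along one family of parallel lines."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((len(angles), w))
    for i, a in enumerate(angles):
        c, s = np.cos(a), np.sin(a)
        # Inverse-map each output pixel into the source (nearest neighbour).
        xs = c * (xx - cx) + s * (yy - cy) + cx
        ys = -s * (xx - cx) + c * (yy - cy) + cy
        xi = np.rint(xs).astype(int)
        yi = np.rint(ys).astype(int)
        valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        rot = np.where(valid, img[yi.clip(0, h - 1), xi.clip(0, w - 1)], 0.0)
        out[i] = rot.sum(axis=0)
    return out
```

A bright line aligned with the integration direction sums coherently into one large transform element, while pixel noise averages out, which is the signal-to-noise argument made in the abstract.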

Patent
14 Aug 1990
TL;DR: In this article, an automatic object image follow-up device includes a setting circuit for shiftably setting a followup field, an extracting circuit for extracting a feature of an object in relation to the follow up field, a store for storing the extracted feature and a detecting circuit for detecting a relative shift between the object and the device on the basis of the feature of the object extracted by the extracting circuit and the feature stored by the store.
Abstract: An automatic object image follow-up device includes a setting circuit for shiftably setting a follow-up field; an extracting circuit for extracting a feature of an object in relation to the follow-up field; a store for storing the extracted feature; a detecting circuit for detecting a relative shift between the object and the device on the basis of the feature of the object extracted by the extracting circuit and the feature stored by the store; and a shifting circuit for shifting the follow-up field following up the object according to the relative shift.

Journal ArticleDOI
TL;DR: A procedure for defining and recognizing shape features in 3-D solid models is presented in which a shape feature is defined as a single face or a set of continuous faces possessing certain characteristic facts in topology and geometry.
Abstract: A procedure for defining and recognizing shape features in 3-D solid models is presented in which a shape feature is defined as a single face or a set of continuous faces possessing certain characteristic facts in topology and geometry. The system automatically extracts these facts from an example shape feature interactively indicated by the user. The resulting representation of the shape feature can be interactively edited and parameterized. Graph matching accomplishes feature recognition. The system searches the solid model for B-rep subgraphs with the same characteristic facts as the shape feature to be recognized. When the system recognizes a shape feature, it removes the geometry associated with the feature from the original solid model to produce a simpler solid model. It then examines the simpler solid model to determine whether additional features have been revealed. The process repeats until no additional features are found.

Journal ArticleDOI
TL;DR: It is shown that visual search can be based not only on two-dimensional image properties but also on the three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks.
Abstract: Previous theories of early vision have assumed that visual search is based on simple two-dimensional aspects of an image, such as the orientation of edges and lines. It is shown here that search can also be based on the three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks. Direct comparison shows that image-based and scene-based orientation are similar in their ability to facilitate search. These findings support the hypothesis that scene-based properties are represented at preattentive levels in early vision. Visual search is a powerful tool for investigating the representations and processes at the earliest stages of human vision. In this task, observers try to determine as rapidly as possible whether a given target item is present or absent in a display. If the time to detect the target is relatively independent of the number of other items present, the display is considered to contain a distinctive visual feature. Features found in this way (e.g. orientation, color, motion) are taken to be the primitive elements of the visual system. The most comprehensive theories of visual search (Beck, 1982; Julesz, 1984; Treisman, 1986) hypothesize the existence of two visual subsystems. A preattentive system detects features in parallel across the visual field. Spatial relations between features are not registered at this stage. These can only be determined by an attentive system that inspects serially each collection of features in the image. Recent findings, however, have argued for more sophisticated preattentive processes. For example, numerous reports show features to be context-sensitive (Callaghan, 1989; Enns, 1986; Nothdurft, 1985). Others show that spatial conjunctions of features permit rapid search under some conditions (McLeod, Driver, & Crisp, 1988; Treisman, 1988; Wolfe, Franzel, & Cave, 1988). These findings suggest that spatial information can be used at the preattentive stage.
Recent studies also suggest that the features are more complex than previously thought. For example, rapid search is possible for items defined by differences in binocular disparity (Nakayama & Silverman, 1986), raising the possibility that stereoscopic depth may be determined preattentively. Indeed, it appears that the features do not simply describe two-dimensional aspects of the image, but also describe attributes of the three-dimensional scene that gave rise to the image. Ramachandran (1988) has shown that the convexity/concavity of surfaces permits spontaneous texture segregation, and Enns and Rensink (1990) have found that search for shaded polygons is rapid when these items can be interpreted as three-dimensional blocks. Although the relevant scene-based properties present at preattentive levels have not yet been completely mapped out, likely candidates include lighting direction, surface reflectance, and three-dimensional orientation.

Patent
04 Jun 1990
TL;DR: In this paper, a feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the Fourier transform of the image is formed, and the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made.
Abstract: A method and apparatus under software control for pattern recognition utilizes a neural network implementation to recognize two dimensional input images which are sufficiently similar to a database of previously stored two dimensional images. Images are first image processed and subjected to a Fourier transform which yields a power spectrum. An in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the Fourier transform. A feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the Fourier transform of the image is formed. Feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. Unique identifier numbers are preferably stored along with the feature vector. Application of a query feature vector to the neural network will result in an output vector. The output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made. Where a successful identification has occurred, the unique identifier number may be displayed.
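The feature-vector construction can be illustrated in reduced form. The patent selects the most discriminatory spectrum regions via an in-class/out-of-class study; the sketch below simply keeps the largest-power bins as a stand-in, which also shows why the power spectrum is attractive as a recognition feature: it is invariant to circular shifts of the image.

```python
import numpy as np

def power_spectrum_features(img, n_features):
    """Toy feature vector from the 2-D Fourier power spectrum:
    the n_features largest-power bins, sorted. (The patent instead
    picks the most discriminatory bins from a class study.)"""
    spectrum = np.abs(np.fft.fft2(img)) ** 2   # power, phase discarded
    return np.sort(spectrum.ravel())[-n_features:]
```

Because translation of the image only changes the phase of the Fourier transform, the power spectrum, and hence this feature vector, is unchanged by a circular shift of the input.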

Journal ArticleDOI
TL;DR: This paper examines feature classification based on local energy detection and shows that local energy measures are intrinsically capable of making this classification because of the use of odd and even filters.

Patent
30 Apr 1990
TL;DR: In this article, the authors proposed a layered network having several layers of constrained feature detection, where each layer includes a plurality of constrained feature maps and a corresponding plurality of feature reduction maps; each feature reduction map is connected to only one constrained feature map in the same layer for undersampling that constrained feature map.
Abstract: Highly accurate, reliable optical character recognition is afforded by a layered network having several layers of constrained feature detection wherein each layer of constrained feature detection includes a plurality of constrained feature maps and a corresponding plurality of feature reduction maps. Each feature reduction map is connected to only one constrained feature map in the same layer for undersampling that constrained feature map. Units in each constrained feature map of the first constrained feature detection layer respond as a function of a corresponding kernel and of different portions of the pixel image of the character captured in a receptive field associated with the unit. Units in each feature map of the second constrained feature detection layer respond as a function of a corresponding kernel and of different portions of an individual feature reduction map or a combination of several feature reduction maps in the first constrained feature detection layer as captured in a receptive field of the unit. The feature reduction maps of the second constrained feature detection layer are fully connected to each unit in the final character classification layer. Kernels are automatically learned by constrained back propagation during network initialization or training.
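The two map types can be sketched directly: a constrained feature map is a shared-kernel (convolutional) layer whose units all respond through the same learned kernel, and a feature reduction map undersamples it by local averaging. A minimal numpy illustration of the forward pass only, not the patent's trained network or its back-propagation:

```python
import numpy as np

def feature_map(img, kernel):
    """'Constrained' feature map: every unit shares one kernel, so the
    map is a valid cross-correlation followed by a squashing unit."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Each unit sees only its local receptive field.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.tanh(out)

def reduction_map(fmap):
    """Feature reduction map: 2x2 undersampling by local averaging."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0
```

Weight sharing is what "constrained" refers to: all units in a map are forced to detect the same feature at different image positions.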

Book ChapterDOI
01 Apr 1990
TL;DR: A method for extracting geometric and relational structures from raw intensity data and an intermediate-level description between low- and high-level vision is suggested, produced by grouping image features into more and more abstract structures.
Abstract: We present a method for extracting geometric and relational structures from raw intensity data. On one hand, low-level image processing extracts isolated features. On the other hand, image interpretation uses sophisticated object descriptions in representation frameworks such as semantic networks. We suggest an intermediate-level description between low- and high-level vision. This description is produced by grouping image features into more and more abstract structures. First, we motivate our choice with respect to what should be represented and we stress the limitations inherent with the use of sensory data. Second, we describe our current implementation and illustrate it with various examples.

Proceedings ArticleDOI
11 Nov 1990
TL;DR: Algorithms are presented for finite state machine (FSM) verification and image computation which improve on the results of O. Coudert et al (1989), giving 1-4 orders of magnitude speedup.
Abstract: Algorithms are presented for finite state machine (FSM) verification and image computation which improve on the results of O. Coudert et al (1989), giving 1-4 orders of magnitude speedup. Novel features include primary input splitting: this PODEM feature enlarges the search space but shortens the search due to implications. Another new feature, identical subtree recombination, is shown to be effective for iterative networks (e.g., serial multipliers). The free-variable recognition feature prevents unbalanced bipartitioning trees in tautological subspaces. Finally, reached set pruning is significant when the image contains large numbers of previously reached states.

Patent
07 Jun 1990
TL;DR: In this paper, the outlines are based on intensity contours, where the intensity of the contour is intermediate that within and outside of the feature, and the intermediate intensity is chosen objectively based on a histogram of intensity levels.
Abstract: A system for automatically determining the outline of a selected anatomical feature or region (e.g., in a slice of magnetic resonance data) and then making a quantitative determination of a morphometric parameter (such as area or volume) associated with the feature. A volumetric measurement of the feature is made by determining for each slice the areas within the outline for the feature and summing the areas for all the slices; the outlines are based on intensity contours, where the intensity of the contour is intermediate that within and outside of the feature; the intermediate intensity is chosen objectively based on a histogram of intensity levels; interpolation is used to assign contour locations in areas where the intensity of the contour is not present exactly; the accuracy of the outline is improved using an edge-optimization procedure in which the outline is shifted transversely to the location at which an estimate of the derivative (e.g., the Sobel operator) is a maximum; an alternative technique for choosing the initial outline is to examine the drop or rise in intensity along a radial direction from a starting point within the region of interest and assign the contour to the location at which the difference in intensity reaches a predetermined value; the optimized outline for the first slice of data is saved and used as the initial outline for the adjoining slice, and the procedure of adjusting the outline transversely to the location at which a derivative estimate is a maximum is repeated, and so on, until outlines have been generated for all slices.
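The edge-optimization step above relies on a derivative estimate such as the Sobel operator, whose gradient magnitude peaks where the outline should sit. A minimal sketch of that estimate (the outline-shifting logic itself is omitted):

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel derivative estimate: gradient magnitude is largest where
    intensity changes fastest, i.e. at the edge the outline is shifted to."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # borders left at zero
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

In the patent's procedure, each outline point would be moved transversely to the position where this magnitude is maximal.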

Book ChapterDOI
01 Apr 1990
TL;DR: A Picture Archive and Communication System (PACS) based image analysis program which employs the technique of deformable templates to localize features in dual energy CT images and has several advantages over the human operator, for example, consistency, accuracy and cost.
Abstract: We propose a method for detecting and describing features in medical images using deformable templates, for the purpose of diagnostic analysis of these features. The feature of interest can be described by a parameterized template. An energy function is defined which links edges in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image content, by evaluating the energy function and accordingly altering its parameter values. A gradient maximization technique is used to optimize the placement and shape of the deformable template to fit the desired anatomical feature. The final parameter values can be used as descriptors for the feature. Measurements of intensity values within a region of the template can be used as inputs to a medical diagnostic system. We have developed a Picture Archive and Communication System (PACS) based image analysis program which employs the technique of deformable templates to localize features in dual energy CT images. Measurements can then be automatically made which can be used for maintenance of patients suffering from bone loss and abnormal marrow fat content. This system has been successfully tested on 552 (69 × 8) images and is currently in use at Massachusetts General Hospital, Boston, MA. Statistical comparisons between the system and previously used manual techniques show that their performances are practically equivalent and that the system has several advantages over the human operator, for example, consistency, accuracy and cost.

Journal ArticleDOI
TL;DR: The transform space obtained by this algorithm contains less extraneous data and more significant maxima, thus making it easier to extract the desired parameters from it.

Journal ArticleDOI
Jaroslaw R. Rossignac1
TL;DR: This paper points out the semantic ambiguities of simplistic feature-based commands for editing models, and shows how space decomposition techniques and CSG expressions based on active zones reduce the cost of executing an editing command.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: A novel imaging method is presented for acquiring an omni-directional view with range information using a single camera, and the difficult correspondence problem is solved by tracking each feature in the image sequence in the same manner as in epipolar plane image analysis.
Abstract: A novel imaging method is presented for acquiring an omni-directional view with range information using a single camera. The difficult correspondence problem is solved by tracking each feature in the image sequence in the same manner as in epipolar plane image analysis. Although the equivalent camera distance is not long, the authors can obtain reliable estimates because of the high resolution of the panoramic views. The authors expect the resolution in locating sharp edges to be very fine, up to the resolution of camera rotation, 0.005 degree. A global map making procedure is also proposed which uses omni-directional views at different locations. Omni-directional binocular stereo is used to acquire a path-centered local map with direction-dependent uncertainty. By combining multiple local maps, it is possible to build a more reliable global map. Also explored is the possibility for using the panoramic stereo for map making by a robot.

Journal ArticleDOI
TL;DR: A new methodology for coupling design and automatic process planning based on form features is described that offers a balance between the computer-aided design and manufacturing planning processes based on an object-oriented, feature-based design environment and feature refinement, a new knowledge-based approach to geometric reasoning in generative process planning.

Patent
Watanabe Mutsumi1
26 Sep 1990
TL;DR: In this paper, a moving object detection system includes an image acquiring section, feature extracting section, background detecting section, a prediction parameter calculating section, region estimating section, and a region determining section.
Abstract: A moving object detecting system includes an image acquiring section, a feature extracting section, a background detecting section, a prediction parameter calculating section, a region estimating section, and a moving object determining section. The image acquiring section has a mobile imaging system and acquires image frames sequentially obtained upon movement of the imaging system. The feature extracting section extracts features of the acquired image frames. The background feature detecting section detects a background feature from the features. The prediction parameter calculating section obtains prediction parameters for predicting a motion of the background region upon movement of the imaging system in accordance with a positional relationship between the correlated background features. The region estimating section estimates a region where features detected from image frames obtained by the mobile imaging system may have been present in an image frame of the immediately preceding frame by using the prediction parameters. The moving object determining section determines whether a feature corresponding to the given feature is present in the estimation region, thereby checking the presence/absence of the moving object.

01 Jan 1990
TL;DR: This thesis discusses an experimental feature recognizer that uses a blend of artificial intelligence (AI) and computational geometry techniques and is implemented in a rapid prototyping test bed consisting of the KnowledgeCraft AI environment tightly coupled with the PADL-2 solid modeler.
Abstract: Recognition of machining features such as holes, slots and pockets is essential for the fully automatic manufacture of mechanical parts. This thesis discusses an experimental feature recognizer that uses a blend of artificial intelligence (AI) and computational geometry techniques. The recognizer is implemented in a rapid prototyping test bed consisting of the KnowledgeCraft™ AI environment tightly coupled with the PADL-2 solid modeler. It is capable of finding features with interacting volumes (e.g., two crossing slots), and takes into account nominal shape information, tolerancing and other available data. Machinable volumetric features (or simply "features") are solids removable by operations typically performed in 3-axis machining centers. Features are recognized by the characteristic traces they leave in the nominal geometry of a part. These traces, also called surface features, provide reliable clues or hints for the potential existence of volumetric features, even when feature interactions occur. A generate-and-test strategy is used. Partial information on the presence of features is processed by OPS-5 production rules which generate hints and post them on a blackboard. The clues are assessed, and those judged promising are processed to ensure they correspond to actual features and to gather information necessary for process planning. A solid feature is associated with each promising hint, its interaction with other features is represented by segmenting the feature into optional and required volumes, and the feature's accessibility is analyzed. Because some of the proposed features may rely on faulty hints, these are tested for validity in a second phase of feature finding. The validity tests ensure that the proposed features are accessible, do not intrude into the desired part, and satisfy other machinability conditions.
The process continues until it produces a complete decomposition of the volume to be machined in terms of volumetric features that correspond to material removal operations.
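The hint-blackboard control structure described above can be sketched in a few lines. This is a schematic illustration only: the class and function names, the scalar "promise" score, and the predicate-style validity tests are invented stand-ins for the thesis's OPS-5 rules and geometric machinability tests.

```python
from dataclasses import dataclass

@dataclass
class Hint:
    """A clue for a potential volumetric feature, derived from the
    traces (surface features) it leaves on the part's nominal geometry."""
    feature_type: str   # e.g. "slot", "hole", "pocket"
    faces: tuple        # ids of the faces whose traces suggested it
    score: float = 0.0  # assessed promise of the hint

class Blackboard:
    """Rules post hints here; the tester later examines the promising ones."""
    def __init__(self):
        self.hints = []

    def post(self, hint):
        self.hints.append(hint)

    def promising(self, threshold=0.5):
        return [h for h in self.hints if h.score >= threshold]

def generate_and_test(blackboard, validity_tests):
    """Second-phase filtering: keep only promising hints that pass every
    validity test (accessibility, no intrusion into the part, and other
    machinability conditions in the real system)."""
    return [h for h in blackboard.promising()
            if all(test(h) for test in validity_tests)]
```

The outer loop would repeat this generate-and-test cycle, as the abstract says, until the volume to be machined is completely decomposed into machinable features.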

Journal ArticleDOI
01 Aug 1990-Ecology
TL;DR: A novel process for preparing 4-hydroxy-3-(5-methyl-3-isoxazolylcarbamoyl)-2-methyl-2H-1,2-benzothiazine 1,1-dioxide (I), starting with 3-amino-5-methylisoxazole (II), is disclosed; compound I exhibits anti-inflammatory properties and is useful for treating inflammation.
Abstract: A novel process for preparing 4-hydroxy-3-(5-methyl-3-isoxazolylcarbamoyl)-2-methyl-2H-1,2-benzothiazine 1,1-dioxide (I), starting with 3-amino-5-methylisoxazole (II), is disclosed. Compound I exhibits anti-inflammatory properties and is useful for treating inflammation. In the process of the invention, an intermediate, 2,3-dihydro-N-(5-methyl-3-isoxazolyl)-3-oxo-1,2-benzisothiazole-2-acetamide 1,1-dioxide (IV), undergoes rearrangement to provide 1-{[5-(4-hydroxy-2H-1,2-benzothiazin-3-yl)-1,2,4-oxadiazol-3-yl]methyl}ethanone S,S-dioxide (V), which is methylated according to conventional procedures. The methylated intermediate VI, upon further treatment, undergoes a second rearrangement to yield the desired anti-inflammatory compound I.