Showing papers on "Image segmentation published in 1993"


Book
01 Jan 1993
TL;DR: The digitized image and its properties are studied, including shape representation and description, linear discrete image transforms, and texture analysis.
Abstract: List of Algorithms. Preface. Possible Course Outlines. 1. Introduction. 2. The Image, Its Representations and Properties. 3. The Image, Its Mathematical and Physical Background. 4. Data Structures for Image Analysis. 5. Image Pre-Processing. 6. Segmentation I. 7. Segmentation II. 8. Shape Representation and Description. 9. Object Recognition. 10. Image Understanding. 11. 3D Geometry, Correspondence, 3D from Intensities. 12. Reconstruction from 3D. 13. Mathematical Morphology. 14. Image Data Compression. 15. Texture. 16. Motion Analysis. Index.

5,451 citations


Journal ArticleDOI
TL;DR: Both fuzzy and non-fuzzy techniques are covered, including color image segmentation and neural-network-based approaches, and the issue of quantitative evaluation of segmentation results is addressed.

3,527 citations


Journal ArticleDOI
Luc Vincent1
TL;DR: An algorithm that is based on the notion of regional maxima and makes use of breadth-first image scans implemented using a queue of pixels results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
Abstract: Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications is discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scans implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
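
Gray-scale reconstruction is available off the shelf: scikit-image's reconstruction function implements the same transformation. A minimal sketch of a classic application, h-dome extraction (the random image and the h value are illustrative assumptions):

```python
import numpy as np
from skimage.morphology import reconstruction

image = np.random.rand(128, 128)     # stand-in for a real gray-scale image
h = 0.3                              # dome height (assumed parameter)
seed = image - h                     # marker: the image lowered by h
# Reconstruction by dilation grows the marker under the mask image;
# subtracting the result leaves the regional bright structures taller than h.
domes = image - reconstruction(seed, image, method='dilation')
```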

2,064 citations


Journal ArticleDOI
TL;DR: A novel graph-theoretic approach to data clustering is presented and its application to image segmentation is demonstrated; it yields an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and can handle very large graphs with several hundred thousand vertices.
Abstract: A novel graph theoretic approach for data clustering is presented and its application to the image segmentation problem is demonstrated. The data to be clustered are represented by an undirected adjacency graph G with arc capacities assigned to reflect the similarity between the linked vertices. Clustering is achieved by removing arcs of G to form mutually exclusive subgraphs such that the largest inter-subgraph maximum flow is minimized. For graphs of moderate size (approximately 2000 vertices), the optimal solution is obtained through partitioning a flow and cut equivalent tree of G, which can be efficiently constructed using the Gomory-Hu algorithm (1961). However, for larger graphs this approach is impractical. New theorems for subgraph condensation are derived and are then used to develop a fast algorithm which hierarchically constructs and partitions a partially equivalent tree of much reduced size. This algorithm results in an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and is able to handle very large graphs with several hundred thousand vertices. The new clustering algorithm is applied to the image segmentation problem. The segmentation is achieved by effectively searching for closed contours of edge elements (equivalent to minimum cuts in G), which consist mostly of strong edges, while rejecting contours containing isolated strong edges. This method is able to accurately locate region boundaries and at the same time guarantees the formation of closed edge contours.
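
For the moderate-size case, the Gomory-Hu construction the paper relies on is available in networkx. A toy sketch (graph and capacities are made up, not the paper's data) that clusters by deleting the lightest tree edges:

```python
import networkx as nx

def gomory_hu_clusters(G, k, capacity="capacity"):
    """Split G into k clusters by deleting the k-1 lightest tree edges."""
    T = nx.gomory_hu_tree(G, capacity=capacity)
    edges = sorted(T.edges(data=True), key=lambda e: e[2]["weight"])
    T.remove_edges_from([(u, v) for u, v, _ in edges[: k - 1]])
    return list(nx.connected_components(T))

# Toy similarity graph: two tight cliques joined by one weak arc.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2)], capacity=5.0)
G.add_edges_from([(3, 4), (4, 5), (3, 5)], capacity=5.0)
G.add_edge(2, 3, capacity=0.5)                  # weak inter-cluster link
print(gomory_hu_clusters(G, k=2))               # [{0, 1, 2}, {3, 4, 5}]
```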

1,223 citations


Journal ArticleDOI
TL;DR: An elaborate combination of techniques enables vehicles to be tracked under complex illumination conditions and over long monocular image sequences; open problems as well as future work are outlined.
Abstract: Moving vehicles are detected and tracked automatically in monocular image sequences from road traffic scenes recorded by a stationary camera. In order to exploit the a priori knowledge about shape and motion of vehicles in traffic scenes, a parameterized vehicle model is used for an intraframe matching process and a recursive estimator based on a motion model is used for motion estimation. An interpretation cycle supports the intraframe matching process with a state MAP-update step. Initial model hypotheses are generated using an image segmentation component which clusters coherently moving image features into candidate representations of images of a moving vehicle. The inclusion of an illumination model allows taking shadow edges of the vehicle into account during the matching process. Only such an elaborate combination of various techniques has enabled us to track vehicles under complex illumination conditions and over long (over 400 frames) monocular image sequences. Results on various real-world road traffic scenes are presented and open problems as well as future work are outlined.

742 citations


Journal ArticleDOI
TL;DR: This method provides an unbiased estimate of a binarized version of the image in an information theoretic sense by minimizing the cross entropy between the image and its segmented version.
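
This is the minimum cross-entropy criterion usually credited to Li and Lee; scikit-image exposes it as threshold_li, so the segmentation can be sketched in a few lines (the random image is a stand-in):

```python
import numpy as np
from skimage.filters import threshold_li

image = np.random.rand(256, 256) ** 2   # stand-in gray-scale image
t = threshold_li(image)                 # threshold minimizing cross entropy
binary = image >= t                     # the segmented (binarized) version
```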

684 citations


Journal ArticleDOI
TL;DR: An improved terminating criterion for the optimization scheme that is based on topographic features of the graph of the intensity image is proposed, as well as a continuation method based on a discrete scale-space representation.
Abstract: The problems of segmenting a noisy intensity image and tracking a nonrigid object in the plane are discussed. In evaluating these problems, a technique based on an active contour model commonly called a snake is examined. The technique is applied to cell locomotion and tracking studies. The snake permits both the segmentation and tracking problems to be simultaneously solved in constrained cases. A detailed analysis of the snake model, emphasizing its limitations and shortcomings, is presented, and improvements to the original description of the model are proposed. Problems of convergence of the optimization scheme are considered. In particular, an improved terminating criterion for the optimization scheme that is based on topographic features of the graph of the intensity image is proposed. Hierarchical filtering methods, as well as a continuation method based on a discrete scale-space representation, are discussed. Results for both segmentation and tracking are presented. Possible failures of the method are discussed.
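
A minimal usage sketch of the basic snake step with scikit-image's active_contour, in the spirit of that library's documentation (sample image, initial circle, and weights are illustrative; the paper's improved termination criterion and scale-space continuation are not reproduced here):

```python
import numpy as np
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = data.camera()                             # sample image
s = np.linspace(0, 2 * np.pi, 400)
init = np.column_stack([100 + 100 * np.sin(s),    # initial contour (row, col)
                        220 + 100 * np.cos(s)])
snake = active_contour(gaussian(image, 3),        # smooth before optimizing
                       init, alpha=0.015, beta=10, gamma=0.001)
```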

644 citations


Proceedings ArticleDOI
15 Jun 1993
TL;DR: A set of techniques is devised for segmenting images into coherently moving regions using affine motion analysis and clustering techniques, making it possible to decompose an image into a set of layers along with information about occlusion and depth ordering.
Abstract: Standard approaches to motion analysis assume that the optic flow is smooth; such techniques have trouble dealing with occlusion boundaries. The image sequence can be decomposed into a set of overlapping layers, where each layer's motion is described by a smooth flow field. The discontinuities in the description are then attributed to object opacities rather than to the flow itself, mirroring the structure of the scene. A set of techniques is devised for segmenting images into coherently moving regions using affine motion analysis and clustering techniques. It is possible to decompose an image into a set of layers along with information about occlusion and depth ordering. The techniques are applied to a flower garden sequence. The scene can be analyzed into four layers, and the entire 30-frame sequence can be represented with a single image of each layer, along with associated motion parameters.
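
The core step is easy to sketch: fit an affine motion model to the flow inside each region, then cluster regions in the six-dimensional parameter space. A toy version with synthetic flow and fixed column blocks (all assumptions, not the authors' code):

```python
import numpy as np

def fit_affine(x, y, u, v):
    """Least-squares affine motion parameters for one region's flow."""
    A = np.column_stack([np.ones_like(x), x, y])
    pu = np.linalg.lstsq(A, u, rcond=None)[0]
    pv = np.linalg.lstsq(A, v, rcond=None)[0]
    return np.concatenate([pu, pv])               # six affine parameters

# Synthetic flow with two layers moving differently.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
u = np.where(xs < 32, 1.0 + 0.01 * xs, -2.0)
v = np.where(xs < 32, 0.5, 0.02 * ys)
params = [fit_affine(xs[:, j:j + 8].ravel(), ys[:, j:j + 8].ravel(),
                     u[:, j:j + 8].ravel(), v[:, j:j + 8].ravel())
          for j in range(0, 64, 8)]               # one fit per column block
# Clustering these 6-D vectors (e.g. k-means) groups blocks into layers.
```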

344 citations


Journal ArticleDOI
01 Jan 1993
TL;DR: A software procedure is presented for fully automated detection of brain contours from single-echo 3-D MRI data, developed initially for scans with coronal orientation, and the potential of the technique for generalization to other problems is discussed.
Abstract: A software procedure is presented for fully automated detection of brain contours from single-echo 3-D MRI data, developed initially for scans with coronal orientation. The procedure detects structures in a head data volume in a hierarchical fashion. Automatic detection starts with a histogram-based thresholding step, whenever necessary preceded by an image intensity correction procedure. This step is followed by a morphological procedure which refines the binary threshold mask images. Anatomical knowledge, essential for the discrimination between desired and undesired structures, is implemented in this step through a sequence of conventional and novel morphological operations, using 2-D and 3-D operations. A final step of the procedure performs overlap tests on candidate brain regions of interest in neighboring slice images to propagate coherent 2-D brain masks through the third dimension. Results are presented for test runs of the procedure on 23 coronal whole-brain data sets, and one sagittal whole-brain data set. Finally, the potential of the technique for generalization to other problems is discussed, as well as limitations of the technique.
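
A highly simplified sketch of the pipeline's flavor with scipy.ndimage (threshold, opening iterations, and the 50% overlap rule are assumed parameters): per-slice thresholding and morphological clean-up, then propagation of coherent 2-D masks through the slice direction via overlap tests.

```python
import numpy as np
from scipy import ndimage

def slice_mask(img2d, thresh):
    """Threshold one slice, clean it up, keep the largest component."""
    binary = ndimage.binary_opening(img2d > thresh, iterations=2)
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

def propagate(volume, thresh, seed_slice):
    """Accept a slice's mask only if it overlaps the neighboring kept mask."""
    masks = [slice_mask(s, thresh) for s in volume]
    keep = [np.zeros_like(m) for m in masks]
    keep[seed_slice] = masks[seed_slice]
    order = list(range(seed_slice + 1, len(masks))) + \
            list(range(seed_slice - 1, -1, -1))
    for i in order:
        ref = keep[i - 1] if i > seed_slice else keep[i + 1]
        if ref.sum() and (masks[i] & ref).sum() > 0.5 * ref.sum():
            keep[i] = masks[i]
    return np.stack(keep)
```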

300 citations


Journal ArticleDOI
TL;DR: A theoretical analysis of the influence of noise on the location and orientation of an edge is presented; it reveals that the accuracy of the proposed approach is virtually unaffected by additive noise.

246 citations


Journal ArticleDOI
TL;DR: The proposed approach uses a two-stage algorithm for spot detection and shape extraction that opens up the possibility of a reproducible segmentation of microcalcifications, which is a necessary precondition for an efficient screening program.
Abstract: A systematic method for the detection and segmentation of microcalcifications in mammograms is presented. It is important to preserve the size and shape of the individual calcifications as exactly as possible. A reliable diagnosis requires both the false positive rate and the false negative rate to be extremely low. The proposed approach uses a two-stage algorithm for spot detection and shape extraction. The first stage applies a weighted difference of Gaussians filter for the noise-invariant and size-specific detection of spots. A morphological filter reproduces the shape of the spots. The results of both filters are combined with a conditional thickening operation. The topology and the number of the spots are determined with the first filter, and the shape by means of the second. The algorithm is tested with a series of real mammograms, using identical parameter values for all images. The results are compared with the judgement of radiological experts, and they are very encouraging. The described approach opens up the possibility of a reproducible segmentation of microcalcifications, which is a necessary precondition for an efficient screening program.
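
A rough sketch of the two filters and their combination (filter sizes, weight, and thresholds are assumed; the conditional thickening is approximated here by morphological reconstruction, which likewise keeps the topology of the detections and the shape of the second mask):

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import reconstruction

def detect_spots(image, s1=1.0, s2=3.0, w=1.5, t_spot=0.02, t_shape=0.5):
    # Stage 1: weighted difference of Gaussians tuned to the spot size;
    # thresholding it fixes the topology and number of the spots.
    dog = ndimage.gaussian_filter(image, s1) - w * ndimage.gaussian_filter(image, s2)
    markers = dog > t_spot
    # Stage 2: a second binarization that preserves the spot shapes.
    shape = image > t_shape
    # Combine: grow the markers inside the shape mask (conditional dilation).
    seed = (markers & shape).astype(float)
    return reconstruction(seed, shape.astype(float), method='dilation') > 0
```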

Journal ArticleDOI
TL;DR: This work presents an investigation of the potential of artificial neural networks for classification of registered magnetic resonance and X-ray computer tomography images of the human brain, and uses them to develop an adaptive learning scheme able to overcome interslice intensity variations typical of MR images.
Abstract: This work presents an investigation of the potential of artificial neural networks for classification of registered magnetic resonance and X-ray computer tomography images of the human brain. First, topological and learning parameters are established experimentally. Second, the learning and generalization properties of the neural networks are compared to those of a classical maximum likelihood classifier and the superiority of the neural network approach is demonstrated when small training sets are utilized. Third, the generalization properties of the neural networks are utilized to develop an adaptive learning scheme able to overcome interslice intensity variations typical of MR images. This approach permits the segmentation of image volumes based on training sets selected on a single slice. Finally, the segmentation results obtained both with the artificial neural network and the maximum likelihood classifiers are compared to contours drawn manually.

Proceedings ArticleDOI
20 Aug 1993
TL;DR: A new method is presented for solving the problem of motion segmentation, identifying the objects within an image that move independently of the background, by utilizing the fact that two views of a static 3D point set are linked by a 3 × 3 Fundamental Matrix (F).
Abstract: We present a new method for solving the problem of motion segmentation, identifying the objects within an image moving independently of the background. We utilize the fact that two views of a static 3D point set are linked by a 3 × 3 Fundamental Matrix (F). The Fundamental Matrix contains all the information on structure and motion from a given set of point correspondences and is derived by a least squares method under the assumption that the majority of the image is undergoing a rigid motion. Least squares is the most commonly used method of parameter estimation in computer vision algorithms. However, the estimated parameters from a least squares fit can be corrupted beyond recognition in the presence of gross errors or outliers, which plague any data from real imagery. Features with a motion independent of the background are those statistically inconsistent with the calculated value of F. Well-founded methods for detecting these outlying points are described.
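
With modern tooling the same idea is a few lines: OpenCV's robust fundamental-matrix estimators return an inlier mask, so independently moving features are simply the flagged outliers. A sketch with stand-in correspondences (LMedS is used here as the robust estimator; the paper's own outlier tests differ):

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts1 = (rng.random((100, 2)) * 640).astype(np.float32)   # stand-in features
pts2 = pts1 + np.float32([2.0, 0.0])                     # dominant rigid motion
pts2[:10] += np.float32([0.0, 15.0])                     # ten independent movers

# Robust fit of F; the mask marks points consistent with the dominant motion.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
moving = pts1[mask.ravel() == 0]    # outliers: independently moving features
```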

Journal ArticleDOI
TL;DR: A unified methodology that can be used to characterize both properties and spatial relationships of object regions in a digital image is presented and can be used to arrive at more meaningful decisions about the contents of the scene.
Abstract: Properties of objects and spatial relations between objects play an important role in rule-based approaches for high-level vision. The partial presence or absence of such properties and relationships can supply both positive and negative evidence for region labeling hypotheses. Similarly, fuzzy labeling of a region can generate new hypotheses pertaining to the properties of the region, its relation to the neighboring regions, and, finally, hypotheses pertaining to the labels of the neighboring regions. A unified methodology that can be used to characterize both properties and spatial relationships of object regions in a digital image is presented. The methods proposed for computing the properties and relations of image regions can be used to arrive at more meaningful decisions about the contents of the scene.

Proceedings ArticleDOI
01 Jan 1993
TL;DR: An innovative approach to shape modeling is presented which, while retaining important features of existing methods, overcomes most of their limitations; it can be applied to model arbitrarily complex shapes, shapes with protrusions, and situations where no a priori assumption about the object's topology can be made.
Abstract: Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This dissertation presents a new approach to shape modeling which retains the most attractive features of existing methods, and overcomes their prominent limitations. Our technique can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian to model propagating solid/liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, nonintersecting, hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a "Hamilton-Jacobi" type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of the object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. We also introduce a new algorithm for rapid advancement of the front using what we call a narrow-band updation scheme. This leads to significant improvement in the time complexity of the shape recovery procedure in 2D. An added advantage of our modeling scheme is that it can easily be extended to any number of space dimensions. The efficacy of the scheme is demonstrated with numerical experiments on low contrast medical images. We also demonstrate the recovery of 3D shapes.
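
A minimal numerical sketch of the level-set machinery described here (not the dissertation's code; grid size, time step, and the stopping function are all assumed): the front is the zero level set of phi, advected with an upwind scheme plus a curvature term, and slowed near edges by an image-based speed g.

```python
import numpy as np
from scipy import ndimage

def evolve(phi, g, steps=200, dt=0.2, eps=0.25):
    """phi: level-set function (front = zero level, negative inside);
    g: image-based stopping speed in (0, 1]."""
    for _ in range(steps):
        # One-sided differences for the upwind (hyperbolic) advection term.
        dxm = phi - np.roll(phi, 1, axis=1)
        dxp = np.roll(phi, -1, axis=1) - phi
        dym = phi - np.roll(phi, 1, axis=0)
        dyp = np.roll(phi, -1, axis=0) - phi
        grad_up = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                          np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        # Central differences for the curvature (parabolic) term.
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        kappa = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        # Expansion speed (1 - eps*kappa) modulated by the stopping function g;
        # this simple upwind form assumes the speed stays positive. A real
        # implementation updates only a narrow band and reinitializes phi.
        phi = phi - dt * g * (1.0 - eps * kappa) * grad_up
    return phi

# Stopping speed from an image I: small where the gradient (an edge) is large.
I = ndimage.gaussian_filter(np.random.rand(64, 64), 2)
gy, gx = np.gradient(I)
g = 1.0 / (1.0 + 100.0 * (gx ** 2 + gy ** 2))
phi0 = np.linalg.norm(np.indices((64, 64)) - 32, axis=0) - 5.0  # small circle
segmentation = evolve(phi0, g) < 0
```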

Journal ArticleDOI
TL;DR: An adaptive unsupervised method using more relevant contextual features is proposed; applied to two real SPOT images, it gives consistently better results than a classical histogram-based method.
Abstract: The work addresses Bayesian unsupervised satellite image segmentation using contextual methods. It is shown, via a simulation study, that the spatial or spectral context contribution is sensitive to image parameters such as homogeneity, means, variances, and spatial or spectral correlations of the noise. From this one may choose the best context contribution according to the estimated values of the above parameters. The parameter estimation is done by SEM, a mixture-density estimator which is a stochastic variant of the EM (expectation-maximization) algorithm. Another simulation study shows good robustness of the SEM algorithm with respect to different image parameters. Thus, modification of the behavior of the contextual methods, when the SEM-based unsupervised approaches are considered, is limited, and the conclusions of the supervised simulation study remain valid. An adaptive unsupervised method using more relevant contextual features is proposed. Different SEM-based unsupervised contextual segmentation methods, applied to two real SPOT images, give consistently better results than a classical histogram-based method.
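
The SEM estimator itself is compact: it is EM with an extra stochastic step that draws a hard class label for every pixel before re-estimating the parameters. A sketch for a two-class Gaussian mixture on intensities (toy data; the paper estimates richer contextual models):

```python
import numpy as np

def sem(x, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of class 1 for each sample.
        p = np.stack([w[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
                      for k in (0, 1)])
        post1 = p[1] / (p.sum(axis=0) + 1e-300)
        # S-step: draw a hard label per pixel (this is what makes it SEM).
        z = rng.random(x.size) < post1
        # M-step: re-estimate parameters from the sampled partition.
        for k, m in ((0, ~z), (1, z)):
            if m.sum() > 1:
                w[k], mu[k], sd[k] = m.mean(), x[m].mean(), x[m].std() + 1e-6
    return w, mu, sd

# Toy data: a mixture of two intensity populations.
x = np.concatenate([np.random.default_rng(1).normal(0.3, 0.05, 5000),
                    np.random.default_rng(2).normal(0.7, 0.08, 3000)])
print(sem(x))
```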

Journal ArticleDOI
TL;DR: It is shown that families of technical documents that share the same layout conventions can be readily analyzed; backtracking for error recovery and branch-and-bound for maximum-area labeling are implemented with Unix shell programs.
Abstract: A method for extracting alternating horizontal and vertical projection profiles from nested sub-blocks of scanned page images of technical documents is discussed. The thresholded profile strings are parsed using the compiler utilities Lex and Yacc. The significant document components are demarcated and identified by the recursive application of block grammars. Backtracking for error recovery and branch-and-bound for maximum-area labeling are implemented with Unix shell programs. Results of the segmentation and labeling process are stored in a labeled x-y tree. It is shown that families of technical documents that share the same layout conventions can be readily analyzed. Results from experiments in which more than 20 types of document entities were identified in sample pages from two journals are presented.
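
A toy sketch of the recursive profile cuts (the Lex/Yacc parsing of thresholded profile strings with block grammars is replaced here by a simple blank-gap rule with an assumed minimum gap):

```python
import numpy as np

def split_spans(binary, axis, min_gap=5):
    """Cut a binary block along `axis` at blank gaps of at least min_gap."""
    profile = binary.sum(axis=1 - axis)   # projection profile along `axis`
    ink = profile > 0
    spans, start, run = [], None, 0
    for i, on in enumerate(ink):
        if on:
            if start is None:
                start = i
            run = 0
        else:
            run += 1
            if start is not None and run >= min_gap:
                spans.append((start, i - run + 1))
                start = None
    if start is not None:
        spans.append((start, len(ink)))
    return spans

# Toy page: a title band over two columns.
page = np.zeros((60, 40), dtype=bool)
page[5:15, 5:35] = True
page[25:55, 5:18] = True
page[25:55, 23:35] = True
bands = split_spans(page, axis=0)                            # horizontal cuts
cols = [split_spans(page[a:b], axis=1) for a, b in bands]    # then vertical
```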

Journal ArticleDOI
TL;DR: The classification of three common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age; the mammograms were digitized with a video camera and a commercial frame grabber on a PC-based computer system.
Abstract: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age. The process involved the digitization of 69 mammographic images using a video camera and a commercial frame grabber on a PC-based computer system. An interactive segmentation procedure identified the tumor boundary using a thresholding technique which successfully segmented 57% of the lesions. Several features were chosen based on the gross and fine shape describing properties of the tumor boundaries as seen on the radiographs. Patient age was included as a significant feature in determining whether the tumor was a cyst, fibroadenoma, or cancer and was the only patient history information available for this study. The concept of a radial length measure provided a basis from which 6 of the 7 shape describing features were chosen, the seventh being tumor circularity. The feature selection process was accomplished using linear discriminant analysis, and a Euclidean distance metric determined group membership. The effectiveness of the classification scheme was tested using both the apparent and the leaving-one-out test methods. The best results using the apparent test method correctly classified 82% of the tumors segmented using the entire feature space, and the highest classification rate using the leaving-one-out test method was 69% using a subset of the feature space. Using only the shape descriptors and excluding patient age, 72% were correctly classified using the entire feature space (except age), and 51% using a subset of the feature space.
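
A small sketch of the radial length measure and a few descriptors in its spirit (the paper's exact seven features are not reproduced; the names and the zero-crossing proxy below are illustrative):

```python
import numpy as np

def radial_features(boundary, area):
    """boundary: Nx2 (row, col) points along the tumor outline; area: pixel count."""
    center = boundary.mean(axis=0)
    d = np.linalg.norm(boundary - center, axis=1)
    d = d / d.max()                                  # normalized radial length
    closing = np.diff(boundary, axis=0, append=boundary[:1])
    perimeter = np.linalg.norm(closing, axis=1).sum()
    return {
        "radial_mean": d.mean(),                     # gross size/shape
        "radial_std": d.std(),                       # boundary roughness
        "zero_crossings": int(np.count_nonzero(np.diff(np.sign(d - d.mean())))),
        "circularity": 4 * np.pi * area / perimeter ** 2,
    }
```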

Journal ArticleDOI
TL;DR: An algorithm is presented that integrates multiple region segmentation maps and edge maps; it operates independently of image sources and specific region-segmentation or edge-detection techniques, and its results show a strong resemblance to human-generated segmentation.
Abstract: We present an algorithm that integrates multiple region segmentation maps and edge maps. It operates independently of image sources and specific region-segmentation or edge-detection techniques. User-specified weights and the arbitrary mixing of region/edge maps are allowed. The integration algorithm enables multiple edge detection/region segmentation modules to work in parallel as front ends. The solution procedure consists of three steps. A maximum likelihood estimator provides initial solutions to the positions of edge pixels from various inputs. An iterative procedure using only local information (without edge tracing) then minimizes the contour curvature. Finally, regions are merged to guarantee that each region is large and compact. The channel-resolution width controls the spatial scope of the initial estimation and contour smoothing to facilitate multiscale processing. Experimental results are demonstrated using data from different types of sensors and processing techniques. The results show an improvement over individual inputs and a strong resemblance to human-generated segmentation.

Journal ArticleDOI
TL;DR: Two methods for identifying and analyzing the multiresolution behavior of ridges and valleys in grey-scale images are described, one based on local differential geometry and the other on the global drainage patterns of rainfall on a terrain map.
Abstract: Two methods for identifying and analyzing the multiresolution behavior of ridges and valleys in grey-scale images are described. The first method uses the tools of differential geometry to focus on local image behavior. The resulting vertex curves mark the tops of ridges and bottoms of valleys in an image. The second method focuses on the global drainage patterns of rainfall on a terrain map. The resulting watershed boundaries also identify the tops of ridges and bottoms of valleys in an image. By following these two geometric representations through scale space, the authors build resolution hierarchies on ridges and valleys in the image that can be utilized for interactive image segmentation.
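
The watershed half of this is directly available in scikit-image; smoothing at increasing sigma gives a crude stand-in for the scale-space hierarchy the authors follow (the random image and the scales are assumptions):

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

image = np.random.rand(128, 128)                 # stand-in terrain/intensity map
for sigma in (1, 2, 4, 8):                       # coarser scales merge basins
    smooth = ndimage.gaussian_filter(image, sigma)
    minima = peak_local_max(-smooth)             # valley bottoms as markers
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    labels = watershed(smooth, markers)          # basins; boundaries trace ridges
```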

Journal ArticleDOI
TL;DR: A self-organizing multilayer neural network architecture suitable for image processing is proposed that does not require any supervised learning, and an application of the proposed network in object extraction from noisy scenes is demonstrated.
Abstract: The feedforward multilayer perceptron (MLP) with back-propagation of error is described. Since use of this network requires a set of labeled input-output pairs, it cannot be used for segmentation of images when only one image is available. (However, if the images to be processed are of a similar nature, one can use a set of known images for learning and then use the network for processing of other images.) A self-organizing multilayer neural network architecture suitable for image processing is proposed. The proposed architecture is also a feedforward one with back-propagation of errors, but unlike the MLP it does not require any supervised learning. Each neuron is connected to the corresponding neuron in the previous layer and the set of neighbors of that neuron. The output status of the neurons in the output layer is described as a fuzzy set. A fuzziness measure of this fuzzy set is used as a measure of error in the system (instability of the network). Learning rates for various measures of fuzziness have been theoretically and experimentally studied. An application of the proposed network in object extraction from noisy scenes is also demonstrated.
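
The self-supervision signal is easy to state in code: read the output layer as a fuzzy set and score it with a fuzziness measure. The linear index of fuzziness shown below is one standard choice (the paper studies several such measures):

```python
import numpy as np

def linear_index_of_fuzziness(mu):
    """mu: output-layer memberships in [0, 1]; 0 = crisp, 1 = maximally fuzzy."""
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

outputs = np.array([0.9, 0.1, 0.5, 0.8])      # stand-in output states
error = linear_index_of_fuzziness(outputs)    # large when outputs hover near 0.5
```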

Journal ArticleDOI
Michael T. Orchard1
TL;DR: The method improves motion compensation along boundaries of moving objects by segmenting the motion field of previous frames of the sequence, and using the segmentation to predict the location of motion-field discontinuities in the current frame.
Abstract: A technique is presented for improving motion field accuracy while transmitting the same amount of motion information as standard block-based methods (a small amount of additional side information is needed). The approach is developed within the framework of standard block-based methods, but the constraints of the block-based motion field model are relaxed. The method improves motion compensation along boundaries of moving objects by segmenting the motion field of previous frames of the sequence, and using the segmentation to predict the location of motion-field discontinuities in the current frame. Simulations show significant improvement in the accuracy of the motion-compensated frame compared with the standard block-based method, and a corresponding decrease in the bit rate required to achieve a fixed image quality.

Journal ArticleDOI
TL;DR: The methodology developed is able to classify pavement surface cracking by the type, severity, and extent of cracks detected in video images using an integration of artificial neural network models with conventional image-processing techniques.
Abstract: This paper presents a methodology for automating the processing of highway pavement video images using an integration of artificial neural network models with conventional image-processing techniques. The methodology developed is able to classify pavement surface cracking by the type, severity, and extent of cracks detected in video images. The approach is divided into five major steps: (1) image segmentation, which involves reduction of a raw gray-scale pavement image into a binary image, (2) feature extraction, (3) decomposition of the image into tiles and identification of tiles with cracking, (4) integration of the results from step (3) and classification of the type of cracking in each image, and (5) computation of the severities and extents of cracking detected in each image. In this methodology, artificial neural network models are used for automatic thresholding of the images in step (1) and in the classification steps (3) and (4). The results obtained in each step of the process are presented and discussed in this paper. The research results demonstrate the feasibility of this new approach for the detection, classification, and quantification of highway pavement surface cracking.

Proceedings ArticleDOI
20 Oct 1993
TL;DR: In this paper, a lexicon directed algorithm for recognition of unconstrained handwritten words (cursive, discrete, or mixed) such as those encountered in mail pieces is described.
Abstract: Discusses improvements made to a lexicon-directed algorithm for recognition of unconstrained handwritten words (cursive, discrete, or mixed) such as those encountered in mail pieces. The procedure consists of binarization, pre-segmentation, intermediate feature extraction, segmentation recognition, and post-processing. The segmentation recognition and the post-processing are repeated for all lexicon words, while the steps from binarization through intermediate feature extraction are applied once per input word. The results of a performance evaluation using a large handwritten address block database are described, and algorithm improvements aimed at higher recognition accuracy and speed are discussed. As a result, the performance for lexicons of size 10, 100, and 1000 is improved to 98.01%, 95.46%, and 91.49% respectively. The processing speed for each lexicon is improved to 2.0, 2.5, and 3.5 sec/word on a SUN SPARCstation 2.

Journal ArticleDOI
Minoru Maruyama1, Shigeru Abe1
TL;DR: A method for range sensing is described that projects a single pattern of multiple slits identified by randomly distributed dots; correspondences are propagated by exploiting adjacency relationships to obtain an entire range image.
Abstract: A method for range sensing that projects a single pattern of multiple slits is described. Random dots are used to identify each slit. The random dots are given as randomly distributed cuts on each slit. Thus, each slit is divided into many small line segments. Segment matching between the image and the pattern is performed to obtain 3-D data. Using adjacency relations among slit segments, the false matches are reduced, and segment pairs whose adjacent segments correspond with each other are extracted and considered to be correct matches. Then, from the resultant matches, the correspondence is propagated by exploiting the adjacency relationships to get an entire range image.

Proceedings ArticleDOI
01 Jan 1993
TL;DR: A model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge is described, based on the extension to 3D of Point Distribution Models (PDMs) and associated image search algorithms.
Abstract: The automatic segmentation and labelling of anatomical structures in 3D medical images is a challenging task of practical importance. We describe a model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge. Our method is based on the extension to 3D of Point Distribution Models (PDMs) and associated image search algorithms. A combination of global, Genetic Algorithm (GA), and local, Active Shape Model (ASM), search is used. We have built a 3D PDM of the human brain describing a number of major structures. Using this model we have obtained automatic interpretations for 30 3D Magnetic Resonance head images from different individuals. The results have been evaluated quantitatively and support our claim of robust and accurate interpretation.
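
The PDM at the heart of this is a PCA over aligned landmark vectors; a minimal sketch with random stand-in shapes (a real model would use the authors' annotated brain landmarks):

```python
import numpy as np

def build_pdm(shapes, n_modes=5):
    """shapes: (n_examples, 3 * n_landmarks) aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    cov = X.T @ X / (len(shapes) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_modes]
    return mean, vecs[:, order], vals[order]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(30, 3 * 50))        # stand-in: 30 aligned 50-point shapes
mean, P, lam = build_pdm(shapes)
b = rng.normal(size=5) * 2 * np.sqrt(lam)     # mode weights within +/- 2 sd
new_shape = mean + P @ b                      # a plausible shape instance
```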

Proceedings ArticleDOI
15 Jun 1993
TL;DR: An algorithm for separating the specular and diffuse components of reflection from images is presented and can handle highlights that have a varying diffuse component, as well as highlights that include regions with different reflectance and material properties.
Abstract: An algorithm for separating the specular and diffuse components of reflection from images is presented. The method uses color and polarization simultaneously to obtain strong constraints on the reflection components at each image point. Polarization is used to locally determine the color of the specular component, constraining the diffuse color at a pixel to a one-dimensional linear subspace. This subspace is used to find neighboring pixels whose color is consistent with the pixel. Diffuse color information from consistent neighbors is used to determine the diffuse color of the pixel. In contrast to previous separation algorithms, the proposed method can handle highlights that have a varying diffuse component, as well as highlights that include regions with different reflectance and material properties. Experimental results obtained by applying the algorithm to complex scenes with textured objects and strong interreflections are presented.

Patent
12 Jan 1993
TL;DR: In this paper, a binary image function composed of the image's local area maxima and minima is made available for autocorrelation, and an output signal indicative of the presence or absence of peaks at the predetermined halftone image frequencies is provided.
Abstract: Method and apparatus for processing image pixels to determine the presence of high-frequency halftone images. Prior to autocorrelation, each pixel in the image is examined to determine whether it is a local area maximum or minimum. A binary image function composed of the image's local area maxima and minima is made available for autocorrelation. The presence of peaks at shifts indicative of predetermined halftone image frequencies is detected, and an output signal indicative of the presence or absence of peaks at the predetermined halftone image frequencies is provided. The arrangement is combined with a run-length encoder to reduce false microdetection results.

Proceedings ArticleDOI
11 May 1993
TL;DR: A physically based deformable model which can be used to track and analyze non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images and provides a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters.
Abstract: The authors present a physically based deformable model which can be used to track and analyze non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural length of the springs is set equal to zero and is replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, irrespective of the amplitude of the deformation. It provides a reduced algorithmic complexity, and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.

Journal ArticleDOI
TL;DR: A new algorithm is described for segmenting continuous handwritten signatures sampled by a digitizer, using a two-step procedure that weights the perceptual importance of every signature point according to its specific neighboring points.
Abstract: A new algorithm for segmenting continuous handwritten signatures sampled by a digitizer is described. The segmentation points are found using a two-step procedure. The principal step is to construct a function that weights the perceptual importance of every signature point according to its specific neighboring points. The second step points out the various local maxima of this function that correspond to where the signature should be segmented.
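
A sketch of the two steps with an assumed importance function (low pen speed and sharp turning are common proxies for perceptual importance; the paper's exact weighting differs): build the function, smooth it over a neighborhood, and take its local maxima.

```python
import numpy as np
from scipy.signal import find_peaks

def segmentation_points(pts, window=5):
    """pts: Nx2 array of (x, y) digitizer samples along the signature."""
    v = np.gradient(pts, axis=0)
    speed = np.linalg.norm(v, axis=1) + 1e-9
    angle = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))
    turning = np.abs(np.gradient(angle))
    importance = turning / speed                  # slow, sharp turns score high
    # Step 1: smooth so each point's score reflects its neighbors too.
    importance = np.convolve(importance, np.ones(window) / window, mode="same")
    # Step 2: local maxima of the importance function are the cut points.
    peaks, _ = find_peaks(importance)
    return peaks
```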