Author

Its'hak Dinstein

Other affiliations: IBM, University of Kansas
Bio: Its'hak Dinstein is an academic researcher from Ben-Gurion University of the Negev. The author has contributed to research in topics including image processing and image restoration, has an h-index of 27, and has co-authored 105 publications receiving 20,455 citations. Previous affiliations of Its'hak Dinstein include IBM and the University of Kansas.


Papers
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
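
As a concrete illustration of the gray-tone spatial-dependence idea, here is a minimal NumPy sketch that builds a symmetric, normalized co-occurrence matrix for one displacement and evaluates one of the paper's features (contrast). The helper names and the small test image are illustrative, not the authors' code.

```python
import numpy as np

def cooccurrence_matrix(img, levels, dx=1, dy=0):
    """Count how often gray tone i occurs at offset (dy, dx) from gray tone j."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            glcm[img[r, c], img[r + dy, c + dx]] += 1
    glcm += glcm.T              # count both directions, giving a symmetric matrix
    return glcm / glcm.sum()    # normalize counts to co-occurrence probabilities

def contrast(p):
    """One of the textural features: sum over i, j of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = cooccurrence_matrix(img, levels=4)   # horizontal nearest-neighbor offset
print(contrast(p))
```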

20,442 citations

Journal ArticleDOI
TL;DR: Two new maximum likelihood motion estimation schemes for ultrasound images are presented, based on the assumption that both images are contaminated by a Rayleigh distributed multiplicative noise, which enables motion estimation in cases where a noiseless reference image is not available.
Abstract: When performing block-matching based motion estimation with the ML estimator, one tries to match blocks from the two images within a predefined search area. The estimated motion vector is that which maximizes a likelihood function, formulated according to the image formation model. Two new maximum likelihood motion estimation schemes for ultrasound images are presented. The new likelihood functions are based on the assumption that both images are contaminated by a Rayleigh distributed multiplicative noise. The new approach enables motion estimation in cases where a noiseless reference image is not available. Experimental results show a motion estimation improvement relative to other known ML estimation methods.
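
A minimal sketch of the block-matching search the abstract describes, assuming an exhaustive search over a square window. The default criterion below is a negative-SSD placeholder, not the Rayleigh-based likelihood the paper derives; that likelihood would be supplied via the `likelihood` argument.

```python
import numpy as np

def match_block(ref_block, target, top, left, search=7, likelihood=None):
    """Exhaustive block matching: return the displacement (dy, dx) that
    maximizes a likelihood over a (2*search + 1)^2 search area."""
    if likelihood is None:
        # Placeholder criterion (negative SSD). The paper instead derives a
        # likelihood from a Rayleigh multiplicative-noise model of both images.
        likelihood = lambda a, b: -np.sum((a - b) ** 2)
    h, w = ref_block.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + h > target.shape[0] or c + w > target.shape[1]:
                continue  # candidate block falls outside the target image
            score = likelihood(ref_block, target[r:r + h, c:c + w])
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```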

144 citations

Journal ArticleDOI
TL;DR: The proposed approach treats the separation problem as an identification problem and, in this way, manages to segregate overlapping chromosomes in a metaphase image; it is fast and does not depend on the existence of a separating path.
Abstract: A common task in cytogenetic tests is the classification of human chromosomes. Successful separation between touching and overlapping chromosomes in a metaphase image is vital for correct classification. Current systems for automatic chromosome classification are mostly interactive and require human intervention for correct separation between touching and overlapping chromosomes. Since chromosomes are nonrigid objects, special separation methods are required to segregate them. Common methods for separating touching chromosomes tend to fail where ambiguity or incomplete information is involved, and so are unable to segregate overlapping chromosomes. The proposed approach treats the separation problem as an identification problem and, in this way, manages to segregate overlapping chromosomes. This approach encompasses low-level knowledge about the objects and uses only extracted information; it is therefore fast and does not depend on the existence of a separating path. The method described in this paper can be adopted for other applications where separation between touching and overlapping nonrigid objects is required.

121 citations

Journal ArticleDOI
TL;DR: Tested on five real-world databases, the MLP provides the highest classification accuracy at the cost of deforming the data structure, whereas the linear models preserve the structure but usually with inferior accuracy.
Abstract: The projection maps and derived classification accuracies of a neural network (NN) implementation of Sammon's mapping, an auto-associative NN (AANN) and a multilayer perceptron (MLP) feature extractor are compared with those of the conventional principal component analysis (PCA). Tested on five real-world databases, the MLP provides the highest classification accuracy at the cost of deforming the data structure, whereas the linear models preserve the structure but usually with inferior accuracy.
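
A hedged sketch of the kind of comparison described, using scikit-learn on a stock dataset rather than the paper's five databases: a linear PCA projection alongside the activations of a two-unit MLP bottleneck used as a supervised 2-D feature map. The bottleneck width and dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Linear, unsupervised 2-D projection (structure-preserving baseline).
pca_map = PCA(n_components=2).fit_transform(X)

# Supervised nonlinear extractor: a 2-unit bottleneck hidden layer whose
# activations serve as the learned 2-D projection map.
mlp = MLPClassifier(hidden_layer_sizes=(2,), activation="tanh",
                    max_iter=5000, random_state=0).fit(X, y)
mlp_map = np.tanh(X @ mlp.coefs_[0] + mlp.intercepts_[0])  # hidden activations

print(pca_map.shape, mlp_map.shape)  # both (150, 2); compare scatter plots
```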

85 citations

Journal ArticleDOI
01 Sep 1989
TL;DR: The possibility of integrating human visual intelligence into the process of encrypting sensitive information by presenting certain visual information to the recipient's eye is discussed, which adds a new dimension to the cryptocomplexity of such a process.
Abstract: The possibility of integrating human visual intelligence into the process of encrypting sensitive information by presenting certain visual information to the recipient's eye is discussed. This adds a new dimension to the cryptocomplexity of such a process. Two implementations based on this principle are described. The first shows how keys used for encryption can be randomly generated by the transmitter, without the necessity of exchanging them with the legitimate recipient. The keys are 'embedded' in a master key and are recovered from it by the intelligence of the legitimate recipient after he or she uses the master key. No human intelligence can be helpful to a user who does not possess the master key. The second implementation concerns the possibility of creating a secret connection between a numerical key and a specific image (e.g. a face). Such a scheme can be used, for example, in validating the identity of the users of credit cards.
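
A toy sketch of the underlying idea, not the authors' 1989 scheme: a random key image makes the transmitted image look like noise, while combining it with the master key restores a pattern the recipient can recognize by eye. All names and the XOR construction here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "secret" pattern the legitimate recipient should recover visually.
secret = np.zeros((8, 8), dtype=np.uint8)
secret[2:6, 2:6] = 1

# Random master key generated by the transmitter; never sent in the clear.
master_key = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)

# Without the master key the transmitted image is indistinguishable from noise...
cipher = secret ^ master_key
# ...but XOR-ing with the master key restores the visual pattern exactly.
assert np.array_equal(cipher ^ master_key, secret)
```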

84 citations


Cited by
Journal ArticleDOI
TL;DR: The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
Abstract: The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.

6,527 citations

Journal ArticleDOI
TL;DR: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, identify new trends, and discuss the important issues related to tracking, including the use of appropriate image features, selection of motion models, and detection of objects.
Abstract: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.

5,318 citations

Journal ArticleDOI
Robert M. Haralick
01 Jan 1979
TL;DR: This survey reviews the image processing literature on the various approaches and models investigators have used for texture, including statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models.
Abstract: In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.
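
As a sketch of one of the statistical approaches surveyed, the autocorrelation function, here is a plain NumPy implementation over a single offset; the function name and normalization are assumptions for illustration.

```python
import numpy as np

def texture_autocorrelation(img, dx, dy):
    """Normalized autocorrelation of an image at offset (dy, dx):
    coarse textures decay slowly with offset, fine textures decay quickly."""
    a = img.astype(np.float64) - img.mean()
    h, w = a.shape
    num = (a[:h - dy, :w - dx] * a[dy:, dx:]).sum()
    return num / (a * a).sum()   # equals 1.0 at offset (0, 0)
```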

5,112 citations

Journal ArticleDOI
TL;DR: This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.
Abstract: In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.
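
A minimal sketch of the first-order statistics mentioned here, computed over a region of interest with NumPy/SciPy; the particular feature set and bin count are illustrative choices, not a radiomics standard.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_features(roi, bins=32):
    """Illustrative first-order radiomic statistics for one region of interest."""
    v = roi.ravel().astype(np.float64)
    hist, _ = np.histogram(v, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins before taking logs
    return {
        "mean": v.mean(),
        "variance": v.var(),
        "skewness": skew(v),
        "kurtosis": kurtosis(v),
        "entropy": -(p * np.log2(p)).sum(),
    }
```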

4,773 citations

Journal ArticleDOI
TL;DR: The first free, open-source system designed for flexible, high-throughput cell image analysis, CellProfiler is described, which can address a variety of biological questions quantitatively.
Abstract: Biologists can now prepare and image thousands of samples per day using automation, enabling chemical screens and functional genomics (for example, using RNA interference). Here we describe the first free, open-source system designed for flexible, high-throughput cell image analysis, CellProfiler. CellProfiler can address a variety of biological questions quantitatively, including standard assays (for example, cell count, size, per-cell protein levels) and complex morphological assays (for example, cell/organelle shape or subcellular patterns of DNA or protein staining).
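
A minimal sketch of the simplest assay mentioned (cell count and per-cell size), assuming global thresholding and connected-component labeling with SciPy; this illustrates the idea only and is not CellProfiler's pipeline.

```python
import numpy as np
from scipy import ndimage

def count_cells(image, threshold):
    """Threshold a grayscale image and label connected foreground blobs."""
    mask = image > threshold
    labels, n_cells = ndimage.label(mask)
    # Per-cell size in pixels (label 0 is background, so skip it).
    sizes = np.bincount(labels.ravel())[1:]
    return n_cells, sizes
```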

4,578 citations