Author

M. Ibrahim Sezan

Other affiliations: Eastman Kodak Company
Bio: M. Ibrahim Sezan is an academic researcher from Sharp. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 26 and has co-authored 69 publications receiving 2,755 citations. Previous affiliations of M. Ibrahim Sezan include Eastman Kodak Company.


Papers
Patent
James H. Errico, M. Ibrahim Sezan, George Borden, Gary A. Feather, Mick G. Grover
13 Jun 2005
TL;DR: In this article, a collaborative information system is described in which a first display device provides a recommendation of programming content to a viewer of a second display device, where the recommendation is based on content characteristics of the recommended content and on assigning those content characteristics respectively different weights.
Abstract: A collaborative information system in which a first display device provides a recommendation of programming content to a viewer of a second display device, where the recommendation is based on content characteristics of the recommended content, and where the recommendation is based on assigning those content characteristics respectively different weights.
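As an illustration of the weighting idea, the sketch below scores candidate programs by summing viewer-specific weights over their content characteristics and recommends the highest-scoring one. The characteristic names, weights, and function names are hypothetical and are not taken from the patent.

```python
# Minimal sketch of weight-based content scoring (illustrative only; the
# patent does not specify an implementation). Names and weights are assumed.

def score_program(characteristics, weights):
    """Sum the viewer-profile weights of the characteristics a program exhibits."""
    return sum(weights.get(c, 0.0) for c in characteristics)

def recommend(programs, weights):
    """Return the program with the highest weighted-characteristic score."""
    return max(programs, key=lambda p: score_program(p["characteristics"], weights))

if __name__ == "__main__":
    weights = {"sports": 0.8, "news": 0.3, "drama": 0.5}   # assumed viewer profile
    programs = [
        {"title": "Evening News", "characteristics": ["news"]},
        {"title": "Championship Final", "characteristics": ["sports", "news"]},
    ]
    print(recommend(programs, weights)["title"])  # -> "Championship Final"
```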

246 citations

Journal ArticleDOI
TL;DR: A new automatic peak detection algorithm is developed and applied to histogram-based image data reduction (quantization) and the results of using the proposed algorithm for data reduction purposes are presented in the case of various images.
Abstract: A new automatic peak detection algorithm is developed and applied to histogram-based image data reduction (quantization). The algorithm uses a peak detection signal derived either from the image histogram or the cumulative distribution function to locate the peaks in the image histogram. Specifically, the gray levels at which the peaks start, end, and attain their maxima are estimated. To implement data reduction, gray-level thresholds are set between the peaks, and the gray levels at which the peaks attain their maxima are chosen as the quantization levels. The results of using the proposed algorithm for data reduction purposes are presented in the case of various images.
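The quantization step lends itself to a short sketch. The code below is a simplified stand-in, not the paper's method: peaks are approximated by local maxima of a smoothed histogram rather than by the paper's dedicated peak-detection signal, thresholds are placed midway between peak maxima, and the peak maxima serve as the quantization levels.

```python
import numpy as np

def histogram_quantize(image, min_peak_count=0):
    """Quantize a grayscale image to the gray levels at histogram peak maxima.
    Simplified stand-in: peaks = local maxima of a smoothed histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")   # crude smoothing

    # Local maxima of the smoothed histogram serve as peak locations.
    peaks = [g for g in range(1, 255)
             if smooth[g] > smooth[g - 1] and smooth[g] >= smooth[g + 1]
             and smooth[g] > min_peak_count]
    if not peaks:
        return image.copy()
    if len(peaks) == 1:
        return np.full(image.shape, peaks[0], dtype=np.uint8)

    # Thresholds between peaks; each pixel maps to the nearest peak maximum.
    thresholds = [(peaks[i] + peaks[i + 1]) / 2.0 for i in range(len(peaks) - 1)]
    indices = np.digitize(image, thresholds)
    return np.asarray(peaks, dtype=np.uint8)[indices]
```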

222 citations

Patent
17 Jan 1996
TL;DR: In this paper, a robust system, adaptive to motion estimation accuracy, for creating a high resolution image from a sequence of lower resolution motion images produces a mapping transformation for each low resolution image to map pixels in each low resolution image into locations in the high resolution image.
Abstract: A robust system, adaptive to motion estimation accuracy, for creating a high resolution image from a sequence of lower resolution motion images produces a mapping transformation for each low resolution image to map pixels in each low resolution image into locations in the high resolution image. A combined point spread function (PSF) is computed for each pixel in each lower resolution image employing the mapping transformations, provided that they describe accurate motion vectors. The high resolution image is generated from the lower resolution images employing the combined PSFs by projection onto convex sets (POCS), where sets and associated projections are defined only for those pixels whose motion vector estimates are accurate.
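A toy version of the POCS data-consistency projection can be sketched under much stronger assumptions than the patent's general mapping transformations and per-pixel combined PSFs: pure integer translations on the high-resolution grid, a uniform box PSF, and a fixed residual bound. The shift format, validity flag, and parameter names below are illustrative.

```python
import numpy as np

def pocs_superres(lr_frames, shifts, scale, n_iter=20, delta=1.0):
    """Toy POCS reconstruction: lr_frames is a list of low-resolution frames,
    shifts a list of (dy, dx, valid) integer offsets on the high-resolution
    grid, scale the integer resolution factor, delta the residual bound
    defining each convex set."""
    H, W = lr_frames[0].shape[0] * scale, lr_frames[0].shape[1] * scale
    hr = np.repeat(np.repeat(lr_frames[0], scale, 0), scale, 1).astype(float)

    for _ in range(n_iter):
        for frame, (dy, dx, valid) in zip(lr_frames, shifts):
            if not valid:                      # skip unreliable motion estimates
                continue
            for i in range(frame.shape[0]):
                for j in range(frame.shape[1]):
                    y, x = i * scale + dy, j * scale + dx
                    if y < 0 or x < 0 or y + scale > H or x + scale > W:
                        continue
                    block = hr[y:y + scale, x:x + scale]   # HR pixels seen by this LR sample
                    r = frame[i, j] - block.mean()         # residual under a box PSF
                    if abs(r) > delta:                     # project onto {|residual| <= delta}
                        block += r - np.sign(r) * delta
    return hr
```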

174 citations

Journal ArticleDOI
TL;DR: A tutorial review of recent developments in restoring images that are degraded by both blur and noise and considers three fundamental aspects of digital image restoration: modeling, identification algorithms, and restoration algorithms.
Abstract: We present a tutorial review of recent developments in restoring images that are degraded by both blur and noise. We consider three fundamental aspects of digital image restoration: modeling, identification algorithms, and restoration algorithms. An overview of modeling the degradations, and certain properties of images are given first. We then survey the methods that identify these models. Image restoration algorithms are surveyed in two categories: general algorithms and specialized algorithms. We briefly discuss present and future research topics in the field. Our emphasis here is on fundamental concepts and ideas rather than mathematical details.
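The degradation model underlying such surveys is typically g = h * f + n (blur plus additive noise). As one concrete example of a general restoration algorithm, and not as a summary of the review, the sketch below applies a Wiener-type regularized inverse filter with a constant noise-to-signal ratio k standing in for the identification step the review discusses.

```python
import numpy as np

def wiener_restore(g, h, k=0.01):
    """Wiener-type restoration of g = h * f + n in the frequency domain,
    with an assumed constant noise-to-signal ratio k."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)             # zero-pad the PSF to image size
    F = np.conj(H) / (np.abs(H) ** 2 + k)     # regularized inverse filter
    return np.real(np.fft.ifft2(F * G))
```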

169 citations

Patent
M. Ibrahim Sezan, James H. Errico, George Borden, Gary A. Feather, Mick G. Grover
13 Jun 2005

162 citations


Cited by
Journal ArticleDOI
TL;DR: 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images, and the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications are identified.
Abstract: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out their performance comparison. The thresholding methods are categorized according to the information they are exploiting, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316)
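As one representative of the clustering-based family covered by the survey, the sketch below implements Otsu's criterion, choosing the gray-level threshold that maximizes the between-class variance; it is illustrative only and is not the survey's evaluation code.

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0         # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```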

4,543 citations

Journal ArticleDOI
TL;DR: A very broad and flexible framework is investigated which allows a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence in convex feasibility problems.
Abstract: Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given.
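The simplest instance of such algorithms is the cyclic projection scheme in R^n. The sketch below alternates projections onto a hyperplane and a box; the specific sets, names, and iteration count are illustrative, while the framework in the paper covers far broader set classes and general Hilbert spaces.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {a . x = b}."""
    return x - (a @ x - b) / (a @ a) * a

def cyclic_projections(x0, a, b, lo, hi, n_iter=100):
    """Cyclic (POCS-style) projections onto the two sets; converges to a
    point of their intersection when it is nonempty."""
    x = x0.astype(float)
    for _ in range(n_iter):
        x = project_hyperplane(x, a, b)
        x = project_box(x, lo, hi)
    return x

# Example: find a point with coordinates in [0, 1] whose entries sum to 1.5.
x = cyclic_projections(np.zeros(3), np.ones(3), 1.5, 0.0, 1.0)
```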

1,742 citations

Book
16 Nov 2012
TL;DR: The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the readers who may already be well-versed in image restoration.
Abstract: The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing approaches to estimation theory, filtering, and numerical analysis. In particular, the emphasis is placed primarily on digital image restoration algorithms that grow out of an area known as "regularized least squares" methods. It should be noted, however, that digital image restoration is a very broad field, as we discuss, and thus contains many other successful approaches that have been developed from different perspectives, such as optics, astronomy, and medical imaging, just to name a few. In the process of reviewing this topic, we address a number of very important issues in this field that are not typically discussed in the technical literature.
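As a minimal example of the "regularized least squares" family the article emphasizes, the sketch below runs gradient descent on a Tikhonov-type objective with a Laplacian smoothness term; the blur kernel, step size, and regularization weight are assumed inputs, not values from the article.

```python
import numpy as np
from scipy.signal import convolve2d

def rls_restore(g, h, lam=0.01, step=0.1, n_iter=50):
    """Gradient descent on ||h * f - g||^2 + lam * ||lap * f||^2, a basic
    regularized least-squares restoration with a Laplacian regularizer.
    Step size may need tuning for a given blur kernel."""
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    h_adj = h[::-1, ::-1]                          # adjoint (flipped) blur kernel
    f = g.astype(float).copy()
    for _ in range(n_iter):
        resid = convolve2d(f, h, mode="same", boundary="symm") - g
        grad = convolve2d(resid, h_adj, mode="same", boundary="symm")
        grad += lam * convolve2d(
            convolve2d(f, lap, mode="same", boundary="symm"),
            lap, mode="same", boundary="symm")
        f -= step * grad
    return f
```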

1,588 citations

Book
25 Nov 1996
TL;DR: Algorithms for Image Processing and Computer Vision, 2nd Edition provides the tools to speed development of image processing applications.
Abstract: A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It is an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists who require highly specialized image processing. Algorithms now exist for a wide variety of sophisticated image processing applications required by software engineers and developers, advanced programmers, graphics programmers, scientists, and related specialists. This bestselling book has been completely updated to include the latest algorithms, including 2D vision methods in content-based searches, details on modern classifier methods, and graphics cards used as image processing computational aids. It saves hours of mathematical calculating by using distributed processing and GPU programming, and gives non-mathematicians the shortcuts needed to program relatively sophisticated applications. Algorithms for Image Processing and Computer Vision, 2nd Edition provides the tools to speed development of image processing applications.

1,517 citations

Journal ArticleDOI
TL;DR: A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and the POCS approaches.
Abstract: The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
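A stripped-down version of the ML (least-squares) data term can be sketched under much stronger assumptions than the paper's general model: pure integer translations on the high-resolution grid, a uniform box blur, and decimation by an integer factor. All names and parameters below are illustrative.

```python
import numpy as np

def ml_superres(lr_frames, shifts, scale, n_iter=50, step=0.1):
    """Gradient descent on the least-squares data term
    sum_k ||D B W_k f - g_k||^2 with integer translations W_k, a uniform
    scale x scale box blur B, and decimation D by 'scale'. Illustrative only;
    the step size may need tuning."""
    f = np.repeat(np.repeat(lr_frames[0], scale, 0), scale, 1).astype(float)

    for _ in range(n_iter):
        grad = np.zeros_like(f)
        for g, (dy, dx) in zip(lr_frames, shifts):
            shifted = np.roll(f, (-dy, -dx), axis=(0, 1))            # W_k f
            blocks = shifted.reshape(g.shape[0], scale, g.shape[1], scale)
            sim = blocks.mean(axis=(1, 3))                           # D B W_k f
            err = sim - g
            up = np.repeat(np.repeat(err, scale, 0), scale, 1) / scale ** 2
            grad += np.roll(up, (dy, dx), axis=(0, 1))               # adjoint of D B W_k
        f -= step * grad
    return f
```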

1,174 citations