Author

Subhasis Chaudhuri

Bio: Subhasis Chaudhuri is an academic researcher at the Indian Institute of Technology Bombay. His research focuses on topics including image restoration and haptic technology. He has an h-index of 44 and has co-authored 343 publications receiving 8,437 citations. Previous affiliations of Subhasis Chaudhuri include the Indian Institute of Technology Indore and the Indian Institutes of Technology.


Papers
Journal Article (DOI)
TL;DR: The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in retinal images, and the results are compared with those obtained by other methods.
Abstract: Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yields results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of the objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates, used to search for vessel segments along all possible directions, are constructed. Various issues related to the implementation of these matched filters are discussed, and the results are compared with those obtained by other methods.
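The matched-filter scheme described above lends itself to a compact sketch: build a kernel whose cross section is an inverted Gaussian, rotate it into twelve orientations covering 180 degrees, convolve each with the image, and keep the per-pixel maximum response. The kernel length, the value of sigma, and the libraries used below are illustrative assumptions, not the parameters reported in the paper.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def gaussian_matched_kernel(length=15, sigma=2.0):
    """Square kernel whose cross section is an inverted Gaussian (vessels are
    darker than the background); made zero-mean so flat regions give no response."""
    half = length // 2
    x = np.arange(-half, half + 1)
    profile = -np.exp(-(x ** 2) / (2.0 * sigma ** 2))  # 1-D Gaussian cross section
    kernel = np.tile(profile, (length, 1))             # constant along the vessel direction
    return kernel - kernel.mean()

def vessel_response(image, n_orientations=12):
    """Maximum matched-filter response over kernels rotated in 15-degree steps."""
    base = gaussian_matched_kernel()
    responses = []
    for k in range(n_orientations):
        kern = rotate(base, angle=k * 180.0 / n_orientations, reshape=False)
        responses.append(convolve(image.astype(float), kern))
    # thresholding this response map yields a crude vessel segmentation
    return np.max(responses, axis=0)
```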

1,692 citations

Journal Article (DOI)
TL;DR: The SR image reconstruction method estimates an HR image with finer spectral details from multiple LR observations degraded by blur, noise, and aliasing, and the major advantage of this approach is that it may cost less and the existing LR imaging systems can still be utilized.
Abstract: The spatial resolution, which represents the number of pixels per unit area in an image, is the principal factor in determining the quality of an image. With the development of image processing applications, there is a strong demand for high-resolution (HR) images, since HR images not only give the viewer a pleasing picture but also offer additional detail that is important for analysis in many applications. The current technology to obtain HR images mainly depends on sensor manufacturing technology that attempts to increase the number of pixels per unit area by reducing the pixel size. However, the cost of high-precision optics and sensors may be inappropriate for general-purpose commercial applications, and there is a limit to pixel size reduction due to shot noise encountered in the sensor itself. Therefore, a resolution enhancement approach using signal processing techniques has been of great interest in many areas; it is called super-resolution (SR) (or HR) image reconstruction, or simply resolution enhancement, in the literature. In this issue, we use the term "SR image reconstruction" to refer to a signal processing approach toward resolution enhancement, because the term "super" represents well the characteristic of the technique of overcoming the inherent resolution limitation of low-resolution (LR) imaging systems.

The term SR was originally used in optics, where it refers to algorithms that mainly operate on a single image to extrapolate the spectrum of an object beyond the diffraction limit (SR restoration). These two SR concepts (SR reconstruction and SR restoration) share a common focus on recovering high-frequency information that is lost or degraded during image acquisition, but the cause of the loss differs: SR restoration in optics attempts to recover information beyond the diffraction cutoff frequency, while SR reconstruction in engineering tries to recover high-frequency components corrupted by aliasing. We hope that readers do not confuse the super-resolution in this issue with the term as used in optics. SR image reconstruction algorithms exploit the relative motion information between multiple LR images (or a video sequence) and increase the spatial resolution by fusing them into a single frame; in doing so, they also remove the effects of possible blurring and noise in the LR images. In summary, the SR image reconstruction method estimates an HR image with finer spectral details from multiple LR observations degraded by blur, noise, and aliasing. The major advantage of this approach is that it may cost less, and existing LR imaging systems can still be utilized.

Considering the maturity of this field and its various prospective applications, it seems timely and appropriate to discuss and address the topic of SR in this special issue of the magazine, since ample material is now at our disposal. This special section contains five articles covering various aspects of SR techniques. The first article, "Super-Resolution Image Reconstruction: A Technical Overview" by Sungcheol Park, Minkyu Park, and Moon Gi Kang, provides an introduction to the concepts and definitions of SR image reconstruction as well as an overview of various existing SR algorithms; advanced issues currently under investigation in this area are also discussed. The second article, "High-Resolution Images from Low-Resolution Compressed Video" by Andrew C. Segall, Rafael Molina, and Aggelos K. Katsaggelos, considers SR techniques for compressed video. Since images are routinely compressed prior to transmission and storage in current acquisition systems, it is important to take the characteristics of compression systems into account when developing SR techniques; the authors survey models for the compression system and develop SR techniques within the Bayesian framework. The third article, by Deepu Rajan, Subhasis Chaudhuri, and Manjunath V. Joshi, is titled "Multi-Objective Super-Resolution Technique: Concept and Examples."
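As a concrete illustration of the fusion step described above, here is a minimal shift-and-add sketch: each low-resolution frame is mapped onto a finer grid using its known subpixel translation, and overlapping samples are averaged. Registration, hole filling, and deblurring are omitted, and the function name and parameters are assumptions for illustration, not an algorithm from any of the articles.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Fuse registered low-resolution frames onto a high-resolution grid.

    lr_frames : list of 2-D arrays of identical shape
    shifts    : per-frame (dy, dx) subpixel translations, in LR pixel units,
                relative to the first frame
    scale     : integer upsampling factor
    """
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # place every LR sample at its nearest HR grid position
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1.0)
    # average overlapping samples; unfilled HR pixels stay zero and would
    # normally be interpolated, and deblurring is omitted entirely
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```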

422 citations

Journal Article (DOI)
TL;DR: Retrieval using the MCM outperforms the CCM because the MCM captures the third-order image statistics in the local neighborhood, and its use considerably improves retrieval performance.
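For readers unfamiliar with co-occurrence-based retrieval, the sketch below computes a plain gray-level co-occurrence matrix as an image signature and ranks a database by L1 distance to the query. This is a generic analogue for illustration only, not the MCM construction used in the paper; the quantization level and the distance measure are assumptions.

```python
import numpy as np

def cooccurrence_features(img, levels=16):
    """Normalized gray-level co-occurrence matrix for horizontal neighbours,
    flattened for use as a retrieval feature vector."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels)
    q = q.clip(0, levels - 1).astype(int)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    mat = np.zeros((levels, levels))
    np.add.at(mat, (left, right), 1.0)
    return (mat / mat.sum()).ravel()

def retrieve(query, database_images, top_k=5):
    """Rank database images by L1 distance between co-occurrence features."""
    qf = cooccurrence_features(query)
    dists = [np.abs(qf - cooccurrence_features(im)).sum() for im in database_images]
    return np.argsort(dists)[:top_k]   # indices of the best matches
```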

293 citations

Book
01 Jan 2001
TL;DR: This edited volume surveys super-resolution imaging, with chapters on wavelet-based zooming, generalized interpolation, blur as a cue, image warping, multiple apertures, mutual motion, compressed video, and the limits of super-resolution.
Abstract (table of contents): Preface. Contributing Authors.
1. Introduction (S. Chaudhuri)
2. Image Zooming: Use of Wavelets (N. Kaulgud, U.D. Desai)
3. Generalized Interpolation for Super-Resolution (D. Rajan, S. Chaudhuri)
4. High Resolution Image from Low Resolution Images (B.C. Tom, et al.)
5. Super-Resolution Imaging Using Blur as a Cue (D. Rajan, S. Chaudhuri)
6. Super-Resolution via Image Warping (T.E. Boult, et al.)
7. Resolution Enhancement using Multiple Apertures (T. Komatsu, et al.)
8. Super-Resolution from Mutual Motion (A. Zomet, S. Peleg)
9. Super-Resolution from Compressed Video (C.A. Segall, et al.)
10. Super-Resolution: Limits and Beyond (S. Baker, T. Kanade)
Index.

292 citations

Book
26 Mar 1999
TL;DR: A monograph on recovering depth from defocused images, covering shift-variant blur models, ML estimation of depth, MAP-MRF identification of shift-variant PSFs, and simultaneous depth recovery and image restoration using MRF models, with an appendix on partial derivatives of various quantities in the Cramer-Rao bound (CRB).
Abstract (table of contents):
1 Passive Methods for Depth Recovery: 1.1 Introduction; 1.2 Different Methods of Depth Recovery; 1.2.1 Depth from Stereo; 1.2.2 Structure from Motion; 1.2.3 Shape from Shading; 1.2.4 Range from Focus; 1.2.5 Depth from Defocus; 1.3 Difficulties in Passive Ranging; 1.4 Organization of the Book.
2 Depth Recovery from Defocused Images: 2.1 Introduction; 2.2 Theory of Depth from Defocus; 2.2.1 Real Aperture Imaging; 2.2.2 Modeling the Camera Defocus; 2.2.3 Depth Recovery; 2.2.4 Sources of Errors; 2.3 Related Work; 2.4 Summary of the Book.
3 Mathematical Background: 3.1 Introduction; 3.2 Time-Frequency Representation; 3.2.1 The Complex Spectrogram; 3.2.2 The Wigner Distribution; 3.3 Calculus of Variations; 3.4 Markov Random Fields and Gibbs Distributions; 3.4.1 Theory of MRF; 3.4.2 Gibbs Distribution; 3.4.3 Incorporating Discontinuities.
4 Depth Recovery with a Block Shift-Variant Blur Model: 4.1 Introduction; 4.2 The Block Shift-Variant Blur Model; 4.2.1 Estimation of Blur; 4.2.2 Special Cases; 4.3 Experimental Results; 4.4 Discussion.
5 Space-Variant Filtering Models for Recovering Depth: 5.1 Introduction; 5.2 Space-Variant Filtering; 5.3 Depth Recovery Using the Complex Spectrogram; 5.4 The Pseudo-Wigner Distribution for Recovery of Depth; 5.5 Imposing Smoothness Constraint; 5.5.1 Regularized Solution Using the Complex Spectrogram; 5.5.2 The Pseudo-Wigner Distribution and Regularized Solution; 5.6 Experimental Results; 5.7 Discussion.
6 ML Estimation of Depth and Optimal Camera Settings: 6.1 Introduction; 6.2 Image and Observation Models; 6.3 ML-Based Recovery of Depth; 6.4 Computation of the Likelihood Function; 6.5 Optimality of Camera Settings; 6.5.1 The Cramer-Rao Bound; 6.5.2 Optimality Criterion; 6.6 Experimental Results; 6.7 Discussion.
7 Recursive Computation of Depth from Multiple Images: 7.1 Introduction; 7.2 Blur Identification from Multiple Images; 7.3 Minimization by Steepest Descent; 7.4 Recursive Algorithm for Computing the Likelihood Function; 7.4.1 Single Observation; 7.4.2 Two Observations; 7.4.3 General Case of M Observations; 7.5 Experimental Results; 7.6 Discussion.
8 MRF Model-Based Identification of Shift-Variant PSF: 8.1 Introduction; 8.2 A MAP-MRF Approach; 8.3 The Posterior Distribution and Its Neighborhood; 8.4 MAP Estimation by Simulated Annealing; 8.5 Experimental Results; 8.6 Discussion.
9 Simultaneous Depth Recovery and Image Restoration: 9.1 Introduction; 9.2 Depth Recovery and Restoration using MRF Models; 9.3 Locality of the Posterior Distribution; 9.4 Parameter Estimation; 9.5 Experimental Results; 9.6 Discussion.
10 Conclusions.
A Partial Derivatives of Various Quantities in CRB.
References.
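The depth-from-defocus chapters listed above build on the standard thin-lens blur model, in which the blur-circle radius grows with the object's deviation from the in-focus plane. The sketch below states that relation and inverts it for an object beyond the in-focus plane; the function names, the far-side assumption, and the numbers in the example are illustrative, not the book's specific estimators.

```python
import numpy as np

def blur_radius(u, f, v, R):
    """Thin-lens defocus model: blur-circle radius for an object at depth u,
    with focal length f, sensor distance v, and aperture radius R
    (all distances in the same units):  r = R * v * |1/f - 1/v - 1/u|."""
    return R * v * np.abs(1.0 / f - 1.0 / v - 1.0 / u)

def depth_from_blur(r, f, v, R):
    """Invert the model for an object beyond the in-focus plane
    (where 1/f - 1/v - 1/u >= 0); returns the depth u."""
    return 1.0 / (1.0 / f - 1.0 / v - r / (R * v))

# Example with illustrative numbers (metres): a 50 mm lens with the sensor at
# v = 0.0505 m (in-focus plane at about 5 m), aperture radius R = 0.01 m.
f, v, R = 0.050, 0.0505, 0.010
r = blur_radius(8.0, f, v, R)          # blur of an object 8 m away
print(depth_from_blur(r, f, v, R))     # recovers approximately 8.0
```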

253 citations


Cited by
Journal Article (DOI)
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
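The fourth category above (per-user customization) is the easiest to make concrete. The sketch below trains a tiny text classifier on messages a hypothetical user kept or rejected; the toy corpus, the label names, and the choice of scikit-learn's naive Bayes pipeline are all assumptions for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Messages a hypothetical user has kept or rejected (toy corpus for illustration).
kept = ["project meeting moved to 3pm", "invoice attached for review"]
rejected = ["win a free prize now", "cheap loans click here"]

# Bag-of-words + naive Bayes: the filter is learned from examples rather than
# hand-written rules, and can be refit as the user labels more mail.
mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(kept + rejected, ["keep"] * len(kept) + ["reject"] * len(rejected))

print(mail_filter.predict(["click here for a free prize"]))   # likely ['reject']
```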

13,246 citations

Journal Article (DOI)
TL;DR: A review of recent as well as classic image registration methods, providing a comprehensive reference source for researchers involved in image registration, regardless of the particular application area.

6,842 citations

Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Journal Article (DOI)
TL;DR: In this article, the authors categorize and evaluate face detection algorithms and discuss relevant issues such as data collection, evaluation metrics and benchmarking, and conclude with several promising directions for future research.
Abstract: Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its 3D position, orientation and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.

3,894 citations