Author

Ronald W. Schafer

Bio: Ronald W. Schafer is an academic researcher from the Georgia Institute of Technology. The author has contributed to research in topics including image processing and image restoration, has an h-index of 22, and has co-authored 63 publications receiving 3,400 citations.


Papers
Journal ArticleDOI
01 Jul 1998-Neuron
TL;DR: In the nematode Caenorhabditis elegans, serotonin is reported to control a switch between two distinct on/off states of egg-laying behavior; genetic experiments suggest that these behavioral states may be determined through protein kinase C-dependent (PKC-dependent) modulation of voltage-gated calcium channels.

253 citations

Journal ArticleDOI
TL;DR: The focus of the research is on the combination, by means of the sup- and inf-operations, of alternating filters by reconstruction when their component filters belong to a granulometry and an antigranulometry (by reconstruction).
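As a rough flavor of this construction, the sketch below combines alternating filters at several scales pointwise with sup (max) and inf (min). It is a minimal illustration assuming plain grey-scale openings/closings from scipy in place of the paper's operators by reconstruction; the scale ladder is likewise illustrative.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def alternating_filters(img, size):
    """Open-close and close-open alternating filters at one scale.
    Plain structural openings/closings stand in here for the
    paper's openings/closings by reconstruction."""
    oc = grey_closing(grey_opening(img, size=size), size=size)
    co = grey_opening(grey_closing(img, size=size), size=size)
    return oc, co

def sup_inf_combination(img, sizes=(3, 5, 7)):
    """Combine alternating filters over a family of scales (a
    granulometry-like ladder) by the sup- and inf-operations."""
    ocs, cos = zip(*(alternating_filters(img, s) for s in sizes))
    return np.maximum.reduce(ocs), np.minimum.reduce(cos)
```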

166 citations

Proceedings ArticleDOI
16 Sep 1994
TL;DR: In this article, a decision rule based on the second-order local statistics of the signal (within a window) is used to switch between the identity filter and a median filter, and the results on a test image show an improvement of around 4 dB over the median filter alone, and 2 dB over other techniques.
Abstract: Noise removal is important in many applications. When the noise has impulsive characteristics, linear techniques do not perform well, and the median filter or its derivatives are often used. Although median-based filters preserve edges reasonably well, they tend to remove some of the finer details in the image. Switching schemes, where the filter is switched between two or more filters, have been proposed, but they usually lack a decision rule efficient enough to yield good results on different regions of the image. In this paper we present a strategy to overcome this problem. A decision rule based on the second-order local statistics of the signal (within a window) is used to switch between the identity filter and a median filter. The results on a test image show an improvement of around 4 dB over the median filter alone, and 2 dB over other techniques.

Keywords: Median filter; Image enhancement; Noise removal; Impulsive noise.

1. INTRODUCTION

Noise reduction is often necessary as a pre-processing step in situations where a signal is contaminated by noise. In cases where the noise can be adequately modeled as additive Gaussian noise, linear filters are normally efficient for noise reduction. However, in many cases the noise is impulsive, and in this case linear techniques do not usually perform well. The median filter and its derivatives are often the filter of choice for these applications. The median filter is a non-linear filter, and it has the useful property of removing (reducing) impulsive noise without (severely) smoothing the edges of the signal. The main drawback of the median filter is that it also modifies the points not contaminated by noise, therefore removing the finer details in the signal. In the past 20 years, median filters have been generalized and modified in many ways. A good overview of past work on generalizations of median filters can be found in the paper by Gabbouj et al.1 Examples include rank-order filters, weighted median filters, stack filters, and linear combinations of nonlinear filters. A theory for optimal stack filters has been developed.2 More recently, filters where the rank selected is based on the pixel rank have also been proposed.
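A minimal sketch of the switching idea just described, assuming a simple variance-based decision rule; the window size and the outlier factor k are illustrative choices, not the paper's exact rule:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def switching_median(image, size=3, k=2.0):
    """Switch per pixel between the identity filter and a median filter.

    A pixel is replaced by its window median only when it deviates
    strongly from the local second-order statistics, which flags
    likely impulses. `k` scales the local standard deviation and is
    an illustrative tuning parameter.
    """
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img**2, size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 0.0))
    med = median_filter(img, size)
    # Decision rule: filter only where the pixel is an outlier
    # relative to its window's statistics; pass clean pixels through.
    impulses = np.abs(img - local_mean) > k * local_std
    return np.where(impulses, med, img)
```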

157 citations

Journal ArticleDOI
TL;DR: A visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for application of these reconstruction methods to large-scale scenes are described.
Abstract: In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as model infinitely large scenes. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for application of these reconstruction methods to large-scale scenes.
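As a rough illustration of the photo-consistency idea (a common variance-based baseline, not the paper's specific measure), a voxel is kept when the colors it projects to agree across the cameras that currently see it:

```python
import numpy as np

def photo_consistent(colors, tau=30.0):
    """Baseline photo-consistency test for one voxel.

    `colors` is an (N, 3) array of RGB samples gathered from the N
    cameras with an unoccluded view of the voxel. The voxel is judged
    consistent when the largest per-channel standard deviation stays
    below `tau` (an illustrative threshold, not a calibrated measure).
    """
    colors = np.asarray(colors, dtype=np.float64)
    if len(colors) < 2:          # too few views to test agreement
        return True
    return float(colors.std(axis=0).max()) < tau
```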

130 citations


Cited by
Journal ArticleDOI
24 Jan 2005
TL;DR: It is shown that such an approach can yield an implementation of the discrete Fourier transform that is competitive with hand-optimized libraries, and the software structure that makes the current FFTW3 version flexible and adaptive is described.
Abstract: FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.
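For reference, the transform FFTW computes is the standard DFT. A direct O(n^2) evaluation, checked against a library FFT, makes the definition concrete; this sketch is purely illustrative and says nothing about FFTW's adaptive internals:

```python
import numpy as np

def dft(x):
    """Direct O(n^2) evaluation of the DFT:
    X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

x = np.random.default_rng(0).standard_normal(128)
# An adaptive FFT returns the same values, just much faster.
assert np.allclose(dft(x), np.fft.fft(x))
```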

5,172 citations

Journal ArticleDOI
TL;DR: 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images, and the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications are identified.
Abstract: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out a performance comparison. The thresholding methods are categorized according to the information they exploit, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316)
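As one concrete instance of the clustering-based category surveyed here, Otsu's classic method can be sketched from its textbook formulation (a standard algorithm included for illustration, not code from the survey):

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of a background/foreground split."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()                # gray-level probabilities
    omega = np.cumsum(p)                 # P(class 0) up to level t
    mu = np.cumsum(p * np.arange(256))   # cumulative mean up to level t
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    # Ends with omega = 0 or 1 yield 0/0 = NaN; skip them.
    return int(np.nanargmax(sigma_b))
```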

4,543 citations

Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem, and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties, then describes the process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduces the evaluation methodology.
Abstract: This paper presents a quantitative comparison of several multi-view stereo reconstruction algorithms. Until now, the lack of suitable calibrated multi-view image datasets with known ground truth (3D shape models) has prevented such direct comparisons. In this paper, we first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties. We then describe our process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduce our evaluation methodology. Finally, we present the results of our quantitative comparison of state-of-the-art multi-view stereo reconstruction algorithms on six benchmark datasets. The datasets, evaluation details, and instructions for submitting new models are available online at http://vision.middlebury.edu/mview.
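The accuracy/completeness style of evaluation used by such benchmarks can be sketched as nearest-neighbor distances between sampled point clouds. The formulation below is generic, with hypothetical `recon` and `truth` arrays and an illustrative tolerance, rather than the benchmark's exact protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon, truth, tol=0.002):
    """Generic point-cloud metrics in the spirit of MVS benchmarks.

    accuracy:     distance below which 90% of reconstructed points
                  lie from the ground-truth surface samples.
    completeness: fraction of ground-truth points within `tol` of
                  the reconstruction. Both inputs are (N, 3) arrays.
    """
    d_recon = cKDTree(truth).query(recon)[0]   # recon -> truth distances
    d_truth = cKDTree(recon).query(truth)[0]   # truth -> recon distances
    accuracy = float(np.percentile(d_recon, 90))
    completeness = float((d_truth < tol).mean())
    return accuracy, completeness
```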

2,556 citations

Proceedings ArticleDOI
27 Aug 2013
TL;DR: The design and implementation of the first in-band full duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs are presented; the design achieves close to the theoretical doubling of throughput in all practical deployment scenarios.
Abstract: This paper presents the design and implementation of the first in-band full duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs and achieves close to the theoretical doubling of throughput in all practical deployment scenarios. Our design uses a single antenna for simultaneous TX/RX (i.e., the same resources as a standard half duplex system). We also propose novel analog and digital cancellation techniques that cancel the self interference to the receiver noise floor, and therefore ensure that there is no degradation to the received signal. We prototype our design by building our own analog circuit boards and integrating them with a fully WiFi-PHY compatible software radio implementation. We show experimentally that our design works robustly in noisy indoor environments, and provides close to the expected theoretical doubling of throughput in practice.
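The digital-cancellation stage of such a design can be caricatured as linear channel estimation and subtraction: the radio knows its own transmitted samples, so it can least-squares fit the self-interference channel and subtract the estimate. The sketch below runs on synthetic data and is illustrative only; it does not model the paper's analog cancellation or real radio hardware:

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps = 4096, 8

tx = rng.standard_normal(n)                     # known transmitted samples
h = rng.standard_normal(taps) * 0.5             # unknown self-interference channel
rx_of_interest = 0.01 * rng.standard_normal(n)  # weak remote signal plus noise
rx = np.convolve(tx, h)[:n] + rx_of_interest    # what the receiver observes

# Regression matrix of delayed TX samples; least-squares fit the
# self-interference channel from the known transmit waveform.
X = np.column_stack([np.concatenate([np.zeros(d), tx[:n - d]]) for d in range(taps)])
h_hat = np.linalg.lstsq(X, rx, rcond=None)[0]

residual = rx - X @ h_hat                       # digitally cancel the estimate
suppression_db = 10 * np.log10(np.mean((X @ h) ** 2) /
                               np.mean((X @ (h - h_hat)) ** 2))
print(f"self-interference suppressed by {suppression_db:.1f} dB")
```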

2,084 citations