# Showing papers in "Graphical Models and Image Processing" (1995)

••

TL;DR: In this article, an image fusion scheme based on the wavelet transform is presented, where wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transformation of the fused wavelet coefficients.

Abstract: The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and computer-processing tasks such as segmentation, feature extraction, and object recognition. This paper presents an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection. The proposed scheme performs better than the Laplacian pyramid-based methods due to the compactness, directional selectivity, and orthogonality of the wavelet transform. A performance measure using specially generated test images is suggested and is used in the evaluation of different fusion methods, and in comparing the merits of different wavelet transform kernels. Extensive experimental results including the fusion of multifocus images, Landsat and Spot images, Landsat and Seasat SAR images, IR and visible images, and MRI and PET images are presented in the paper.

1,532 citations
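The combine-and-invert pipeline described above can be sketched in a few lines. The following is a minimal illustration using a single-level 2D Haar transform and a per-coefficient maximum-magnitude rule; it is not the paper's area-based selection with consistency verification, and all function names are my own.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximation subbands,
    keep the larger-magnitude detail coefficient at each position,
    then invert the transform."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

Fusing two copies of the same image returns that image, since averaging and maximum selection are both identities there.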

••

TL;DR: In this paper, the theoretical discrete geometry framework for the voxelization of surfaces is presented, and the concepts of separating, coverage, and tunnel-freeness are introduced.

Abstract: This paper presents the theoretical discrete geometry framework for the voxelization of surfaces. A voxelized object is a 3D discrete representation of a continuous object on a regular grid of voxels. Many important topological properties of a discrete surface cannot be stated solely in terms of connectivity, and thus the concepts of separating, coverage, and tunnel-freeness are introduced. These concepts form the basis for proper voxelization of surfaces.

191 citations

••

TL;DR: A new method for physically based modeling and interactive-rate simulation of 3D fluids in computer graphics by solving the 2D Navier-Stokes equations using a computational fluid dynamics method.

Abstract: We present a new method for physically based modeling and interactive-rate simulation of 3D fluids in computer graphics. By solving the 2D Navier-Stokes equations using a computational fluid dynamics method, we map the surface into 3D using the corresponding pressures in the fluid flow field. The method achieves realistic interactive-rate fluid simulation by solving the physical governing laws of fluids but avoiding the extensive 3D fluid dynamics computation. Unlike previous computer graphics fluid models, our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can also simulate objects moving or floating in fluids. In addition, we can visualize the animation of the fluid flow field, the streakline of a flow field, and the blending of fluids of different colors. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously in computer graphics.

139 citations

••

TL;DR: The notion of a multiresolution support is introduced: a sequence of Boolean images flagging significant pixels at each of a number of resolution levels, which is then used for noise suppression in the context of image filtering or iterative image restoration.

Abstract: The notion of a multiresolution support is introduced. This is a sequence of Boolean images related to significant pixels at each of a number of resolution levels. The multiresolution support is then used for noise suppression, in the context of image filtering, or iterative image restoration. Algorithmic details, and a range of practical examples, illustrate this approach.

127 citations
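The construction can be sketched as follows. This is a simplified, a-trous-style analysis using box smoothing and a k-sigma significance test; the paper's actual wavelet and noise model may differ, and the function names are my own.

```python
import numpy as np

def smooth(img, radius):
    """Box smoothing by separable sums with reflective padding."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode="reflect")
    h = sum(p[:, i:i + img.shape[1]] for i in range(k)) / k   # horizontal mean
    return sum(h[i:i + img.shape[0], :] for i in range(k)) / k  # vertical mean

def multiresolution_support(img, n_scales=3, k=3.0):
    """Sequence of Boolean images flagging significant wavelet
    coefficients at each scale: the wavelet plane at scale j is the
    difference between successive smoothings, and a pixel is
    'significant' when it exceeds k standard deviations."""
    supports, current = [], img.astype(float)
    for j in range(n_scales):
        smoothed = smooth(current, 2 ** j)
        w = current - smoothed               # wavelet plane at scale j
        sigma = np.std(w)
        supports.append(np.abs(w) > k * sigma)  # significant pixels
        current = smoothed
    return supports
```

A constant image yields an all-False support at every scale, while an isolated impulse is flagged at the finest scale.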

••

TL;DR: A new approach for building a three-dimensional model from a set of range images is described; it can build models of free-form surfaces observed from arbitrary viewing directions, with no initial estimate of the relative viewing directions.

Abstract: In this paper, we describe a new approach for building a three-dimensional model from a set of range images. The approach is able to build models of free-form surfaces obtained from arbitrary viewing directions, with no initial estimate of the relative viewing directions. The approach is based on building discrete meshes representing the surfaces observed in each of the range images, mapping each of the meshes to a spherical image, and computing the transformations between the views by matching the spherical images. The meshes are built using a previously developed iterative fitting algorithm; the spherical images are built by mapping the nodes of the surface meshes to the nodes of a reference mesh on the unit sphere and by storing a measure of curvature at every node. We describe the algorithms used for building such models from range images and for matching them. We show results obtained using range images of complex objects.

111 citations

••

TL;DR: A quantitative assessment of the results on synthetic data shows that the global method performs better than the local method, and a qualitative assessment of its application to a variety of real images shows that it reliably produces good results.

Abstract: In this paper we consider two methods for automatically determining values for thresholding edge maps. In contrast to most other related work they are based on the figural rather than statistical properties of the edges. The first approach applies a local edge evaluation measure based on edge continuity and edge thinness to determine the threshold on edge magnitude. The second approach is more global and considers complete connected edge curves. The curves are mapped onto an edge curve length/average magnitude feature space, and a robust technique is developed to partition this feature space into true and false edge regions. A quantitative assessment of the results on synthetic data shows that the global method performs better than the local method. Furthermore, a qualitative assessment of its application to a variety of real images shows that it reliably produces good results.

72 citations

••

TL;DR: There are as many concepts of triangulated surfaces as there are neighborhood relations; thus, the same concepts, algorithms, and methods can be used in computer imagery and in the field of topology-based geometric modeling.

Abstract: A new approach to the concept of discrete surfaces is proposed. It is a combinatorial approach. A surface is defined by vertices, edges, and faces satisfying the conditions of two-dimensional combinatorial manifolds. A set of voxels (points with integer coordinates) is a surface iff these points are the vertices of a two-dimensional combinatorial manifold. This approach allows introduction of several notions of discrete surfaces: the first, called a quadrangulated surface, is a combinatorial manifold whose faces are squares; the second, called a triangulated surface, is a combinatorial manifold whose faces are triangles. The last is associated with a neighborhood relation; thus, there are as many concepts of triangulated surfaces as there are neighborhood relations. As a consequence, the same concepts, algorithms, and methods can be used in computer imagery and in the field of topology-based geometric modeling (so called "boundary representation").

71 citations

••

TL;DR: A multiscale distance transform is proposed to overcome the need to choose an appropriate scale, together with the addition of saliency factors such as edge strength, length, and curvature to the basic distance transform, which eliminates the need for (e.g., edge magnitude) thresholds and improves its effectiveness.

Abstract: The distance transform has been used in computer vision for a number of applications such as matching and skeletonization. This paper proposes two things: (1) a multiscale distance transform to overcome the need to choose the appropriate scale and (2) the addition of various saliency factors such as edge strength, length, and curvature to the basic distance transform to eliminate the need for (e.g., edge magnitude) thresholds and to improve its effectiveness. Results are presented for applications of matching and snake fitting.

69 citations
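The basic distance transform that such saliency factors would extend can be sketched with the classic two-pass (3,4) chamfer algorithm; the multiscale and saliency extensions described above are not shown, and the function name is my own.

```python
import numpy as np

def chamfer_dt(edges, w1=3, w2=4):
    """Two-pass (3,4) chamfer distance transform of a Boolean edge map.
    Distances are in chamfer units (divide by 3 to approximate pixels)."""
    h, w = edges.shape
    INF = 10 ** 6
    d = np.where(edges, 0, INF).astype(np.int64)
    for y in range(h):                       # forward raster pass
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + w1)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + w2)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + w2)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + w1)
    for y in range(h - 1, -1, -1):           # backward raster pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + w1)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + w2)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + w2)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + w1)
    return d
```

For a single edge pixel, horizontal neighbors cost 3, diagonal neighbors 4, matching the chamfer weights.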

••

TL;DR: A method for estimating the orientation of 3D objects without point correspondence information, based on decomposing the object onto a basis of spherical harmonics; it is more accurate than the classical method based on diagonalization of the inertia matrix.

Abstract: The paper describes a method for estimation of the orientation of 3D objects without point correspondence information. It is based on decomposition of the object onto a basis of spherical harmonics. Tensors are obtained, and their normalization provides the orientation of the object. Theoretical and experimental results show that the approach is more accurate than the classical method based on the diagonalization of the inertia matrix. Fast registration of 3D objects is a problem of practical interest in domains such as robotics and medical imaging, where it helps to compare multimodal data.

66 citations
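For reference, the classical inertia-matrix baseline the paper compares against can be sketched directly: the orientation is read off the eigenvectors of the point cloud's second-moment matrix. The spherical-harmonic tensor method itself is not shown, and the function name is my own.

```python
import numpy as np

def inertia_orientation(points):
    """Classical orientation estimate: principal axes of a 3D point
    cloud, i.e., eigenvectors of the centered second-moment matrix."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, ::-1]                  # columns: major .. minor axis
```

A cloud stretched along the x-axis recovers x as its major axis (up to sign), which is exactly the ambiguity and noise sensitivity the tensor normalization in the paper aims to improve on.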

••

TL;DR: Nonlinear regression techniques are applied to a simplified version of the Torrance-Sparrow model to simultaneously estimate surface normals and surface reflectance parameters in the presence of noise, enabling accurate recording of an object's geometric and material properties using photometric stereo.

Abstract: We investigate the accurate recording of an object's geometric and material properties using photometric stereo. This includes the simultaneous estimation of surface normals and surface reflectance parameters. We assume fairly general reflectance properties, including a combination of diffuse and specular reflection. By applying nonlinear regression techniques to a simplified version of the Torrance-Sparrow model we show how to do the simultaneous estimation in the presence of noise in such a way that any ill-conditioning or inadequacy of fit can be measured and detected. Thus, no smoothness or regularization assumptions need be made, and at all times an estimate of the accuracy of the obtained parameters is available. We also develop a criterion for making choices about those lighting setups that minimize ill-conditioning effects and maximize parameter precision. Finally, we eliminate the usual guesswork associated with parameter starting values by showing how to automatically obtain such values at each image pixel. The paper concludes with a number of examples of the method applied to simulations and real objects, followed by a discussion of the results together with suggestions for improvements to future systems.

60 citations
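For context, the purely diffuse special case of photometric stereo reduces to a linear least-squares problem per pixel; the paper's nonlinear fit of a simplified Torrance-Sparrow model adds a specular term on top of this. The sketch below assumes known distant light directions, and its names are my own.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, lights):
    """Least-squares surface normal and albedo per pixel from k images
    under known distant lights (diffuse-only simplification).
    intensities: (k, n_pixels); lights: (k, 3) unit light directions."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, n)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.where(albedo > 0, albedo, 1.0)
    return normals, albedo
```

With three non-coplanar lights and noise-free Lambertian data, the normal and albedo are recovered exactly.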

••

TL;DR: Elimination theory is used to express the resultant of the equations of intersection as a matrix determinant, so that the algorithm for intersection reduces to computing eigenvalues and eigenvectors of matrices.

Abstract: The problem of computing the intersection of parametric and algebraic curves arises in many applications of computer graphics and geometric and solid modeling. Previous algorithms are based on techniques from elimination theory or subdivision and iteration and are typically limited to simple intersections of curves. Furthermore, algorithms based on elimination theory are restricted to low degree curves. This is mainly due to issues of efficiency and numerical stability. In this paper we use elimination theory and express the resultant of the equations of intersection as a matrix determinant. Using matrix computations the algorithm for intersection is reduced to computing eigenvalues and eigenvectors of matrices. We use techniques from linear algebra and numerical analysis to compute geometrically isolated higher order intersections of curves. Such intersections are obtained from tangential intersections, singular points, etc. The main advantage of the algorithm lies in its efficiency and robustness. The numerical accuracy of the operations is well understood and we come up with tight bounds on the errors using 64-bit IEEE floating point arithmetic.
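The core reduction, turning root finding into an eigenvalue problem, can be illustrated in miniature: the roots of a univariate polynomial are the eigenvalues of its companion matrix, just as the paper's intersection points come from the eigenvalues of a resultant matrix. This sketch is my own illustration, not the paper's algorithm.

```python
import numpy as np

def poly_roots_via_eigenvalues(coeffs):
    """Roots of c0 + c1*x + ... + cn*x^n as eigenvalues of the
    companion matrix of the monic polynomial."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[-1]                        # make monic
    n = len(c) - 1
    comp = np.zeros((n, n))
    comp[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    comp[:, -1] = -c[:-1]                # last column: -c0 .. -c_{n-1}
    return np.linalg.eigvals(comp)
```

Because eigenvalue solvers have well-understood numerical behavior, this route inherits the robustness the abstract highlights, even for clustered or higher-multiplicity roots.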

••

Rice University

TL;DR: This work presents geometric algorithms that detect the presence of degenerate quadric-surface intersections and compute the resulting planar conic sections, presenting the results of the analysis as embodied in the geometric algorithms.

Abstract: One of the most challenging aspects of the surface-surface intersection problem is the proper disposition of degenerate configurations. Even in the domain of quadric surfaces, this problem has proven to be quite difficult. The topology of the intersection as well as the basic geometric representation of the curve itself is often at stake. By Bézout's Theorem, two quadric surfaces always intersect in a degree four curve in complex projective space. This degree four curve is degenerate if it splits into two (possibly degenerate) conic sections. In theory the presence of such degeneracies can be detected using classical algebraic geometry. Unfortunately in practice it has proven to be extremely difficult to make computer implementations of such methods reliable numerically. Here, we present geometric algorithms that detect the presence of these degeneracies and compute the resulting planar intersections. The theoretical basis of these algorithms, in particular the proofs of correctness and completeness, is extremely long and tedious. We briefly outline the approach, but present only the results of the analysis as embodied in the geometric algorithms. Interested readers are referred to R. N. Goldman and J. R. Miller (Detecting and calculating conic sections in the intersection of two natural quadric surfaces, part I: Theoretical analysis; and Detecting and calculating conic sections in the intersection of two natural quadric surfaces, part II: Geometric constructions for detection and calculation, Technical Reports TR-93-1 and TR-93-2, Department of Computer Science, University of Kansas, January 1993) for details.

••

TL;DR: The expression giving the Cramer-Rao lower bounds of these estimates is derived; simulation results corroborate the derivations and show that the estimator of S. M. Thomas and Y. T. Chan achieves the lower bounds.

Abstract: The problem of determining the coordinates of a circle and its radius from a set of measurements of its arc is of practical interest. We derive the expression that gives the Cramer-Rao lower bounds of these estimates. These bounds are a function of the noise factor, the number of measurements, and the arc length. Simulation results have corroborated the derivations and shown that the estimator of [S. M. Thomas and Y. T. Chan, Comput. Vision Graphics Image Process. 45, 1989, 362-370] achieves the lower bounds.
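For context, a simple estimator of this kind can be sketched with the algebraic (Kasa) least-squares circle fit; this is a standard textbook fit, not necessarily the Thomas-Chan estimator evaluated above, and the function name is my own.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit. Expanding
    (x-a)^2 + (y-b)^2 = r^2 gives the linear system
    2ax + 2by + (r^2 - a^2 - b^2) = x^2 + y^2 in unknowns a, b, c."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```

On noiseless samples of a quarter arc, the center and radius are recovered exactly; the Cramer-Rao analysis above quantifies how much noise and a shorter arc degrade any such estimate.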

••

TL;DR: A method for the (re)construction of a simple closed polygon (2D) or polyhedron (3D) passing through all the points of a given set, based on a parameterized geometric graph, the γ-Neighborhood Graph.

Abstract: This paper presents a method for the (re)construction of a simple closed polygon (2D) or polyhedron (3D) passing through all the points of a given set. The points are assumed to lie on the boundary of a closed object without through-passages and inner voids. No a priori knowledge about any topological relation between the points, or additional information such as sample density or normal vectors, is used. The construction technique is based on a parameterized geometric graph, the γ-Neighborhood Graph. The hull of the γ-Neighborhood Graph is iteratively constricted, exploiting geometric information incorporated in the graph. This constriction technique provides a uniform approach for 2D and 3D.

••

TL;DR: The local methods turn out to be robust in the sense that the parameter estimation step does not degrade the final segmentation results significantly, and the choice of EM, ICE, or SEM has little importance.

Abstract: This paper addresses mixture estimation applied to unsupervised local Bayesian segmentation. The great efficiency of global Markovian-based model methods is well known, but the efficiency of local methods can be competitive in some particular cases. The purpose of this paper is to specify the behavior of different local methods in different situations. Algorithms which estimate distribution mixtures prior to segmentation, such as expectation maximization (EM), iterative conditional estimation (ICE), and stochastic expectation maximization (SEM), are studied. Adaptive versions of EM and ICE, valid for nonstationary class fields, are then proposed. After applying various combinations of estimators and segmentations to noisy images, we compare the estimators' performances according to different image and noise characteristics. Results obtained attest to the suitability of adaptive versions of EM, ICE, and SEM. Furthermore, the local methods turn out to be robust in the sense that the parameter estimation step does not degrade the final segmentation results significantly, and the choice of EM, ICE, or SEM has little importance.
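A minimal EM mixture estimator of the kind compared above can be sketched for the 1D two-Gaussian case; this is the plain (non-adaptive, stationary) baseline, not the ICE or SEM variants, and the initialization is a deliberately crude assumption of my own.

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

With well-separated classes the estimated means converge close to the true ones, which is the situation where the paper finds the choice among EM, ICE, and SEM matters little.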

••

TL;DR: Experimental results prove that the proposed wavelet-based approach for solving the shape from shading (SFS) problem is indeed better than traditional methods, in both accuracy and convergence speed.

Abstract: This paper proposes a wavelet-based approach for solving the shape from shading (SFS) problem. The proposed method takes advantage of the nature of wavelet theory, which can be applied to efficiently and accurately represent "things," to develop a faster algorithm for reconstructing better surfaces. To derive the algorithm, the formulation of Horn and Brooks (Eds., Shape from Shading, MIT Press, Cambridge, MA, 1989), which combines several constraints into an objective function, is adopted. In order to improve the robustness of the algorithm, two new constraints are introduced into the objective function to strengthen the relation between an estimated surface and its counterpart in the original image. Thus, solving the SFS problem becomes a constrained optimization process. Instead of solving the problem directly by using the Euler equation or numerical techniques, the objective function is first converted into the wavelet format. Due to this format, the set of differential operators of different orders which is involved in the whole process can be approximated with connection coefficients of Daubechies bases. In each iteration of the optimization process, an appropriate step size which will result in maximum decrease of the objective function is determined. After finding correct iterative schemes, the solution of the SFS problem will finally be decided. Compared with conventional algorithms, the proposed scheme greatly improves both the accuracy and the convergence speed of SFS solutions. Experimental results, using both synthetic and real images, prove that the proposed method is indeed better than traditional methods.

••

TL;DR: The conventional divide-and-conquer algorithm for constructing Voronoi diagrams is revised into a numerically robust one: no matter how poor the precision may be, the algorithm always carries out its task, ending up with a topologically consistent output.

Abstract: The conventional divide-and-conquer algorithm for constructing Voronoi diagrams is revised into a numerically robust one. The strategy for the revision is a topology-oriented approach. That is, at every step of the algorithm, consistency of the topological structure is considered more important than the result of numerical computation, and hence numerical values are used only when they do not contradict the topological structure. The resultant new algorithm is completely robust in the sense that, no matter how poor the precision may be, the algorithm always carries out its task, ending up with a topologically consistent output, and is correct in the sense that the output "converges" to the true Voronoi diagram as the precision becomes higher. Moreover, it is efficient in the sense that it achieves the same time complexity as the original divide-and-conquer algorithm unless the precision in computation is too poor. The performance of the algorithm is also verified by computational experiments.

••

TL;DR: An implementation of deformable models to approximate a 3-D surface given by a cloud of 3D points, using the well-known Powell algorithm, which guarantees convergence and does not require gradient information.

Abstract: We present an implementation of deformable models to approximate a 3-D surface given by a cloud of 3D points. It is an extension of our previous work on "B-snakes" (S. Menet, P. Saint-Marc, and G. Medioni, in Proceedings of Image Understanding Workshop, Pittsburgh, 1990, pp. 720-726; and C. W. Liao and G. Medioni, in Proceedings of International Conference on Pattern Recognition, Hague, Netherlands, 1992, pp. 745-748), which approximates curves and surfaces using B-splines. The user (or the system itself) provides an initial simple surface, such as a closed cylinder, which is subject to internal forces (describing implicit continuity properties such as smoothness) and external forces which attract it toward the data points. The problem is cast in terms of energy minimization. We solve this nonconvex optimization problem by using the well-known Powell algorithm, which guarantees convergence and does not require gradient information. The variables are the positions of the control points. The number of control points processed by Powell at one time is controlled. This methodology leads to a reasonable complexity, robustness, and good numerical stability. We keep the time and space complexities in check through a coarse-to-fine approach and a partitioning scheme. We handle closed surfaces by decomposing an object into two caps and an open cylinder, smoothly connected. The process is controlled by two parameters only, which are constant for all our experiments. We show results on real range images to illustrate the applicability of our approach. The advantages of this approach are that it provides a compact representation of the approximated data and lends itself to applications such as nonrigid motion tracking and object recognition. Currently, our algorithm gives only a C0 continuous analytical description of the data, but because the output of our algorithm is in rectangular mesh format, a C1 or C2 surface can be constructed easily by existing algorithms (F. J. M. Schmitt, B. A. Barsky, and W.-H. Du, in ACM SIGGRAPH 86, pp. 179-188).

••

TL;DR: This new approach is based on a distribution-free local analysis of the image, does not use higher order entropy, and is compared to existing entropic thresholding methods.

Abstract: Since the pioneering work of Frieden (J. Opt. Soc. Am. 62, 1972, 511-518; Comput. Graphics Image Process. 12, 1980, 40-59), the entropy concept has been increasingly used in image analysis, especially in image reconstruction, image segmentation, and image compression. In the present paper a new entropic thresholding method based on a block source model is presented. This new approach is based on a distribution-free local analysis of the image and does not use higher order entropy. Our method is compared to the existing entropic thresholding methods.
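One of the existing entropic methods that such a comparison would include is the maximum-entropy threshold of Kapur et al.: choose the gray level that maximizes the sum of the entropies of the below- and above-threshold distributions. The sketch below is that classical method, not the block-source model of the paper, and the function name is my own.

```python
import numpy as np

def kapur_threshold(hist):
    """Maximum-entropy threshold: pick t maximizing the sum of the
    entropies of the two normalized class histograms hist[:t], hist[t:]."""
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = cum[t - 1], 1.0 - cum[t - 1]
        if w0 <= 0 or w1 <= 0:
            continue                      # empty class: skip
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -sum(q * np.log(q) for q in p0 if q > 0) \
            - sum(q * np.log(q) for q in p1 if q > 0)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a bimodal histogram the maximizing threshold lands in the valley between the two modes.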

••

TL;DR: Nonlinear phase portraits are employed to represent the streamlines of scalar flow images generated by particle tracing experiments and extended to the compression of vector field data by using orthogonal polynomials derived from the Taylor series model.

Abstract: Nonlinear phase portraits are employed to represent the streamlines of scalar flow images generated by particle tracing experiments. The flow fields are decomposed into simple component flows based on the critical point behavior. A Taylor series model is assumed for the velocity components, and the model coefficients are computed by considering both local critical point and global flow field behavior. A merge and split procedure for complex flows is presented, in which patterns of neighboring critical point regions are combined and modeled. The concepts are extended to the compression of vector field data by using orthogonal polynomials derived from the Taylor series model. A critical point scheme and a block transform are presented. They are applied to velocity fields measured in particle image velocimetry experiments and generated by computer simulations. Compression ratios ranging from 15:1 to 100:1 are achieved.
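The critical-point behavior that drives the decomposition can be sketched for the linear case: the type of a critical point follows from the eigenvalues of the velocity-gradient (Jacobian) matrix there. This small classifier is my own illustration of that standard taxonomy, not the paper's nonlinear model.

```python
import numpy as np

def classify_critical_point(jacobian):
    """Classify a linear critical point of a 2D flow from the
    eigenvalues of its velocity-gradient (Jacobian) matrix."""
    eig = np.linalg.eigvals(jacobian)
    if np.all(np.isreal(eig)):
        if eig[0].real * eig[1].real < 0:
            return "saddle"                               # opposite signs
        return "node (repelling)" if eig[0].real > 0 else "node (attracting)"
    if abs(eig[0].real) < 1e-12:
        return "center"                                   # purely imaginary
    return "spiral (repelling)" if eig[0].real > 0 else "spiral (attracting)"
```

Diagonal matrices give nodes or saddles, a pure rotation gives a center, and complex eigenvalues with nonzero real part give spirals.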

••

TL;DR: The computationally efficient adaptive neighborhood extended contrast enhancement procedure is proposed, which achieves significant computational speedup without much loss of image quality and can be well applied to other contrast enhancement algorithms for improvement of the quality of the enhanced image.

Abstract: A new technique for contrast enhancement, namely adaptive neighborhood extended contrast enhancement, and several original modifications on the same are proposed. Developing from the adaptive contrast enhancement algorithm of Beghdadi and Negrate (Comput. Vision Graphics Image Process. 46, 1989, 162-174), and the adaptive neighborhood histogram equalization algorithm of Paranjape et al. (CVGIP Graphical Models Image Process. 54(3), 1992, 259-267), the above techniques have been evolved in order to make image enhancement more adaptive and context sensitive. The adaptive neighborhood extended contrast enhancement algorithm, without its other modifications, only combines the features of these two existing algorithms. This algorithm can be used for both image enhancement and de-enhancement and can also be combined with existing procedures such as power variation. To make the algorithm computationally efficient, the computationally efficient adaptive neighborhood extended contrast enhancement procedure is proposed. This modification achieves significant computational speedup without much loss of image quality. The adaptive neighborhood definition extended contrast enhancement procedure is next proposed in order to make the algorithm further adaptive by making its performance independent of its most sensitive parameter. This procedure achieves better identification of different gray level regions by an analysis of the histogram in the locality of every pixel in the image. The experimental results aptly demonstrate the efficacy of the procedure. This technique can be well applied to other contrast enhancement algorithms for improvement of the quality of the enhanced image. Finally, a correction mechanism called repulsion correction is evolved to correct for a specific inability of contrast enhancement algorithms in separating adjacent regions of nearly equal brightness from each other when surrounded by a very large, brighter or darker region.

••

TL;DR: The possibility of finding Gibbs distributions which truly model certain properties of images is investigated and the potential usefulness of using such image-modeling distributions as priors in Bayesian image processing is looked at.

Abstract: Gibbs distributions, which have been very successfully used in statistical mechanics, have also been applied in image processing as assumed prior distributions in Bayesian (MAP) image restoration or reconstruction. When used in this context, the appropriateness of the Gibbs distribution has been judged by the success of the resulting image processing method; little attention has been paid to whether the Gibbs distribution indeed models the images that occur in the particular application area, in the sense that a randomly selected image from the distribution is likely to share the essential properties of those images. Indeed, many of the proposed Gibbs distributions do nothing but enforce smoothness; random samples from such distributions are likely to be uniformly smooth and thus probably atypical for any application area. In this paper we investigate the possibility of finding Gibbs distributions which truly model certain properties of images and look at the potential usefulness of using such image-modeling distributions as priors in Bayesian image processing. Specifically, we construct a Gibbs distribution which models an image that consists of piecewise homogeneous regions. The proposed model incorporates not only the information about the smoothness within regions in the image, but also the continuity of boundary structures which exist between regions. It is demonstrated that by sampling the Gibbs distribution which arises from the model we obtain images with piecewise homogeneous regions resembling the global features of the image that we intend to model; hence such a Gibbs distribution is indeed "image-modeling." Objective assessment of the model is accomplished by performing a goodness-of-fit test based on a χ² statistic computed by considering the corresponding local conditional distributions. Issues related to the selection of model parameters from the given data image are addressed.
Importantly, the most essential parameter of the image model (related to the regularization parameter associated with the penalty function in many image restoration and reconstruction methods) is estimated in the process of constructing the image model. Comparative results are presented of the outcome of using our model and an alternative model as the prior in some image restoration problems in which noisy synthetic images were considered.

••

TL;DR: A superposition property called threshold decomposition and another property called stacking are applied successfully on gray-scale soft morphological operations; these properties allow gray-scale signals and structuring elements to be decomposed into their binary sets and processed by only logic gates in new VLSI architectures.

Abstract: Gray-scale soft mathematical morphology is the natural extension of binary soft mathematical morphology, which has been shown to be less sensitive to additive noise and to small variations. But gray-scale soft morphological operations are difficult to implement in real time. In this Note, a superposition property called threshold decomposition and another property called stacking are applied successfully to gray-scale soft morphological operations. These properties allow gray-scale signals and structuring elements to be decomposed into their binary sets, operated on by only logic gates in new VLSI architectures, and then these binary results are combined to produce the same output as the time-consuming gray-scale processing.
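The threshold-decomposition idea can be demonstrated on the simpler flat (non-soft) case: slice a gray-scale signal into binary threshold signals, dilate each slice with nothing but OR gates, and stack the binary results. This toy sketch (circular boundary handling, names my own) shows the decompose-process-stack pattern the Note applies to soft operations.

```python
import numpy as np

def dilate_by_threshold_decomposition(signal, window=3):
    """Flat gray-scale dilation via threshold decomposition: each
    binary slice (signal >= t) is dilated with ORs over the window,
    and the dilated slices are stacked (summed) back into gray scale."""
    out = np.zeros_like(signal)
    r = window // 2
    for t in range(1, int(signal.max()) + 1):
        binary = (signal >= t).astype(int)       # threshold slice at level t
        dilated = np.zeros_like(binary)
        for k in range(-r, r + 1):               # OR over the window (circular)
            dilated |= np.roll(binary, k)
        out += dilated                           # stacking
    return out
```

By the stacking property, the result equals a direct gray-scale maximum filter over the same (circular) window.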

••

Keele University

TL;DR: This paper analyzes four fundamental image operators using Lagrange polynomials and suggests a way to get three orders of magnitude improvement in the computational efficiency for the Lagrange-based method over standard methods.

Abstract: In this paper we study four fundamental image operators using Lagrange polynomials. These operators are interpolation, first and second derivative, and image reduction (or shrinking). We analyze each operation and compare it to a standard signal processing windowing approach using Gaussian window. The analysis shows the very simple mathematical relation between the two approaches for all four operations and provides a rigorous way to trade speed for accuracy depending on the available resources and requirements. Furthermore, the analysis suggests a way to get three orders of magnitude improvement in the computational efficiency for the Lagrange-based method over standard methods.
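The interpolation operator among the four can be sketched in one dimension: compute the Lagrange basis weights for the four nearest samples and take their weighted sum (cubic Lagrange interpolation). This is a generic textbook sketch assuming an interior sample position, not the paper's Gaussian-window comparison, and the names are my own.

```python
import numpy as np

def lagrange_weights(nodes, x):
    """Lagrange basis weights for interpolating at x from sample nodes."""
    nodes = np.asarray(nodes, dtype=float)
    w = np.ones_like(nodes)
    for j in range(len(nodes)):
        for m in range(len(nodes)):
            if m != j:
                w[j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
    return w

def lagrange_interpolate(samples, x):
    """Interpolate a 1D signal at fractional interior position x using
    the four nearest samples (cubic Lagrange)."""
    i = int(np.floor(x))
    nodes = np.arange(i - 1, i + 3)
    w = lagrange_weights(nodes, x)
    return float(np.dot(w, samples[nodes]))
```

A cubic Lagrange interpolant reproduces any polynomial of degree up to three exactly, so sampling x^2 and interpolating at 2.5 returns 6.25.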

••

Xerox

TL;DR: The present paper develops the corresponding mean-absolute-error representation, which expresses the error of estimation of a filter composed of some number of erosions in terms of single-erosion filter errors.

Abstract: Computational mathematical morphology provides a framework for analysis and representation of range-preserving, finite-range operators in the context of mathematical morphology. As such, it provides a framework for statistically optimal design in the framework of a Matheron-type representation; that is, each increasing, translation-invariant filter can be expressed via the erosions generated by structuring elements in a basis. The present paper develops the corresponding mean-absolute-error representation. This representation expresses the error of estimation of a filter composed of some number of erosions in terms of single-erosion filter errors. There is a recursive form of the representation that permits calculation of filter errors from errors for filters composed of fewer structuring elements. Finally, the error representation is employed in designing an optimal filter to solve an image enhancement problem in electronic printing, the transformation of a 1-bit per pixel image into a 2-bit per pixel image.
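The single-erosion errors from which the representation is built can be estimated empirically from ideal/observed signal pairs; a toy 1-D binary sketch (the signals and structuring elements below are invented for illustration and do not reproduce the paper's recursive formula):

```python
import numpy as np

def erode(x, se_offsets):
    """Binary erosion of a 1-D signal by a structuring element given as index offsets."""
    n = len(x)
    out = np.zeros(n, dtype=int)
    for i in range(n):
        out[i] = int(all(0 <= i + o < n and x[i + o] for o in se_offsets))
    return out

def mae(estimate, ideal):
    """Empirical mean absolute error of a filter output against the ideal signal."""
    return np.mean(np.abs(estimate - ideal))

ideal = np.array([0, 1, 1, 1, 0, 1, 1, 0])
noisy = np.array([0, 1, 1, 1, 1, 1, 1, 0])   # one noise pixel at index 4
print(mae(erode(noisy, [0]), ideal))         # identity erosion: 0.125
print(mae(erode(noisy, [-1, 0, 1]), ideal))  # 3-wide erosion:   0.375
```

Comparing such per-erosion errors is the raw material for choosing which structuring elements enter the optimal filter's basis.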

••

TL;DR: Experimental results with histogram equalization demonstrate that the use of a higher resolution histogram leads to reduced distortion as well as a "flatter" output histogram.

Abstract: For color images, histogram modification is usually applied to the quantized luminance component. However, the luminance quantization error can be significantly magnified by the transformation function, leading to distortion in the processed image. The propagation of the quantization error is analyzed theoretically to determine the worst-case error. The relationship between the number of luminance quantization levels and the output quantization error is derived. Experimental results with histogram equalization demonstrate that the use of a higher resolution histogram leads to reduced distortion as well as a "flatter" output histogram.
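The magnification effect is easy to reproduce: where the luminance histogram is peaked, the equalization transfer function is steep, so a one-level quantization error there maps to a jump of many output levels. A small Python sketch on synthetic data (not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
# Peaked luminance distribution: most pixels near level 100.
lum = np.clip(rng.normal(100, 2, 100000), 0, 255).astype(int)

hist = np.bincount(lum, minlength=256)
cdf = np.cumsum(hist) / hist.sum()
T = np.round(cdf * 255).astype(int)      # histogram-equalization transfer function

# A 1-level input quantization error is magnified wherever T is steep:
print(T[101] - T[100], "output levels near the histogram peak")
print(T[201] - T[200], "output levels in an empty region")
```

Using a finer-resolution histogram for the luminance component keeps each input bin narrow, which is exactly the reduced-distortion effect the experiments demonstrate.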

••

TL;DR: An alternative form of the widely used Lee SAR filter, consisting of one single equation expressed in terms of coefficient of variation, allows a better interpretation of the filter effectiveness.

Abstract: An alternative form of the widely used Lee SAR filter is presented. The original Lee filter and its alternative formulation are mathematically equivalent. The new formulation, consisting of one single equation expressed in terms of the coefficient of variation, allows a better interpretation of the filter effectiveness. The proposed alternative form is compared with another alternative expression of the Lee filter taken from the literature. A brief comparison between the Lee filter and the Kuan filter effectiveness is presented as well. Neither an increase in computational efficiency nor an improvement in operational effectiveness can be expected by implementing this Lee filter alternative expression.
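The coefficient-of-variation form of the Lee filter can be sketched in a few lines; the window size and the speckle coefficient `cu` below are illustrative choices, not values from the paper:

```python
import numpy as np

def lee_filter_cv(img, win=3, cu=0.25):
    """Lee filter written in terms of coefficients of variation:
    out = mean + k * (x - mean), with k = 1 - (cu / cx)^2 clipped to [0, 1],
    where cx = local std / local mean and cu is the speckle coefficient of
    variation (an assumed noise parameter)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]
            m, s = w.mean(), w.std()
            cx = s / m if m > 0 else 0.0
            k = float(np.clip(1 - (cu / cx) ** 2, 0.0, 1.0)) if cx > 0 else 0.0
            out[i, j] = m + k * (img[i, j] - m)
    return out
```

The single-equation form makes the behavior readable at a glance: in homogeneous areas cx is small, k goes to 0, and the filter averages; near edges cx is large, k goes to 1, and the pixel passes through.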

••

TL;DR: It is shown that, although wavelets can be used to produce a good approximation to fractional Brownian motion, a technique based on the random midpoint displacement algorithm is in practice much simpler to implement, faster to generate, and results in a comparable accuracy.

Abstract: This paper compares the synthesis of fractal images using both wavelets and a modified form of the random midpoint displacement algorithm. The accuracy of the generated fractal is investigated by an analysis of its second-order temporal statistics. It is shown that, although wavelets can be used to produce a good approximation to fractional Brownian motion, a technique based on the random midpoint displacement algorithm is in practice much simpler to implement, faster to generate, and results in a comparable accuracy. Furthermore, the proposed method is shown to be considerably more efficient computationally.
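The simplicity claim is plausible from the algorithm's shape: in its standard 1-D form (the paper's modification may differ), random midpoint displacement is a single refinement loop whose displacement variance shrinks by a Hurst-exponent-controlled factor each level. A hedged Python sketch:

```python
import numpy as np

def midpoint_displacement(n_levels, H=0.5, seed=0):
    """1-D fractional-Brownian-motion approximation by random midpoint
    displacement. H is the Hurst exponent; the displacement standard
    deviation is scaled by 2^(-H) at each refinement level."""
    rng = np.random.default_rng(seed)
    n = 2 ** n_levels
    x = np.zeros(n + 1)
    x[-1] = rng.normal(0.0, 1.0)          # random endpoint, x[0] pinned at 0
    scale, step = 1.0, n
    while step > 1:
        half = step // 2
        scale *= 2 ** (-H)                # shrink displacement variance per level
        for i in range(half, n, step):
            x[i] = 0.5 * (x[i - half] + x[i + half]) + rng.normal(0.0, scale)
        step = half
    return x

trace = midpoint_displacement(8, H=0.7)   # 257-sample fBm-like trace
```

Each level touches each new sample once, so generation is linear in the output size, consistent with the speed advantage over wavelet synthesis reported in the paper.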

••

TL;DR: The discrete binomial splines, which are born from analytic B-splines by bilinear transformation, have approximately Gaussian-shaped frequency characteristics and are fast to implement since only a small number of multiplications is needed.

Abstract: Analytic B-splines are important tools in different areas of signal processing. The present work introduces the discrete binomial splines, which are born from analytic B-splines by bilinear transformation. The characteristics of the discrete binomial splines are compared with the directly sampled B-splines. The discrete binomial splines have approximately Gaussian-shaped frequency characteristics. They are fast to implement since only a small number of multiplications is needed. A parallel combination of discrete binomial splines can be used to construct, e.g., discrete-time filters, kernels for wavelet transform, and multiresolution decomposition algorithms applied in image coding and interpolation.
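The approximately Gaussian shape of a binomial kernel follows from repeated convolution with the two-tap average (a minimal sketch of that property only; the paper's actual construction goes through the bilinear transform of analytic B-splines):

```python
import numpy as np

def binomial_kernel(n):
    """Order-n binomial kernel: the n-fold convolution of [1, 1] / 2.
    By the central limit theorem it approaches a Gaussian of variance n / 4,
    and its integer weights need only shifts and adds in hardware."""
    k = np.array([1.0])
    for _ in range(n):
        k = np.convolve(k, [0.5, 0.5])
    return k

k = binomial_kernel(8)
print(k * 2 ** 8)   # integer binomial coefficients: 1 8 28 56 70 56 28 8 1
```

Because the weights are binomial coefficients over a power-of-two normalizer, filtering can be implemented with additions and shifts, which is where the small multiplication count comes from.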