
Showing papers by "Andrew Zisserman" published in 1986


Proceedings Article
20 Jul 1986
TL;DR: This paper deals with the application of weak continuity constraints to the description of plane curves; applications in computer vision include curve description, edge detection, reconstruction of 2½D surfaces from stereo or laser-rangefinder data, and others.
Abstract: Scale-space filtering (Witkin 83) is a recently developed technique, both powerful and general, for the segmentation and analysis of signals. Asada and Brady (84) have amply demonstrated the value of scale-space for the description of curved contours from digitised images. Weak continuity constraints (Blake 83a, b; Blake and Zisserman 85, 87) furnish novel, powerful, non-linear filters to use in place of Gaussians for scale-space filtering. This has some striking advantages (fig 1). First, scale-space is uniform, so that tracking across scale is a trivial task. Second, structure need not be preserved to indefinitely fine scale; this leads to an enrichment of the concept of scale: a rounded corner, for example, can be represented as a discontinuity at coarse scale but smooth at fine scale. And finally, boundary conditions at the ends of curves are handled satisfactorily: it is as easy to analyse open curves as closed ones.

1 Weak continuity constraints

Weak continuity constraints are a principled and effective treatment of the localisation of discontinuities in discrete data. Detailed discussions are given in (Blake 83a, Blake 83b, Blake and Zisserman 85, Blake and Zisserman 87). Applications in computer vision include curve description, edge detection, reconstruction of 2½D surfaces from stereo or laser-rangefinder data, and others. This paper deals with the application of weak continuity constraints to the description of plane curves.

First, a brief summary of weak continuity constraints is given for problems like curve description, in which the data is a 1D array. Data may be obtained from a plane curve as an array θᵢ of tangent-angle values at equal spacings in arc length s. The problem is to localise discontinuities in noisy, discrete data. The notion of a discontinuity applies to functions, not to discrete arrays, so the problem is ill-posed, and this is exacerbated by the presence of noise.

One solution is to interpolate the data by a smooth function, such as a Gaussian, whose 1st derivative can then be examined. Of course this is common practice in edge detection and in spline interpolation (e.g. de Boor 78). Such smoothing can be regarded as fitting a function u(s) which tends to seek a minimum of some elastic energy P. Energy P is traded off against a sum-of-squares error measure D, defined as

D = Σᵢ (u(sᵢ) − θᵢ)²,

by minimising variationally the total energy (or cost) P + D. The result is a function u(s) that is both fairly smooth and a fair approximation to the data θᵢ. The simplest form of the energy P is (approximately) that of a horizontal stretched string:

P = λ² ∫ u′(s)² ds,

where the parameter λ governs the stiffness of the string. If λ is large then the tendency to smoothness overwhelms the tendency (from D) to approximate the data well. In the extremes: if λ is very large, the fitted function is simply u = const, the least-squares regression of a constant function to the data θᵢ; but if λ → 0 then u interpolates the data, linking the θᵢ by straight lines.

Weak continuity constraints can be applied to a scheme like the one above, to incorporate discontinuities explicitly into the fitting of u. Rather than fitting a u that is smooth everywhere and then examining the gradient u′, the function u is allowed to break (at knots, in spline jargon): it is piecewise continuous.
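To make the smooth (unbroken) fit concrete, here is a minimal sketch in Python with NumPy. The function name fit_string, the discretisation of P as λ² Σᵢ (uᵢ₊₁ − uᵢ)², and the test data are illustrative assumptions, not the authors' implementation. Minimising D + P over the discrete uᵢ is linear least squares, so the fit reduces to solving one sparse linear system:

```python
import numpy as np

def fit_string(theta, lam):
    # Minimise  D + P = sum_i (u_i - theta_i)^2 + lam^2 * sum_i (u_{i+1} - u_i)^2
    # (a discrete form of the stretched-string energy; an assumption here).
    # Setting the gradient to zero gives (I + lam^2 * D^T D) u = theta,
    # where D is the first-difference operator.
    n = len(theta)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
    return np.linalg.solve(np.eye(n) + lam**2 * D.T @ D, theta)

# Illustrative data: a noisy step in tangent angle.
rng = np.random.default_rng(0)
theta = np.where(np.arange(100) < 50, 0.0, 1.0) + 0.05 * rng.standard_normal(100)
u_stiff = fit_string(theta, lam=4.0)      # large lambda: smears the step
u_loose = fit_string(theta, lam=0.1)      # small lambda: follows the noise
```

Running this on a noisy step shows the dilemma described above: a large λ smooths away the step, a small λ tracks the noise, and neither localises the discontinuity.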
The number and position of the discontinuities are chosen optimally, by using an augmented form of the cost function E = D + P + S, where the additional term S embodies the weak continuity constraints:

S = α × (number of discontinuities).

A fixed penalty α is paid for each discontinuity allowed. This has the effect of discouraging discontinuities; u is continuous "almost everywhere". But an occasional discontinuity may be allowed if there is sufficient benefit, in terms of smoothness (P) and faithfulness to data (D), in so doing. Clearly α is some kind of measure of reluctance to allow a discontinuity. In fact the two parameters α, λ interact in a rather interesting way. Far from being "fudge factors" that must be set empirically, they have clear interpretations in terms of …
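In the cited work, E is minimised by graduated non-convexity (GNC); the brute-force sketch below is not that algorithm, only a way to make the cost E = D + P + S concrete for short 1D arrays. It relies on the fact that each discontinuity decouples the string, so segments between breaks can be fitted independently and an exact dynamic programme over break positions is possible (weak_string, its O(n²) segment enumeration, and the reuse of fit_string above are all illustrative assumptions):

```python
def weak_string(theta, lam, alpha):
    # Exact minimiser of E = D + P + S for the 1D weak string by dynamic
    # programming over break positions. Feasible only for short arrays:
    # O(n^2) segment fits, each a dense solve.
    n = len(theta)

    def seg_cost(j, i):                   # optimal D + P on theta[j..i], no breaks
        seg = theta[j:i + 1]
        u = fit_string(seg, lam)
        return np.sum((u - seg)**2) + lam**2 * np.sum(np.diff(u)**2)

    best = np.full(n + 1, np.inf)         # best[i] = optimal E for theta[:i]
    best[0], prev = 0.0, [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):                # last segment is theta[j:i]
            c = best[j] + (alpha if j > 0 else 0.0) + seg_cost(j, i - 1)
            if c < best[i]:
                best[i], prev[i] = c, j
    cuts, i = [], n                       # backtrack the chosen break positions
    while prev[i] > 0:
        cuts.append(prev[i])
        i = prev[i]
    return best[n], sorted(cuts)

E_min, cuts = weak_string(theta, lam=4.0, alpha=0.5)   # cuts ~ [50] on the step data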

17 citations