Author

# P.M. van den Berg

Other affiliations: University of Dundee

Bio: P.M. van den Berg is an academic researcher at Delft University of Technology. He has contributed to research on integral equations and inverse problems, has an h-index of 37, and has co-authored 163 publications receiving 5,270 citations. His previous affiliations include the University of Dundee.


##### Papers


TL;DR: In this paper, a method is presented for reconstructing the complex index of refraction of a bounded two-dimensional inhomogeneous object of known geometric configuration from measured scattered-field data. The method extends recent results on the direct scattering problem, in which the governing domain integral equation was solved iteratively by a successive over-relaxation technique.

347 citations
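The successive over-relaxation idea in this TL;DR can be sketched on a toy 1-D analogue of the domain integral equation u = u_inc + G(χu). Everything below — grid, wavenumber, contrast profile, relaxation parameter — is an illustrative assumption, not the paper's actual discretization:

```python
import numpy as np

# Toy 1-D analogue of the domain integral equation u = u_inc + G (chi u),
# solved by a relaxation iteration: u <- u + omega * (u_inc + G(chi u) - u).
n = 64
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
k = 2.0 * np.pi                       # wavenumber (assumed)

# 1-D Helmholtz Green's function, discretized by the midpoint rule
G = (k**2) * h * np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)
chi = 0.3 * np.exp(-((x - 0.5) / 0.1) ** 2)   # small contrast, so the iteration contracts
u_inc = np.exp(1j * k * x)

u = u_inc.copy()
omega = 0.7                           # relaxation parameter (assumed)
for _ in range(200):
    u = u + omega * (u_inc + G @ (chi * u) - u)

print(np.linalg.norm(u_inc + G @ (chi * u) - u))   # residual norm, effectively zero
```

For stronger contrasts this simple relaxation stalls or diverges, which is part of what motivates the conjugate-gradient and contrast-source schemes appearing later in this list.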


TL;DR: Van den Berg and Abubakar discuss the possibility of local minima of the nonlinear cost functional and the conditions under which they can exist, and introduce a new type of regularization based on a weighted L2 total-variation norm.

Abstract: We discuss the problem of the reconstruction of the profile of an inhomogeneous object from scattered field data. Our starting point is the contrast source inversion method, where the unknown contrast sources and the unknown contrast are updated by an iterative minimization of a cost functional. We discuss the possibility of the presence of local minima of the nonlinear cost functional and the conditions under which they can exist. Inspired by the successful implementation of the minimization of total variation and other edge-preserving algorithms in image restoration and inverse scattering, we have explored the use of these image-enhancement techniques as an extra regularization. The drawback of adding a regularization term to the cost functional is the presence of an artificial weighting parameter in the cost functional, which can only be determined through considerable numerical experimentation. Therefore, we first discuss the regularization as a multiplicative constraint and show that the weighting parameter is now completely prescribed by the error norm of the data equation and the object equation. Secondly, inspired by the edge-preserving algorithms, we introduce a new type of regularization, based on a weighted L2 total variation norm. The advantage is that the updating parameters in the contrast source inversion method can be determined explicitly, without the usual line minimization. In addition, this new regularization shows excellent edge-preserving properties. Numerical experiments illustrate that the present multiplicative regularized inversion scheme is very robust, handling noisy as well as limited data very well, without the necessity of artificial regularization parameters.

338 citations
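The weighted-L2 total-variation term described above enters the cost multiplicatively, F = F_data × F_TV, which is why no hand-tuned weighting parameter is needed. A minimal sketch of such a factor follows; the discretization and the steering parameter `delta` are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def weighted_tv_factor(chi, chi_prev, delta):
    """Weighted-L2 total-variation factor of the form
    mean( (|grad chi|^2 + delta^2) / (|grad chi_prev|^2 + delta^2) ).
    A sketch of the multiplicative regularizer described above; the
    finite-difference discretization and delta are assumptions."""
    gx, gy = np.gradient(chi)
    px, py = np.gradient(chi_prev)
    return float(np.mean((gx**2 + gy**2 + delta**2) /
                         (px**2 + py**2 + delta**2)))

# At convergence (chi == chi_prev) the factor equals 1, so the
# regularizer no longer perturbs the reconstruction.
chi = np.random.default_rng(0).random((32, 32))
print(weighted_tv_factor(chi, chi, 0.1))   # -> 1.0
```

Because the factor tends to 1 as the iterates settle, the data misfit alone controls the balance, which is the property the abstract emphasizes.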


01 Jan 1993

TL;DR: In this paper, the reciprocity theorem is chosen as the central theme of seismic wave theory, and the seismic experiment is formulated in terms of a geological system response to a known source function.

Abstract: Progress in seismic data processing requires knowledge of all the theoretical aspects of acoustic wave theory. We choose the reciprocity theorem as the central theme, because it constitutes the foundation of seismic wave theory (Fokkema and van den Berg, 1993). In essence, two states are distinguished in this theorem. These can be completely different, although they share the same time-invariant domain of application and are related via an interaction quantity. The particular choice of the two states determines the acoustic application. This makes it possible to formulate the seismic experiment in terms of a geological system response to a known source function.

334 citations
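The interaction quantity mentioned in the abstract can be made concrete. In the frequency domain, an acoustic reciprocity theorem for two states A and B sharing the same time-invariant domain $D$ reads roughly as follows (a sketch; the exact signs depend on the Fourier convention and on the form of the field equations used by Fokkema and van den Berg):

```latex
\oint_{\partial D} \left( \hat{p}_A \hat{\mathbf{v}}_B - \hat{p}_B \hat{\mathbf{v}}_A \right) \cdot \mathbf{n}\,\mathrm{d}A
  = \int_{D} \left( \hat{p}_A \hat{q}_B - \hat{p}_B \hat{q}_A
  + \hat{\mathbf{f}}_A \cdot \hat{\mathbf{v}}_B - \hat{\mathbf{f}}_B \cdot \hat{\mathbf{v}}_A \right) \mathrm{d}V
```

Here $\hat p$ and $\hat{\mathbf v}$ are pressure and particle velocity, and $\hat q$, $\hat{\mathbf f}$ are volume-injection and force sources. Choosing state B as a computational (e.g. Green's) state turns this single identity into the various seismic applications the abstract alludes to.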


TL;DR: In this paper, the recently developed multiplicative regularized contrast source inversion method is applied to microwave biomedical applications, which is fully iterative and avoids solving any forward problem in each iterative step.

Abstract: In this paper, the recently developed multiplicative regularized contrast source inversion method is applied to microwave biomedical applications. The inversion method is fully iterative and avoids solving any forward problem in each iterative step. In this way, the inverse scattering problem can be solved efficiently. Moreover, the recently developed multiplicative regularizer allows us to apply the method blindly to experimental data. We demonstrate inversion from experimental data collected by a 2.33-GHz circular microwave scanner using a two-dimensional (2-D) TM polarization measurement setup. Furthermore, some results of a feasibility study of the present inversion method for the 2-D TE polarization and the full-vectorial three-dimensional measurement are presented as well.

329 citations


TL;DR: This work presents a preconditioned conjugate gradient method to update the contrast, which introduces hardly any additional computation time but achieves results as good as or better than those of the original CSI method.

Abstract: We discuss the problem of the reconstruction of the profile of a bounded object from scattered field data. Inspired by the successful implementation of the minimization of total variation (TV) in the modified gradient method, we have explored the possibilities of this image-enhancement technique in the contrast source inversion (CSI) method. In order to be able to implement the additional regularizer in the CSI method, the updating of the contrast has been modified. We present a preconditioned conjugate gradient method to update the contrast, which introduces hardly any additional computation time, but achieves the same or even better results than the original CSI method. The addition of the minimization of the total variation to the cost functional has a very positive effect on the quality of the reconstructions for both `blocky' and smooth profiles, but a drawback is the presence of an artificial weighting parameter in the cost functional, which can only be determined through considerable numerical experimentation. Therefore, we have introduced the TV as a multiplicative constraint. Numerical experiments demonstrate that the algorithm, based on this multiplicative regularization, seems to be robust, handling noisy as well as limited data very well, without the necessity of artificial parameters.

321 citations
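The preconditioned conjugate-gradient contrast update mentioned above follows the standard PCG template. The sketch below shows that template on a generic symmetric positive-definite system with a Jacobi (diagonal) preconditioner; the CSI method's actual operator and preconditioner are not reproduced here:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Textbook preconditioned conjugate-gradient solver for A x = b,
    with A symmetric positive definite and M_inv an approximate inverse
    of a preconditioner M. A generic sketch, not the paper's update."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(1)
Q = rng.random((20, 20))
A = Q @ Q.T + 20.0 * np.eye(20)
b = rng.random(20)
x = pcg(A, b, lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual norm, effectively zero
```

The attraction in the CSI setting is exactly what the TL;DR states: each PCG step costs little more than a plain gradient step, yet convergence improves markedly.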

##### Cited by


TL;DR: This review attempts to illuminate the state of the art of FWI; the remaining challenges include building accurate starting models with automatic procedures and/or recording low frequencies, and improving computational efficiency by data-compression techniques to make 3D elastic FWI feasible.

Abstract: Full-waveform inversion (FWI) is a challenging data-fitting procedure based on full-wavefield modeling to extract quantitative information from seismograms. High-resolution imaging at half the propagated wavelength is expected. Recent advances in high-performance computing and multifold/multicomponent wide-aperture and wide-azimuth acquisitions make 3D acoustic FWI feasible today. Key ingredients of FWI are an efficient forward-modeling engine and a local differential approach, in which the gradient and the Hessian operators are efficiently estimated. Local optimization does not, however, prevent convergence of the misfit function toward local minima because of the limited accuracy of the starting model, the lack of low frequencies, the presence of noise, and the approximate modeling of the wave-physics complexity. Different hierarchical multiscale strategies are designed to mitigate the nonlinearity and ill-posedness of FWI by incorporating progressively shorter wavelengths in the parameter space. Synthetic and real-data case studies address reconstructing various parameters, from VP and VS velocities to density, anisotropy, and attenuation. This review attempts to illuminate the state of the art of FWI. Crucial jumps, however, remain necessary to make it as popular as migration techniques. The challenges can be categorized as (1) building accurate starting models with automatic procedures and/or recording low frequencies, (2) defining new minimization criteria to mitigate the sensitivity of FWI to amplitude errors and increasing the robustness of FWI when multiple parameter classes are estimated, and (3) improving computational efficiency by data-compression techniques to make 3D elastic FWI feasible.

2,981 citations
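The multiscale strategy the abstract describes — low frequencies first to avoid local minima ("cycle skipping") — can be illustrated on a deliberately tiny caricature: a one-parameter model m with forward map d(m, w) = sin(w·m). All numbers below are invented for illustration; this is not a wave-equation solver:

```python
import numpy as np

# J(m) = 0.5 * (sin(w*m) - sin(w*m_true))**2 has many local minima at
# high frequency w; sweeping frequencies from low to high keeps plain
# gradient descent in the basin of the global minimum.
m_true = 1.0

def descend(m, w, iters=500):
    """Plain gradient descent on J(m) at a single frequency w."""
    d_obs = np.sin(w * m_true)
    step = 0.5 / w**2
    for _ in range(iters):
        grad = (np.sin(w * m) - d_obs) * w * np.cos(w * m)
        m -= step * grad
    return m

m = 0.4                                # deliberately poor starting model
for w in (1.0, 20.0):                  # low frequency first, then high
    m = descend(m, w)
print(m)                               # recovers m_true

m_direct = descend(0.4, 20.0)          # high frequency only
print(m_direct)                        # cycle-skipped: stuck far from m_true
```

The same mechanism, scaled up to millions of parameters and a wave-equation forward model, is why FWI practitioners invest so heavily in starting models and low-frequency data.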


01 Jan 1998

TL;DR: This work states that all scale-spaces fulfilling a few fairly natural axioms are governed by parabolic PDEs with the original image as initial condition; a monotony axiom guarantees that, if one image is brighter than another, this order is preserved during the entire scale-space evolution.

Abstract: Preface Through many centuries physics has been one of the most fruitful sources of inspiration for mathematics. As a consequence, mathematics has become an economic language providing a few basic principles which allow to explain a large variety of physical phenomena. Many of them are described in terms of partial differential equations (PDEs). In recent years, however, mathematics also has been stimulated by other novel fields such as image processing. Goals like image segmentation, multiscale image representation, or image restoration cause a lot of challenging mathematical questions. Nevertheless, these problems frequently have been tackled with a pool of heuristic recipes. Since the treatment of digital images requires very much computing power, these methods had to be fairly simple. With the tremendous advances in computer technology in the last decade, it has become possible to apply more sophisticated techniques such as PDE-based methods which have been inspired by physical processes. Among these techniques, parabolic PDEs have found a lot of attention for smoothing and restoration purposes, see e.g. [113]. To restore images these equations frequently arise from gradient descent methods applied to variational problems. Image smoothing by parabolic PDEs is closely related to the scale-space concept where one embeds the original image into a family of subsequently simpler, more global representations of it. This idea plays a fundamental role for extracting semantically important information. The pioneering work of Alvarez, Guichard, Lions and Morel [11] has demonstrated that all scale-spaces fulfilling a few fairly natural axioms are governed by parabolic PDEs with the original image as initial condition. Within this framework, two classes can be justified in a rigorous way as scale-spaces: the linear diffusion equation with constant diffusivity and nonlinear so-called morphological PDEs.
All these methods satisfy a monotony axiom as smoothing requirement which states that, if one image is brighter than another, then this order is preserved during the entire scale-space evolution. An interesting class of parabolic equations which pursue both scale-space and restoration intentions is given by nonlinear diffusion filters. Methods of this type have been proposed for the first time by Perona and Malik in 1987 [190]. In order to smooth the image and to simultaneously enhance semantically important features such as edges, they apply a diffusion process whose diffusivity is steered by local image properties. These filters are difficult to analyse mathematically, as they may act locally like a backward diffusion process. …

2,484 citations
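The Perona-Malik idea described above — diffusivity steered by the local gradient so that edges diffuse slowly while flat regions smooth quickly — fits in a few lines. The scheme below is one common explicit discretization; the diffusivity choice, `kappa`, `dt`, and the periodic boundary handling are illustrative assumptions:

```python
import numpy as np

def perona_malik(u, iters=20, kappa=0.1, dt=0.2):
    """Explicit scheme for Perona-Malik nonlinear diffusion,
    u_t = div( g(|grad u|) grad u ), with diffusivity
    g(s) = 1 / (1 + (s/kappa)**2). Boundaries are periodic (np.roll)
    purely for brevity."""
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(iters):
        dn = np.roll(u, -1, axis=0) - u   # differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step image: diffusion flattens the noise, while the large jump
# at the edge sees a tiny diffusivity and survives.
rng = np.random.default_rng(0)
noisy = np.zeros((32, 32))
noisy[:, 16:] = 1.0
noisy += rng.normal(0.0, 0.05, noisy.shape)
smooth = perona_malik(noisy)
print(noisy[:, :14].std(), smooth[:, :14].std())   # noise std drops
```

The "backward diffusion" caveat in the preface is visible here too: for gradients above `kappa` the flux g(s)·s decreases with s, which sharpens edges but makes the continuous equation ill-posed.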


TL;DR: A surface plasmon polariton (SPP) is an electromagnetic excitation existing on the surface of a good metal, whose electromagnetic field decays exponentially with distance from the surface.

2,211 citations
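The exponential decay mentioned in this TL;DR follows from the standard SPP dispersion relation for a flat metal/dielectric interface, k_spp = k0·sqrt(εm·εd/(εm+εd)). A small sketch; the permittivity used for the metal is a rough textbook value for silver at 633 nm, an assumption rather than a fitted number:

```python
import numpy as np

wavelength = 633e-9
k0 = 2.0 * np.pi / wavelength
eps_metal = -18.0 + 0.5j   # rough value for silver at 633 nm (assumed)
eps_diel = 1.0             # air

# SPP wavenumber along the interface; its imaginary part gives the
# exponential damping of the surface wave as it propagates.
k_spp = k0 * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
L_prop = 1.0 / (2.0 * np.imag(k_spp))   # 1/e intensity propagation length

print(np.real(k_spp) / k0)   # effective index, slightly above 1
print(L_prop * 1e6)          # propagation length in micrometres
```

Because Re(k_spp) > k0, the SPP cannot be launched by a free-space plane wave alone, which is why prism or grating coupling schemes are needed in practice.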


TL;DR: In this paper, a quantum-mechanical description of the interaction between the electrons and the sample is discussed, followed by a powerful classical dielectric approach that can in practice be applied to more complex systems.

Abstract: This review discusses how low-energy, valence excitations created by swift electrons can render information on the optical response of structured materials with unmatched spatial resolution. Electron microscopes are capable of focusing electron beams on sub-nanometer spots and probing the target response either by analyzing electron energy losses or by detecting emitted radiation. Theoretical frameworks suited to calculate the probability of energy loss and light emission (cathodoluminescence) are revisited and compared with experimental results. More precisely, a quantum-mechanical description of the interaction between the electrons and the sample is discussed, followed by a powerful classical dielectric approach that can in practice be applied to more complex systems. We assess the conditions under which classical and quantum-mechanical formulations are equivalent. The excitation of collective modes such as plasmons is studied in bulk materials, planar surfaces, and nanoparticles. Light emission induced by the electrons is shown to constitute an excellent probe of plasmons, combining sub-nanometer resolution in the position of the electron beam with nanometer resolution in the emitted wavelength. Both electron energy-loss and cathodoluminescence spectroscopies performed in a scanning mode of operation yield snapshots of plasmon modes in nanostructures with fine spatial detail as compared to other existing imaging techniques, thus providing an ideal tool for nanophotonics studies.

1,288 citations