Author

Kurt J. Marfurt

Bio: Kurt J. Marfurt is an academic researcher from the University of Oklahoma. The author has contributed to research in topics: Seismic attribute & Curvature. The author has an h-index of 43 and has co-authored 482 publications receiving 10,815 citations. Previous affiliations of Kurt J. Marfurt include the University of Houston & ConocoPhillips.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors quantitatively compare finite-difference and finite-element solutions of the scalar and elastic hyperbolic wave equations for the most popular implicit and explicit time-domain and frequency-domain techniques.
Abstract: Numerical solutions of the scalar and elastic wave equations have greatly aided geophysicists in both forward modeling and migration of seismic wave fields in complicated geologic media, and they promise to be invaluable in solving the full inverse problem. This paper quantitatively compares finite-difference and finite-element solutions of the scalar and elastic hyperbolic wave equations for the most popular implicit and explicit time-domain and frequency-domain techniques. It is imperative that one choose the most cost-effective solution technique for a fixed degree of accuracy. To be of value, a solution technique must be able to minimize (1) numerical attenuation or amplification, (2) polarization errors, (3) numerical anisotropy, (4) errors in phase and group velocities, (5) extraneous numerical (parasitic) modes, (6) numerical diffraction and scattering, and (7) errors in reflection and transmission coefficients. This paper shows that in homogeneous media the explicit finite-element and finite-difference schemes are comparable when solving the scalar wave equation and when solving the elastic wave equations with Poisson's ratio less than 0.3. Finite elements are superior to finite differences when modeling elastic media with Poisson's ratio between 0.3 and 0.45. For both the scalar and elastic equations, the more costly implicit time-integration schemes such as the Newmark scheme are inferior to the explicit central-difference scheme, since time steps surpassing the Courant condition yield stable but highly inaccurate results. Frequency-domain finite-element solutions employing a weighted average of consistent and lumped masses yield the most accurate results, and they promise to be the most cost-effective method for CDP, well log, and interactive modeling.--Modified journal abstract.

861 citations
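
The abstract's point about explicit time stepping and the Courant condition can be made concrete with a small numerical sketch. The following is a minimal illustration, not the paper's code: an explicit central-difference (leapfrog) solution of the 1-D scalar wave equation with a Courant-number check. The velocity, grid spacing, and time step are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): explicit central-difference
# time stepping for the 1-D scalar wave equation u_tt = c^2 u_xx,
# illustrating the Courant stability limit discussed in the abstract.
import numpy as np

def scalar_wave_1d(c=2000.0, dx=5.0, dt=0.001, nx=401, nt=800):
    courant = c * dt / dx
    if courant > 1.0:                      # explicit scheme is unstable beyond the CFL limit
        raise ValueError(f"Courant number {courant:.2f} exceeds 1: unstable")
    u_prev = np.zeros(nx)                  # wavefield at t - dt
    u_curr = np.zeros(nx)                  # wavefield at t
    u_curr[nx // 2] = 1.0                  # crude point-source initial condition
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]   # second difference in x
        u_next = 2.0 * u_curr - u_prev + courant**2 * lap           # leapfrog update in time
        u_prev, u_curr = u_curr, u_next
    return u_curr

snapshot = scalar_wave_1d()
```

Raising dt until c*dt/dx exceeds 1 reproduces the instability the Courant condition guards against; implicit schemes remain stable beyond this limit but, as the paper notes, at the cost of accuracy.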

Journal ArticleDOI
TL;DR: In this paper, a multitrace, semblance-based coherency algorithm was proposed to analyze data of lesser quality than the original three-trace cross-correlation-based algorithm.
Abstract: Seismic coherency is a measure of lateral changes in the seismic response caused by variation in structure, stratigraphy, lithology, porosity, and the presence of hydrocarbons. Unlike shaded relief maps that allow 3-D visualization of faults and channels from horizon picks, seismic coherency operates on the seismic data itself and is therefore unencumbered by interpreter or automatic picker biases. We present a more robust, multitrace, semblance-based coherency algorithm that allows us to analyze data of lesser quality than our original three-trace cross-correlation-based algorithm. This second-generation, semblance-based coherency algorithm provides improved vertical resolution over our original zero mean crosscorrelation algorithm, resulting in reduced mixing of overlying or underlying stratigraphic features. In general, we analyze stratigraphic features using as narrow a temporal analysis window as possible, typically determined by the highest usable frequency in the input seismic data. In the limit, one may confidently apply our new semblance-based algorithm to a one-sample-thick seismic volume extracted along a conventionally picked stratigraphic horizon corresponding to a peak or trough whose amplitudes lie sufficiently above the ambient seismic noise. In contrast, near-vertical structural features, such as faults, are better enhanced when using a longer temporal analysis window corresponding to the lowest usable frequency in the input data. The calculation of reflector dip/azimuth throughout the data volume allows us to generalize the calculation of conventional complex trace attributes (including envelope, phase, frequency, and bandwidth) to the calculation of complex reflector attributes generated by slant stacking the input data along the reflector dip within the coherency analysis window. These more robust complex reflector attribute cubes can be combined with coherency and dip/azimuth cubes using conventional geostatistical, clustering, and segmentation algorithms to provide an integrated, multiattribute analysis.

735 citations
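
As a rough illustration of the semblance measure underlying this algorithm (a sketch, not the published implementation; it ignores dip steering, and the window length and trace count are arbitrary assumptions):

```python
# Minimal sketch: semblance-based coherence for a small group of
# neighboring traces over a short vertical analysis window, following
# the standard definition of semblance.
import numpy as np

def semblance_coherence(traces, half_window=5):
    """traces: 2-D array (n_samples, n_traces) of neighboring seismic traces."""
    n_samples, n_traces = traces.shape
    coh = np.zeros(n_samples)
    for t in range(n_samples):
        lo = max(0, t - half_window)
        hi = min(n_samples, t + half_window + 1)
        window = traces[lo:hi, :]                       # (window_len, n_traces)
        num = np.sum(np.sum(window, axis=1) ** 2)       # energy of the stacked trace
        den = n_traces * np.sum(window ** 2)            # total energy of all traces
        coh[t] = num / den if den > 0 else 0.0
    return coh

rng = np.random.default_rng(0)
fake_gather = rng.standard_normal((500, 9))             # 9 neighboring traces, noise only
c = semblance_coherence(fake_gather)
```

Laterally continuous reflectors drive the measure toward 1, while faulted or noisy zones lower it; the vertical half-window plays the role of the temporal analysis window discussed in the abstract.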

Journal ArticleDOI
TL;DR: This paper outlines the creation of an updated and upgraded Marmousi model and data set, named Marmousi2, thereby extending the usefulness of the model for, hopefully, some time to come.
Abstract: The original Marmousi model was created by a consortium led by the Institut Francais du Petrole (IFP) in 1988. Since its creation, the model and its acoustic finite-difference synthetic data have been used by hundreds of researchers throughout the world for a multitude of geophysical purposes, and to this day they remain one of the most published geophysical data sets. The advancement in computer hardware capabilities since the late 1980s has made it possible to perform a major upgrade to the model and data set, thereby extending the usefulness of the model for, hopefully, some time to come. This paper outlines the creation of an updated and upgraded Marmousi model and data set which we have named Marmousi2.

536 citations

Journal ArticleDOI
TL;DR: This paper introduces the basic eigenstructure approach for computing coherence, followed by a comparison on data from the Gulf of Mexico, and develops a theoretical connection between the well-known semblance and the less well-known eigenstructure measures of coherence in terms of the eigenvalues of the data covariance matrix.
Abstract: Coherence measures applied to 3-D seismic data volumes have proven to be an effective method for imaging geological discontinuities such as faults and stratigraphic features. By removing the seismic wavelet from the data, seismic coherence offers interpreters a different perspective, often exposing subtle features not readily apparent in the seismic data. Several formulations exist for obtaining coherence estimates. The first three generations of coherence algorithms at Amoco are based, respectively, on cross correlation, semblance, and an eigendecomposition of the data covariance matrix. Application of these three generations to data from the Gulf of Mexico indicates that the implementation of the eigenstructure approach described in this paper produces the most robust results. This paper first introduces the basic eigenstructure approach for computing coherence followed by a comparison on data from the Gulf of Mexico. Next, Appendix A develops a theoretical connection between the well-known semblance and the less well-known eigenstructure measures of coherence in terms of the eigenvalues of the data covariance matrix. Appendix B further extends the analysis by comparing the semblance- and eigenstructure-based coherence measures in the presence of additive uncorrelated noise.

445 citations
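
A compact way to see the eigenstructure measure described above: for one analysis window, coherence is taken as the largest eigenvalue of the trace covariance matrix divided by its trace (the sum of all eigenvalues). The sketch below is illustrative only, not Amoco's implementation, and ignores dip correction.

```python
# Minimal sketch: eigenstructure coherence as the ratio of the largest
# eigenvalue of the data covariance matrix to its trace, computed for
# one analysis window of neighboring traces.
import numpy as np

def eigenstructure_coherence(window):
    """window: 2-D array (n_samples_in_window, n_traces)."""
    cov = window.T @ window                 # n_traces x n_traces covariance matrix
    eigvals = np.linalg.eigvalsh(cov)       # ascending; covariance is symmetric
    total_energy = np.trace(cov)            # equals the sum of the eigenvalues
    return eigvals[-1] / total_energy if total_energy > 0 else 0.0

rng = np.random.default_rng(0)
coherent = np.outer(np.sin(np.linspace(0, 6, 11)), np.ones(5))   # identical traces
noisy = coherent + 0.5 * rng.standard_normal(coherent.shape)
print(eigenstructure_coherence(coherent), eigenstructure_coherence(noisy))
```

Identical traces give a rank-one covariance matrix and a coherence of exactly 1; additive uncorrelated noise spreads energy across the remaining eigenvalues and lowers the value, which is the situation analyzed in Appendix B of the paper.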


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, and approaches to combining models are covered in this book on machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: This review attempts to illuminate the state of the art of FWI and identifies the remaining challenges: building accurate starting models with automatic procedures and/or recording low frequencies, defining new minimization criteria, and improving computational efficiency by data-compression techniques to make 3D elastic FWI feasible.
Abstract: Full-waveform inversion (FWI) is a challenging data-fitting procedure based on full-wavefield modeling to extract quantitative information from seismograms. High-resolution imaging at half the propagated wavelength is expected. Recent advances in high-performance computing and multifold/multicomponent wide-aperture and wide-azimuth acquisitions make 3D acoustic FWI feasible today. Key ingredients of FWI are an efficient forward-modeling engine and a local differential approach, in which the gradient and the Hessian operators are efficiently estimated. Local optimization does not, however, prevent convergence of the misfit function toward local minima because of the limited accuracy of the starting model, the lack of low frequencies, the presence of noise, and the approximate modeling of the wave-physics complexity. Different hierarchical multiscale strategies are designed to mitigate the nonlinearity and ill-posedness of FWI by incorporating progressively shorter wavelengths in the parameter space. Synthetic and real-data case studies address reconstructing various parameters, from VP and VS velocities to density, anisotropy, and attenuation. This review attempts to illuminate the state of the art of FWI. Crucial jumps, however, remain necessary to make it as popular as migration techniques. The challenges can be categorized as (1) building accurate starting models with automatic procedures and/or recording low frequencies, (2) defining new minimization criteria to mitigate the sensitivity of FWI to amplitude errors and increasing the robustness of FWI when multiple parameter classes are estimated, and (3) improving computational efficiency by data-compression techniques to make 3D elastic FWI feasible.

2,981 citations
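
The overall workflow described in the abstract (local optimization driven by an adjoint-computed gradient, nested inside a low-to-high frequency multiscale loop) can be sketched as follows. This is a structural outline only; `forward`, `adjoint_gradient`, and `lowpass` are hypothetical placeholders for a real modeling engine, adjoint-state gradient, and band-limiting filter, and the frequency bands, iteration count, and step length are arbitrary assumptions.

```python
# Structural sketch of a multiscale FWI loop (assumptions throughout,
# not from the review). The callables must be supplied by a real code.
import numpy as np

def multiscale_fwi(m0, d_obs, forward, adjoint_gradient, lowpass,
                   freq_bands=(3.0, 6.0, 12.0), n_iter=20, step=1e-3):
    m = m0.copy()
    for fmax in freq_bands:                    # low frequencies first to mitigate nonlinearity
        d_band = lowpass(d_obs, fmax)
        for _ in range(n_iter):
            d_syn = lowpass(forward(m), fmax)  # full-wavefield modeling of the current model
            residual = d_syn - d_band
            misfit = 0.5 * float(residual @ residual)    # could be logged to monitor convergence
            grad = adjoint_gradient(m, residual, fmax)   # one adjoint simulation per gradient
            m = m - step * grad                # steepest-descent update (no Hessian here)
    return m

# Dummy stand-ins just to exercise the loop shape (no physics):
m_est = multiscale_fwi(
    m0=np.zeros(10),
    d_obs=np.ones(100),
    forward=lambda m: np.tile(m, 10),
    adjoint_gradient=lambda m, r, f: r.reshape(10, 10).sum(axis=0),
    lowpass=lambda d, fmax: d,
)
```

Starting from the lowest usable frequencies mitigates the nonlinearity and the risk of converging to a local minimum, as the review emphasizes.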

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: The adjoint-state method as discussed by the authors is a well-known method in the numerical community for computing the gradient of a functional with respect to the model parameters when this functional depends on those model parameters through state variables, which are solutions of the forward problem.
Abstract: Estimating the model parameters from measured data generally consists of minimizing an error functional. A classic technique to solve a minimization problem is to successively determine the minimum of a series of linearized problems. This formulation requires the Fréchet derivatives (the Jacobian matrix), which can be expensive to compute. If the minimization is viewed as a non-linear optimization problem, only the gradient of the error functional is needed. This gradient can be computed without the Fréchet derivatives. In the 1970s, the adjoint-state method was developed to efficiently compute the gradient. It is now a well-known method in the numerical community for computing the gradient of a functional with respect to the model parameters when this functional depends on those model parameters through state variables, which are solutions of the forward problem. However, this method is less well understood in the geophysical community. The goal of this paper is to review the adjoint-state method. The idea is to define some adjoint-state variables that are solutions of a linear system. The adjoint-state variables are independent of the model parameter perturbations and in a way gather the perturbations with respect to the state variables. The adjoint-state method is efficient because only one extra linear system needs to be solved. Several applications are presented. When applied to the computation of the derivatives of the ray trajectories, the link with the propagator of the perturbed ray equation is established.

1,514 citations
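
The mechanics of the adjoint-state method reviewed here can be demonstrated on a toy problem. In the sketch below (an illustration, not the paper's examples), the forward problem is a small linear system A(m) u = s, the misfit is J(m) = 0.5 * ||P u - d||^2, and the gradient obtained from one extra (adjoint) solve is verified against finite differences; all matrices and data are randomly generated stand-ins.

```python
# Toy adjoint-state gradient: A(m) = diag(m) + K, so dA/dm_i is a simple
# basis matrix and dJ/dm_i = -lam_i * u_i after one adjoint solve.
import numpy as np

rng = np.random.default_rng(1)
n = 6
K = 0.1 * rng.standard_normal((n, n))          # fixed, model-independent coupling
s = rng.standard_normal(n)                     # source term
P = np.eye(n)[:3]                              # observe the first 3 state variables
d = rng.standard_normal(3)                     # "observed" data
m = 1.0 + rng.random(n)                        # model parameters

def A(m):
    return np.diag(m) + K

def misfit(m):
    u = np.linalg.solve(A(m), s)               # forward problem: one linear solve
    r = P @ u - d
    return 0.5 * r @ r

def adjoint_gradient(m):
    u = np.linalg.solve(A(m), s)               # state variables
    r = P @ u - d
    lam = np.linalg.solve(A(m).T, P.T @ r)     # adjoint-state variables: one extra solve
    return -lam * u                            # dJ/dm_i = -lam^T (dA/dm_i) u = -lam_i * u_i

g_adj = adjoint_gradient(m)
eps = 1e-6                                     # central finite-difference check
g_fd = np.array([(misfit(m + eps * np.eye(n)[i]) - misfit(m - eps * np.eye(n)[i]))
                 / (2 * eps) for i in range(n)])
assert np.allclose(g_adj, g_fd, atol=1e-6)
```

The key point matches the abstract: the adjoint-state variables come from a single additional linear solve, after which the gradient with respect to every model parameter follows without ever forming the Fréchet derivatives.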