Journal ArticleDOI

# Paraxial geometrical optics for quasi-P waves: theories and numerical methods

01 Mar 2002-Wave Motion (Elsevier)-Vol. 35, Iss: 3, pp 205-221
TL;DR: In this article, a paraxial eikonal solver for quasi-P waves in anisotropic solids was proposed, with adaptive gridding to treat the upwind singularity at the point source.

### 1. Introduction

• In modern seismic exploration for hydrocarbon reservoirs, the quasi-P wave in anisotropic solids is of practical importance both for improving maximum imaging resolution and for obtaining accurate estimates of elastic parameters [1,6,8,10,11,29,44,45], because shales, among many other rocks, are anisotropic at different length scales in the subsurface structure.
• To solve these inverse problems, the industry favors the direct linearized method, which seeks direct, closed-form solutions by utilizing inverse scattering techniques such as the Born (or Rytov) approximation and asymptotic ray theory [3,4,6,11,42], because its computational cost is lower than that of other methods such as nonlinear optimization techniques.
• To obtain the traveltime and amplitude function, the authors solve the quasi-P eikonal equation and a variant of the transport equation by finite-difference methods.
• The authors extend this approach to compute the traveltimes and amplitudes in anisotropic solids.
• In Section 2, the authors give some background on geometrical optics of quasi-P waves, including the eikonal equation for traveltimes and the transport equation for amplitudes.

### 2. Background on geometrical optics

• The individual terms in (2) become smoother as n increases, so the most singular part of the wave is captured by the leading terms n = 0 and n = 1. Substituting the ansatz (2) into the wave equation (1) and equating the individual coefficients of f^(n−2) to zero successively yields a recursive system (3) for the phase τ and the amplitudes A^(n).
• Note that all of these quantities depend on the spatial coordinate vector x = (x1, x2, x3), though in this and some of the following displays this dependence is suppressed for the sake of clarity.
• By using p_i = n_i/V, where n = (n_i) is the unit normal vector to the wavefront and V is the normal (phase) speed of the wavefront, Eq. (5) yields a cubic characteristic polynomial in V²; it therefore has three eigenvalues, corresponding to one quasi-longitudinal (“quasi-P” or “qP”) wave and two quasi-transverse waves.
• The convexity of the quasi-P slowness sheet is essential in constructing the paraxial approximation for the quasi-P eikonal equation.
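
The eigenvalue statement above can be illustrated numerically. The sketch below (ours, not the paper's code) builds the Christoffel matrix Γ_ik = c_ijkl n_j n_l / ρ for a direction n; its eigenvalues are the squared phase speeds V² of the qP wave and the two quasi-S waves. The stiffness tensor, density, and moduli used here are hypothetical isotropic examples, chosen so the expected speeds are known in closed form.

```python
import numpy as np

def christoffel_speeds(c, rho, n):
    """Phase speeds for propagation direction n: the square roots of the
    eigenvalues of Gamma_ik = c_ijkl n_j n_l / rho, qP (largest) first."""
    gamma = np.einsum('ijkl,j,l->ik', c, n, n) / rho
    return np.sqrt(np.sort(np.linalg.eigvalsh(gamma))[::-1])

def isotropic_stiffness(lam, mu):
    """Isotropic stiffness tensor c_ijkl from the Lame moduli."""
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))

rho, lam, mu = 2000.0, 8e9, 4e9        # density (kg/m^3), Lame moduli (Pa)
c = isotropic_stiffness(lam, mu)
n = np.array([1.0, 0.0, 0.0])          # wavefront normal
v = christoffel_speeds(c, rho, n)      # [Vp, Vs, Vs] in the isotropic case
```

For the isotropic tensor the eigenvalues degenerate to Vp² and a double root Vs²; in a genuinely anisotropic medium all three sheets separate, and the convexity of the qP (largest-eigenvalue) sheet is what the paraxial construction relies on.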

### 3. Paraxial geometrical optics for quasi-P waves: theories

• Therefore, each eigenvalue takes the form v²(x, p)|p|², where v is a homogeneous function of degree zero in p.
• Equations (22) are therefore advection equations in Cartesian coordinates for the two ray parameters q1 and q2: v_i ∂q_ν/∂x_i = 0, ν = 1, 2.
• The authors have to incorporate information about the group-velocity direction into the paraxial Hamiltonian.
• Due to the strict convexity of the slowness surface, the explicit restriction imposed on the slowness surface in [33] also implies an implicit restriction on the group-velocity vector.
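
As a hedged illustration of the advection equations (22), the sketch below (our own simplification, not the authors' scheme) marches the steady equation v1 ∂q/∂x1 + v3 ∂q/∂x3 = 0 in depth with first-order upwind differencing, using the paraxial assumption that the depth component v3 of the group velocity is positive so that x3 plays the role of "time". Periodic boundaries in x1 are used purely for brevity.

```python
import numpy as np

def advect_ray_parameter(q0, v1, v3, dx, dz):
    """March q down in depth: q0 holds surface values, v1 and v3 are
    (nz, nx) arrays of group-velocity components with v3 > 0."""
    nz, nx = v1.shape
    q = np.empty((nz, nx))
    q[0] = q0
    for m in range(nz - 1):
        a = v1[m] / v3[m]                     # local transport speed dx1/dx3
        qm = q[m]
        dq = np.where(a > 0,
                      qm - np.roll(qm, 1),    # backward (upwind) difference
                      np.roll(qm, -1) - qm)   # forward (upwind) difference
        q[m + 1] = qm - (dz / dx) * a * dq
    return q
```

A quick sanity check of the design: a spatially constant ray parameter is a solution of the advection equation, so the marching scheme must transport it unchanged.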

### 4.1. Numerical algorithms for paraxial Hamiltonian

• Therefore the authors have devised some numerical algorithms for computing the paraxial Hamiltonian.
• Since the theoretical results proved in [33] are constructive, the design of the algorithms in [30,31] basically follows those constructions.

### 4.2. High-order finite-difference schemes

• Because the amplitude involved in the geometrical optics term is related to the second-order derivatives of traveltimes, a first-order accurate amplitude field requires a third-order accurate traveltime field.
• The second- and third-order WENO schemes for D±_{x2} τ_{m,k} are defined similarly.
• To adapt numerical schemes for Hamilton–Jacobi equations to the solution of eikonal equations, the authors need a so-called numerical flux function Ĥ_Δ, defined in [27].
• In isotropic media and in transversely isotropic media with a vertical symmetry axis (VTI), the above flux can be simplified significantly; see [33].
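
The one-sided WENO derivatives mentioned above can be sketched as follows. This is our reconstruction of a standard third-order WENO formula for Hamilton–Jacobi equations (in the style of Jiang and Peng), not the paper's exact scheme: two second-order sub-stencils are blended with a nonlinear weight that drops the cross-discontinuity stencil where the field is rough.

```python
import numpy as np

def weno3_minus(phi, i, h, eps=1e-6):
    """Left-biased third-order WENO approximation of d(phi)/dx at node i,
    of the kind used for upwind traveltime derivatives D^-_x tau.
    Uses the values phi[i-2 .. i+1]."""
    # Smoothness indicators: squared second differences on the two stencils
    s0 = (phi[i] - 2*phi[i-1] + phi[i-2])**2
    s1 = (phi[i+1] - 2*phi[i] + phi[i-1])**2
    r = (eps + s0) / (eps + s1)
    w = 1.0 / (1.0 + 2.0*r*r)        # w -> 1/3 where phi is smooth
    central = (phi[i+1] - phi[i-1]) / (2*h)                 # 2nd-order central
    one_sided = (3*phi[i] - 4*phi[i-1] + phi[i-2]) / (2*h)  # 2nd-order upwind
    return (1.0 - w)*central + w*one_sided
```

Since both sub-stencils are exact for quadratics, any convex combination reproduces the derivative of a quadratic exactly; the weight only matters near kinks, where it biases the stencil to the smooth side.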

### 4.4. Adaptive gridding for the singularity at the source

• The traveltime field is mostly smooth, and the use of upwind differencing in the eikonal solvers confines the errors due to singularities which develop away from the source point.
• The source point itself is, however, also an upwind singularity.
• The truncation error of a pth-order method is dominated by the product of the (p + 1)th derivatives of the traveltime field and the (p + 1)th power of the grid step(s).
• To treat the upwind singularity at the source, Kim and Cook [22] refine the computational grid several times near the source so that the reduced grid spacing compensates for the increased truncation error there; this is similar to the adaptive-gridding approach [32] that the authors advocated for isotropic solids.
• Thus the authors extend the adaptive-gridding approach in [32] to the anisotropic case, and the fundamental principle is similar to the isotropic case.
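
The truncation-error argument above can be turned into a back-of-the-envelope sketch (our illustration, not a formula from the paper): since the traveltime behaves like distance over velocity near the source, its (p + 1)th derivatives grow like r^(−p) at distance r from the source, so bounding h^(p+1) · r^(−p) by a tolerance suggests how the local grid step should shrink toward the source.

```python
# Local step h(r) keeping the estimated truncation error h**(p+1) * r**(-p)
# below tol for a p-th order scheme; tol and the scaling are illustrative.
def local_step(r, p, tol):
    return (tol * r**p) ** (1.0 / (p + 1))
```

For p = 2 this gives h ∝ r^(2/3): the admissible step shrinks as the source is approached, which is the principle behind refining the grid near the source and coarsening it away from it.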

### 5.1. Geometrical optics for 2D VTI solids

• In the case of VTI, an explicit form of the paraxial Hamiltonian can be found [33]; consequently the authors use that paraxial Hamiltonian in their numerical experiments.
• As shown above, a certain Jacobian from Cartesian coordinates to ray coordinates is needed in the amplitude computation.
• As in the isotropic case, the authors use the adaptive-gridding approach for solving the paraxial eikonal equation (33).
• The technique for solving the advection equation for take-off angles in the isotropic case [32] can be modified to solve Eq. (35) as well, so the details are omitted here.

### 5.2. Examples

• To illustrate that the adaptive-gridding strategy is efficient and accurate for anisotropic media, the authors present numerical experiments for a 2D VTI solid.
• Similar observations hold for the derivative ∂τ/∂z; see Fig. 5. The authors then discuss the computational results for the take-off angle and its derivatives.
• Fig. 7(a) shows the contours of the derivative ∂q1/∂x computed by the adaptive-gridding approach.
• Fig. 7(b) shows the calibration result at the bottom z = 1 km for the adaptive-gridding approach, using the analytical solution.
• The authors expect that the approach can handle traveltime and amplitude computation for general anisotropic media as well.

### 6. Conclusions

• Paraxial geometrical optics provides a framework for computing traveltimes and amplitudes of quasi-P waves by finite-difference methods on Cartesian grids.
• These quantities can in turn be used efficiently in Kirchhoff migration and inversion.
• To solve the paraxial eikonal equations to high-order accuracy, the authors introduced the second- and third-order WENO Runge–Kutta schemes.
• To treat the upwind singularity at the point source, the authors extended the adaptive-gridding approach originally designed for isotropic solids to anisotropic solids.
• The numerical experiments for 2D VTI solids show that the algorithms are efficient and accurate.


##### Citations
Journal ArticleDOI
TL;DR: Novel ordering strategies are proposed so that the fast sweeping method can be extended efficiently and easily to any unstructured mesh, and the new algorithm is proved to converge in a finite number of iterations independent of mesh size.
Abstract: The original fast sweeping method, which is an efficient iterative method for stationary Hamilton-Jacobi equations, relies on natural ordering provided by a rectangular mesh. We propose novel ordering strategies so that the fast sweeping method can be extended efficiently and easily to any unstructured mesh. To that end we introduce multiple reference points and order all the nodes according to their $l^p$-metrics to those reference points. We show that these orderings satisfy the two most important properties underlying the fast sweeping method: (1) these orderings can cover all directions of information propagation efficiently; (2) any characteristic can be decomposed into a finite number of pieces and each piece can be covered by one of the orderings. We prove the convergence of the new algorithm. The computational complexity of the algorithm is nearly optimal in the sense that the total computational cost consists of $O(M)$ flops for iteration steps and $O(M \log M)$ flops for sorting at the predetermined initialization step, which can be efficiently optimized by adopting a linear-time sorting method, where $M$ is the total number of mesh points. Extensive numerical examples demonstrate that the new algorithm converges in a finite number of iterations independent of mesh size.
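
The original fast sweeping method referred to above can be sketched for the isotropic eikonal equation |∇τ| = 1/v with constant v = 1 on a rectangular mesh, where the natural orderings are the four diagonal Gauss-Seidel sweeps. This is a minimal illustration using Zhao's well-known local solver; the parameter names and grid sizes are ours.

```python
import numpy as np

def fast_sweep(n, h, src):
    """Fast sweeping for |grad tau| = 1 on an n-by-n grid with spacing h
    and a point source at grid index src (where tau is fixed to 0)."""
    tau = np.full((n, n), 1e10)
    tau[src] = 0.0
    for _ in range(8):  # outer iterations; constant speed converges quickly
        for sx, sy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:  # four orderings
            for i in range(n)[::sx]:
                for j in range(n)[::sy]:
                    if (i, j) == src:
                        continue
                    a = min(tau[max(i - 1, 0), j], tau[min(i + 1, n - 1), j])
                    b = min(tau[i, max(j - 1, 0)], tau[i, min(j + 1, n - 1)])
                    if abs(a - b) >= h:       # one-sided update
                        t = min(a, b) + h
                    else:                     # two-sided (quadratic) update
                        t = 0.5 * (a + b + np.sqrt(2*h*h - (a - b)**2))
                    tau[i, j] = min(tau[i, j], t)
    return tau
```

With unit speed the solution approximates the Euclidean distance to the source; the scheme is exact along grid lines through the source and first-order accurate elsewhere, which is exactly the point-source accuracy limitation the adaptive-gridding discussion in the main article addresses.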

174 citations

Journal ArticleDOI
TL;DR: Numerical examples validate both the accuracy and the efficiency of the new fast sweeping method for static Hamilton–Jacobi equations with convex Hamiltonians.
Abstract: We develop a fast sweeping method for static Hamilton---Jacobi equations with convex Hamiltonians. Local solvers and fast sweeping strategies apply to structured and unstructured meshes. With causality correctly enforced during the sweepings, numerical evidence indicates that the fast sweeping method converges in a finite number of iterations independent of mesh size. Numerical examples validate both the accuracy and the efficiency of the new method.

157 citations

Journal ArticleDOI
TL;DR: In this article, a level-set-based Eulerian approach was proposed to capture all three coupled wave modes as solutions of the anisotropic eikonal equation, including the quasi-transverse (quasi-S) waves with cusps.

57 citations

### Cites methods from "Paraxial geometrical optics for qua..."

• ...The paraxial eikonal solver in [31,33,34] is developed specifically for capturing qP wavefronts....


• ...See [34] for an Eulerian approach to constructing a complete geometric optics term for quasi-P waves in the framework of paraxial geometric optics....


Journal ArticleDOI
TL;DR: The solution for the eikonal equation with a point-source condition has an upwind singularity at the source point, as the eikonal solution behaves like a distance function at and near the source.
Abstract: The solution for the eikonal equation with a point-source condition has an upwind singularity at the source point as the eikonal solution behaves like a distance function at and near the source. As...

52 citations

### Cites background from "Paraxial geometrical optics for qua..."

• ...Because of the tremendous number of its applications, the eikonal equation has been tackled from many different perspectives, resulting in the vast literature on the topic; see [21, 35, 20, 34, 30, 6, 9, 23, 24, 25, 26, 33, 10, 12, 40, 39, 8, 27, 28, 11, 14, 5, 29, 2, 1, 38, 15] and references therein....


Journal ArticleDOI

31 citations

##### References
Book
01 Jan 1959
TL;DR: In this paper, the authors discuss various topics about optics, such as geometrical theories, image forming instruments, and optics of metals and crystals, including interference, interferometers, and diffraction.
Abstract: The book is comprised of 15 chapters that discuss various topics about optics, such as geometrical theories, image forming instruments, and optics of metals and crystals. The text covers the elements of the theories of interference, interferometers, and diffraction. The book tackles several behaviors of light, including its diffraction when exposed to ultrasonic waves.

19,815 citations

Journal ArticleDOI
TL;DR: In this paper, the PSC algorithm approximates Hamilton-Jacobi equations with parabolic right-hand sides using techniques from hyperbolic conservation laws; it can also be applied to more general surface-motion problems.

13,020 citations

Book
01 Jan 1947
TL;DR: In this book, the authors cover the algebra of linear transformations and quadratic forms, series expansions of arbitrary functions, linear integral equations, the calculus of variations, and vibration and eigenvalue problems.
Abstract: Partial table of contents: THE ALGEBRA OF LINEAR TRANSFORMATIONS AND QUADRATIC FORMS. Transformation to Principal Axes of Quadratic and Hermitian Forms. Minimum-Maximum Property of Eigenvalues. SERIES EXPANSION OF ARBITRARY FUNCTIONS. Orthogonal Systems of Functions. Measure of Independence and Dimension Number. Fourier Series. Legendre Polynomials. LINEAR INTEGRAL EQUATIONS. The Expansion Theorem and Its Applications. Neumann Series and the Reciprocal Kernel. The Fredholm Formulas. THE CALCULUS OF VARIATIONS. Direct Solutions. The Euler Equations. VIBRATION AND EIGENVALUE PROBLEMS. Systems of a Finite Number of Degrees of Freedom. The Vibrating String. The Vibrating Membrane. Green's Function (Influence Function) and Reduction of Differential Equations to Integral Equations. APPLICATION OF THE CALCULUS OF VARIATIONS TO EIGENVALUE PROBLEMS. Completeness and Expansion Theorems. Nodes of Eigenfunctions. SPECIAL FUNCTIONS DEFINED BY EIGENVALUE PROBLEMS. Bessel Functions. Asymptotic Expansions. Additional Bibliography. Index.

7,426 citations

Book
01 Feb 1996
TL;DR: This book presents Newton's method and secant (quasi-Newton) methods for systems of nonlinear equations and unconstrained minimization, together with a modular system of algorithms by Robert Schnabel in the appendix.
Abstract: Preface 1. Introduction. Problems to be considered Characteristics of 'real-world' problems Finite-precision arithmetic and measurement of error Exercises 2. Nonlinear Problems in One Variable. What is not possible Newton's method for solving one equation in one unknown Convergence of sequences of real numbers Convergence of Newton's method Globally convergent methods for solving one equation in one uknown Methods when derivatives are unavailable Minimization of a function of one variable Exercises 3. Numerical Linear Algebra Background. Vector and matrix norms and orthogonality Solving systems of linear equations-matrix factorizations Errors in solving linear systems Updating matrix factorizations Eigenvalues and positive definiteness Linear least squares Exercises 4. Multivariable Calculus Background Derivatives and multivariable models Multivariable finite-difference derivatives Necessary and sufficient conditions for unconstrained minimization Exercises 5. Newton's Method for Nonlinear Equations and Unconstrained Minimization. Newton's method for systems of nonlinear equations Local convergence of Newton's method The Kantorovich and contractive mapping theorems Finite-difference derivative methods for systems of nonlinear equations Newton's method for unconstrained minimization Finite difference derivative methods for unconstrained minimization Exercises 6. Globally Convergent Modifications of Newton's Method. The quasi-Newton framework Descent directions Line searches The model-trust region approach Global methods for systems of nonlinear equations Exercises 7. Stopping, Scaling, and Testing. Scaling Stopping criteria Testing Exercises 8. Secant Methods for Systems of Nonlinear Equations. Broyden's method Local convergence analysis of Broyden's method Implementation of quasi-Newton algorithms using Broyden's update Other secant updates for nonlinear equations Exercises 9. Secant Methods for Unconstrained Minimization. 
The symmetric secant update of Powell Symmetric positive definite secant updates Local convergence of positive definite secant methods Implementation of quasi-Newton algorithms using the positive definite secant update Another convergence result for the positive definite secant method Other secant updates for unconstrained minimization Exercises 10. Nonlinear Least Squares. The nonlinear least-squares problem Gauss-Newton-type methods Full Newton-type methods Other considerations in solving nonlinear least-squares problems Exercises 11. Methods for Problems with Special Structure. The sparse finite-difference Newton method Sparse secant methods Deriving least-change secant updates Analyzing least-change secant methods Exercises Appendix A. A Modular System of Algorithms for Unconstrained Minimization and Nonlinear Equations (by Robert Schnabel) Appendix B. Test Problems (by Robert Schnabel) References Author Index Subject Index.

6,831 citations

Book
01 Mar 1983
TL;DR: Covers Newton's method for nonlinear equations and unconstrained minimization, globally convergent modifications, secant methods, and methods for nonlinear least-squares problems and problems with special structure.

6,217 citations

### "Paraxial geometrical optics for qua..." refers methods in this paper

• ...for example, Newton's method can solve this system effectively [13]....
