The conjugate gradient method

The conjugate gradient method can be used to solve many large linear geophysical problems, for example least-squares parabolic and hyperbolic Radon transforms, traveltime tomography, least-squares migration, and full-waveform inversion (FWI).[1] This tutorial revisits the Linear inversion tutorial (Hall, 2016), which estimated reflectivity by deconvolving a known wavelet from a seismic trace using least squares. This tutorial solves the same problem using the conjugate gradient method. The problem is easy to understand, and the concepts apply to other applications. The conjugate gradient method is often used to solve large problems because direct least-squares solvers are much more expensive: even a large computer may not be able to find a useful solution in a reasonable amount of time.

Introduction

The conjugate gradient method was originally proposed by Hestenes and Stiefel (1952) and extended to handle rectangular matrices by Paige and Saunders (1982).[2][3] Claerbout (2012) demonstrates its application to geophysical problems.[4] It is an iterative method: each iteration applies the linear operator and its adjoint. The initial guess is often the zero vector, and computation may stop after only a few iterations.

The adjoint of the operator $A$, denoted $A^*$, is defined as the operator that satisfies $\langle Ax, y \rangle = \langle x, A^* y \rangle$ for all vectors $x$ and $y$ (where $\langle u, v \rangle$ represents the inner product between vectors $u$ and $v$). For a given matrix, the adjoint is simply the complex conjugate of the transpose of the matrix; this is also sometimes known as the Hermitian transpose and is sometimes written as $A^H$ or $A^\dagger$. Just to muddy the notation water even further, the complex conjugate transpose is denoted by A.H in NumPy and A' in Matlab or Octave. However, we will implement the adjoint operator without forming any matrices.
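
As a quick numerical sanity check (my own sketch, not part of the original tutorial), the defining property can be verified for a small random matrix, with A.conj().T playing the role of the adjoint:

import numpy as np

# Illustrative check of the adjoint property <Ax, y> = <x, A*y>
# for a small random complex matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = np.vdot(A @ x, y)            # <Ax, y>
rhs = np.vdot(x, A.conj().T @ y)   # <x, A*y>
print(np.allclose(lhs, rhs))       # True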

Many linear operators can be programmed as functions that are more intuitive and efficient than matrix multiplication. The matrices for operators like migration and FWI would be huge, but we avoid this problem: once you have the program for the linear operator, you can write the adjoint operator without computing matrices. Implementing the conjugate gradient algorithm using functions to apply linear operators and their adjoints is practical and efficient. It is wonderful to see programs that implement linear algorithms without matrices, and this programming technique is a key theme in Claerbout's 2012 book.[4]

This tutorial provides a quick start to the conjugate gradient method based on the pseudocode of Guo et al. (2002).[5] Those interested in more depth can read Claerbout (2012) and Shewchuk (1994).[4][6] A Jupyter Notebook with Python code to reproduce the figures in this tutorial is at https://github.com/seg/tutorials.

The forward and adjoint operators

Starting with a known reflectivity model m, we create synthetic seismic data d = Fm, where F is the linear operator that performs the function “convolve with a Ricker wavelet.” Given such a trace and the operator, the conjugate gradient method can be used to estimate the original reflectivity.

Hall (2016) calls the operator G instead of F, and he creates a matrix by shifting the wavelet and padding with zeros. In contrast, I implement the operator using the NumPy function convolve(). This is advantageous because it allows us to solve the linear equation without ever having to construct (or invert) this matrix, which can become very large. This matrix-free approach is faster and uses less memory than the matrix implementation.
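
To make the contrast concrete, here is a small sketch (the function names and array sizes are illustrative, not the tutorial's exact code): the matrix-free forward operator is a single call to np.convolve, while the matrix implementation would have to build an N-by-N convolution matrix.

import numpy as np

# Matrix-free forward operator: no matrix is ever formed.
def forward(wavelet, m):
    return np.convolve(m, wavelet, mode='same')

# Equivalent explicit matrix, built column by column for comparison
# only; this is what the matrix-free approach avoids.
def convolution_matrix(wavelet, n):
    return np.column_stack(
        [np.convolve(np.eye(n)[:, j], wavelet, mode='same') for j in range(n)])

# For any model m of length n, forward(wavelet, m) equals
# convolution_matrix(wavelet, n) @ m.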

We can add one more feature to the operator and implement it with its adjoint. A convenient way to combine the two operations is to use a so-called “object-oriented programming” approach and define a Python class. Then we can have two methods (i.e., functions) defined on the class: forward, implementing the forward operator, and adjoint for the adjoint operator, which in this case is correlation.

[Code listing: the Operator class, with a forward method (convolution with the wavelet) and an adjoint method (correlation with the wavelet).]
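
The published listing appears as a figure above. The following is a minimal sketch of such a class; the names Operator, forward, and adjoint come from the surrounding text, while the implementation details (mode='same', np.correlate) are my own assumptions rather than the exact published code:

import numpy as np

class Operator:
    """Linear operator 'convolve with a wavelet' and its adjoint."""

    def __init__(self, wavelet):
        # Store the wavelet so the operator can be applied repeatedly.
        self.wavelet = wavelet

    def forward(self, m):
        # Forward operator: convolve the model with the wavelet.
        return np.convolve(m, self.wavelet, mode='same')

    def adjoint(self, d):
        # Adjoint operator: correlate the data with the wavelet.
        # For an odd-length wavelet this is exactly the adjoint of
        # 'same'-mode convolution.
        return np.correlate(d, self.wavelet, mode='same')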

Claerbout (2012) teaches how to write this kind of symmetrical code and provides many examples of geophysical operators with adjoints (e.g., derivative versus negative derivative, causal integration versus anticausal integration, stretch versus squeeze, truncate versus zero pad). Writing functions to apply operators is more efficient than computing matrices.
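
As a small illustration of one such pair (my own sketch, not Claerbout's code), zero padding and truncation are adjoints of each other, which the dot-product test confirms:

import numpy as np

def pad(x, n_extra):
    # Forward: append n_extra zeros.
    return np.concatenate([x, np.zeros(n_extra)])

def truncate(y, n_extra):
    # Adjoint: drop the last n_extra samples.
    return y[:len(y) - n_extra]

# Dot-product test: <pad(x), y> should equal <x, truncate(y)>.
x, y = np.arange(3.0), np.arange(5.0)
print(np.dot(pad(x, 2), y), np.dot(x, truncate(y, 2)))  # both 5.0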

Now that we have the operator, we can instantiate the class with a wavelet. This wavelet will be “built in” to the instance F:

F = Operator(wavelet)

Now we can compute d = Fm simply by passing the model m to the method F.forward(), which already has the wavelet:

d = F.forward(m)
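
For completeness, here is one way to set up a wavelet and a sparse reflectivity model before making these two calls. The Ricker formula, sample counts, and spike positions are illustrative choices, not the tutorial's exact values:

import numpy as np

# Hand-rolled Ricker wavelet (odd length keeps the adjoint exact).
def ricker(f, dt, n):
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

wavelet = ricker(f=25.0, dt=0.004, n=33)
m = np.zeros(50)                                # 50-sample reflectivity model
m[[10, 22, 23, 38]] = [1.0, -0.5, 0.8, -0.3]    # a few reflectivity spikes

F = Operator(wavelet)
d = F.forward(m)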

This results in the synthetic seismogram shown in Figure 1.

Figure 1. A plot of reflectivity model m (black) and the synthetic seismic data d (orange).

The conjugate gradient method

Now that we have a synthetic, we wish to solve the linear inverse problem and estimate the model m from the data d. The model cannot be completely recovered because the Ricker wavelet is band limited, so some information is lost.

One way to solve linear problems is to start with an initial guess and iteratively improve the solution. The next few paragraphs derive an iterative method. You do not need to understand every step of the derivation, so you might want to read it lightly and move on to the section about the pseudocode.

We start with an initial estimate for the model, $m_0$ (the zero vector), and compute the residual $r_0 = d - F m_0$ (i.e., the difference between the data and the action of the forward operator on the model estimate). A good measure of the error in the initial solution is the inner product $\langle r_0, r_0 \rangle$, or np.dot(r0, r0) in code. This is equivalent to the squared norm (length) of the residual vector, $\| r_0 \|^2$, and constitutes our cost function. If the cost is 0, or within some small tolerance, then we have a solution.
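
In code, with the Operator instance F and the data d from above, and assuming numpy is imported as np (the variable names here are mine):

m0 = np.zeros_like(d)     # initial model estimate: the zero vector
r0 = d - F.forward(m0)    # residual
cost = np.dot(r0, r0)     # squared norm of the residual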

We can improve the estimate by selecting a direction and a scale to move to a new guess, $m_1 = m_0 + \alpha \, \Delta m$. You can compute the direction in model space that most rapidly decreases the error. This is the gradient of $\langle r, r \rangle$, or $\| r \|^2$. If you grind through the mathematics, the gradient turns out to be given by the action of the adjoint operator on the residual: $\Delta m = F^* r_0$.

The scalar $\alpha$ is computed to minimize the new residual, $r_1 = r_0 - \alpha F \, \Delta m$. We can then compute $\alpha$ by taking the derivative of $\langle r_1, r_1 \rangle$ with respect to $\alpha$, setting the derivative to 0, and solving for $\alpha$. The result is:

$\alpha = \dfrac{\langle \Delta m, \, \Delta m \rangle}{\langle F \Delta m, \, F \Delta m \rangle}$   (1)
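
A single steepest-descent step then looks like this in code (again a sketch using the class and variables above):

dm = F.adjoint(r0)                           # gradient: adjoint applied to residual
Fdm = F.forward(dm)                          # forward operator applied to the gradient
alpha = np.dot(dm, dm) / np.dot(Fdm, Fdm)    # equation 1
m1 = m0 + alpha * dm                         # updated model estimate
r1 = r0 - alpha * Fdm                        # updated residual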

These equations define a reasonable approach to iteratively improving an estimated solution; it is called the steepest descent method. The conjugate gradient algorithm builds on this. It is not much harder to implement, has similar cost per iteration, and converges faster.

The first iteration of the conjugate gradient method is the same as the steepest descent method. The second (and later) iterations compute the gradient $\Delta m = F^* r$ and $F \, \Delta m$. With the step direction $s_{\mathrm{prev}}$ and $F \, s_{\mathrm{prev}}$ from the previous iteration we can then compute the new step direction. Scalars $\alpha$ and $\beta$ are computed to minimize

$\| r - \alpha F \, \Delta m - \beta F \, s_{\mathrm{prev}} \|^2$   (2)

Some mathematical manipulations determine that the best step direction is $s = \Delta m + \beta \, s_{\mathrm{prev}}$, where $\alpha$ and $\beta$ are the values that minimize equation 2. The conjugate gradient algorithm is guaranteed to converge when the number of iterations is equal to the dimension of $m$, but a few iterations often give sufficient accuracy. For our implementation, we start with a simplified version of the pseudocode provided by Guo et al. (2002), with its implementation in Python alongside:[5]

[Code listing: simplified conjugate gradient pseudocode after Guo et al. (2002) and the corresponding Python implementation.]
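
The published pseudocode and Python appear as a figure above. The following is my own minimal reconstruction of a conjugate-gradient (CGLS-style) loop consistent with the description in the text; it is a sketch, not the exact published listing:

import numpy as np

def conjugate_gradient(F, d, n_iter):
    """Minimal CGLS-style conjugate gradient sketch for least squares.

    F      -- operator with forward() and adjoint() methods
    d      -- observed data
    n_iter -- number of iterations
    Returns the list of model estimates, one per iteration.
    """
    m = np.zeros_like(d)             # initial model estimate
    r = d - F.forward(m)             # residual (equals d for m = 0)
    g = F.adjoint(r)                 # gradient
    s = g.copy()                     # first step direction = gradient
    gamma = np.dot(g, g)
    estimates = []
    for _ in range(n_iter):
        Fs = F.forward(s)
        alpha = gamma / np.dot(Fs, Fs)   # step length (cf. equation 1)
        m = m + alpha * s                # update model
        r = r - alpha * Fs               # update residual
        g = F.adjoint(r)                 # new gradient
        gamma_new = np.dot(g, g)
        beta = gamma_new / gamma         # weight for the previous direction
        s = g + beta * s                 # conjugate step direction
        gamma = gamma_new
        estimates.append(m.copy())
    return estimates

With the objects defined earlier, the estimate used in the next section could be obtained as m_est = conjugate_gradient(F, d, n_iter=5)[-1].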

Results

The Python code in the previous section was used to invert for reflectivity. Figure 2 shows the first five iterations of the conjugate gradient method. The method converged in only four iterations; the results of the fourth and fifth iterations almost exactly overlay on the plot. Fast convergence is important for a practical algorithm. Convergence is guaranteed in 50 iterations (the dimension of the model).

Figure 2. The initial estimate and the first 5 iterations of conjugate gradient. The fifth iteration almost exactly overlays the fourth.

Figure 3 compares the original model and the model estimated using conjugate gradient inversion. Conjugate gradient inversion does not completely recover the model because the Ricker wavelet is band limited, but side lobes are reduced compared to the data.

Figure 3. Comparison of the model (black) and the model estimated using conjugate gradient inversion (purple), along with the data (orange) for comparison.

Finally, we compute the predicted data from the estimated model:

d_pred = F.forward(m_est)

Figure 4 compares the predicted data with the original data and shows that the estimation does a good job. It also demonstrates that more than one reflectivity sequence fits the data when convolved with the Ricker wavelet; in particular, both the original model and the model estimated by the conjugate gradient method fit the data. It may be interesting to explore preconditioning operators that promote a sparse solution.

Figure 4. Comparison of the predicted data d_pred from the conjugate gradient inversion with the original data d. It overplots the data almost exactly.

The Jupyter notebook provided with this tutorial further explores finding least-squares solutions using the conjugate gradient method. The notebook demonstrates how preconditioning can be used to promote a sparse solution. It also provides examples using the solver provided in the SciPy package.
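
For example, the same matrix-free operator can be handed to a SciPy solver by wrapping the forward and adjoint functions in a scipy.sparse.linalg.LinearOperator. This sketch uses lsqr; the notebook's exact calls may differ:

from scipy.sparse.linalg import LinearOperator, lsqr

n = len(d)                            # model and data have the same length here
A = LinearOperator((n, n),
                   matvec=F.forward,  # forward operator
                   rmatvec=F.adjoint) # adjoint operator
m_scipy = lsqr(A, d, iter_lim=5)[0]   # least-squares estimate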

Conclusions

I described the conjugate gradient algorithm and presented an implementation. This is an iterative method that requires functions to apply the linear operator and its adjoint. Many familiar geophysical operations, such as convolution, are linear operators that are more efficiently implemented without matrices. The reflectivity estimation problem described in Hall (2016) was solved using the conjugate gradient method.[7] Convergence took only four iterations. The conjugate gradient method is often used to solve large problems because direct least-squares solvers are much more expensive.

Acknowledgments

This tutorial was inspired by the SEG Seismic Working Workshop on Reproducible Tutorials held 9–13 August 2017 in Houston. For more information, visit http://ahay.org/wiki/Houston_2017.

References

  1. Witte, P., M. Louboutin, K. Lensink, M. Lange, N. Kukreja, F. Luporini, G. Gorman, and F. J. Herrman, 2018, Full-waveform inversion, Part 3: Optimization: The Leading Edge, 37, no. 2, 142–145, http://dx.doi.org/10.1190/tle37020142.1.
  2. Hestenes, M. R., and E. Stiefel, 1952, Methods of conjugate gradients for solving linear systems: Journal of Research of the National Bureau of Standards, 49, no. 6, 409–436, http://dx.doi.org/10.6028/jres.049.044.
  3. Paige, C. C., and M. A. Saunders, 1982, LSQR: An algorithm for sparse linear equations and sparse least squares: ACM Transactions on Mathematical Software, 8, no. 1, 43–71, http://dx.doi.org/10.1145/355984.355989.
  4. Claerbout, J., and S. Fomel, 2012, Image estimation by example, http://sepwww.stanford.edu/sep/prof/gee1-2012.pdf, accessed 6 March 2018.
  5. Guo, J., H. Zhou, J. Young, and S. Gray, 2002, Merits and challenges for accurate velocity model building by 3D gridded tomography: 72nd Annual International Meeting, SEG, Expanded Abstracts, 854–857, http://dx.doi.org/10.1190/1.1817395.
  6. Shewchuk, J. R., 1994, An introduction to the conjugate gradient method without the agonizing pain, http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf, accessed 6 March 2018.
  7. Hall, M., 2016, Linear inversion: The Leading Edge, 35, no. 12, 1085–1087, http://dx.doi.org/10.1190/tle35121085.1.

Corresponding authors

  • Corresponding author: Karl Schleicher, University of Texas, Bureau of Economic Geology, Jackson School of Geosciences, k_schleicher‐at‐hotmail.com
