Mathematical foundation of deconvolution

Series: Investigations in Geophysics · Öz Yilmaz · http://dx.doi.org/10.1190/1.9781560801580 · ISBN 978-1-56080-094-1 · SEG Online Store

B.1 Synthetic seismogram

Consider an earth model that consists of homogeneous horizontal layers with thicknesses corresponding to the sampling interval. The seismic impedance associated with a layer is defined as I = ρv, where ρ is density and v is the compressional-wave velocity within the layer. The instantaneous value of seismic impedance for the kth layer is given by

 ${\displaystyle I_{k}={\rho _{k}}{v_{k}}.}$ (B-1)

For a vertically incident plane wave, the pressure amplitude reflection coefficient associated with an interface is given by

 ${\displaystyle c_{k}={\frac {I_{k+1}-I_{k}}{I_{k+1}+I_{k}}}.}$ (B-2)

Assume that the change of density with depth is negligible compared to the change of velocity with depth. Equation (B-2) then takes the form

 ${\displaystyle c_{k}={\frac {v_{k+1}-v_{k}}{v_{k+1}+v_{k}}}.}$ (B-3)

If the amplitude of the incident wave is unity, then the magnitude of the reflection coefficient corresponds to the fraction of amplitude reflected from the interface.

With knowledge of the reflection coefficients, we can compute the impulse response of a horizontally layered earth model using the Kunetz method [1]. The impulse response contains not only the primary reflections, but also all the possible multiples. Finally, convolution of the impulse response with a source wavelet yields the synthetic seismogram.
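The steps above can be sketched in a few lines of NumPy. This is a minimal, primaries-only illustration with hypothetical interval velocities; the Kunetz computation of multiples is omitted, so the impulse response is taken to be the reflectivity series itself.

```python
import numpy as np

# Hypothetical interval velocities (m/s), one layer per time sample.
v = np.array([2000.0, 2200.0, 2100.0, 2500.0, 2400.0])

# Reflection coefficients from equation (B-3).
c = (v[1:] - v[:-1]) / (v[1:] + v[:-1])

# Primaries-only shortcut: take the impulse response e(t) to be the
# reflectivity series itself (the Kunetz method would add the multiples).
e = c

# An arbitrary short source wavelet w(t), then the seismogram x(t) = w(t) * e(t).
w = np.array([1.0, -0.5, 0.2])
x = np.convolve(w, e)
```

Convolution lengthens the output: x has len(w) + len(e) − 1 samples.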

Although random noise can be added to the seismogram, the convolutional model used here to establish the deconvolution filters does not include random noise. We also assume that the source waveform does not change as it propagates down into the earth; hence, the convolutional model does not include intrinsic attenuation. The convolution of a seismic wavelet w(t) with the impulse response e(t) yields the seismogram x(t)

 ${\displaystyle x(t)=w(t)\ast e(t).}$ (B-4)

By Fourier transforming both sides, we get

 ${\displaystyle X(\omega )=W(\omega )E(\omega ),}$ (B-5)

where X(ω), W(ω), and E(ω) represent the complex Fourier transforms of the seismogram, the source waveform, and the impulse response, respectively. In terms of amplitude and phase spectra, the Fourier transforms are expressed as

 ${\displaystyle X(\omega )=A_{x}(\omega )\ \exp[i\phi _{x}(\omega )],}$ (B–6a)

 ${\displaystyle W(\omega )=A_{w}(\omega )\ \exp[i\phi _{w}(\omega )],}$ (B–6b)

and

 ${\displaystyle E(\omega )=A_{e}(\omega )\ \exp[i\phi _{e}(\omega )].}$ (B–6c)

By substituting equations (B-6) into equation (B-5), we have

 ${\displaystyle A_{x}(\omega )=A_{w}(\omega )A_{e}(\omega )}$ (B–7a)

and

 ${\displaystyle \phi _{x}(\omega )=\phi _{w}(\omega )+\phi _{e}(\omega ).}$ (B–7b)

Hence, in convolving the seismic wavelet with the impulse response, their phase spectra are added, while the amplitude spectra are multiplied.

We assume that the earth’s impulse response can be represented by a white reflectivity series; hence, its amplitude spectrum is flat

 ${\displaystyle A_{e}(\omega )=A_{0}=constant.}$ (B-8)

By substituting into equation (B-7a), we obtain

 ${\displaystyle A_{x}(\omega )=A_{0}A_{w}(\omega ).}$ (B-9)

This implies that the amplitude spectrum of the seismogram is a scaled version of the amplitude spectrum of the source wavelet.

We now examine the autocorrelation functions r(τ) of x(t), w(t), and e(t). Autocorrelation is a measure of similarity between the events on a time series at different time positions. It is a running sum given by the expression

 ${\displaystyle r_{e}(\tau )={\frac {1}{N}}\sum _{t=0}^{N-1}e_{t}e_{t+\tau },\quad \tau =0,\ \pm 1,\ \pm 2,\ \ldots ,\ \pm (N-1),}$ (B-10)

where τ is time lag. A random time series is an uncorrelated series. (Strictly, it is uncorrelated when it is continuous and infinitely long.) Therefore,

 ${\displaystyle r_{e}(\tau )=0,\quad \tau \neq 0}$ (B–11a)

and

 ${\displaystyle {{r}_{e}}\left(0\right)={{r}_{0}}=constant.}$ (B–11b)

Equation (B-11) states that the autocorrelation of a perfect random series is zero at all lags except at zero lag. The zero-lag value actually is the cumulative energy contained in the time series:

 ${\displaystyle {{r}_{0}}={{e}_{0}^{2}}+{{e}_{1}^{2}}+\cdots +{{e}_{N-1}^{2}}.}$ (B-12)
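A short NumPy check of equations (B-10) through (B-12), using a pseudo-random series as a stand-in for the white reflectivity (all values hypothetical):

```python
import numpy as np

def autocorr(e, max_lag):
    """Normalized autocorrelation r(tau) of equation (B-10)."""
    N = len(e)
    return np.array([np.dot(e[:N - tau], e[tau:]) / N for tau in range(max_lag + 1)])

# Pseudo-random stand-in for a white reflectivity series.
rng = np.random.default_rng(0)
e = rng.standard_normal(10000)

r = autocorr(e, 5)
# r[0] is the (here, per-sample) energy of the series; the nonzero lags are
# small relative to r[0], as equation (B-11) predicts for a random series.
```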

Consider the z-transform of the convolutional model in equation (B-4):

 ${\displaystyle X\left(z\right)=W\left(z\right)E\left(z\right).}$ (B-13)

By putting 1/z in place of z and taking the complex conjugate, we get

 ${\displaystyle {\overline {X}}\left({1}/{z}\;\right)={\overline {W}}\left({1}/{z}\;\right){\overline {E}}\left({1}/{z}\;\right),}$ (B-14)

where the bar denotes the complex conjugate. By multiplying both sides of equations (B-13) and (B-14), we get

 ${\displaystyle X(z){\overline {X}}(1/z)=[W(z)E(z)][{\overline {W}}(1/z){\overline {E}}(1/z)].}$ (B-15)

By rearranging the right side,

 ${\displaystyle X(z){\overline {X}}(1/z)=[W(z){\overline {W}}(1/z)][E(z){\overline {E}}(1/z)].}$ (B-16)

Finally, by definition, equation (B-16) yields

 ${\displaystyle r_{x}=r_{w}\ast r_{e},}$ (B-17)

where rx, rw, and re are the autocorrelations of the seismogram, seismic wavelet, and impulse response, respectively. Based on the white reflectivity series assumption (equation B-11), we have

 ${\displaystyle {{r}_{x}}={{r}_{0}}{{r}_{w}}.}$ (B-18)

Equation (B-18) states that the autocorrelation of the seismogram is a scaled version of that of the seismic wavelet. We will see that conversion of the seismic wavelet into a zero-lag spike requires knowledge of the wavelet’s autocorrelation. Equation (B-18) suggests that the autocorrelation of the seismogram can be used in lieu of that of the seismic wavelet, since the latter often is not known.
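The relationship in equation (B-17) can be verified numerically for a short hypothetical wavelet and reflectivity pair:

```python
import numpy as np

w = np.array([1.0, -0.6, 0.2])        # hypothetical wavelet
e = np.array([0.5, 0.0, -0.3, 0.1])   # hypothetical reflectivity
x = np.convolve(w, e)                 # seismogram, equation (B-4)

# Two-sided (all-lag) autocorrelations, without normalization.
rw = np.correlate(w, w, mode='full')
re = np.correlate(e, e, mode='full')
rx = np.correlate(x, x, mode='full')

# Equation (B-17): the autocorrelation of the seismogram equals the
# convolution of the wavelet and impulse-response autocorrelations.
assert np.allclose(rx, np.convolve(rw, re))
```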

B.2 The inverse of the source wavelet

A basic purpose of deconvolution is to compress the source waveform into a zero-lag spike so that closely spaced reflections can be resolved. Assume that a filter operator f(t) exists such that

 ${\displaystyle w(t)\ast f(t)=\delta (t),}$ (B-19)

where δ(t) is the Kronecker delta function. The filter f(t) is called the inverse filter for w(t). Symbolically, f(t) is expressed in terms of the seismic wavelet w(t) as

 ${\displaystyle f(t)=\delta (t)\ast {\frac {1}{w(t)}}.}$ (B-20)

The z-transform of the seismic wavelet with a finite length m + 1 is (Appendix A)

 ${\displaystyle W(z)=w_{0}+w_{1}z+w_{2}z^{2}+\cdots +w_{m}z^{m}.}$ (B-21)

The z-transform of the inverse filter can be obtained by polynomial division:

 ${\displaystyle F(z)={\frac {1}{W(z)}}.}$ (B-22)

The result is another polynomial whose coefficients are the terms of the inverse filter

 ${\displaystyle F(z)=f_{0}+f_{1}z+f_{2}z^{2}+\cdots +f_{n}z^{n}+\cdots .}$ (B-23)

Note that the polynomial F(z) in equation (B-23) has only positive powers of z; this means f(t) is causal. If the coefficients of F(z) asymptotically approach zero as time goes to infinity, so that the filter has finite energy, we say that the filter f(t) is realizable. If the coefficients increase without bound, we say that the filter is not realizable. In practice, we prefer to work with a causal and realizable filter. Such a filter, by definition, also is minimum-phase. If f(t) is minimum-phase, then the seismic wavelet w(t) also must be minimum-phase. Finally, to apply the filter with a finite length n + 1, the polynomial F(z) must be truncated. Truncation of the filter operator induces some error in spiking the seismic wavelet.
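The polynomial division of equation (B-22) can be carried out recursively. A sketch, assuming a simple two-term minimum-phase wavelet; the coefficients and filter length are arbitrary choices:

```python
import numpy as np

def inverse_filter(w, n):
    """First n + 1 coefficients of F(z) = 1/W(z) by polynomial division
    (equations B-22 and B-23); requires w[0] != 0."""
    f = np.zeros(n + 1)
    f[0] = 1.0 / w[0]
    for k in range(1, n + 1):
        acc = sum(w[j] * f[k - j] for j in range(1, min(k, len(w) - 1) + 1))
        f[k] = -acc / w[0]
    return f

w = np.array([1.0, -0.5])   # minimum-phase: zero of W(z) at z = 2, outside the unit circle
f = inverse_filter(w, 10)   # coefficients decay as (0.5)^k, so f is realizable
spike = np.convolve(w, f)   # approximates the unit spike of equation (B-19)
```

The truncation error appears as a small nonzero tail in spike; a non-minimum-phase wavelet such as (0.5, 1) would instead produce inverse-filter coefficients that grow without bound.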

The inverse of the seismic wavelet also can be computed in the frequency domain. By Fourier transforming equation (B-19), we get

 ${\displaystyle W(\omega )F(\omega )=1.}$ (B-24)

By substituting equation (B-6b), we obtain

 ${\displaystyle F(\omega )={\frac {1}{A_{w}(\omega )\ \ \exp[i\phi _{w}(\omega )]}}.}$ (B-25)

We express the Fourier transform of the inverse filter F(ω) as

 ${\displaystyle F(\omega )={A_{f}(\omega )\ \ \exp[i\phi _{f}(\omega )]},}$ (B-26)

and compare it with equation (B-25) to get

 ${\displaystyle A_{f}(\omega )={\frac {1}{A_{w}(\omega )}}}$ (B–27a)

and

 ${\displaystyle \phi _{f}(\omega )=-\phi _{w}(\omega ).}$ (B–27b)

Equations (B-27) show that the amplitude spectrum of the inverse filter is the inverse of that of the seismic wavelet, and the phase spectrum of the inverse filter is negative of that of the seismic wavelet.

B.3 The inverse filter

Instead of the polynomial division procedure described by equation (B-22), consider a different approach to derive the inverse filter. Start with the z-transform of the autocorrelation of the seismic wavelet:

 ${\displaystyle R_{w}(z)=W(z){\overline {W}}(1/z),}$ (B-28)

and the z-transform of equation (B-19):

 ${\displaystyle W\left(z\right)F\left(z\right)=1,}$ (B-29)

from which we get

 ${\displaystyle W(z)={\frac {1}{F(z)}}.}$ (B-30)

By substituting into equation (B-28), we obtain

 ${\displaystyle R_{w}(z)F(z)={\overline {W}}(1/z).}$ (B-31)

Since the wavelet is a real-time function,

 ${\displaystyle r_{w}(\tau )=r_{w}(-\tau ).}$ (B-32)

Consider the special case of a three-point inverse filter (f0, f1, f2). We will assume that the seismic wavelet w(t) is minimum-phase (causal and realizable); hence, its inverse f(t) also is minimum-phase. The z-transform of f(t) is

 ${\displaystyle F(z)=f_{0}+f_{1}z+f_{2}z^{2}.}$ (B–33a)

The z-transform of its autocorrelogram rw(τ) is, by way of equation (B-32),

 ${\displaystyle R_{w}(z)=\cdots +r_{2}z^{-2}+r_{1}z^{-1}+r_{0}+r_{1}z+r_{2}z^{2}+\cdots }$ (B–33b)

The z-transform of w(t) is

${\displaystyle W\left(z\right)={{w}_{0}}+{{w}_{1}}z+{{w}_{2}}{{z}^{2}}+\cdots +{{w}_{m}}{{z}^{m}},}$

therefore,

 ${\displaystyle {\overline {W}}(1/z)={\bar {w}}_{0}+{\bar {w}}_{1}z^{-1}+{\bar {w}}_{2}z^{-2}+\cdots +{\bar {w}}_{m}z^{-m}.}$ (B–33c)

By substituting equations (B-33a), (B-33b), and (B-33c) into equation (B-31), we obtain

 ${\displaystyle (r_{2}z^{-2}+r_{1}z^{-1}+r_{0}+r_{1}z+r_{2}z^{2})(f_{0}+f_{1}z+f_{2}z^{2})={\bar {w}}_{0}+{\bar {w}}_{1}z^{-1}+{\bar {w}}_{2}z^{-2}.}$ (B-34)

To solve for (f0, f1, f2), we identify the coefficients of powers of z. The coefficient of z0 yields

${\displaystyle r_{0}f_{0}+r_{1}f_{1}+r_{2}f_{2}={\bar {w}}_{0},}$

the coefficient of z1 yields

${\displaystyle {{r}_{1}}{{f}_{0}}+{{r}_{0}}{{f}_{1}}+{{r}_{1}}{{f}_{2}}={\text{0}},}$

while the coefficient of z2 yields

${\displaystyle {{r}_{2}}{{f}_{0}}+{{r}_{1}}{{f}_{1}}+{{r}_{0}}{{f}_{2}}={\text{0}}.}$

When put into matrix form, these equations for the coefficients of powers of z yield

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}\\r_{1}&r_{0}&r_{1}\\r_{2}&r_{1}&r_{0}\\\end{pmatrix}}{\begin{pmatrix}f_{0}\\f_{1}\\f_{2}\end{pmatrix}}={\begin{pmatrix}{\bar {w}}_{0}\\0\\0\end{pmatrix}}}$ (B-35)

Note that ${\displaystyle {{\bar {w}}_{0}},}$ which equals w0 for the usual case of a real source wavelet, is the amplitude of the wavelet at t = 0. There are four unknowns and three equations. By normalizing with respect to f0, we get

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}\\r_{1}&r_{0}&r_{1}\\r_{2}&r_{1}&r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}L\\0\\0\end{pmatrix}}}$ (B–36a)

where a1 = f1/f0, a2 = f2/f0, and L = w0/f0. We now have three unknowns, a1, a2, and L, and three equations. The elements of the square matrix on the left side represent the autocorrelation lags of the seismic wavelet, which we do not know. However, by equation (B-18), the autocorrelation lags of the seismogram, which we do know, can be substituted in their place.

For the general case of an n-point inverse filter (f0, f1, f2, …, fn), equation (B-36a) takes the form

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-1}\\r_{2}&r_{1}&r_{0}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n}&r_{n-1}&r_{n-2}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\a_{1}\\a_{2}\\\vdots \\a_{n}\end{pmatrix}}={\begin{pmatrix}L\\0\\0\\\vdots \\0\end{pmatrix}}}$ (B–36b)

The autocorrelation matrix in equation (B-36b) is of a special type. First, it is a symmetric matrix; second, its diagonal elements are identical. This type of matrix is called a Toeplitz matrix. For n normal equations, the standard algorithms require memory space proportional to n² and CPU time proportional to n³. Because of the special properties of the Toeplitz matrix, Levinson devised a recursive scheme [1] that requires memory space and CPU time proportional to n and n², respectively (Section B.6).
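A sketch of setting up and solving equation (B-36b) directly with a general-purpose solver; the autocorrelation lags are hypothetical, and a production implementation would use the Levinson recursion of Section B.6 instead:

```python
import numpy as np

# Hypothetical autocorrelation lags r_0..r_n; by equation (B-18) these can be
# taken from the seismogram in place of the unknown wavelet autocorrelation.
r = np.array([2.0, 1.0, 0.5, 0.25])
n = len(r) - 1

# Toeplitz autocorrelation matrix of equation (B-36b).
R = np.array([[r[abs(i - j)] for j in range(n + 1)] for i in range(n + 1)])

# Solve R (1, a_1, ..., a_n)^T = (L, 0, ..., 0)^T: the first component is
# fixed at 1, so move the first column to the right-hand side and solve the
# remaining n equations for (a_1, ..., a_n); L then follows from row 0.
a = np.linalg.solve(R[1:, 1:], -R[1:, 0])
L = r[0] + r[1:] @ a
filt = np.concatenate(([1.0], a))
```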

B.4 Frequency-domain deconvolution

We want to estimate a minimum-phase wavelet from the recorded seismogram. The inverse of the wavelet is the spiking deconvolution operator. We start with the autocorrelation of the seismic wavelet w(t) in the frequency domain:

 ${\displaystyle R_{w}(\omega )=W(\omega ){\overline {W}}(\omega ),}$ (B-37)

where ${\displaystyle {\bar {W}}\left(\omega \right)}$ is the complex conjugate of the Fourier transform of w(t). Since w(t) normally is not known, Rw(ω) is not known. However, based on an assumption of white reflectivity series (equation B-18), we can substitute the autocorrelation function of the seismogram in equation (B-37).

Define a new function U(ω):

 ${\displaystyle U(\omega )=\ln[R_{w}(\omega )].}$ (B-38)

When both sides of equation (B-38) are exponentiated:

 ${\displaystyle R_{w}(\omega )=\exp[U(\omega )].}$ (B-39)

Suppose that another function, ϕ(ω), is defined and that equation (B-39) is rewritten as [1]

 ${\displaystyle R_{w}(\omega )=\exp \left\{{\frac {1}{2}}[U(\omega )+i\phi (\omega )]\right\}\ \ \exp \left\{{\frac {1}{2}}[U(\omega )-i\phi (\omega )]\right\}}$ (B-40)

When equation (B-40) is compared with equation (B-37), we see that

 ${\displaystyle W(\omega )=\exp \left\{{\frac {1}{2}}[U(\omega )+i\phi (\omega )]\right\}.}$ (B-41)

We know Rw(ω) from equation (B-18) and therefore we know U(ω) from equation (B-38). To estimate W(ω) from equation (B-41), we also need to know ϕ(ω). If we make the minimum-phase assumption, ϕ(ω) turns out to be the Hilbert transform of U(ω) [1]. To perform the Hilbert transform, first inverse Fourier transform U(ω) back to the time domain. Then, double the positive time values, leave the zero-lag alone, and set the negative time values to zero. This operation yields a time function u+(t), which vanishes before t = 0. Then, return to the transform domain to get

 ${\displaystyle U^{+}(\omega )={\frac {1}{2}}[U(\omega )+i\phi (\omega )],}$ (B-42)

where U+(ω) is the Fourier transform of u+(t). Finally, exponentiating U+(ω) yields the Fourier transform W(ω) of the minimum-phase wavelet w(t) (equation B-41).
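The spectral factorization described above can be sketched as follows. This follows the Kolmogorov-style recipe implied by equations (B-38) through (B-42); the test wavelet and transform length are arbitrary choices:

```python
import numpy as np

def minimum_phase(Rw):
    """Recover the minimum-phase wavelet whose power spectrum is Rw (sampled
    on an FFT grid), following equations (B-38) to (B-42): log spectrum,
    causal folding in time (the Hilbert-transform step), exponentiation."""
    nfft = len(Rw)
    U = np.log(np.maximum(Rw, 1e-30))       # equation (B-38), guarded
    u = np.real(np.fft.ifft(0.5 * U))       # time-domain counterpart of U/2
    u_plus = np.zeros(nfft)                 # keep zero lag, double positive
    u_plus[0] = u[0]                        # lags, zero negative lags
    u_plus[1:nfft // 2] = 2.0 * u[1:nfft // 2]
    u_plus[nfft // 2] = u[nfft // 2]
    Uplus = np.fft.fft(u_plus)              # (1/2)[U + i*phi], equation (B-42)
    return np.real(np.fft.ifft(np.exp(Uplus)))   # equation (B-41)

# Check on a known minimum-phase wavelet (hypothetical example).
w_true = np.array([1.0, -0.5])
Rw = np.abs(np.fft.fft(w_true, 256)) ** 2
w_est = minimum_phase(Rw)
```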

Once the minimum-phase wavelet w(t) is estimated, its Fourier transform W(ω) is rewritten in terms of its amplitude and phase spectra,

 ${\displaystyle W(\omega )=A(\omega )\ \ \exp[i\phi (\omega )].}$ (B-43)

The deconvolution filter in the Fourier transform domain is

 ${\displaystyle F(\omega )={\frac {1}{W(\omega )}}.}$ (B-44)

By substituting equation (B-43), we obtain the amplitude and phase spectra of this filter:

 ${\displaystyle A_{f}(\omega )={\frac {1}{A(\omega )}}}$ (B–45a)

and

 ${\displaystyle \phi _{f}(\omega )=-\phi (\omega ).}$ (B–45b)

Since the estimated wavelet w(t) is minimum phase, it follows that the deconvolution filter [whose Fourier transform is given by equation (B-44)] also is minimum phase. By inverse Fourier transforming, equation (B-44) yields the deconvolution operator. The frequency-domain method of deconvolution is outlined in Figure B-1.

Figure B-2 illustrates the frequency-domain deconvolution that is described in Figure B-1. The panels in Figure B-2 should be compared with the corresponding results of the Wiener-Levinson (time-domain) method in Figure 2-20. As expected, there is virtually no difference between the two results.

From the discussion so far, we see that the spiking deconvolution operator is the inverse of the minimum-phase equivalent of the seismic wavelet. The process of estimating the minimum-phase equivalent of a seismic wavelet is called spectral decomposition. The minimum-phase wavelet, once computed, can be inverted easily to get the deconvolution operator. To avoid dividing by zero in equation (B-45a) and to ensure that the filter is stable, a small number often is added to the amplitude spectrum before division. This is called prewhitening. Equation (B-45a) then takes the form

 ${\displaystyle A_{f}(\omega )={\frac {1}{A(\omega )+\varepsilon }}.}$ (B-46)

The amplitude spectrum also can be smoothed to get a more stable operator. Smoothing the spectrum is analogous to shortening the equivalent time-domain deconvolution operator.

Zero-phase deconvolution can be implemented conveniently in the frequency domain. To do this, the phase spectrum given by equation (B-45b) is set to zero and thus yields a deconvolution operator that flattens the amplitude spectrum of the input seismogram, but does not alter the phase.
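Prewhitening and zero-phase deconvolution can be sketched together in the frequency domain. The wavelet, reflectivity, and prewhitening fraction below are all hypothetical:

```python
import numpy as np

nfft = 64
w = np.array([1.0, -0.5, 0.1])          # hypothetical wavelet
e = np.zeros(nfft)                      # hypothetical reflectivity: two spikes
e[10], e[30] = 1.0, -0.7
x = np.convolve(w, e)[:nfft]            # synthetic seismogram

A = np.abs(np.fft.fft(w, nfft))         # wavelet amplitude spectrum
eps = 1e-3 * A.max()                    # prewhitening, equation (B-46)
F = 1.0 / (A + eps)                     # zero phase: phase spectrum set to zero

# The operator flattens the amplitude spectrum of x but leaves its phase
# unchanged; the output amplitude spectrum approximates the reflectivity's.
y = np.real(np.fft.ifft(np.fft.fft(x) * F))
```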

B.5 Optimum Wiener filters

Figure B-3  Wiener filter model.

The following concise discussion of optimum Wiener filters is based on [2]. Consider the general filter model in Figure B-3. Wiener filtering involves designing the filter f(t) so that the least-squares error between the actual and desired outputs is minimum. Error L is defined as

 ${\displaystyle L=\sum \limits _{t}(d_{t}-y_{t})^{2}.}$ (B-47)

The actual output is the convolution of the filter with the input

 ${\displaystyle y_{t}=f_{t}\ast x_{t}.}$ (B-48)

When equation (B-48) is substituted into equation (B-47), we get

 ${\displaystyle L=\sum \limits _{t}\left(d_{t}-\sum \limits _{\tau }f_{\tau }x_{t-\tau }\right)^{2}.}$ (B-49)

The goal is to compute the filter coefficients (f0, f1, …, fn-1) so that the error is minimum. Filter length n must be predefined. The minimum error is attained by setting the variation of L with respect to fi to zero:

 ${\displaystyle {\frac {\partial L}{\partial f_{i}}}=0,\quad i=0,1,2,\ldots ,(n-1).}$ (B-50)

By expanding the square term in equation (B-49), we have

 ${\displaystyle L=\sum \limits _{t}d_{t}^{2}-2\sum \limits _{t}d_{t}\sum \limits _{\tau }f_{\tau }x_{t-\tau }+\sum \limits _{t}\left(\sum \limits _{\tau }f_{\tau }x_{t-\tau }\right)^{2}.}$ (B-51)

By taking the partial derivatives and setting them to zero, we get

 ${\displaystyle {\frac {\partial L}{\partial f_{i}}}=-2\sum \limits _{t}d_{t}x_{t-i}+2\sum \limits _{t}\left(\sum \limits _{\tau }f_{\tau }x_{t-\tau }\right)x_{t-i}=0,}$ (B-52)

or

 ${\displaystyle \sum \limits _{\tau }f_{\tau }\sum \limits _{t}x_{t-\tau }x_{t-i}=\sum \limits _{t}d_{t}x_{t-i},\qquad \quad i=0,1,2,\ldots ,(n-1).}$ (B-53)

By using

 ${\displaystyle \sum \limits _{t}x_{t-\tau }x_{t-i}=r_{i-\tau }}$ (B–54a)

and

 ${\displaystyle \sum \limits _{t}d_{t}x_{t-i}=g_{i}}$ (B–54b)

for each ith term, we get

 ${\displaystyle \sum \limits _{\tau }f_{\tau }r_{i-\tau }=g_{i},\quad i=0,1,2,\ldots ,(n-1).}$ (B-55)

When put into matrix form, equation (B-55) becomes

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{pmatrix}}={\begin{pmatrix}g_{0}\\g_{1}\\\vdots \\g_{n-1}\end{pmatrix}}}$ (B-56)

Here, ri are the autocorrelation lags of the input and gi are the lags of the crosscorrelation between the desired output and input. Since the autocorrelation matrix is Toeplitz, the optimum Wiener filter coefficients fi can be computed by using Levinson recursion [1].
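A sketch of designing a Wiener shaping filter from equations (B-54) through (B-56), using a direct solve in place of the Levinson recursion; the input and desired output are hypothetical:

```python
import numpy as np

def wiener_filter(x, d, n):
    """Wiener shaping filter of length n via the normal equations
    (equations B-54 to B-56), solved directly for illustration."""
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(n)])   # (B-54a)
    g = np.array([np.dot(d[i:], x[:len(d) - i]) for i in range(n)])   # (B-54b)
    R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    return np.linalg.solve(R, g)

x = np.array([1.0, -0.5, 0.2])      # hypothetical input wavelet
d = np.array([1.0, 0.0, 0.0])       # desired output: zero-delay spike
f = wiener_filter(x, d, 3)
y = np.convolve(f, x)               # actual output, approximately a spike
```

Because the normal equations are exactly the stationarity conditions of the squared error, f matches the least-squares solution obtained from the full convolution matrix.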

We now compute the least-squares error attained by this filter. Rewritten with the optimum coefficients, equation (B-51) becomes

 ${\displaystyle L_{\mathit {min}}=\sum \limits _{t}d_{t}^{2}-2\left(\sum \limits _{t}\sum \limits _{\tau }d_{t}f_{\tau }x_{t-\tau }\right)+\sum \limits _{t}\left(\sum \limits _{\tau }f_{\tau }x_{t-\tau }\right)^{2}.}$ (B-57)

By substituting the relationships

 ${\displaystyle \sum \limits _{t}d_{t}x_{t-\tau }=g_{\tau }}$ (B–58a)

and

 ${\displaystyle \sum \limits _{t}x_{t}x_{t-\tau }=r_{\tau }}$ (B–58b)

into equation (B-57), we get

 ${\displaystyle L_{\it {min}}=\sum \limits _{t}d_{t}^{2}-2\sum \limits _{\tau }f_{\tau }g_{\tau }+\sum \limits _{\tau }f_{\tau }\sum \limits _{i}f_{i}\sum \limits _{t}x_{t-\tau }x_{t-i}}$ (B-59)

or

 ${\displaystyle L_{\it {min}}=\sum \limits _{t}d_{t}^{2}-2\sum \limits _{\tau }f_{\tau }g_{\tau }+\sum \limits _{\tau }f_{\tau }\sum \limits _{i}f_{i}r_{\tau -i}.}$ (B-60)

Finally, by using equation (B-55), we get

 ${\displaystyle L_{\it {min}}=\sum \limits _{t}d_{t}^{2}-\sum \limits _{\tau }f_{\tau }g_{\tau }.}$ (B-61)

B.6 Spiking deconvolution

Consider the desired output to be the zero-delay spike dt : (1, 0, 0, …). Given the input series xt : (x0, x1, x2, …), equation (B-56) takes the form

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{pmatrix}}={\begin{pmatrix}x_{0}\\0\\\vdots \\0\end{pmatrix}}}$ (B-62)

Divide both sides by f0 to obtain

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\a_{1}\\\vdots \\a_{n-1}\end{pmatrix}}={\begin{pmatrix}v\\0\\\vdots \\0\end{pmatrix}}}$ (B-63)

where ai = fi/f0, i = 1, 2, …, n − 1, and v = x0/f0. Equation (B-63) is solved for the unknown quantity v and the filter coefficients (a1, a2, …, an−1). Since the desired output is a zero-delay spike, the filter (1, a1, a2, …, an−1) describes a spiking deconvolution process.

The solution to equation (B-63) can be obtained efficiently using the Levinson recursion [1]. Start with the solution of equation (B-63) for the two-term filter (1, a1), then solve for the filter (1, a1, a2) and so on. Equation (B-63) for n = 2 takes the form

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}\\r_{1}&r_{0}\end{pmatrix}}{\begin{pmatrix}1\\a_{1}\end{pmatrix}}={\begin{pmatrix}v\\0\end{pmatrix}}}$ (B-64)

Write out the simultaneous equations:

${\displaystyle {{r}_{0}}+{{r}_{1}}{{a}_{1}}=v}$

and

${\displaystyle {{r}_{1}}+{{r}_{0}}{{a}_{1}}={\text{0}},}$

which yield the filter coefficient a1 and the unknown variable v from the first iteration:

 ${\displaystyle {a_{1}}=-{r_{1}}/{r_{0}}}$ (B–65a)

and

 ${\displaystyle v={{r}_{0}}+{{r}_{1}}{{a}_{1}}.}$ (B–65b)

Now write equation (B-63) for the three-term filter ${\displaystyle \left(1,\ {a'_{1}},\ {a'_{2}}\right):}$

 ${\displaystyle \left({\begin{matrix}{{r}_{0}}&{{r}_{1}}&{{r}_{2}}\\{{r}_{1}}&{{r}_{0}}&{{r}_{1}}\\{{r}_{2}}&{{r}_{1}}&{{r}_{0}}\\\end{matrix}}\right)\left({\begin{matrix}1\\{{{a}'}_{1}}\\{{{a}'}_{2}}\\\end{matrix}}\right)=\left({\begin{matrix}{{v}'}\\0\\0\\\end{matrix}}\right).}$ (B-66)

We want to solve for the filter coefficients ${\displaystyle \left({a'_{1}},\ {a'_{2}}\right)}$ by using the results of the previous step (equations B-65a and B-65b) and by adding a row in the following manner:

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}\\r_{1}&r_{0}&r_{1}\\r_{2}&r_{1}&r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\a_{1}\\0\end{pmatrix}}={\begin{pmatrix}v\\0\\e\end{pmatrix}}}$ (B-67)

where

 ${\displaystyle e={r_{2}}+{r_{1}}{a_{1}}.}$ (B-68)

Rewrite equation (B-67) by changing the order of the rows:

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}\\r_{1}&r_{0}&r_{1}\\r_{2}&r_{1}&r_{0}\\\end{pmatrix}}{\begin{pmatrix}0\\a_{1}\\1\end{pmatrix}}={\begin{pmatrix}e\\0\\v\end{pmatrix}}}$ (B-69)

Multiply both sides of equation (B-69) with variable c, yet to be determined, and subtract the result from equation (B-67):

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}\\r_{1}&r_{0}&r_{1}\\r_{2}&r_{1}&r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\a_{1}-ca_{1}\\-c\end{pmatrix}}={\begin{pmatrix}v-ce\\0\\e-cv\end{pmatrix}}}$ (B-70)

Finally, compare equation (B-70) with equation (B-66), and note that

 ${\displaystyle {\begin{pmatrix}1\\a'_{1}\\a'_{2}\end{pmatrix}}={\begin{pmatrix}1\\a_{1}-ca_{1}\\-c\end{pmatrix}}}$ (B–71a)

and

 ${\displaystyle {\begin{pmatrix}v'\\0\\0\end{pmatrix}}={\begin{pmatrix}v-ce\\0\\e-cv\end{pmatrix}}}$ (B–71b)

Solve for c and v':

 ${\displaystyle c={\frac {e}{v}},}$ (B–72a)

and

 ${\displaystyle v'=v\left[1-\left({\frac {e}{v}}\right)^{2}\right].}$ (B–72b)

Hence, the new filter coefficients ${\displaystyle \left({a'_{1}},\ {a'_{2}}\right)}$ are (equations B-71a)

 ${\displaystyle a'_{1}=a_{1}-{\frac {e}{v}}a_{1}}$ (B–73a)

and

 ${\displaystyle a'_{2}=-{\frac {e}{v}}.}$ (B–73b)

This recursive scheme is repeated to determine the next set of filter coefficients:

1. Compute v and e using the autocorrelation lags of the input series and the present filter coefficients (equations B-67).
2. Compute the next set of filter coefficients (equations B-71a).
3. Compute a new value for v (equation B-72b).

This recursive process yields the Wiener filter (1, a1, a2, …, an−1) of the desired length n.
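The three-step recursion can be written out for a general filter length. A sketch, with the autocorrelation lags as hypothetical inputs:

```python
import numpy as np

def levinson_spike(r, n):
    """Levinson recursion for equation (B-63): returns the spiking filter
    (1, a_1, ..., a_{n-1}) and v, generalizing steps 1-3 of the text."""
    a = np.array([1.0, -r[1] / r[0]])            # equation (B-65a)
    v = r[0] + r[1] * a[1]                       # equation (B-65b)
    for m in range(2, n):
        e = np.dot(r[1:m + 1][::-1], a)          # generalizes equation (B-68)
        c = e / v                                # equation (B-72a)
        a_pad = np.concatenate((a, [0.0]))
        a = a_pad - c * a_pad[::-1]              # generalizes equations (B-71a)
        v = v * (1.0 - c * c)                    # equation (B-72b)
    return a, v

r = np.array([2.0, 1.0, 0.5, 0.25])              # hypothetical autocorrelation
a, v = levinson_spike(r, 4)
```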

B.7 Predictive deconvolution

Suppose that the desired output in the filter model of Figure B-3 is a time-advanced version of the input, d(t) = x(t + α). We want to design a Wiener filter f(t) that predicts x(t + α) from the past values of the input x(t). In this special case, the crosscorrelation function g becomes

 ${\displaystyle g_{\tau }=\sum \limits _{t}d_{t}x_{t-\tau }=\sum \limits _{t}x_{t+\alpha }x_{t-\tau }=\sum \limits _{t}x_{t}x_{t-(\alpha +\tau )}.}$ (B-74)

By definition, we have

 ${\displaystyle r_{\tau }=\sum \limits _{t}x_{t}x_{t-\tau }.}$ (B-75)

For the α + τ lag, equation (B-75) becomes

 ${\displaystyle r_{\alpha +\tau }=\sum \limits _{t}x_{t}x_{t-(\alpha +\tau )}.}$ (B-76)

Combine equations (B-74) and (B-76) and note that rα + τ = gτ. By substituting this result into equation (B-56), we get the set of normal equations that must be solved to find the prediction filter (f0, f1, …, fn−1):

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{pmatrix}}={\begin{pmatrix}r_{\alpha }\\r_{\alpha +1}\\\vdots \\r_{\alpha +n-1}\end{pmatrix}}}$ (B-77)

For a unit prediction lag, α = 1, equation (B-77) takes the form:

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{pmatrix}}={\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{n}\end{pmatrix}}}$ (B-78)

By augmenting the right side to the square matrix on the left side, we have

 ${\displaystyle {\begin{pmatrix}-r_{1}&r_{0}&r_{1}&r_{2}&\cdots &r_{n-1}\\-r_{2}&r_{1}&r_{0}&r_{1}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\vdots &\ddots &\vdots \\-r_{n}&r_{n-1}&r_{n-2}&r_{n-3}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\\vdots \\0\end{pmatrix}}}$ (B-79)

By adding one row, then putting the negative sign to the filter column, we obtain

 ${\displaystyle {\begin{pmatrix}r_{0}&r_{1}&r_{2}&\cdots &r_{n}\\r_{1}&r_{0}&r_{1}&\cdots &r_{n-1}\\r_{2}&r_{1}&r_{0}&\cdots &r_{n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\r_{n}&r_{n-1}&r_{n-2}&\cdots &r_{0}\\\end{pmatrix}}{\begin{pmatrix}1\\-f_{0}\\-f_{1}\\\vdots \\-f_{n-1}\end{pmatrix}}={\begin{pmatrix}L\\0\\0\\\vdots \\0\end{pmatrix}}}$ (B-80)

We now have n + 1 equations and n + 1 unknowns: (f0, f1, …, fn−1, L). From equation (B-80)

 ${\displaystyle L=r_{0}-r_{1}f_{0}-r_{2}f_{1}-\cdots -r_{n}f_{n-1}.}$ (B-81)

By using equation (B-61), the minimum error associated with the unit prediction-lag filter can be computed as follows. Start with

 ${\displaystyle {d_{t}}={x_{t+1}}}$ (B-82)

so that

 ${\displaystyle \sum _{t}d_{t}^{2}=r_{0}}$ (B–83a)

and

 ${\displaystyle {g_{t}}={r_{t+1}}.}$ (B–83b)

Substituting into equation (B-61), we get

 ${\displaystyle L_{min}=r_{0}-(r_{1}f_{0}+r_{2}f_{1}+\cdots +r_{n}f_{n-1}),}$ (B-84)

which is identical to quantity L given by equation (B-81). Therefore, when we solve equation (B-80), we compute both the minimum error and the filter coefficients.

Equation (B-77) gives us the prediction filter with prediction lag α. The desired output is xt+α. The actual output is ${\displaystyle {{\hat {x}}_{t+\alpha }}}$ which is an estimate of the desired output. This desired output is the predictable component of the input series; i.e., periodic events such as multiples. The error series contains the unpredictable component of the series and is defined as

 ${\displaystyle e_{t+\alpha }=x_{t+\alpha }-{\hat {x}}_{t+\alpha }=x_{t+\alpha }-\sum \limits _{\tau }f_{\tau }x_{t-\tau }.}$ (B-85)

We assume that the unpredictable component et is the uncorrelated reflectivity we want to extract from the seismogram. By taking the z-transform of equation (B-85), we have

 ${\displaystyle z^{-\alpha }E(z)=z^{-\alpha }X(z)-F(z)X(z)}$ (B-86)

or

 ${\displaystyle E(z)=[1-z^{\alpha }F(z)]X(z).}$ (B-87)

Now define a new filter a(t) whose z-transform is

 ${\displaystyle A(z)=1-z^{\alpha }F(z)}$ (B-88)

so that, when substituted into equation (B-87), we have

 ${\displaystyle E\left(z\right)=A\left(z\right)X\left(z\right).}$ (B-89)

The corresponding time-domain relationship is

 ${\displaystyle e(t)=a(t)\ast x(t).}$ (B-90)

After defining e(t) as reflectivity, equation (B-90) states that by applying filter a(t) to the input seismogram x(t), we obtain the reflectivity series. Since computing the reflectivity series is a goal of deconvolution, prediction filtering can be used for deconvolution. The time-domain form a(t) of the filter defined by equation (B-88) is

 ${\displaystyle {{a}_{t}}=\left(1,\ \overbrace {0,\ 0,\ \cdots ,\ 0,} ^{\alpha -1}-{{f}_{0}},\ -{{f}_{1}},\ \cdots ,\ -{{f}_{n-1}}\right).}$ (B-91)

The filter a(t) is obtained from the prediction filter f(t), which is the solution to equation (B-77). We call a(t) the prediction error filter. For unit-prediction lag, the filter coefficients given by equation (B-91) are of the form

 ${\displaystyle a_{t}=(1,-f_{0},-f_{1},\cdots ,-f_{n-1}).}$ (B-92)

This is the same as the solution of equation (B-80). Moreover, equation (B-80) is equivalent to equation (B-36a) for n = 2. Thus, we can conclude that a prediction error filter with unit prediction lag and of length n + 1 is equivalent to an inverse filter of the same length, except for a scale factor.
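Predictive deconvolution per equations (B-77) and (B-91) can be sketched as follows; the trace values, filter length, and prediction lag are arbitrary:

```python
import numpy as np

def prediction_error_filter(x, n, alpha):
    """Design the n-term prediction filter of equation (B-77) for prediction
    lag alpha, then build the prediction error filter of equation (B-91)."""
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(n + alpha)])
    R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    f = np.linalg.solve(R, r[alpha:alpha + n])    # right-hand side of (B-77)
    return np.concatenate(([1.0], np.zeros(alpha - 1), -f))

x = np.array([1.0, -0.5, 0.2, 0.4, -0.1])         # hypothetical trace
a = prediction_error_filter(x, n=3, alpha=2)      # (1, 0, -f0, -f1, -f2)
e = np.convolve(a, x)                             # unpredictable component
```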

B.8 Surface-consistent deconvolution

Deconvolution can be formulated as a surface-consistent spectral decomposition [3]. In such a formulation, the seismic trace is decomposed into the convolutional effects of source, receiver, offset, and the earth’s impulse response, thus explicitly accounting for variations in wavelet shape caused by near-source and near-receiver conditions and by source-receiver separation. Decomposition is followed by inverse filtering to recover the earth’s impulse response. The assumption of surface-consistency implies that the basic wavelet shape depends only on the source and receiver locations, and not on the details of the raypath from source to reflector to receiver.

The convolutional model discussed in this appendix is described by

 ${\displaystyle x(t)=w(t)\ast e(t)+n(t),}$ (B-93)

where x(t) is the recorded seismogram, w(t) is the source waveform, e(t) is the earth’s impulse response that we want to estimate, and n(t) is the noise component.

A postulated surface-consistent convolutional model is given by

 ${\displaystyle x'_{ij}(t)=s_{j}(t)\ast h_{l}(t)\ast e_{k}(t)\ast g_{i}(t)+n(t),}$ (B-94)

where ${\displaystyle {{{x}'}_{ij}}\left(t\right)}$ is a model of the recorded seismogram, sj(t) is the waveform component associated with source location j, gi(t) is the component associated with receiver location i, and hl(t) is the component associated with the offset dependency of the waveform, defined for each offset index l = |i − j|. As in equation (B-93), ek(t) represents the earth’s impulse response at the source-receiver midpoint location, k = (i + j)/2. By comparing equations (B-93) and (B-94), we infer that w(t) represents the combined effects of s(t), h(t), and g(t). An alternative to equation (B-94), in which the offset term is ignored, is offered in [4].

To illustrate a method of computing sj(t), hl(t), ek(t), and gi(t), assume n(t) = 0 and Fourier transform equation (B-94):

 ${\displaystyle X'_{ij}(\omega )=S_{j}(\omega )H_{l}(\omega )E_{k}(\omega )G_{i}(\omega ).}$ (B-95)

This equation can be separated into the following amplitude spectral components:

 ${\displaystyle {\overline {X}}'_{ij}(\omega )={\overline {S}}_{j}(\omega ){\overline {H}}_{l}(\omega ){\overline {E}}_{k}(\omega ){\overline {G}}_{i}(\omega ),}$ (B–96a)

and phase spectral components (Section A.1):

 ${\displaystyle \phi '_{ij}(\omega )=\phi _{sj}(\omega )+\phi _{hl}(\omega )+\phi _{ek}(\omega )+\phi _{ri}(\omega ).}$ (B–96b)

If the minimum-phase assumption is made, only the amplitude spectra (equation B-96a) need to be considered.

Equation (B-96a) now can be linearized by taking the logarithm of both sides:

 ${\displaystyle {\tilde {X}}'_{ij}(\omega )={\tilde {S}}_{j}(\omega )+{\tilde {H}}_{l}(\omega )+{\tilde {E}}_{k}(\omega )+{\tilde {G}}_{i}(\omega ).}$ (B-97)

The left side is the logarithm of the amplitude spectrum ${\displaystyle {{\overline {X}}^{\prime }}_{ij}\left(\omega \right)}$ of the modeled input trace as in the left-hand side of equation (B-96a), and the right-hand terms are the logarithms of the amplitude spectra of the individual components as in the right-hand side of equation (B-96a).
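The linearization can be verified numerically for a two-component case (source and receiver only, with made-up two-sample wavelets): the log amplitude spectrum of the convolved trace equals the sum of the component log amplitude spectra at every frequency.

```python
import numpy as np

s = np.array([1.0, 0.5])    # hypothetical source component
g = np.array([1.0, -0.4])   # hypothetical receiver component
n = 8                       # FFT length (>= length of the linear convolution)

S = np.abs(np.fft.rfft(s, n))
G = np.abs(np.fft.rfft(g, n))
X = np.abs(np.fft.rfft(np.convolve(s, g), n))

# log|X| = log|S| + log|G| at every frequency, up to round-off:
ok = np.allclose(np.log(X), np.log(S) + np.log(G))
```

The wavelets were chosen so that neither amplitude spectrum has a notch; otherwise the logarithm would amplify round-off error near the zeros.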

An alternative model equation is given by [5] based on the work by [6]:

 ${\displaystyle {\tilde {X}}''_{ij}(\omega )={\tilde {S}}_{j}(\omega )+{\tilde {H}}_{l}(\omega )+{\tilde {E}}_{k}(\omega )+{\tilde {G}}_{i}(\omega ),}$ (B–98a)

where

 ${\displaystyle {\tilde {X}}''_{ij}(\omega )={\tilde {X}}'_{ij}(\omega )-{\tilde {X}}'_{avg}(\omega ).}$ (B–98b)

The spectral component ${\displaystyle {{{\tilde {X}}'}_{avg}}\left(\omega \right)}$ is associated with the average amplitude spectrum for the entire data set. The terms on the right-hand side of equation (B-98a) now correspond to residual spectral components for each source, offset, midpoint, and receiver location.

The spectral components on the right-hand side of equation (B-98a) can be computed by least-squares error minimization. For each frequency ω, equation (B-98a) is written for each trace of each CMP gather in the data set. Consider a data set with ns shot locations and nc channels, so that the total number of traces is ns × nc. For ns × nc values of the actual spectral components ${\displaystyle {{\tilde {X}}_{ij}}}$ at frequency ω, and ns shot locations, nr receiver locations, ne midpoint locations, and nh offsets, we have the following set of model equations:

 ${\displaystyle {\begin{pmatrix}\vdots \\{\tilde {X}}''_{ij}\\\vdots \end{pmatrix}}={\begin{pmatrix}\cdots 1\cdots &1\cdots &1\cdots &1\cdots \end{pmatrix}}{\begin{pmatrix}\vdots \\{\tilde {S}}_{j}\\\vdots \\{\tilde {H}}_{l}\\\vdots \\{\tilde {E}}_{k}\\\vdots \\{\tilde {G}}_{i}\\\vdots \end{pmatrix}}}$ (B–99a)

Write equation (B-99a) in matrix notation:

 ${\displaystyle {\tilde {\mathbf {X} }}''={\mathbf {L} p},}$ (B–99b)

where ${\displaystyle {\tilde {\mathbf {X} }}''}$ is the column vector of length ns × nc on the left-hand side of equation (B-99a), L is the sparse matrix with dimensions (ns × nc) × (ns + nh + ne + nr), and p is the column vector of length ns + nh + ne + nr on the right-hand side of the same equation. Except for the four unit elements in each row, the matrix L contains only zeros.
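Assembling L can be sketched for a single-line geometry in which every shot is recorded by every receiver. To keep all indices integral, the sketch uses the doubled midpoint index k = i + j and the offset index l = |i − j| (a common bookkeeping choice, assumed here):

```python
import numpy as np

def build_L(ns, nr):
    """Build the sparse model matrix L of equation (B-99b) for ns shot and
    nr receiver locations (nc = nr channels per shot). Each row contains
    exactly four unit entries, selecting S_j, H_l, E_k, and G_i."""
    ne = ns + nr - 1            # doubled-midpoint indices k = i + j
    nh = max(ns, nr)            # offset indices l = |i - j|
    rows = []
    for j in range(ns):         # shot index
        for i in range(nr):     # receiver index
            row = np.zeros(ns + nh + ne + nr)
            row[j] = 1.0                    # source term S_j
            row[ns + abs(i - j)] = 1.0      # offset term H_l
            row[ns + nh + i + j] = 1.0      # midpoint term E_k
            row[ns + nh + ne + i] = 1.0     # receiver term G_i
            rows.append(row)
    return np.array(rows)

L = build_L(ns=3, nr=4)   # 12 traces, 3 + 4 + 6 + 4 = 17 unknowns
```

In production code the matrix would be stored in a sparse format; the dense array here is only for illustration.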

We want to estimate, for each frequency ω, the model parameters p such that the difference between the actual spectral component ${\displaystyle {\tilde {\mathbf {X} }}}$ and the modeled spectral component ${\displaystyle {\tilde {\mathbf {X} }}''}$ is a minimum in the least-squares sense.

The error vector v is defined as the difference between the modeled and the actual spectral components for each frequency ω:

 ${\displaystyle {\mathbf {v} }={\tilde {\mathbf {X} }}-{\tilde {\mathbf {X} }}''.}$ (B–100a)

Substitute equation (B-99b) into equation (B-100a) to obtain

 ${\displaystyle {\mathbf {v} }={\tilde {\mathbf {X} }}-{\mathbf {L} p}.}$ (B–100b)

Following [7], the least-squares solution for equation (B-100b) can be determined. First, the cumulative squared error C is expressed as

 ${\displaystyle C={\mathbf {v} }^{\mathbf {T} *}{\mathbf {v} },}$ (B–101a)

where T denotes transpose and * denotes complex conjugate. By substituting for v from equation (B-100b), we get

 ${\displaystyle C=({\tilde {\mathbf {X} }}-{\mathbf {L} p})^{\mathbf {T} *}({\tilde {\mathbf {X} }}-{\mathbf {L} p}).}$ (B–101b)

Minimization of C with respect to p requires that

${\displaystyle {\frac {\partial C}{\partial {\tilde {S}}_{j}}}={\frac {\partial C}{\partial {\tilde {H}}_{l}}}={\frac {\partial C}{\partial {\tilde {E}}_{k}}}={\frac {\partial C}{\partial {\tilde {G}}_{i}}}=0.}$

This requirement yields the desired least-squares solution:

 ${\displaystyle {\mathbf {p} }=({\mathbf {L} }^{\mathbf {T} \ast }{\mathbf {L} })^{-1}{\mathbf {L} }^{\mathbf {T} *}{\tilde {\mathbf {X} }}.}$ (B-102)
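A toy numerical check of equation (B-102) for a made-up 2-shot by 2-receiver geometry follows. Because the decomposition is non-unique, the normal-equations matrix L<sup>T</sup>L is rank-deficient, so a least-squares solver (minimum-norm solution) stands in for the plain inverse; the modeled data are nonetheless recovered:

```python
import numpy as np

# 2 shots x 2 receivers; columns ordered S(2), H(2), E(3), G(2):
rows = []
for j in range(2):                    # shot index
    for i in range(2):                # receiver index
        row = np.zeros(9)
        row[j] = 1.0                  # S_j
        row[2 + abs(i - j)] = 1.0     # H_l, with l = |i - j|
        row[4 + i + j] = 1.0          # E_k, with doubled midpoint k = i + j
        row[7 + i] = 1.0              # G_i
        rows.append(row)
L = np.array(rows)

# Hypothetical component values and noise-free data:
p_true = np.array([0.3, -0.1, 0.2, 0.0, 1.0, 0.5, -0.4, 0.1, -0.2])
X = L @ p_true

# Minimum-norm least-squares solution (L^T L is singular here):
p_hat, *_ = np.linalg.lstsq(L, X, rcond=None)

# The component vector itself is non-unique, but the fit is exact:
fit_ok = np.allclose(L @ p_hat, X)
```

In practice, the non-uniqueness is handled either by the averaging of equation (B-98b) or by constraining trends across the component terms.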

Application of the least-squares minimization to surface-consistent prediction-error filtering is given by [8]. A practical scheme for solving equation (B-98a) is based on the Gauss-Seidel method. In this scheme, each term on the right-hand side of equation (B-98a) is computed by the following set of recursive equations:

 ${\displaystyle {\tilde {S}}_{j}^{m}={\frac {1}{n_{r}}}\sum \limits _{i}^{n_{r}}\left\{{\tilde {X}}_{ij}-{\tilde {H}}_{l}^{m-1}-{\tilde {E}}_{k}^{m-1}-{\tilde {G}}_{i}^{m-1}\right\},}$ (B–103a)

 ${\displaystyle {\tilde {G}}_{i}^{m}={\frac {1}{n_{s}}}\sum \limits _{j}^{n_{s}}\left\{{\tilde {X}}_{ij}-{\tilde {H}}_{l}^{m-1}-{\tilde {E}}_{k}^{m-1}-{\tilde {S}}_{j}^{m-1}\right\},}$ (B–103b)

 ${\displaystyle {\tilde {H}}_{l}^{m}={\frac {1}{n_{e}}}\sum \limits _{k}^{n_{e}}\left\{{\tilde {X}}_{ij}-{\tilde {S}}_{j}^{m-1}-{\tilde {E}}_{k}^{m-1}-{\tilde {G}}_{i}^{m-1}\right\},}$ (B–103c)

and

 ${\displaystyle {\tilde {E}}_{k}^{m}={\frac {1}{n_{h}}}\sum \limits _{l}^{n_{h}}\left\{{\tilde {X}}_{ij}-{\tilde {S}}_{j}^{m-1}-{\tilde {H}}_{l}^{m-1}-{\tilde {G}}_{i}^{m-1}\right\},}$ (B–103d)

where m is the iteration index. The solutions in equations (B-103) are based on the orthogonality of the shot and receiver axes, and the orthogonality of the midpoint and offset axes. Equations (B-103) can be modified as follows:

 ${\displaystyle {\tilde {S}}_{j}^{m}={\frac {1}{n_{r}}}\sum _{i}^{n_{r}}\{{\tilde {X}}_{ij}\}-{\frac {1}{n_{r}}}\sum _{i}^{n_{r}}\{{\tilde {H}}_{l}^{m-1}+{\tilde {E}}_{k}^{m-1}+{\tilde {G}}_{i}^{m-1}\},}$ (B–104a)

 ${\displaystyle {\tilde {G}}_{i}^{m}={\frac {1}{n_{s}}}\sum _{j}^{n_{s}}\{{\tilde {X}}_{ij}\}-{\frac {1}{n_{s}}}\sum _{j}^{n_{s}}\{{\tilde {H}}_{l}^{m-1}+{\tilde {E}}_{k}^{m-1}+{\tilde {S}}_{j}^{m-1}\},}$ (B–104b)

 ${\displaystyle {\tilde {H}}_{l}^{m}={\frac {1}{n_{e}}}\sum _{k}^{n_{e}}\{{\tilde {X}}_{ij}\}-{\frac {1}{n_{e}}}\sum _{k}^{n_{e}}\{{\tilde {S}}_{j}^{m-1}+{\tilde {E}}_{k}^{m-1}+{\tilde {G}}_{i}^{m-1}\},}$ (B–104c)

and

 ${\displaystyle {\tilde {E}}_{k}^{m}={\frac {1}{n_{h}}}\sum _{l}^{n_{h}}\{{\tilde {X}}_{ij}\}-{\frac {1}{n_{h}}}\sum _{l}^{n_{h}}\{{\tilde {S}}_{j}^{m-1}+{\tilde {H}}_{l}^{m-1}+{\tilde {G}}_{i}^{m-1}\}.}$ (B–104d)

This modification enables us to compute and store the sum of the spectral components of the input data ${\displaystyle \sum {{\tilde {X}}_{ij}},}$ thus circumventing the need to store the individual spectral components ${\displaystyle {{\tilde {X}}_{ij}}}$ [5]. The process is iterated until an iteration index m is reached that attains the least-squares minimum.
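The recursion can be sketched in its simplest, two-term form (source and receiver only, which is also the variant most often used in practice); the offset and midpoint terms would be updated analogously. The data below are made up and noise-free:

```python
import numpy as np

def decompose_two_term(X, niter=3):
    """Alternating (Gauss-Seidel style) estimation of source terms S_j and
    receiver terms G_i from log-spectral values X[i, j], the two-term
    special case of the recursions (B-103); i = receiver, j = shot."""
    nr, ns = X.shape
    S, G = np.zeros(ns), np.zeros(nr)
    for _ in range(niter):
        S = (X - G[:, None]).mean(axis=0)   # cf. equation (B-103a)
        G = (X - S[None, :]).mean(axis=1)   # cf. equation (B-103b)
    return S, G

# Consistent synthetic data: X[i, j] = S_j + G_i
S_true = np.array([0.2, -0.1, 0.4])
G_true = np.array([0.05, -0.3, 0.1, 0.15])
X = S_true[None, :] + G_true[:, None]

S, G = decompose_two_term(X)
recon_ok = np.allclose(S[None, :] + G[:, None], X)
```

For consistent two-term data the alternating means reproduce the data after a single sweep; with all four terms, more iterations are needed and the split between terms is only determined up to the usual trade-offs.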

The parameter vector p, which contains the spectral components ${\displaystyle {{\tilde {S}}_{j}},\ {{\tilde {G}}_{i}},\ {{\tilde {H}}_{l}},{\text{ and }}{{\tilde {E}}_{k}}}$ associated with the source and receiver locations, the offset dependency, and the earth’s impulse response, is solved for at each frequency component ω using equations (B-104). Results from all frequency components are then combined to obtain the terms in equation (B-98a). The surface-consistent spiking deconvolution operator to be applied to each trace in the data set is then the minimum-phase inverse of ${\displaystyle {{s}_{j}}\left(t\right)*{{g}_{i}}\left(t\right)*{{h}_{l}}\left(t\right).}$ In the case of predictive deconvolution with a desired prediction lag, a deconvolution operator is computed for each source, receiver, and offset term by using the autocorrelograms of sj(t), gi(t), and hl(t). These operators are then applied to each trace in the data set in a cascaded manner.

As a by-product of the derivation of the surface-consistent spectral decomposition equation (B-97), trace amplitudes themselves can be corrected for in a surface-consistent manner [9]. Sum the individual terms in equation (B-97) over the frequencies:

 ${\displaystyle \sum _{\omega }{\tilde {X}}'_{ij}(\omega )=\sum _{\omega }{\tilde {S}}_{j}(\omega )+\sum _{\omega }{\tilde {H}}_{l}(\omega )+\sum _{\omega }{\tilde {E}}_{k}(\omega )+\sum _{\omega }{\tilde {G}}_{i}(\omega ).}$ (B–105a)

Each term yields a scalar that is related to the source, receiver, offset, and midpoint locations:

 ${\displaystyle {\tilde {x}}'_{ij}={\tilde {s}}_{j}+{\tilde {h}}_{l}+{\tilde {e}}_{k}+{\tilde {g}}_{i}.}$ (B–105b)

Once computed in the same manner as the terms in equation (B-98a), these scalars are then applied to individual traces in the data set for surface-consistent amplitude corrections.
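The additivity of the scalars in equation (B-105b) follows directly from summing equation (B-97) over frequencies; a quick check for a made-up source-receiver pair:

```python
import numpy as np

s = np.array([1.0, 0.5])    # hypothetical source component
g = np.array([1.0, -0.4])   # hypothetical receiver component
nfreq = 8                   # FFT length

def log_amp_sum(a):
    """Sum of the log amplitude spectrum over frequencies (cf. B-105a)."""
    return np.log(np.abs(np.fft.rfft(a, nfreq))).sum()

# The scalar of the convolved trace equals the sum of the component scalars:
scalars_match = np.isclose(log_amp_sum(np.convolve(s, g)),
                           log_amp_sum(s) + log_amp_sum(g))
```

Exponentiating the negated scalars gives the surface-consistent amplitude correction factors applied to the traces.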

In practice, application of surface-consistent deconvolution to field data usually involves only two terms: the source term sj(t) and the receiver term gi(t). In a transition zone, surface conditions at source and receiver locations may vary significantly between dry and wet. Hence, the most likely situation in which surface-consistent amplitude corrections and deconvolution may be required is transition-zone data. Figure B-4 shows a field-data example of surface-consistent deconvolution. Note the variations in the autocorrelograms from one source location to the next and from one receiver location to the next. Differences in reflection continuity are observed within the first 1 s on the stacked sections created by application of conventional trace-by-trace deconvolution and of surface-consistent deconvolution.

B.9 Inverse Q filtering

Consider a 1-D seismogram that represents a compressional plane wave that propagates vertically downward in a homogeneous medium with intrinsic attenuation. This plane wave is expressed as the solution to the scalar wave equation:

 ${\displaystyle {\frac {1}{v^{2}}}{\frac {\partial ^{2}P}{\partial t^{2}}}={\frac {\partial ^{2}P}{\partial {z}^{2}}},}$ (B-106)

where P(t, z) is the plane wave represented by the 1-D seismogram — a CMP-stacked trace, t is the traveltime, z is the depth variable and v is the wave velocity. We shall assume that the wave velocity is constant.

To solve equation (B-106), first, Fourier transform in the time direction:

 ${\displaystyle {\frac {\omega ^{2}}{v^{2}}}P={\frac {\partial ^{2}P}{\partial z^{2}}},}$ (B-107)

where P(ω, z) is the Fourier transform of the wavefield P(t, z), and ω is the angular frequency. The upcoming wave solution is then given by

 ${\displaystyle P(\omega ,\ z)=P(\omega ,\ z=0)\ \ \exp(-i{\frac {\omega }{v}}z).}$ (B-108)

To include amplitude decay in wave propagation in a medium with intrinsic attenuation, the wave velocity is defined as a complex variable:

 ${\displaystyle v=\alpha +i\beta .}$ (B-109)

Substitute equation (B-109) into equation (B-108) to get

 ${\displaystyle P(\omega ,\ z)=P(\omega ,\ z=0)\ \ \exp(-i{\frac {\omega }{\alpha +i\beta }}z).}$ (B-110)

By simple algebra, rewrite equation (B-110) as follows:

 ${\displaystyle P(\omega ,\ z)=P(\omega ,\ z=0)\ \ \exp(-i{\frac {\omega \alpha }{\alpha ^{2}+\beta ^{2}}}z)\ \ \exp(-{\frac {\omega \beta }{\alpha ^{2}+\beta ^{2}}}z).}$ (B-111)

For most rocks, the assumption that β is much smaller than α can be made. As a result, equation (B-111) can be simplified as follows:

 ${\displaystyle P(\omega ,\ z)=P(\omega ,\ z=0)\ \ \exp(-i{\frac {\omega }{\alpha }}z)\ \ \exp(-{\frac {\omega \beta }{\alpha ^{2}}}z).}$ (B-112)
Figure B-4  Surface-consistent deconvolution applied to field data: autocorrelograms of (a) the source term, and (b) the receiver term as in equation (B-94), (c) conventional trace-by-trace prestack deconvolution, and (d) surface-consistent deconvolution using the autocorrelation estimates as in (a) and (b) (Analysis by Duane Dopkin).

Now, define a vertical time variable τ equivalent to the depth variable z via z = ατ, and rewrite equation (B-112):

 ${\displaystyle P(\omega ,\ \tau )=P(\omega ,\ \tau =0)\ \ \exp(-i\omega \tau )\ \exp(-\omega {\frac {\beta }{\alpha }}\tau ).}$ (B-113)

Assume an attenuation constant Q that is independent of frequency ω [10]:

 ${\displaystyle {\frac {1}{2Q}}={\frac {\beta }{\alpha }},}$ (B-114)

and substitute into equation (B-113) to obtain

 ${\displaystyle P(\omega ,\ \tau )=P(\omega ,\ \tau =0)\ \ \exp(-i\omega \tau )\ \ \exp(-{\frac {\omega \tau }{2Q}}).}$ (B-115)

Note from equation (B-115) that the higher the frequency, the greater the attenuation.
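As a numeric illustration (values chosen arbitrarily), with Q = 100 and τ = 1 s, the decay factor exp(−ωτ/2Q) of equation (B-115) is about 0.73 at 10 Hz but only about 0.21 at 50 Hz:

```python
import numpy as np

Q, tau = 100.0, 1.0                 # quality factor and traveltime (s)
f = np.array([10.0, 50.0])          # frequencies in Hz
decay = np.exp(-2.0 * np.pi * f * tau / (2.0 * Q))   # exp(-omega*tau/(2Q))
# The 50-Hz component is attenuated far more strongly than the 10-Hz one.
```
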

For a nondissipative medium, β = 0; hence, equation (B-114) states that Q is infinite. As a result, equation (B-115) takes the special form

 ${\displaystyle P(\omega ,\ \tau )=P(\omega ,\ \tau =0)\ \ \exp(-i\omega \tau ).}$ (B-116)

The amplitude spectrum of the inverse Q filter is thus given by the exponential scaling function

 ${\displaystyle A(\omega ,\ \tau )=\exp({\frac {\omega \tau }{2Q}}).}$ (B–117a)

The phase spectrum can either be set to zero, or more appropriately, assumed to be minimum-phase. In the latter case, it can be computed by taking the Hilbert transform of the amplitude spectrum given by equation (B-117a) (Section B.4):

 ${\displaystyle \phi (\omega ,\ \tau )=\mathbf {H} \{A(\omega ,\tau )\},}$ (B–117b)

where H represents the Hilbert transform.
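The Hilbert transform itself can be realized with an FFT (a sketch; sign conventions differ between texts): positive-frequency components are multiplied by −i and negative-frequency components by +i, which maps a cosine to a sine.

```python
import numpy as np

def hilbert(a):
    """FFT-based discrete Hilbert transform of a real 1-D array:
    multiply positive frequencies by -i and negative ones by +i."""
    sgn = np.sign(np.fft.fftfreq(a.size))
    return np.real(np.fft.ifft(np.fft.fft(a) * (-1j) * sgn))

t = np.arange(16) / 16.0
h = hilbert(np.cos(2 * np.pi * t))   # maps cos to sin
```

In a minimum-phase construction, this transform would be applied to the amplitude function for each output time τ to obtain the phase of equation (B-117b).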

By combining the amplitude and phase spectra given by equations (B-117a,B-117b), we define the minimum-phase inverse Q filter as

 ${\displaystyle W(\omega ,\ \tau )=A(\omega ,\ \tau )\ \ \exp\{-i\phi (\omega ,\tau )\}.}$ (B-118)

The inverse Q filtering equation (B-115) now takes the form

 ${\displaystyle P(\omega ,\ \tau )=P(\omega ,\ \tau =0)\ \ \exp(-i\omega \tau )W(\omega ,\ \tau ).}$ (B-119)

Note that the time variable t is associated with the input trace P(t, τ = 0) and the time variable τ is associated with the output trace P(t = 0, τ) after the application of the inverse Q filter.

To apply the filter W(ω, τ) to a trace P(ω, τ = 0), define a time step Δτ and write equation (B-119) in its recursive form:

 ${\displaystyle P(\omega ,\ k\Delta \tau )=P[\omega ,\ (k-1)\Delta \tau ]\ \ \exp(-i\omega \Delta \tau )W(\omega ,\ \Delta \tau ),}$ (B-120)

where k = 1, 2, …, n, with n number of time steps (number of samples in the input trace).

Equation (B-120) can now be used to describe a procedure for inverse Q filtering:

1. Fourier transform the input trace P(t, τ = 0) to obtain the complex transform function P(ω, τ = 0).
2. Define a time step Δτ and apply the linear phase shift to P(ω, τ = 0) by multiplying with the exponential exp(−iωΔτ).
3. Specify a constant Q and apply the inverse Q filter W(ω, Δτ) defined by equations (B-117a, B-117b, and B-118).
4. Repeat steps (1), (2), and (3) for all frequencies.
5. Sum over all frequencies to obtain the inverse Q-filtered wavefield at time step Δτ given by P(t = 0, Δτ).
6. Repeat steps (2) through (5) for all time steps kΔτ, k = 1, 2, …, n, to obtain the inverse Q-filtered wavefield P(t = 0, τ) at all times τ.
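The steps above can be sketched as follows. This is a zero-phase variant that uses only the amplitude term of equation (B-117a); the spike test at the end is made up, and the gain is clipped to keep band-edge noise from blowing up:

```python
import numpy as np

def inverse_q_filter(trace, dt, Q, max_gain=30.0):
    """Zero-phase inverse Q filtering sketch. For each output time
    tau = k*dt, every frequency is boosted by exp(+omega*tau/(2Q))
    (equation B-117a) and the inverse transform is evaluated at that
    single output sample, in the spirit of the recursion (B-120)."""
    n = trace.size
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequencies
    P = np.fft.rfft(trace)
    out = np.empty(n)
    for k in range(n):
        gain = np.exp(np.minimum(w * k * dt / (2.0 * Q), max_gain))
        out[k] = np.fft.irfft(P * gain, n)[k]
    return out

# Demo: forward-attenuate a spike with exp(-omega*tau/(2Q)), then restore it:
n, dt, Q, k0 = 32, 0.004, 100.0, 10
w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
spike = np.zeros(n); spike[k0] = 1.0
attenuated = np.fft.irfft(np.fft.rfft(spike) *
                          np.exp(-w * k0 * dt / (2.0 * Q)), n)
restored = inverse_q_filter(attenuated, dt, Q)
```

Because the forward and inverse exponentials cancel exactly at τ = k0Δτ, the spike amplitude at that sample is recovered; on field data the gain clip (and usually a high-cut taper) limits the amplification of ambient noise.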
Figure B-5  Inverse Q filtering applied to field data [11]: Portion of a stacked section with (a) no deconvolution, (b) inverse Q filtering, (c) inverse Q filtering followed by deconvolution, and (d) deconvolution, only; (e) average amplitude spectrum of the data shown in (a); (f) average amplitude spectrum of the data shown in (b); (g) average amplitude spectrum of the data shown in (c).

Inverse Q filtering often is applied to data using a constant Q factor. An efficient scheme for a vertically varying Q factor is described by [12].

Figure B-5 shows an example of inverse Q filtering applied to field data. Compare the stacked sections and the average amplitude spectra with no deconvolution, inverse Q filtering, inverse Q filtering followed by deconvolution, and deconvolution only. An inverse Q filter restores the high-frequency components of the signal that were subjected to intrinsic attenuation by the propagation medium. The deconvolution that follows the inverse Q filtering then easily flattens the spectrum within the passband.

References

1. Claerbout, J. F., 1976, Fundamentals of geophysical data processing: McGraw-Hill Book Co.
2. Robinson, E. A. and Treitel, S., 1980, Geophysical signal analysis: Prentice-Hall Book Co.
3. Taner, M. T. and Coburn, K., 1981, Surface-consistent deconvolution: Presented at the 51st Ann. Internat. Mtg., Soc. Expl. Geophys.
4. Cambois, G. and Stoffa, P. L., 1992, Surface-consistent deconvolution in the log-Fourier domain: Geophysics, 57, 823–840.
5. Cary, P. W. and Lorentz, G. A., 1993, Four-component surface-consistent deconvolution: Geophysics, 58, 383–392.
6. Morley, L. and Claerbout, J. F., 1983, Predictive deconvolution in shot-receiver space: Geophysics, 48, 515–531.
7. Lines, L. R. and Treitel, S., 1984, Tutorial: A review of least-squares inversion and its application to geophysical problems: Geophys. Prosp., 32, 159–186.
8. Levin, S. A., 1989, Surface-consistent deconvolution: Geophysics, 54, 1123–1133.
9. Taner, M. T. and Koehler, F., 1981, Surface-consistent corrections: Geophysics, 46, 17–22.
10. Kjartansson, E., 1979, Constant Q-wave propagation and attenuation: J. Geophys. Res., 84, 4737–4748.
11. Saatcilar, R., 1996, An algorithm for Q-filtering: J. Seis. Expl., 5, 157–168.
12. Hargreaves, N. D. and Calvert, A. J., 1991, Inverse Q filtering by Fourier transform: Geophysics, 56, 519–527.