# Deconvolution methods

*Geophysical References Series: Problems in Exploration Seismology and their Solutions*, Lloyd P. Geldart and Robert E. Sheriff, Chapter 9, 295–366. http://dx.doi.org/10.1190/1.9781560801733. ISBN 9781560801153.

## Problem 9.17

Deconvolution, whose objective is undoing the results of a prior convolution, is a somewhat open-ended collection of methods, a number of which are briefly described in the background. List the assumptions of different methods, such as invariant wavelet, randomness of the reflectivity or of the noise, whether a source wavelet is the same as the recorded wavelet, and so on.

### Background

Various deconvolution methods and their characteristics are described in the following text. Instead of giving a solution, we leave it up to the reader to list the assumptions.

A change in signal waveshape can be regarded as filtering, the effect being equivalent to convolution of the signal $g_{t}$ with a filter $f_{t}$: $h_{t}=f_{t}*g_{t}$ . Deconvolution attempts to remove the effects of $f_{t}$ and recover the original signal $g_{t}$ . Deconvolution in the time domain consists of convolution with an inverse filter $i_{t}$ , i.e., $h_{t}*i_{t}$ (see problems 9.18 and 9.19); in the frequency domain its equivalent is multiplication of the transforms, $H(\omega )I(\omega )$ (see Sheriff and Geldart, 1995, section 9.5 for more details). An inverse filter convolved with the filter yields an impulse, $i_{t}*f_{t}\leftrightarrow I(\omega )F(\omega )=1\leftrightarrow \delta _{t}$ . The objective of deconvolution often is to determine the shape of the embedded wavelet (see problem 9.6).

Most deconvolution methods are based on autocorrelations of individual traces. An autocorrelation measures the repetition in a time series. Presumably the embedded wavelet is repeated for every reflection and thus the early part of the autocorrelation is largely determined by the shape of the embedded wavelet (see Figure 9.17a). An embedded wavelet that is long will contribute significantly to several half-cycles of the autocorrelation. Because we generally want a short embedded wavelet, we want the autocorrelation’s amplitude to die off rapidly.
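As a small illustration of this, the autocorrelation of a synthetic trace can be computed directly; the wavelet, reflectivity, and numerical values below are hypothetical choices, not from the text:

```python
# Hypothetical example: a short embedded wavelet convolved with random
# reflectivity; the early autocorrelation lags mirror the wavelet shape.
import numpy as np

rng = np.random.default_rng(0)
wavelet = np.array([1.0, -0.6, 0.2])         # assumed short embedded wavelet
reflectivity = rng.normal(size=500)          # random reflection coefficients
trace = np.convolve(reflectivity, wavelet)   # trace = reflectivity * wavelet

full = np.correlate(trace, trace, mode="full")
zero = len(trace) - 1                        # index of the zero-lag value
acf = full[zero:zero + 5] / full[zero]       # normalized first few lags
print(acf)  # amplitude dies off within a few lags because the wavelet is short
```

Because the assumed wavelet spans only three samples, lags beyond two carry only the statistical residue of the random reflectivity.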

The wavelet changes shape as it travels through the earth. Nevertheless, most deconvolution methods assume constant waveshape, sometimes called stationarity. Obtaining a representative autocorrelation requires that an appreciable number of time samples be included in the calculation process, so usually 500 or more samples (1 s at 0.002 s sampling) are included. Because there is no phase information in an autocorrelation, the phase spectrum of the embedded wavelet cannot be recovered from it. It is often assumed that the embedded wavelet is minimum-phase.

Changes in the wavelet shape with time are generally accommodated by subdividing a trace into time segments, e.g., finding a deconvolution operator for the early portion of a trace and another for a later portion, assuming that the wavelet shape is constant during each portion. The portions analyzed often overlap, e.g., one autocorrelation is calculated for the data between 0.5 and 2.0 s, a second for the data between 1.5 and 3.0 s. Each deconvolution operator is then applied over its respective range, giving two outputs in the overlap region. These two outputs are then added together in different proportions in the overlap region, the output at 1.5 s being 100% of that given by the early operator and that at 2.0 s being 100% of that given by the later operator, the proportions changing linearly during the overlap time. This is often called adaptive deconvolution.
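The linear blending in the overlap can be sketched as follows; the constant "outputs" are stand-ins for real deconvolved traces, and the window times follow the 1.5–2.0 s example above:

```python
# Sketch of adaptive-deconvolution blending: two operator outputs are
# mixed in linearly changing proportions between 1.5 s and 2.0 s.
import numpy as np

dt = 0.002                                   # sample interval, s
t = np.arange(0.0, 3.0, dt)
out_early = np.ones_like(t)                  # stand-in: early-operator output
out_late = 2.0 * np.ones_like(t)             # stand-in: later-operator output

w = np.clip((t - 1.5) / (2.0 - 1.5), 0.0, 1.0)   # 0 at 1.5 s, 1 at 2.0 s
blended = (1.0 - w) * out_early + w * out_late

print(blended[round(1.75 / dt)])             # midway: roughly a 50/50 mix
```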

While many deconvolution methods exist, they can be roughly divided into deterministic and statistical methods, but the division is often unclear because deterministic methods may employ statistics and statistical methods may utilize knowledge about the nature of the convolution to be undone. Deterministic deconvolution requires that we know or can reasonably assume the mechanism or properties of the convolution to be undone.

One type of deconvolution is dereverberation or deringing, whose objective is to remove the effects of energy bouncing repeatedly in a near-surface layer—usually the water layer with marine data. This requires a knowledge of the repetition time of the near-surface bounces and the relative amplitudes of successive bounces. In the marine case the repetition time is assumed to be given by the water depth and the amplitudes by the sea-floor reflectivity, which sometimes can be obtained by trials. One marine deghosting method used with ocean-bottom recording employs velocity geophones and hydrophones (see problem 7.10).

Deghosting is a type of deterministic deconvolution where we assume that the ghost is a replica of the original signal $G(z)$ with amplitude reduced by the factor $R$ and delayed by $n\Delta$ , where $R$ is the reflection coefficient at the ghosting interface (note that $R$ is usually negative) for a wave approaching from below and $n\Delta$ is the two-way traveltime between the source and the ghosting interface. The transform of the signal plus ghost is then

$$H(z) = G(z)\,(1 + Rz^{n}). \tag{9.17a}$$

On land we usually can get $n$ fairly accurately from the depth of the source and the thickness and velocity of the LVL, but we must determine (or guess) the value of $R$ . In marine work, we get $n$ from the depth of the water. Deghosting is often done in a recursive manner, called recursive deconvolution.
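Under equation (9.17a), the ghost can be removed sample by sample, since $G(z) = H(z)/(1 + Rz^{n})$ expands to the recursion $g_{t} = h_{t} - R\,g_{t-n}$. A minimal sketch, with hypothetical values of $R$ and $n$:

```python
# Recursive deghosting sketch: undo (1 + R z^n) one sample at a time.
# R and n are assumed values for illustration only.
import numpy as np

def add_ghost(g, R, n):
    """Forward model of (9.17a): signal plus delayed, scaled ghost."""
    h = g.copy()
    h[n:] += R * g[:-n]
    return h

def deghost(h, R, n):
    """Recursive deconvolution: g[t] = h[t] - R * g[t - n]."""
    g = h.copy()
    for t in range(n, len(h)):
        g[t] -= R * g[t - n]
    return g

signal = np.array([0.0, 1.0, 0.5, 0.0, 0.0, -0.3, 0.0, 0.0])
ghosted = add_ghost(signal, R=-0.5, n=2)     # R negative, as noted above
recovered = deghost(ghosted, R=-0.5, n=2)
print(np.allclose(recovered, signal))  # → True
```

Dereverberation works the same way, with the water-layer repetition time and sea-floor reflectivity in place of $n\Delta$ and $R$.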

System deconvolution is sometimes deterministically carried out to remove the filtering action of recording instrumentation or data processing.

In spiking deconvolution we assume that the impulse response of the earth $e_{t}$ , whose elements are the reflection coefficients at the various interfaces, is randomly distributed; hence, the autocorrelation of $e_{t}$ has a nonzero value (a “spike”) only at zero time shift, that is,

$$\phi_{ee}(t) \approx k\,\delta_{t}. \tag{9.17b}$$

“Random” here means unpredictable, i.e., one cannot predict the arrival time of a primary reflection based on the arrival times of earlier reflections. If $\omega _{t}$ is the embedded wavelet, the geophone output is $g_{t}=e_{t}*\omega _{t}$ . We can use the method of least squares (least-squares filtering is discussed in problem 9.22) to find the optimum inverse filter that will give a result that has the properties we assume for $e_{t}$ (see Sheriff and Geldart, 1995, 296). Spiking deconvolution can be carried out in either the time or the frequency domain. Because we consider $e_{t}$ to be random, its spectrum contains all frequencies in equal abundance, that is, its spectrum should be flat; techniques for achieving this are called spectral flattening or flattening deconvolution. Flattening is usually done only over the passband where the signal is assumed to be dominant. Since the autocorrelation of $e_{t}$ is small except for zero shift, the inverse filter $I(z)$ is

$$I(z) = 1/H(z), \tag{9.17c}$$

where $H(z)$ is the transform of the observed seismic trace. We can get the amplitude spectrum from the autocorrelation, but we also need to know the phase of $H(z)$ in order to solve the problem. There is no phase information in the autocorrelation, and so we have to assume the phase to get a solution. We usually assume that $H(z)$ is minimum-phase. Spiking deconvolution is often done to shorten the duration of the embedded wavelet, but because flattening boosts frequencies where noise may dominate the signal, it can make the data noisier.
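A least-squares spiking operator can be sketched from the trace autocorrelation alone, assuming (as above) a minimum-phase wavelet; the filter length and prewhitening percentage below are hypothetical choices:

```python
# Spiking-deconvolution sketch: solve the normal equations built from the
# autocorrelation for the operator whose desired output is a zero-lag spike.
import numpy as np

def spiking_operator(trace, nfilt=20, prewhiten=0.01):
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1 : len(trace) - 1 + nfilt]   # lags 0..nfilt-1
    r = r / r[0]
    r[0] *= 1.0 + prewhiten                  # prewhitening stabilizes the solve
    R = np.array([[r[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    d = np.zeros(nfilt)
    d[0] = 1.0                               # desired output: a spike at t = 0
    return np.linalg.solve(R, d)

wavelet = np.array([1.0, -0.5, 0.25, -0.125])            # assumed minimum-phase
trace = np.convolve(np.random.default_rng(1).normal(size=1000), wavelet)
op = spiking_operator(trace)
spiked = np.convolve(wavelet, op)            # operator applied to the wavelet
print(np.argmax(np.abs(spiked)))  # → 0: energy compressed toward zero lag
```

In practice a Levinson-type solver exploits the Toeplitz structure of the normal equations instead of a general linear solve.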

Predictive deconvolution uses information about the primary reflections to predict multiples produced by the same reflectors. Long-path multiples cause systematic repetition of a trace which produces significant values of the autocorrelation following the time delay associated with the multiple repetition time (see Figure 9.17a). Hence deconvolution to remove multiples is generally based on the portion of an autocorrelation trace after a lag time $L$ . A predictive deconvolution filter does not begin to act until the time $L$ . Autocorrelation elements that occur after $L$ are regarded as produced by multiples and a least-squares method can be used to find the filter that will predict the multiples. The predicted multiples can then be subtracted to get rid of the recorded multiples.
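The workflow above can be sketched for the simple case of a water-layer reverberation; the multiple period, reflection coefficient, gap, and filter length below are all hypothetical:

```python
# Predictive-deconvolution sketch: design a filter that predicts the trace
# a gap of L samples ahead, using autocorrelation lags after L, then
# subtract the prediction (the multiples) from the recorded trace.
import numpy as np

def predictive_decon(trace, gap, nfilt):
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1:]                       # lags 0, 1, 2, ...
    R = np.array([[r[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    R += 1e-3 * r[0] * np.eye(nfilt)                # light prewhitening
    a = np.linalg.solve(R, r[gap:gap + nfilt])      # prediction filter
    predicted = np.zeros_like(trace)
    for t in range(gap, len(trace)):
        lo = max(0, t - gap - nfilt + 1)
        seg = trace[lo:t - gap + 1][::-1]           # samples at and before t - gap
        predicted[t] = np.dot(a[:len(seg)], seg)
    return trace - predicted                        # predicted multiples removed

rng = np.random.default_rng(2)
primaries = rng.normal(size=400)
trace = primaries.copy()
for t in range(10, len(trace)):                     # reverberation, period 10
    trace[t] -= 0.6 * trace[t - 10]
clean = predictive_decon(trace, gap=10, nfilt=5)
err = np.mean((clean[50:] - primaries[50:]) ** 2) / np.mean(primaries ** 2)
print(err)  # small relative error: the periodic multiples are largely removed
```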

Entropy is a measure of the disorder or unpredictability of a system, the entropy increasing as the disorder increases. The autocorrelation of a time series does not determine the series uniquely, and a given amplitude spectrum corresponds to a number of different time series. Maximum-entropy filtering attempts to select the time series that has the maximum entropy (maximum disorder).

We sometimes try to represent a seismic trace as a sequence minimizing the number of reflection events, i.e., involving only a few large reflections. This can be accomplished, for example, by minimizing the error in a least-fourth power sense rather than in a least-squares sense (as is done in Wiener filtering). This type of operation is sometimes called minimum-entropy (sparse-spike) deconvolution.

Homomorphic deconvolution (Sheriff and Geldart, 1995, 298) involves transformation into the cepstral domain, the cepstrum $g^{*}(\zeta )$ being defined by the relation

$$\ln[G(z)] = G^{*}(z) \leftrightarrow g^{*}(\zeta),$$

where $\leftrightarrow$ denotes the Fourier transform. In the cepstral domain the geophone input $g_{t}=e_{t}*\omega _{t}$ becomes $g^{*}(\zeta )=e^{*}(\zeta )+\omega ^{*}(\zeta )$ . If $e_{t}$ varies more rapidly than $\omega _{t}$ , the two may be separable by frequency filtering.
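A minimal sketch, using only the real cepstrum (log of the amplitude spectrum; the full $g^{*}(\zeta )$ includes phase, which practical implementations must unwrap carefully). The wavelet and reflectivity values are hypothetical:

```python
# Cepstral-domain sketch: the log of the spectrum turns convolution into
# addition, so the cepstrum of e * w is the sum of the individual cepstra.
import numpy as np

N = 16                                        # FFT length (zero-padded)

def real_cepstrum(x, n=N):
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x, n)))).real

w = np.array([1.0, -0.5, 0.25])               # hypothetical embedded wavelet
e = np.array([1.0, 0.0, 0.8])                 # hypothetical reflectivity snippet
g = np.convolve(e, w)                         # geophone input g = e * w

# convolution in the time domain becomes addition in the cepstral domain
print(np.allclose(real_cepstrum(g), real_cepstrum(e) + real_cepstrum(w)))  # → True
```

Separating $e^{*}$ from $\omega ^{*}$ then amounts to filtering in the $\zeta$ domain, as described above.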