# Deconvolution methods

| | |
|---|---|
| Series | Geophysical References Series |
| Title | Problems in Exploration Seismology and their Solutions |
| Author | Lloyd P. Geldart and Robert E. Sheriff |
| Chapter | 9 |
| Pages | 295–366 |
| DOI | http://dx.doi.org/10.1190/1.9781560801733 |
| ISBN | 9781560801153 |
| Store | SEG Online Store |

## Problem 9.17

Deconvolution, whose objective is undoing the results of a prior convolution, is a somewhat open-ended collection of methods, a number of which are briefly described in the background. List the assumptions of different methods, such as invariant wavelet, randomness of the reflectivity or of the noise, whether a source wavelet is the same as the recorded wavelet, and so on.

### Background

Various deconvolution methods and their characteristics are described in the following text. Instead of giving a solution, we leave it up to the reader to list the assumptions.

A change in signal waveshape can be regarded as filtering, the effect being equivalent to convolution of the signal $g_t$ with a filter $f_t$. *Deconvolution* attempts to remove the effects of $f_t$ and obtain the original signal $g_t$. Deconvolution in the time domain consists of convolution with an inverse filter $f_t^{-1}$, i.e., $f_t^{-1} * (f_t * g_t) \approx g_t$ (see problems 9.18 and 9.19); in the frequency domain its equivalent is multiplication of the transforms, $[1/F(\omega)]\,[F(\omega)G(\omega)] \approx G(\omega)$ (see Sheriff and Geldart, 1995, section 9.5 for more details). An inverse filter convolved with the filter yields an impulse, $f_t^{-1} * f_t = \delta_t$. The objective of deconvolution often is to determine the shape of the embedded wavelet (see problem 9.6).
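
As a toy numerical illustration (a sketch using a hypothetical two-term filter, not an example from the text), convolving a filter with a truncated version of its inverse yields an approximate unit impulse:

```python
import numpy as np

# Hypothetical minimum-phase filter f_t = (1, -0.5); its exact inverse is
# the infinite geometric series (1, 0.5, 0.25, ...), truncated here.
f = np.array([1.0, -0.5])
f_inv = 0.5 ** np.arange(20)       # truncated inverse filter

spike = np.convolve(f, f_inv)      # f_t * f_t^{-1} ~ unit impulse
print(spike[:3])                   # ~ [1. 0. 0.]
```

The truncation error shrinks geometrically with the inverse-filter length, which is why short truncated inverses often suffice in practice.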

Most deconvolution methods are based on autocorrelations of individual traces. An autocorrelation measures the repetition in a time series. Presumably the embedded wavelet is repeated for every reflection and thus the early part of the autocorrelation is largely determined by the shape of the embedded wavelet (see Figure 9.17a). An embedded wavelet that is long will contribute significantly to several half-cycles of the autocorrelation. Because we generally want a short embedded wavelet, we want the autocorrelation’s amplitude to die off rapidly.
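This behavior is easy to sketch numerically; the short wavelet and random reflectivity below are invented stand-ins, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelet = np.array([1.0, -1.2, 0.4, 0.1])   # assumed short embedded wavelet
reflectivity = rng.normal(size=500)         # synthetic random reflectivity
trace = np.convolve(reflectivity, wavelet)

# One-sided autocorrelation, normalized by the zero-lag value; the early
# lags are dominated by the wavelet shape, later lags are near zero.
ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
ac = ac / ac[0]
print(ac[:5])
```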

The wavelet changes shape as it travels through the earth. Nevertheless, most deconvolution methods assume constant waveshape, sometimes called stationarity. Obtaining a representative autocorrelation requires that an appreciable number of time samples be included in the calculation process, so usually 500 or more samples (1 s at 0.002 s sampling) are included. Because there is no phase information in an autocorrelation, the phase spectrum of the embedded wavelet cannot be recovered from it. It is often assumed that the embedded wavelet is minimum-phase.

Changes in the wavelet shape with time are generally accommodated by subdividing a trace into time segments, e.g., finding a deconvolution operator for the early portion of a trace and another for a later portion, assuming that the wavelet shape is constant during each portion. The portions analyzed often overlap, e.g., one autocorrelation is calculated for the data between 0.5 and 2.0 s, a second for the data between 1.5 and 3.0 s. Each deconvolution operator is then applied over its respective range, giving two outputs in the overlap region. These two outputs are then added together in different proportions in the overlap region, the output at 1.5 s being 100% of that given by the early operator and that at 2.0 s being 100% of that given by the later operator, the proportions changing linearly during the overlap time. This is often called *adaptive deconvolution*.
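The linear crossfade described above can be sketched as follows (the overlap times 1.5–2.0 s match the example in the text; the two operator outputs are stand-in arrays):

```python
import numpy as np

def blend(early_out, late_out, t, t0=1.5, t1=2.0):
    """Crossfade two deconvolution outputs linearly over the overlap
    t0..t1 (seconds): 100% early operator at t0, 100% later at t1."""
    w = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return (1.0 - w) * early_out + w * late_out

t = np.arange(0.0, 3.0, 0.5)   # sample times in seconds
early = np.ones_like(t)        # stand-in output of the early operator
late = np.zeros_like(t)        # stand-in output of the later operator
mixed = blend(early, late, t)
print(mixed)                   # [1. 1. 1. 1. 0. 0.]
```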

While many deconvolution methods exist, they can be roughly divided into deterministic and statistical methods, but the division is often unclear because deterministic methods may employ statistics and statistical methods may utilize knowledge about the nature of the convolution to be undone. *Deterministic deconvolution* requires that we know or can reasonably assume the mechanism or properties of the convolution to be undone.

One type of deconvolution is *dereverberation* or *deringing*, whose objective is to remove the effects of energy bouncing repeatedly in a near-surface layer—usually the water layer with marine data. This requires a knowledge of the repetition time of the near-surface bounces and the relative amplitudes of successive bounces. In the marine case the repetition time is assumed to be given by the water depth and the amplitudes by the sea-floor reflectivity, which sometimes can be obtained by trials. One marine deghosting method used with ocean-bottom recording employs velocity geophones and hydrophones (see problem 7.10).

*Deghosting* is a type of deterministic deconvolution where we assume that the ghost is a replica of the original signal with its amplitude reduced by the factor $R$ and delayed by the time $\Delta$, where $R$ is the reflection coefficient at the ghosting interface for a wave approaching from below (note that $R$ is usually negative) and $\Delta$ is the two-way traveltime between the source and the ghosting interface. The transform of the signal plus ghost is then

$$H(z) = G(z)\left(1 + Rz^{n}\right),$$

where $G(z)$ is the $z$-transform of the signal and $n$ is the delay $\Delta$ expressed in samples.

On land we usually can get $\Delta$ fairly accurately from the depth of the source and the thickness and velocity of the LVL, but we must determine (or guess) the value of $R$. In marine work, we get $\Delta$ from the depth of the water. Deghosting is often done in a recursive manner, called *recursive deconvolution*.
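The recursive idea can be sketched as follows, assuming the ghost delay `n` (in samples) and reflection coefficient `R` are known (both values below are hypothetical, and the signal is a synthetic stand-in):

```python
import numpy as np

R, n = -0.4, 12                # assumed ghost reflectivity and delay in samples
rng = np.random.default_rng(1)
g = rng.normal(size=200)       # stand-in source signal

# Forward model: signal plus ghost, h_t = g_t + R * g_(t-n)
h = g.copy()
h[n:] += R * g[:-n]

# Recursive deconvolution: g_t = h_t - R * g_(t-n), sample by sample,
# using already-recovered output samples for the feedback term.
g_est = h.copy()
for t in range(n, len(h)):
    g_est[t] = h[t] - R * g_est[t - n]

print(np.max(np.abs(g_est - g)))   # essentially zero
```

Because $|R| < 1$, the feedback loop is stable and recovers the signal to machine precision in this noise-free sketch.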

*System deconvolution* is sometimes deterministically carried out to remove the filtering action of recording instrumentation or data processing.

In *spiking deconvolution* we assume that the elements of the impulse response of the earth $e_t$, the reflection coefficients at the various interfaces, are randomly distributed and, hence, that the autocorrelation of $e_t$ has a nonzero value (a “spike”) only at zero time shift, that is,

$$\phi_{ee}(\tau) \approx k\,\delta(\tau).$$

“Random” here means unpredictable, i.e., one cannot predict the arrival time of a primary reflection based on the arrival times of earlier reflections. If $w_t$ is the embedded wavelet, the geophone output is $g_t = w_t * e_t$. We can use the method of least squares (least-squares filtering is discussed in problem 9.22) to find the optimum inverse filter that will give a result that has the properties we assume for $e_t$ (see Sheriff and Geldart, 1995, 296). Spiking deconvolution can be carried out in either the time or frequency domain. Because we consider $e_t$ as random, its spectrum contains all frequencies in equal abundance, that is, its spectrum should be flat; techniques for achieving this are called *spectral flattening* or *flattening deconvolution*. Flattening is usually done only over the passband where the signal is assumed to be dominant. Since the autocorrelation of $e_t$ is small except for zero shift, the inverse filter is

$$F(\omega) \approx 1/G(\omega),$$

where $G(\omega)$ is the transform of the observed seismic trace. We can get the amplitude spectrum $|G(\omega)|$ from the autocorrelation, but we also need to know the phase of $G(\omega)$ in order to solve the problem. There is no phase information in the autocorrelation, and so we have to assume the phase to get a solution. We usually assume that $w_t$ is minimum-phase. Spiking deconvolution is often done to shorten the duration of the embedded wavelet, but it may make the data noisy if too much noise is included.
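A minimal least-squares sketch of spiking deconvolution, assuming a known minimum-phase wavelet (the wavelet, filter length, and prewhitening value below are all invented for illustration):

```python
import numpy as np

w = np.array([1.0, -0.6, 0.08])    # assumed minimum-phase embedded wavelet
N = 30                             # inverse-filter length

# Normal equations R f = d: R is the Toeplitz matrix of the wavelet
# autocorrelation, d the crosscorrelation of the desired spike with the input.
ac = np.correlate(w, w, mode="full")[len(w) - 1:]   # lags 0, 1, 2
r = np.zeros(N)
r[:len(ac)] = ac
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
d = np.zeros(N)
d[0] = w[0]                        # desired output: spike at zero time

f = np.linalg.solve(R + 1e-6 * np.eye(N), d)   # small prewhitening term
out = np.convolve(w, f)            # should approximate a unit impulse
print(out[:3])
```

The small diagonal term (prewhitening) keeps the system well conditioned, at the cost of a slightly imperfect spike.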

*Predictive deconvolution* uses information about the primary reflections to predict multiples produced by the same reflectors. Long-path multiples cause systematic repetition of a trace which produces significant values of the autocorrelation following the time delay associated with the multiple repetition time (see Figure 9.17a). Hence deconvolution to remove multiples is generally based on the portion of an autocorrelation trace after a *lag time*; a predictive deconvolution filter does not begin to act until this lag time has elapsed. Autocorrelation elements that occur after the lag are regarded as produced by multiples, and a least-squares method can be used to find the filter that will predict the multiples. The predicted multiples can then be subtracted to get rid of the recorded multiples.
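A minimal sketch of predictive deconvolution on a synthetic trace (the multiple period, amplitude, and operator length below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.normal(size=400)            # synthetic primary reflectivity
n, c = 25, 0.5                      # assumed multiple period (samples), amplitude
x = p.copy()
x[n:] += c * p[:-n]                 # trace = primaries + first-order multiples

L, lag = 10, n                      # operator length; prediction lag = period
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
R = np.array([[ac[abs(i - j)] for j in range(L)] for i in range(L)])
g = ac[lag:lag + L]                 # autocorrelation lags after the lag time
f = np.linalg.solve(R, g)           # least-squares prediction filter

# Predicted (multiple) part of the trace, then the prediction error,
# in which the multiples are suppressed.
pred = np.concatenate([np.zeros(lag), np.convolve(x, f)])[:len(x)]
e = x - pred
```

A short operator predicts only first-order repetition; fully removing a reverberation train would need a longer operator or repeated application.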

Entropy is a measure of the disorder or unpredictability of a system, the entropy increasing as the disorder increases. The autocorrelation of a time series does not determine it uniquely: the same amplitude spectrum corresponds to a number of different time series. *Maximum-entropy filtering* attempts to select the time series that has the maximum entropy (maximum disorder).

We sometimes try to represent a seismic trace as a sequence minimizing the number of reflection events, i.e., involving only a few large reflections. This can be accomplished, for example, by minimizing the error in a least-fourth power sense rather than in a least-squares sense (as is done in Wiener filtering). This type of operation is sometimes called *minimum-entropy* (*sparse-spike*) *deconvolution*.

*Homomorphic deconvolution* (Sheriff and Geldart, 1995, 298) involves transformation into the *cepstral domain*, the *cepstrum* $\hat{g}_t$ being defined by the relation

$$\hat{g}_t = \mathcal{F}^{-1}\left[\ln \mathcal{F}(g_t)\right],$$

where $\mathcal{F}$ denotes the Fourier transformation. In the cepstral domain the geophone input $g_t = w_t * e_t$ becomes $\hat{g}_t = \hat{w}_t + \hat{e}_t$. If $\hat{e}_t$ varies more rapidly than $\hat{w}_t$, the two may be separable by frequency filtering.
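A minimal sketch of the cepstrum computation (this uses the real cepstrum, which discards phase; homomorphic deconvolution proper uses the complex cepstrum, keeping the unwrapped phase):

```python
import numpy as np

def real_cepstrum(x, nfft=512):
    """Real cepstrum: inverse FFT of the log amplitude spectrum.
    (A simplified sketch; the small constant guards against log(0).)"""
    spec = np.fft.rfft(x, nfft)
    return np.fft.irfft(np.log(np.abs(spec) + 1e-12), nfft)

wavelet = np.exp(-0.3 * np.arange(32.0))    # smooth stand-in wavelet
ceps = real_cepstrum(wavelet)

low = np.sum(ceps[1:8] ** 2)                # low-quefrency energy
high = np.sum(ceps[8:len(ceps) // 2] ** 2)  # higher-quefrency energy
print(low > high)                           # smooth wavelet -> low quefrency
```

A smooth wavelet concentrates at low quefrency while a rapidly varying reflectivity spreads to higher quefrencies, which is what makes the separation by filtering possible.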
