All of the above, of course, holds within the limitations of statistical errors imposed by noise, by the computational approximations, and by the finiteness of the data, in addition to the unavoidable specification errors imposed by introduction of an idealized model. The success of the method of predictive deconvolution depends largely on the validity of the basic hypothesis that the wavelet is minimum delay and that the reflectivity function within a specified time window is random and uncorrelated. The power of the method of predictive deconvolution rests on its stability and robustness and on the fact that the only required data are those of the received seismic trace. Often incorporated in comprehensive schemes, the method of predictive deconvolution continues to be used to process field records in many seismic environments, both land and marine. Its general success shows that the basic hypotheses are valid over a wide range of operating conditions.
A given convolutional model holds not for the entire trace but only within a specific time gate. (The recent work of Montana and Margrave (2006) attempted to obviate this time-gate assumption by use of the Gabor transform.) In practice, several gates are specified on a trace. If two adjacent gates do not overlap, the first deconvolution filter is applied down to the end of the first gate and then is applied with a decreasing linear taper down to the beginning of the second gate.
If two adjacent gates overlap, the operator designed for the first gate is applied down to the beginning of the zone of overlap. The operator designed for the second gate is applied from a point beyond the zone of overlap. Within the zone of overlap, the filter is expressed as a time-varying weighted average of the filter designed in the first gate and the filter designed in the second gate.
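The overlap-zone blending just described can be sketched as a time-varying weighted average of the two operators. A minimal sketch, assuming both operators have the same length and that the overlap zone is specified by its first and last sample indices (all names here are illustrative, not a published interface):

```python
import numpy as np

def blended_operator(f1, f2, t, overlap_start, overlap_end):
    """Time-varying deconvolution operator at sample index t.

    Above the overlap zone the first-gate operator f1 applies; below it,
    the second-gate operator f2; inside the zone, a linearly tapered
    weighted average of the two.  Both operators are assumed to have
    the same length.
    """
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    if t <= overlap_start:
        return f1
    if t >= overlap_end:
        return f2
    # weight on f1 decreases linearly from 1 to 0 across the overlap zone
    w = (overlap_end - t) / float(overlap_end - overlap_start)
    return w * f1 + (1.0 - w) * f2
```

At the midpoint of the overlap the two operators contribute equally; outside the zone, each gate's own operator applies unchanged.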
During application of predictive deconvolution, we assume that within each time gate, (1) the wavelet is unknown but is minimum delay and (2) the frequency content of the reflectivity series is completely white. The validity of the random-reflectivity assumption naturally depends on the geology of the area. The frequency content of the wavelet on the trace can be estimated from the trace’s autocorrelation. This is because the power spectrum of a white signal (i.e., the reflection-coefficient series) is completely flat, so the trace and the wavelet have the same colored (i.e., nonwhite) frequency content. In usual practice, the deconvolution filter is computed with the least-squares method, which leads to a set of Toeplitz normal equations. The known quantities in these equations are the trace autocorrelation coefficients, and the deconvolution filter is found as the solution of these Toeplitz normal equations.
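Under these assumptions, a spiking (unit-distance prediction-error) filter can be computed from the trace autocorrelation alone. A minimal sketch using SciPy's Levinson-recursion Toeplitz solver; the function name, argument names, and the prewhitening default are illustrative assumptions, not a published interface:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon_filter(trace, nlags, prewhiten=0.001):
    """Unit-distance prediction-error (spiking) filter from the trace.

    The trace autocorrelation stands in for the unknown wavelet's
    autocorrelation (the white-reflectivity assumption); the Toeplitz
    normal equations are solved by Levinson recursion inside
    scipy.linalg.solve_toeplitz.
    """
    x = np.asarray(trace, dtype=float)
    n = len(x)
    # one-sided autocorrelation, lags 0 .. nlags (zero beyond the data)
    ac = np.correlate(x, x, mode="full")[n - 1:]
    r = np.zeros(nlags + 1)
    m = min(nlags + 1, n)
    r[:m] = ac[:m]
    r[0] *= 1.0 + prewhiten                         # stabilize the zero lag
    p = solve_toeplitz(r[:nlags], r[1:nlags + 1])   # prediction filter
    return np.concatenate(([1.0], -p))              # prediction-error filter
```

Applied to a trace built from a minimum-delay wavelet, the resulting filter compresses that wavelet approximately to a spike at time zero.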
When the spike-deconvolution filter is convolved with the trace, it in fact is convolved with all constituent wavelets of the trace. This operation converts each wavelet as well as possible into a spike, thus improving seismic resolution. The amplitude of each spike is proportional to the associated reflection coefficient, whereas its sharpness is a function of the frequency bandwidth. Such spikes can be no sharper than the best approximation possible with the available bandwidth. Thus, the spike-deconvolved trace is approximately the reflectivity series, band-limited to the available bandwidth.
Spike deconvolution and gap deconvolution are intimately related, as we described earlier in this chapter (equations 29 and following). Let the gap-deconvolution filter (for prediction distance α) be denoted by f. The head h is defined as the leading part of the minimum-delay wavelet m. More specifically, the head is defined as the first α sample values of m. The gap-deconvolution filter f for a given value of α is the convolution of the spike-deconvolution filter a with the head h; that is, f = a * h. Because the spike-deconvolution filter a is necessarily minimum delay, it follows that the gap-deconvolution filter f is minimum delay if and only if the head h is minimum delay. In conclusion, the spike-deconvolved trace y is (within the limits of what the least-squares method can achieve) the reflectivity series ε, and the gap-deconvolved trace is (within the same limits) the reflectivity series ε smoothed by the head h.
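This relationship is easy to demonstrate numerically. In the sketch below (all names and numbers are illustrative), the head is taken as the first alpha samples of an assumed minimum-delay wavelet, the gap filter is formed by convolving the head with a truncated-inverse spiking filter, and applying the result to a trace yields the reflectivity smoothed by the head:

```python
import numpy as np

def gap_filter(spike_filter, wavelet, alpha):
    """Gap-deconvolution filter for prediction distance alpha:
    the convolution of the spiking filter with the head, where the
    head is the first alpha samples of the minimum-delay wavelet."""
    head = np.asarray(wavelet, float)[:alpha]
    return np.convolve(np.asarray(spike_filter, float), head)

# illustration with a made-up minimum-delay wavelet m = (1, 0.5)
m = np.array([1.0, 0.5])
a = np.array([(-0.5) ** k for k in range(30)])  # truncated inverse of m
f = gap_filter(a, m, alpha=2)                   # head = (1, 0.5)

eps = np.array([1.0, 0.0, -0.8, 0.3])           # toy reflectivity
trace = np.convolve(m, eps)
out = np.convolve(f, trace)                     # ~ head convolved with eps
```

With alpha = 1 the head here reduces to the single sample m(0) = 1, and the gap filter coincides with the spiking filter, confirming that spike deconvolution is the special case of unit prediction distance.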
The objective of deconvolution is to design and apply inverse filters (i.e., deconvolution filters) to the seismic trace to yield an estimate of the reflectivity function (Berkhout, 1977). Each trace must be segmented into several deconvolution design windows; these windows usually are chosen by trial and error, and an operator then is computed for each one. A deconvolution algorithm first determines and then applies a deconvolution filter to the trace. Such filters can be designed either for spike deconvolution or for gap deconvolution. Both are based on the same convolutional model, which describes the seismic trace as the convolution of a source signal with a seismic wavelet. The wavelet can incorporate any or all of the following components: the geophone response, the intrinsic absorption response, the response of the remaining instrumentation (amplifiers, and so forth), and the reflection response of the earth. In preselected windows or gates, the reflection response can be approximated by the convolution of the desired reflectivity function with undesirable ghosts, multiples, reverberations, and intrinsic absorption responses. Following deconvolution, a band-pass filter can be applied to the deconvolved trace.
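The final band-pass step can be sketched as follows. The zero-phase Butterworth design, the corner frequencies, and all names are assumptions for illustration, not prescriptions from the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def postdecon_bandpass(trace, dt, f_lo, f_hi, order=4):
    """Zero-phase Butterworth band-pass applied after deconvolution.

    dt is the sampling increment in seconds; f_lo and f_hi are the
    corner frequencies in hertz.  filtfilt runs the filter forward
    and backward so the deconvolved events are not phase-shifted.
    """
    nyq = 0.5 / dt
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, np.asarray(trace, float))
```

Frequencies inside the pass band survive essentially unchanged, while energy outside it (often deconvolution-boosted high-frequency noise) is suppressed.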
Deconvolution can be implemented in several ways, including (1) spike deconvolution; (2) gap deconvolution (which often is called predictive deconvolution, although strictly speaking, predictive deconvolution includes both spiking and gap deconvolution); (3) predictive (spiking or gap) deconvolution without spectral whitening; and (4) predictive (spiking or gap) deconvolution with spectral whitening. The operator length, expressed as an integral multiple of the sampling increment, must be specified. For gap deconvolution, the prediction distance(s) also must be given, again expressed in integral multiples of the sampling increment.
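In practice these parameters usually are chosen in units of time and then converted to sample units. A trivial sketch with made-up values (the 4-ms sampling increment, 160-ms operator length, and 32-ms gap are illustrative only):

```python
# assumed values for illustration
dt = 0.004            # sampling increment, in seconds
oper_len_s = 0.160    # desired operator length, in seconds
gap_s = 0.032         # desired prediction distance, in seconds

# both quantities must be integral multiples of the sampling increment
nfilt = round(oper_len_s / dt)   # operator length in samples
alpha = round(gap_s / dt)        # prediction distance in samples
```

Here the operator spans 40 samples and the gap spans 8 samples; rounding guards against floating-point residue in the division.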
The success of the deconvolution method clearly depends on how well the model mimics reality. One example of application of the convolutional model is the synthetic seismogram. We can obtain the reflectivity series from downhole information recorded at a well site. This reflectivity series is the input that, after convolution with the seismic wavelet, produces the output: the synthetic seismic trace.
The operation also can be reversed, at least approximately: the output can be spike-deconvolved to recover the input. In this sense, spike deconvolution represents a particular kind of seismic inversion. We emphasize that the deconvolution operation itself is carried out by convolution of the deconvolution filter with the synthetic seismic trace.
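A toy round trip makes this concrete. All numbers below are made up for illustration, and the "spike-deconvolution filter" is simply a truncated inverse of the assumed minimum-delay wavelet:

```python
import numpy as np

# forward model: synthetic seismogram = wavelet convolved with reflectivity
wavelet = np.array([1.0, 0.5])          # minimum delay (z-transform zero at z = -2)
reflectivity = np.array([0.3, 0.0, -0.4, 0.0, 0.2])   # toy well-derived series
trace = np.convolve(wavelet, reflectivity)            # synthetic seismic trace

# reverse: spike deconvolution, carried out as the convolution of the
# deconvolution filter (a truncated inverse of the wavelet) with the trace
decon_filter = np.array([(-0.5) ** k for k in range(40)])
recovered = np.convolve(decon_filter, trace)[: len(reflectivity)]
```

The recovered series matches the input reflectivity to within the truncation error of the 40-term inverse, illustrating spike deconvolution as an (approximate) inversion of the convolutional forward model.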
- Berkhout, A. J., 1977, Least-squares inverse filtering and wavelet deconvolution: Geophysics, 42, 1369-1383.
- Montana, C. A., and G. F. Margrave, 2006, Surface-consistent Gabor deconvolution: 76th Annual International Meeting, SEG, Expanded Abstracts, 2812-2816.