# Tail shaping and head shaping

| Series | Geophysical References Series |
|---|---|
| Title | Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing |
| Author | Enders A. Robinson and Sven Treitel |
| Chapter | 10 |
| DOI | http://dx.doi.org/10.1190/1.9781560801610 |
| ISBN | 9781560801481 |
| Store | SEG Online Store |

The head of a wavelet consists of its first *α* values, where *α* is the prediction distance; its tail is all the rest. The filter with the minimum-phase wavelet as input and the tail as the desired output is called the *tail-shaping filter*. The filter with the minimum-phase wavelet as input and the head as the desired output is called the *head-shaping filter*. What do these filters look like?

If the desired output is the tail, then the crosscorrelation between desired output and input is

$$g_s = \sum_{t} b_{t+s+\alpha}\, b_t = r_{s+\alpha}$$ **(37)**

This shows that the crosscorrelation is equal to the advanced autocorrelation of the minimum-phase wavelet. Because the seismic trace (the convolution of the wavelet with white noise) and the wavelet *b* have the same theoretical autocorrelation, the normal equations are the same as before; that is, the normal equations are the same as matrix equation **9** for the prediction filter with prediction distance *α*.
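As a sketch of this construction (using an assumed illustrative wavelet, not the book's example), the normal equations can be set up with the wavelet's autocorrelation on the left-hand side and the advanced autocorrelation on the right, then solved with a Toeplitz solver:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Assumed illustrative minimum-phase wavelet and parameters.
b = np.array([1.0, 0.5, 0.25])
alpha = 1   # prediction distance
n = 3       # prediction-filter length

# One-sided autocorrelation r_s of the wavelet, zero-padded to the
# lags needed on both sides of the normal equations.
r = np.correlate(b, b, mode="full")[len(b) - 1:]
r = np.pad(r, (0, alpha + n - len(r)))

# Normal equations: Toeplitz autocorrelation matrix times the
# prediction filter k equals the advanced autocorrelation r_{s+alpha}.
k = solve_toeplitz(r[:n], r[alpha:alpha + n])

# Prediction-error operator (1, 0, ..., 0, -k) applied to the wavelet:
# the head passes through, the tail is suppressed in the
# least-squares sense.
e = np.concatenate([[1.0], np.zeros(alpha - 1), -k])
out = np.convolve(e, b)
print(np.round(out, 4))
```

The first output sample equals the wavelet's leading coefficient exactly, and the remaining samples are small because the filter length is finite.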

If we assume the desired output to be the head *h*, the crosscorrelation between desired output and input is simply the autocorrelation of the head; that is,

$$g_s = \sum_{t} h_{t+s}\, b_t = \sum_{t} h_{t+s}\, h_t$$ **(38)**

Thus, the right-hand side of the normal equations is

**(39)**

For the head-shaping filter, matrix equation **4** becomes

**(40)**

Let us interpret this equation in terms of the prediction-error filter. Because the desired output is the first *α* values of the minimum-phase wavelet, the crosscorrelation is the autocorrelation of the head. Because the head is of length *α*, the crosscorrelation vanishes for lags greater than *α* − 1. The input wavelet is minimum phase. The prediction-error operator shortens this input wavelet *b* to its head, which is of length *α*. Because *α* is an independent variable, we are free to select whatever value we wish for *α*. Thus, we can control the desired degree of resolution or wavelet contraction. However, this use of the prediction-error filter is not a more generalized approach to deconvolution but is part of a unified whole. The set of prediction-error filters (with various prediction distances) for a given autocorrelation has the spiking filter (which has prediction distance 1) at one extreme and the do-nothing filter (whose prediction distance equals or exceeds the wavelet's length) at the other extreme. All of the prediction-error filters in the set are simply related to each other.
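A minimal numerical sketch of head shaping follows, with an assumed wavelet and an assumed choice of *α* (neither comes from the book's example):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Assumed illustrative minimum-phase wavelet.
b = np.array([1.0, 0.5, 0.25])
alpha = 2   # prediction distance = length of the head we keep
n = 4       # head-shaping-filter length

# One-sided autocorrelation of the wavelet (left-hand side).
r = np.correlate(b, b, mode="full")[len(b) - 1:]
r = np.pad(r, (0, n - len(r)))

# Right-hand side: autocorrelation of the head, which vanishes
# for lags greater than alpha - 1.
h = b[:alpha]
q = np.correlate(h, h, mode="full")[alpha - 1:]
q = np.pad(q, (0, n - len(q)))

# Head-shaping filter and its output: the wavelet contracted
# to (approximately) its head.
f = solve_toeplitz(r, q)
out = np.convolve(f, b)
print(np.round(out, 4))
```

Increasing *α* lengthens the head that is kept, which is the sense in which *α* controls the desired degree of wavelet contraction.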

A numerical example follows: Let the minimum-phase wavelet be . Let the length of the prediction filter be , and let the prediction distance be . The tail of the wavelet is . The one-sided minimum-phase wavelet autocorrelation is . The crosscorrelation of the tail with the wavelet, which is the same as the advanced autocorrelation of the wavelet, is . Normal equations **(4)** are then

**(41)**

where the column vector on the left is the prediction operator — namely, the solution of system 41. Equation **18** for the prediction-error operator is

**(42)**

where the column vector on the left is the prediction-error operator

**(43)**

as given by the solution of the equations. The crosscorrelation of the head with the wavelet, which is the same as the autocorrelation of the head, is .

Equation **40** for the head-shaping operator is

**(44)**

where the column vector on the left is the head-shaping operator

**(45)**

as given by the solution of the equations. Filters 43 and 45 are the same, within the limits of the numerical approximations used.
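The agreement between the two filters can be checked with a stand-in computation (an assumed wavelet and an assumed *α*, not the book's values): derive the prediction-error operator from the tail-shaping route and the head-shaping filter directly, and compare them.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Assumed illustrative minimum-phase wavelet and prediction distance.
b = np.array([1.0, 0.5, 0.25])
alpha, n = 2, 3          # prediction distance, prediction-filter length

# One-sided autocorrelation r_s, zero-padded to the lags we need.
r = np.correlate(b, b, mode="full")[len(b) - 1:]
r = np.pad(r, (0, alpha + n - len(r)))

# Way 1: prediction filter k (tail shaping), then the
# prediction-error operator e = (1, 0, ..., 0, -k).
k = solve_toeplitz(r[:n], r[alpha:alpha + n])
e = np.concatenate([[1.0], np.zeros(alpha - 1), -k])

# Way 2: head-shaping filter of the same total length alpha + n.
h = b[:alpha]                                    # head of the wavelet
q = np.correlate(h, h, mode="full")[alpha - 1:]  # autocorrelation of head
q = np.pad(q, (0, alpha + n - len(q)))
f = solve_toeplitz(r[:alpha + n], q)

# The two operators agree within the least-squares approximation.
print(np.round(e, 4))
print(np.round(f, 4))
```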

However, the above two methods in fact need not be used to compute the prediction-error operator. Spike deconvolution and gap deconvolution are intimately related. Equation **36** states that *f* = *h* ∗ *a*, which says that the gap-deconvolution filter *f* for a given value of *α* is the convolution of the spike filter *a* with the head *h* of the minimum-phase wavelet *b*. Because spike filter *a* is necessarily minimum phase, it follows that the gap filter *f* is minimum phase if and only if the head *h* is minimum phase.
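This relation can be sketched with an assumed wavelet: build the spike filter *a* by polynomial long division, convolve it with the head *h*, and check that the resulting gap filter shortens the wavelet to its head.

```python
import numpy as np

# Assumed illustrative minimum-phase wavelet (the book's values
# are not reproduced here).
b = np.array([1.0, 0.5, 0.25])
alpha = 2   # prediction distance; head h = first alpha values
m = 8       # operator length used for the comparison

# Spike (inverse) filter a: leading coefficients of 1/B(z),
# obtained by polynomial long division.
K = len(b) - 1
a = np.zeros(m)
a[0] = 1.0 / b[0]
for t in range(1, m):
    s = sum(b[k] * a[t - k] for k in range(1, min(t, K) + 1))
    a[t] = -s / b[0]

# Gap filter f = h * a: spike filter convolved with the head.
h = b[:alpha]
f = np.convolve(h, a)[:m]

# Applying f to the wavelet reproduces the head followed by zeros.
out = np.convolve(f, b)[:m]
print(np.round(out, 4))
```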

Let the length of the prediction filter be *N* = 4, and let the prediction distance be 1. The tail of the wavelet is . The crosscorrelation of the tail with the wavelet, which is the same as the advanced autocorrelation of the wavelet, is . Equation **4** becomes

**(46)**

where the column vector on the left is the prediction operator, as given by the solution of the equations. Thus, the spike operator is . The convolution of the spike operator with the head gives the prediction-error operator

**(47)**

We have found the prediction-error filter in three ways, which resulted in equation **43**, equation **45**, and equation **47**. For a prediction-error operator of a given length, the head-shaping method given by equation **45** produces the most accurate result. Equation **47**, which is obtained by convolution of the spike filter with the head, gives the least accurate result.

Spiking-filter outputs cannot in general be interpreted with ease because high-frequency components are present in the deconvolved trace. Those components result from the fact that this kind of deconvolution uses inverse, or spiking, filters. One improves this condition by passing the raw deconvolved trace through suitable low-pass filters, by smoothing the autocorrelation function, or by other related means.
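One way to sketch the low-pass option (the filter type and cutoff here are assumed choices, not the book's):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)

# Stand-in for a raw spike-deconvolved trace: ideally white
# reflectivity, hence rich in high frequencies (illustrative only).
trace = rng.standard_normal(500)

# Zero-phase Butterworth low-pass to suppress the high-frequency
# components left by spiking deconvolution; 0.25 of Nyquist is an
# assumed cutoff.
bb, aa = butter(4, 0.25)
smoothed = filtfilt(bb, aa, trace)

print("raw rms:", round(float(trace.std()), 3),
      "smoothed rms:", round(float(smoothed.std()), 3))
```

Forward-backward filtering (`filtfilt`) is used so that the smoothing itself introduces no phase distortion into the deconvolved trace.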

Use of prediction-error filters with an arbitrary prediction distance leads to a deconvolution method in which the gap-deconvolved trace becomes equal to the convolution of the head of the wavelet with the spike-deconvolved trace. The head of the wavelet generally is a poor choice of desired output except in cases for which a detailed special type of convolutional model is available. More effective controls exist for the desired degree of resolution than those offered by gap deconvolution. In conclusion, gap deconvolution should not be used except when circumstances warrant it.

## Continue reading

| Previous section | Next section |
|---|---|
| Gap deconvolution | Seismic deconvolution |

| Previous chapter | Next chapter |
|---|---|
| Wavelet Processing | Fine Points |

## Also in this chapter

- Model used for deconvolution
- Least-squares prediction and smoothing
- The prediction-error filter
- Spiking deconvolution
- Gap deconvolution
- Seismic deconvolution
- Piecemeal convolutional model
- Time-varying convolutional model
- Random-reflection-coefficient model
- Implementing deconvolution
- Canonical representation
- Appendix J: Exercises