# The shaping filter

From *Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing*, by Enders A. Robinson and Sven Treitel, SEG Geophysical References Series No. 9. ISBN 9781560801481. http://dx.doi.org/10.1190/1.9781560801610

Use of least-squares digital filters for analyzing seismic recordings is widespread. We now describe the design criteria for such filters (Robinson and Treitel, 1965[1]; Treitel and Robinson, 1966[2]). A useful and general model is shown in Figure 4. In this model, there are four signals: (1) the input signal x, (2) the desired output signal z, (3) the shaping filter f to be designed, and (4) the actual output signal ${\displaystyle y=f*x}$.

Figure 4.  General model showing desired output, input, filter, and actual output.

We seek a technique that allows us to find filter f in terms of the input signal and the desired output signal. Among the various methods that are at our disposal to accomplish this task, one stands out particularly because of the quality of the results obtainable and the simplicity of the concepts involved. The technique we are about to describe is based on the least-squares criterion, and it leads to filters that are linear and time invariant. Many of the computer programs required to carry out the computations given in this chapter can be found in Robinson and Osman (1996)[3].

We formulate this problem in discrete time. To make things meaningful for computer implementation, we must restrict ourselves to filters with a finite number of coefficients. At this point, it is important to make the distinction between causal and noncausal filters. A causal filter is a one-sided filter. A finite-length one-sided filter has the form

 {\displaystyle {\begin{aligned}f=\left(f_{0},f_{1},\dots ,f_{N}\right).\end{aligned}}} (1)

If the input to the causal filter is the signal x, the output is the signal y whose coefficients are given by the convolution

 {\displaystyle {\begin{aligned}y_{k}=\sum _{n=0}^{N}{f_{n}}x_{k-n}.\end{aligned}}} (2)

It always is understood that the running index n on a causal filter goes from 0 to N. This convolution can be written more concisely as ${\displaystyle y=f*x}$.
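As a minimal numerical sketch (the filter and signal values here are assumed for illustration, not taken from the text), the one-sided convolution of equation 2 can be evaluated directly with NumPy, whose `np.convolve` computes exactly this sum:

```python
import numpy as np

# Hypothetical causal filter f = (f_0, f_1, f_2), so N = 2.
f = np.array([1.0, -0.5, 0.25])
# Hypothetical input signal x.
x = np.array([2.0, 1.0, 0.0, -1.0])

# y_k = sum_{n=0}^{N} f_n x_{k-n}; np.convolve computes this one-sided sum.
y = np.convolve(f, x)
print(y)  # [ 2.    0.    0.   -0.75  0.5  -0.25]
```

Note that each output sample draws only on present and past input samples, which is what makes this filter causal.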

A noncausal filter is a two-sided filter. Generally, noncausal filters have the same number of filter coefficients on each side of the center point n = 0 so that the filter can be written as

 {\displaystyle {\begin{aligned}f=\left(f_{-N},f_{-N+1},\dots ,f_{0},f_{1},\dots ,f_{N}\right).\end{aligned}}} (3)

The same type of convolutional equation holds between input and output, except now the filter time index n runs from –N to N; that is,

 {\displaystyle {\begin{aligned}y_{k}=\sum _{n=-N}^{N}{f_{n}}x_{k-n}.\end{aligned}}} (4)

For a causal filter, the output value ${\displaystyle y_{k}}$ at the present time k depends only on the present and past input values ${\displaystyle x_{k},x_{k-1},\dots ,x_{k-N}}$. However, for a noncausal filter, the output also depends on the future input values ${\displaystyle x_{k+1},\dots ,x_{k+N}}$. This dependence of the present output on future input is why a two-sided filter is called noncausal.

Noncausal filters cannot be realized in real time because it is impossible for any present output to depend on future inputs. However, noncausal filters find widespread use in what can be called nominal time, which refers to a time scale on recorded data. The data have been observed and documented during some interval during the real-time past, and the time scale attached to the data refers to this historical time. With reference to this historical (or nominal) time scale, both the past and future of the recorded data are available to us. Thus, we can use noncausal filters to process the recordings. This technique is precisely the one used by a historian positioning herself or himself in the midst of some historical event whose future outcome is known already. It is also a technique we can use to analyze seismic records, because the seismograms were acquired some time ago in the field, and now we have the entire set of field recordings laid out before us.
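To make the two-sided case concrete, the following sketch (with assumed coefficients, not from the text) applies a symmetric three-point noncausal filter to a unit spike: the output has energy at the sample *before* the spike, which is only possible because the present output draws on future input.

```python
import numpy as np

# Hypothetical two-sided filter with coefficients f_{-1}, f_0, f_1 (N = 1).
f = np.array([0.25, 1.0, 0.25])
N = 1
# Input: a unit spike at k = 1.
x = np.array([0.0, 1.0, 0.0, 0.0])

# y_k = sum_{n=-N}^{N} f_n x_{k-n}.  Stored as an ordinary array, the filter's
# time origin is advanced by N, so drop the first N samples of the full convolution.
y = np.convolve(f, x)[N:N + len(x)]
print(y)  # the output at k = 0 already "sees" the future input x_1
```

On recorded (nominal-time) data this shift is harmless, since the whole trace is available at once.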

How is a shaping filter designed? Let us illustrate. The mathematical derivations are the same for either a causal or a noncausal filter, as long as we remember to let the filter index run from 0 to N on the causal filter and from –N to N on the noncausal filter. The purpose of the shaping filter is to shape (as well as possible in the least-squares sense) the input signal x into the desired output signal z. In other words, the actual output y of the filter should approximate the desired output. The error is the difference between the desired output and the actual output; that is, the value of the error signal at time k is ${\displaystyle z_{k}-y_{k}}$. The basic principle for our filter design is the least-squares criterion: Minimize the energy of the error signal. In other words, we seek the filter coefficients ${\displaystyle f_{k}}$ that minimize the value of the error energy

 {\displaystyle {\begin{aligned}I=\sum _{k}{{\left(z_{k}-y_{k}\right)}^{2}}.\end{aligned}}} (5)

The summation is over all pertinent values of the discrete time index k. The smallest achievable value of the error energy for a given situation will yield the best or “optimum” linear filter in the least-squares sense. The error energy can be written as

 {\displaystyle {\begin{aligned}I=\sum _{k}{{\left(z_{k}-\sum _{n}{f_{n}}x_{k-n}\right)}^{2}}.\end{aligned}}} (6)

The index on the second summation sign is n = 0, 1, … , N for a causal filter, whereas the index is n = –N, –N + 1, … ,N for a noncausal filter. We take the partial derivative of the error energy with respect to each coefficient ${\displaystyle f_{m}}$:

 {\displaystyle {\begin{aligned}{\frac {\partial I}{\partial f_{m}}}&=\sum \limits _{k}{2\left({z_{k}-\sum \limits _{n}{f_{n}x_{k-n}}}\right){\frac {\partial }{\partial f_{m}}}\left({z_{k}-\sum \limits _{n}{f_{n}x_{k-n}}}\right)}\\&=\sum \limits _{k}2\left({z_{k}-\sum \limits _{n}{f_{n}x_{k-n}}}\right)\left(-x_{k-m}\right)=-2\sum \limits _{k}{z_{k}x_{k-m}}+2\sum \limits _{k}\sum \limits _{n}{f_{n}x_{k-n}x_{k-m}}.\end{aligned}}} (7)

In Chapter 7, the autocorrelation of signal x was defined as

 {\displaystyle {\begin{aligned}r_{m-n}=\sum _{k}{x_{k-n}}x_{k-m}^{*}.\end{aligned}}} (8)

Likewise, the crosscorrelation of signals z and x was defined as

 {\displaystyle {\begin{aligned}g_{m}=\sum _{k}{z_{k}}x_{k-m}^{*}.\end{aligned}}} (9)

However, in the present chapter, we deal only with real-valued signals, so the superscript asterisk, which indicates the complex conjugate, is not needed. The partial derivatives are
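For real signals, these sums can be evaluated directly from the defining formulas. The sketch below (with assumed example signals) computes a few lags of each:

```python
import numpy as np

# Hypothetical real-valued signals.
x = np.array([1.0, 0.5, -0.25])
z = np.array([0.0, 1.0, 0.0, 0.0])

def autocorr(x, m):
    # r_m = sum_k x_k x_{k-m} (real signals, so no conjugate; also r_{-m} = r_m)
    return sum(x[k] * x[k - m] for k in range(m, len(x)))

def crosscorr(z, x, m):
    # g_m = sum_k z_k x_{k-m}, summing only where x_{k-m} is defined
    return sum(z[k] * x[k - m] for k in range(len(z)) if 0 <= k - m < len(x))

print([autocorr(x, m) for m in range(3)])
print([crosscorr(z, x, m) for m in range(3)])
```

These two sequences are exactly the known quantities that enter the normal equations below.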

 {\displaystyle {\begin{aligned}{\frac {\partial I}{\partial f_{m}}}=-2g_{m}+2\sum _{n}{f_{n}}r_{m-n}.\end{aligned}}} (10)

We minimize the error energy by setting each partial derivative equal to zero. Thus, we obtain the set of linear simultaneous equations given by

 {\displaystyle {\begin{aligned}\sum _{n}{f_{n}}r_{m-n}=g_{m}.\end{aligned}}} (11)

This is a set of N + 1 simultaneous equations with m = 0, 1, 2, …, N in the case of a causal filter and a set of 2N + 1 simultaneous equations with m = –N, –N + 1, … , N in the case of a noncausal filter. These are the so-called normal equations, whose solution permits us to find the filter coefficients ${\displaystyle f_{n}}$. The known quantities in these normal equations are the autocorrelation ${\displaystyle r_{m-n}}$ of the input signal and the crosscorrelation ${\displaystyle g_{m}}$ of the desired output signal and the input signal. It can be shown that as N increases, the mean square error decreases (Robinson and Treitel, 2000[4]). However, the specification error starts to increase, so some optimum value of N is required so that the total of these two types of error is minimized. (Note: A mathematical model of necessity must be a simplification of the actual physical situation. The specification error is the difference between the fitted model and the actual situation. Invariably, some types of variables will be excluded from the model, either by design or by accident. Increasing the number of coefficients of an included variable usually cannot offset the loss of the excluded variables, and as the number of coefficients increases, it actually can make matters worse.)
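A complete toy example may help fix ideas. The sketch below (all signal values are assumed, not taken from the text) builds the autocorrelation and crosscorrelation, forms the coefficient matrix whose (m, n) entry is ${\displaystyle r_{m-n}}$, and solves equation 11 for a two-term causal shaping filter that shapes the input (1, 0.5) toward a spike:

```python
import numpy as np

# Hypothetical input and desired output (a spike at time 0).
x = np.array([1.0, 0.5])
z = np.array([1.0, 0.0, 0.0])
N = 1  # causal filter with N + 1 = 2 coefficients

# Autocorrelation r_0..r_N of the input.
r = [sum(x[k] * x[k - m] for k in range(m, len(x))) for m in range(N + 1)]
# Crosscorrelation g_0..g_N of desired output and input.
g = [sum(z[k] * x[k - m] for k in range(len(z)) if 0 <= k - m < len(x))
     for m in range(N + 1)]

# Normal equations: the (m, n) entry of the coefficient matrix is r_{m-n},
# which for a real signal depends only on |m - n| -- a symmetric Toeplitz matrix.
R = np.array([[r[abs(m - n)] for n in range(N + 1)] for m in range(N + 1)])
f = np.linalg.solve(R, np.array(g))

y = np.convolve(f, x)      # actual output
I = np.sum((z - y) ** 2)   # minimized error energy of equation 5
print(f, I)
```

Here the optimum filter comes out as f = (20/21, –8/21) with error energy I = 1/21, smaller than that of any other two-term causal filter applied to this input.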

The normal equations can be solved by standard techniques. However, that approach becomes unnecessarily cumbersome for large values of N. A very efficient iterative method for solving the normal equations has been given by Levinson (1947)[5], making use of the Toeplitz symmetry that the normal equations possess. For a detailed treatment, see Robinson and Treitel (2000)[4]. For computer codes, see Robinson and Osman (1996)[3].
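As an illustration of exploiting that Toeplitz symmetry (assuming SciPy is available; the autocorrelation and crosscorrelation values below are assumed example numbers, not from the text), `scipy.linalg.solve_toeplitz` implements a Levinson-type recursion and needs only the first column of the coefficient matrix rather than the full matrix:

```python
import numpy as np
from scipy.linalg import solve_toeplitz  # Levinson-recursion Toeplitz solver

# Hypothetical autocorrelation values r_0, r_1 (the first column of the
# symmetric Toeplitz matrix) and crosscorrelation right-hand side g_0, g_1.
r = np.array([1.25, 0.5])
g = np.array([1.0, 0.0])

# Solves sum_n f_n r_{m-n} = g_m in O(N^2) work instead of the O(N^3)
# of a general linear solve.
f = solve_toeplitz(r, g)
print(f)
```

For large N, this saving is what makes least-squares filter design on many traces practical.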

## References

1. Robinson, E. A., and S. Treitel, 1965, Dispersive digital filters: Reviews of Geophysics, 3, 433-461.
2. Treitel, S., and E. A. Robinson, 1966, The design of high-resolution digital filters: IEEE Transactions on Geoscience Electronics, GE-4, 25-38.
3. Robinson, E. A., and O. M. Osman, 1996, Deconvolution: SEG Geophysics Reprint Series No. 17.
4. Robinson, E. A., and S. Treitel, 2000, Geophysical signal analysis: SEG.
5. Levinson, N., 1947, The Wiener rms error criterion in prediction and filtering: Journal of Mathematics and Physics, 25, 261-278.