# Model used for deconvolution

From *Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing* by Enders A. Robinson and Sven Treitel, Geophysical References Series No. 10, SEG. DOI: http://dx.doi.org/10.1190/1.9781560801610. ISBN 9781560801481.
> What crowd is this? What have we here! We must not pass it by;
> A telescope upon its frame, and pointed to the sky.
>
> —William Wordsworth (1770-1850)


Pythagoras (c. 580-c. 500 B.C.) taught that “all is number.” Pythagoras realized that numbers were hidden in everything, from the harmonies of music to the orbits of the planets. In other words, number and the nature of number make a thing clear either in itself or in its relation to other things. Today’s world, with its digital computers, digital pictures, digital animation, digital television, digital telephones, digital regulators, and digital processing, attests to the foresight of Pythagoras. Pythagoras was instrumental in the development of the language of mathematics, which enabled him and others to describe the nature of the universe.

In additional ways unforeseen by Pythagoras, everything is number. The great mathematician Carl Friedrich Gauss (1777-1855) once said, “Mathematics is the queen of science and number theory the queen of mathematics.” While he was still a teenager, Gauss was intrigued with numbers. At the age of 18, he thought up and justified the numerical method of least squares.

Gauss’s love for numerical calculations stirred his interest in astronomy. On New Year’s Eve 1800-1801, Giuseppe Piazzi had discovered what he thought was a new planet (it was the asteroid Ceres). Because observers soon would lose sight of such a small object, it was important to calculate its elliptical orbit as soon as possible. Using only the few observations that had been made of the asteroid, Gauss calculated its orbit (reputedly by least squares) so accurately that astronomers could locate it again late in 1801 and early in 1802.

In the finest tradition of Piazzi and Gauss, modern geophysicists Joe Dellinger and Bill Dillon discovered and calculated the orbit of a new asteroid, which they named Svenders after the present authors (Schuster, 2005[1]). Since Gauss’s work, the least-squares method has been important in science, especially in astronomy and geodesy.

The purpose of a least-squares filter is, in the least-squares sense, to convert an input signal into a desired output signal. The design of a least-squares filter requires two entities: the autocorrelation of the input signal and the crosscorrelation of the desired output signal with the input signal.
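The two required entities translate directly into the normal equations $Rf = g$, where $R$ is the Toeplitz matrix of autocorrelation lags of the input and $g$ holds the crosscorrelation lags. A minimal sketch in Python (the function name, signals, and filter length are illustrative choices, not from the text):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def least_squares_filter(x, d, n):
    """Design an n-point least-squares filter shaping input x toward
    desired output d, from the two required entities: the
    autocorrelation of x and the crosscorrelation of d with x."""
    mid = len(x) - 1
    r = np.correlate(x, x, mode="full")[mid:mid + n]   # autocorrelation lags 0..n-1
    g = np.correlate(d, x, mode="full")[mid:mid + n]   # crosscorrelation lags 0..n-1
    return solve_toeplitz(r, g)                        # solve R f = g

# Hypothetical example: shape the two-point wavelet (2, 1) toward a unit spike.
f = least_squares_filter(np.array([2.0, 1.0]), np.array([1.0, 0.0]), 2)
```

Applying `np.convolve(x, f)` then gives an output close to the desired spike; the residual is the least-squares error that the filter minimizes.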

A least-squares prediction filter is a special case of the least-squares filter. In this case, the desired output is a time-advanced version of the input signal. The design of a least-squares prediction filter requires only the autocorrelation of the input signal. Least-squares filtering, as a mathematical process, does not require a detailed description of the signals involved. That aspect makes least-squares filtering very useful.
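Because the desired output is just the input advanced by the prediction distance, the crosscorrelation collapses into shifted autocorrelation lags, which is why only the autocorrelation is needed. A sketch under the same illustrative assumptions as above:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_filter(x, n, alpha=1):
    """n-point least-squares prediction filter for prediction distance
    alpha. Only the autocorrelation of x is required: the right-hand
    side of the normal equations is the autocorrelation shifted by alpha."""
    mid = len(x) - 1
    r = np.correlate(x, x, mode="full")
    return solve_toeplitz(r[mid:mid + n], r[mid + alpha:mid + alpha + n])

# For the hypothetical wavelet (2, 1), the one-point, unit-distance
# prediction filter is 0.4 (autocorrelation lags are 5 and 2).
f = prediction_filter(np.array([2.0, 1.0]), 1)
```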

Deconvolution typically is implemented with the least-squares-error criterion, although other error criteria also can be used. Deconvolution requires a model that describes the internal structure of a seismic trace. More exactly, deconvolution is based on the assumption that a seismic trace can be represented as the convolution of a reflectivity series with a seismic wavelet. This so-called convolutional model of the seismic trace leads directly to the method of deconvolution, in which we attempt to recover estimates of the reflectivity series from the recorded seismic trace.

The ensemble of the reflection coefficients comprises the reflectivity series. The seismic trace is the response of the reflectivity series to wavelet excitation, that is, the trace is a superposition of the individual wavelets. This linear process is called the principle of superposition, which we discussed in Chapter 1. The process is achieved computationally by convolving the wavelet with the reflectivity series. To identify closely spaced reflecting boundaries, the wavelet must be removed from the trace to yield the reflectivity series. This removal process is the opposite of the convolutional process used to represent the response of the reflectivity series to a wavelet excitation. Such an opposite or inverse process is appropriately called inverse filtering or deconvolution (Robinson, 1954[2], 1957[3], 1966[4]; Robinson and Treitel, 1967[5], 1969[6]; Peacock and Treitel, 1969[7]).
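The superposition can be sketched numerically: convolving a wavelet with the reflectivity series gives the same trace as summing one scaled, delayed copy of the wavelet per reflection coefficient. The wavelet and coefficients below are hypothetical:

```python
import numpy as np

wavelet = np.array([1.0, -0.5, 0.25])        # hypothetical seismic wavelet
reflectivity = np.zeros(10)
reflectivity[[2, 5, 6]] = [0.8, -0.6, 0.4]   # hypothetical reflection coefficients

# The trace is the convolution of the reflectivity series with the wavelet.
trace = np.convolve(reflectivity, wavelet)

# Principle of superposition: the same trace is the sum of one scaled,
# delayed copy of the wavelet per reflection coefficient.
superposed = np.zeros(len(trace))
for k in (2, 5, 6):
    superposed[k:k + len(wavelet)] += reflectivity[k] * wavelet
```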

The convolutional model states that a trace is the convolution of a wavelet and a reflectivity series. As Robinson (1957[3], p. 767-768) wrote, this convolutional model yields

> the representation of the time series at any moment in terms of its own observable past history plus an unpredictable, random-like innovation. It is assumed that a seismic trace is additively composed of many overlapping seismic wavelets that arrive as time progresses. It is assumed that each wavelet has the same stable, one-sided, minimum-phase shape and that the arrival times and strengths of these wavelets may be represented by a time sequence of uncorrelated random variables. Since the geologic structure is physically fixed and constant in nature and has no intrinsic random characteristics, any statistical approach to this problem immediately encounters difficulties that are commonly associated in the statistical literature with the Theorem of Thomas Bayes. Nevertheless, modern statistical theory admits the bypassing of these difficulties, although with reservation, and hence the working geophysicist may be considered to be faced with a situation which is essentially statistical.

Interestingly, when Robinson submitted this paper for publication in 1954, it was not accepted. At that time, the solid earth was regarded as being deterministic, to be treated by differential and integral equations, rather than being random or randomlike, to be treated by statistical methods.

When Norman Ricker became editor of GEOPHYSICS in 1957, he found Robinson’s manuscript by accident in an old cardboard box and published it. For clarification, one can find no better words than those of Ulrych et al. (2001[8], p. 55):

> An inherent feature of any inverse problem is randomness. As we will see, randomness may be associated with various parts of our quest, but there can certainly be no doubt that noise always associated with the observations is indeed random. Thus, our approach must be statistical in nature. Statistics, to many, imply probabilities. Probabilities, at least to us, imply Bayes. This is not the only view; in fact, we consider two quite different views. In the first, we consider the model parameters to be a realization of a random variable. In the second, we treat the parameters as nonrandom. Probability theory and statistics are different. The former refers to the quest of predicting properties of observations from probability laws that are assumed known. The latter is, in a sense, the inverse. We observe data and wish to infer the underlying probability law. In general, inverse problems are more complex to solve than forward problems. They are often ill posed or nonunique.

The standard assumption is that each wavelet has the same minimum-phase shape and that the arrival times and strengths of each wavelet are given by a reflectivity series of uncorrelated random variables. The spiking-deconvolution operator can be computed from the autocorrelation of the trace. The spiking operator, so computed, is necessarily minimum phase (Robinson and Wold, 1963[9]). This operator can be used to deconvolve the trace — that is, the spiking operator removes the wavelet from the trace, thereby yielding the reflectivity series. In addition, the spiking operator can be inverted to yield estimates of the basic wavelet shape.
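A sketch of this computation follows, with a hypothetical minimum-phase wavelet and a random reflectivity; the small prewhitening constant is a common numerical stabilization, not part of the formulation above:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon_operator(trace, n, eps=0.001):
    """Compute an n-point spiking-deconvolution operator from the
    autocorrelation of the trace alone."""
    r = np.correlate(trace, trace, mode="full")[len(trace) - 1:][:n].astype(float)
    r[0] *= 1.0 + eps                       # small prewhitening for stability
    f = solve_toeplitz(r, np.eye(n)[0])     # desired output: zero-delay spike
    return f / f[0]                         # normalize to a leading coefficient of 1

# Hypothetical minimum-phase wavelet convolved with a random reflectivity:
rng = np.random.default_rng(0)
reflectivity = rng.standard_normal(200)
trace = np.convolve(reflectivity, np.array([1.0, 0.5]))

op = spiking_decon_operator(trace, 10)
estimate = np.convolve(trace, op)[:200]     # approximately the reflectivity
```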

Deconvolution improves the temporal resolution of a seismic section by compressing the seismic wavelets. For example, the reverberatory (ringy) character of a marine record without deconvolution seriously limits resolution. Deconvolution can attenuate such reverberations significantly. A spiking deconvolution filter is a prediction-error filter with prediction distance equal to one time unit. On the other hand, the method of gap deconvolution also uses the least-squares prediction-error filter, but with prediction distances greater than unity. In certain cases, gap deconvolution can be used to remove reverberations from a seismic trace that was generated by a mixed-phase wavelet. We present some examples in Chapter 11.
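The gap-deconvolution operator can be sketched under the same assumptions: the prediction filter is computed as before, and the prediction-error operator pads it with a leading 1 and gap − 1 zeros (the function name and signals are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def gap_decon_operator(trace, n, gap):
    """Least-squares prediction-error operator with prediction distance
    `gap`: (1, 0, ..., 0, -f_0, ..., -f_{n-1}), where f is the n-point
    prediction filter for that distance."""
    mid = len(trace) - 1
    r = np.correlate(trace, trace, mode="full")
    f = solve_toeplitz(r[mid:mid + n], r[mid + gap:mid + gap + n])
    return np.concatenate(([1.0], np.zeros(gap - 1), -f))

# With gap = 1 this reduces to spiking deconvolution: for the
# hypothetical wavelet (2, 1), the operator is (1, -0.4).
op = gap_decon_operator(np.array([2.0, 1.0]), 1, 1)
```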

Other filters can be derived from a minimum-phase wavelet (Robinson, 1998[10]). For example, we can divide the wavelet into two parts. One part consists of the wavelet’s first $\alpha$ coefficients and is called its head. The other part consists of all the coefficients beyond $\alpha$ and is called the wavelet’s tail. Operators that shape the wavelet into its head or into its tail can be computed by least squares. The operator that shapes the wavelet into its head is called the head-shaping operator; within computational accuracy, this operator is the same as the prediction-error operator for the trace for prediction distance $\alpha$. The operator that shapes the wavelet into its tail is called the tail-shaping operator.
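The head/tail split and a least-squares shaping operator can be sketched as follows; the wavelet, the filter length, and the choice of two head coefficients are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def shaping_operator(x, d, n):
    """n-point least-squares operator shaping signal x toward signal d."""
    mid = len(x) - 1
    r = np.correlate(x, x, mode="full")[mid:mid + n]
    g = np.correlate(d, x, mode="full")[mid:mid + n]
    return solve_toeplitz(r, g)

alpha = 2
wavelet = np.array([4.0, 2.0, 1.0, 0.5])            # hypothetical minimum-phase wavelet
head = np.where(np.arange(4) < alpha, wavelet, 0.0)  # first alpha coefficients
tail = wavelet - head                                # everything beyond alpha

# Head-shaping operator: shapes the wavelet toward its head.
op = shaping_operator(wavelet, head, 3)
shaped = np.convolve(wavelet, op)                    # close to the head, then near zero
```

The shaped output is closer to the head than the raw wavelet is, since the least-squares solution cannot do worse than the identity operator, whose error energy is that of the tail.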