Appendix K: Exercises
Results! Why, man, I have gotten a lot of results. I know several thousand things that won’t work. —Thomas A. Edison
1. What are the fundamental assumptions justifying the validity of deconvolution?
2. How can the resolution (spikiness) of the output be controlled by designing a prediction-error filter? Converting the seismic wavelet into a spike is like asking for perfect resolution. Because of noise and assumptions made about the seismic wavelet, is spiking deconvolution always desirable?
3. How can the prediction-error filter be used to remove multiples from the trace?
4. Explain how well-log measurements of velocity and density provide a link between seismic data and the geology of the substrata. A sonic log is a plot of interval velocity as a function of depth. A strong low-frequency component with a distinct, blocky character represents gross velocity variations. This low-frequency component normally is estimated by velocity analysis of CMP gathers. In many sonic logs, the low-frequency component is an expression of the general compaction-derived increase of velocity with depth. The sonic log also has a high-frequency component superimposed on the low-frequency component. These rapid fluctuations can be attributed to changes in rock properties that are local in nature. For example, a limestone layer can have interbeddings of shale and sand. Porosity changes also can affect interval velocities within a rock layer. Seismic impedance is defined as the product of density and velocity. Because the vertical density gradient in most cases is much smaller than the vertical velocity gradient, the impedance contrast between rock layers in those cases is caused mostly by the velocity contrast.
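As a worked illustration of the impedance relationship described above, here is a minimal sketch; the layer values are made up, and the reflection coefficient at an interface is taken as r = (I2 - I1)/(I2 + I1), where I is impedance:

```python
import numpy as np

def reflectivity_from_logs(velocity, density):
    """Reflection coefficients from impedance contrasts at layer boundaries:
    r = (I2 - I1) / (I2 + I1), where impedance I = density * velocity."""
    impedance = np.asarray(velocity, float) * np.asarray(density, float)
    return (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# Hypothetical three-layer model (values assumed for illustration):
velocity = np.array([2000.0, 3000.0, 2500.0])   # m/s
density = np.array([2.0, 2.2, 2.1])             # g/cm^3
r = reflectivity_from_logs(velocity, density)
```

Note that the density contrasts here are small, so the signs and rough sizes of the reflection coefficients are set almost entirely by the velocity contrasts, as the exercise states.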
5. Explain the following: The wavelet created by the source is called the signature. As this wavelet travels into the earth, two things happen. First, its overall amplitude decays because of wavefront divergence. Second, frequencies are attenuated because of the absorption effects of rocks. At any given time, the wavelet is not the same as it was at the onset of source excitation. A compensation usually is applied before deconvolution. Wavefront divergence is removed by applying a spherical spreading function. Inverse Q-filtering compensates for frequency attenuation.
6. Explain: In the convolutional model, all that normally is known is the recorded trace. The reflectivity is not known except at the location of wells with good sonic logs. The wavelet also is unknown. In certain cases, however, the wavelet is known partly. Such a case is when the signature, such as that of an air-gun array, can be measured.
7. Explain: A wavelet with its energy front-loaded is a minimum-phase wavelet. If the energy of the wavelet is concentrated mostly in the middle, then the wavelet is mixed phase. Finally, if the energy of the wavelet is end-loaded, then the wavelet is a maximum-phase wavelet.
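One way to see the front-, middle-, and end-loading distinction is to compare cumulative (partial) energy curves. A minimal sketch, with made-up coefficients; the two wavelets are time reverses of each other and so share an amplitude spectrum:

```python
import numpy as np

def partial_energy(wavelet):
    """Cumulative energy build-up of a wavelet, normalized to total energy.
    A front-loaded (minimum-phase) wavelet builds up energy the fastest."""
    w = np.asarray(wavelet, float)
    e = np.cumsum(w ** 2)
    return e / e[-1]

# Illustrative coefficients: one wavelet is the time reverse of the other,
# so both have the same amplitude spectrum and autocorrelation.
front_loaded = np.array([4.0, 2.0, 1.0])   # minimum phase
end_loaded = front_loaded[::-1]            # maximum phase
```

At every time sample, the partial energy of the front-loaded wavelet is at least that of the end-loaded one, which is the defining property of a minimum-phase (minimum-delay) wavelet among all wavelets with the same amplitude spectrum.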
8. Explain: The input wavelet might or might not be minimum phase. Starting with the input wavelet, the autocorrelation is computed to give the spiking-deconvolution operator. The spiking-deconvolution operator and its inverse are both minimum phase (Chapter 10). The inverse of the spiking-deconvolution operator is called the minimum-phase equivalent of the input wavelet. The minimum-phase equivalent has the same amplitude spectrum as that of the input wavelet. The minimum-phase equivalent has for its phase spectrum the negative of the phase spectrum of the spiking-deconvolution operator. If the spiking-deconvolution operator is convolved with the input wavelet, describe the output. Although the amplitude spectrum of the output is virtually flat, the phase spectrum of the output is not minimum phase. In conclusion, if the input wavelet is not minimum phase, spiking deconvolution cannot convert the input wavelet to a perfect zero-lag spike.
9. Discuss the implications of the following: The amplitude spectrum of the spiking-deconvolution operator is (approximately) the inverse of the amplitude spectrum of the input wavelet. To ensure numerical stability, an artificial level of white noise is introduced before computation of the deconvolution filter. This process is called prewhitening. Prewhitening is achieved by multiplying the zero-lag autocorrelation coefficient by a positive real number slightly greater than one in magnitude. This is the same as adding white noise to the power spectrum, whose total energy is equal to the difference between the zero-lag coefficient after and before prewhitening has been applied.
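The design procedure outlined in the two exercises above can be sketched as follows. This is a minimal least-squares (Wiener) implementation: only the autocorrelation is used, the desired output is a zero-lag spike, and prewhitening scales the zero-lag coefficient; the test wavelet, filter length, and prewhitening value are chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon_operator(trace, nfilt, prewhitening=0.001):
    """Least-squares spiking-deconvolution operator. Only the autocorrelation
    of the input is needed; the desired output is a zero-lag spike."""
    trace = np.asarray(trace, float)
    lags = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
    r = np.zeros(nfilt)
    nlags = min(nfilt, len(lags))
    r[:nlags] = lags[:nlags]
    r[0] *= 1.0 + prewhitening   # prewhitening stabilizes the Toeplitz system
    rhs = np.zeros(nfilt)
    rhs[0] = 1.0                 # crosscorrelation of a spike with the input
    return solve_toeplitz(r, rhs)

# Minimum-phase test wavelet (illustrative); its exact inverse is 1, 0.5, 0.25, ...
wavelet = [1.0, -0.5]
op = spiking_decon_operator(wavelet, nfilt=20)
out = np.convolve(op, wavelet)   # approximately a zero-lag spike
```

Because this test wavelet is minimum phase, convolving the operator with it gives a near-perfect spike; repeating the experiment with its time reverse (maximum phase) would flatten the amplitude spectrum but not produce a spike, as exercise 8 describes.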
10. Explain: Wavelet processing refers to estimating the basic wavelet embedded in the trace, designing a shaping filter to convert the estimated wavelet to a desired form — usually a broadband zero-phase wavelet — and, finally, applying the shaping filter to the trace. Another type of wavelet processing involves wavelet shaping in which the desired output is the zero-phase wavelet with the same amplitude spectrum as that of the input wavelet. Note that this type of wavelet processing does not try to flatten the spectrum but only tries to dephase the input wavelet.
11. Explain: The design of a prediction filter requires only the autocorrelation of the input signal.
12. Is predictive deconvolution a general process that encompasses spiking deconvolution? Let α be the prediction distance. Describe when the following statement can be made: Given an input wavelet of length n + α, where n is the length of the prediction filter, the prediction-error filter shrinks the input wavelet down to an α-long output wavelet. When α = 1, the procedure is called spiking deconvolution.
13. Explain: Predictive deconvolution is an integral part of seismic data processing that is aimed at compressing the seismic wavelet, thereby increasing temporal resolution. In the limit, it can be used to spike the seismic wavelet and obtain an estimate for reflectivity.
14. Why does the source waveform change as it travels in the subsurface?
15. Explain: The reflectivity is assumed to be a random process. This implies that the trace and the seismic wavelet have nearly the same autocorrelation and amplitude spectrum. What about the respective phase spectra?
16. Explain: The seismic wavelet is minimum phase. Therefore, it has a minimum-phase inverse.
17. What are the basic assumptions for the convolutional model? In practice, deconvolution often yields good results in areas where these assumptions are not strictly valid.
18. Explain time-variant deconvolution. In this technique, a trace is divided into several time gates — typically three or more. Deconvolution operators then are designed from each gate and convolved with data within that gate.
19. Explain: Deconvolution operators should be designed using time gates and frequency bands with low noise levels. Poststack deconvolution can be used in an effort to take advantage of the noise reduction inherent in the stacking process.
20. Why is the quality of the output from the spiking deconvolution degraded when the seismic wavelet is not minimum phase?
21. Deconvolution is applied to a large fraction of all recorded seismic traces. Most of the time, deconvolution yields satisfactory results. What are the critical assumptions that underlie predictive deconvolution? When deconvolution does not work on some data, which of its underlying assumptions should be examined first?
22. Basically, spiking deconvolution is inverse filtering where the operator is the least-squares inverse of the seismic wavelet. Should increasingly better results be obtained when more and more coefficients are included in the inverse filter? Increasing the operator length whitens the spectrum further. Beyond a certain length, however, specification error (as discussed in Chapter 9) prevents further improvement by longer operators.
23. What can happen when the seismic wavelet is a mixed-phase wavelet? The minimum-phase equivalent of the mixed-phase wavelet has the same autocorrelation and amplitude spectrum as does the mixed-phase wavelet. Hence, the deconvolution operators for both wavelets are identical. Because the minimum-phase assumption is violated, deconvolution does not convert the mixed-phase wavelet to a perfect spike. Instead, the deconvolved output is a complicated high-frequency wavelet. Also note that the dominant peak in the output can be negative, whereas the impulse response has a positive spike. This difference in sign can occur when a mixed-phase wavelet is deconvolved.
24. Discuss (1) the case of unit prediction distance, and (2) the case of predicting the input trace at a future time as given by the prediction distance. Case 2 is used to predict and suppress multiples. When the prediction distance is equal to the sampling interval, the result is equivalent to spiking deconvolution. Predictive deconvolution using a prediction distance greater than unity yields a wavelet of finite duration instead of a spike. Given an input wavelet of n + α samples, predictive deconvolution using a prediction filter with length n and prediction distance α converts this wavelet into another wavelet that is α samples long. The first α lags of the autocorrelation are preserved, whereas the next n lags are zeroed out. In addition, the amplitude spectrum of the output increasingly resembles that of the input wavelet as prediction distance is increased. At a large prediction distance, predictive deconvolution does nothing to the input wavelet because almost all the lags of its autocorrelation have been left untouched. This result has an important practical implication: Under ideal, noise-free conditions, resolution on the output from predictive deconvolution can be controlled by adjusting the prediction distance. Unit prediction distance implies the highest resolution, and a larger prediction distance implies less than full resolution. However, in reality, these assessments are dictated by the signal-to-noise ratio. What basic assumptions are made to support the above statements?
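A sketch of the general prediction-error-filter design, assuming the standard least-squares normal equations; the operator length n, prediction distance α (`alpha`), and the test wavelet are illustrative:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(trace, n, alpha, prewhitening=0.001):
    """Prediction-error filter with operator length n and prediction
    distance alpha; alpha = 1 reduces to spiking deconvolution."""
    trace = np.asarray(trace, float)
    lags = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
    lags = np.concatenate([lags, np.zeros(max(0, n + alpha - len(lags)))])
    col = lags[:n].copy()
    col[0] *= 1.0 + prewhitening
    p = solve_toeplitz(col, lags[alpha:alpha + n])  # prediction filter
    # Prediction-error filter: (1, 0, ..., 0, -p_0, ..., -p_{n-1}),
    # with alpha - 1 zeros between the leading 1 and -p.
    return np.concatenate([[1.0], np.zeros(alpha - 1), -p])

wavelet = [1.0, -0.5]                       # illustrative minimum-phase wavelet
pef = prediction_error_filter(wavelet, n=10, alpha=1)
out = np.convolve(pef, wavelet)             # alpha = 1: approximately a spike
```

With this two-sample wavelet, a prediction distance beyond the wavelet's autocorrelation leaves every designed coefficient at zero, which is the "large prediction distance does nothing" limit described above.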
25. In predictive deconvolution, what meaning can be assigned to the first and second zero crossings on autocorrelation of the input wavelet? The first zero crossing produces a spike with some width, whereas the second zero crossing produces a wavelet with a positive lobe and a negative lobe.
26. What is the effect of prediction distance? As prediction distance is increased, the output amplitude spectrum becomes increasingly band-limited. The output also can be band-limited by applying a band-pass filter on the spiking-deconvolution output. Are these two ways of band-limiting equivalent? The output of the large prediction distance has an amplitude spectrum that is band-limited. However, the spectral shape within this bandwidth is not a boxcar but rather is similar to that of the input wavelet. The boxcar shape would be the case if a band-pass filter were applied on the output of the spiking deconvolution. Hence, spiking deconvolution followed by band-pass filtering is not equivalent to predictive deconvolution with a prediction distance greater than unity.
27. Explain: If prediction distance is increased, the output from predictive deconvolution becomes less spiky. This effect is useful because it allows the bandwidth of deconvolved output to be controlled by adjusting prediction distance. Application of spiking deconvolution to field data is not always desirable because it boosts high-frequency noise in the data. The most prominent effect of the nonunity prediction distance is suppression of the high-frequency end of the spectrum and preservation of the overall spectral shape of the input data. If prediction distance is increased further, then the low-frequency end of the spectrum is affected as well, making the output more band-limited.
28. Consider the percentage of prewhitening. Is the effect of varying prewhitening similar to that of varying the prediction distance? Does the spectrum become increasingly less broadband as the percentage of prewhitening is increased? Note that prewhitening narrows the spectrum without changing much of the flatness character, whereas a larger prediction distance narrows the spectrum and alters its shape, making it look more like the spectrum of the input seismic wavelet. Prewhitening preserves the spiky character of the output, although it adds a low-amplitude, high-frequency tail. On the other hand, increasing prediction distance produces a wavelet with a duration that is equal to the prediction distance.
29. Discuss how output bandwidth is related to prediction distance. [Answer: The smaller the prediction distance is, the broader the output bandwidth is.]
30. Consider the effect of random noise on deconvolution. The autocorrelation of ideal random noise is zero at all lags except at zero lag. Should the effect of random noise on deconvolution operators be roughly similar to the effect of prewhitening? Both effects modify the diagonal of the autocorrelation matrix, making it more dominant. However, the noise component also slightly modifies the nonzero lags of the autocorrelation. Prewhitening is equivalent to the addition of perfect random noise to the system. Because random noise generally is already present in the system to some degree, only a minute amount of such white noise, sometimes as little as 0.1%, needs to be added to the trace for numerical stability.
31. The effect of random noise on the performance of deconvolution must be tested on models. The noise component has a harmful effect on deconvolution. Deconvolution results from the noisy trace can have spurious spikes, which could be interpreted as genuine reflections. Noisy field data can yield a better stack when they are not treated by deconvolution. Only by testing can it be determined whether deconvolution performs satisfactorily on data with a severe noise problem.
32. Consider suppression of multiples. A prediction filter predicts periodic events, such as multiples, on the trace. A prediction-error filter produces the unpredictable component of the trace, which our mathematical model identifies with the earth-reflectivity series. Write down the noise-free convolutional model for the trace that contains the water-bottom multiples. Describe the actions of the prediction filter and of the prediction-error filter.
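One possible noise-free synthetic for the water-bottom case (the reflection coefficient, water traveltime, and reflectivity spikes are all assumed values). Here the first-order reverberation train is geometric, 1, -c, c², ... at multiples of the two-way water time, so its exact prediction-error operator has just two terms, (1, c), with the c placed at the lag of the water-bottom period:

```python
import numpy as np

# Water-bottom reverberation: reflection coefficient c, two-way water
# traveltime of nw samples (both values assumed for illustration).
c, nw = 0.5, 10
train = np.zeros(60)
train[::nw] = (-c) ** np.arange(6)   # 1, -c, c^2, ... at multiples of nw

# Sparse reflectivity convolved with the reverberation train gives the trace.
reflectivity = np.zeros(40)
reflectivity[5], reflectivity[25] = 1.0, -0.7
trace = np.convolve(reflectivity, train)

# The prediction-error filter is the two-term operator (1, c) with the c
# placed at lag nw; it cancels the periodic multiples and leaves the
# unpredictable component, which the model identifies with reflectivity.
pef = np.zeros(nw + 1)
pef[0], pef[nw] = 1.0, c
out = np.convolve(trace, pef)[:len(reflectivity)]
```

The prediction filter alone would output the predictable (multiple) part of the trace; the prediction-error filter subtracts that prediction, and in this idealized case the output matches the reflectivity exactly.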
33. What is the effect of random noise on deconvolution performance?
34. Regarding two-step deconvolution that is predictive deconvolution followed by spiking deconvolution, discuss the two distinct goals for predictive deconvolution: (1) spiking the seismic wavelet and (2) predicting and suppressing multiples. Can the first goal be met using an operator with unit prediction distance? Can the second be met using an operator with a prediction distance greater than unity? How is the autocorrelation of the input trace used to determine the appropriate prediction distance for suppressing multiples? Periodicity resulting from multiples appears in the autocorrelation as an isolated series of energy packets in neighborhoods separated by the basic periodicity. Should prediction distance be chosen to bypass the first part of the autocorrelation that represents the seismic wavelet? Should operator length be chosen to include the first isolated energy packet in the autocorrelation? After applying gap deconvolution, what remains on the trace? At that point, can the basic wavelet be compressed by spiking deconvolution?
35. Two-step deconvolution is aimed at suppressing the multiple wave train and then spiking the remaining primary wavelet. Can the sequence be interchanged by first applying spiking deconvolution followed by predictive deconvolution? By using a sufficiently long spiking-deconvolution operator, can the two goals be achieved in one step? Can genuine reflections be suppressed unintentionally?
36. How can you ensure that no primaries are destroyed by deconvolution? Examine the autocorrelation. Which part of the autocorrelation represents the seismic wavelet? Look for isolated bursts that represent actual multiples. Does the first part of the autocorrelation chiefly represent the seismic wavelet? Should the prediction distance be chosen to bypass the first part of the autocorrelation? Should operator length be chosen to include the first isolated burst?
37. Why is periodicity of multiples preserved with only vertical incidence and zero-offset recording? Explain why predictive deconvolution aimed at suppression of multiples might not be effective when applied to nonzero-offset data such as common-shot or common-midpoint data. Predictive deconvolution sometimes is applied to CMP stacked data in an effort to suppress multiples. The performance of such an approach can be unsatisfactory because the amplitude relationships between multiples often are altered greatly by the stacking process. This result is primarily because of velocity differences between primaries and multiples. In addition, geometric-spreading compensation by using the primary velocity function adversely affects the amplitudes of multiples on nonzero-offset data. However, in the slant-stack (or τ-p) domain, the periodicity of multiples is preserved (Treitel et al., 1982).
38. Find a gather that contains strong reverberations. Determination of the deconvolution parameters starts with an analysis of the time gate. A first gate selected might be the entire length of the record. A second choice might exclude the deeper part of the record where ambient noise dominates. The start of the gate is chosen to follow the first arrivals. A third choice might be to exclude not only the deeper portion but also the early part of the record that contains energy corresponding to the guided waves. These waves travel within the water layer and are not part of the reflected signals from the deep horizons. What might be the merits and disadvantages of each of the above choices?
39. Autocorrelations from different gates should be compared. Does the autocorrelation represent the reverberatory character of the data? Does the early portion of the autocorrelation characterize the basic seismic wavelet?
40. Another aspect of an autocorrelation gate is length. The autocorrelation estimated from a narrow time gate in some cases might lack the characteristics of the reverberations and the basic seismic wavelet. In general, any autocorrelation function is biased; that is, the first lag value is computed from, say, N samples, the second lag value is computed from N − 1 samples, and so on. If N is not large enough, an undesirable biasing effect can result. How large should the data gate be to avoid such biasing? If the largest autocorrelation lag used in the design of the deconvolution operator is M, an accepted rule of thumb is that the number N of data samples should be at least 10M.
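The biasing effect and the rule of thumb can be expressed directly; the estimator below is the standard biased autocorrelation, in which each successive lag is summed over one fewer sample:

```python
import numpy as np

def autocorrelation(x, maxlag):
    """Biased autocorrelation estimate: lag k is summed over only
    len(x) - k overlapping samples, so large lags are underweighted."""
    x = np.asarray(x, float)
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(maxlag + 1)])

def min_gate_samples(max_design_lag):
    """Rule of thumb from the text: the gate should contain at least
    N = 10 * M samples, where M is the largest design lag."""
    return 10 * max_design_lag
```

For example, a deconvolution operator designed out to a maximum lag of 25 samples calls for a gate of at least 250 samples by this rule.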
41. Once the autocorrelation gate is determined, the issue of operator length can be examined. A short operator can leave behind some residual energy corresponding to the basic wavelet and the reverberating wave train. For a longer operator, smaller portions of the energy associated with the basic wavelet and the reverberations remain. However, at some point, any further increase in operator length does not significantly change the result. The goal is to find an operator that preserves the prominent reflections, compresses the seismic wavelet, and significantly suppresses the reverberations. How would you go about studying this problem?
42. Consider the effect of prediction distance. Show why, as the prediction distance increases, the deconvolution process becomes increasingly less effective in broadening the spectrum, whereas the autocorrelation contains increasingly more energy at nonzero lags. For very long prediction distances, the deconvolution process becomes ineffective. In practice, common values for the prediction distance are unity (spiking deconvolution) or the first or second zero crossing of the autocorrelation function (predictive deconvolution). Discuss the reasons why these criteria make sense or why they do not.
43. Why does increasing the prewhitening percentage make the deconvolution process less effective? The high end of the spectrum is not flattened as much as is the rest of the spectrum. The autocorrelations contain increasingly more energy at nonzero lags with an increasing percentage of prewhitening. In practice, it is not advisable to assign a large percentage of prewhitening. Typically, a value between 0.1 and 1% is sufficient to ensure stability in designing the deconvolution operator.
44. Discuss why the time-variant character of a seismic wavelet requires a multigate deconvolution. Can any of the following choices be done automatically, and if not, why not? It is not unusual for a field record to be deconvolved by using three or more time gates. Discuss how the autocorrelations from the various gates can show differences in the character of the reverberatory energy from one gate to another. A shallow gate has a higher-frequency signal than does a middle gate, and a middle gate has a higher-frequency signal than does a deeper gate. It is necessary to design deconvolution operators for each time gate. Three gates are usually enough to handle the nonstationary seismic signal. If the autocorrelations from different gates do not show significant variations, then multigate deconvolution might not be necessary.
45. Because the amplitude spectrum of the input data is flattened when one is applying spiking deconvolution, both the high-frequency ambient noise and the high-frequency components of the signal are boosted. Therefore, the output of spiking deconvolution often must be filtered with a low-pass operator. Discuss the advantages and disadvantages of this approach.
46. In marine seismic exploration, the far-field signature of the source array can be recorded. Discuss a method of first applying signature deconvolution to remove the source signature, followed by applying predictive deconvolution. Because the recorded signature is available, the inverse signature filter can be designed and applied to the recorded trace to remove the signature. The unknown seismic wavelet includes the propagating effects in the earth and the response of the recording system. This remaining wavelet then can be removed by spiking deconvolution.
47. Perform a test of gap deconvolution. Take an input gather and pick a gate to be used for the autocorrelation estimation. Plot the corresponding autocorrelation beneath each trace. Using the same prediction-filter operator length and a 1% prewhitening each time: (1) deconvolve with gap zero (i.e., perform spiking deconvolution). Filter the result with a zero-phase band-pass filter (with a passband of 10 to 30 Hz) and the corresponding minimum-phase band-pass filter. (2) Deconvolve with a gap of 32 ms. Compare the results of (1) and (2).
48. Perform a test of prewhitening percentages. Using the data in the exercise above, compute the same deconvolution using a prewhitening of 10%. Compare the results.
49. Let us examine three-gate deconvolution. Pick gate boundaries for three gates. With data from each gate, design a deconvolution operator and apply it to the data in that gate. The operators are blended across the gate boundaries. Do the same thing with a different set of gates on the same data. Compare the results. Describe how one should go about choosing gate boundaries and blending methods. Note the difference in character among gates.
50. Perform a test of spiking deconvolution. Spike-deconvolve a shot record and follow by band-pass filtering. Display the autocorrelation before deconvolution, after deconvolution, and after filtering.
51. Perform a test of deconvolution after stack. Take a CMP stack with no deconvolution and do spike deconvolution followed by band-pass filtering. Also perform gap deconvolution and compare the results.
52. Perform a test of signature processing. Design the filters necessary to convert a recorded signature to its minimum-phase equivalent and apply it to the input record. Note that the output has the same bandwidth as does the input.
53. Let us do another test of signature processing. Design a shaping filter to convert the recorded signature to a spike.
54. Discuss the following two ways to handle a signature: (1) Convert the signature to its minimum-phase equivalent and then do deconvolution. (2) Convert the signature to a spike and then do deconvolution. Discuss the case in which the signature is minimum phase and the case in which the signature is not minimum phase.
55. Let us examine vibroseis deconvolution. The vibroseis source is a long-duration sweep signal in the form of a frequency-modulated sinusoid that tapers on both ends. Describe the convolutional model that holds for a correlated vibroseis record. A Klauder wavelet is defined as the two-sided autocorrelation of the sweep signal. Why is a Klauder wavelet zero phase? Convolution of a Klauder wavelet with a minimum-phase seismic wavelet gives a mixed-phase wavelet. Because spiking deconvolution is based on the minimum-phase assumption, it cannot properly recover the reflectivity from correlated vibroseis data.
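A minimal sketch of a sweep and its Klauder wavelet; the sweep length, band, taper, and sampling interval are assumed values:

```python
import numpy as np

# Linear sweep (frequency-modulated sinusoid) tapered at both ends;
# all parameter values here are assumed for illustration.
dt, T = 0.002, 6.0                     # 2-ms sampling, 6-s sweep
f1, f2 = 10.0, 50.0                    # 10-50 Hz sweep band
t = np.arange(0.0, T, dt)
sweep = np.sin(2.0 * np.pi * (f1 * t + 0.5 * (f2 - f1) / T * t ** 2))
sweep *= np.minimum(1.0, np.minimum(t, T - t) / 0.25)   # 0.25-s end tapers

# The Klauder wavelet is the two-sided autocorrelation of the sweep.
klauder = np.correlate(sweep, sweep, mode='full')
mid = len(sweep) - 1                   # index of the zero lag
```

The autocorrelation of any real signal is symmetric about zero lag with its peak at zero lag, which is why the Klauder wavelet is zero phase.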
56. One approach to deconvolution of vibroseis data is to apply a zero-phase inverse filter to remove the Klauder wavelet, followed by a minimum-phase deconvolution to remove the basic wavelet. In practice, problems arise because of the presence of spectral zeros caused by the band-limited nature of the Klauder wavelet. Inversion of an amplitude spectrum that has zeros yields an unstable operator. To circumvent this problem, a small percentage of white noise, say 0.1%, usually is added before inverting the Klauder-wavelet spectrum.
57. Design a filter that converts a Klauder wavelet into its minimum-phase equivalent. If the Klauder wavelet is converted to its minimum-phase equivalent, then spiking deconvolution would be applicable because the minimum-phase assumption is satisfied. If there is a 90° phase difference in the vibrator system between the control sweep signal and the base-plate response, this phase difference must be subtracted.
58. Correlated vibroseis data often are deconvolved as if they were dynamite data — that is, without converting the Klauder wavelet to its minimum-phase equivalent. The basic minimum-phase assumption, which generally holds for explosive data, is violated for correlated vibroseis data. Discuss why spiking deconvolution without conversion of the Klauder wavelet to its minimum-phase equivalent seems to work for most field data. However, the problem of tying vibrator lines to lines recorded with dynamite or to downhole data is more difficult if the vibrator data have not been phase-corrected. Discuss field systems that do minimum-phase vibroseis correlation in the field.
59. Discuss low-pass, high-pass, band-pass, and notch filters. [Answer: Low-pass filters are useful in minimizing high-frequency noise and interference. High-pass filters are useful when the information spectrum lies above the part of the spectrum where most of the noise and interference power is located. Seismic noise power increases as we go to lower frequencies, so a high-pass filter assists in reducing this kind of problem. Band-pass filtering is useful when the information spectrum lies in a narrow range of frequencies. Another example is a comb filter that is an ensemble of band-pass filters, each set to a different passband. A comb filter in the frequency domain looks like the teeth of a comb. Notch filtering is useful if the unwanted signals (noise or interference) lie in a narrow band.]
60. Discuss FIR and IIR filters. [Answer: Filters categorized as FIR (finite-impulse-response) filters are absolutely stable but often require many points for accurate design. One disadvantage with them is the many multiplications and additions required in their application. Another disadvantage is that the filter can reach into areas far from its center and thus be more prone to noise and extraneous signals. The Z-transform of an IIR (infinite-impulse-response) filter is the quotient of two polynomials. Such filters are more flexible in design than FIR filters are, and in addition they require fewer multiplications and additions than do comparable FIR filters. However, an IIR filter is not as easy to design as a FIR filter is. Moreover, an IIR filter is susceptible to errors that can produce instability in that the output can oscillate or grow without limit.]
61. Discuss band-pass filters. [Answer: The amplitude spectrum of a typical band-pass filter has the appearance of a nearly flat plateau that drops off toward zero on each side of the passband. The passband lies between two frequencies known as the cutoff frequencies. Application of a band-pass filter to a seismic trace restricts the frequencies of the filtered trace to lie mainly in the passband. In exploration seismology, a processing geophysicist chooses the passband on the basis of the data. The geophysicist might use a filter with a passband of 10–70 Hz in the shallower part of the section (two-way traveltimes of less than 1 or 2 s) and might use a filter with a passband of 10–50 Hz in the deeper part of the section (more than 2 s).]
62. Discuss high-pass and low-pass filters. [Answer: The amplitude spectrum of a high-pass (also known as low-cut) filter is nearly flat above a certain cutoff frequency. However, frequencies below the cutoff frequency do not completely disappear in the output. Such frequencies are reduced to a low level. The amplitude spectrum of a low-pass filter (also known as a high-cut filter) is nearly flat below a certain cutoff frequency. Application of this filter greatly reduces all frequencies above the cutoff frequency. Note that a band-pass filter can be obtained as the product of a suitable low-pass filter and a suitable high-pass filter. The amplitude spectrum of a typical notch filter is nearly flat except in a small band or notch in which it approaches zero. There are two parameters: the notch frequency and the width of the notch. A notch filter can suppress power-line interference, such as 50 Hz and its harmonics or 60 Hz and its harmonics.]
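As an illustration of band-pass filtering in the spirit of the answers above, here is a minimal sketch using a zero-phase Butterworth filter; the trace, its frequency content, and the filter order are made up:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, low_hz, high_hz, dt, order=4):
    """Zero-phase Butterworth band-pass (applied forward and backward,
    so it does not shift phase); cutoffs are the passband edges."""
    nyquist = 0.5 / dt
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype='band')
    return filtfilt(b, a, trace)

# Hypothetical trace: a 30-Hz signal plus 5-Hz drift and 120-Hz noise.
dt = 0.002
t = np.arange(0.0, 2.0, dt)
trace = (np.sin(2 * np.pi * 30 * t)
         + np.sin(2 * np.pi * 5 * t)
         + 0.5 * np.sin(2 * np.pi * 120 * t))
out = bandpass(trace, 10.0, 70.0, dt)   # the 10-70 Hz passband from the answer
```

Frequencies inside the passband survive almost unchanged, while those below the low cutoff and above the high cutoff are greatly reduced but, as the answer notes, do not completely disappear.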
- Treitel, S., P. R. Gutowski, and D. E. Wagner, 1982, Plane-wave decomposition of seismograms: Geophysics, 47, 1375-1401.