Interactive earth-digital processing
|Series|Geophysical References Series|
|Title|Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing|
|Author|Enders A. Robinson and Sven Treitel|
|Store|SEG Online Store|
The method of interactive earth-digital processing can be used to increase seismic resolution. This method uses the earth itself as a computer, in conjunction with a digital computer. Robinson (1967, p. xix) writes:
The earth itself can be used as a computer. Through the use of high-powered electromechanical vibrators, signals of specific types can be impressed in the earth. By the use of directional detectors, signals can be picked up, and coded signals can then be fed back into the earth under the control of a digital computer on the surface, all in real time. Hence, the earth itself becomes a computing machine, which can effect the deconvolution process as the time series are recorded. Thus, we can use the earth itself to analyze the data from the earth. In this way, we can utilize seismic recording systems more efficiently; our equipment will not be idle a good part of the time as it is now, but will be used at great efficiency by incorporating the environment itself into our computing system.
Let us give an analogy from photography. In the nineteenth century, a photographer would use an analog camera to take a picture of a person. The picture was the final image. In the predigital days of seismic prospecting (from 1930 to about 1965), a geophysicist would record seismic records on photographic paper in the field. The paper records made up the final image that was used for interpretation. With the introduction of analog magnetic tape recording in the mid-1950s, some analog processing became feasible, such as electric analog filtering and static time shifts of the traces. However, this analog development soon was eclipsed by digital recording and processing, which became prevalent in the 1960s.
How does a photographer use a digital camera today? He can take a candid picture, that is, a photograph of a person taken informally on the street, often without the subject’s knowledge. Suppose a freelance photographer who pursues a celebrity can obtain only one picture in a particular circumstance. Unfortunately, the picture is full of extraneous items. Once he is back in his studio, the photographer uses his digital computer to remove the extraneous items and enhance the picture. The photographer chooses a configuration of enhancement parameters and makes an initial run on the computer. He looks at the result. If he is not satisfied with the picture, he adjusts the configuration of the parameters and makes another run on the computer. He continues to use this iterative improvement algorithm until he is satisfied with the result.
How does a geophysicist produce an image of the subterranean earth? In the field, the geophysicist records a digital seismic data set. Unfortunately, the seismic data set is full of extraneous items. The situation is analogous to that of the freelance photographer. Once he is back in the office, the geophysicist uses his digital computer to remove the extraneous items and enhance the data set. (See Figure 16a.) Like a freelance photographer, the geophysicist uses iterative improvement to obtain a satisfactory final image of the subterranean earth. In the case of either the freelance photographer or the geophysicist, one fixed set of data is used in an iterative improvement algorithm on the computer to obtain the final image.
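The fixed-data workflow described above can be sketched in a few lines of code. This is only an illustrative toy, not the processing used in practice: the functions `enhance` and `quality`, and the single `gain` parameter, are invented stand-ins for a real enhancement step and a real measure of image quality.

```python
# Hypothetical sketch of iterative improvement on ONE fixed data set,
# as used by the freelance photographer or the conventional geophysicist.
# enhance(), quality(), and the "gain" parameter are illustrative stand-ins.

def enhance(data, params):
    """Stand-in for an enhancement step (e.g., filtering, deconvolution)."""
    return [x * params["gain"] for x in data]

def quality(image, target):
    """Illustrative misfit against a desired image: lower is better."""
    return sum((a - b) ** 2 for a, b in zip(image, target))

def iterative_improvement(data, target, n_iter=20):
    """Rerun enhancement on the SAME data, keeping parameter changes
    only when they improve the result."""
    params = {"gain": 1.0}
    best = enhance(data, params)
    for _ in range(n_iter):
        trial = dict(params, gain=params["gain"] * 0.9)  # adjust parameters
        image = enhance(data, trial)                     # rerun on same data
        if quality(image, target) < quality(best, target):
            params, best = trial, image                  # keep the improvement
    return best
```

The essential point the sketch makes is that the data set never changes; only the processing parameters do.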
In another circumstance, an artistic photographer in his studio formally takes a posed picture of a person. He places the person in a particular attitude or position so as to photograph that person in the most flattering way. The artistic photographer takes a picture with his digital camera and looks at the result on the display. If he is not satisfied, he adjusts the camera and adjusts the pose of the subject as well. He takes a second picture. The artistic photographer repeats this process until a satisfactory picture is produced. Now the iterative improvement algorithm involves not just the digital equipment but also the subject. Another picture of the subject is taken with each iteration. The freelance photographer has one picture with which to work; the artistic photographer has as many pictures as he wishes, all of his own choosing. Needless to say, the artistic photographer obtains a final picture of much better quality than does the freelance photographer.
The great strides made in instrumentation and computers open up the possibilities of using the earth as a computer (Robinson, 2008). A geophysicist now, as it were, can emulate the prowess of the artistic photographer by bringing the subject (i.e., the earth) within the feedback loop of the iterative improvement algorithm (Figure 16b). In interactive earth-digital processing, a vibratory source signature optimally would be coded both in time and in space (Tyapkin and Robinson, 2001, 2003a, 2003b). This coded signature is used to record a set of seismic field data. The field data are run directly into a computer on location. The computer analyzes the set of field data and forms an image. If the image is not satisfactory, both the configuration of the parameters and the coded signature are adjusted. These feedback operations occur in real time. A second set of seismic field data is taken. The process is repeated in an iterative-improvement manner until a satisfactory image of the earth is produced. The essence of interactive design in photography and in geophysics is the use of successive sets of data produced by the object itself (i.e., the person photographed or the earth probed, as the case may be) to refine the image to obtain better resolution.
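The feedback loop with the earth inside it can be sketched as follows. This is a hedged toy model, not a field system: the earth is simulated by a convolution with an invented response, the recording window `N`, the residual-driven signature update, and all function names are assumptions made for illustration.

```python
# Toy sketch of interactive earth-digital processing: the earth (simulated
# here by acquire()) sits INSIDE the feedback loop, so each iteration
# records a fresh data set with an adjusted coded signature.

N = 8                                   # samples kept in the recording window
IMPULSE = [1.0] + [0.0] * (N - 1)

def convolve(a, b, n=N):
    """Truncated convolution of two wavelets, keeping n samples."""
    out = [0.0] * n
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < n:
                out[i + j] += x * y
    return out

def acquire(signature, earth):
    """Stand-in for the earth as a computer: it filters the coded signature."""
    return convolve(signature, earth)

def misfit(data):
    """Squared distance of the recorded wave from a clean spike."""
    return sum((d - t) ** 2 for d, t in zip(data, IMPULSE))

def interactive_loop(earth, tol=1e-9, max_iter=30):
    """Adjust the coded signature and shoot again until the record is a spike."""
    signature = list(IMPULSE)           # first shot: a plain spike
    data = acquire(signature, earth)
    for _ in range(max_iter):
        if misfit(data) < tol:
            break
        # feedback: correct the signature by the residual, then re-acquire
        signature = [s + (t - d) for s, d, t in zip(signature, data, IMPULSE)]
        data = acquire(signature, earth)
    return signature, data
```

Each pass through the loop produces a new recording, so the "data set" is not fixed; that is the difference from the conventional workflow.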
Seismology involves remote detection. Photography and holography make an image only of what can be seen with the naked eye. Holography can make a 3D image of a house, but it cannot see through the walls to make an image of the inside of the house. Seismology can look inside the earth to make a 3D image of the entire structure, inside and out. To achieve this end, seismology requires analysis (such as that provided by decomposition or deconvolution) to remove multiple reflections. In conventional seismic processing, deconvolution is done on the recorded seismic data set inside the computer. In interactive earth-digital processing, the earth itself is tricked into doing the deconvolution.
As an example, let a receiver be placed at depth in a drill hole. The source is placed above the receiver. The receiver is a directional detector, which picks up both the downgoing wave and the upgoing wave. In Einstein deconvolution (Chapter 9), the upgoing wave is deconvolved by the downgoing wave. The Einstein deconvolution process removes not only the source signature from the data but also the interference caused by reverberations and multiple reflections originating from the geologic layering above the receiver. Because this type of interference is the most harmful on the seismogram, the Einstein deconvolution method offers the possibility of widespread application. Einstein deconvolution, with its demand for extremely accurate data, usually is not feasible. The process of bringing the earth itself within the iteration loop improves the likelihood that such accuracy can be obtained.
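The core operation of deconvolving the upgoing wave by the downgoing wave can be illustrated with a toy computation. This sketch assumes the downgoing wave is minimum delay with a nonzero leading sample, and models deconvolution as polynomial long division; the particular wavelets are invented numbers, not data from the text.

```python
# Illustrative sketch only: deconvolution of the upgoing wave by the
# downgoing wave, modeled as polynomial long division. Assumes the
# downgoing wave d is minimum delay with d[0] != 0; all wavelets are toys.

def convolve(a, b):
    """Full convolution of two wavelets."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def deconvolve(up, down, n):
    """Recover r (n samples) such that up = down * r, by long division."""
    rem = list(up) + [0.0] * n          # working remainder
    r = []
    for k in range(n):
        coeff = rem[k] / down[0]
        r.append(coeff)
        for j, dj in enumerate(down):   # subtract coeff * shifted down wave
            rem[k + j] -= coeff * dj
    return r

# Toy example: a downgoing wave with a surface reverberation, and a
# hypothetical deep reflection response r with two reflectors.
down = [1.0, -0.5]                      # hypothetical downgoing wave
r_true = [0.0, 0.0, 0.8, 0.0, 0.3]      # hypothetical deep-earth response
up = convolve(down, r_true)             # upgoing wave seen at the receiver
r_est = deconvolve(up, down, len(r_true))
```

Because the same downgoing wave carries both the source signature and the reverberations from the layers above the receiver, dividing it out removes both at once, which is the point made in the paragraph above.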
The output of Einstein deconvolution is the unit-impulse reflection response of the deep earth system (i.e., of the rock layers below the receiver). In conventional digital processing, Einstein deconvolution would be performed on a fixed set of received seismic data. In interactive earth-digital processing, a new set of seismic field data is obtained in each iteration. We take the downgoing wave recorded at the receiver and invert it in the computer to obtain the inverse downgoing wave. Now we use this inverse downgoing wave as the new coded signature signal sent into the ground on the next iteration. The iterations would continue until the downgoing wave recorded at the receiver becomes approximately an impulse. In such a case, the upgoing wave recorded at the receiver would be the required impulse response of the deep-earth system. In this way, all the interference caused by reverberations and multiple reflections originating from the upper geologic layers would be removed. A clear picture of the lower layers would emerge as the upgoing wave at the receiver. The earth itself is used to perform the Einstein deconvolution.
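The iteration just described can be sketched in code. This is a hedged toy model under invented responses: `g` stands in for the near-surface layering that shapes the downgoing wave, `r` for the deep-earth reflection response below the receiver, and the wavelet inversion is done by simple long division; none of these names or numbers come from the text.

```python
# Toy sketch of the earth performing Einstein deconvolution: invert the
# recorded downgoing wave in the computer and send the inverse back as
# the next coded signature, until the downgoing wave is a spike.
# g (near-surface filter) and r (deep-earth response) are assumptions.

N = 8
IMPULSE = [1.0] + [0.0] * (N - 1)

def convolve(a, b, n=N):
    """Truncated convolution, keeping n samples."""
    out = [0.0] * n
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < n:
                out[i + j] += x * y
    return out

def invert(w, n=N):
    """Inverse wavelet via long division of a spike by w (w[0] != 0)."""
    rem = [1.0] + [0.0] * (2 * n)
    inv = []
    for k in range(n):
        c = rem[k] / w[0]
        inv.append(c)
        for j, wj in enumerate(w):
            rem[k + j] -= c * wj
    return inv

def earth_side_einstein(g, r, tol=1e-12, max_iter=5):
    """Recode the signature each shot; when the downgoing wave becomes a
    spike, the upgoing wave approximates the deep-earth impulse response."""
    signature = list(IMPULSE)
    for _ in range(max_iter):
        down = convolve(signature, g)   # recorded downgoing wave
        up = convolve(down, r)          # recorded upgoing wave
        if sum((d - t) ** 2 for d, t in zip(down, IMPULSE)) < tol:
            break
        signature = invert(down)        # inverse becomes next coded signature
    return down, up
```

In the toy model the loop converges quickly; in the field, each pass would require a new shot, which is exactly why the earth must sit inside the iteration.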
Let us give an analogy. An earth-bound telescope suffers from the interference effects caused by the layers of the earth’s atmosphere. The Hubble telescope was placed in space above the atmosphere. An image of the stars taken by the Hubble telescope, being free of these interference effects, is much clearer than the same image produced by an earthbound telescope. In a similar manner, a seismic survey suffers from the interference effects caused by the earth’s upper surface layers. We can drill a hole in the earth for the receiver. Drilling holes for the receiver is expensive, so a judicious choice would have to be made regarding the number and location of such holes. Unfortunately, we still have the physical limitation that the vibroseis-type source cannot be placed underground. Einstein deconvolution can be used to overcome this limitation, so we still can obtain the same beneficial effect as that obtained by the Hubble telescope. Earth-digital processing helps us to obtain the accuracy required to carry out successful Einstein deconvolution.
The same reasoning as that used for Einstein deconvolution can be used for dynamic deconvolution and for other inversion methods (Stoffa et al., 1998; Sen, 2001; Hong and Sen, 2006). To use such exacting deconvolution methods, the earth must be brought into the feedback loop. In this way, fresh seismic data are generated on each iteration. The succession of seismic data sets provides the means to assure that the process is indeed working correctly.
- Robinson, E. A., 1967, Multichannel time series analysis with digital computer programs: Holden-Day.
- Robinson, E. A., 2008, Seismic resolution and interactive earth-digital processing: The Leading Edge, 27, no. 5, 670-673.
- Tyapkin, Y., and E. Robinson, 2001, Why waste energy and money with improper sweeps? 71st Annual International Meeting, SEG, Expanded Abstracts, 13-16.
- Tyapkin, Y., and E. Robinson, 2003a, Optimum pilot sweep: Geophysical Prospecting, 51, no. 1, 15-22.
- Tyapkin, Y., and E. Robinson, 2003b, How to optimize the pilot sweep of vibratory sources in seismic surveys: First Break, 21, no. 2, 47-52.
- Stoffa, P. L., M. K. Sen, and G. Xia, 1998, 1-D elastic waveform inversion: A divide-and-conquer approach: Geophysics, 63, 1670-1684.
- Sen, M. K., 2001, Pre-stack waveform inversion of plane wave seismograms: Isotropy to transverse isotropy: Recorder, 26, no. 6, 85-96.
- Hong, T., and M. K. Sen, 2006, Real-coded multiscale genetic algorithm for prestack waveform inversion: 76th Annual International Meeting, SEG, Expanded Abstracts, 2161-2165.