Chapter 2 Digital Imaging
A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring. - Alexander Pope
In the past, most seismic surveys were acquired along surface lines, which yield 2D subsurface images. Because of great strides in computer technology and seismic instrumentation, exploration geophysics has made the transition from 2D to 3D processing.
The wave equation behaves nicely in one dimension and in three dimensions but not in two dimensions. In one dimension, waves on a uniform string propagate without distortion. In three dimensions, waves in a homogeneous isotropic medium propagate without distortion except for an amplitude factor due to spherical spreading. However, in two dimensions, wave propagation is complicated and distorted. By its very nature, 2D processing never can account for events originating outside the plane. As a result, 2D processing is broken up into a large number of approximate partial steps in a sequence of operations. These steps are ingenious, but they never can give a true image.
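The contrast between 1D and 3D propagation can be verified numerically from the classical closed-form solutions: in 1D, d'Alembert's solution u(x, t) = f(x - ct) translates the waveform rigidly, whereas in 3D the spherical solution u(r, t) = f(r - ct)/r keeps the waveform's shape but scales its amplitude by 1/r. The sketch below (with an arbitrary Gaussian pulse and arbitrary units, chosen for illustration) checks both properties:

```python
import numpy as np

def pulse(s):
    """A simple Gaussian pulse used as the source waveform."""
    return np.exp(-s**2)

c = 2.0                                  # propagation speed (arbitrary units)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# 1D (d'Alembert): u(x, t) = f(x - c t). The waveform translates rigidly,
# so its shape at t = 3 is identical to its shape at t = 0, just shifted.
u0 = pulse(x - c * 0.0)
u3 = pulse(x - c * 3.0)
shift = int(round(c * 3.0 / dx))         # number of samples in the shift c*t
assert np.allclose(u3[shift:], u0[:-shift])   # same shape, merely displaced

# 3D: u(r, t) = f(r - c t) / r. The waveform keeps its shape, but its
# amplitude decays as 1/r -- the spherical-spreading factor.
r = np.linspace(0.1, 20.0, 2001)
v3 = pulse(r - c * 3.0) / r
assert v3.max() < u3.max()               # amplitude reduced by spreading
```

In 2D there is no comparably simple closed form: a sharp impulse leaves a trailing wake behind the wavefront, which is one way to see why purely 2D processing cannot reproduce the true wavefield.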
On the other hand, 3D processing accounts for all of the events. It is now cost-effective to lay out seismic surveys over a surface area and to do 3D processing. No longer is the third dimension missing, so consequently, the need for a large number of piecemeal 2D approximations is gone. Prestack depth migration is a 3D imaging process that is computationally intensive but mathematically simple. The resulting 3D images of the interior of the earth surpass all expectations in utility and beauty.
Reflection seismology is a remote-imaging method used in petroleum exploration. The seismic reflection method was developed in the 1920s. From then until about 1965, the method involved two steps: acquisition and interpretation. Acquisition consisted of setting off a dynamite explosion in the ground and using geophones planted in the ground to detect the resulting seismic waves. The geophones were laid out on a line. The received waves were recorded on photographic paper on a drum. The recordings were taken for a time span of about two or three seconds after the moment of the shot. Each receiver accounted for a single wiggly line, which is called a seismic trace or simply a trace. In other words, a seismic trace is a signal originating from a specific source location and received at a specific receiver location. In the early days, a seismic crew could fire approximately ten shots a day, with a dozen or so receivers for each shot. Each shot yielded an analog seismic record made up of traces, one trace from each receiver (Figure 1).
Interpretation was the next step. Each record was examined visually. A primary reflection is an event that represents direct passage of a seismic wave from the source to the depth point from which it is reflected and then a direct passage back to the receiver. In other words, a primary reflection involves just one reflection. In contrast, a multiple reflection involves more than one reflection (Figure 2).
At a reflection, traces become coherent in the sense that they come into phase with each other. In other words, at a reflection, the crests and troughs on adjacent traces give the appearance of fitting into one another. The arrival time of a reflection indicates the depth of the reflecting horizon below the surface, whereas the time differential (the so-called stepout time) in the arrivals of a given peak or trough at successive receiver positions gives information on the dip of the reflecting horizon. In favorable areas, it is possible to follow the same reflection over a distance much greater than that covered by the receiver spread for a single record. In such cases, interpreters place the records side by side. The reflection from the last trace of one record correlates with the first trace of the next record. Such correlation can be continued on successive records as long as the reflection persists.
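The two measurements described above can be put into formulas. For the textbook idealization of a constant-velocity medium and zero-offset (coincident source and receiver) geometry, the two-way arrival time t0 gives the depth z = v·t0/2, and the stepout dt/dx of a plane dipping reflector satisfies dt/dx = 2 sin(θ)/v. This idealized relation, not any specific procedure from this chapter, is sketched below; the function name and example numbers are illustrative:

```python
import math

def depth_and_dip(t0, dt_dx, v):
    """
    Estimate reflector depth and dip from a picked reflection, assuming a
    constant-velocity medium and zero-offset geometry (an idealization).

    t0    : two-way arrival time at a reference position (s)
    dt_dx : stepout time per unit distance along the receiver line (s/m)
    v     : seismic wave speed (m/s)
    """
    # Two-way vertical traveltime converts to depth via z = v * t0 / 2.
    z = v * t0 / 2.0
    # For a plane dipping reflector, the zero-offset stepout satisfies
    # dt/dx = 2 sin(theta) / v, so theta = arcsin(v * (dt/dx) / 2).
    theta = math.degrees(math.asin(v * dt_dx / 2.0))
    return z, theta

# Example: a 2.0-s arrival with 0.2 ms of stepout per meter at v = 2000 m/s.
z, theta = depth_and_dip(t0=2.0, dt_dx=0.0002, v=2000.0)
print(f"depth = {z:.0f} m, dip = {theta:.1f} degrees")
# → depth = 2000 m, dip = 11.5 degrees
```

A flat reflector (dt_dx = 0) gives zero dip, and a steeper stepout gives a steeper dip, which matches the qualitative description in the text.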
Unfortunately, in unfavorable areas, signal-generated noise overwhelms the primary reflections. In such cases, the primary reflections cannot be picked visually. For example, water-layer reverberations usually completely overwhelm the primaries in water-covered areas such as the Gulf of Mexico, the North Sea, and the Persian Gulf. The most extreme case of multiples that geophysicist Larry Lines has ever seen was from the coast of Labrador (the place that Leif Eriksson had named Markland more than a thousand years ago), where there were at least 20 strong water-bottom bounces. Areas where geophysicists can see “no reflections” on the raw record are termed no-record (NR) areas. The NR areas of the world were not amenable to oil exploration before digital processing.
In the 1950s, a large part of the earth’s sedimentary basins, including essentially all water-covered regions, was classified as no-record areas. Yet the 1940s and 1950s were replete with inventions, not the least of which was the modern high-speed electronic stored-program digital computer. From 1952 through 1957, nearly every major oil company and geophysical company sponsored the MIT Geophysical Analysis Group (GAG) to develop digital-processing methods for unlocking the secrets of NR seismograms (Wadsworth et al., 1953; Robinson, 1957, 2005; Treitel, 2005). This undertaking was the first effort to convert an industry from analog to digital methodology. The goal was to find ways to remove signal-generated noise (such as ghosts, reverberations, and other multiple reflections) to yield the underlying primary reflections.
The GAG succeeded in developing signal-processing methods (such as deconvolution) that preserved the primary reflections and suppressed signal-generated noise. However, the unreliability of the existing computers made them far from suitable for routine geophysical processing. Nevertheless, each year brought a steady stream of improvements in computers, and that trend in computer technology was accelerating.
With the introduction of transistorized computers in the late 1950s, the situation changed. By 1965, the digital approach was so much in force in exploration geophysics that there could be no turning back to the old analog ways (Robinson and Treitel, 1964; Claerbout and Robinson, 1964). Oil and geophysics companies started to carry out the conversion from analog to digital methods. These companies made extensive advances in digital processing and technology (Lawyer et al., 2001). Exploration geophysics was the first industry to go digital. Geophysicists introduced the term digital revolution to describe this transition (Robinson and Clark, 2005b).
- Wadsworth, G. P., E. A. Robinson, J. G. Bryan, and P. M. Hurley, 1953, Detection of reflection on seismic records by linear operators: Geophysics, 18, 539–586.
- Robinson, E. A., 1957, Predictive decomposition of seismic traces: Geophysics, 22, 767–778.
- Robinson, E. A., 2005, The MIT Geophysical Analysis Group (GAG) from inception to 1954: Geophysics, 70, no. 4, 7JA–30JA.
- Treitel, S., 2005, The MIT Geophysical Analysis Group (GAG): Geophysics, 70, no. 4, 30JA.
- Robinson, E. A., and S. Treitel, 1964, Principles of digital filtering: Geophysics, 29, 395–404.
- Claerbout, J., and E. A. Robinson, 1964, The error in least squares inverse filtering: Geophysics, 29, 118–120.
- Lawyer, L. C., C. C. Bates, and R. B. Rice, 2001, Geophysics in the affairs of mankind: SEG.
- Robinson, E. A., and R. D. Clark, 2005b, Reflecting on the digital revolution: The Leading Edge, 24, no. 10, 1030–1032.
Also in this chapter
- Digital processing
- Signal enhancement
- The unit tangent vector
- The gradient
- The directional derivative
- The principle of least time
- The eikonal equation
- Snell’s law
- Ray equation
- Ray equation for velocity linear with depth
- Raypath for velocity linear with depth
- Traveltime for velocity linear with depth
- Point of maximum depth
- Wavefront for velocity linear with depth
- Two orthogonal sets of circles
- Migration in the case of constant velocity
- Implementation of migration
- Appendix B: Exercises