Processing of seismic data

From SEG Wiki
Seismic Data Analysis
Series: Investigations in Geophysics
Author: Öz Yilmaz
ISBN: 978-1-56080-094-1
Store: SEG Online Store

Seismic data recorded in digital form by each channel of the recording instrument are represented by a time series. Processing algorithms are designed for and applied either to single-channel time series individually or to multichannel time series. The Fourier transform constitutes the foundation of much of the digital signal processing applied to seismic data. Aside from sections on the one- and two-dimensional Fourier transforms and their applications, fundamentals of signal processing also includes a section on a worldwide assortment of recorded seismic data. By referring to the field data examples, we examine characteristics of the seismic signal (primary reflections from layer boundaries) and of random and coherent noise, such as multiple reflections, reverberations, linear noise associated with guided waves, and diffractions from point scatterers. Fundamentals of signal processing concludes with a section on the basic processing sequence and guidelines for quality control in processing.
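
As a small concrete illustration of the first of these tools, the sketch below computes the amplitude spectrum of a single trace with the one-dimensional Fourier transform. The 25-Hz Ricker-type wavelet and all numerical parameters are hypothetical choices for illustration, not values from the text.

```python
import numpy as np

dt, nt = 0.004, 512            # sample interval (s) and trace length -- hypothetical
t = np.arange(nt) * dt

# Synthetic trace: a single 25-Hz Ricker wavelet centered at 0.5 s
f0 = 25.0
arg = (np.pi * f0 * (t - 0.5)) ** 2
trace = (1.0 - 2.0 * arg) * np.exp(-arg)

# 1-D Fourier transform: amplitude spectrum of the trace
freqs = np.fft.rfftfreq(nt, d=dt)
spectrum = np.abs(np.fft.rfft(trace))

# The spectral peak of a Ricker wavelet falls at its dominant frequency
peak_freq = freqs[int(np.argmax(spectrum))]
```

The spectral peak falls at the wavelet's dominant frequency, which is the kind of trace-by-trace diagnostic the one-dimensional transform provides.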

The Principal Processes

The next three sections are devoted to the three principal processes — deconvolution, CMP stacking, and migration. Deconvolution often improves temporal resolution by collapsing the seismic wavelet to approximately a spike and suppressing reverberations on some field data (Figure I-7). The problem with deconvolution is that the accuracy of its output may not always be self-evident unless it can be compared with well data. The main reason for this is that our model for deconvolution is nondeterministic in character.
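
To make the idea concrete, here is a minimal sketch of spiking deconvolution by a least-squares (Wiener) inverse filter; the toy wavelet, filter length, and prewhitening level are hypothetical and chosen only for illustration, not taken from the text.

```python
import numpy as np

# Toy minimum-phase wavelet (hypothetical, for illustration only)
wavelet = np.array([1.0, -0.5, 0.25, -0.125])

nf = 8  # inverse-filter length
# Autocorrelation lags of the wavelet
full_ac = np.correlate(wavelet, wavelet, mode="full")[len(wavelet) - 1:]
r = np.zeros(nf)
r[:len(full_ac)] = full_ac
r[0] *= 1.01                      # 1% prewhitening stabilizes the inversion

# Normal equations: Toeplitz autocorrelation matrix times filter = spike
R = np.array([[r[abs(i - j)] for j in range(nf)] for i in range(nf)])
desired = np.zeros(nf)
desired[0] = wavelet[0]           # desired output: a zero-lag spike
f = np.linalg.solve(R, desired)

# Convolving the filter with the wavelet collapses it toward a spike
out = np.convolve(wavelet, f)
```

Prewhitening, the small percentage added to the zero-lag autocorrelation, is what keeps the normal equations well conditioned when the wavelet's spectrum has notches.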

We study the second principal process, CMP stacking, together with the closely related subjects of velocity analysis, normal-moveout (NMO) correction, and statics corrections. Common-midpoint stacking is the most robust of the three principal processes. By using the redundancy in CMP recording, stacking attenuates uncorrelated noise significantly, thereby increasing the S/N ratio (Figure I-3). It also can attenuate a large part of the coherent noise in the data, such as guided waves and multiples.
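
A minimal numerical sketch of this S/N benefit, assuming an already NMO-corrected gather; the fold, wavelet, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, fold = 500, 24                 # samples per trace and CMP fold -- hypothetical
t = np.linspace(0.0, 1.0, nt)

# A band-limited reflection event common to all traces in the gather
signal = np.sin(2.0 * np.pi * 30.0 * t) * np.exp(-((t - 0.5) ** 2) / 0.002)

# CMP gather: the same (already NMO-corrected) event on every trace,
# plus noise that is uncorrelated from trace to trace
gather = signal + rng.normal(0.0, 1.0, size=(fold, nt))

# Stacking: average the traces in the gather
stack = gather.mean(axis=0)

# Uncorrelated noise is attenuated by roughly the square root of the fold
noise_in = float(np.std(gather[0] - signal))
noise_out = float(np.std(stack - signal))
```

Because the reflection is identical on every trace while the noise is independent, averaging over a fold of 24 reduces the noise amplitude by roughly a factor of five.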

The normal-moveout (NMO) correction before stacking is applied using the velocity function of the primary reflections. Because multiples have larger moveout than primaries, they are undercorrected and, hence, attenuated during stacking (Figure I-8).
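
The correction itself can be sketched on a toy gather. The stacking velocity, zero-offset time, and offsets below are hypothetical; the hyperbolic traveltime t(x) = sqrt(t0^2 + x^2/v^2) is the standard moveout equation.

```python
import numpy as np

dt, nt = 0.004, 400                 # sample interval (s) and trace length -- hypothetical
v, t0 = 2000.0, 0.8                 # stacking velocity (m/s) and zero-offset time (s)
offsets = np.array([0.0, 400.0, 800.0, 1200.0])
times = np.arange(nt) * dt

# Synthetic CMP gather: one primary along the hyperbola
# t(x) = sqrt(t0**2 + (x / v)**2)
gather = np.zeros((len(offsets), nt))
tx = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
for i, ti in enumerate(tx):
    gather[i, int(round(ti / dt))] = 1.0

# NMO correction: the output sample at time t is read from the input at
# t(x) = sqrt(t**2 + (x / v)**2), which flattens the event at t0
corrected = np.zeros_like(gather)
for i, xoff in enumerate(offsets):
    t_in = np.sqrt(times ** 2 + (xoff / v) ** 2)
    corrected[i] = np.interp(t_in, times, gather[i])
```

After correction the event is flat at t0 on every trace, so the traces stack constructively; an event with larger moveout (a multiple) would remain curved and stack destructively.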

The main problem with CMP stacking is that it is based on the hyperbolic moveout assumption. Although this assumption may be violated in areas with severe structural complexities, seismic data acquired in many parts of the world seem to satisfy it reasonably well.

Data acquired on land must be corrected for elevation differences at shot and receiver locations and for traveltime distortions caused by a near-surface weathering layer. The corrections usually are in the form of vertical traveltime shifts to a flat datum level (statics corrections). Because of uncertainties in near-surface model estimation, there always remain some residual statics, which need to be removed from the data before stacking (Figure I-9).
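
A minimal sketch of the elevation part of these corrections; the datum, replacement velocity, and elevations are hypothetical, and the weathering-layer terms that a real near-surface model would include are omitted.

```python
import numpy as np

datum = 100.0            # flat datum elevation (m) -- hypothetical
v_repl = 2500.0          # replacement velocity below weathering (m/s) -- hypothetical

shot_elev = np.array([112.0, 118.0, 107.0])
rcvr_elev = np.array([109.0, 121.0, 103.0])

# Elevation statics: one vertical traveltime shift per shot and per
# receiver, referencing each to the flat datum
shot_static = (shot_elev - datum) / v_repl
rcvr_static = (rcvr_elev - datum) / v_repl

# The total static applied to a trace is the sum of its shot term and
# its receiver term (sign conventions vary between processing systems)
total_static = shot_static + rcvr_static
```

In practice these elevation terms are combined with weathering corrections estimated from uphole times or refraction analysis; only the elevation part is sketched here.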

Finally, we study the third principal process, migration. Migration collapses diffractions and moves dipping events to their supposedly true subsurface positions (Figure I-10). In other words, migration is an imaging process. Because it is based on the wave equation, migration also is a deterministic process. The migration output often is self-evident: you can tell whether the output is migrated properly. When it is not, the uncertainty often can be traced to the imprecision of the velocity information available for input to the migration program. Other factors that influence migration results include the type of input data, two-dimensional (2-D) or three-dimensional (3-D); the migration strategy, time or depth, poststack or prestack; and the algorithms and their associated parameters. Two-dimensional migration does not correctly position events with 3-D orientation in the subsurface. Note the accurate imaging of the erosional unconformity (event A) in Figure I-10. However, this event is intersected by event B, which is most likely associated with the same unconformity, except that it lies out of the plane of recording along the line traverse.
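
The collapse of a diffraction can be sketched with a deliberately naive constant-velocity, zero-offset diffraction summation; it ignores the amplitude and obliquity weights of a true Kirchhoff operator, and all parameters are hypothetical.

```python
import numpy as np

dt, dx = 0.004, 25.0              # time and trace sampling -- hypothetical
nt, nx = 300, 81
v = 2000.0                        # constant velocity assumed for the sketch
times = np.arange(nt) * dt
x = np.arange(nx) * dx

# Zero-offset section containing one diffraction hyperbola produced by
# a point scatterer at x0 = 1000 m, t0 = 0.4 s
x0, t0 = 40 * dx, 0.4
section = np.zeros((nt, nx))
t_diff = np.sqrt(t0 ** 2 + 4.0 * (x - x0) ** 2 / v ** 2)
for j, tj in enumerate(t_diff):
    it = int(round(tj / dt))
    if it < nt:
        section[it, j] = 1.0

# Migration by diffraction summation: each output sample is the sum of
# input amplitudes along its own diffraction curve
image = np.zeros_like(section)
for j in range(nx):
    # two-way traveltime from output location x[j] to every input trace
    hyp = np.sqrt(times[:, None] ** 2 + 4.0 * (x[None, :] - x[j]) ** 2 / v ** 2)
    it = np.rint(hyp / dt).astype(int)
    for k in range(nt):
        ok = it[k] < nt
        image[k, j] = section[it[k, ok], np.flatnonzero(ok)].sum()
```

Only at the scatterer's true position does the summation curve coincide with the recorded hyperbola, so the smeared diffraction energy collapses to a single point in the image.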

Dip-moveout correction, noise and multiple attenuation techniques, and processing of 3-D seismic data

Events with conflicting dips require an additional step prior to CMP stacking: dip-moveout (DMO) correction (Figure I-11). Conflicting dips with different stacking velocities often are associated with fault blocks and salt flanks. Specifically, the moveout associated with steeply dipping fault-plane reflections or reflections off a salt flank conflicts with the moveout associated with reflections from gently dipping strata. Following NMO correction, DMO correction is applied to the data so as to preserve events with conflicting dips during stacking. Migration of a DMO stack then yields an improved image of fault blocks (Figure I-11) and salt flanks (Figure I-1).

The rigorous solution to the problem of conflicting dips with different stacking velocities is migration before stack. This is closely related to DMO correction.

We explore various techniques for attenuating random noise, coherent noise, and multiple reflections. Techniques to attenuate random noise exploit the fact that such noise is uncorrelated from trace to trace, whereas the signal is correlated. Techniques to attenuate coherent linear noise exploit its linearity in the frequency-wavenumber and slant-stack domains. Finally, techniques to attenuate multiples exploit their periodicity in the common-midpoint, slant-stack, and velocity-stack domains. Multiples also can be attenuated by using techniques that exploit the velocity discrimination between primaries and multiples in the same domains.
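
The periodicity idea can be sketched on an idealized water-bottom reverberation train; the period and reflection coefficient are hypothetical, and the two-term operator is the textbook idealization for this simple model, not a general implementation.

```python
import numpy as np

nt = 500
period_true = 50       # water-bottom two-way time in samples -- hypothetical
r = 0.5                # water-bottom reflection coefficient -- hypothetical

# Trace with a primary and its reverberation train: spikes at n*period
# with amplitudes (-r)**(n-1), a perfectly periodic multiple series
trace = np.zeros(nt)
n = 1
while n * period_true < nt:
    trace[n * period_true] = (-r) ** (n - 1)
    n += 1

# The autocorrelation reveals the period of the multiples
ac = np.correlate(trace, trace, mode="full")[nt - 1:]
period = int(np.argmax(np.abs(ac[1:]))) + 1   # strongest nonzero lag

# Predictive deconvolution for this idealized model: the two-term
# operator (1, r) at the detected lag cancels the entire train
out = trace.copy()
out[period:] += r * trace[:-period]
```

The autocorrelation peak at the reverberation period is exactly the signature that prediction-lag (gapped) deconvolution exploits; here the operator leaves only the primary.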

After reviewing the fundamentals of signal processing, studying the three principal processes (deconvolution, CMP stacking, and migration), and reviewing dip-moveout correction and the noise and multiple attenuation techniques, we then move on to processing of 3-D seismic data. The principal objective of 3-D seismic exploration is to obtain an earth image in three dimensions. All of the 2-D processing techniques either are directly applicable to 3-D seismic data or, like migration and dip-moveout correction, need to be extended to the third dimension.

There is a fundamental problem with seismic data processing. Even when starting with the same raw data, the result of processing by one organization seems to be different from that of another organization. The example shown in Figure I-12 demonstrates this problem. The same data have been processed by six different contractors. Note the significant differences in frequency content, S/N ratio, and degree of structural continuity from one section to another. These differences often stem from differences in the choice of parameters and the detailed aspects of implementation of processing algorithms. For example, all the contractors have applied residual statics corrections in generating the sections in Figure I-12. However, the programs each contractor has used to estimate residual statics most likely differ in how they handle the correlation window, select the traces used for crosscorrelation with the pilot trace, and treat the correlation peaks statistically.
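
The crosscorrelation step at the heart of residual statics estimation can be sketched as follows. The shifts, wavelet, and search window are hypothetical, and the pilot here is simply the first trace; the choice of pilot, correlation window, and peak handling are precisely the implementation details in which contractors differ.

```python
import numpy as np

nt = 300
true_statics = [0, 4, -3, 7]      # residual statics in samples -- hypothetical

# A band-limited reflection event (tapered sine burst)
base = np.zeros(nt)
base[100:160] = np.hanning(60) * np.sin(np.linspace(0.0, 6.0 * np.pi, 60))

# Each trace carries the same event shifted by its residual static
traces = np.array([np.roll(base, s) for s in true_statics])

# Pilot trace: here simply the first trace; in practice a stacked
# model trace is commonly used as the pilot
pilot = traces[0]

# Estimate each trace's static as the lag of peak crosscorrelation
# with the pilot, searched over +/- 20 samples
max_lag = 20
lags = np.arange(-max_lag, max_lag + 1)
estimated = []
for tr in traces:
    cc = [float(np.dot(np.roll(tr, -l), pilot)) for l in lags]
    estimated.append(int(lags[int(np.argmax(cc))]))
```

On this noise-free sketch the estimated lags recover the shifts exactly; with real data, noise and window choices perturb the correlation peaks, which is one source of the contractor-to-contractor differences noted above.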

One other aspect of seismic data processing is the generation of artifacts while trying to enhance the signal. A good seismic data analysis program not only performs the task for which it is written, but also generates minimal numerical artifacts. One feature that distinguishes a production program from a research program, which is aimed only at testing whether an idea works, is the refinement of the algorithm to minimize artifacts. Processing can be hazardous if artifacts overpower the intended action of the program.

The ability of the seismic data analyst invariably is as important as the effectiveness of the algorithms in determining the quality of the final product of data processing. There are many examples of good processing done with mediocre software, and also examples of poor processing done with good software. The example shown in Figure I-12 clearly demonstrates how implementational differences in processing algorithms and differences in the analysts' skills can influence the results of processing.

External links

  • Cambridge University - [1]
  • Schlumberger - [2]
  • U.S. Geological Survey - [3]
  • OpenCourseWare - [4]
  • The University of Arizona - [5]