Introduction to fundamentals of signal processing

From SEG Wiki
Seismic Data Analysis
Series: Investigations in Geophysics
Author: Öz Yilmaz
DOI: http://dx.doi.org/10.1190/1.9781560801580
ISBN: 978-1-56080-094-1
Store: SEG Online Store


The Fourier transform is fundamental to seismic data analysis; it applies to almost all stages of processing. A seismic trace represents a seismic wavefield recorded at a receiver location. The digital form of a seismic trace is a time series, which can be completely described as a discrete sum of sinusoids, each with a unique peak amplitude, frequency, and phase-lag (relative alignment). The analysis of a seismic trace into its sinusoidal components is achieved by the forward Fourier transform. Conversely, the synthesis of a seismic trace from its individual sinusoidal components is achieved by the inverse Fourier transform. A brief mathematical review of the Fourier transform is provided.
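This analysis/synthesis pair can be sketched with NumPy's FFT. The sample interval, trace length, and the two component sinusoids below are assumed purely for illustration; they are not values from the text:

```python
import numpy as np

# Assumed illustration values: 4-ms sampling, 1-s trace.
dt = 0.004                     # sample interval (s)
n = 250                        # samples; frequency step = 1/(n*dt) = 1 Hz
t = np.arange(n) * dt

# A "trace" built from two sinusoids with distinct peak amplitudes,
# frequencies, and phase-lags.
trace = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 30 * t + 0.7)

# Forward Fourier transform: analysis into sinusoidal components.
spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(n, d=dt)          # 0, 1, 2, ... Hz
amps = 2.0 / n * np.abs(spectrum)         # peak amplitude of each component

# Inverse Fourier transform: synthesis back into the time series.
reconstructed = np.fft.irfft(spectrum, n=n)
```

Because both frequencies fall exactly on the 1-Hz frequency grid, `amps[10]` and `amps[30]` recover the peak amplitudes 2.0 and 0.5, and the forward/inverse round trip reproduces the trace to machine precision.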

Seismic data processing algorithms often can be described or implemented more simply in the frequency domain than in the time domain. The one-dimensional (1-D) Fourier transform is introduced, and some basic properties of time series in both the time and frequency domains are described. Many processing techniques, single- and multichannel alike, involve an operand (the seismic trace) and an operator (a filter). A simple application of Fourier analysis is the design of zero-phase frequency filters, typically in the form of band-pass filtering.
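A minimal sketch of such a zero-phase band-pass filter follows. The design is an assumed, simple one: the amplitude spectrum is scaled by a real, non-negative, cosine-tapered window, so the phase spectrum is untouched (hence zero phase). Corner frequencies and taper width are illustrative:

```python
import numpy as np

def bandpass_zero_phase(trace, dt, f_lo, f_hi, taper=5.0):
    """Zero-phase band-pass: scale the amplitude spectrum with a real,
    non-negative window, leaving the phase spectrum untouched."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    w = np.zeros_like(freqs)
    w[(freqs >= f_lo) & (freqs <= f_hi)] = 1.0
    # Cosine ramps of width `taper` Hz on either side of the pass band.
    lo = (freqs >= f_lo - taper) & (freqs < f_lo)
    hi = (freqs > f_hi) & (freqs <= f_hi + taper)
    w[lo] = 0.5 * (1.0 + np.cos(np.pi * (f_lo - freqs[lo]) / taper))
    w[hi] = 0.5 * (1.0 + np.cos(np.pi * (freqs[hi] - f_hi) / taper))
    return np.fft.irfft(np.fft.rfft(trace) * w, n=n)

# Usage: a 10-Hz signal plus 40-Hz "noise"; a 5-20 Hz pass band keeps
# only the 10-Hz component, with no phase shift.
dt = 0.004
t = np.arange(500) * dt
trace = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
filtered = bandpass_zero_phase(trace, dt, f_lo=5.0, f_hi=20.0)
```

Multiplying by a real window is what makes the filter zero phase: every passed sinusoid keeps its original alignment, only its amplitude changes.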

The two-dimensional (2-D) Fourier transform is a way to decompose a seismic wavefield, such as a common-shot gather, into its plane-wave components, each with a certain frequency and propagating at a certain angle from the vertical. Therefore, the 2-D Fourier transform can describe processes such as migration and frequency-wavenumber (f – k) filtering. Common applications of f – k filtering are the rejection of coherent linear noise by dip filtering, and the attenuation of multiples based on velocity discrimination between primaries and multiples in the f – k domain.
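Dip filtering of this kind can be sketched with a 2-D FFT. The fan-shaped reject zone below, defined by an apparent-velocity cutoff, is an assumed simple design; the cutoff value and gather geometry are illustrative:

```python
import numpy as np

def fk_dip_filter(gather, dt, dx, v_min):
    """Keep plane-wave components whose apparent velocity |f/k| exceeds
    v_min; reject steeply dipping (slow) linear events. gather is an
    (n_t, n_x) array, dt in seconds, dx in metres."""
    nt, nx = gather.shape
    fk = np.fft.fft2(gather)
    f = np.fft.fftfreq(nt, d=dt)[:, None]   # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]   # spatial wavenumber (1/m)
    keep = np.abs(f) >= v_min * np.abs(k)   # |f/k| >= v_min; k = 0 always kept
    return np.fft.ifft2(fk * keep).real

# Usage: a flat (zero-dip) event lives on the k = 0 axis of the f-k
# plane, so it passes through the fan filter unchanged.
dt, dx = 0.004, 25.0
t = np.arange(256) * dt
gather = np.tile(np.sin(2 * np.pi * 15 * t)[:, None], (1, 24))
flat_out = fk_dip_filter(gather, dt, dx, v_min=1500.0)
```

The velocity discrimination mentioned above works the same way: events map to different regions of the f – k plane according to their apparent velocity, so a mask in that plane separates them.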

In the worldwide assortment of shot records, 40 common-shot gathers recorded in different parts of the world with different types of sources and recording instruments are introduced. Various types of seismic energy are described on these shot records: reflections, refractions, coherent noise such as multiples, guided waves, side-scattered energy and ground roll, and ambient random noise.

Seismic data often require application of a gain function, a time-variant scaling of amplitudes, for various reasons. The scaling function commonly is derived from the data. Gain types are discussed in gain applications. At an early stage in processing, gain is applied to the data to correct for wavefront divergence, the decay in amplitudes caused by geometric spreading of seismic waves. Seismic data often are gained for display purposes; for instance, by applying automatic gain control (AGC), which brings up weak reflection zones. However, an AGC-type gain can destroy signal character and must, therefore, be applied with caution.
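A minimal AGC sketch, assuming a simple sliding-window RMS normalisation (the window length is illustrative). It shows both effects described above: weak late arrivals are boosted, and genuine amplitude variation is flattened, which is exactly why AGC must be applied with caution:

```python
import numpy as np

def agc(trace, dt, window_s=0.5, eps=1e-10):
    """Automatic gain control: divide each sample by the RMS amplitude
    in a sliding window centred on it. eps guards against division by
    zero in dead zones."""
    n = len(trace)
    half = max(1, int(round(0.5 * window_s / dt)))
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        rms = np.sqrt(np.mean(trace[lo:hi] ** 2))
        out[i] = trace[i] / (rms + eps)
    return out

# Usage: an exponentially decaying sinusoid (a crude stand-in for
# wavefront divergence) comes out with roughly uniform amplitude.
dt = 0.004
t = np.arange(1000) * dt
decaying = np.exp(-1.5 * t) * np.sin(2 * np.pi * 20 * t)
balanced = agc(decaying, dt)
```

After AGC the envelope is roughly constant along the trace, so relative amplitude information between early and late times is lost.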

Finally, in the basic data processing sequence, a summary of that sequence is presented with field data examples. There are three primary stages in seismic data processing, each aimed at improving seismic resolution, the ability to separate two events that are very close together, either spatially or temporally:

  1. Deconvolution is performed along the time axis to increase temporal resolution by compressing the basic seismic wavelet to approximately a spike and suppressing reverberating wavetrains.
  2. Stacking compresses the offset dimension, thus reducing seismic data volume to the plane of the zero-offset seismic section and increasing the signal-to-noise ratio.
  3. Migration commonly is performed on the stacked section (which is assumed to be equivalent to a zero-offset section) to increase lateral resolution by collapsing diffractions and moving dipping events to their supposedly true subsurface positions.
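As an illustration of the first stage, here is a least-squares (Wiener) spiking-deconvolution sketch. The filter length, prewhitening level, and the toy two-sample minimum-phase wavelet are all assumed for illustration; this is one common formulation, not the text's specific algorithm:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, n_filter=20, prewhite=0.001):
    """Wiener spiking deconvolution: solve the Toeplitz normal equations
    R f = (1, 0, ..., 0) built from the trace autocorrelation, then
    convolve the resulting inverse filter with the trace."""
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:n - 1 + n_filter]
    ac = ac.astype(float)
    ac[0] *= 1.0 + prewhite         # prewhitening stabilises the solve
    rhs = np.zeros(n_filter)
    rhs[0] = 1.0                    # desired output: a zero-lag spike
    f = solve_toeplitz(ac, rhs)     # Levinson-style Toeplitz solve
    return np.convolve(trace, f)[:n]

# Usage: deconvolving a short minimum-phase wavelet compresses it to
# (approximately) a spike at time zero.
wavelet = np.zeros(100)
wavelet[0], wavelet[1] = 1.0, -0.5
spiked = spiking_decon(wavelet)
```

The same operator applied to a reverberating trace compresses the basic wavelet toward a spike, which is the temporal-resolution gain described above.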

Secondary processes are implemented at certain stages to condition the data and improve the performance of deconvolution, stacking, and migration. When coherent noise is dip filtered, for example, deconvolution and velocity analysis may be improved. Residual statics corrections also improve velocity analysis and, hence, the quality of the stacked section.

