Basic data processing sequence

From SEG Wiki

Revision as of 14:08, 9 July 2014

Seismic Data Analysis
Series: Investigations in Geophysics
Author: Öz Yilmaz
DOI: http://dx.doi.org/10.1190/1.9781560801580
ISBN: 978-1-56080-094-1
Store: SEG Online Store

Since the introduction of digital recording, a routine sequence in seismic data processing has evolved. This basic sequence now is described to gain an overall understanding of each step. There are three primary steps in processing seismic data — deconvolution, stacking, and migration, in their usual order of application. Figure 1.5-1 represents the seismic data volume in processing coordinates — midpoint, offset, and time. Deconvolution acts along the time axis. It removes the basic seismic wavelet (the source time function modified by various effects of the earth and recording system) from the recorded seismic trace and thereby increases temporal resolution. Deconvolution achieves this goal by compressing the wavelet. Stacking also is a process of compression (velocity analysis and statics corrections). In particular, the data volume in Figure 1.5-1 is reduced to a plane of midpoint-time at zero offset (the frontal face of the prism) first by applying normal moveout correction to traces from each CMP gather (velocity analysis and statics corrections), then by summing them along the offset axis. The result is a stacked section. (The terms stacked section, CMP stack, and stack often are used synonymously.) Finally, migration commonly is applied to stacked data. It is a process that collapses diffractions and maps dipping events on a stacked section to their supposedly true subsurface locations. In this respect, migration is a spatial deconvolution process that improves spatial resolution.

Figure 1.5-1  Seismic data volume represented in processing coordinates — midpoint-offset-time. Deconvolution acts on the data along the time axis and increases temporal resolution. Stacking compresses the data volume in the offset direction and yields the plane of stacked section (the frontal face of the prism). Migration then moves dipping events to their true subsurface positions and collapses diffractions, and thus increases lateral resolution.

All other processing techniques may be considered secondary in that they help improve the effectiveness of the primary processes. For example, dip filtering may need to be applied before deconvolution to remove coherent noise so that the autocorrelation estimate is based on reflection energy that is free from such noise. Wide band-pass filtering also may be needed to remove very low- and high-frequency noise. Before deconvolution, correction for geometric spreading is necessary to compensate for the loss of amplitude caused by wavefront divergence. Velocity analysis, which is an essential step for stacking, is improved by multiple attenuation and residual statics corrections.

Many of the secondary processes are designed to make data compatible with the assumptions of the three primary processes. Deconvolution assumes a stationary, vertically incident, minimum-phase source wavelet and white reflectivity series that is free of noise. Stacking assumes hyperbolic moveout, while migration is based on a zero-offset (primaries only) wavefield assumption. A pessimist could claim that none of these assumptions is valid. However, when applied to field data, these techniques do provide results that are close to the true subsurface image. This is because these three processes are robust and their performance is not very sensitive to the underlying assumptions in their theoretical development.

Keep in mind that the success of a process depends not only on the proper choice of parameters pertinent to that particular process, but also on the effectiveness of the previous processing steps.

We shall use a 2-D seismic line from the Caspian Sea to demonstrate the basic processing sequence. Table 1-14 provides the processing parameters for the line. The water depth at one end of the line is approximately 750 m and decreases along the line traverse to approximately 200 m at the other end.


Preprocessing

Field data are recorded in a multiplexed mode using a certain type of format. The data first are demultiplexed as described in Figure 1.5-2. Mathematically, demultiplexing is seen as transposing a big matrix so that the columns of the resulting matrix can be read as seismic traces recorded at different offsets with a common shot point. At this stage, the data are converted to a convenient format that is used throughout processing. This format is determined by the type of processing system and the individual company. A common format used in the seismic industry for data exchange is SEG-Y, established by the Society of Exploration Geophysicists.
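The matrix-transpose view of demultiplexing can be sketched in a few lines of numpy. The record dimensions below are illustrative, and the random samples stand in for recorded amplitudes:

```python
import numpy as np

# Multiplexed recording: one row per time sample, one column per channel
# (all channels at t0, then all channels at t1, and so on).
n_samples, n_channels = 1500, 180       # hypothetical record size
rng = np.random.default_rng(0)
multiplexed = rng.standard_normal((n_samples, n_channels))

# Demultiplexing is a matrix transpose: each row of the result is one
# seismic trace, i.e., all the time samples recorded by a single channel.
traces = multiplexed.T
```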

Table 1-14. Processing parameters for the Caspian line used to describe the basic processing sequence in this section.
  Shot interval              25 m
  Group interval             25 m
  Number of receiver groups  180
  Minimum offset             175 m
  Maximum offset             4,650 m
  CMP interval               12.5 m
  Fold of coverage           90
  Number of CMPs             6,212
  Line length                77.64 km
  Sampling interval          4 ms
  Maximum time               8,000 ms
  Data volume                4.5 gigabytes
Figure 1.5-2  Seismic data are recorded in rows of samples — samples at the same time at consecutive channels. Demultiplexing involves sorting the data into columns of samples — all the time samples in one channel followed by those in the next channel, and so on.

Figure 1.5-3 shows selected shot records along the Caspian line under consideration. Note the strong amplitudes at the early part and the relatively weaker energy at the deeper part of the records. Such decay in amplitude primarily is caused by wavefront divergence. The dispersive nature of the guided waves resulting from normal-mode propagation within the water layer appears to vary from record to record. This results from a combination of varying water depth, depth of the source array, and water-bottom conditions (Section F.1).

Preprocessing also involves trace editing. Noisy traces, traces with transient glitches (see Figure 1.3-40), or monofrequency signals (see Figure 1.3-3) are deleted; polarity reversals (see Figure 1.3-2) are corrected. In case of very shallow marine data, guided waves are muted since they travel horizontally within the water layer and do not contain reflections from the substratum.

As seen in Figure 1.5-3, most marine data are contaminated by swell noise and cable noise. These types of noise carry very low-frequency energy but can be high in amplitude. They can be recognized by their distinctive linear pattern and vertical streaks. The swell noise and cable noise are removed from shot records by low-cut filtering as shown in Figure 1.5-4. Attenuation of coherent linear noise associated with side scatterers and ground roll may require techniques based on dip filtering (noise and multiple attenuation).

Following the trace editing and prefiltering, a gain recovery function is applied to the data to correct for the amplitude effects of spherical wavefront divergence. This amounts to applying a geometric spreading function, which depends on travel time (gain applications). Optionally, this amplitude correction is made dependent on a spatially averaged velocity function, which is associated with primary reflections in a particular survey area. Additionally, an exponential gain function may be used to compensate for attenuation losses.

The data in Figure 1.5-5 have been corrected for geometric spreading using a t2 scaling function. While primary reflection amplitudes are corrected for wavefront divergence, energy associated with multiple reflections, coherent linear noise generated by water-bottom point scatterers and the recording cable, and random noise also is inevitably boosted by geometric spreading correction.
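The t2 scaling used here amounts to multiplying each sample by the square of its two-way traveltime; a minimal numpy sketch (the function name is illustrative, and the velocity-dependent variant mentioned above is omitted):

```python
import numpy as np

def spreading_gain(trace, dt, power=2.0):
    """Correct for wavefront divergence by scaling each sample by t**power.

    power=2.0 reproduces the t^2 scaling applied to the data in Figure 1.5-5."""
    t = np.arange(len(trace)) * dt      # two-way traveltime of each sample
    return trace * t**power
```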

Finally, field geometry is merged with the seismic data. This precedes any gain correction that is offset-dependent. Based on survey information for land data or navigation information for marine data, coordinates of shot and receiver locations for all traces are stored on trace headers. Changes in shot and receiver locations are handled properly based on the information available in the observer’s log. Many types of processing problems arise from setting up the field geometry incorrectly. As a result, the quality of a stacked section can be degraded severely.

For land data, elevation statics are applied at this stage to reduce traveltimes to a common datum level. This level may be flat or vary (floating datum) along the line. Reduction of traveltimes to a datum usually requires correction for the near-surface weathering layer in addition to differences in elevation of source and receiver stations. Estimation and correction for the near-surface effects usually are performed using refracted arrivals associated with the base of the weathering layer (refraction statics corrections).


Deconvolution

Typically, prestack deconvolution is aimed at improving temporal resolution by compressing the effective source wavelet contained in the seismic trace to a spike (spiking deconvolution). Predictive deconvolution (optimum Wiener filters and predictive deconvolution in practice) with a prediction lag (commonly termed gap) that is equal to the first or second zero crossing of the autocorrelation function also is used commonly. Although deconvolution usually is applied to prestack data trace by trace, it is not uncommon to design a single deconvolution operator and apply it to all the traces on a shot record. Deconvolution techniques used in conventional processing are based on optimum Wiener filtering.
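A minimal numpy sketch of spiking deconvolution by optimum Wiener filtering: the operator is obtained from the trace autocorrelation by solving the Toeplitz normal equations for a desired spike output. The operator length and prewhitening values are illustrative, and a production implementation would use the Levinson recursion rather than a general solver:

```python
import numpy as np

def spiking_decon(trace, n_op=40, prewhitening=0.001):
    """Design and apply a least-squares inverse (spiking) filter."""
    full = np.correlate(trace, trace, mode="full")          # zero lag at index len-1
    r = full[len(trace) - 1 : len(trace) - 1 + n_op].copy() # one-sided autocorrelation
    r[0] *= 1.0 + prewhitening                              # stabilize the inversion
    # Toeplitz autocorrelation matrix R[i, j] = r[|i - j|]
    R = r[np.abs(np.subtract.outer(np.arange(n_op), np.arange(n_op)))]
    rhs = np.zeros(n_op)
    rhs[0] = 1.0                                            # desired output: a spike
    f = np.linalg.solve(R, rhs)                             # the spiking operator
    return np.convolve(trace, f)[: len(trace)]
```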

Figure 1.5-6 shows the common-shot gathers after spiking deconvolution. By examining some of the individual reflections and comparing them with those in Figure 1.5-5, note how the wavelet associated with the significant reflections is compressed and reverberatory energy that trails behind each reflection is largely attenuated by deconvolution. Because both low- and high-frequency noise and signal are boosted, the data often need filtering with a wide band-pass filter after deconvolution. In addition, some kind of trace balancing (gain applications) often is applied after deconvolution to bring the data to a common root-mean-squared (rms) level (Figure 1.5-7).

CMP sorting

Seismic data acquisition with multifold coverage is done in shot-receiver (s, g) coordinates. Figure 1.5-8a is a schematic depiction of the recording geometry and ray paths associated with a flat reflector. Seismic data processing, on the other hand, conventionally is done in midpoint-offset (y, h) coordinates. The required coordinate transformation is achieved by sorting the data into CMP gathers. Based on the field geometry information, each individual trace is assigned to the midpoint between the shot and receiver locations associated with that trace. Those traces with the same midpoint location are grouped together, making up a CMP gather. Albeit incorrectly, the terms common depth point (CDP) and common midpoint (CMP) often are used interchangeably.

Figure 1.5-8b depicts the geometry of a CMP gather and raypaths associated with a flat reflector. Note that a CDP gather is equivalent to a CMP gather only when reflectors are horizontal and velocities do not vary horizontally. However, when there are dipping reflectors in the subsurface, these two gathers are not equivalent and only the term CMP gather should be used. Selected CMP gathers obtained from sorting the deconvolved shot gathers (Figure 1.5-7) are shown in Figure 1.5-9.

Figure 1.5-10 shows the superposition of shot-receiver (s, g) and midpoint-offset (y, h) coordinates, and raypath geometries for various gather types. The (y, h) coordinates have been rotated 45 degrees relative to the (s, g) coordinates. The dotted area represents the coverage used in recording the seismic profile along the midpoint axis, Oy. Each dot represents a seismic trace with the time axis perpendicular to the plane of paper. The following gather types are identified in Figure 1.5-10:

  1. Common-shot gather (shot record, field record),
  2. Common-receiver gather,
  3. Common-midpoint gather (CMP gather, CDP gather),
  4. Common-offset section (constant-offset section),
  5. CMP-stacked section (zero-offset section).

The recording cable length is FG and the line length is AD. The number of dots along the offset axis (cross-section 3) is equal to the CMP fold. The fold tapers off at the ends of the profile (segments AB and CD). Full-fold coverage along the line is at midpoints over segment BC. The diagram in Figure 1.5-10 is known as a stacking chart and is useful when setting up the geometry of a line for preprocessing. If there is a missing shot or a bad receiver, the affected midpoints are identified easily (Exercise 1-15).

For most recording geometries, the fold of coverage nf for CMP stacking is given by

  nf = (ng × Δg) / (2 × Δs),

where Δg and Δs are the receiver-group and shot intervals, respectively, and ng is the number of recording channels. By using this relationship, the following rules can be established:

  1. The fold does not change when alternating traces in each shot record are dropped.
  2. The fold is halved when every other shot record is skipped, whether or not alternating traces in each record are dropped.
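The fold relationship and the two rules above can be checked numerically; the parameters of the Caspian line in Table 1-14 (180 channels, 25-m group and shot intervals) reproduce the 90-fold coverage:

```python
def cmp_fold(n_channels, group_interval, shot_interval):
    """Fold of coverage: nf = (ng * group_interval) / (2 * shot_interval)."""
    return (n_channels * group_interval) / (2.0 * shot_interval)
```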

Velocity analysis

In addition to providing an improved signal-to-noise ratio, multifold coverage with nonzero-offset recording yields velocity information about the subsurface. Velocity analysis is performed on selected CMP gathers or groups of gathers. The output from one type of velocity analysis is a table of numbers as a function of velocity versus two-way zero-offset time (velocity spectrum). These numbers represent some measure of signal coherency along the hyperbolic trajectories governed by velocity, offset, and traveltime.

Figure 1.5-11 shows the velocity spectra derived from the CMP gathers as in Figure 1.5-9. The horizontal axis in each spectrum represents the scanned normal-moveout velocity with a range of 1000 to 5000 m/s, and the vertical axis represents the two-way zero-offset time from 0 to 8 s. Red indicates the maximum coherency measure. The curve in each spectrum represents the velocity function based on the picked maximum coherency values associated with primary reflections. The pairs of numbers along each curve denote the time-velocity values for each pick. Velocity-time pairs are picked from these spectra based on maximum coherency peaks to form velocity functions at analysis locations.

The velocity functions picked at analysis locations then are spatially interpolated between the analysis locations to create a velocity field as shown in Figure 1.5-12. Red in the shallow portion and blue in the deep portion of the section correspond to low and high velocities, respectively. This velocity field is used to supply a velocity function for each CMP gather along the profile.

In areas with complex structure, velocity spectra often fail to provide sufficient accuracy in velocity picks. When this is the case, the data are stacked with a range of constant velocities, and the constant-velocity stacks themselves are used in picking velocities.
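A velocity spectrum of the kind described above can be sketched by scanning (t0, v) pairs and measuring semblance along the corresponding hyperbolas. The version below evaluates a single sample per trajectory for brevity; practical implementations sum over a small time gate and are heavily optimized. All names are illustrative:

```python
import numpy as np

def semblance(gather, offsets, dt, t0_vals, v_vals):
    """Scan (t0, v) pairs and measure coherency along hyperbolic trajectories.

    gather: (n_traces, n_samples) CMP gather before moveout correction."""
    n_traces, n_samples = gather.shape
    S = np.zeros((len(t0_vals), len(v_vals)))
    for i, t0 in enumerate(t0_vals):
        for j, v in enumerate(v_vals):
            # two-way traveltime along the hyperbola t(x) = sqrt(t0^2 + (x/v)^2)
            t = np.sqrt(t0**2 + (offsets / v) ** 2)
            idx = np.round(t / dt).astype(int)
            valid = idx < n_samples
            a = gather[np.arange(n_traces)[valid], idx[valid]]
            if a.size:
                # semblance: stacked energy over total energy, between 0 and 1
                S[i, j] = a.sum() ** 2 / (a.size * (a**2).sum() + 1e-12)
    return S
```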

Normal-moveout correction

The velocity field (Figure 1.5-12) is used in normal moveout (NMO) correction of CMP gathers. Based on the assumption that, in a CMP gather, reflection traveltimes as a function of offset follow hyperbolic trajectories, the process of NMO correction removes the moveout effect on traveltimes. Figure 1.5-13 shows the CMP gathers in Figure 1.5-9 after moveout correction. Note that events are mostly flattened across the offset range — the offset effect has been removed from traveltimes. Traces in each CMP gather are then summed to form a stacked trace at each midpoint location. The stacked section comprises the stacked traces at all midpoint locations along the line traverse.

As a result of moveout correction, traces are stretched in a time-varying manner, causing their frequency content to shift toward the low end of the spectrum. Frequency distortion increases at shallow times and large offsets (Figure 1.5-13). To prevent the degradation of especially shallow events, the amplitudes in the distorted zone are zeroed out (muted) before stacking (Figure 1.5-14).
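The NMO correction and the stretch mute described above can be sketched as follows. This toy version uses nearest-sample lookup rather than proper interpolation, and the stretch criterion (the ratio t/t0) is a simplification; names and the stretch limit are illustrative:

```python
import numpy as np

def nmo_correct(gather, offsets, v_nmo, dt, stretch_limit=1.5):
    """Hyperbolic NMO correction with a simple stretch mute.

    v_nmo: NMO velocity for each output time sample (length n_samples)."""
    n_traces, n_samples = gather.shape
    t0 = np.arange(n_samples) * dt
    out = np.zeros_like(gather)
    for k, x in enumerate(offsets):
        # input time that maps to each output zero-offset time t0
        t = np.sqrt(t0**2 + (x / v_nmo) ** 2)
        idx = np.round(t / dt).astype(int)      # nearest-sample lookup
        valid = idx < n_samples
        out[k, valid] = gather[k, idx[valid]]
        # mute where the moveout stretch t/t0 exceeds the allowed limit
        with np.errstate(divide="ignore", invalid="ignore"):
            stretch = np.where(t0 > 0, t / t0, np.inf)
        out[k, stretch > stretch_limit] = 0.0
    return out
```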

The CMP recording technique, which was invented in the 1950s and published later [1], uses redundant recording to improve the signal-to-noise ratio during stacking. To achieve redundancy, multiple sources per trace ns, multiple receivers per trace nr, and multiple offset coverage of the same subsurface point nf are used in the field. Given the total number of elements in the recording system, N = ns × nr × nf, the signal amplitude-to-rms noise ratio theoretically is improved by a factor of √N. This improvement factor is based on the assumptions that the reflection signal on traces of a CMP gather is identical and the random noise is mutually uncorrelated from trace to trace [2]. Because these assumptions do not strictly hold in practice, the signal-to-noise ratio improvement gained by stacking is somewhat less than √N. Common-midpoint stacking also attenuates coherent noise such as multiples, guided waves, and ground roll. This is because reflected signal and coherent noise usually have different stacking velocities.

In areas with complex overburden structure that gives rise to strong lateral velocity variations, the hyperbolic moveout assumption associated with reflection traveltimes in CMP gathers is no longer valid. As a result, hyperbolic moveout correction and CMP stacking do not always yield a stacked section in which reflections from the underlying strata are faithfully preserved. In such circumstances, imaging in depth and before stack becomes imperative.

Multiple attenuation

Multiple reflections and reverberations are attenuated using techniques based on their periodicity or differences in moveout velocity between multiples and primaries. These techniques are applied to data in various domains, including the CMP domain, to best exploit the periodicity and velocity discrimination criteria (noise and multiple attenuation).

Deconvolution is one method of multiple attenuation that exploits the periodicity criterion. Often, however, the power of conventional deconvolution in attenuating multiples is underestimated. As for the Caspian data example in this section, despite theoretical limitations, deconvolution can remove a significant portion of the energy associated with short-period multiples and reverberations. It also can attenuate long-period multiples if it is applied in data domains in which periodicity is preserved (noise and multiple attenuation).

Dip-moveout correction

The normal-moveout correction in Figure 1.5-14 was applied to the CMP gathers using the velocity field of Figure 1.5-12 that is optimum for flat events. Stacking velocities, however, are dip-dependent. Dip-moveout correction (DMO) is needed to correct for the dip effect on stacking velocities and thus preserve events with conflicting dips during CMP stacking. Dip-moveout correction has been an integral part of a conventional processing sequence for 2-D and 3-D seismic data since 1985.

Dip-moveout correction is applied to data following the normal-moveout correction using flat-event velocities (Figure 1.5-15). This then is followed by inverse moveout correction (Figure 1.5-16) and subsequent velocity analysis at closely spaced intervals. Figure 1.5-17 shows the velocity spectra associated with a subset of the analysis locations which correspond to those of Figure 1.5-11. As for the velocity spectra in Figure 1.5-11, the velocity range is 1000-5000 m/s and the maximum time is 8 s. Also, red indicates the maximum coherency measure.

CMP stacking

A new velocity field as shown in Figure 1.5-18 is derived from the velocity functions picked from the velocity spectra after DMO correction. As for the velocity field in Figure 1.5-12, red in the shallow portion and blue in the deep portion of the section correspond to low and high velocities, respectively. This new velocity field is used to apply NMO correction to the CMP gathers (Figure 1.5-19). Finally, a CMP stack is obtained (Figure 1.5-20) by summing over the offset axis. The stack is the frontal face of the data volume shown in Figure 1.5-1.

Poststack processing

A typical poststack processing sequence includes the following steps:

  1. Deconvolution after stack (field data examples) is usually applied to restore high frequencies attenuated by CMP stacking. It also is often effective in suppressing reverberations and short-period multiples. Figure 1.5-21 shows the CMP stack as in Figure 1.5-20 after spiking deconvolution.
  2. Although not included in the processing sequence for the Caspian data example in this section, time-variant spectral whitening (the problem of nonstationarity) often is used to further flatten the spectrum and to account for the time-variant character of the source waveform.
  3. Time-variant band-pass filtering (1-D Fourier transform) is then used to remove noise at the high- and low-frequency end of the signal spectrum (Figure 1.5-22).
  4. The basic processing sequence sometimes includes a step for attenuation of random noise uncorrelated from trace to trace (noise and multiple attenuation).
  5. Finally, some type of display gain is applied to the stacked data (Figure 1.5-23). For true amplitude preservation, time-variant scaling of stacked amplitudes is avoided; instead, a relative amplitude compensation function that is constant from trace to trace is applied. This is a slow time-varying gain function that amplifies weak late reflections without destroying the amplitude relationships from trace to trace that may be caused by subsurface reflectivity.
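A crude zero-phase band-pass filter of the kind used in step 3 can be sketched in the frequency domain. This brick-wall version is for illustration only; practical filters taper the band edges to avoid ringing, and time-variant filtering would apply different passbands in overlapping time gates:

```python
import numpy as np

def bandpass(trace, dt, f_low, f_high):
    """Zero-phase band-pass filter applied in the frequency domain."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spec[(freqs < f_low) | (freqs > f_high)] = 0.0   # brick-wall passband
    return np.fft.irfft(spec, n=len(trace))
```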


Migration

Dipping events are then moved to their supposedly true subsurface positions, and diffractions are collapsed by migrating the stacked section prior to amplitude scaling (migration). Figure 1.5-24 shows the CMP stack as in Figure 1.5-22 after migration. As for the unmigrated stack, the migrated section also is displayed with the scaled amplitudes (Figure 1.5-25). Although the output of migration is intended to represent the geological cross-section along the line traverse, it often is displayed in time as for the input stacked section. Provided that lateral velocity variations are mild to moderate, time migration often is acceptable; otherwise, depth migration is imperative (migration).

The structural highs below midpoints 4200, 6800, and 8200 in Figure 1.5-25 are associated with mud diapirism which is prominent in the Caspian basin. Structural complexity caused by faulting and folding generally introduces problems in stacking and imaging the subsurface in three respects:

  1. Steeply dipping reflections associated with fault planes and salt flanks often conflict during stacking with gently dipping or near-flat reflections associated with the less disturbed strata. The remedy for this problem is prestack time migration, for which the robust alternative is dip-moveout correction combined with poststack time migration.
  2. Nonhyperbolic moveout caused by strong lateral velocity variations associated with complex overburden structures involving salt tectonics and overthrust tectonics yields traveltime and amplitude distortions during stacking based on the hyperbolic moveout assumption. The remedy for this problem is prestack depth migration.
  3. Either of the two cases described in (1) and (2) often manifests itself as a 3-D problem in nature. The remedy for the 3-D effects, of course, is 3-D migration.

The migrated section in Figure 1.5-25 must be evaluated within the above limitations in stacking and imaging the subsurface.

Residual statics corrections

There is one additional step in conventional processing of land and shallow-water seismic data before stacking — residual statics corrections. From the NMO-corrected gathers in Figure 1.5-26a, note that the events in CMP 216 are not as flat as they are in the other gathers. The moveout in CMP gathers does not always conform to a perfect hyperbolic trajectory. This often is because of near-surface velocity irregularities that cause a static or dynamic distortion problem. Lateral velocity variations caused by a complex overburden can cause moveouts that could be negative — a reflection event arrives on long-offset traces before it arrives on short-offset traces. Close examination of the velocity spectra indicates that some are easier to pick (Figure 1.5-27a) than others (Figure 1.5-28a). The velocity spectrum that corresponds to CMP 297 has sharp coherency peaks that are associated with a distinctive velocity trend. However, the velocity spectrum that corresponds to CMP 188 does not yield a distinctive trend, thus making it relatively difficult to pick (Figure 1.5-28a).

To improve stacking quality, residual statics corrections are performed on the moveout-corrected CMP gathers. This is done in a surface-consistent manner; that is, time shifts are dependent only on shot and receiver locations, not on the ray paths from shots to receivers. The estimated residual corrections are applied to the original CMP gathers with no NMO correction. Velocity analyses then are often repeated to improve the velocity picks (Figures 1.5-27b and 1.5-28b). With the improved velocity field, the CMP gathers are NMO-corrected (Figure 1.5-26b). Finally, the gathers are stacked as shown in Figure 1.5-29b. For comparison, the stack without the residual statics corrections is shown in Figure 1.5-29a. Reflection continuity over the problem zone between midpoints 53245 has been improved.
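At the heart of residual statics estimation is picking the time shift that best aligns a trace with a pilot (for example, a preliminary stacked trace); a surface-consistent solution then decomposes the picked shifts into shot and receiver terms, which is not shown here. A toy sketch of the picking step, using circular shifts for brevity (edge wrap-around is ignored):

```python
import numpy as np

def residual_shift(trace, pilot, dt, max_shift=0.04):
    """Pick a residual static as the lag of peak cross-correlation with a pilot."""
    n = int(round(max_shift / dt))
    lags = np.arange(-n, n + 1)
    # cross-correlation evaluated by circularly shifting the pilot
    cc = [float(np.dot(np.roll(pilot, lag), trace)) for lag in lags]
    return lags[int(np.argmax(cc))] * dt
```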

Quality control in processing

The conventional processing sequence is outlined in Figure 1.5-30. Each of the processes described above is presented in detail in subsequent chapters. In a seismic data processing sequence, the step that is most vulnerable to human errors is defining the geometry for the survey under consideration and merging it with the seismic data. This involves correctly assigning sources and receivers to their respective surface locations and correctly specifying the source-receiver separation and azimuth for each recorded trace in the survey.

To demonstrate just how important it is to correctly specify the geometry of a survey, consider the impact of a deliberately incorrect geometry assignment on velocity estimation and normal-moveout correction. Figure 1.5-31 shows CMP gathers before and after moveout correction and velocity spectra at three analysis locations along a seismic traverse. The case shown in Figure 1.5-31a does not appear to exhibit any abnormal moveout behavior. The velocity spectrum yields a fairly unambiguous primary velocity function, and primary events on the moveout-corrected gather are nearly flat. The case shown in Figure 1.5-31b, however, begins to show signs of something being wrong with the data. Although the velocity spectrum, again, yields a fairly unambiguous primary velocity function, note that the events associated with the major primary reflections in the CMP gather do not submit themselves to flattening properly after normal-moveout correction. Such behavior in the moveout may be attributed to some physical phenomenon, for instance, anisotropy or nonhyperbolic moveout caused by lateral velocity variations. Nevertheless, it is caused in this case by incorrect geometry specification related to wrong offset assignment to the traces in the gather. The abnormal moveout behavior is strikingly more obvious in the case shown in Figure 1.5-31c. Note the ambiguous semblance peaks in the velocity spectrum, which cause failure in normal-moveout correction to properly flatten the primary events in the gather. Note the differences in the degree of abnormal behavior in event moveout from one location to another (Figures 1.5-31a,b,c); the simpler and the flatter the subsurface structure, the less obvious the adverse impact of incorrect geometry on the moveout.

Figure 1.5-30  A conventional processing flowchart.

The care required for correct assignment of the geometry of a survey, of course, does not diminish the care required for proper specification of the parameters associated with any other step in a processing sequence. Specifically, each step must be executed with the necessary quality control. Displays of appropriate data attributes, such as the amplitude spectrum and autocorrelogram, help the analyst understand the signal and noise characteristics of the recorded data and the effect of a step included in the processing sequence on the data, thus facilitating appropriate specification of the parameters associated with that step. Figures 1.5-32 through 1.5-41 show quality control panels that are examples of recommended standard displays for parameter selection at various stages in the analysis. All displays include the amplitude spectrum on the top row, averaged over the gather if it is a prestack test panel or over the portion of the stack if it is a poststack test panel, and the autocorrelogram of the respective data type on the bottom row.
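The two QC attributes named above, the gather-averaged amplitude spectrum and the per-trace autocorrelogram, can be computed as follows (a numpy sketch; names and the display lag range are illustrative):

```python
import numpy as np

def qc_attributes(gather, dt, max_lag=0.3):
    """Gather-averaged amplitude spectrum and per-trace autocorrelogram."""
    n_traces, n_samples = gather.shape
    # amplitude spectrum of each trace, averaged over the gather
    spec = np.abs(np.fft.rfft(gather, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    # one-sided autocorrelation of each trace out to max_lag seconds
    n_lags = int(round(max_lag / dt))
    acorr = np.empty((n_traces, n_lags))
    for k in range(n_traces):
        full = np.correlate(gather[k], gather[k], mode="full")
        acorr[k] = full[n_samples - 1 : n_samples - 1 + n_lags]
    acorr /= np.abs(acorr).max() + 1e-12   # normalize for display
    return freqs, spec, acorr
```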

Figure 1.5-32 is the quality control panel for prestack signal processing. Shown from left to right are: (a) a CMP gather which exhibits strong, low-frequency swell noise; (b) low-cut filtering to remove the swell noise; (c) t2 scaling to correct for geometric spreading (gain applications); (d) prestack spiking deconvolution (optimum Wiener filters, predictive deconvolution in practice, and field data examples); and (e) wide bandpass filtering to remove the high-frequency noise boosted by spiking deconvolution. Note that the autocorrelogram better exhibits over the entire cable length the characteristics of the source waveform and reverberations and multiples after t2 scaling. Also note that spiking deconvolution has removed much of the energy associated with the reverberations and multiples. The broadening and flattening of the amplitude spectrum after spiking deconvolution are indicative of the increase of vertical resolution.

Figure 1.5-33 shows the spectra which are associated with the gathers from left to right in Figure 1.5-32. The horizontal axis is frequency in Hz and the vertical axis is two-way traveltime in s. Note from (a) that the swell noise at very low frequencies occupies the spectrum along the entire time axis. Note also that the energy in the gather is largely confined to shallow times within a bandlimited region of the spectrum. Following the low-cut filtering (b), note the elimination of the swell noise energy. The t2 scaling (c) has restored the energy at late times, and deconvolution (d) has broadened the spectrum. Following the wide bandpass filtering (e), note that the signal bandwidth has been preserved [compare with (a)], and the spectrum has been flattened within the passband.

Figures 1.5-34 and 1.5-35 show two standard test panels for determining prestack deconvolution parameters. With the help of the amplitude spectrum and autocorrelogram, the analyst chooses an optimum operator length and prediction lag. Figure 1.5-34 shows the test panel for prestack spiking deconvolution. Shown from left to right are: the input gather after low-cut filtering and t2 scaling as in Figure 1.5-32, followed by deconvolution using operator lengths of 120 ms, 160 ms, 240 ms, 360 ms, and 480 ms. Note that deconvolution using an operator length of 480 ms best flattens the spectrum within the signal passband. Failure of deconvolution in flattening the spectrum at very high frequencies is most likely due to nonstationarity of the signal. This effect usually is accounted for by time-variant spectral whitening after stack. Since autocorrelation of input data is used in designing a deconvolution operator, it is appropriate to examine the autocorrelation before and after deconvolution. Note from the autocorrelograms in Figure 1.5-34 that operator length dictates the ability of deconvolution in removing reverberations and short-period multiples.

Figure 1.5-35 shows the test panel for prestack predictive deconvolution. Shown from left to right are: the input gather after low-cut filtering and t² scaling as in Figure 1.5-32, followed by deconvolution using prediction lags of 2 ms (unit prediction lag), 8 ms, 16 ms, 24 ms, and 32 ms, with the same operator length of 480 ms. Note that the unit prediction lag yields a flat spectrum across the passband, while increasing the prediction lag results in departure from a flat spectrum. Prediction lag controls the ability of deconvolution to increase the vertical resolution.
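
A comparable sketch for predictive (gap) deconvolution, where the prediction lag replaces the spike-at-zero-lag objective. Again, this is a simplified single-trace version with `lag` and `nop` in samples, not the exact routine used for the panels:

```python
import numpy as np

def predictive_decon(trace, nop, lag, prewhitening=0.1):
    """Predictive (gap) deconvolution of a single trace.

    nop : prediction-filter length in samples
    lag : prediction lag in samples (lag == 1 reduces to spiking decon)
    """
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:][:lag + nop]
    ac[0] *= 1.0 + prewhitening / 100.0
    R = np.array([[ac[abs(i - j)] for j in range(nop)] for i in range(nop)])
    # right-hand side: autocorrelation lags starting at the prediction lag
    g = ac[lag:lag + nop]
    p = np.linalg.solve(R, g)
    # prediction-error filter: pass the first `lag` samples untouched,
    # subtract the predictable (reverberatory) part beyond the lag
    pef = np.concatenate(([1.0], np.zeros(lag - 1), -p))
    return np.convolve(trace, pef)[:len(trace)]
```

On a decaying periodic spike train, a gap shorter than the reverberation period lets the filter predict and subtract the repeats while leaving the primary intact.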

Figures 1.5-36 and 1.5-37 show two standard test panels for determining poststack deconvolution parameters. Note from the average amplitude spectrum of the section on the left-hand side of each test panel that CMP stacking inherently attenuates high frequencies, which need to be restored by poststack deconvolution. Figure 1.5-36 shows the test panel for poststack spiking deconvolution. Shown from left to right are: the input stack, followed by deconvolution using operator lengths of 120 ms, 160 ms, 240 ms, 360 ms, and 480 ms, and high-cut filtering to retain the acceptable signal band and remove the high-frequency noise.

Figure 1.5-37 shows the test panel for poststack predictive deconvolution. Shown from left to right are: the input stack, followed by deconvolution using prediction lags of 2 ms (unit prediction lag), 8 ms, 16 ms, 24 ms, and 32 ms, using the same operator length of 480 ms, and high-cut filtering to retain the acceptable signal band and remove the high-frequency noise. Again, note that the unit prediction lag yields a flat spectrum across the passband, while increasing the prediction lag results in departure from a flat spectrum.

Figure 1.5-38 shows the standard quality control panel for poststack signal processing. Shown from left to right are: a portion of the stacked section with prestack processing as described by Figure 1.5-32; spiking deconvolution (field data examples) to restore the high frequencies attenuated by the stacking process; time-variant spectral whitening to account for nonstationarity and to further flatten the spectrum (all three steps followed by high-cut filtering); bandpass filtering to retain the acceptable signal band and remove the high-frequency noise; and instantaneous AGC scaling and rms-amplitude AGC scaling.
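
The AGC step at the end of this flow can be sketched as a sliding-window gain. This is a simplified instantaneous-AGC version; the window length and the unit target level are illustrative choices:

```python
import numpy as np

def agc(trace, window):
    """Instantaneous AGC: scale each sample by the inverse of the mean
    absolute amplitude in a `window`-sample sliding window around it."""
    kernel = np.ones(window) / window
    mean_abs = np.convolve(np.abs(trace), kernel, mode="same")
    # guard against division by zero in dead portions of the trace
    floor = 1e-6 * (np.max(mean_abs) + 1e-30)
    return trace / np.maximum(mean_abs, floor)
```

The effect is that weak and strong portions of the trace are brought to a comparable amplitude level, at the cost of destroying relative-amplitude information, which is why AGC is applied for display rather than for amplitude-driven analysis.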

Figures 1.5-39 and 1.5-40 show the test panels for defining the parameters for time-variant filtering (the 1-D Fourier transform). A portion of the stacked section is bandpass filtered using a 10-Hz-wide passband that slides from the low-frequency to the high-frequency end of the spectrum. Note that the coherent signal in the high-frequency bands is confined to shallow times. Nevertheless, these filter panels indicate that signal up to 90 Hz is present in the data down to 2.2 s, and signal up to 100 Hz is present down to 1.4 s.
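
Filter panels like these can be generated by sliding a narrow passband across the spectrum. A minimal sketch with SciPy follows; the corner frequencies, the Butterworth filter, its order, and the zero-phase application are illustrative choices, not the specific filter used for the figures:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, f_lo, f_hi, fs, order=4):
    """Zero-phase Butterworth bandpass between f_lo and f_hi (Hz)."""
    b, a = butter(order, [f_lo, f_hi], btype="band", fs=fs)
    return filtfilt(b, a, trace)

def filter_panels(trace, fs, width=10.0, f_max=100.0):
    """Return (f_lo, f_hi, filtered_trace) tuples for a passband of the
    given width sliding from `width` Hz up to f_max."""
    panels = []
    f_lo = width
    while f_lo + width <= f_max:
        panels.append((f_lo, f_lo + width,
                       bandpass(trace, f_lo, f_lo + width, fs)))
        f_lo += width
    return panels
```

Each panel then shows down to what time coherent energy survives in that band, which is how the 90-Hz/2.2-s and 100-Hz/1.4-s limits quoted above would be read off.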

Finally, Figure 1.5-41 shows the test panel for poststack noise attenuation using f – x deconvolution (linear uncorrelated noise attenuation). A parameter that needs to be tested for f – x deconvolution is the percent add-back of the estimated noise, which circumvents the smeared appearance of events after noise attenuation. Shown from left to right are: a portion of the stacked section with poststack deconvolution, time-variant spectral whitening, and bandpass filtering; and noise attenuation with 80, 60, 40, 20, and 0 percent add-back. Note that, without any add-back, the amplitude spectrum of the section after noise attenuation indicates attenuation of high-frequency energy that may be attributed to random noise uncorrelated from trace to trace.
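
A bare-bones sketch of f – x deconvolution with the add-back parameter follows. This is a simplified complex linear-prediction version; the prediction-filter length `npred`, the forward/backward averaging, and the pass-through edge handling are illustrative choices:

```python
import numpy as np

def _predict(x, p):
    """Least-squares one-sided complex linear prediction of each element
    of x from its p predecessors; edge samples are passed through."""
    n = len(x)
    X = np.column_stack([x[p - j - 1: n - j - 1] for j in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    pred = x.copy()
    pred[p:] = X @ a
    return pred

def fx_decon(data, npred=4, addback=0.0):
    """f-x deconvolution of a 2-D gather (traces x time samples).

    Laterally predictable (coherent) energy is kept; `addback` is the
    fraction of the estimated noise added back to the result.
    """
    ntraces, nsamples = data.shape
    D = np.fft.rfft(data, axis=1)            # to the f-x domain
    P = np.empty_like(D)
    for k in range(D.shape[1]):
        fwd = _predict(D[:, k], npred)
        bwd = _predict(D[::-1, k], npred)[::-1]
        P[:, k] = 0.5 * (fwd + bwd)          # average the two predictions
    coherent = np.fft.irfft(P, n=nsamples, axis=1)
    return coherent + addback * (data - coherent)
```

With `addback=1.0` the input is returned unchanged, and decreasing the add-back toward zero progressively removes the unpredictable (trace-to-trace uncorrelated) component, mirroring the 80-to-0-percent panels in the figure.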

The test panels for quality control in processing of seismic data are not limited to those presented in Figures 1.5-32 through 1.5-41. Additional panels in an appropriate and convenient format may be constructed to test parameters associated with refraction and residual statics corrections, multiple attenuation, dip-moveout correction, and migration. Powerful interactive tools, including 3-D visualization techniques, facilitate efficient parameter testing and quality control in processing.

Parsimony in processing

The primary objective in data processing is to enhance the signal-to-noise ratio while preserving the useful signal bandwidth associated with the recorded data at all stages in the analysis. The principle of parsimony in processing is the basis for achieving this objective. Specifically, a processing sequence should be optimally lean and should not include any step that may do more harm than the good done by its intended action. A further compelling reason for parsimony is preserving relative amplitudes for amplitude-driven exploration objectives associated with stratigraphic plays.

Figures 1.5-42 through 1.5-53 show the step-by-step appearance of a portion of a stacked section based on a very basic processing sequence intended to minimize amplitude distortions while largely attenuating reverberations, multiples, and random noise and ultimately increasing vertical and lateral resolution.

The stacked sections in Figures 1.5-42 through 1.5-47 were created based on the following prestack processing sequence:

  (a) Figure 1.5-42: stack based on unprocessed data that contain low-frequency swell noise.
  (b) Figure 1.5-43: stack using CMP gathers with low-cut filtering applied to remove the swell noise.
  (c) Figure 1.5-44: stack as in (b) with the additional step for t² scaling to compensate for wavefront divergence; note the restoration of amplitudes at late times.
  (d) Figure 1.5-45: stack as in (c) with the additional step for prestack spiking deconvolution; note the attenuation of reverberations.
  (e) Figure 1.5-46: stack as in (d) with the additional step for wide bandpass filtering to improve velocity analysis.
  (f) Figure 1.5-47: stack as in (e) with the additional step for dip-moveout correction; note the preservation of diffractions that interfere with the nearly flat reflections.

The stacked sections in Figures 1.5-48 through 1.5-53 were created based on the following poststack processing sequence:

  (a) Figure 1.5-48: stack as in (f) of the prestack processing sequence described above with the additional step for poststack spiking deconvolution; note the increase in vertical resolution as a result of wavelet compression.
  (b) Figure 1.5-49: stack as in (a) with the additional step for time-variant spectral whitening to account for nonstationarity.
  (c) Figure 1.5-50: stack as in (b) with the additional step for wide bandpass filtering.
  (d) Figure 1.5-51: stack as in (c) with the additional step for AGC scaling.
  (e) Figure 1.5-52: stack as in (d) with the additional step for attenuation of random noise uncorrelated from trace to trace using f – x deconvolution (linear uncorrelated noise attenuation).
  (f) Figure 1.5-53: migrated stack as in (c) with the additional step for AGC scaling.

Scan through the stacked sections starting with Figure 1.5-42 and observe the effect of each processing step on the result. Additionally, examine the spectra labeled (a) through (f) in Figure 1.5-54, which correspond to the stacked sections in Figures 1.5-42 through 1.5-47 of the prestack sequence, respectively, and the spectra labeled (a) through (f) in Figure 1.5-55, which correspond to the stacked sections in Figures 1.5-48 through 1.5-53 of the poststack sequence, respectively. Observe the change in spectral content induced by each process, and note that the ultimate objectives in processing are to preserve the bandwidth of the recorded signal and to flatten the spectrum within the signal passband so as to attain the maximum possible vertical and lateral resolution.

See also


  1. Mayne, W. H., 1962, Common-reflection-point horizontal data stacking techniques: Geophysics, 27, 927–938.
  2. Sengbush, R. L., 1983, Seismic exploration methods: Internat. Human Res. Dev. Corp.
  3. Claerbout, J. F., 1976, Fundamentals of geophysical data processing: McGraw-Hill Book Co.
