# Vertical resolution

Series: Investigations in Geophysics · Öz Yilmaz · http://dx.doi.org/10.1190/1.9781560801580 · ISBN 978-1-56080-094-1

For two reflections, one from the top and one from the bottom of a thin layer, there is a limit on how close they can be, yet still be separable. This limit depends on the thickness of the layer and is the essence of the problem of vertical resolution.

The dominant wavelength of seismic waves is given by

 ${\displaystyle \lambda ={\frac {v}{f}},}$ (1)

where v is velocity and f is the dominant frequency. Seismic wave velocities in the subsurface range between 2000 and 5000 m/s and generally increase with depth. On the other hand, the dominant frequency of the seismic signal typically varies between 50 and 20 Hz and decreases with depth. Therefore, typical seismic wavelengths range from 40 to 250 m and generally increase with depth. Since wavelength determines resolution, deep features must be thicker than shallow features to be resolvable. Figure 11.1-1 is a graph of wavelength as a function of velocity for various values of frequency; given the velocity and dominant frequency, the wavelength is easily determined from this graph.
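As a quick check of equation (1), here is a minimal Python sketch of the wavelength computation for the shallow and deep end-member cases quoted above (the function name is illustrative, not from the text):

```python
def wavelength(velocity_m_s, frequency_hz):
    """Dominant wavelength in meters, from equation (1): lambda = v / f."""
    return velocity_m_s / frequency_hz

# Shallow case: low velocity, high dominant frequency.
print(wavelength(2000, 50))   # 40.0 m
# Deep case: high velocity, low dominant frequency.
print(wavelength(5000, 20))   # 250.0 m
```

These two cases reproduce the 40-to-250-m wavelength range stated in the text.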

**Table 11-1.** Threshold for vertical resolution, λ/4 = v/4f.

| v (m/s) | f (Hz) | λ/4 (m) |
|---------|--------|---------|
| 2000    | 50     | 10      |
| 3000    | 40     | 18      |
| 4000    | 30     | 33      |
| 5000    | 20     | 62      |

The acceptable threshold for vertical resolution generally is a quarter of the dominant wavelength. This criterion is subjective and depends on the noise level in the data. Sometimes the quarter-wavelength criterion is too generous, particularly when the reflection coefficient is small and no reflection event is discernible. Sometimes it is too stringent, particularly when events do exist and their amplitudes can be picked with ease.

Table 11-1 contains the wavelength threshold values for vertical resolution, considering the realistic velocity and frequency ranges. For example, a shallow feature with a 2000-m/s velocity and 50-Hz dominant frequency potentially can be resolved if it is as thin as 10 m. A thinner feature cannot be resolved. Similarly, for a deep feature with a velocity as high as 5000 m/s and dominant frequency as low as 20 Hz, the thickness must be at least 62 m for it to be resolvable.
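The threshold values in Table 11-1 follow directly from λ/4 = v/4f. A short Python sketch that reproduces the table (names are illustrative; values are truncated to whole meters, as in the table):

```python
def quarter_wavelength(velocity_m_s, frequency_hz):
    """Vertical-resolution threshold in meters: lambda/4 = v / (4f)."""
    return velocity_m_s / (4.0 * frequency_hz)

# (v, f) pairs from Table 11-1.
cases = [(2000, 50), (3000, 40), (4000, 30), (5000, 20)]
for v, f in cases:
    # int() truncates, matching the table's whole-meter entries (10, 18, 33, 62).
    print(f"v = {v} m/s, f = {f} Hz  ->  lambda/4 = {int(quarter_wavelength(v, f))} m")
```

Note that the exact values for the middle rows are 18.75 m and 33.3 m; the table simply drops the fractional part.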

It is now appropriate to ask whether a thin stratigraphic unit must be resolved to be mapped. The answer is no. Resolution as defined here and in the geophysical literature implies that reflections from the top and bottom of a thin bed are seen as separate events or wavelet lobes. Under this definition, resolution does not consider amplitude effects. The thickness and areal extent of beds below the resolution limit often can be mapped on the basis of amplitude changes. This amplitude-based analysis can be especially precise when used for mapping gas-generated bright spots in Tertiary rocks. Thus, in many stratigraphic plays, resolution in the strict sense is not an issue. For these plays, detection, not resolution, is the problem.

Vertical resolution is a concern when discontinuities are inferred along a reflection horizon because of faults. Figure 11.1-2 shows a series of faults with vertical throws equal to 1, 1/2, 1/4, 1/8, and 1/16 of the dominant wavelength. When the throw is equal to or greater than one-fourth of the dominant wavelength, the presence of the fault can be inferred easily. A smaller throw perhaps can be inferred by using diffractions from faults along the reflection horizon, provided the noise level in the data is low.

Figure 11.1-1  The relationship between velocity, dominant frequency, and wavelength. Here, wavelength = velocity/frequency. (Adapted from Sheriff, 1976; courtesy American Association of Petroleum Geologists.)

Clearly, the ability to resolve or detect small targets can be increased by increasing the dominant frequency of the stacked data. The dominant frequency of a stacked section from a given area is governed by the physical properties of the subsurface, processing quality, and recording parameters. Since we cannot control the subsurface properties, the high-frequency signal level can be influenced only by the effort put into recording and processing.

The emphasis in recording should be to preserve high frequencies and suppress noise. The sampling rate and antialiasing filters should be adequate to record the desired frequencies. Receiver arrays should be small enough to prevent the significant loss of high-frequency signal because of intragroup moveout and statics. However, the arrays should not be too small, since small arrays are not as effective at suppressing random, high-frequency noise (wind noise) as large arrays. Finally, the source effort should be high enough to provide adequate signal level relative to noise level within the desired frequency band. Unless the signal-to-noise ratio of the field data is above some minimal level, say 0.25, processing algorithms have difficulty in recovering the signal. The signal has to be detectable before it can be enhanced.

The emphasis in processing should be to preserve and display the high-frequency signal present in the input data. Filters with good high-frequency response should be used for interpolation processes such as NMO removal, datum and statics corrections, and multiplex skew corrections. Extra care should be taken to ensure that small-scale residual moveout or statics, which might cause loss of high-frequency signal during stacking, are removed before stack. Nonsurface-consistent alignment programs (often called trim statics programs) sometimes are used for this purpose. Finally, care must be taken to ensure that all the high-frequency signal is displayed on the final stack. Poststack deconvolution is a useful tool for this purpose.

Figure 11.1-2  Faults with different amounts of vertical throws expressed in fractions of the dominant wavelength.