Seismic petrophysics: Part 2


In Part 1 of this tutorial, in the April 2015 issue of TLE, we loaded some logs and used a data framework called Pandas to manage them. We made a lithology-fluid-class (LFC) log and used it to color a crossplot. This month, we take the workflow further with fluid-replacement modeling based on Gassmann's equation. This is just an introduction; see Smith et al. and Wang for comprehensive overviews.[1][2]

Fluid-replacement modeling is used to predict the response of the elastic logs (VP, VS, and density) under different fluid conditions. In that way, we can see, for example, how much a rock would change in terms of velocity (or impedance) if it were filled with gas instead of brine. It also brings all elastic logs to a common fluid "denominator," enabling us to disregard fluid effects and focus only on lithologic variations.

Creating new logs

The inputs to fluid-replacement modeling are K_sat and μ (saturated bulk and shear moduli, which we can obtain from recorded VP, VS, and density logs), K_dry and μ_dry (dry-rock bulk and shear moduli), K_0 (mineral bulk modulus), K_fl (fluid bulk modulus), and porosity, φ. Reasonable estimates of the mineral and fluid bulk moduli and the porosity are computed easily. The real unknowns, arguably the core issue of rock physics, are the dry-rock moduli.

To find the bulk modulus of a rock with the new fluid, we first use an inverse form of Gassmann's equation to compute the dry-rock bulk modulus:

K_dry = (K_sat(φK_0/K_fl1 + 1 − φ) − K_0) / (φK_0/K_fl1 + K_sat/K_0 − 1 − φ)

and then apply the direct form to get the bulk modulus with the new fluid:

K_sat2 = K_dry + (1 − K_dry/K_0)^2 / (φ/K_fl2 + (1 − φ)/K_0 − K_dry/K_0^2)

The shear modulus is unaffected by the fluid, so μ_2 = μ_1.

We can put all the steps together into a Python function (see the accompanying IPython notebook, available online, for details), calculating the elastic parameters for fluid 2, given the parameters for fluid 1 along with some other data about the two fluids:

import numpy as np

def frm(vp1, vs1, rho1, rho_f1, k_f1, rho_f2, k_f2, k0, phi):
    # Original properties (convert velocities from m/s to km/s)
    vp1 = vp1 / 1000.
    vs1 = vs1 / 1000.
    mu1 = rho1 * vs1**2.
    k_s1 = rho1 * vp1**2 - (4./3.)*mu1
    # Inverse Gassmann: dry-rock bulk modulus
    k_d = (k_s1 * ((phi*k0)/k_f1 + 1 - phi) - k0) / \
          ((phi*k0)/k_f1 + (k_s1/k0) - 1 - phi)
    # Forward Gassmann: bulk modulus saturated with fluid 2
    k_s2 = k_d + (1 - (k_d/k0))**2 / \
           ((phi/k_f2) + ((1-phi)/k0) - (k_d/k0**2))
    # New properties
    rho2 = rho1 - phi*rho_f1 + phi*rho_f2
    mu2 = mu1  # shear modulus is unaffected by the fluid
    vp2 = np.sqrt((k_s2 + (4./3.)*mu2) / rho2)
    vs2 = np.sqrt(mu2 / rho2)
    return vp2*1000, vs2*1000, rho2, k_s2

This function takes the following parameters:

  • vp1, vs1, rho1: measured VP, VS, and density, saturated with fluid 1
  • rho_f1, k_f1: density and bulk modulus of fluid 1
  • rho_f2, k_f2: density and bulk modulus of fluid 2
  • k0: mineral bulk modulus
  • phi: porosity

and returns VP, VS, density, and bulk modulus of the rock with fluid 2. Velocities are in meters per second, and densities are in grams per cubic centimeter.
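As a quick sanity check, we might run the function on a single sample and substitute gas for brine. The input values below are hypothetical, chosen only for illustration (a quartz-rich sand with 25% porosity; the frm function from above is repeated so the snippet is self-contained):

```python
import numpy as np

# frm() as defined above, repeated here so the snippet stands alone
def frm(vp1, vs1, rho1, rho_f1, k_f1, rho_f2, k_f2, k0, phi):
    vp1, vs1 = vp1 / 1000., vs1 / 1000.
    mu1 = rho1 * vs1**2.
    k_s1 = rho1 * vp1**2 - (4./3.)*mu1
    k_d = (k_s1 * ((phi*k0)/k_f1 + 1 - phi) - k0) / \
          ((phi*k0)/k_f1 + (k_s1/k0) - 1 - phi)
    k_s2 = k_d + (1 - (k_d/k0))**2 / \
           ((phi/k_f2) + ((1-phi)/k0) - (k_d/k0**2))
    rho2 = rho1 - phi*rho_f1 + phi*rho_f2
    mu2 = mu1
    vp2 = np.sqrt((k_s2 + (4./3.)*mu2) / rho2)
    vs2 = np.sqrt(mu2 / rho2)
    return vp2*1000, vs2*1000, rho2, k_s2

# Hypothetical brine sand: VP = 3500 m/s, VS = 2000 m/s, rho = 2.20 g/cc
# Brine: rho = 1.00 g/cc, K = 2.8 GPa; gas: rho = 0.25 g/cc, K = 0.06 GPa
# Quartz mineral modulus K0 = 37 GPa, porosity 25%
vp2, vs2, rho2, k2 = frm(3500., 2000., 2.20, 1.00, 2.8, 0.25, 0.06, 37., 0.25)
print(vp2, vs2, rho2, k2)
```

As expected, replacing brine with gas lowers VP and bulk density, while VS rises slightly because the shear modulus is unchanged and the density drops.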

This is how we create the gas sands that we discussed in Part 1 of this tutorial. In the IPython notebook, I go into more detail to show how to compute mineral moduli and mixed-fluid moduli. I also demonstrate how to put all the results together in a Python container for later use and how to update the LFC log we created last time to reflect the newly available gas sands.


The final result is shown in Figure 1, highlighting a detail of the reservoir section between 2150 m and 2200 m, with acoustic impedance (IP) and VP/VS ratio for the in situ case (gray curve), the brine case (blue), and the gas case (red). The LFC log on the right is still the original one computed for the in situ case. The same data are shown in the IP versus VP/VS domain in Figure 2.

We can make a few qualitative observations from these crossplots:

  • Shales are overall quite different from sands and thus are potentially easy to identify.
  • Brine sands have higher IP and VP/VS than hydrocarbon-bearing sands.
  • The oil and gas cases are not very different from each other.
  • Further investigation could be done on the shale intervals that overlap with brine sands.

Moving away from well logs

Fluid-replacement modeling has allowed us to create an augmented data set. Now we will move away from the intricacies and local irregularities of the real data to create a fully synthetic data set that represents an idealized version of the reservoir. We do this by analyzing the data through simple statistical techniques in terms of tendency, dispersion, and correlation among certain elastic properties for each lithofluid class.

Central tendency is described by calculating the mean values of some desired elastic property for all the existing classes. Dispersion and correlation are summarized with the covariance matrix, which can be written like this (for two generic variables X and Y):

[ var_X    cov_XY ]
[ cov_XY   var_Y  ]

With three variables instead, we would have

[ var_X    cov_XY   cov_XZ ]
[ cov_XY   var_Y    cov_YZ ]
[ cov_XZ   cov_YZ   var_Z  ],

where var_X is the variance of property X, i.e., a measure of dispersion about the mean, whereas the covariance cov_XY is a measure of similarity between two properties X and Y.
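As a concrete illustration, NumPy's np.cov builds this matrix directly from two property vectors. The numbers below are toy values standing in for the IP and VP/VS samples of one class:

```python
import numpy as np

# Toy vectors standing in for IP and VP/VS samples of one class
ip = np.array([6800., 6500., 6900., 6300., 7000.])
vpvs = np.array([2.10, 2.15, 2.05, 2.20, 2.02])

# np.cov returns the 2x2 matrix [[var_IP, cov], [cov, var_VPVS]]
c = np.cov(ip, vpvs)
print(c)
```

The matrix is symmetric, and because these two properties move in opposite directions (higher IP pairs with lower VP/VS), the off-diagonal covariance comes out negative, just as it does for most classes in the stat table below.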

We will study the data set using only two properties (IP and VP/VS) and store everything in the DataFrame stat:

LFC  IP_mean  VPVS_mean  IP_var  IP_VPVS_cov  VPVS_var  Samples
  1     6790       2.11  199721       -27.95    0.0205     1546
  2     6185       2.01  337593       -16.72    0.0234      974
  3     5816       1.94  360001         8.67    0.0204      840
  4     6088       2.32  492525       -98.02    0.0563     4512

There are four rows, one for each lithofluid class: shale, brine sand, oil sand, and gas sand. For each class, we store the mean values, the variances, and the covariance. The number of samples is a measure of the robustness of the analysis: too few samples point to potentially unreliable statistics.
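A table like stat can be assembled with Pandas' groupby. Here is a minimal sketch; the DataFrame ww with columns LFC, IP, and VPVS is a stand-in built from random numbers, and the column names are chosen only for illustration:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the logs DataFrame, with two classes only
rng = np.random.RandomState(0)
ww = pd.DataFrame({
    'LFC': np.repeat([1, 2], 50),
    'IP': np.r_[rng.normal(6790, 450, 50), rng.normal(6185, 580, 50)],
    'VPVS': np.r_[rng.normal(2.11, 0.14, 50), rng.normal(2.01, 0.15, 50)],
})

# One row of statistics per lithofluid class
rows = []
for lfc, grp in ww.groupby('LFC'):
    c = np.cov(grp['IP'], grp['VPVS'])  # 2x2 covariance matrix
    rows.append({'LFC': lfc,
                 'IP_mean': grp['IP'].mean(),
                 'VPVS_mean': grp['VPVS'].mean(),
                 'IP_var': c[0, 0],
                 'IP_VPVS_cov': c[0, 1],
                 'VPVS_var': c[1, 1],
                 'Samples': len(grp)})
stat = pd.DataFrame(rows)
print(stat)
```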

Once again, it is easy to get information out of stat; e.g., the average IP of brine sands (a value of 2 in the LFC log) is

>>> print stat.ix[stat.LFC==2, 'IP_mean']


To display the same information graphically, this command produces Figure 3:

>>> pd.scatter_matrix(ww[ww.LFC==2].drop('LFC', 1), diagonal='kde')

We can now use this information to create a brand-new synthetic data set that will replicate the average behavior of the reservoir complex and at the same time overcome typical problems of real data, such as undersampling of a certain class, the presence of outliers, or the spurious occurrence of anomalies. The technique is a Monte Carlo simulation that relies on a multivariate normal distribution to draw samples that are random but correlated in the elastic domain of choice (IP and VP/VS).

First we define how many samples per class to create (e.g., NN = 100) and then create an empty Pandas DataFrame (called mc) with as many columns as the chosen elastic logs (in this case, three: LFC, IP, and VP/VS) and a number of rows equal to the number of samples multiplied by the number of classes (100 × 4 = 400):

>>> mc = pd.DataFrame(data=None, columns=lognames0, index=np.arange(NN*nlfc), dtype='float')

Then we fill in the LFC column with the numbers assigned to each class:

>>> for i in range(1, nlfc+1):
...     mc.loc[NN*i-NN:NN*i-1, 'LFC'] = i

Finally, for each class, we extract the average value mean and the covariance matrix sigma from the stat DataFrame and then put them into Python’s np.random.multivariate_normal function to draw randomly selected samples from the continuous and correlated distributions of the properties and :

>>> for i in range(1, nlfc+1):
...     mean = stat.loc[i-1, means[0]:means[-1]].values
...     sigma = np.reshape(stat.loc[i-1, covs[0]:covs[-1]].values, (nlogs, nlogs))
...     m = np.random.multivariate_normal(mean, sigma, NN)
...     mc.ix[mc.LFC==i, 1:] = m
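As a sanity check, we can verify that samples drawn this way really do reproduce the target statistics. The sketch below is standalone; the mean and covariance values are taken from the brine-sand row of stat above, and the sample count is inflated to 5000 only to make the comparison stable:

```python
import numpy as np

# Target statistics for one class (IP, VP/VS), from the stat table above
mean = np.array([6185., 2.01])
sigma = np.array([[337593., -16.72],
                  [-16.72, 0.0234]])

# Draw correlated random samples from the multivariate normal distribution
rng = np.random.RandomState(42)
m = rng.multivariate_normal(mean, sigma, 5000)

# The empirical mean and covariance of the draws should sit close to the targets
print(m.mean(axis=0))
print(np.cov(m.T))
```

With only NN = 100 samples per class, the recovered statistics scatter more around the targets, which is exactly the kind of sampling noise a Monte Carlo simulation is expected to show.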

The synthetic data set is therefore a copy of the original data set to which we have added all the modifications obtained through fluid-replacement modeling and from which we have subtracted the local discrepancies and outliers that often plague our understanding, leading us to see separation between lithologies where there is none or similarity between fluids that are different. See Figure 4, in which the synthetic data set is compared with the augmented data set.

This procedure can also be used as the basis for more advanced work: for example, modeling the progressive argillization of a reservoir, the reduction in porosity resulting from burial depth, or lithologic rather than fluid changes. See the discussion on training data in Avseth et al., p. 126.[3]


With this tutorial, I have shown two ways to create new data for a reservoir system, progressively increasing the distance from the real situation to allow us to make conjectures and play with more data than we normally have. We have also used Python to demonstrate how easy it is to move away from a "black-box" approach, making it possible to adjust every part of the workflow.


  1. Smith, T. M., C. H. Sondergeld, and C. S. Rai, 2003, Gassmann fluid substitutions: A tutorial: Geophysics, 68, no. 2, 430–440.
  2. Wang, Z., 2001, Fundamentals of seismic rock physics: Geophysics, 66, no. 2, 398–412.
  3. Avseth, P., T. Mukerji, and G. Mavko, 2005, Quantitative seismic interpretation: Applying rock physics rules to reduce interpretation risk: Cambridge University Press.
