Reservoir modeling

Figure 1 Reservoir model derived from volumetric seismic interpretation. Taken from Paradigm's SKUA-GoCAD image samples.

Reservoir modeling is the process of creating a three-dimensional representation of a given reservoir based on its petrophysical, geological and geophysical properties. These properties are defined during reservoir characterization, in which geoscientists and engineers gather all available physical and chemical data and extrapolate those values throughout the reservoir. The resulting three-dimensional model is then used for reservoir simulation. During reservoir simulation, scientists run the model repeatedly, conditioned to real-time field data, to predict the behavior of the reservoir. This is useful for making field development decisions, such as drilling additional wells and estimating reserves. The physical space of the model is represented by an array of discrete, three-dimensional cells. These cells are discrete because they have known and definable boundaries constrained by core and/or well log data. [1]
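
To make the cell-based representation concrete, here is a minimal sketch in Python (all grid dimensions, property names and values are hypothetical): each reservoir attribute is stored as a three-dimensional array with one value per cell, and hard data from a well are assigned to the cells that well penetrates.

```python
import numpy as np

# Hypothetical grid: 100 x 80 x 20 cells indexed (i, j, k).
NX, NY, NZ = 100, 80, 20

# One 3-D array per attribute; each cell holds a single averaged value.
facies = np.zeros((NX, NY, NZ), dtype=np.int8)   # categorical rock type
porosity = np.full((NX, NY, NZ), np.nan)         # fraction (0-1), unknown so far
permeability = np.full((NX, NY, NZ), np.nan)     # millidarcies, unknown so far

# Hard data: a vertical well at (i=50, j=40) with core-derived porosity
# values assigned to the cells it penetrates (values are illustrative).
porosity[50, 40, :] = np.linspace(0.22, 0.12, NZ)
```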

One of the challenges in creating reservoir models is to accurately represent the reservoir's geometry, such as its stratigraphic and structural framework, while still building the model efficiently. [2] The structural framework is essential to the model because it defines the major compartments within the reservoir through fault planes and controls the movement of fluids. [1]

Modeling Scale

Figure 2 Geostatistical modeling flow chart taken from Pyrcz and Deutsch, 2014. [1]

For all models, at any scale, it is very helpful to have hard data, usually in the form of core measurements. From these cores, it is possible to extract facies, porosity and permeability values and assign them to specific depths. Datasets such as seismic data and well logs are "soft data" because they are subjective. Seismic data must be tied to well logs so that it is calibrated against known core measurements and can be treated as "hard data". It is not possible to model reservoir properties at the resolution of core data; they must be scaled up to some intermediate resolution, because a model built at core-scale resolution would run very inefficiently and take a long time to generate. Small-scale measurements, such as core and well log data, must usually be scaled up, while large-scale features, such as seismic and production data, must be scaled down to the resolution of the model. Therefore, to save time and money, models are usually created at an intermediate scale and then coarsened further for flow simulation. This is an additional challenge for reservoir models: the grid must not be so fine that the model runs inefficiently, yet not so coarse that it generates inaccurate flow calculations and misrepresents geologic features. [1] [2]
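
As a sketch of the up-scaling step described above (block sizes and values are invented for illustration), an additive property such as porosity can be block-averaged from a fine geologic grid onto a coarser flow-simulation grid; note that permeability generally requires flow-based or power-averaging schemes rather than a simple arithmetic mean.

```python
import numpy as np

def upscale_mean(fine, factor):
    """Arithmetic block average of a 3-D property onto a coarser grid.

    Suitable for additive properties such as porosity. `factor` gives the
    number of fine cells per coarse cell along each axis (fx, fy, fz) and
    must divide the fine-grid dimensions exactly.
    """
    fx, fy, fz = factor
    nx, ny, nz = fine.shape
    blocks = fine.reshape(nx // fx, fx, ny // fy, fy, nz // fz, fz)
    return blocks.mean(axis=(1, 3, 5))

# Fine geologic model (200 x 200 x 100) -> coarse flow grid (50 x 50 x 20).
rng = np.random.default_rng(0)
fine_phi = rng.uniform(0.05, 0.30, size=(200, 200, 100))
coarse_phi = upscale_mean(fine_phi, factor=(4, 4, 5))
print(coarse_phi.shape)  # (50, 50, 20)
```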

Uncertainty

Another problem with using multiple datasets containing petrophysical measurements is that they often have different scales and levels of precision. When scaling data up or down, there is always some degree of uncertainty: each cell is an average over a volume, and simplifying assumptions are made so that the model runs efficiently. As the number of datasets increases, the level of uncertainty increases as well. The goal of reservoir modeling is to create a holistic representation, over a region of interest, of the different attributes within a given reservoir, such as facies, porosity and permeability, at the required resolution. To do this with complete accuracy and no uncertainty is impossible. Instead, multiple realizations are generated to represent different reservoir scenarios. A realization is defined as a single illustration of all reservoir attributes over a region of interest, and the spread between realizations highlights the stochastic uncertainty introduced as geoscientists and engineers extrapolate data throughout the reservoir. [3] Geoscientists and engineers have developed numerical models to quantify these varying levels of uncertainty through geostatistics, a discipline that gives scientists a consistent set of tools to evaluate uncertainty throughout the reservoir. Common techniques include variogram-based, multiple-point, and object-based (process-mimicking) methods, and many more are in use and still being developed today. [3]
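
As one concrete example of the geostatistical toolkit mentioned above, the experimental semivariogram quantifies how dissimilar a property becomes as the separation distance between samples grows; the sketch below (regular sample spacing and all values assumed) computes it for porosity samples along a well.

```python
import numpy as np

def semivariogram(values, max_lag):
    """Experimental semivariogram for regularly spaced 1-D samples.

    gamma(h) = 0.5 * mean((z(x) - z(x + h))**2) over all pairs at lag h.
    """
    return np.array([
        0.5 * np.mean((values[h:] - values[:-h]) ** 2)
        for h in range(1, max_lag + 1)
    ])

# Synthetic porosity samples down a well at constant spacing (illustrative).
rng = np.random.default_rng(42)
phi = 0.18 + 0.003 * np.cumsum(rng.standard_normal(200))
gamma = semivariogram(phi, max_lag=20)
# gamma rises with lag and flattens near the sill; a variogram model fitted
# to this curve controls the spatial continuity of stochastic realizations.
```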

Reservoir Modeling Data

When modeling a reservoir, it is important to use as many available datasets as possible to best represent the various features within it. Several datasets are used for reservoir modeling:

Figure 3 A complex 3-D reservoir (center) that incorporates all relevant data (surrounding rectangles). Taken from Pyrcz and Deutsch, 2014. [1]

Core Data

Core data comes from a rock sample (a plug or whole core) taken at a known depth within a well. It indicates the lithology at that depth in addition to the porosity and permeability. This is one of the most useful inputs to a model because it provides hard, reliable data that is true to the behavior of the reservoir. Cores can also reveal structural features, such as fractures, and highlight key stratigraphic features such as depositional environments and unconformities.

Well Log Data

Well log data gives insight into the petrophysical properties of the rock. This data is gathered with wireline logging tools and provides information such as lithology, porosity, water saturation, zone thickness and permeability. [4] These measurements come from a variety of tools, including spontaneous potential, gamma ray, sonic, neutron, density and resistivity logs.
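
For example, porosity is commonly estimated from the density log with the standard density-porosity relation φ = (ρ_matrix − ρ_bulk) / (ρ_matrix − ρ_fluid); the sketch below assumes typical values of 2.65 g/cc for a quartz sandstone matrix and 1.0 g/cc for the pore fluid, which must be adjusted for the actual formation.

```python
import numpy as np

def density_porosity(rho_bulk, rho_matrix=2.65, rho_fluid=1.0):
    """Density porosity: phi = (rho_ma - rho_b) / (rho_ma - rho_fl).

    Defaults assume a quartz sandstone matrix (2.65 g/cc) and a fresh-water
    pore fluid (1.0 g/cc); both are assumptions, not universal constants.
    """
    phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)
    return np.clip(phi, 0.0, 1.0)  # guard against bad log readings

# Bulk-density readings (g/cc) from a density log (illustrative values):
rho_b = np.array([2.45, 2.38, 2.52, 2.30])
print(density_porosity(rho_b))  # approx. [0.12, 0.16, 0.08, 0.21]
```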

Structural Framework

The structural framework is usually built by interpreting seismic data and establishing the various structural features within the reservoir. If no seismic data is available, geoscientists estimate these features from the literature and previously published geologic maps. The framework is important to include in the model because structural features are one of the primary factors confining fluids within a reservoir. The framework also enables geoscientists and engineers to evaluate the varying stresses within the reservoir and assess formation stability.

Stratigraphic Framework

The stratigraphic interpretation provides layering information within the reservoir, as well as the lateral continuity and thickness of each layer. This is useful for determining how far an oil column extends both vertically and laterally. [1] The stratigraphic framework can be constructed through chronostratigraphy or lithostratigraphy: chronostratigraphy indicates the ages of the rocks, whereas lithostratigraphy identifies the varying lithologies within the reservoir.

Figure 4 Broad overview of geostatistical reservoir modeling. Taken from Pyrcz and Deutsch, 2014. [1]

Porosity and Permeability

Permeability is the hardest property to define because it is directional. The most common way to assess vertical and horizontal permeability is with two wireline formation-testing modes: engineers start with a pretest using probes, followed by a vertical interference test that uses packers and additional probes. [4] Porosity is also obtained through wireline testing. These two attributes are essential for developing a reservoir model because they provide key insights into how fluids migrate and where they are stored.
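
Because permeability cannot be measured in every cell, it is often estimated from porosity through an empirical transform calibrated to core data; a common choice is a log-linear fit, log10(k) = a·φ + b. The sketch below uses invented core measurements purely to illustrate the procedure.

```python
import numpy as np

# Core-plug measurements: porosity (fraction) and permeability (mD).
# These values are illustrative, not from any real dataset.
core_phi = np.array([0.08, 0.12, 0.15, 0.19, 0.23, 0.27])
core_k = np.array([0.5, 3.0, 12.0, 60.0, 250.0, 900.0])

# Fit log10(k) = a * phi + b by least squares.
a, b = np.polyfit(core_phi, np.log10(core_k), deg=1)

def perm_from_phi(phi):
    """Predict permeability (mD) from porosity via the fitted transform."""
    return 10.0 ** (a * phi + b)

print(perm_from_phi(0.20))  # estimated permeability at 20% porosity
```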

Well Tests and Production Data

Well test and production data are gathered by performing tests before or after the well is completed. Production data is useful for modeling because it indicates the productivity of a well and its reservoir.

Typical Workflow

Workflows vary with the complexity of the reservoir and the preferences of the team creating the model. A generic workflow starts by gathering all available and relevant datasets. These datasets are then compiled to create the stratigraphic and structural frameworks that define the geometry of the reservoir. Within these frameworks, facies, porosity and permeability values are extrapolated horizontally and vertically through each layer. Facies are modeled independently within each stratigraphic layer, porosity is modeled dependent upon the facies model, and permeability is modeled dependent upon both the facies and porosity models. Several equally likely realizations of these attributes are created and evaluated to capture the range of possible reservoir outcomes. The best models are then selected for reservoir simulation to evaluate fluid flow in each scenario, and the team selects the plan that develops the field most efficiently and economically. [1]
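
A highly simplified sketch of that hierarchy (every distribution and parameter below is invented for illustration): facies are drawn first, porosity is drawn from a facies-dependent distribution, and permeability is computed from both, with the outer loop producing the equally likely realizations that feed flow simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells, n_realizations = 10_000, 50
realizations = []

for r in range(n_realizations):
    # 1. Facies: 0 = shale, 1 = sand (proportions are illustrative).
    facies = rng.choice([0, 1], size=n_cells, p=[0.4, 0.6])

    # 2. Porosity conditioned on facies: higher mean in sand than shale.
    phi_mean = np.where(facies == 1, 0.22, 0.08)
    porosity = np.clip(rng.normal(phi_mean, 0.03), 0.01, 0.35)

    # 3. Permeability conditioned on facies and porosity (log-linear).
    slope = np.where(facies == 1, 12.0, 6.0)
    permeability = 10.0 ** (slope * porosity - 1.0)  # mD

    realizations.append((facies, porosity, permeability))

# Each tuple is one equally likely realization to screen and simulate.
```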

External Links

Here is a selection of videos and websites from various energy companies that demonstrate how reservoir models are created:

Baker Hughes: JewelSuite Software: Website, Video

Paradigm: SKUA-GoCAD Software: Website, Video

Schlumberger: Petrel Software: Website, Video

References

  1. Pyrcz, M. J., & Deutsch, C. V. (2014). Geostatistical Reservoir Modeling. Oxford University Press.
  2. Soleimani, M., & Shokri, B. J. (2015). 3D static reservoir modeling by geostatistical techniques used for reservoir characterization and data integration. Environmental Earth Sciences, 74(2), 1403-1414. https://doi.org/10.1007/s12665-015-4130-3
  3. Pyrcz, M. J., & White, C. D. (2015). Uncertainty in reservoir modeling. Interpretation, 3(2), SQ7-SQ19. https://doi.org/10.1190/INT-2014-0126.1
  4. Ma, S., Zeybek, M. M., & Kuchuk, F. (2014). Static, dynamic data integration improves reservoir modeling, characterization. Oil & Gas Journal, 112(9), 82.