# Self Organizing Map and Multi-attribute Analysis

A self-organizing map (SOM) is an artificial neural network trained with an unsupervised learning algorithm to produce a low-dimensional map, reducing dimensionality non-linearly.[1][2][3] The self-organizing map has proven to be a useful tool for seismic interpretation and multi-attribute analysis using a machine learning approach. By exploring big data, a self-organizing map reveals patterns in the samples, clustering and classifying them into different subsets. These subsets provide information on seismic facies, petrophysical properties, and geological features. With modern visualization capabilities and the application of 2D color maps, SOM routinely identifies meaningful geologic patterns.[4]

## Algorithm

Self-organizing map of World Bank quality-of-life data. Source: World Bank.[5]

A self-organizing map applies competitive learning, so no supervision is needed. As opposed to supervised learning algorithms, which minimize the error between real and predicted data by gradient descent and backpropagation, a self-organizing map trains on and classifies samples by finding the winning neuron closest to each sample in Euclidean distance. Competitive learning includes the following steps:[6]

1. The weights of each neuron in the map are randomized.
2. Select one of the input vectors in the training dataset.
3. Traverse every neuron in the map and examine which one's weights are most similar to the input vector, using Euclidean distance. The neuron with the smallest distance to the input vector is the winning neuron, also called the Best Matching Unit (BMU).
4. Update the weight vectors of the neurons adjacent to the BMU (including the BMU itself) by moving them closer to the input vector. Each neuron adjusts its weights by the following recursion over n iterations:
${\displaystyle w_{j}(n+1)=w_{j}(n)+\theta (n)\cdot \alpha (n)\cdot (x_{i}-w_{j}(n))}$
5. Repeat the procedure from step 2 to step 4 for all input vectors.
6. The input vectors closest to each neuron are classified into different groups, each assigned a certain color. This implies that every neuron is associated with a given set of samples.
- ${\displaystyle w_{j}(n)}$ is the weight vector of neuron j at step n.
- ${\displaystyle \theta (n)}$ is a restraint based on distance from the BMU, also called the neighborhood function.
- ${\displaystyle \alpha (n)}$ is a learning restraint due to iteration progress.
- ${\displaystyle x_{i}-w_{j}(n)}$ is the difference between the input vector and the weights of neuron j at step n.
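The steps above can be sketched in a minimal numpy implementation. The grid size, Gaussian neighborhood, decay schedules, and synthetic data below are illustrative assumptions, not specifics from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: randomize each neuron's weights on a rows x cols map.
rows, cols, dim = 6, 6, 3
weights = rng.random((rows * cols, dim))

# Neuron coordinates on the 2D map, used by the neighborhood function theta.
coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

samples = rng.random((200, dim))  # synthetic training vectors
n_iters = 2000
sigma0, alpha0 = 3.0, 0.5

for n in range(n_iters):
    # Step 2: select one input vector.
    x = samples[rng.integers(len(samples))]
    # Step 3: the BMU is the neuron with smallest Euclidean distance to x.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # alpha(n): learning restraint decaying with iteration progress.
    frac = n / n_iters
    alpha = alpha0 * (1.0 - frac)
    sigma = sigma0 * (1.0 - frac) + 1e-3
    # theta(n): Gaussian restraint shrinking with map distance from the BMU.
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    theta = np.exp(-d2 / (2.0 * sigma ** 2))
    # Step 4: move the BMU and its neighbors toward x.
    weights += (theta * alpha)[:, None] * (x - weights)

# Step 6: classify every sample by its nearest neuron.
labels = np.argmin(
    np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2), axis=1)
```

After training, each of the 36 neurons is associated with the set of samples it wins, which is what the color assignment in the SOM displays represents.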

## Multi-attribute analysis

Offshore West Africa 2D seismic line processed by SOM analysis. In the figure inset, each neuron is shown as a unique color in the 2D color map. After training, each multi-attribute seismic sample was classified by finding the neuron closest to the sample in Euclidean distance. The color of that neuron is assigned to the seismic sample in the display. A great deal of geologic detail is evident in the classification by SOM neurons. Source: Geophysical Insights.[4]

Seismic attributes are derived from seismic data in order to delineate geological or geophysical features. Seismic attributes include categories of time, amplitude, frequency, and attenuation, computed from both pre-stack and post-stack seismic data. Commonly used attributes include dip and azimuth maps, amplitude extraction, coherence, spectral decomposition, and seismic inversion.
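As a sketch of how such attributes are computed, the following numpy-only example derives two amplitude/frequency attributes, the envelope (reflection strength) and the instantaneous frequency, from a synthetic trace via the analytic signal. The trace, sample interval, and FFT-based Hilbert construction are illustrative assumptions, not from the article:

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal: zero out negative frequencies,
    # double positive ones (standard Hilbert-transform construction).
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

dt = 0.004                                   # assumed 4 ms sample interval
t = np.arange(0, 1.0, dt)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)  # synthetic 30 Hz trace

analytic = analytic_signal(trace)
envelope = np.abs(analytic)                  # amplitude attribute
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi * dt)  # frequency attribute, in Hz
```

In multi-attribute analysis, attribute traces like `envelope` and `inst_freq` would be sampled jointly to form the input vectors fed to the SOM.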

In an effort to improve the interpretation of seismic attributes, combining two or more attributes can better visualize features. This goal can be achieved with a self-organizing map, which condenses information from multiple attributes into a low-dimensional representation. First, a subset of attributes extracted from the 3D seismic survey is selected as the input training data, and then the SOM runs the competitive learning algorithm. The input samples are assigned the classes, or colors, of the closest winning neurons. To assess the quality of learning, interpreters can examine the proportional reduction of error between the initial and final epochs. The error is measured by summing the distances from the winning neurons to their respective data samples; the largest reduction in error indicates the best learning.
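The error-reduction measure described above can be sketched as follows: train a small SOM over several epochs and track the summed winning-neuron distances. The clustered 4-attribute data, map size, and decay schedules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multi-attribute samples: 3 clusters in a 4-attribute space,
# standing in for attribute vectors extracted from a 3D survey.
centers = rng.random((3, 4))
samples = np.vstack([c + 0.05 * rng.standard_normal((100, 4)) for c in centers])

k, dim = 16, samples.shape[1]          # 4x4 map flattened to 16 neurons
coords = np.array([(i // 4, i % 4) for i in range(k)], float)
weights = rng.random((k, dim))

def total_error(w):
    # Sum of distances from each sample to its winning neuron.
    d = np.linalg.norm(samples[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).sum()

errors = []
n_epochs = 20
for epoch in range(n_epochs):
    errors.append(total_error(weights))
    alpha = 0.5 * (1 - epoch / n_epochs)
    sigma = 2.0 * (1 - epoch / n_epochs) + 0.3
    for x in samples[rng.permutation(len(samples))]:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        theta = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2 * sigma ** 2))
        weights += (theta * alpha)[:, None] * (x - weights)
errors.append(total_error(weights))

# Proportional reduction of error between the initial and final epochs.
reduction = 1 - errors[-1] / errors[0]
```

A large `reduction` suggests the map has learned the structure of the attribute space; comparing this value across runs is one way to pick the better-trained SOM.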

## Enhance Seismic Resolution Using SOM

SOM results: (a) original stacked amplitude, (b) SOM results with the associated 2D color map, and (c) SOM results with a color map showing two neurons that highlight flat spots in the data. The hydrocarbon contacts (flat spots) in the field were confirmed by well control. Source: Geophysical Insights.[4]

What a self-organizing map can do is detect detail in formations using multiple seismic attributes. When applied in an SOM, two or more attributes help distinguish different types of anomalous features that might appear as one and the same anomaly without SOM. The combination of seismic attributes provided by SOM analysis yields better-resolution images of the reservoir than any single seismic attribute or the original amplitude volume individually.
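The claim above, that combined attributes can separate anomalies a single attribute cannot, can be illustrated with a toy sketch. A nearest-centroid classifier stands in for the SOM here, and all distributions are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Two facies share the same amplitude distribution (attribute 1)...
amp_a = rng.normal(0.0, 1.0, n)
amp_b = rng.normal(0.0, 1.0, n)
# ...but differ in a second attribute, e.g. instantaneous frequency.
freq_a = rng.normal(-2.0, 0.5, n)
freq_b = rng.normal(+2.0, 0.5, n)

def centroid_accuracy(features_a, features_b):
    # Classify every sample by its nearest class centroid; report accuracy.
    X = np.vstack([features_a, features_b])
    y = np.r_[np.zeros(n), np.ones(n)]
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(float)
    return (pred == y).mean()

acc_1d = centroid_accuracy(amp_a[:, None], amp_b[:, None])          # ~chance
acc_2d = centroid_accuracy(np.c_[amp_a, freq_a], np.c_[amp_b, freq_b])
```

With amplitude alone the two facies are indistinguishable, while the two-attribute combination separates them almost perfectly, which is the effect SOM exploits at scale.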

With higher resolution, thin beds and direct hydrocarbon indicators (DHIs) in the reservoir can be identified more clearly.

The development of unconventional plays is becoming a trend in exploration. This type of play requires detailed information on facies and stratigraphic changes for horizontal well drilling. When the target shale or mudrock is too thin to detect seismically, SOM can improve the resolution both laterally and vertically compared to the seismic amplitude line.[7]

A direct hydrocarbon indicator (DHI) is a seismic amplitude anomaly that suggests the presence of hydrocarbons. Common DHIs include bright spots, flat spots, polarity reversals, and AVO anomalies. Because it captures subtle facies-change information, SOM helps verify whether a DHI is real and helps detect the edge of the reservoir. With a better understanding of the existing reservoir, interpreters can calculate the volume of the target more accurately and decrease risk for future exploration in the area.[8]

SOM results: SOM highlights the reservoir above the oil/water and gas/oil contacts, as well as the hydrocarbon contacts (flat spots). Source: Geophysical Insights.[8]

## References

1. Kohonen, Teuvo; Honkela, Timo (2007). "Kohonen Network". Scholarpedia.
2. Kohonen, Teuvo (1982). "Self-Organized Formation of Topologically Correct Feature Maps". Biological Cybernetics 43 (1): 59–69. doi:10.1007/bf00337288.
3. Smith, T. (2010). Unsupervised neural networks-disruptive technology for seismic interpretation. Oil & Gas Journal, 108(37), 42-47. https://www.geoinsights.com/unsupervised-neural-networks-disruptive-technology-for-seismic-interpretation/
4. World Poverty Map, SOM research page, Univ. of Helsinki, http://www.cis.hut.fi/research/som-research/worldmap.html
5. Kohonen, Teuvo (2005). "Intro to SOM". SOM Toolbox. Retrieved 2006-06-18.
6. Roden, R., Smith, T. A., Santogrossi, P., Sacrey, D., & Jones, G. (2017). Seismic interpretation below tuning with multiattribute analysis. The Leading Edge, 36(4), 330-339. https://geoinsights.com/seismic-interpretation-below-tuning-with-multiattribute-analysis/
7. Roden, R., & Chen, C. W. (2017). Interpretation of DHI characteristics with machine learning. First Break, 35(5), 55-63. https://www.geoinsights.com/interpretation-of-dhi-characteristics-with-machine-learning/