Geophysical signal processing
Geophysical signal processing uses computers to manipulate an acquired (raw) signal through the application of filters, algorithms, and transforms so that the wanted signal stands out in both the time and frequency domains. Its two main goals are to improve the signal-to-noise ratio and to represent the results in a convenient form that facilitates geological and geophysical interpretation.
If the signal processing is ineffective or poorly performed, data handling becomes more tedious and time-consuming. Without signal enhancement and noise removal, the data are harder to interpret and may be misinterpreted.
Discretization of the signal
The workflow of geophysical signal processing varies depending on the type of data being handled. However, since the processing is done on a digital computer, discretization of the signal (waveform) is the first step. Discretization is the conversion of analog data to digital data (Kipnis et al., 2018): the analog data form a continuous signal with an infinite number of samples, while the digital data form a discrete signal with a finite number of samples. For a signal to be discretized without loss, it should be band-limited, meaning that its amplitude spectrum (obtained via the Fourier transform) is non-zero only over a specific range of frequencies. The discretized samples are taken at equal intervals of time, with each interval represented by a single amplitude measurement. The difference between an analog and a digital signal is shown in Figure 2. Discretization makes the processing of the signal more efficient and effective.
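As a minimal sketch of this step, the following assumes an illustrative continuous waveform (a 5 Hz sine, not taken from the text) and replaces it with a finite set of amplitude measurements at a fixed time-sampling interval:

```python
import numpy as np

# Discretize a hypothetical continuous signal g(t) = sin(2*pi*5*t)
# by sampling it at a fixed interval dt (illustrative values).
dt = 0.001                              # time-sampling interval in seconds
t = np.arange(0.0, 1.0, dt)             # finite set of sample times
samples = np.sin(2 * np.pi * 5.0 * t)   # one amplitude per interval

print(len(samples))                     # 1000 samples replace the waveform
```

The continuous waveform, which in principle has infinitely many values, is now represented by a finite array that a computer can process.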
Sampling Theory
The sampling theorem acts as a bridge between continuous and discrete signals, telling us what information can be recovered from the discrete data (Lüke, 1999). In general, however, the sampling operation is not invertible. The sampling theorem states that "a continuous time signal can be represented in samples and can be exactly reconstructed when sampling frequency (fs) is greater than or equal to twice the maximum frequency component (fm) present in the signal" (Mallat, 1999).
- N = number of total samples
- dt = time-sampling interval
- T = total record length (T = N·dt)
- fs = sampling frequency (fs = 1/dt)
- fm = maximum frequency component
- fn = Nyquist frequency (fn = fs/2)
- df = frequency-sampling interval (df = 1/T)
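The relations among these quantities can be sketched numerically; the values of N and dt below are assumed for illustration only:

```python
import numpy as np

# Relations among the sampling quantities defined above
# (N and dt are illustrative values, not from the text).
N = 1000           # number of total samples
dt = 0.002         # time-sampling interval (s)

T = N * dt         # total record length (s)
fs = 1.0 / dt      # sampling frequency (Hz)
fn = fs / 2.0      # Nyquist frequency (Hz)
df = 1.0 / T       # frequency-sampling interval (Hz)

print(T, fs, fn, df)
```

With these values the record is 2 s long, the sampling frequency is 500 Hz, the Nyquist frequency is 250 Hz, and the frequency-sampling interval is 0.5 Hz.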
Re-sampling: oversampling and undersampling
Re-sampling produces either a greater number of samples (oversampling) or a smaller number of samples (undersampling) in the discrete signal (Hilal, 2018). In undersampling dt becomes larger, whereas in oversampling dt becomes smaller. Oversampling increases the memory and time required to complete the processing; downsampling, on the other hand, can represent a significant memory saving at processing time. Downsampling behaves like a low-pass filter in that fine detail is discarded, and valuable information may be lost at re-sampling time. Its major pitfall, however, is that it can cause aliasing (Chen et al., 2015). Figure 3 shows the behavior of the signal with oversampling, undersampling, and correct sampling.
Aliasing is the overlapping of two or more signals in the spectrum, and it occurs as a consequence of undersampling (Chen et al., 2015). Aliasing can also be described as contamination of the signal by frequencies beyond the Nyquist frequency. It can be avoided by applying a low-pass (anti-alias) filter before re-sampling the signal.
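A short numerical sketch of aliasing, with assumed frequencies: an 80 Hz sine sampled at 100 Hz (Nyquist frequency 50 Hz) produces exactly the same samples as a 20 Hz sine of opposite polarity, because sin(2π·80·n/fs) = sin(2π·(80 − fs)·n/fs) = −sin(2π·20·n/fs):

```python
import numpy as np

# Aliasing demo: a frequency above Nyquist masquerades as a lower one.
fs = 100.0                 # sampling frequency (Hz); Nyquist = 50 Hz
n = np.arange(100)
t = n / fs

undersampled = np.sin(2 * np.pi * 80.0 * t)   # true frequency above Nyquist
alias = -np.sin(2 * np.pi * 20.0 * t)         # what the samples actually show

print(np.allclose(undersampled, alias, atol=1e-9))  # True
```

Once sampled, the two signals are indistinguishable, which is why the anti-alias filter must be applied before re-sampling, not after.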
Correlation
Correlation is an operation in geophysical signal processing that measures the similarity between two signals (Choi et al., 2011). The correlation is given by the equation in Figure 4. The degree of correlation between two signals is given by the correlation coefficient, which varies between -1 and 1 and is calculated by the equation in Figure 5. A coefficient of 1 means the two signals are the same, a coefficient of -1 means they have reverse polarity, and a coefficient of 0 means there is no correlation at all. There are two types of correlation: cross-correlation and auto-correlation.
Cross-correlation is the correlation between two different signals. Unlike auto-correlation, cross-correlation is generally not symmetric.
Auto-correlation is defined as the correlation of a signal with itself, computed by multiplying the signal with a time-shifted (and, for complex signals, conjugated) copy of itself. It is useful for understanding the behavior of the signal (Nakahara, 2015). Auto-correlation is a symmetric function.
Convolution
Convolution is an operation in geophysical signal processing used to analyze the relation between the input and output of an LTI (linear time-invariant) system. The word convolution means folding: the input signal is convolved with the impulse response to generate the output (Robinson et al., 1986). There are two main types of convolution: continuous convolution and discrete convolution.
Continuous convolution is given by the top equation in Figure 6, where the input, the impulse response, and the output are continuous (analog) time-domain signals.
Discrete convolution refers to the convolution of a discrete (sampled) input with a discrete impulse response; in the frequency domain it corresponds to multiplication of their spectra. The discrete convolution is given by the bottom equation in Figure 6.
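A short sketch of discrete convolution with numpy, using a hypothetical two-point impulse response for illustration:

```python
import numpy as np

# Discrete convolution of an input with an impulse response
# (a hypothetical two-point wavelet, for illustration).
x = np.array([1.0, 2.0, 3.0])   # input signal
h = np.array([1.0, 1.0])        # impulse response of the LTI system

y = np.convolve(x, h)           # output: y[n] = sum_k x[k] * h[n - k]
print(y)                        # [1. 3. 5. 3.]
```

The output length is len(x) + len(h) - 1, reflecting the folding of the input against the impulse response at every lag.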
Deconvolution
Deconvolution reverses the process of convolution on a signal. It is the inverse-filtering operation performed on the output of the LTI system to obtain information about the input. The word deconvolution means unfolding (Robinson et al., 1986). Deconvolution is given by the equation in Figure 7.
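One way to sketch inverse filtering is spectral division, assuming the impulse response is known and has no zeros in its spectrum (the signals below are illustrative, not from the text):

```python
import numpy as np

# Deconvolution sketch: recover the input by dividing spectra,
# assuming a known impulse response h with no spectral zeros.
x = np.array([1.0, -2.0, 3.0, 0.5])   # original input (illustrative)
h = np.array([1.0, 0.5])              # known impulse response
y = np.convolve(x, h)                 # recorded output of the LTI system

L = len(y)
X = np.fft.fft(y) / np.fft.fft(h, L)  # inverse filtering in frequency domain
x_rec = np.real(np.fft.ifft(X))[:len(x)]

print(np.allclose(x_rec, x))          # True
```

In practice the spectral division is usually regularized (for example with a water level) because noise and near-zero spectral values make the plain division unstable; the clean recovery here relies on the noise-free, well-conditioned assumptions stated above.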
Discrete Time Filters
Filters are used to remove incoherent signals such as noise and other undesired signals from the frequency range of interest. Some of the most common filters are the low-pass, high-pass, and band-pass filters.
When a low-pass filter is applied, only frequencies below the cutoff frequency remain (passband), while frequencies above the cutoff are eliminated or attenuated (stopband). By removing these frequencies, the low-pass filter produces a smoothing effect (Rioul et al., 1991). Low-pass filtering is often used to remove noise, clean up signals, and perform data averaging. The most common low-pass designs are the moving-average, Butterworth, and Chebyshev filters. The low-pass filter template is shown in Figure 8.
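The moving-average filter named above is the simplest low-pass design: each output sample is the mean of M neighboring input samples. A minimal sketch with assumed signal and noise parameters:

```python
import numpy as np

# Moving-average low-pass filter: averaging M neighbors smooths the
# signal and attenuates high frequencies (illustrative parameters).
M = 5
kernel = np.ones(M) / M

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 2.0 * np.linspace(0, 1, 200))
noisy = clean + 0.3 * rng.standard_normal(200)

smoothed = np.convolve(noisy, kernel, mode='same')

# The smoothed trace is closer to the clean signal than the noisy one.
print(np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Filtering here is just a convolution with a short averaging kernel, which is why the smoothing effect described above falls out directly from the convolution operation of the previous section.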
When a high-pass filter is applied, signals below the cutoff frequency (stopband) are eliminated or attenuated and signals above the cutoff frequency (passband) are kept. High-pass filtering is often used to clean up low-frequency noise and to highlight high-frequency trends (Rioul et al., 1991). The most common high-pass designs are the Butterworth and Chebyshev filters. The high-pass filter template is shown in Figure 9.
When a band-pass filter is applied, only frequencies within the bandwidth are maintained without distortion of the input signal. The bandwidth of the filter is defined as the difference between the upper and lower cutoff frequencies (Chen et al., 1994). A band-pass filter is at least a second-order filter and is commonly used for noise cancellation. The band-pass filter template is shown in Figure 10.
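A simple frequency-domain sketch of band-pass filtering, with assumed corner frequencies and signal content: zero every FFT coefficient outside the pass band [f1, f2] and transform back.

```python
import numpy as np

# Frequency-domain band-pass sketch (illustrative frequencies):
# a 5 Hz wanted signal contaminated by 40 Hz and 80 Hz components.
fs = 200.0
t = np.arange(0, 1, 1 / fs)
x = (np.sin(2 * np.pi * 5 * t)
     + 0.5 * np.sin(2 * np.pi * 40 * t)
     + 0.5 * np.sin(2 * np.pi * 80 * t))

f1, f2 = 2.0, 10.0                      # pass band in Hz
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
X = np.fft.rfft(x)
X[(freqs < f1) | (freqs > f2)] = 0.0    # stopbands set to zero
filtered = np.fft.irfft(X, n=len(x))

print(np.allclose(filtered, np.sin(2 * np.pi * 5 * t), atol=1e-8))  # True
```

Practical band-pass filters (Butterworth, Chebyshev) use tapered transitions rather than this brick-wall mask, which would ring on signals whose frequencies do not fall exactly on FFT bins; the sketch only illustrates the passband/stopband idea.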
Filter applications
Filters are used in many fields and tasks. Some of the most common are: imaging (seismic, EM, InSAR, GPR, earthquake sources, etc.), similarity search, inversion of seismic data, sensor-response correction, and improvement of resolution.
References
- Kipnis, A., Eldar, Y., & Goldsmith, A. (2018). Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits. IEEE Signal Processing Magazine, 35(3), 16–39. doi:10.1109/MSP.2017.2774249.
- Lüke, H. (1999). The Origins of the Sampling Theorem. IEEE Communications Magazine, 37(4), 106–108. doi:10.1109/35.755459.
- Mallat, S. (1999). A Wavelet Tour of Signal Processing. San Diego, CA: Elsevier.
- Hilal, A. (2018). Image re-sampling detection through a novel interpolation kernel. Forensic Science International, 287, 25–35.
- Chen, S., Varma, S., & Kovačević, J. (2015). Discrete Signal Processing on Graphs: Sampling Theory. IEEE Transactions on Signal Processing, 63(24), 6510–6523.
- Choi, S., Rahimian, M., & Toliyat, H. (2011). Implementation of a Fault-Diagnosis Algorithm for Induction Machines Based on Advanced Digital-Signal-Processing Techniques. IEEE Transactions on Industrial Electronics, 58(3), 937–948.
- Nakahara, H. (2015). Auto Correlation Analysis of Coda Waves from Local Earthquakes for Detecting Temporal Changes in Shallow Subsurface Structures: The 2011 Tohoku-Oki, Japan Earthquake. Pure and Applied Geophysics, 172(2), 213–224.
- Robinson, E. A., & Durrani, T. S., with a chapter by Peardon, L. G. (1986). Geophysical Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
- Rioul, O., & Vetterli, M. (1991). Wavelets and Signal Processing. IEEE Signal Processing Magazine, 8(4), 14–38.
- Chen, L., & Dawson, F. (1994). A Phase Domain Adaptive Tracking Bandpass Filter for Power Engineering Applications. ProQuest Dissertations and Theses.
Important Papers
- Costen, N. P., Parker, D. M., & Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception & Psychophysics, 58(4), 602–612.
- Taubin, G. (1995, September). A signal processing approach to fair surface design. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques(pp. 351-358). ACM.
- Elad, M., & Aharon, M. (2006). Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15 (12), 3736-3745.