# Difference between revisions of "User:Ageary/Testing page"

- Notable works: Seismic migration problems and solutions, GEOPHYSICS 66(5):1622
- 2016: SEG Honorary Membership
- 2001: SEG Life Membership Award
- 2001: Honorable Mention (Geophysics)
- Education: Geophysics, Texas A&M; Geophysics, Stanford University

Add text here for Visual Editor test (click the "Edit" tab, add text, save): 7/16/2019 - Eric

## Translation extension check

Check that the following page displays correctly:

There will be a language bar at the top and text in the relevant language in the body of the page.

## Math equations

Given a continuous function x(t) of a single variable t, its Fourier transform is defined by the integral

 $X(\omega )=\int _{-\infty }^{+\infty }x(t)\;\exp(-i\omega t)\;dt,$ (13)

where ω is the Fourier dual of the variable t. If t signifies time, then ω is angular frequency. The temporal frequency f is related to the angular frequency ω by ω = 2πf.

The Fourier transform is reversible; that is, given X(ω), the corresponding time function is

 $x(t)=\int _{-\infty }^{+\infty }X(\omega )\;\exp(i\omega t)\;d\omega .$ (14)

Throughout this book, the following sign convention is used for the Fourier transform. For the forward transform, the sign of the argument in the exponent is negative if the variable is time and positive if the variable is space. The inverse transform, of course, has the opposite sign from the respective forward transform. For convenience, the scale factor 2π in equations (13) and (14) is omitted.
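Equations (13) and (14) can be checked numerically with the discrete Fourier transform, which follows the same sign convention (negative exponent forward in time). A minimal sketch with a hypothetical 4 Hz cosine; the `dt` factor approximates the integral's measure:

```python
import numpy as np

# Hypothetical signal: a 4 Hz cosine sampled at 100 Hz for 1 s.
dt = 0.01
t = np.arange(100) * dt
x = np.cos(2 * np.pi * 4 * t)

# Discrete analogue of equation (13); np.fft.fft also uses a negative
# exponent for the forward transform, matching the sign convention for
# time variables. Multiplying by dt approximates the integral.
X = np.fft.fft(x) * dt
f = np.fft.fftfreq(len(x), d=dt)  # temporal frequency, f = omega / (2*pi)

# Discrete analogue of equation (14): the transform is reversible.
x_back = np.fft.ifft(X / dt).real
print(np.allclose(x, x_back))  # True
```

As expected, the amplitude spectrum |X(ω)| peaks at f = ±4 Hz.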

Generally, X(ω) is a complex function. Using the properties of complex functions, X(ω) can be expressed in terms of two other functions of frequency:

 $X(\omega )=A(\omega )\;\;\exp[i\phi (\omega )],$ (15)

where A(ω) and ϕ(ω) are the amplitude and phase spectra, respectively. They are computed by the following equations:

 $A(\omega )={\sqrt {X_{r}^{2}(\omega )+X_{i}^{2}(\omega )}}$ (16)

and

 $\phi (\omega )=\tan ^{-1}{\frac {X_{i}(\omega )}{X_{r}(\omega )}},$ (17)

where Xr(ω) and Xi(ω) are the real and imaginary parts of the Fourier transform X(ω). When X(ω) is expressed in terms of its real and imaginary components

 $X(\omega )=X_{r}(\omega )+iX_{i}(\omega ),$ (18)

and compared with equation (15), we note that

 ${X_{r}}(\omega )=A(\omega )\;\cos \phi (\omega ),$ (19)

and

 ${X_{i}}(\omega )=A(\omega )\;\sin \phi (\omega ).$ (20)
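Equations (15) through (20) are easy to verify numerically. A minimal sketch using a hypothetical single value of X(ω):

```python
import numpy as np

# Hypothetical value of X(omega), equation (18): X = Xr + i*Xi.
Xr, Xi = 3.0, 4.0
X = Xr + 1j * Xi

# Equations (16) and (17): amplitude and phase spectra.
A = np.sqrt(Xr**2 + Xi**2)    # same as np.abs(X)
phi = np.arctan2(Xi, Xr)      # same as np.angle(X); arctan2 keeps the quadrant

# Equations (19) and (20): recover the real and imaginary parts.
print(np.isclose(A * np.cos(phi), Xr))  # True
print(np.isclose(A * np.sin(phi), Xi))  # True
```

Using `arctan2` rather than `arctan` of the ratio in equation (17) avoids losing the quadrant when X<sub>r</sub>(ω) is negative.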

## Images

Below is a picture of a neural network similar to the one we're building:

## Tables

| Date       | Name    | Height (m) |
|------------|---------|------------|
| 01.10.1977 | Smith   | 1.85       |
| 11.6.1972  | Ray     | 1.89       |
| 1.9.1992   | Bianchi | 1.72       |
| Operation | Time Domain | Frequency Domain |
|---|---|---|
| (1) Shifting | $x(t-\tau )$ | $\exp(-i\omega \tau )\,X(\omega )$ |
| (2) Scaling | $x(at)$ | ${\left\vert a\right\vert }^{-1}X\left({\omega }/{a}\right)$ |
| (3) Differentiation | $dx(t)/dt$ | $i\omega X(\omega )$ |
| (4) Addition | $f(t)+x(t)$ | $F(\omega )+X(\omega )$ |
| (5) Multiplication | $f(t)\,x(t)$ | $F(\omega )*X(\omega )$ |
| (6) Convolution | $f(t)*x(t)$ | $F(\omega )\,X(\omega )$ |
| (7) Autocorrelation | $x(t)*x(-t)$ | ${\left\vert X(\omega )\right\vert }^{2}$ |
| (8) Parseval's theorem | $\int {\left\vert x(t)\right\vert }^{2}\,dt$ | $\int {\left\vert X(\omega )\right\vert }^{2}\,d\omega$ |

\* denotes convolution.
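Rows (6) and (8) of the table can be verified with the FFT. A sketch using hypothetical short sequences; note that the discrete form of Parseval's theorem carries a 1/N factor, analogous to the omitted 2π scale factor:

```python
import numpy as np

# Hypothetical sequences; zero-padding to full length n makes circular
# FFT convolution match linear convolution.
f = np.array([1.0, 2.0, 3.0])
x = np.array([0.5, -1.0, 2.0])
n = len(f) + len(x) - 1

# Row (6): convolution in time <=> multiplication in frequency.
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(x, n)).real
direct = np.convolve(f, x)
print(np.allclose(via_fft, direct))  # True

# Row (8), discrete Parseval: sum |x|^2 = (1/N) sum |X|^2.
X = np.fft.fft(x)
print(np.isclose(np.sum(np.abs(x)**2), np.mean(np.abs(X)**2)))  # True
```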

## Code

We are now ready to implement the neural network itself. Neural networks consist of three or more layers: an input layer, one or more hidden layers, and an output layer.

Let's implement a network with one hidden layer. The layers are as follows:

- Input layer: $x^{(i)}$
- Hidden layer: $a_{1}^{(i)}=\sigma (W_{1}x^{(i)}+b_{1})$
- Output layer: ${\hat {y}}^{(i)}=W_{2}a_{1}^{(i)}+b_{2}$

where $x^{(i)}$ is the i-th sample of the input data $X$; $W_{1}$, $b_{1}$, $W_{2}$, and $b_{2}$ are the weight matrices and bias vectors for layers 1 and 2, respectively; and $\sigma$ is our nonlinear function. Applying the nonlinearity to $W_{1}x^{(i)}+b_{1}$ in layer 1 results in the activation $a_{1}$. The output layer yields ${\hat {y}}^{(i)}$, the i-th estimate of the desired output. We're not going to apply the nonlinearity to the output, but people often do. The weights are randomly initialized, and the biases start at zero. During training they will be iteratively updated to encourage the network to converge on an optimal approximation to the expected output.
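This excerpt does not define $\sigma$. A common choice, consistent with the `sigma(a1, forward=False)` call that appears later in `backward()`, is a logistic sigmoid whose `forward` flag switches between the activation and its derivative, written in terms of the activation (since $\sigma'(z)=a(1-a)$ when $a=\sigma(z)$). A sketch under that assumption:

```python
import numpy as np

def sigma(z, forward=True):
    """Assumed logistic sigmoid (forward=True) or its derivative (forward=False).

    When forward=False, z is taken to already be an activation
    a = sigma(z), so the derivative is a * (1 - a).
    """
    if forward:
        return 1 / (1 + np.exp(-z))
    return z * (1 - z)
```

Expressing the derivative via the activation is convenient during back-propagation, because the forward pass has already computed $a_{1}$.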

We'll start by defining the forward pass, using NumPy's @ operator for matrix multiplication:

```python
def forward(xi, W1, b1, W2, b2):
    z1 = W1 @ xi + b1
    a1 = sigma(z1)
    z2 = W2 @ a1 + b2
    return z2, a1
```
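A quick shape check of the forward pass, assuming a logistic sigmoid for `sigma` (not defined in this excerpt) and hypothetical layer sizes of 3 inputs, 4 hidden units, and 1 output:

```python
import numpy as np

def sigma(z, forward=True):
    # Assumed logistic sigmoid; forward=False gives the derivative
    # in terms of the activation.
    if forward:
        return 1 / (1 + np.exp(-z))
    return z * (1 - z)

def forward(xi, W1, b1, W2, b2):
    z1 = W1 @ xi + b1
    a1 = sigma(z1)
    z2 = W2 @ a1 + b2
    return z2, a1

rng = np.random.default_rng(42)
n_in, n_hidden, n_out = 3, 4, 1

# Random weights, zero biases, as described in the text.
W1 = rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden))
b2 = np.zeros(n_out)

xi = rng.standard_normal(n_in)
y_hat, a1 = forward(xi, W1, b1, W2, b2)
print(y_hat.shape, a1.shape)  # (1,) (4,)
```

The hidden activation `a1` lies in (0, 1), as expected from the sigmoid, while the linear output `y_hat` is unbounded.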


Here is the back-propagation algorithm we'll employ:

For each training example:

For each layer:

• Calculate the error.
• Update weights.
• Update biases.

This is straightforward for the output layer. However, to calculate the gradient at the hidden layer, we need to compute the gradient of the error with respect to the weights and biases of the hidden layer, which involves the derivative of the activation. That's why forward() returns the activation a1 as well as the output: backward() needs it to evaluate sigma's derivative.

Let's implement the inner loop as a Python function:

```python
def backward(xi, yi,
             a1, z2,
             params,
             learning_rate):

    err_output = z2 - yi

    derivative = sigma(a1, forward=False)
    err_hidden = err_output * derivative * params['W2']
    grad_W1 = err_hidden[:, None] @ xi[None, :]
```