# Digital prediction

From the SEG Geophysical References Series no. 15, *Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing*, by Enders A. Robinson and Sven Treitel. DOI: http://dx.doi.org/10.1190/1.9781560801610. ISBN 9781560801481.

The prototype minimum-delay digital signal ${\displaystyle f_{k}}$ is the causal damped geometric signal ${\displaystyle f_{k}=a^{k}}$ for k = 0, 1, 2, …, where ${\displaystyle |a|<1}$ (Table 3, first row below the column headings). We want to let this signal be the input to a causal, linear time-invariant system, which we call the extrapolation (or prediction) system E(Z). For the desired output, we choose the advanced signal (say, for the advance ${\displaystyle \varepsilon =2}$) ${\displaystyle f_{k+2}=a^{k+2}}$ for k = …, –3, –2, –1, 0, 1, 2, … (Table 3, second row). The front end ${\displaystyle a^{0},\ a^{1}}$ (Table 3, third row) of the advanced signal is anticausal, whereas the tailgate ${\displaystyle a^{2},\ a^{3},\ a^{4},\ a^{5},\ a^{6},\ldots }$ (Table 3, fourth row) is causal. Because the prediction system is causal, it cannot reach the anticausal front end; thus, it can predict only the causal tailgate. For the tailgate, we demand perfect prediction. We will allow no prediction error whatsoever.

**Table 3.**

| Time k | … | –3 | –2 | –1 | 0 | 1 | 2 | 3 | 4 | … |
|---|---|---|---|---|---|---|---|---|---|---|
| Minimum-delay signal ${\displaystyle f_{k}}$ | … | 0 | 0 | 0 | 1 | a | ${\displaystyle a^{2}}$ | ${\displaystyle a^{3}}$ | ${\displaystyle a^{4}}$ | … |
| Advanced signal ${\displaystyle f_{k+2}}$ | … | 0 | 1 | a | ${\displaystyle a^{2}}$ | ${\displaystyle a^{3}}$ | ${\displaystyle a^{4}}$ | ${\displaystyle a^{5}}$ | ${\displaystyle a^{6}}$ | … |
| Front end of ${\displaystyle f_{k+2}}$ | … | 0 | 1 | a | 0 | 0 | 0 | 0 | 0 | … |
| Tailgate of ${\displaystyle f_{k+2}}$ | … | 0 | 0 | 0 | ${\displaystyle a^{2}}$ | ${\displaystyle a^{3}}$ | ${\displaystyle a^{4}}$ | ${\displaystyle a^{5}}$ | ${\displaystyle a^{6}}$ | … |

The problem is to find an expression for the causal prediction filter ${\displaystyle E\left(Z\right)}$. A naive person would proceed in this way. Both the input and the desired output (i.e., the tailgate) are causal, so we can use the one-sided Z-transform. We recall that the one-sided Z-transform is denoted by Z (equation 27). The one-sided Z-transform of the input is

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k}\right)=\sum \limits _{k=0}^{\infty }a^{k}Z^{k}={\frac {1}{1-aZ}},\end{aligned}}} (99)

whereas the one-sided Z-transform of the tailgate of the desired output is

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k+\varepsilon }\right)=\sum _{k=0}^{\infty }{a^{k+\varepsilon }}Z^{k}=a^{\varepsilon }{\rm {\ }}\sum _{k=0}^{\infty }{a^{k}}Z^{k}={\frac {a^{\varepsilon }}{1-aZ}}.\end{aligned}}} (100)

The prediction distance, or advance (${\displaystyle \varepsilon }$), is always a positive integer. The required prediction filter has a transfer function given by the ratio of the output Z-transform over the input Z-transform; that is,

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {{\rm {Z}}\left(f_{k+\varepsilon }\right)}{{\rm {Z}}\left(f_{k}\right)}}={\frac {a^{\varepsilon }{\left(1-aZ\right)}^{-1}}{{\left(1-aZ\right)}^{-1}}}=a^{\varepsilon }.\end{aligned}}} (101)

This formula is correct, because if we multiply the input signal ${\displaystyle a^{k}}$ by ${\displaystyle a^{\varepsilon }}$, we indeed get the advanced value ${\displaystyle a^{k+\varepsilon }}$. A little thought tells us that everywhere the geometric signal has the same shape, so by using a constant attenuation ${\displaystyle a^{\varepsilon }}$, we can change the present shape ${\displaystyle a^{k}}$ into the future shape ${\displaystyle a^{k+\varepsilon }}$. (Note: Two curves are said to have the same shape if one is a constant factor times the other.)
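This can be checked with a quick numerical sketch (the values of a and ${\displaystyle \varepsilon }$ here are illustrative assumptions): scaling the geometric signal by the constant ${\displaystyle a^{\varepsilon }}$ reproduces the advanced values exactly.

```python
# Illustrative check: for the damped geometric signal f_k = a^k with |a| < 1,
# the predictor E(Z) = a**eps is just a constant gain, and its output matches
# the advanced (tailgate) values a^(k+eps) with no error.
a, eps = 0.8, 2                               # assumed example values
f = [a**k for k in range(10)]                 # causal minimum-delay input f_k = a^k
predicted = [a**eps * fk for fk in f]         # output of E(Z) = a**eps
actual = [a**(k + eps) for k in range(10)]    # true advanced values f_{k+eps}
assert all(abs(p - t) < 1e-12 for p, t in zip(predicted, actual))
```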

The formula ${\displaystyle E\left(Z\right)=Z\left(f_{k+\varepsilon }\right){\rm {/Z}}\left(f_{k}\right)}$ certainly works in the case in which ${\displaystyle f_{k}}$ is a geometric signal. Why did we go to the bother of defining minimum delay? One reason is that this formula for ${\displaystyle E\left(Z\right)}$ works not only for a geometric signal but also for any minimum-delay signal. In fact, we can say this: Perfect prediction in the sense defined above is possible if and only if the input signal is minimum delay. For a minimum-delay signal, ${\displaystyle E\left(Z\right)}$ gives the causal prediction system for prediction distance ${\displaystyle \varepsilon }$. The prediction of a nonminimum-delay signal will be treated in the next section.

Let us look at the AR(2) system

 {\displaystyle {\begin{aligned}y_{k}+{\alpha }_{1}y_{k-1}+{\alpha }_{2}y_{k-2}=u_{k}.\end{aligned}}} (102)

We want to find the prediction system that predicts the impulse response ${\displaystyle f_{k}}$. First of all, we know that the impulse response of an AR system is necessarily minimum delay. The impulse response satisfies the difference equation

 {\displaystyle {\begin{aligned}f_{k}+{\alpha }_{1}f_{k-1}+{\alpha }_{2}f_{k-2}={\delta }_{k}\mathrm {\;\;for\;} k=0,1,2,\ldots .\end{aligned}}} (103)

The Z-transform of the impulse response is the transfer function

 {\displaystyle {\begin{aligned}F\left(Z\right)={\frac {1}{1+{\alpha }_{1}Z+{\alpha }_{2}Z^{2}}}={\frac {1}{\left(1-a_{1}Z\right)\left(1-a_{2}Z\right)}},\end{aligned}}} (104)

where the poles ${\displaystyle a_{1}^{-1}}$ and ${\displaystyle a_{2}^{-1}}$ lie outside the unit circle; that is, ${\displaystyle |a_{1}|<1}$ and ${\displaystyle |a_{2}|<1}$. The impulse response ${\displaystyle f_{k}}$ can be found by expanding ${\displaystyle F\left(Z\right)}$ in partial fractions. We have

 {\displaystyle {\begin{aligned}F\left(Z\right)={\frac {A_{1}}{1-a_{1}Z}}+{\frac {A_{2}}{1-a_{2}Z}},\end{aligned}}} (105)

where

 {\displaystyle {\begin{aligned}A_{1}={\frac {a_{1}}{a_{1}-a_{2}}},\;\;\;\;A_{2}={\frac {a_{2}}{a_{2}-a_{1}}}.\end{aligned}}} (106)

Thus, the impulse response is the causal minimum-delay signal

 {\displaystyle {\begin{aligned}f_{k}=A_{1}a_{1}^{k}+A_{2}a_{2}^{k}\mathrm {\;\;\;for\;} k=0,1,2,\ldots .\end{aligned}}} (107)
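As a hedged numerical sketch (the roots chosen here are assumptions for illustration), the closed form of equation 107 can be compared against the impulse response generated directly by the recursion of equation 103:

```python
# Illustrative check: the partial-fraction closed form f_k = A1*a1**k + A2*a2**k
# (equations 106-107) should reproduce the AR(2) impulse response generated by
# the recursion f_k + alpha1*f_{k-1} + alpha2*f_{k-2} = delta_k (equation 103).
a1, a2 = 0.9, -0.5                        # assumed roots with |a1|, |a2| < 1
alpha1, alpha2 = -(a1 + a2), a1 * a2      # AR coefficients (equation 115)
A1 = a1 / (a1 - a2)                       # weights from equation 106
A2 = a2 / (a2 - a1)

# Impulse response by recursion (equation 103)
n = 20
f = []
for k in range(n):
    delta = 1.0 if k == 0 else 0.0
    prev1 = f[k-1] if k >= 1 else 0.0
    prev2 = f[k-2] if k >= 2 else 0.0
    f.append(delta - alpha1 * prev1 - alpha2 * prev2)

# Impulse response by partial fractions (equation 107)
g = [A1 * a1**k + A2 * a2**k for k in range(n)]
assert all(abs(x - y) < 1e-12 for x, y in zip(f, g))
```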

That is, the AR(2) impulse response is a weighted average of two geometric signals with the weights ${\displaystyle A_{1}}$ and ${\displaystyle A_{2}}$ given above. We want to feed ${\displaystyle f_{k}}$ into the causal prediction filter ${\displaystyle E\left(Z\right)}$ and obtain as output the advanced signal ${\displaystyle f_{k+\varepsilon }}$. As we stated above, E(Z) is the ratio of the one-sided Z-transform of the output over the one-sided Z-transform of the input. Of course, we know that the input Z-transform is F(Z). We can obtain symmetry in our expression for E(Z) if we write the input Z-transform F(Z) as

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k}\right)=Z\left(A_{1}a_{1}^{k}+A_{2}a_{2}^{k}\right)=A_{1}{\rm {Z}}\left(a_{1}^{k}\right)+A_{2}{\rm {Z}}\left(a_{2}^{k}\right).\end{aligned}}} (108)

The output Z-transform is

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k+\varepsilon }\right)={\rm {Z}}\left(A_{1}a_{1}^{k+\varepsilon }+A_{2}a_{2}^{k+\varepsilon }\right)=A_{1}a_{1}^{\varepsilon }{\rm {Z}}\left(a_{1}^{k}\right)+A_{2}a_{2}^{\varepsilon }{\rm {Z}}\left(a_{2}^{k}\right).\end{aligned}}} (109)

Thus, the required prediction system is

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {a_{1}^{\varepsilon }A_{1}{\rm {Z}}\left(a_{1}^{k}\right)+a_{2}^{\varepsilon }A_{2}{\rm {Z}}\left(a_{2}^{k}\right)}{A_{1}{\rm {Z}}\left(a_{1}^{k}\right)+A_{2}{\rm {Z}}\left(a_{2}^{k}\right)}},\end{aligned}}} (110)

which is

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {a_{1}^{\varepsilon }a_{1}{\left(a_{1}-a_{2}\right)}^{-1}{\left(1-a_{1}Z\right)}^{-1}+a_{2}^{\varepsilon }a_{2}{\left(a_{2}-a_{1}\right)}^{-1}{\left(1-a_{2}Z\right)}^{-1}}{a_{1}{\left(a_{1}-a_{2}\right)}^{-1}{\left(1-a_{1}Z\right)}^{-1}+a_{2}{\left(a_{2}-a_{1}\right)}^{-1}{\left(1-a_{2}Z\right)}^{-1}}}.\end{aligned}}} (111)

If we simplify this expression, we obtain

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {a_{1}^{\varepsilon +1}-a_{2}^{\varepsilon +1}}{a_{1}-a_{2}}}-a_{1}a_{2}{\frac {a_{1}^{\varepsilon }-a_{2}^{\varepsilon }}{a_{1}-a_{2}}}Z.\end{aligned}}} (112)

This is the general expression for the prediction system with prediction distance ${\displaystyle \varepsilon }$. Therefore, for unit prediction distance ${\displaystyle \varepsilon =1}$, the prediction system is

 {\displaystyle {\begin{aligned}E_{1}(Z)={\frac {a_{1}^{2}-a_{2}^{2}}{a_{1}-a_{2}}}-a_{1}a_{2}Z=a_{1}+a_{2}-a_{1}a_{2}Z.\end{aligned}}} (113)

Because the relationship between the AR coefficients ${\displaystyle \alpha _{1}}$ and ${\displaystyle \alpha _{2}}$ and the roots ${\displaystyle a_{1}}$ and ${\displaystyle a_{2}}$ is

 {\displaystyle {\begin{aligned}1+{\alpha }_{1}Z+{\alpha }_{2}Z^{2}=\left(1-a_{1}Z\right)\left(1-a_{2}Z\right),\end{aligned}}} (114)

we have

 {\displaystyle {\begin{aligned}\alpha _{1}=-(a_{1}+a_{2}),\;\;\;\;\alpha _{2}=a_{1}a_{2}.\end{aligned}}} (115)

Thus, the prediction system for ${\displaystyle \varepsilon =1}$ is

 {\displaystyle {\begin{aligned}E_{1}(Z)=-\alpha _{1}-\alpha _{2}Z.\end{aligned}}} (116)

If ${\displaystyle f_{k}}$ is the input (at time index k) to the prediction system (with prediction distance ${\displaystyle \varepsilon }$), then we define ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$ as the output (at the same time index k). The notation ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$ should be read as “the predicted value (obtained at the present time k) of the future value ${\displaystyle f_{k+\varepsilon }}$ (which will not be known until the future time ${\displaystyle k+\varepsilon }$).” In other words, the positive integer ${\displaystyle \varepsilon }$ is the prediction distance. In the symbol ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$, the caret stands for predicted value, the subscript k stands for the time at which the prediction is made, and ${\displaystyle \varepsilon }$ stands for the prediction distance. That is, ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$ is the prediction of the future value ${\displaystyle f_{k+\varepsilon }}$, the prediction being made at the present time k. Because here ${\displaystyle \varepsilon =1}$, we can write

 {\displaystyle {\begin{aligned}{\hat {f_{k}}}\left(\varepsilon \right)={\hat {f_{k}}}\left(1\right)=\left(-{\alpha }_{1}-{\alpha }_{2}Z\right)f_{k},\end{aligned}}} (117)

which is

 {\displaystyle {\begin{aligned}{\hat {f_{k}}}\left(1\right)=-{\alpha }_{1}f_{k}-{\alpha }_{2}f_{k-1}.\end{aligned}}} (118)

Because ${\displaystyle f_{k}}$ is causal, it is zero for negative k. Thus, this equation gives

 {\displaystyle {\begin{aligned}{\hat {f_{0}}}\left(1\right)&=-{\alpha }_{1}f_{0}\\{\hat {f_{1}}}\left(1\right)&=-{\alpha }_{1}f_{1}-{\alpha }_{2}f_{0}\\{\hat {f_{2}}}\left(1\right)&=-{\alpha }_{1}f_{2}-{\alpha }_{2}f_{1}\\&\;\;\vdots \end{aligned}}} (119)

With the initial condition ${\displaystyle f_{0}={\delta }_{0}=1}$, the above equations 119 are equivalent to the equations

 {\displaystyle {\begin{aligned}f_{k}+{\alpha }_{1}f_{k-1}+{\alpha }_{2}f_{k-2}={\delta }_{k}\mathrm {\;\;for\;} k=0,1,2,\ldots ,\end{aligned}}} (120)

which generate the impulse response ${\displaystyle f_{k}}$. Therefore, it follows that

 {\displaystyle {\begin{aligned}{\hat {f}}_{k-1}(1)=f_{k},\end{aligned}}} (121)

which shows that the prediction system perfectly predicts the tailgate ${\displaystyle f_{1}}$, ${\displaystyle f_{2},f_{3}}$, ... of the impulse response. In a nutshell, we can find the prediction system with ${\displaystyle \varepsilon =1}$ for the impulse response of an AR(2) system by inspection. If the AR(2) system is

 {\displaystyle {\begin{aligned}y_{k}+{\alpha }_{1}y_{k-1}+{\alpha }_{2}y_{k-2}=u_{k},\end{aligned}}} (122)

then the prediction system is the MA(1) system given by

 {\displaystyle {\begin{aligned}E_{1}\left(Z\right)=-{\alpha }_{1}-{\alpha }_{2}Z.\end{aligned}}} (123)
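A brief numerical sketch (with assumed stable coefficients) confirms that this MA(1) predictor reproduces the tailgate of the AR(2) impulse response with no error:

```python
# Illustrative check: the MA(1) predictor E1(Z) = -alpha1 - alpha2*Z applied to
# the AR(2) impulse response gives f_hat_k(1) = f_{k+1} exactly (equation 121).
alpha1, alpha2 = -0.4, 0.03   # assumed coefficients; roots a1 = 0.3, a2 = 0.1
n = 15
f = []
for k in range(n):            # impulse response from equation 103
    delta = 1.0 if k == 0 else 0.0
    f.append(delta - alpha1 * (f[k-1] if k >= 1 else 0.0)
                   - alpha2 * (f[k-2] if k >= 2 else 0.0))

# One-step prediction: f_hat_k(1) = -alpha1*f_k - alpha2*f_{k-1}
for k in range(n - 1):
    f_hat = -alpha1 * f[k] - alpha2 * (f[k-1] if k >= 1 else 0.0)
    assert abs(f_hat - f[k+1]) < 1e-12
```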

As an example, let us find the prediction system that (for ${\displaystyle \varepsilon =1}$) predicts the impulse response of the AR(p) system given by

 {\displaystyle {\begin{aligned}y_{k}+{\alpha }_{1}y_{k-1}+\ldots +{\alpha }_{p}y_{k-p}=u_{k}.\end{aligned}}} (124)

By inspection, the prediction system is

 {\displaystyle {\begin{aligned}E_{1}\left(Z\right)=-{\alpha }_{1}-{\alpha }_{2}Z-\ldots -{\alpha }_{p}Z^{p-1}.\end{aligned}}} (125)
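For a concrete case, here is a hedged sketch with p = 3 and assumed stable coefficients: the one-step prediction applies the taps ${\displaystyle -\alpha _{1},-\alpha _{2},-\alpha _{3}}$ to ${\displaystyle f_{k},f_{k-1},f_{k-2}}$ and recovers ${\displaystyle f_{k+1}}$ exactly.

```python
# Illustrative check for an AR(3) system: one-step prediction by inspection,
# f_hat_k(1) = -alpha1*f_k - alpha2*f_{k-1} - alpha3*f_{k-2}.
alphas = [-0.6, 0.11, -0.006]   # assumed coefficients; roots 0.1, 0.2, 0.3
n = 15
f = []
for k in range(n):              # AR(3) impulse response by recursion
    delta = 1.0 if k == 0 else 0.0
    f.append(delta - sum(a * (f[k-1-i] if k-1-i >= 0 else 0.0)
                         for i, a in enumerate(alphas)))

for k in range(n - 1):          # one-step prediction matches the tailgate
    f_hat = -sum(a * (f[k-i] if k-i >= 0 else 0.0)
                 for i, a in enumerate(alphas))
    assert abs(f_hat - f[k+1]) < 1e-12
```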

Let us look at the minimum-delay MA(1) system

 {\displaystyle {\begin{aligned}F\left(Z\right)={\rm {Z}}\left(f_{k}\right)=1-bZ\;\;\;({\text{where}}\ |b|<1).\end{aligned}}} (126)

By inspection, we see that the causal impulse response is

 {\displaystyle {\begin{aligned}f_{k}={\delta }_{k}-b{\delta }_{k-1}\;\;\;{\text{for}}\ k=0,1,2,...,\end{aligned}}} (127)

which in longhand is

 {\displaystyle {\begin{aligned}f_{0}=1,\ f_{1}=-b,\ f_{2}=0,\ f_{3}=0,\ldots .\end{aligned}}} (128)

Thus

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k+1}\right)&=f_{1}+f_{2}Z+f_{3}Z^{2}+\ldots =-b,\\{\rm {Z}}\left(f_{k+2}\right)&=f_{2}+f_{3}Z+f_{4}Z^{2}+\ldots =0,\end{aligned}}} (129)

and in general,

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k+\varepsilon }\right)=f_{\varepsilon }+f_{\varepsilon +1}Z+f_{\varepsilon +2}Z^{2}+\ldots =0\;\;{\text{for}}\;\varepsilon >1.\end{aligned}}} (130)

Thus, the prediction system for prediction distance ${\displaystyle \varepsilon }$ is

 {\displaystyle {\begin{aligned}E_{\varepsilon }\left(Z\right)={\frac {{\rm {Z}}\left(f_{k+\varepsilon }\right)}{{\rm {Z}}\left(f_{k}\right)}}={\frac {-b}{1-bZ}}\;\;{\text{for}}\;\varepsilon =1,\;\;{\text{and}}\;E_{\varepsilon }\left(Z\right)=0\;\;{\text{for}}\;\varepsilon =2,3,4,\ldots .\end{aligned}}} (131)

So if the prediction distance ${\displaystyle \varepsilon }$ is greater than one, only the trivial prediction ${\displaystyle {\hat {f}}\left(\varepsilon \right)=0}$ can be obtained. The one-step prediction system is

 {\displaystyle {\begin{aligned}E_{1}\left(Z\right)={\frac {-b}{1-bZ}}=-b\left(1+bZ+b^{2}Z^{2}+b^{3}Z^{3}+\dots \right).\end{aligned}}} (132)

Therefore, the prediction is

 {\displaystyle {\begin{aligned}{\hat {f_{k}}}\left(1\right)=\left(-b-b^{2}Z-b^{3}Z^{2}-b^{4}Z^{3}-\ldots \right)f_{k},\end{aligned}}} (133)

which is

 {\displaystyle {\begin{aligned}{\hat {f_{k}}}\left(1\right)=-bf_{k}-b^{2}f_{k-1}-b^{3}f_{k-2}-b^{4}f_{k-3}-\ldots .\end{aligned}}} (134)

Because ${\displaystyle f_{k}}$ is causal, we have the system of equations

 {\displaystyle {\begin{aligned}{\hat {f_{0}}}\left(1\right)&=-bf_{0}\\{\hat {f_{1}}}\left(1\right)&=-bf_{1}-b^{2}f_{0}\\{\hat {f_{2}}}\left(1\right)&=-bf_{2}-b^{2}f_{1}-b^{3}f_{0}\\&\;\;\vdots \end{aligned}}} (135)

Because the impulse response ${\displaystyle f_{k}}$ is minimum delay, perfect prediction is obtained. Thus,

 {\displaystyle {\begin{aligned}{\hat {f_{0}}}\left(1\right)=f_{1},\ {\hat {f_{1}}}\left(1\right)=f_{2},\ {\hat {f_{2}}}\left(1\right)=f_{3},\ldots ,\end{aligned}}} (136)

so the above system of equations becomes

 {\displaystyle {\begin{aligned}f_{1}&=-bf_{0}\\f_{2}&=-bf_{1}-b^{2}f_{0}\\f_{3}&=-bf_{2}-b^{2}f_{1}-b^{3}f_{0}\\&\;\;\vdots \end{aligned}}} (137)

With the initial condition ${\displaystyle f_{0}=1}$, we find

 {\displaystyle {\begin{aligned}f_{1}&=-b\\f_{2}&=-b\left(-b\right)-b^{2}=0\\f_{3}&=-b\left(0\right)-b^{2}\left(-b\right)-b^{3}=0\\&\;\;\vdots \end{aligned}}} (138)

which indeed is the impulse response of the given MA(1) system.
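As a hedged numerical sketch (b is an assumed example value), the one-step predictor ${\displaystyle E_{1}\left(Z\right)=-b/\left(1-bZ\right)}$ can be run as a recursion, since its output ${\displaystyle y_{k}}$ satisfies ${\displaystyle y_{k}-by_{k-1}=-bf_{k}}$:

```python
# Illustrative check: for the MA(1) signal f = (1, -b, 0, 0, ...), running the
# AR(1) predictor E1(Z) = -b/(1 - bZ) as y_k = b*y_{k-1} - b*f_k reproduces
# the tailgate f_{k+1} exactly.
b = 0.7                            # assumed value with |b| < 1
n = 10
f = [1.0, -b] + [0.0] * (n - 2)    # MA(1) impulse response (equation 128)
y_prev = 0.0
for k in range(n - 1):
    y = b * y_prev - b * f[k]      # predicted value f_hat_k(1)
    assert abs(y - f[k+1]) < 1e-12
    y_prev = y
```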

Let us look next at the digital minimum-delay ARMA(1,1) system

 {\displaystyle {\begin{aligned}y_{k}-ay_{k-1}=x_{k}-bx_{k-1},\end{aligned}}} (139)

where ${\displaystyle |a|<1}$ and ${\displaystyle |b|<1}$. The transfer function is

 {\displaystyle {\begin{aligned}F\left(Z\right)={\rm {Z}}\left(f_{k}\right)={\frac {1-bZ}{1-aZ}}=1+{\frac {\left(a-b\right)Z}{1-aZ}}.\end{aligned}}} (140)

The minimum-delay impulse response is

 {\displaystyle {\begin{aligned}f_{0}=1,\ f_{1}=a-b,\ f_{2}=\left(a-b\right)a,\ f_{3}=\left(a-b\right)a^{2},\ldots ,\ f_{k+\varepsilon }=\left(a-b\right)a^{k+\varepsilon -1},\ldots ,\end{aligned}}} (141)

so

 {\displaystyle {\begin{aligned}{\rm {Z}}\left(f_{k+\varepsilon }\right)=\left(a-b\right)a^{\varepsilon -1}\sum _{k=0}^{\infty }{a^{k}}Z^{k}=\left(a-b\right)a^{\varepsilon -1}{\left(1-aZ\right)}^{-1}.\end{aligned}}} (142)

Thus, the prediction system is

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {1-aZ}{1-bZ}}\,{\frac {a-b}{1-aZ}}\,a^{\varepsilon -1}={\frac {a-b}{1-bZ}}\,a^{\varepsilon -1},\end{aligned}}} (143)

which is an AR(1) system.
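A short sketch (with assumed values of a and b, and ${\displaystyle \varepsilon =1}$) runs this AR(1) predictor as the recursion ${\displaystyle y_{k}=by_{k-1}+\left(a-b\right)f_{k}}$ and confirms perfect prediction of the tailgate:

```python
# Illustrative check: for the ARMA(1,1) impulse response of equation 141, the
# predictor E(Z) = (a - b)*a**(eps - 1)/(1 - bZ) with eps = 1 is the recursion
# y_k = b*y_{k-1} + (a - b)*f_k, whose output equals f_{k+1}.
a, b = 0.9, 0.4                    # assumed values with |a| < 1, |b| < 1
n = 12
f = [1.0] + [(a - b) * a**(k - 1) for k in range(1, n)]  # equation 141
y_prev = 0.0
for k in range(n - 1):
    y = b * y_prev + (a - b) * f[k]   # predicted value f_hat_k(1)
    assert abs(y - f[k+1]) < 1e-12
    y_prev = y
```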

Next let us consider the minimum-delay ARMA(2,1) system

 {\displaystyle {\begin{aligned}F\left(Z\right)={\rm {Z}}\left(f_{k}\right)={\frac {1-bZ}{\left(1-a_{1}Z\right)\left(1-a_{2}Z\right)}},\end{aligned}}} (144)

where ${\displaystyle |b|<1}$, ${\displaystyle |a_{1}|<1}$, and ${\displaystyle |a_{2}|<1}$. The partial fraction expansion is

 {\displaystyle {\begin{aligned}F\left(Z\right)={\frac {A_{1}}{1-a_{1}Z}}+{\frac {A_{2}}{1-a_{2}Z}},\end{aligned}}} (145)

where

 {\displaystyle {\begin{aligned}A_{1}=(a_{1}-b)(a_{1}-a_{2})^{-1},\;\;\;\;\;A_{2}=(a_{2}-b)(a_{2}-a_{1})^{-1}.\end{aligned}}} (146)

The expansion of the fractions gives

 {\displaystyle {\begin{aligned}F\left(Z\right)=A_{1}\sum \limits _{k=0}^{\infty }a_{1}^{k}Z^{k}+A_{2}\sum \limits _{k=0}^{\infty }a_{2}^{k}Z^{k},\end{aligned}}} (147)

so the impulse response is

 {\displaystyle {\begin{aligned}f_{k}=A_{1}a_{1}^{k}+A_{2}a_{2}^{k}.\end{aligned}}} (148)

and the advanced signal is

 {\displaystyle {\begin{aligned}f_{k+\varepsilon }=A_{1}a_{1}^{k+\varepsilon }+A_{2}a_{2}^{k+\varepsilon }.\end{aligned}}} (149)

Thus, the prediction system is

 {\displaystyle {\begin{aligned}{\begin{array}{l}E(Z)={\frac {{\bf {Z}}(f_{k+\varepsilon })}{{\bf {Z}}(f_{k})}}={\frac {a_{1}^{\varepsilon }A_{1}{\bf {Z}}(a_{1}^{k})+a_{2}^{\varepsilon }A_{2}{\bf {Z}}(a_{2}^{k})}{A_{1}{\bf {Z}}(a_{1}^{k})+A_{2}{\bf {Z}}(a_{2}^{k})}},\\\;\;\;\;\;\;\;\;={\frac {a_{1}^{\varepsilon }(a_{1}-b)(a_{1}-a_{2})^{-1}(1-a_{1}Z)^{-1}+a_{2}^{\varepsilon }(a_{2}-b)(a_{2}-a_{1})^{-1}(1-a_{2}Z)^{-1}}{(a_{1}-b)(a_{1}-a_{2})^{-1}(1-a_{1}Z)^{-1}+(a_{2}-b)(a_{2}-a_{1})^{-1}(1-a_{2}Z)^{-1}}}\\\end{array}}\end{aligned}}} (150)

which, simplified, is

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {a_{1}^{\varepsilon }\left(a_{1}-b\right)\left(1-a_{2}Z\right)-a_{2}^{\varepsilon }\left(a_{2}-b\right)\left(1-a_{1}Z\right)}{\left(a_{1}-a_{2}\right)\left(1-bZ\right)}}.\end{aligned}}} (151)

We see that the prediction system is an ARMA(1,1) system.
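For ${\displaystyle \varepsilon =1}$, equation 151 reduces (by our own simplification, with the roots and b below chosen as illustrative assumptions) to ${\displaystyle E\left(Z\right)=\left(a_{1}+a_{2}-b-a_{1}a_{2}Z\right)/\left(1-bZ\right)}$, which can be run as the recursion ${\displaystyle y_{k}=by_{k-1}+\left(a_{1}+a_{2}-b\right)f_{k}-a_{1}a_{2}f_{k-1}}$:

```python
# Illustrative check: the ARMA(1,1) predictor obtained from equation 151 with
# eps = 1, run as y_k = b*y_{k-1} + (a1 + a2 - b)*f_k - a1*a2*f_{k-1},
# reproduces the tailgate f_{k+1} of the ARMA(2,1) impulse response.
a1, a2, b = 0.8, -0.5, 0.2        # assumed values, all magnitudes < 1
A1 = (a1 - b) / (a1 - a2)         # weights from equation 146
A2 = (a2 - b) / (a2 - a1)
n = 12
f = [A1 * a1**k + A2 * a2**k for k in range(n)]  # equation 148
y_prev = 0.0
for k in range(n - 1):
    f_prev = f[k-1] if k >= 1 else 0.0
    y = b * y_prev + (a1 + a2 - b) * f[k] - a1 * a2 * f_prev
    assert abs(y - f[k+1]) < 1e-12
    y_prev = y
```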