# Digital prediction error

*Geophysical References Series · Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing · Enders A. Robinson and Sven Treitel · Chapter 15 · DOI: http://dx.doi.org/10.1190/1.9781560801610 · ISBN 9781560801481 · SEG Online Store*

From the preceding section on prediction, one might get the impression that we have favored minimum-delay signals and have slighted nonminimum-delay ones. However, when we look at the overall prediction problem rather than at the restricted problem given in the preceding section, we will correct this impression.

In the foregoing section, we considered only the prediction of the tailgate. The minimum-delay signal ${\displaystyle f_{k}}$ and the tailgate of the advanced signal ${\displaystyle f_{k+\varepsilon }}$ both started at time index ${\displaystyle k=0}$, and there was no concern for any values for ${\displaystyle k<0}$. The signal ${\displaystyle f_{k}}$ is causal, so it does not have any nonzero values before ${\displaystyle k=0}$, but the advanced signal ${\displaystyle f_{k+\varepsilon }}$ does have nonzero values in the range ${\displaystyle -\varepsilon \leq k<0}$. In fact, the advanced signal starts at ${\displaystyle k=-\varepsilon }$, whereas the signal ${\displaystyle f_{k}}$ starts at ${\displaystyle k=0}$. Because the prediction filter is causal, no output can occur before ${\displaystyle k=0}$, the start of the input signal. Thus, the segment of the advanced signal from ${\displaystyle k=-\varepsilon }$ to ${\displaystyle k=0}$ cannot appear at the output of the filter and thus represents a front-end prediction error. However, as we have seen in the preceding section, perfect prediction of the input signal is obtained at ${\displaystyle k=0}$ and beyond, provided that the input signal is minimum delay.

In summary, the minimum-delay signal ${\displaystyle f_{k}}$ is the input to the causal prediction system. We found the output ${\displaystyle {\hat {f}}_{k}\left(\varepsilon \right)}$ of the prediction system to be the advanced signal ${\displaystyle f_{k+\varepsilon }}$. Because the prediction system is causal, we cannot obtain the front end of ${\displaystyle f_{k+\varepsilon }}$, from ${\displaystyle k=-\varepsilon }$ to ${\displaystyle k=-1}$, as output. However, we can obtain exactly the tailgate of ${\displaystyle f_{k+\varepsilon }}$, for ${\displaystyle k=0,1,2,\ldots }$, at the output of the filter. The front end of the advanced signal represents the front-end prediction error, and this error cannot be reduced.

Let us discuss notation. If the input of the prediction system ${\displaystyle E\left(Z\right)}$ is ${\displaystyle f_{k}}$, then the actual output is denoted by ${\displaystyle {\hat {f}}_{k}(\varepsilon )}$. The desired output is the advanced signal ${\displaystyle f_{k+\varepsilon }}$, where ${\displaystyle \varepsilon >0}$. Thus, the actual output ${\displaystyle {\hat {f}}_{k}(\varepsilon )}$ represents the predicted value of the input signal. Both the input ${\displaystyle f_{k}}$ and the predicted value ${\displaystyle {\hat {f}}_{k}(\varepsilon )}$ occur at time index k. That is, the subscript in ${\displaystyle {\hat {f}}_{k}(\varepsilon )}$ indicates the time of occurrence. The prediction error that occurs at time index k is denoted by

 {\displaystyle {\begin{aligned}{\tilde {f}}_{k}\left(\varepsilon \right)=f_{k+\varepsilon }-{\hat {f}}_{k}\left(\varepsilon \right).\end{aligned}}} (152)

The notation ${\displaystyle {\tilde {f}}_{k}\left(\varepsilon \right)}$ should be read as “the prediction error (that is obtained at the present time k) in predicting the future value ${\displaystyle f_{k+\varepsilon }}$ (that will not be known until the future time ${\displaystyle k+\varepsilon }$).” In the symbol ${\displaystyle {\tilde {f}}_{k}\left(\varepsilon \right)}$, the tilde stands for prediction error, the subscript k stands for the time at which the prediction occurs, and ${\displaystyle \varepsilon }$ stands for the prediction distance. That is, ${\displaystyle {\tilde {f}}_{k}\left(\varepsilon \right)}$ is the prediction error made in estimating the unknown future value ${\displaystyle f_{k+\varepsilon }}$, the prediction being made at the present time k.

Let us now examine the prediction error ${\displaystyle {\tilde {f_{k}}}(\varepsilon )}$ in more detail. Suppose that a minimum-delay digital signal ${\displaystyle f_{k}}$ (Table 4, first row below the column headings) is the input to the prediction system

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {{\rm {Z}}\left(f_{k+\varepsilon }\right)}{{\rm {Z}}\left(f_{k}\right)}}={\frac {f_{\varepsilon }+f_{\varepsilon +1}Z+f_{\varepsilon +2}Z^{2}+\ldots }{f_{0}+f_{1}Z+f_{2}Z^{2}+\ldots }},\end{aligned}}} (153)

where Z denotes the one-sided Z-transform (equation 27). Note that ${\displaystyle F\left(Z\right)=Z\left(f_{k}\right)}$. The advanced signal is ${\displaystyle f_{k+\varepsilon }}$ (Table 4, second row). The actual output ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$ has the Z-transform given by

 {\displaystyle {\begin{aligned}F\left(Z\right)E\left(Z\right)=Z\left(f_{k}\right){\rm {\ }}{\frac {{\rm {Z}}\left(f_{k+\varepsilon }\right)}{{\rm {Z}}\left(f_{k}\right)}}=Z\left(f_{k+\varepsilon }\right).\end{aligned}}} (154)

Thus, the predicted value ${\displaystyle {\hat {f}}_{k}\left(\varepsilon \right)}$ (Table 4, third row) is the causal tailgate of the advanced input ${\displaystyle f_{k+\varepsilon }}$; that is, the predicted values are ${\displaystyle \ldots ,\;{\hat {f}}_{-2}(\varepsilon )=0,\;{\hat {f}}_{-1}(\varepsilon )=0,\;{\hat {f}}_{0}(\varepsilon )=f_{\varepsilon },\;{\hat {f}}_{1}(\varepsilon )=f_{\varepsilon +1},\;{\hat {f}}_{2}(\varepsilon )=f_{\varepsilon +2},\ldots }$. Hence, the prediction error ${\displaystyle {\tilde {f}}_{k}\left(\varepsilon \right)}$ (Table 4, fourth row) is the anticausal front end of the advanced input ${\displaystyle f_{k+\varepsilon }}$; that is, the prediction error is ${\displaystyle \ldots ,\;{\tilde {f}}_{-\varepsilon -1}(\varepsilon )=0,\;{\tilde {f}}_{-\varepsilon }(\varepsilon )=f_{0},\;{\tilde {f}}_{-\varepsilon +1}(\varepsilon )=f_{1},\;\ldots ,\;{\tilde {f}}_{-1}(\varepsilon )=f_{\varepsilon -1},\;{\tilde {f}}_{0}(\varepsilon )=0,\;{\tilde {f}}_{1}(\varepsilon )=0,\;{\tilde {f}}_{2}(\varepsilon )=0}$. Therefore, the error energy is the front-end energy of the minimum-delay input; that is,

 {\displaystyle {\begin{aligned}\sum _{k=-\varepsilon }^{-1}{f_{k+\varepsilon }^{2}}=f_{0}^{2}+f_{1}^{2}+\ldots +f_{\varepsilon -1}^{2}.\end{aligned}}} (155)
Table 4. Minimum-delay signal, advanced signal, predicted values, and prediction error.

| Time k | ${\displaystyle -\varepsilon -1}$ | ${\displaystyle -\varepsilon }$ | ${\displaystyle -\varepsilon +1}$ | ... | –2 | –1 | 0 | 1 | 2 | ... |
|---|---|---|---|---|---|---|---|---|---|---|
| Minimum-delay signal | 0 | 0 | 0 | ... | 0 | 0 | ${\displaystyle f_{0}}$ | ${\displaystyle f_{1}}$ | ${\displaystyle f_{2}}$ | ... |
| Advanced signal | 0 | ${\displaystyle f_{0}}$ | ${\displaystyle f_{1}}$ | ... | ${\displaystyle f_{\varepsilon -2}}$ | ${\displaystyle f_{\varepsilon -1}}$ | ${\displaystyle f_{\varepsilon }}$ | ${\displaystyle f_{\varepsilon +1}}$ | ${\displaystyle f_{\varepsilon +2}}$ | ... |
| Predicted values | 0 | 0 | 0 | ... | 0 | 0 | ${\displaystyle f_{\varepsilon }}$ | ${\displaystyle f_{\varepsilon +1}}$ | ${\displaystyle f_{\varepsilon +2}}$ | ... |
| Prediction error | 0 | ${\displaystyle f_{0}}$ | ${\displaystyle f_{1}}$ | ... | ${\displaystyle f_{\varepsilon -2}}$ | ${\displaystyle f_{\varepsilon -1}}$ | 0 | 0 | 0 | ... |

This is the minimum possible error energy attainable by any causal linear time-invariant prediction system: the front end of the advanced signal occurs before a causal filter can produce any output, so this error cannot be reduced.
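This front-end result can be illustrated numerically. The sketch below assumes the simple minimum-delay signal ${\displaystyle f_{k}=a^{k}}$ (an arbitrary illustrative choice, not from the text), for which the prediction operator of equation 153 reduces to the constant ${\displaystyle a^{\varepsilon }}$:

```python
# Numerical sketch of the front-end prediction error, assuming the
# illustrative minimum-delay signal f_k = a**k (for which the prediction
# operator of equation 153 reduces to the constant a**eps).
a, eps, N = 0.5, 3, 50

f = [a**k for k in range(N)]                  # minimum-delay input f_k
predicted = [a**eps * fk for fk in f]         # actual output: a**eps * f_k
advanced = [a**(k + eps) for k in range(N)]   # desired output f_{k+eps}, k >= 0

# The causal tailgate (k >= 0) is predicted perfectly ...
tail_error = max(abs(p - d) for p, d in zip(predicted, advanced))

# ... so the only error is the unreachable front end f_0, ..., f_{eps-1}
# (equation 155).
front_end_energy = sum(f[j]**2 for j in range(eps))

print(tail_error)        # 0.0
print(front_end_energy)  # 1 + 0.25 + 0.0625 = 1.3125
```

Only the front-end samples contribute error energy; the tailgate is recovered exactly, as the table above shows.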

Next let us convert the minimum-delay signal ${\displaystyle f_{k}}$ into the nonminimum-delay signal ${\displaystyle g_{k}}$ by passing ${\displaystyle f_{k}}$ through a nontrivial all-pass system ${\displaystyle p_{k}}$; that is,

 {\displaystyle {\begin{aligned}g_{k}=p_{k}*f_{k}.\end{aligned}}} (156)

We also pass the predicted value ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$, as well as the prediction error ${\displaystyle {\tilde {f_{k}}}\left(\varepsilon \right)}$, through the same all-pass system; that is, we have

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(\varepsilon \right)=p_{k}*{\hat {f}}_{k}\left(\varepsilon \right)\\{\tilde {g}}_{k}\left(\varepsilon \right)=p_{k}*{\tilde {f}}_{k}\left(\varepsilon \right).\end{aligned}}} (157)

Note that ${\displaystyle {\hat {f_{k}}}\left(\varepsilon \right)}$ and ${\displaystyle {\tilde {f_{k}}}\left(\varepsilon \right)}$ are, respectively, the causal tailgate and the anticausal front end of the advanced signal ${\displaystyle f_{k+\varepsilon }}$. That is, the identity “desired output equals actual output plus error,” or

 {\displaystyle {\begin{aligned}f_{k+\varepsilon }={\hat {f}}_{k}\left(\varepsilon \right)+{\tilde {f}}_{k}\left(\varepsilon \right),\end{aligned}}} (158)

represents a clean split of ${\displaystyle f_{k+\varepsilon }}$ into its causal and anticausal parts. This clean split happens only in the case of a minimum-delay signal ${\displaystyle f_{k}}$. Such a clean split does not happen for ${\displaystyle g_{k}}$. That is, in the identity

 {\displaystyle {\begin{aligned}g_{k+\varepsilon }={\hat {g}}_{k}(\varepsilon )+{\tilde {g}}_{k}(\varepsilon ),\end{aligned}}} (159)

the actual output ${\displaystyle {\hat {g}}_{k}\left(\varepsilon \right)}$ is causal, but the prediction error ${\displaystyle {\tilde {g}}_{k}\left(\varepsilon \right)}$ has both a causal component and an anticausal component. The anticausal component, of course, is the anticausal front end of the advanced signal ${\displaystyle g_{k+\varepsilon }}$. This anticausal front end cannot be reached by a causal prediction system, so it must represent prediction error.

In contrast, the causal tailgate of the advanced signal ${\displaystyle g_{k+\varepsilon }}$ cannot be predicted perfectly, so the prediction error ${\displaystyle {\tilde {g}}_{k}\left(\varepsilon \right)}$ also has a causal component. However, the all-pass energy-delay theorem (equation 88) tells us that the two prediction errors ${\displaystyle {\tilde {f}}_{k}\left(\varepsilon \right)}$ and ${\displaystyle {\tilde {g}}_{k}\left(\varepsilon \right)}$ have the same energy:

 {\displaystyle {\begin{aligned}\sum _{k=-\varepsilon }^{-1}{\left[{\tilde {f}}_{k}\left(\varepsilon \right)\right]}^{2}=\sum _{k=-\varepsilon }^{-1}{\left[{\tilde {g}}_{k}\left(\varepsilon \right)\right]}^{2}+\sum _{k=0}^{\infty }{\left[{\tilde {g}}_{k}\left(\varepsilon \right)\right]}^{2}.\end{aligned}}} (160)
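Equation 160 holds because an all-pass system passes energy through unchanged (the energy-delay property of equation 88). A small numerical sketch, assuming the first-order all-pass ${\displaystyle P(Z)=(-a+Z)/(1-aZ)}$ with an arbitrary ${\displaystyle a}$ and an arbitrary test signal:

```python
# Sketch: convolving a signal with a (truncated) all-pass impulse response
# leaves the total energy unchanged; this is why the f- and g-prediction
# errors in equation 160 have equal energy. Values of a and s are arbitrary.
a, N = 0.5, 400

# Impulse response of P(Z) = (-a + Z)/(1 - a*Z):
# p_0 = -a, p_k = (1 - a**2)*a**(k-1) for k >= 1 (truncated to N terms).
p = [-a] + [(1 - a**2) * a**(k - 1) for k in range(1, N)]

s = [1.0, 2.0, 3.0]  # arbitrary finite-energy test signal

# Full linear convolution s * p.
out = [sum(s[n] * p[k - n] for n in range(len(s)) if 0 <= k - n < N)
       for k in range(len(s) + N - 1)]

energy_in = sum(x * x for x in s)     # 14.0
energy_out = sum(x * x for x in out)
energy_gap = abs(energy_out - energy_in)
print(energy_gap < 1e-9)              # True, up to truncation error
```

Because the all-pass only redistributes energy in time (delaying it), the split between the anticausal and causal parts of the error can change, but the total cannot.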

Thus, we have the following prediction-system theorem (Robinson, 1963, theorem 10 (g), p. 206). Let ${\displaystyle g_{k}}$ be a stable causal signal with canonical representation ${\displaystyle g_{k}=p_{k}*f_{k}}$, where ${\displaystyle p_{k}}$ is all-pass and ${\displaystyle f_{k}}$ is minimum delay. Then we can obtain the predicted value ${\displaystyle {\hat {g}}_{k}\left(\varepsilon \right)}$ for prediction distance ${\displaystyle \varepsilon >0}$ by passing ${\displaystyle g_{k}}$ into the prediction system

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {{\rm {Z}}\left(f_{k+\varepsilon }\right)}{{\rm {Z}}\left(f_{k}\right)}}.\end{aligned}}} (161)

The predicted value is equal to the output of the all-pass system with the causal tailgate of ${\displaystyle f_{k+\varepsilon }}$ as input; that is,

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(\varepsilon \right)=\sum _{n=0}^{\infty }f_{n+\varepsilon }\,p_{k-n}.\end{aligned}}} (162)

The prediction error

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(\varepsilon \right)=g_{k+\varepsilon }-{\hat {g}}_{k}\left(\varepsilon \right)\end{aligned}}} (163)

is equal to the output of the all-pass system with the input being the anticausal front end of ${\displaystyle f_{k+\varepsilon }}$; that is,

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(\varepsilon \right)=\sum _{n=-\varepsilon }^{-1}f_{n+\varepsilon }\,p_{k-n}.\end{aligned}}} (164)

The prediction-error energy is equal to the front-end energy of the minimum-delay signal; that is,

 {\displaystyle {\begin{aligned}\sum _{k=-\varepsilon }^{\infty }\left[{\tilde {g}}_{k}(\varepsilon )\right]^{2}=\sum _{k=-\varepsilon }^{-1}\left[f_{k+\varepsilon }\right]^{2}=f_{0}^{2}+f_{1}^{2}+\ldots +f_{\varepsilon -1}^{2}.\end{aligned}}} (165)

As an example of the theorem, let us predict the stable causal signal

 {\displaystyle {\begin{aligned}g_{k}=-a^{k+1}+ka^{k-1}\left(1-a^{2}\right)\quad ({\text{where }}a{\text{ is real with }}|a|<1{\text{ and }}k=0,1,2,\dots )\end{aligned}}} (166)

for a prediction distance ${\displaystyle \varepsilon =1}$. First, we find its Z-transform:

 {\displaystyle {\begin{aligned}G\left(Z\right)=-\sum _{k=0}^{\infty }a^{k+1}Z^{k}+\sum _{k=1}^{\infty }ka^{k-1}\left(1-a^{2}\right)Z^{k}\\=-a\sum _{k=0}^{\infty }a^{k}Z^{k}+\left(1-a^{2}\right)Z\sum _{k=1}^{\infty }ka^{k-1}Z^{k-1}\\=-a\sum _{k=0}^{\infty }a^{k}Z^{k}+\left(1-a^{2}\right)Z\sum _{i=0}^{\infty }\left(i+1\right)a^{i}Z^{i}\\={\frac {-a}{1-aZ}}+{\frac {\left(1-a^{2}\right)Z}{\left(1-aZ\right)^{2}}}={\frac {-a+Z}{\left(1-aZ\right)^{2}}}.\end{aligned}}} (167)

Thus, we can write ${\displaystyle G\left(Z\right)}$ in the form

 {\displaystyle {\begin{aligned}G\left(Z\right)={\frac {-a+Z}{1-aZ}}\,{\frac {1}{1-aZ}},\end{aligned}}} (168)

where the first factor is the all-pass system ${\displaystyle P\left(Z\right)}$ and the second factor is the minimum-delay system ${\displaystyle F\left(Z\right)}$. Thus, the minimum-delay signal ${\displaystyle f_{k}}$ is

 {\displaystyle {\begin{aligned}f_{k}=a^{k}\;\;\;{\text{(where}}\ k=0,1,2,\dots ).\end{aligned}}} (169)
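As a quick consistency check on the factorization, the power-series coefficients of ${\displaystyle G(Z)}$ from equation 167 can be compared with the time-domain formula of equation 166. A sketch, with an arbitrary value of ${\displaystyle a}$:

```python
# Sketch: expand G(Z) = (-a + Z)/(1 - a*Z)**2 as a power series and check
# its coefficients against g_k = -a**(k+1) + k*a**(k-1)*(1 - a**2)
# (equation 166). The value of a is arbitrary (|a| < 1).
a, N = 0.5, 30

# 1/(1 - a*Z)**2 = sum_{k>=0} (k+1)*a**k * Z**k
sq = [(k + 1) * a**k for k in range(N)]

# Multiply by the numerator (-a + Z): coefficient of Z**k.
G = [-a * sq[k] + (sq[k - 1] if k >= 1 else 0.0) for k in range(N)]

g = [-a**(k + 1) + k * a**(k - 1) * (1 - a**2) for k in range(N)]

coeff_gap = max(abs(x - y) for x, y in zip(G, g))
print(coeff_gap < 1e-12)   # True
```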

Therefore, the least-error-energy-predicting system is

 {\displaystyle {\begin{aligned}E\left(Z\right)={\frac {{\rm {Z}}\left(a^{k+1}\right)}{{\rm {Z}}\left(a^{k}\right)}}=a,\end{aligned}}} (170)

which is merely the constant a. Thus, the predicted value of ${\displaystyle g_{k+1}}$ is obtained by multiplying ${\displaystyle g_{k}}$ by the constant a; that is,

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(1\right)=ag_{k}=-a^{k+2}+ka^{k}\left(1-a^{2}\right)\quad ({\text{where }}k=0,1,2,\dots ).\end{aligned}}} (171)

The prediction error is

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(1\right)=g_{k+1}\quad {\text{for }}k=-1\\{\tilde {g}}_{k}\left(1\right)=g_{k+1}-ag_{k}\quad {\text{for }}k=0,1,2,\dots .\end{aligned}}} (172)

The two components are the front-end prediction error

 {\displaystyle {\begin{aligned}{\tilde {g}}_{-1}\left(1\right)=g_{0}=-a\quad {\text{for }}k=-1\end{aligned}}} (173)

and the tailgate prediction error

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(1\right)=\left[-a^{k+2}+\left(k+1\right)a^{k}\left(1-a^{2}\right)\right]-a\left[-a^{k+1}+ka^{k-1}\left(1-a^{2}\right)\right]\\=\left(1-a^{2}\right)a^{k}\quad {\text{for }}k=0,1,2,\dots .\end{aligned}}} (174)

The error energy is the sum of

 {\displaystyle {\begin{aligned}{\text{Front-end error energy}}=(-a)^{2}=a^{2}\\{\text{Tailgate error energy}}=(1-a^{2})^{2}\left[1+a^{2}+a^{4}+\ldots \right]=(1-a^{2})^{2}{\frac {1}{1-a^{2}}}=1-a^{2}.\end{aligned}}} (175)

Thus, the total prediction-error energy is ${\displaystyle a^{2}+\left(1-a^{2}\right)}$, which is unity.
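This unit total is easy to check numerically. A sketch, with an arbitrary value of ${\displaystyle a}$ and a truncated tailgate sum:

```python
# Sketch: for g_k = -a**(k+1) + k*a**(k-1)*(1 - a**2) and prediction
# distance 1, the predictor is E(Z) = a (equation 170). The error is the
# front-end term g_0 plus the tailgate terms g_{k+1} - a*g_k (equation 172);
# its total energy should be a**2 + (1 - a**2) = 1. The value of a is
# arbitrary (|a| < 1).
a, N = 0.6, 200

def g(k):
    return -a**(k + 1) + k * a**(k - 1) * (1 - a**2)

front_end_energy = g(0)**2                                  # (-a)**2 = a**2
tailgate_energy = sum((g(k + 1) - a * g(k))**2 for k in range(N))

total_energy = front_end_energy + tailgate_energy
print(abs(total_energy - 1.0) < 1e-9)   # True
```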

As a check, let us verify the results of the foregoing example by using another approach. We now want to predict the minimum-delay signal ${\displaystyle f_{k}}$ and pass the result through the all-pass filter to predict ${\displaystyle g_{k}}$. Again, we let ${\displaystyle \varepsilon =1}$. The prediction of the minimum-delay signal is

 {\displaystyle {\begin{aligned}{\hat {f}}_{k}\left(1\right)=af_{k}\quad ({\text{for }}k=0,1,2,\dots ).\end{aligned}}} (176)

The prediction error is

 {\displaystyle {\begin{aligned}{\tilde {f}}_{k}\left(1\right)=f_{k+1}\quad {\text{for }}k=-1\\{\tilde {f}}_{k}\left(1\right)=f_{k+1}-af_{k}\quad {\text{for }}k=0,1,2,\ldots .\end{aligned}}} (177)

Respectively, the two components are the front-end prediction error (for ${\displaystyle k=-1}$)

 {\displaystyle {\begin{aligned}{\tilde {f}}_{-1}(1)=f_{0}=a^{0}=1\end{aligned}}} (178)

and the tailgate prediction error (for k = 0, 1, 2, …)

 {\displaystyle {\begin{aligned}{\tilde {f_{k}}}\left(1\right)=a^{k+1}-a\left(a^{k}\right)=0.\end{aligned}}} (179)

This result confirms our knowledge that the tailgate prediction error of a minimum-delay signal is zero, and the entire prediction error is the front-end prediction error. The prediction ${\displaystyle {\hat {g}}_{k}(1)}$ of the signal ${\displaystyle g_{k}}$ can be obtained by passing the prediction ${\displaystyle {\hat {f}}_{k}\left(1\right)}$ through the all-pass filter; that is,

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(1\right)={\hat {f}}_{k}\left(1\right)*p_{k}\end{aligned}}} (180)

or

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(1\right)=a\left(a^{k}\right)*p_{k}.\end{aligned}}} (181)

From equation 168, the all-pass system is

 {\displaystyle {\begin{aligned}P\left(Z\right)={\frac {-a+Z}{1-aZ}},\end{aligned}}} (182)

which can be expanded as

 {\displaystyle {\begin{aligned}P\left(Z\right)=-a+{\frac {\left(1-a^{2}\right)Z}{1-aZ}},\end{aligned}}} (183)

which is

 {\displaystyle {\begin{aligned}P\left(Z\right)=-a+\left(1-a^{2}\right)Z\sum _{i=0}^{\infty }{a^{i}}Z^{i}=-a+\left(1-a^{2}\right)\sum _{k=1}^{\infty }{a^{k-1}}Z^{k}.\end{aligned}}} (184)

Thus, the all-pass signal is

 {\displaystyle {\begin{aligned}p_{k}=-a\quad {\text{for }}k=0\\p_{k}=(1-a^{2})a^{k-1}\quad {\text{for }}k=1,2,3,\dots .\end{aligned}}} (185)

Therefore, from equations 181 and 185, the prediction is

 {\displaystyle {\begin{aligned}{\hat {g}}_{k}\left(1\right)=a^{k+1}*p_{k}=-a^{k+2}+ka^{k}\left(1-a^{2}\right)\quad {\text{for }}k=0,1,2,\ldots .\end{aligned}}} (186)
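Equation 186 can be verified by carrying out the convolution directly. A sketch, with an arbitrary value of ${\displaystyle a}$:

```python
# Sketch: convolve the predicted minimum-delay signal a**(k+1) with the
# all-pass response p_k of equation 185, and compare the result with the
# closed form -a**(k+2) + k*a**k*(1 - a**2) of equation 186.
# The value of a is arbitrary (|a| < 1).
a, N = 0.5, 40

fhat = [a**(k + 1) for k in range(N)]                      # \hat f_k(1)
p = [-a] + [(1 - a**2) * a**(k - 1) for k in range(1, N)]  # p_k

# Causal convolution, first N output samples.
ghat = [sum(fhat[n] * p[k - n] for n in range(k + 1)) for k in range(N)]

closed = [-a**(k + 2) + k * a**k * (1 - a**2) for k in range(N)]

conv_gap = max(abs(x - y) for x, y in zip(ghat, closed))
print(conv_gap < 1e-12)   # True
```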

Likewise, the prediction error ${\displaystyle {\tilde {g}}_{k}\left(1\right)}$ is obtained by passing the prediction error ${\displaystyle {\tilde {f}}_{k}\left(1\right)}$ through the all-pass operator; that is,

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(1\right)={\tilde {f}}_{k}\left(1\right)*p_{k}.\end{aligned}}} (187)

Because ${\displaystyle {\tilde {f_{k}}}\left(1\right)}$ consists of only the front-end component

 {\displaystyle {\begin{aligned}{\tilde {f}}_{k}\left(1\right)=a^{0}\,\delta _{k+1}=\delta _{k+1},\end{aligned}}} (188)

we have

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(1\right)=a^{0}\,\delta _{k+1}*p_{k},\end{aligned}}} (189)

which is

 {\displaystyle {\begin{aligned}{\tilde {g}}_{k}\left(1\right)=p_{k+1}.\end{aligned}}} (190)

In this case, we see that the prediction error is given by the advanced all-pass signal. Thus, the front-end prediction error is ${\displaystyle {\tilde {g}}_{-1}\left(1\right)=p_{0}=-a}$ for ${\displaystyle k=-1}$, which is the same as equation 173. The tailgate prediction error is ${\displaystyle {\tilde {g}}_{k}\left(1\right)=\left(1-a^{2}\right)a^{k}}$ for ${\displaystyle k=0,1,2,\ldots }$, which is the same as equation 174. The total prediction-error energy is equal to the front-end prediction-error energy of the minimum-delay signal; that is, ${\displaystyle \left[{\tilde {f}}_{-1}\left(1\right)\right]^{2}=1}$.
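The agreement between the two approaches can be confirmed numerically: the error ${\displaystyle g_{k+1}-ag_{k}}$ computed directly from equation 172 should match the advanced all-pass signal ${\displaystyle p_{k+1}}$ of equation 190. A sketch, with an arbitrary value of ${\displaystyle a}$:

```python
# Sketch: the tailgate prediction error g_{k+1} - a*g_k (equation 172)
# should equal the advanced all-pass signal p_{k+1} (equation 190), and
# the front-end error g_0 should equal p_0 = -a. The value of a is
# arbitrary (|a| < 1).
a, N = 0.5, 40

def g(k):
    return -a**(k + 1) + k * a**(k - 1) * (1 - a**2)

def p(k):
    return -a if k == 0 else (1 - a**2) * a**(k - 1)

err = [g(k + 1) - a * g(k) for k in range(N)]   # tailgate error, k >= 0
adv = [p(k + 1) for k in range(N)]              # p_{k+1}

match_gap = max(abs(x - y) for x, y in zip(err, adv))
print(match_gap < 1e-12)   # True
print(g(0) == p(0))        # True: front-end error equals p_0 = -a
```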

## Continue reading

Previous section: Digital prediction · Next section: Analog prediction error
Previous chapter · Next chapter