The model we are interested in is

$$
y_t = X_t\underline{\beta} + u_t, \qquad t = 1, \dots, n
$$

where $t$ refers to time and $u_t$ is either

- an AR(1) process, $u_t = \phi u_{t-1} + \varepsilon_t$, or
- an AR(2) process, $u_t = \phi_1 u_{t-1} + \phi_2 u_{t-2} + \varepsilon_t$,

with $\varepsilon_t \sim \mathrm{N}(0, \sigma^2)$, independent over $t$.
## Likelihood for AR(p) errors

From Hamilton (1994),

$$
L\left(\underline{\beta}, \underline{\phi}, \sigma^2; \underline{y}\right)
= (2\pi\sigma^2)^{-n/2} \left|V_p^{-1}\right|^{1/2}
\exp\left\{ -\frac{1}{2\sigma^2}\left(\underline{y}_p - \underline{\mu}_p\right)' V_p^{-1} \left(\underline{y}_p - \underline{\mu}_p\right)
- \frac{1}{2\sigma^2}\sum^n_{t=p+1}\left( y_t - X_t\underline{\beta} - \sum^p_{i=1}\phi_i\left(y_{t-i} - X_{t-i}\underline{\beta}\right) \right)^2 \right\}
$$

where

- $\left|V_p^{-1}\right|$ is the determinant of the inverted matrix $V_p$,
- $\sigma^2 V_p$ = variance-covariance matrix of order p,
- $\underline{y}_p = (y_1, \dots, y_p)'$,
- $X_t$ is the row of the design matrix corresponding to time $t$, and
- $\underline{\mu}_p = (X_1\underline{\beta}, \dots, X_p\underline{\beta})'$ = function of fitted terms.
## Likelihood for AR(1) errors

Setting p = 1, $\sigma^2 V_1 = \sigma^2/(1-\phi^2)$ = variance of the process, $y$, and $y_1$ is the first observation.

The log likelihood,

$$
\log L\left(\underline{\beta}, \phi, \sigma^2; \underline{y}\right)
= -\frac{n}{2}\log(2\pi\sigma^2) + \frac{1}{2}\log(1-\phi^2)
- \frac{1}{2\sigma^2}\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right]
$$

Differentiating with respect to $\sigma^2$ and equating to zero yields the maximum likelihood estimator,

$$
\hat{\sigma}^2 = \frac{1}{n}\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right]
$$

Substituting $\hat{\sigma}^2$ back into the log likelihood yields (Beach and MacKinnon (1978a))

$$
\log L\left(\underline{\beta}, \phi; \underline{y}\right)
= const. + \frac{1}{2}\log(1-\phi^2)
- \frac{n}{2}\log\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right]
$$

Two additional terms exist in the likelihood that do not appear in the conditional likelihood used in GLS procedures. The terms $\frac{1}{2}\log(1-\phi^2)$ and $(1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2$ ensure that the initial value has an effect on the estimates and constrain the stationarity condition $|\phi| < 1$ to hold.
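The concentration step can be verified numerically; the sketch below (with arbitrary illustrative data, not from the text) checks that substituting $\hat{\sigma}^2 = \mathrm{SSR}/n$ into the full log-likelihood reproduces the concentrated form with $const. = -\frac{n}{2}\left(\log(2\pi/n) + 1\right)$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Verify the concentration step: plugging sigma2_hat = SSR/n back into the full
# log-likelihood gives const + 0.5*log(1 - phi^2) - (n/2)*log(SSR).
# The data below are illustrative; the identity holds for any data vector.
n, phi = 40, 0.6
y = rng.normal(size=n)   # arbitrary data
Xb = np.zeros(n)         # fitted values X_t * beta (zero for simplicity)

e = y - Xb
ssr = (1 - phi**2) * e[0] ** 2 + np.sum((e[1:] - phi * e[:-1]) ** 2)
sigma2_hat = ssr / n

full_ll = (-0.5 * n * np.log(2 * np.pi * sigma2_hat)
           + 0.5 * np.log(1 - phi**2)
           - ssr / (2 * sigma2_hat))
const = -0.5 * n * (np.log(2 * np.pi / n) + 1)
concentrated_ll = const + 0.5 * np.log(1 - phi**2) - 0.5 * n * np.log(ssr)

print(np.isclose(full_ll, concentrated_ll))  # True
```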
### Maximization of the likelihood

Maximization of the likelihood requires iterative or numerical procedures.

The estimate of $\underline{\beta}$ which maximizes the log-likelihood conditional on $\phi$ is

$$
\hat{\underline{\beta}} = \left(X^{*\prime} X^*\right)^{-1} X^{*\prime} \underline{y}^*
$$

where $\underline{y}^* = P\underline{y}$ and $X^* = PX$, where $P$ is the Prais and Winsten transformation

$$
P = \begin{pmatrix}
\sqrt{1-\phi^2} & 0 & \cdots & 0 \\
-\phi & 1 & & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & -\phi & 1
\end{pmatrix}
$$

So a procedure which searches over $\phi \in (-1, 1)$ is all that is required to find maximum likelihood estimates of $\underline{\beta}$, $\phi$, and $\sigma^2$.
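The search can be sketched numerically. The following is a minimal illustration (all data and parameter values are invented, not from the text): it simulates a toy regression with AR(1) errors, concentrates out $\underline{\beta}$ with the Prais and Winsten transformation, and grid-searches the concentrated log-likelihood over $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y_t = b0 + b1*x_t + u_t with AR(1) errors (illustrative values)
n, phi_true, sigma = 200, 0.6, 1.0
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
u = np.zeros(n)
u[0] = rng.normal(scale=sigma / np.sqrt(1 - phi_true**2))  # stationary start
for t in range(1, n):
    u[t] = phi_true * u[t - 1] + rng.normal(scale=sigma)
y = X @ np.array([1.0, 2.0]) + u

def concentrated_loglik(phi, y, X):
    """Concentrated log-likelihood (Beach-MacKinnon 1978a form), up to a constant."""
    n = len(y)
    r = np.sqrt(1 - phi**2)
    # Prais and Winsten transformation of y and each column of X
    ys = np.concatenate([[r * y[0]], y[1:] - phi * y[:-1]])
    Xs = np.vstack([r * X[:1], X[1:] - phi * X[:-1]])
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # beta_hat conditional on phi
    ssr = np.sum((ys - Xs @ beta) ** 2)
    return 0.5 * np.log(1 - phi**2) - 0.5 * n * np.log(ssr), beta, ssr / n

# One-dimensional search over phi in (-1, 1)
grid = np.linspace(-0.99, 0.99, 397)
lls = [concentrated_loglik(p, y, X)[0] for p in grid]
phi_hat = grid[int(np.argmax(lls))]
_, beta_hat, sigma2_hat = concentrated_loglik(phi_hat, y, X)
print(phi_hat, beta_hat, sigma2_hat)
```

The grid search keeps the sketch transparent; in practice a one-dimensional optimizer over $(-1, 1)$ would do the same job faster.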
## Likelihood for AR(2) errors

Following the same method as for AR(1) errors, set p = 2; $\underline{\mu}_2 = (X_1\underline{\beta}, X_2\underline{\beta})'$ is the vector of means for t = 1, 2 and $\underline{y}_2 = (y_1, y_2)'$ is the corresponding vector of observations.

The log likelihood,

$$
\log L\left(\underline{\beta}, \underline{\phi}, \sigma^2; \underline{y}\right)
= -\frac{n}{2}\log(2\pi\sigma^2) + \frac{1}{2}\log\left|V_2^{-1}\right|
- \frac{1}{2\sigma^2}\left[ \left(\underline{y}_2 - \underline{\mu}_2\right)' V_2^{-1} \left(\underline{y}_2 - \underline{\mu}_2\right)
+ \sum^n_{t=3}\left( y_t - X_t\underline{\beta} - \phi_1\left(y_{t-1} - X_{t-1}\underline{\beta}\right) - \phi_2\left(y_{t-2} - X_{t-2}\underline{\beta}\right) \right)^2 \right]
$$

Differentiating with respect to $\sigma^2$ and equating to zero yields the maximum likelihood estimator,

$$
\hat{\sigma}^2 = \frac{1}{n}\left[ \left(\underline{y}_2 - \underline{\mu}_2\right)' V_2^{-1} \left(\underline{y}_2 - \underline{\mu}_2\right)
+ \sum^n_{t=3}\left( y_t - X_t\underline{\beta} - \phi_1\left(y_{t-1} - X_{t-1}\underline{\beta}\right) - \phi_2\left(y_{t-2} - X_{t-2}\underline{\beta}\right) \right)^2 \right]
$$

Substituting $\hat{\sigma}^2$ back into the log likelihood yields

$$
\log L\left(\underline{\beta}, \underline{\phi}; \underline{y}\right)
= const. + \frac{1}{2}\log\left|V_2^{-1}\right|
- \frac{n}{2}\log\left[ \left(\underline{y}_2 - \underline{\mu}_2\right)' V_2^{-1} \left(\underline{y}_2 - \underline{\mu}_2\right)
+ \sum^n_{t=3}\left( y_t - X_t\underline{\beta} - \phi_1\left(y_{t-1} - X_{t-1}\underline{\beta}\right) - \phi_2\left(y_{t-2} - X_{t-2}\underline{\beta}\right) \right)^2 \right]
$$

Simplifying further,

$$
\left|V_2^{-1}\right| = (1+\phi_2)^2 (1-\phi_1-\phi_2)(1+\phi_1-\phi_2)
$$

and

$$
\left(\underline{y}_2 - \underline{\mu}_2\right)' V_2^{-1} \left(\underline{y}_2 - \underline{\mu}_2\right)
= \left(y_1 - X_1\underline{\beta}\right)^2(1-\phi_2^2)
- 2\left(y_1 - X_1\underline{\beta}\right)\left(y_2 - X_2\underline{\beta}\right)\phi_1(1+\phi_2)
+ \left(y_2 - X_2\underline{\beta}\right)^2(1-\phi_2^2)
$$

The likelihood now has the form of that found in Beach and MacKinnon (1978b)
$$
\begin{align}
\log L\left( \underline{\beta}, \underline{\phi}; \underline{y} \right)
&= const. + \log(1+\phi_2) +
\frac{1}{2}\log(1-\phi_1-\phi_2)+\frac{1}{2}\log(1+\phi_1-\phi_2)
\\
&\quad -\frac{n}{2}\log\Bigl[ (y_1 -
X_1\underline{\beta})^2(1-\phi_2^2)-2(y_1-X_1\underline{\beta})(y_2-X_2\underline{\beta})\phi_1(1
+ \phi_2)+(y_2 -X_2\underline{\beta})^2(1-\phi_2^2) \Bigr.
\\
&\qquad \Bigl. + \sum^n_{t=3} \bigl(y_t - X_t\underline{\beta} - \phi_1 (y_{t-1}
-X_{t-1}\underline{\beta}) - \phi_2 (y_{t-2}
-X_{t-2}\underline{\beta})\bigr)^2 \Bigr]
\end{align}
$$
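As a quick numerical sanity check on the AR(2) algebra (a sketch with illustrative parameter values, not part of the original text): build $\sigma^2 V_2$ from the standard stationary AR(2) autocovariances, then compare the direct determinant and inverse of $V_2^{-1}$ with the closed-form determinant and the quadratic-form coefficients appearing in the log-likelihood above.

```python
import numpy as np

# Illustrative phi values, chosen inside the stationarity region
phi1, phi2, sigma2 = 0.5, -0.3, 1.0

# Stationary autocovariances of the AR(2) error process (standard Yule-Walker results)
gamma0 = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2) ** 2 - phi1**2))
gamma1 = phi1 * gamma0 / (1 - phi2)

# sigma^2 * V2 is the covariance matrix of (y1, y2)
V2 = np.array([[gamma0, gamma1], [gamma1, gamma0]]) / sigma2
V2inv = np.linalg.inv(V2)

det_direct = np.linalg.det(V2inv)
det_simplified = (1 + phi2) ** 2 * (1 - phi1 - phi2) * (1 + phi1 - phi2)
print(det_direct, det_simplified)

# The inverse itself matches the quadratic-form coefficients in the likelihood
expected = np.array([[1 - phi2**2, -phi1 * (1 + phi2)],
                     [-phi1 * (1 + phi2), 1 - phi2**2]])
print(np.allclose(V2inv, expected))  # True
```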
### Maximization of the likelihood

As in the AR(1) case, maximizing the likelihood requires numerical methods.

The estimate of $\underline{\beta}$ which maximizes the log-likelihood conditional on $\phi_1$ and $\phi_2$ (Beach and MacKinnon (1978b)) is

$$
\hat{\underline{\beta}} = \left(X^{*\prime} X^*\right)^{-1} X^{*\prime} \underline{y}^*
$$

where $\underline{y}^* = P\underline{y}$ and $X^* = PX$, where $P$ is the AR(2) analogue of the Prais and Winsten transformation:

$$
\begin{align}
y_1^* &= \left[ \frac{(1+\phi_2)\left\{(1-\phi_2)^2 - \phi_1^2\right\}}{1-\phi_2} \right]^{1/2} y_1 \\
y_2^* &= \left(1-\phi_2^2\right)^{1/2} y_2 - \phi_1\left(\frac{1+\phi_2}{1-\phi_2}\right)^{1/2} y_1 \\
y_t^* &= y_t - \phi_1 y_{t-1} - \phi_2 y_{t-2}, \qquad t = 3, \dots, n
\end{align}
$$

So a procedure which searches over $\phi_1$ & $\phi_2$ is all that is required to find maximum likelihood estimates of $\underline{\beta}$, $\underline{\phi}$, and $\sigma^2$.
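As with the AR(1) case, the two-parameter search can be sketched in code. This is a hypothetical illustration (simulated data and invented parameter values): it applies the AR(2) analogue of the Prais and Winsten transformation, concentrates out $\underline{\beta}$ and $\sigma^2$, and grid-searches the concentrated log-likelihood over the stationarity region of $(\phi_1, \phi_2)$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression with AR(2) errors (illustrative values)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
phi1_true, phi2_true = 0.5, -0.3
u = np.zeros(n + 200)  # burn-in so the error process is close to stationary
for t in range(2, n + 200):
    u[t] = phi1_true * u[t - 1] + phi2_true * u[t - 2] + rng.normal()
y = X @ np.array([1.0, 2.0]) + u[-n:]

def transform(z, phi1, phi2):
    """AR(2) Prais-Winsten-type transformation of a data vector or design column."""
    a = np.sqrt((1 + phi2) * ((1 - phi2) ** 2 - phi1**2) / (1 - phi2))
    b = np.sqrt(1 - phi2**2)
    c = phi1 * np.sqrt((1 + phi2) / (1 - phi2))
    head = np.array([a * z[0], b * z[1] - c * z[0]])
    tail = z[2:] - phi1 * z[1:-1] - phi2 * z[:-2]
    return np.concatenate([head, tail])

def concentrated_loglik(phi1, phi2, y, X):
    ys = transform(y, phi1, phi2)
    Xs = np.column_stack([transform(X[:, j], phi1, phi2) for j in range(X.shape[1])])
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # beta_hat conditional on (phi1, phi2)
    ssr = np.sum((ys - Xs @ beta) ** 2)
    det = (1 + phi2) ** 2 * (1 - phi1 - phi2) * (1 + phi1 - phi2)
    return 0.5 * np.log(det) - 0.5 * len(y) * np.log(ssr)

# Grid search restricted to the stationarity region
best, argbest = -np.inf, None
for p1 in np.linspace(-0.95, 0.95, 39):
    for p2 in np.linspace(-0.95, 0.95, 39):
        if p1 + p2 < 1 and p2 - p1 < 1 and abs(p2) < 1:
            ll = concentrated_loglik(p1, p2, y, X)
            if ll > best:
                best, argbest = ll, (p1, p2)
print(argbest)
```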
## Likelihood from first principles

### AR(1)

$$
L = f(y_1) \prod^n_{t=2} f(y_t \mid y_{t-1})
$$

where $f(y_t \mid y_{t-1})$, $t = 2, \dots, n$, is the conditional distribution of $y_t$ given $y_{t-1}$ and $f(y_1)$ is the distribution of the first point.

This is the exact likelihood for an AR(1) process: the conditional likelihood multiplied by the marginal likelihood of the first point.

#### Density of $y_1$

$y_1$ is normally distributed with mean $= X_1\underline{\beta}$, which equates to $\mu_1$, with a variance $= \sigma^2/(1-\phi^2)$, and has density:

$$
f(y_1) = \left(\frac{1-\phi^2}{2\pi\sigma^2}\right)^{1/2} \exp\left\{ -\frac{(1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2}{2\sigma^2} \right\}
$$

#### Density of $y_t \mid y_{t-1}$

The conditional distribution of $y_t$ is also normally distributed, but we get to it in a roundabout way.

Recall

$$
y_t = X_t\underline{\beta} + u_t, \qquad u_t = \phi u_{t-1} + \varepsilon_t
$$

then $u_t = y_t - X_t\underline{\beta}$. So

$$
y_t - X_t\underline{\beta} = \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) + \varepsilon_t
$$

Rearrange to obtain

$$
\varepsilon_t = y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right)
$$

which has a normal distribution with mean = 0 and variance $= \sigma^2$. This results in

$$
f(y_t \mid y_{t-1}) = \left(2\pi\sigma^2\right)^{-1/2} \exp\left\{ -\frac{\left(y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right)\right)^2}{2\sigma^2} \right\}
$$

#### Exact likelihood for AR(1)

The likelihood is therefore:

$$
L\left(\underline{\beta}, \phi, \sigma^2; \underline{y}\right)
= \left(2\pi\sigma^2\right)^{-n/2} \left(1-\phi^2\right)^{1/2}
\exp\left\{ -\frac{1}{2\sigma^2}\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right] \right\}
$$

Taking the logs and simplifying results in:

$$
\log L = -\frac{n}{2}\log(2\pi\sigma^2) + \frac{1}{2}\log(1-\phi^2)
- \frac{1}{2\sigma^2}\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right]
$$

Note that $\left(1-\phi^2\right)^{1/2} = \left|V_1^{-1}\right|^{1/2}$, so this agrees with the Hamilton (1994) likelihood with p = 1. Using the notation of Beach and MacKinnon (1978a) we can simplify the log likelihood, and substituting $\hat{\sigma}^2$ we get

$$
\log L\left(\underline{\beta}, \phi; \underline{y}\right)
= const. + \frac{1}{2}\log(1-\phi^2)
- \frac{n}{2}\log\left[ (1-\phi^2)\left(y_1 - X_1\underline{\beta}\right)^2
+ \sum^n_{t=2}\left( y_t - X_t\underline{\beta} - \phi\left(y_{t-1} - X_{t-1}\underline{\beta}\right) \right)^2 \right]
$$
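The factorization can be checked numerically. The sketch below (illustrative parameter values, not from the text; the regression part is reduced to a constant mean for simplicity) confirms that the marginal density of $y_1$ times the AR(1) conditional densities equals the closed-form exact log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values; X here is just an intercept, so X_t * beta = beta
n, phi, sigma2, beta = 50, 0.7, 1.3, 2.0
u = np.zeros(n)
u[0] = rng.normal(scale=np.sqrt(sigma2 / (1 - phi**2)))  # stationary start
for t in range(1, n):
    u[t] = phi * u[t - 1] + rng.normal(scale=np.sqrt(sigma2))
y = beta + u

def log_norm_pdf(z, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (z - mean) ** 2 / (2 * var)

# Marginal of the first point plus the AR(1) conditionals
ll_product = log_norm_pdf(y[0], beta, sigma2 / (1 - phi**2))
ll_product += sum(log_norm_pdf(y[t], beta + phi * (y[t - 1] - beta), sigma2)
                  for t in range(1, n))

# Closed-form exact log-likelihood from the text
ssr = (1 - phi**2) * (y[0] - beta) ** 2 + np.sum((y[1:] - beta - phi * (y[:-1] - beta)) ** 2)
ll_closed = -0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * np.log(1 - phi**2) - ssr / (2 * sigma2)

print(np.isclose(ll_product, ll_closed))  # True
```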
### AR(2)

Following the same reasoning we can obtain the densities of each of the three components of the likelihood. The exact likelihood is the product of the conditional likelihood, the conditional distribution of $y_2$ given $y_1$, and the marginal distribution of $y_1$:

$$
L = f(y_1)\, f(y_2 \mid y_1) \prod^n_{t=3} f(y_t \mid y_{t-1}, y_{t-2})
$$

#### Density of $y_1$

$y_1$ is normally distributed with mean $= X_1\underline{\beta}$, which equates to $\mu_1$, with a variance

$$
\gamma_0 = \frac{(1-\phi_2)\,\sigma^2}{(1+\phi_2)\left\{(1-\phi_2)^2 - \phi_1^2\right\}}
$$

#### Density of $y_2 \mid y_1$

The conditional distribution of $y_2$ given $y_1$ is more complicated. The errors $(u_1, u_2)$ are jointly normal with common variance $\gamma_0$ and covariance $\gamma_1 = \phi_1\gamma_0/(1-\phi_2)$, so standard bivariate-normal results give

$$
y_2 \mid y_1 \sim \mathrm{N}\left( X_2\underline{\beta} + \frac{\phi_1}{1-\phi_2}\left(y_1 - X_1\underline{\beta}\right),\; \gamma_0\left(1 - \left(\frac{\phi_1}{1-\phi_2}\right)^2\right) \right)
$$

#### Density of $y_t \mid y_{t-1}, y_{t-2}$

Recall

$$
y_t = X_t\underline{\beta} + u_t, \qquad u_t = \phi_1 u_{t-1} + \phi_2 u_{t-2} + \varepsilon_t
$$

then $u_t = y_t - X_t\underline{\beta}$ and

$$
y_t - X_t\underline{\beta} = \phi_1\left(y_{t-1} - X_{t-1}\underline{\beta}\right) + \phi_2\left(y_{t-2} - X_{t-2}\underline{\beta}\right) + \varepsilon_t
$$

So

$$
\varepsilon_t = y_t - X_t\underline{\beta} - \phi_1\left(y_{t-1} - X_{t-1}\underline{\beta}\right) - \phi_2\left(y_{t-2} - X_{t-2}\underline{\beta}\right)
$$

which simplifies to a normal distribution with mean = 0 and variance $= \sigma^2$, which results in

$$
f(y_t \mid y_{t-1}, y_{t-2}) = \left(2\pi\sigma^2\right)^{-1/2} \exp\left\{ -\frac{\varepsilon_t^2}{2\sigma^2} \right\}
$$

#### Exact likelihood for AR(2)

The likelihood is therefore:

$$
L\left(\underline{\beta}, \underline{\phi}, \sigma^2; \underline{y}\right) = f(y_1)\, f(y_2 \mid y_1) \prod^n_{t=3} f(y_t \mid y_{t-1}, y_{t-2})
$$
## References

Beach, Charles M., and James G. MacKinnon. 1978a. “A Maximum Likelihood Procedure for Regression with Autocorrelated Errors.” *Econometrica* 46 (1): 51. https://doi.org/10.2307/1913644.

———. 1978b. “Full Maximum Likelihood Estimation of Second-Order Autoregressive Error Models.” *Journal of Econometrics* 7 (2): 187–98. https://doi.org/10.1016/0304-4076(78)90068-4.

Hamilton, James D. 1994. *Time Series Analysis*. Princeton, N.J.: Princeton University Press.