A comparative simulation study of AR(1) estimators in short time series
Tanja Krone · Casper J. Albers · Marieke E. Timmerman

Heymans Institute for Psychological Research, Psychometrics and Statistics, Grote Kruisstraat 2/1, 9712 TS Groningen, The Netherlands
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method with a flat prior (Bf) and with a symmetrized reference prior (Bsr). In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power across conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all estimators is poor, as expected. In Study 2, we study the robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing them with an AR(1) model. We use the two methods with the lowest bias in Study 1, i.e., Bsr and MLE. The bias grows as the non-modelled moving average parameter becomes larger. Both the variability and the power depend on the non-modelled parameter. The differences between the two estimation methods are negligible on all measures.
Keywords: Time series analysis; Misspecification
1 Introduction

Time series analysis has been valuable for achieving insight into the nature of longitudinal
processes. Especially the autoregressive moving average (ARMA) model
(Box and Jenkins 1976)
has gained enormous popularity in various research areas. The autoregressive part
models the serial dependence between consecutive measurements. The moving average
part models the serial dependence between consecutive error terms. The ARMA(p, q)
model is given by:
$$y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + \sum_{j=1}^{q} \theta_j e_{t-j} + e_t,$$

with y_t the score at time t (t = 1, 2, ..., T), μ the population mean, φ_i the autocorrelation for lag i (i = 1, 2, ..., p), θ_j the moving average parameter at lag j (j = 1, 2, ..., q), and e_t the error term at time t.
One of the simplest versions of the ARMA(p, q) is the AR(1) model:
$$y_t = \mu + \phi (y_{t-1} - \mu) + e_t,$$
where, for simplicity, the subscript 1 is omitted from /. Several estimation methods have
been proposed to estimate the AR(1) model. These estimation methods include closed form
estimation methods, such as the r1 estimator
(Yule 1927; Walker 1931; Box and Jenkins 1976) and the ordinary least squares (OLS) estimator, and iterative
estimation methods, such as frequentist Maximum Likelihood Estimation (MLE) and
Bayesian Markov Chain Monte Carlo (MCMC) estimation. The performance of the closed
form estimation methods in terms of efficiency has been examined and compared in several studies (Huitema and McKean 1991; DeCarlo and Tryon 1993; Arnau and Bono 2001; Solanas et al. 2010). Generally, in particular for shorter time series (e.g., length T ≤ 50), the closed form estimation methods have been shown to have biased
autocorrelation estimates and/or high variability. Because the closed form and iterative estimation
methods have not been mutually compared so far, it is unclear which estimation methods
perform better in terms of having a low bias and variability under relevant conditions for
empirical practice. Further, little is known about the robustness of the specific estimation
methods towards misspecification of the model. This knowledge is important to optimize a
time series research design, and to select a low-variability, low-bias, and robust method for
estimating an AR(1) model in empirical practice.
In this paper, we discuss two studies to assess the relative performance of several
estimators of the AR(1) model. We focus on short time series, with a length T between 10
and 100. Even though these lengths are relevant, for example in psychological research,
they are not thoroughly studied yet for all estimators we compare. For the autocorrelation
we use values between -1 and 1, and hence consider stationary time series. Our first study
provides the information needed to make an informed choice between the estimation
methods for the AR(1) model. To this end, we selected five popular and/or promising
estimation methods. In a simulation study, we compare these methods with regard to bias,
standard error, the bias of the standard error, the rejection rate for φ = 0, the power for φ ≠ 0, and the point and 95 % interval estimates.
Our second study focuses on the issue of robustness. Robustness, as used in this paper,
is the resilience to misspecification with regard to the number of parameters. The effects of
misspecification of the ARMA(1,1), AR(1) and AR(2) model have been studied for the
least squares estimator. For an underspecified model, the parameters become more biased
when the unspecified parameters are further from zero
(Tanaka and Maekawa 1984).
Overspecification of the model gives a larger prediction mean squared error for the
estimation of the score at yt (Kunitomo and Yamamoto 1985). To study the robustness with
regard to misspecification, we use the two estimation methods that showed the lowest bias
in the first study. In the misspecification study we generate the data using an ARMA(1,1)
model, but estimate the parameters as if the data was generated using the same AR(1) =
ARMA(1,0) model as used in Study 1.
In the next section, we describe the selection process of the estimation methods used in
this paper, followed by a short introduction to the estimators. Then, we present the design,
performance criteria and the results of the first simulation study, which aims at comparing
various estimators when applied to short time series following an AR(1) model. We
continue with the design and results of the second simulation study, which aims at
exploring the effect of underspecifying a short time series following an ARMA(1,1) model
as data following an AR(1) model. We conclude our paper with a discussion of the
simulation studies and the implications of the results.
2 Selection of estimation methods
To start, we performed a literature search for estimation methods for AR(1) models. Our selection criteria for the papers were as follows: (1) it must discuss one or more simulation studies that compare different estimators of the AR(1) model; (2) it must include conditions with fewer than 50 time points and a range of values between -1 and 1 for the autocorrelation φ. This literature search revealed the five papers shown in Table 1.
The earliest discussed estimator is the r1 estimator, as implemented in the Yule–Walker method (Yule 1927; Box and Jenkins 1976). However, since several
studies have found that the bias of r1 for small samples is large, especially for data with a
positive autocorrelation, various alternatives were proposed
(Huitema and McKean 1991,
1994; DeCarlo and Tryon 1993; Arnau and Bono 2001; Solanas et al. 2010)
. A selection of
these is given by name in Table 1. Note that most alternatives are based on the original r1,
as can be deduced from the names using ‘r’ or ‘r1’ and a sub- or superscript. In general, the
modifications of r1 showed a smaller bias than r1 itself, but a larger variability of the estimates (Huitema and McKean 1991, 1994; Arnau and Bono 2001; Solanas et al. 2010), except for the estimators r1+ and the C-statistic. In direct comparisons between r1+ and the C-statistic, it was shown that the C-statistic had a smaller average bias and a smaller average mean square error, and thus a smaller variability, over different values of φ than the r1+ estimation method.
Apart from the modifications of r1, another closed form solution may be used. The
ordinary least squares (OLS) estimator is used in many different applications, most notably
in regression analysis. Since the autocorrelation may be interpreted as a special kind of
regression parameter, OLS can be used to find the autocorrelation. In comparisons, the
OLS estimator showed a smaller bias than most derivatives of the r1 estimator (Huitema and McKean 1991; Solanas et al. 2010). However, the OLS estimator also showed a slightly larger mean squared error than most r1 derivatives. These comparisons between
estimators reveal a bias-variance tradeoff in the autocorrelation estimator.
Two important methods that are not found in the comparisons listed in Table 1, are the
frequentist MLE and Bayesian MCMC estimation. Though simulation studies using MLE
have been done, those studies did not include the conditions of our primary interest. For
example, the studies considered different ARMA(p, q)-models (Stoica et al. 1986; Pantula and Fuller 1985; Garcia-Hiernaux et al. 2009), had no condition with fewer than 100 time points (Cox and Llatas 1991), or were aimed at examining other parts of the estimation process, such as deciding on which ARMA(p, q)-model to use (Watson and Nicholls 1992). This was the same for papers using Bayesian MCMC estimation. Examples of this
are studies that have no systematic comparison using different estimators
(West and Wilcox 1996)
or use lagged cross-correlation. The MLE and Bayesian MCMC have become often-used methods of
analysis in different fields and applications.
3 Estimation methods
In the next paragraphs we describe the five different estimation methods used in this paper.
3.1 The r1 estimator in the Yule–Walker method
The Yule–Walker method for ARMA models (Yule 1927; Walker 1931; Box and Jenkins 1976) may be the best known estimation method in time series analysis. It uses the r1 estimator to estimate the lag 1 autocorrelation:

$$\hat{\phi}_{r_1} = \frac{\sum_{t=1}^{T-1} (y_t - \bar{y})(y_{t+1} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2},$$

where y_t is the observed score at time t (t = 1, 2, ..., T) and ȳ is the mean score over the T observations. Asymptotically, the autocorrelation function for this series is biased by −(1 + 4φ)/T (Kendall and Ord 1990). This bias has empirically been shown to be as large as −0.73 for T = 6 and φ = 0.90 (DeCarlo and Tryon 1993). This empirical bias is surprisingly close to the asymptotic bias of −0.77. To keep the bias within reasonable limits, Box and Jenkins (1976, pp. 32–33) advise a minimum length of 50 time points for a time series analysis.

The standard error of the φ̂_r1 is calculated as:

$$SE(\hat{\phi}_{r_1}) = \sqrt{\frac{\hat{\sigma}_e^2}{T \hat{\sigma}_y^2}},$$

where σ̂_y² is the estimated variance of y_t and σ̂_e² is the estimated variance of e.
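As an illustration, the following R sketch (ours, not the authors' code) computes the r1 estimate and a standard error in the form reconstructed above; the function name and the plug-in use of σ̂_y²(1 − φ̂²) for σ̂_e² are our own assumptions.

```r
# Illustrative sketch, not the authors' code: r1 estimate of the lag-1
# autocorrelation and its (assumed) asymptotic standard error.
r1_estimate <- function(y) {
  T <- length(y)
  ybar <- mean(y)
  phi_hat <- sum((y[1:(T - 1)] - ybar) * (y[2:T] - ybar)) / sum((y - ybar)^2)
  s2_y <- var(y)                      # estimated variance of y_t
  s2_e <- s2_y * (1 - phi_hat^2)      # implied error variance under AR(1)
  c(phi = phi_hat, se = sqrt(s2_e / (T * s2_y)))
}

set.seed(1)
y <- arima.sim(list(ar = 0.5), n = 25)
r1_estimate(y)
```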
3.2 The C-statistic

In comparison studies, several other proposals were made to replace the r1 estimator (Huitema and McKean 1991, 1994; Young 1941). One of these, which outperformed the r1 estimator and some of the other estimators in several studies, is the C-statistic (Young 1941; DeCarlo and Tryon 1993; Solanas et al. 2010). The C-statistic compensates the bias of the r1 estimator by adding a correction factor to the r1 estimate:

$$\hat{\phi}_C = \hat{\phi}_{r_1} + \frac{(y_1 - \bar{y})^2 + (y_T - \bar{y})^2}{2 \sum_{t=1}^{T} (y_t - \bar{y})^2}.$$

The φ̂_C is asymptotically unbiased. The φ̂_C has been shown to be a better estimator of φ than φ̂_r1 for short time series and a positive φ (DeCarlo and Tryon 1993; Solanas et al. 2010). However, the bias still remains quite large (e.g., −0.38 for φ = 0.60 and T = 5) and the power remains quite low (e.g., 0.09 for φ = 0.60 and T = 5) for short time series (Solanas et al. 2010).

The standard error associated with φ̂_C is:

$$SE(\hat{\phi}_C) = \sqrt{\frac{T - 2}{(T - 1)(T + 1)}},$$

which obviously depends only on the number of observations.
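A small R sketch (ours, not the authors' implementation) of the C-statistic and its T-only standard error may clarify the computation; it uses the equivalent form C = 1 − Σ(y_{t+1} − y_t)² / (2Σ(y_t − ȳ)²).

```r
# Illustrative sketch, not the authors' code: the C-statistic (Young 1941)
# and its standard error, which depends only on the series length T.
c_statistic <- function(y) {
  T <- length(y)
  d <- y - mean(y)
  phi_C <- 1 - sum(diff(y)^2) / (2 * sum(d^2))
  se_C  <- sqrt((T - 2) / ((T - 1) * (T + 1)))
  c(phi = phi_C, se = se_C)
}
```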
3.3 Ordinary least squares
The ordinary least squares (OLS) estimator for an AR(1) model is:

$$\hat{\phi}_{ols} = \frac{\sum_{t=2}^{T} (y_t - \bar{y})(y_{t-1} - \bar{y})}{\sum_{t=2}^{T} (y_{t-1} - \bar{y})^2}.$$
The asymptotic standard error for φ̂_ols is:

$$SE(\hat{\phi}_{ols}) = \sqrt{\frac{\hat{\sigma}_e^2}{\sum_{t=2}^{T} (y_{t-1} - \bar{y})^2}},$$

with σ̂_e² the estimated residual variance of the regression.
The OLS estimation is capable of handling non-stationary data under certain restrictions. This means that it is possible to obtain a non-stationary estimate (i.e., |φ̂_ols| > 1). To identify possible different behaviours, we distinguish two types of OLS analysis results: OLS-A will refer to the complete results, whereas OLS-S will refer to the results where the non-stationary results are left out.
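In practice the OLS estimate can be obtained by regressing y_t on y_{t−1}; a minimal R sketch (ours, not the authors' code) is shown below, reporting the usual regression standard error of the slope.

```r
# Illustrative sketch, not the authors' code: AR(1) coefficient by OLS,
# i.e. the slope of the regression of y_t on y_{t-1}. The estimate is not
# constrained to (-1, 1), so non-stationary values can occur (OLS-A vs OLS-S).
ols_estimate <- function(y) {
  T <- length(y)
  fit <- lm(y[2:T] ~ y[1:(T - 1)])
  c(phi = unname(coef(fit)[2]),
    se  = unname(coef(summary(fit))[2, "Std. Error"]))
}
```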
3.4 Maximum likelihood estimation
The iterative Maximum Likelihood Estimation (MLE) used to estimate the autocorrelation shares asymptotic properties with the OLS estimation (Lütkepohl 1991, pp. 368–370). The MLE method uses a collection of algorithms to find the maximum likelihood for a parameter or model (Durbin and Koopman 2012). In this study, we compute the MLE with the 'Broyden–Fletcher–Goldfarb–Shanno' algorithm (Byrd et al. 1995). The asymptotic standard error for φ̂_mle may be estimated in the same way as for φ̂_r1, using Eq. 3. The asymptotic bias for an AR(1) model with the population mean assumed to be zero is −2φ/T. For an AR(1) model with the mean estimated, the asymptotic bias is −(3φ + 1)/T (Tanaka 1984).
3.5 Bayesian Markov Chain Monte Carlo
The Bayesian MCMC is the only non-frequentist estimation method considered in this
paper. Bayesian analysis uses a prior probability distribution for the parameters, set up
before the analysis. This is combined with the observed likelihood, as computed from the
observed data, to form the posterior probability of the parameters. This posterior
probability can be expressed through Bayes' theorem: p(φ|Y) ∝ p(Y|φ)p(φ). For the Bayesian analyses we use MCMC sampling to approximate the posterior distribution of the parameters.
In these simulation studies we consider two weakly informative Bayesian priors. Since we assume stationarity, we restrict ourselves to prior distributions with non-zero probability only for |φ| < 1. That is, we consider a flat prior, giving all values of φ between −1 and 1 an equal probability:

$$p_f(\phi) = \tfrac{1}{2}, \quad \text{for } |\phi| < 1.$$
Further, we consider the symmetrized reference prior defined by Berger and Yang (1994), which is specifically tailored to autoregressive processes. The symmetrized reference prior is given as:

$$p_{sr}(\phi) = \frac{1}{2\pi\sqrt{1 - \phi^2}}, \quad \text{for } |\phi| < 1.$$
This symmetrized reference prior gives a higher probability to higher values of |φ| and yields a narrower posterior distribution and a smaller mean square error than the flat prior or Jeffreys' prior in the case of AR(1) models (Berger and Yang 1994). We will denote these two Bayesian methods as Bf and Bsr, respectively.
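The two prior densities on the stationary region can be written down directly; the following R sketch (ours, with our own function names) plots them. The normalising constants do not affect the shape of the posterior explored by MCMC.

```r
# Illustrative sketch, not the authors' code: the flat prior and the
# symmetrized reference prior (Berger and Yang 1994) on the stationary region.
prior_flat <- function(phi) ifelse(abs(phi) < 1, 1 / 2, 0)
prior_sr   <- function(phi) ifelse(abs(phi) < 1, 1 / (2 * pi * sqrt(1 - phi^2)), 0)

curve(prior_sr(x),   from = -0.99, to = 0.99, ylab = "prior density", xlab = "phi")
curve(prior_flat(x), from = -0.99, to = 0.99, add = TRUE, lty = 2)
```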
4 Research design study 1: comparison of estimators
To compare the various estimators for the autocorrelation (φ), we simulate according to an AR(1) model (see Eq. 2). In the generation of the data we vary the length of the time series T and the autocorrelation φ. For T we use five different sizes, namely 10, 25, 40, 50 and 100. For φ, we use autocorrelations from −0.90 to 0.90 inclusive, in steps of 0.10.
Earlier studies show that there is a difference between the bias for negative and positive φ for several estimators, including r1 and the C-statistic (DeCarlo and Tryon 1993; Solanas et al. 2010). This indicates that a thorough comparison should include both positive and negative autocorrelations. Finally, the number of replications must be set. All of the studies in Table 1 have a minimum of 10,000 replications per condition. However, a pilot study showed that the maximum standard deviation of the mean φ̂ over 5000–10,000 replications was 0.0007, when T = 10 and φ = 0.7, for all estimators. Therefore we use N = 5000 replications per condition. Considering a fully crossed experimental design, this yields 19 × 5 × 5000 = 475,000 simulated data sets.
Across all conditions, μ is set to zero and σ_e² to one, which can be done without loss of generality. This results in a standard normal distribution for y_t given φ.
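As an illustration of the data-generating step (a sketch under the stated design, not the authors' script), one cell of the design can be simulated with arima.sim from the stats package:

```r
# Illustrative sketch, not the authors' code: generate one AR(1) series with
# mu = 0 and error standard deviation 1 for one cell of the Study 1 design.
T_values   <- c(10, 25, 40, 50, 100)
phi_values <- seq(-0.9, 0.9, by = 0.1)

set.seed(123)
y <- arima.sim(model = list(ar = 0.7), n = 10, sd = 1)  # e.g. T = 10, phi = 0.7
```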
Priors We performed a small simulation study to decide on the values for the
hyperparameters of the priors in our Bayesian analyses. In the model we use, only the prior distributions for μ and σ_e have such hyperparameters. We used three conditions, with φ = −0.50, 0 and 0.50, using 1000 replications per condition and 2000 iterations per analysis. We set T = 10, since shorter series provide less data and will therefore be more strongly influenced by the choice of the prior. For μ we used a normal prior with mean and standard deviation as given, and for σ_e we used a gamma prior with shape and rate as given in the top part of Table 2.
As can be seen in Table 2, the differences in the estimated parameters are small,
especially when taking into account the uncertainty added by the small T. As a result, we
based our choice of priors on theoretical grounds. To reduce the influence of the priors, we chose priors close to the distributions used for the data generation: μ ~ N(0, 2) and σ_e ~ Γ(2, 2).
Outcome measures For each data set we obtain different estimators: r1, C-statistic, OLS,
MLE, Bf and Bsr. To compare the estimators, we consider the bias of the various estimators
of φ, their empirical standard error, the bias of the estimated standard error, the rejection rate for φ = 0, the power for φ ≠ 0, and the point and 95 % interval estimates of φ. All outcome measures are calculated for each condition and each estimation method.

4.1 Bias

The bias is computed as:

$$\text{Bias}(\hat{\phi}) = \frac{1}{N} \sum_{n=1}^{N} \hat{\phi}_n - \phi,$$

where n (n = 1, 2, ..., N) refers to the replication number.
4.2 Variability

To compare the variability of the different estimators over the different conditions, we consider two measures: the empirical standard error and the bias of the estimated standard error. The empirical standard error shows the variability of the φ̂ across replications. The bias of the estimated standard error shows to what extent the standard error estimated by the estimation method resembles the empirical standard error.
4.2.1 Empirical standard error: SD(φ̂)

The empirical standard error of φ̂ is calculated by:

$$SD(\hat{\phi}) = \sqrt{\frac{1}{N - 1} \sum_{n=1}^{N} \left(\hat{\phi}_n - \bar{\hat{\phi}}\right)^2},$$

where $\bar{\hat{\phi}}$ is the mean estimated φ̂ over all replications within a condition.
4.2.2 Bias of the estimated standard error
For the frequentist estimators, the estimated standard error SE(φ̂) is calculated using Eqs. 3, 4 and 5, and for the Bayesian estimation, the estimated standard error is obtained through MCMC. To estimate the expected value of SE(φ̂) for each estimator, we compute the average SE(φ̂) over all replications within a condition:

$$\overline{SE}(\hat{\phi}) = \frac{1}{N} \sum_{n=1}^{N} SE(\hat{\phi}_n).$$

To assess the bias of the estimated standard error with regard to the observed standard error, we subtract the observed standard error SD(φ̂) from the mean estimated standard error:

$$\text{Bias of } SE(\hat{\phi}) = \overline{SE}(\hat{\phi}) - SD(\hat{\phi}).$$
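Given the N replication-level estimates and standard errors within a condition, the three outcome measures defined above reduce to a few lines of R; the sketch below is ours, with hypothetical argument names.

```r
# Illustrative sketch, not the authors' code: bias, empirical standard error
# and bias of the estimated standard error for one condition.
outcome_measures <- function(phi_hat, se_hat, phi) {
  c(bias    = mean(phi_hat) - phi,           # bias of the point estimate
    emp_se  = sd(phi_hat),                   # SD(phi_hat) across replications
    bias_se = mean(se_hat) - sd(phi_hat))    # mean SE(phi_hat) minus SD(phi_hat)
}
```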
4.3 Rejection rate and power
For each estimation method and condition, we compute the empirical probability (EPr) of rejecting H0: φ = 0, with α = 0.05. In the condition with φ = 0, the EPr indicates the rejection rate or actual α; in all other conditions the EPr equals the actual power. For the r1, MLE, OLS-S and C-statistic methods, first a p value is obtained using a t-distribution, considering the t-statistic for a correlation coefficient:

$$t_{all} = \frac{\hat{\phi}\sqrt{T - 2}}{\sqrt{1 - \hat{\phi}^2}}, \qquad df_{all} = T - 2.$$

For the OLS-A method, a t test based on the estimated standard error of φ̂ is applied, since the possibility of φ̂ having a value greater than one in absolute value renders the t-statistic for correlations inapplicable:

$$t_{ols} = \frac{\hat{\phi}}{SE(\hat{\phi})}, \qquad df_{ols} = T - 2.$$

For the Bayesian estimation methods, we consider the percentage of data sets for which the 95 % credible interval (CrI) does not contain zero. For each condition and method, we then calculate the EPr of rejecting H0: φ = 0 as:

$$EPr = \#(H_0 \text{ is rejected})/N \quad \text{for } r_1, \text{ C-statistic, MLE, OLS};$$
$$EPr = \#(\text{CrI does not contain } 0)/N \quad \text{for } B_f, B_{sr}.$$
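A compact R sketch (ours, not the authors' code) of the two rejection rules may help; the t-based rule applies to the correlation-type estimators with |φ̂| < 1, the interval-based rule to the Bayesian estimators.

```r
# Illustrative sketch, not the authors' code: empirical probability of
# rejecting H0: phi = 0 across N replications.
epr_t <- function(phi_hat, T, alpha = 0.05) {
  t_stat <- phi_hat * sqrt(T - 2) / sqrt(1 - phi_hat^2)  # t for a correlation
  p_val  <- 2 * pt(-abs(t_stat), df = T - 2)
  mean(p_val < alpha)
}

epr_cri <- function(cri_lower, cri_upper) {
  mean(cri_lower > 0 | cri_upper < 0)   # 95% credible interval excludes zero
}
```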
4.4 Point and interval estimates for φ
To illustrate the joint effects of bias and variability we consider the two estimation
methods with the smallest bias, using the point and interval estimates of φ. As point estimate we use the mean of φ̂ per condition; for the interval estimate we use the 95 % percentile interval of the φ̂ over all replications per condition.
For the simulations and analyses we use the program 'R' (R Core Team 2014). The C-statistic was computed directly with the basic functions available. For the Yule–Walker, OLS and MLE methods we use the command 'ar' from the package 'stats'. The Bayesian analyses are done with the program 'RStan' (Stan Development Team 2014).
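For concreteness, the ar() calls could look as follows; this is a sketch of the standard interface, not the authors' exact script or argument settings.

```r
# Illustrative sketch, not the authors' code: fitting a first-order model with
# ar() from the 'stats' package using the three frequentist methods.
set.seed(1)
y <- arima.sim(list(ar = 0.5), n = 25)

fit_yw  <- ar(y, aic = FALSE, order.max = 1, method = "yule-walker")
fit_ols <- ar(y, aic = FALSE, order.max = 1, method = "ols")
fit_mle <- ar(y, aic = FALSE, order.max = 1, method = "mle")
c(yw = fit_yw$ar, ols = fit_ols$ar, mle = fit_mle$ar)
```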
5 Results study 1
The OLS estimator rendered estimates of φ that were higher than one in absolute value, and thus non-stationary, as expected. The highest percentage of non-stationary estimates, 15.1 %, was found for the shortest series, T = 10, and the highest autocorrelation, φ = 0.90. For T = 10 and φ = −0.90 to φ = 0.80, up to 6.8 % of the estimates per condition were non-stationary, with higher percentages associated with higher values of |φ|. For T = 25 to 50 and φ = 0.50 to 0.90 in absolute value, up to 2.3 % of the estimates were non-stationary. However, the difference between the OLS-A and OLS-S results was quite small. Thus we will discuss only the OLS-A results for the OLS, which include all estimates, unless the OLS-S shows a strong deviation from OLS-A.
For the Bayesian analyses, non-convergence is expressed in the potential scale reduction factor, R̂. The potential scale reduction factor shows the ratio of how much the estimate may change when the number of iterations is doubled, with a perfect 1 indicating that no change is expected (Gelman and Rubin 1992; Stan Development Team 2014). For each estimated parameter φ, μ and σ_e, less than 0.39 % of the estimates showed an R̂ above 1.02. Furthermore, a maximum of 2.8 %, found for μ as estimated with Bf, showed an R̂ above 1.01.

5.1 Bias
The bias of the six estimators as a function of φ for T = 10, 25, and 50 is presented in Fig. 1. The conditions for T = 40 and T = 100 are not shown due to their uninformative nature: T = 40 yields results highly similar to T = 50, and T = 100 yields results with hardly any differences between the estimators. As can be seen in Fig. 1, the bias becomes smaller as T increases for all methods, which is to be expected. The relation between the bias and φ is roughly linear for all methods, being positive for negative values of φ and negative for positive values of φ. Further, the absolute bias for positive values of φ is larger than the bias for their negative counterparts (i.e., −φ). This holds for all values of T and for all methods, except for the C-statistic.
With regard to the ordering of the estimation methods, differences are found between negative and positive values of φ and between short time series, T = 10, and longer time series, T ≥ 25. For the shortest time series with T = 10, the differences between the methods with regard to bias are strongly dependent on φ. For low, negative values of φ, the smallest bias is shown by the OLS, MLE and, to a lesser extent, the r1. For positive values of φ, the smallest bias is shown by the Bsr, followed by the Bf. The largest bias for T = 10 is associated with the C-statistic for negative values of φ, and the r1 for positive values of φ. For T ≥ 25 and any φ, Bsr consistently shows the smallest bias. Just as for the shortest series, the largest bias for T ≥ 25 is associated with the C-statistic for negative values of φ, and with the r1 for positive values of φ.
5.2 Variability

With regard to variability, the results for T ≥ 40 are highly similar to the results for T = 25 with regard to the pattern of the variability and the order of the estimation methods. The only difference is the decline in absolute size. This prompted us to only explicitly show the results for T = 10 and T = 25 for the empirical standard error and the bias of the estimated standard error.
5.2.1 Empirical standard error: SD(φ̂)

The empirical standard error (SD(φ̂)) as a function of φ is shown in Fig. 2 for T = 10 (panel a) and T = 25 (panel b). For all frequentist estimators, the SD(φ̂) for positive values of φ is larger than the SD(φ̂) for their negative counterparts (i.e., −φ), implying that the variability is higher for positive values of φ than for negative values of φ. For the Bayesian estimators, this differs between values of T and |φ|.
With regard to the ordering of the estimation methods for the SD(φ̂), small differences are found between the short time series, T = 10, and longer time series, T ≥ 25. For the shortest time series, T = 10, and φ below 0.40, the lowest SD(φ̂) is shown by r1; for φ above 0.40 this is shown by Bf. The highest SD(φ̂) for φ below 0.30 is shown by Bsr; for φ above 0.30 this is shown by the OLS estimator. The OLS and MLE stand out due to the continuing increase in the SD(φ̂) for higher values of φ.

For T ≥ 25, the Bsr shows a distinct pattern. The Bsr shows the lowest SD(φ̂) for φ below −0.80 and above 0.80, but the highest SD(φ̂) for φ between −0.60 and 0.60. The lowest SD(φ̂) for φ between −0.70 and 0.40 is shown by the r1. The highest SD(φ̂) for φ below −0.70 is shown by the C-statistic; for φ above 0.70 this is shown by the OLS, followed by the MLE. When T increases, the empirical standard errors of the different methods become smaller and more similar to each other. For T = 100, the size of the SD(φ̂) is between 0.05 and 0.10 for all values of φ and all estimators.
5.2.2 Bias of the estimated standard error
The bias of SE(φ̂) as a function of φ is shown in Fig. 2 for T = 10 (panel c) and for T = 25 (panel d). In general, the bias of SE(φ̂) decreases when T becomes larger, indicating a smaller difference between the estimated and the empirical standard errors. For T = 100, the bias of SE(φ̂) is between −0.01 and 0.04 for all values of φ and all estimators. With regard to the ordering of the estimation methods, small differences are found between T = 10 and longer time series. Differences were also found for different values of φ.

The direction of the bias of SE(φ̂) differs between the methods and the value of φ. For r1 and the C-statistic, the bias of SE(φ̂) is positive for all φ, indicating an overestimation of the standard error. The OLS shows a positive bias of SE(φ̂) for φ between −0.70 and 0.20, and a negative bias of SE(φ̂) for other values of φ. For the MLE the bias of SE(φ̂) is negative for all φ. Both Bf and Bsr show a negative bias of SE(φ̂) for φ < 0.70, and a positive bias of SE(φ̂) for higher values of φ.

For T = 10, the smallest bias of SE(φ̂) for φ below −0.70 is shown by the OLS method. The smallest bias of SE(φ̂) for φ above −0.50 is shown by the C-statistic, closely followed by the OLS and the MLE. The largest bias of SE(φ̂) for φ above −0.80 is shown by r1, which is joined in this regard by Bf and Bsr for φ between −0.20 and 0.60.

The bias of SE(φ̂) for T ≥ 25 is smaller than the bias of SE(φ̂) for T = 10 and the different methods are closer together. The domain of φ for which the C-statistic shows the largest bias of all estimators increases when T becomes larger; for T = 10 this is when φ is below −0.50 and above 0.70, for T = 100 this is when φ is below −0.30 and above 0.20. The other estimators show the same pattern and order in the bias of SE(φ̂) as for T = 10.
5.3 Rejection rate and power
The EPr of the different methods is presented in Fig. 3 for T = 10, 25 and 50, where the EPr at φ = 0 indicates the empirical rejection rate and the EPr at φ ≠ 0 the empirical power. As with the bias, the EPr for T = 40 and T = 100 are not shown due to their uninformative nature. When looking at the rejection rate, the empirical rejection rate approaches the nominal α as the length of the time series increases, as is to be expected. The rejection rate for T = 10 is between 0.01 and 0.04 and for T ≥ 25 between 0.03 and 0.05, for most estimators. The only exception is the rejection rate for OLS-A at T = 40, which is 0.06. At T = 100, the MLE, Bsr, OLS-S and Bf show a rejection rate of 0.050, which is equal to the nominal α of 0.05. For all practical purposes, the difference in rejection rates between estimation methods is negligible.

The power of the estimated φ shows a positive relation to the size of T and the absolute value of φ, as expected. If we consider a minimal power of 0.80, for T = 10 this is only reached by the estimators OLS and MLE, and only at very low values of φ, i.e., φ = −0.90. For T = 25 and negative φ, the power is above 0.80 for φ ≤ −0.60 for all estimators except the C-statistic, which has a power above 0.80 for φ ≤ −0.70; for positive values of φ, the Bsr shows a power above 0.80 for φ ≥ 0.60, for the other estimators this is for φ ≥ 0.70. For larger T, the power reaches 0.80 at lower values of φ; for T = 100, the power is at least 0.80 for |φ| ≥ 0.30.

The order of the estimation methods with regard to the power is consistent for the different values of T. The highest power for negative φ is shown by the OLS; for positive φ this is Bsr. The lowest power for negative φ is shown by the C-statistic; for positive φ this is the r1. In general, the difference in power between the methods becomes smaller as T increases.
5.4 Point and interval estimates for φ

The mean φ̂ with a 95 % estimation interval for the MLE and Bsr estimations can be seen in Fig. 4. We only present this for the MLE and Bsr, since these methods show the lowest bias. As can be seen in Fig. 4, the 95 % intervals are larger for smaller values of T, indicating a larger variability in the φ̂, as would be expected. The strongest decrease in both variability and bias is from T = 10 to T = 25: for Bsr the bias decreases by up to 89 % and the variability by 29–51 %; for the MLE the bias decreases by 82 % and the variability by 34–51 %.
In general, it may be concluded that the Bayesian Bsr and the frequentist MLE perform best in terms of bias, but not in terms of variability. With regard to empirical variability, the r1 performs best. For the bias of the estimated variability, the MLE performs best. Furthermore, the Bsr is favourable with regard to power for positive φ, showing only slight differences with the OLS estimator for negative φ. This leads us to continue with the MLE and the Bsr estimators in the misspecification study of this paper.
6 Research design study 2: robustness
To study the robustness of the estimation methods, we misspecify the model. The data are still analysed as if they stem from an AR(1) model, but we generate the data using an ARMA(1,1) model. We generate data sets for two different sizes of T, namely 25 and 50. For φ and θ, we use parameter values from −0.90 to 0.90 inclusive, in steps of 0.15. Every condition consists of 5000 replications. Considering a fully crossed design, this yields 13 × 13 × 2 × 5000 = 1,690,000 data sets. Again, across all conditions, μ is set to zero and σ_e² to one. We consider the same outcome measures for Study 2 as we did for Study 1.
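One replication of the misspecification design could be sketched as follows; this is our illustration, assuming an ML fit of the AR(1) model with arima(), not necessarily the authors' exact implementation.

```r
# Illustrative sketch, not the authors' code: generate ARMA(1,1) data and fit
# a (misspecified) AR(1) model by maximum likelihood.
set.seed(2)
T     <- 25
phi   <- 0.30
theta <- -0.45
y <- arima.sim(model = list(ar = phi, ma = theta), n = T, sd = 1)

fit     <- arima(y, order = c(1, 0, 0), method = "ML")  # AR(1) with estimated mean
phi_hat <- unname(coef(fit)["ar1"])
se_hat  <- sqrt(vcov(fit)["ar1", "ar1"])
```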
7 Results study 2
We successively present the results on the bias, empirical standard error, bias of the estimated standard error, rejection rate, power, and point and 95 % interval estimates. Note that when θ is zero, the simulated data follow an AR(1) model, rendering the results equal to the results discussed in the first study of this paper, apart from small deviations resulting from simulation variability.

As in the first study, we checked the R̂ of the estimated parameters φ, μ and σ_e of Bsr. For each of the parameters, less than 0.14 % showed an R̂ above 1.02, and less than 1.69 % showed an R̂ above 1.01.

7.1 Bias
In Fig. 5, heatmaps of the bias of Bsr and MLE for T = 25 and T = 50 are presented, expressing the bias depending on the combination of φ and θ. The θ influences the bias in two ways: first, the bias is smaller when θ is close to zero; second, the bias gets larger when θ is further from φ.

The bias is also influenced by the value of T and the estimation method. When looking at T, for the MLE the bias for T = 50 is larger than the bias for T = 25, unless both θ and φ are negative. The difference between the bias for T = 50 and the bias for T = 25 ranges for the MLE from −0.03 to 0.12 per condition. For the Bsr, the bias for T = 50 is smaller than for T = 25, unless φ has a value above 0.30. The difference between the bias for T = 50 and T = 25 ranges for Bsr from −0.07 to 0.03 per condition. Comparing the estimation methods reveals that the bias is small, and in general slightly larger for the Bsr than for the MLE, with the largest difference being 0.11 for φ = 0.60, θ = 0, and T = 25. The difference between the estimation methods is larger for T = 25 than for T = 50.
7.2 Variability

Close inspection of the results for the variability and EPr for the Bsr and MLE estimators and the two lengths of T revealed that the patterns are very similar across methods and lengths of T. This prompted us to only present the results of Bsr and T = 25 in Fig. 6. However, we discuss any quantitative differences between the methods. For comparison purposes, we also plotted the SD(φ̂), the bias of SE(φ̂) and the EPr for Bsr and T = 25 of Study 1.
7.2.1 Empirical standard error: SD(φ̂)

The empirical standard error (SD(φ̂)) for Bsr with T = 25 and θ = −0.45, 0.00, and 0.45 can be seen in Fig. 6 (panel a). Some differences between the SD(φ̂) over different values of θ, φ and T are found. First, the SD(φ̂) shows a positive slope over φ for negative values of θ, and a negative slope over φ for positive values of θ. Second, the SD(φ̂) is smaller for T = 50 compared to T = 25. For the MLE estimator the SD(φ̂) is up to 0.07 smaller for T = 50; for the Bsr the SD(φ̂) is up to 0.08 smaller for T = 50. When comparing methods of estimation, the SD(φ̂) of the Bsr is larger than the SD(φ̂) of the MLE, for both values of T. For T = 25, this differs up to 0.03 per condition, for T = 50 up to 0.01.
7.2.2 Bias of the estimated standard error
The bias of SE(φ̂) for Bsr with T = 25 and θ = −0.45, 0.00, and 0.45 can be seen in Fig. 6 (panel b). The bias of SE(φ̂) for most combinations of φ and θ, where θ ≠ 0, is positive and higher than the bias for the AR(1) data. Only for low values of |θ| combined with high values of |φ| is the bias of SE(φ̂) negative. The bias of SE(φ̂) is larger for T = 25 than for T = 50, for all methods and conditions, with differences up to 0.02 for both methods. Furthermore, the bias of SE(φ̂) is slightly larger for the Bsr estimation than for the MLE, with a maximum absolute difference of 0.01 for T = 25 and 0.03 for T = 50.
7.3 Rejection rate and power
The EPr for Bsr with T = 25 and θ = −0.45, 0.00, and 0.45 can be seen in Fig. 6 (panel c). When θ ≠ 0, the curve of the EPr shifts relative to the curve for the AR(1) data. For a negative θ, the curve shifts to the right; for a positive θ, the curve shifts to the left. In both cases, the amount the curve shifts is roughly equal to the absolute value of θ. This is the same for both methods, with the shape of the curve being dependent on T, as in Study 1.
The differences between the methods are negligible.
7.4 Point and interval estimates for φ

The mean φ̂ with a 95 % estimation interval for the MLE and Bsr estimations is presented in Fig. 7. As can be seen in Fig. 7, the 95 % intervals are larger for smaller values of T, indicating a larger variability in the φ̂. For both methods, the decrease in the 95 % estimation interval is between 33 and 44 % from T = 25 to T = 50 for the different conditions. In most conditions, the 95 % estimation interval is larger and the mean φ̂ is slightly higher for the Bsr than for the MLE.

We found that the further θ deviates from zero, the larger the difference between φ and φ̂. The Bsr shows a larger bias than the MLE when θ is further from zero, showing a larger influence of the θ parameter on the estimated φ. Furthermore, the observed variability is slightly smaller for the MLE, with the difference between Bsr and MLE being larger for T = 25 than for T = 50.
8 Discussion

We compared five estimation methods for the autocorrelation: the r1, C-statistic, ordinary
least squares, maximum likelihood estimation and Bayesian MCMC estimation. For the
Bayesian MCMC estimation we used both a flat prior and a symmetrized reference prior,
giving a total of six autocorrelation estimators. We compared these estimators with regard
to bias, variability, rejection rate, power and point and 95 % estimation interval estimates.
After this comparison, we selected the Bayesian estimation with symmetrized reference
prior and the maximum likelihood estimator to use in a second study. In this small study
we assessed the robustness of the methods against underspecification.
The results we found in the first study largely complied with the results from previous studies. For the bias for positive values of φ, we found the bias of the C-statistic and the OLS to be smaller than the bias of r1, as did previous studies (DeCarlo and Tryon 1993; Huitema and McKean 1991; Solanas et al. 2010). For the empirical standard error, we found smaller values for r1 than for OLS, as did Huitema and McKean (1991). The low rejection rate we found for the r1 and the C-statistic conforms to earlier studies (Huitema and McKean 1991; DeCarlo and Tryon 1993; Solanas et al. 2010). The power we found for the C-statistic is similar to the power found by Arnau and Bono (2001). However, the findings of Solanas et al. (2010) with regard to power were only partly confirmed by our study: for negative φ we indeed found a higher power for OLS, followed by r1, than for the C-statistic. But for positive φ, we found a higher power for the C-statistic than for r1.
The first study showed a strong improvement in all measures for all methods between T = 10 and T = 25. This improvement continued, albeit not as strongly, for higher values of T. When comparing methods, Bsr showed the smallest bias. Among the frequentist methods, this was the MLE, followed by the C-statistic. The smallest empirical standard error is shown by r1; the smallest bias of the estimated standard error is shown by the C-statistic, the OLS and the MLE. We further found that a small bias is often paired with a high empirical standard error. The power was rather low for all methods at the lengths of time series we considered. For T = 25, the power is below 80 % for all methods for φ between −0.5 and 0.5; for T = 100, the power is below 80 % for φ between −0.2 and 0.2. The differences between methods with regard to power are negligible. In research areas where effect sizes are small, this may pose a problem. Some studies use moving windows to assess the stability of parameter estimates over time. For such moving windows, these results indicate that a window of at least 50 time points is advisable, especially when the differences in parameters over time are small.
The first study was conducted to explore the differences between estimation methods for
the autocorrelation in a single subject design. However, this is only a small step in a large
research area. The next step may be to explore these results in multilevel or group analyses, that is, when there is not one but multiple subjects per data set. Another issue may be how the different methods respond to non-stationary data, i.e., |φ| > 1.
In the second study, the robustness of the MLE and Bsr to underspecification was
examined. In general, we confirmed the notion that the further the unmodelled parameter is
from zero, the larger the influence of this parameter is on φ̂ (Tanaka and Maekawa 1984). As in the first study, the empirical standard error decreased when T became larger. However, the bias reacted differently for the two methods: for the MLE, the bias became slightly smaller for most conditions, whereas the bias of Bsr became slightly larger for a larger T. The difference in performance on all measures between the MLE and the Bsr was small for both values of T. It was shown that the bias, variability, rejection rate and power were all highly dependent on the value of the non-modelled parameter in the data, θ.
This can be related to the fact that the autocorrelation of the error also has an influence on
the autocorrelation of the total score.
The robustness study was rather small and specific, looking into only one possible way to misspecify the ARMA(1,1) model. More options within misspecification should be
explored to find how robust the estimation methods are with regard to under-, over- and
misspecification. Important points are the influence of a misspecified error distribution or
overspecification of the model.
In conclusion, we found that the best performing methods for autocorrelation estimation
are the Bayesian estimator with symmetrized reference prior and the maximum likelihood
estimator. The difference in performance between these two is, for all practical purposes,
negligible. The results on all measures improve greatly between T = 10 and T = 25 and continue to do so for larger T, but in a less spectacular fashion. For the misspecification study, we found the size of θ, the non-modelled parameter, to be vital for the performance of the estimation methods. The differences between lengths of the series and between estimation methods had less influence on the results.
Acknowledgments This work is funded by the Netherlands Organization for Scientific Research (NWO),
Grant Number 406-11-018.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license, and indicate if changes were made.
Arnau , J. , Bono , R.: Autocorrelation and bias in short time series: an alternative estimator . Qual. Quant . 35 ( 4 ), 365 - 387 ( 2001 ). doi: 10 .1023/A:1012223430234
Berger , J.O. , Yang , R.Y.: Noninformative priors and Bayesian testing for the AR(1) model . Econ. Theory 10 , 461 - 482 ( 1994 ). doi: 10 .1017/S026646660000863X
Box , G.E.P. , Jenkins , G.M. : Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco ( 1976 )
Byrd , R.H. , Lu , P. , Nocedal , J. , Zhu , C. : A limited memory algorithm for bound constrained optimization . SIAM J. Sci. Comput . 16 ( 5 ), 1190 - 1208 ( 1995 ). doi:10.1137/0916069
Cox , D.D. , Llatas , I. : Maximum likelihood type estimation for nearly nonstationary autoregressive time series . Ann. Stat. 19 ( 3 ), 1109 - 1128 ( 1991 ). doi: 10 .1214/aos/1176348240
DeCarlo , L.T., Tryon , W.W.: Estimating and testing autocorrelation with small samples: a comparison of the C-statistic to a modified estimator . Behav. Res. Ther . 31 ( 8 ), 781 - 788 ( 1993 ). doi: 10 .1016/ 0005 - 7967 ( 93 ) 90009 - J
Durbin , J. , Koopman , S. : Time Series Analysis by State Space Models . Oxford University Press, Oxford ( 2012 )
Garcia-Hiernaux , A. , Casals , J. , Jerez , M. : Fast estimation methods for time-series models in state-space form . J. Stat. Comput. Simul . 79 ( 2 ), 121 - 134 ( 2009 ). doi:10.1080/00949650701617249
Gelman , A. , Rubin , D.B. : Inference from iterative simulation using multiple sequences . Stat. Sci. 7 ( 4 ), 457 - 472 ( 1992 ). doi: 10 .1214/ss/1177011136
Huitema , B.E. , McKean , J.W. : Autocorrelation estimation and inference with small samples . Psychol. Bull . 110 ( 2 ), 291 - 304 ( 1991 ). doi: 10 .1037/ 0033 - 2909 . 110 .2. 291
Huitema , B.E. , McKean , J.W. : Reduced bias autocorrelation estimation: three jackknife methods . Educ. Psychol. Meas . 54 ( 3 ), 654 ( 1994 ). doi:10.1177/0013164494054003008
Kendall , S.M. , Ord , J.K. : Time Series. Edward Arnold, London ( 1990 )
Kunitomo , N. , Yamamoto , T. : Properties of predictors in misspecified autoregressive time series models . J. Am. Stat. Assoc . 80 ( 392 ), 941 ( 1985 ). doi: 10 .1080/01621459. 1985 .10478208
Lütkepohl, H.: Introduction to Multiple Time Series Analysis. Springer Verlag, Berlin (1991)
Pantula , S.G. , Fuller , W.A. : Mean estimation bias in least squares estimation of autoregressive processes . J. Econ . 77 ( 1 ), 99 - 121 ( 1985 ). doi: 10 .1016/ 0304 - 4076 ( 85 ) 90046 - 6
Price , L.R. : Small sample properties of bayesian multivariate autoregressive time series models . Struct. Equ. Model . 19 ( 1 ), 51 - 64 ( 2012 ). doi: 10 .1080/10705511. 2012 .634712
R Core Team ( 2014 ) R: a language and environment for statistical computing . R Foundation for Statistical Computing , Vienna, Austria, URL http://www.R-project.org/
Solanas , A. , Manolov , R. , Sierra , V. : Lag-one autocorrelation in short series: estimation and hypotheses testing . Psicologica 31 ( 2 ), 357 - 381 ( 2010 )
Stan Development Team ( 2014 ) RStan: the R interface to Stan, version 2.4 . URL http://mc-stan.org/rstan. html
Stoica, P., Friedlander, B., Söderström, T.: Least-squares, Yule–Walker, and overdetermined Walker estimation of AR parameters: a Monte Carlo analysis of finite-sample properties. Int. J. Control 43(1), 13-27 (1986). doi:10.1080/00207178608933446
Tanaka K ( 1984 ) An asymptotic expansion associated with the maximum likelihood estimators in ARMA models . J. R. Stat. Soc. Ser. B Methodol . 46 ( 1 ): 58 - 67 . URL http://www.jstor.org/stable/2345462
Tanaka , K. , Maekawa , K. : The sampling distributions of the predictor for an autoregressive model under misspecifications . J. Econ . 25 ( 3 ), 327 - 351 ( 1984 ). doi: 10 .1016/ 0304 - 4076 ( 84 ) 90005 - 8
Walker , G. : On periodicity in series of related terms . Proc. R. Soc. Lon. Ser. A Contain. Pap. Math. Phys. Character 131 ( 818 ), 518 - 532 ( 1931 ). doi: 10 .1098/rspa. 1931 .0069
Watson P , Nicholls S ( 1992 ) ARIMA modelling in short data sets: some Monte Carlo results . Soc. Econ. Stud . 41 ( 4 ): 53 - 75 . URL http://www.jstor.org/stable/27865115
West , K.D. , Wilcox , D.W.: A comparison of alternative instrumental variables estimators of a dynamic linear model . J. Bus. Econ. Stat . 14 ( 3 ), 281 - 293 ( 1996 ). doi:10.2307/1392443
Young , L. : On randomness in ordered sequences . Ann. Math. Stat. 12 ( 3 ), 293 - 300 ( 1941 ). doi: 10 .1214/ aoms/1177731711
Yule , G.U. : On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers . Philos. Trans. R. Soc. Lon. Ser. A Contain. Pap. Math. Phys. Character 226 , 267 - 298 ( 1927 ). doi: 10 .1098/rsta. 1927 .0007
Zhang , Z. , Nesselroade , J.R. : Bayesian estimation of categorical dynamic factor models . Multivar. Behav. Res . 42 ( 4 ), 729 - 756 ( 2007 ). doi:10.1080/00273170701715998