Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
Urteaga et al., EURASIP Journal on Advances in Signal Processing
Iñigo Urteaga¹, Mónica F. Bugallo², Petar M. Djurić²
¹ Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027, USA
² Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794, USA
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize such time-series statistically by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t + 1 is used for the implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with time-correlated innovations, under different assumptions about which model parameters are known. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
Keywords: Sequential Monte Carlo; Correlated innovations; Latent time-series; State-space models; ARMA; FARIMA; Fractional Gaussian process
1 Introduction
This paper addresses inference of a broad class of latent
time-series observed via nonlinear functions. We aim at
modeling time-series with diverse memory properties in a
unified manner so that a method for inference of
heterogeneous time-varying data can be proposed. To that end,
we elaborate on classical autoregressive moving average
(i.e., ARMA) models and consider innovations1 that are
correlated in time. With these flexible modeling
assumptions, a diverse set of scenarios and data properties can be
accommodated. The studied latent time-series framework
not only covers classical ARMA-type models and their
fractionally integrated generalizations, i.e., autoregressive
fractionally integrated moving average (ARFIMA)
processes, but also allows for inference of time-series with
heterogeneous memory properties.
The analysis of time-series is relevant in a plethora of
disciplines in science, engineering, and economics [1–3].
In all these areas, stochastic processes are used to model
the behavior of time-varying data. Often, the modeling is
carried out by two processes, one of which is latent and the
other, observed and informative about the hidden process.
Among the relevant features of time-series data and the
stochastic models used for their description, their memory
is one of the most important characteristics. On the one
hand, there are short-memory processes, where only a few
past data values affect the present of the time-series. On
the other, in long-memory processes the present value
depends on samples far in the past.
ARMA models have been widely studied for
characterizing short-term processes, as they accurately describe
quickly forgetting data. The pioneering work on
short-memory processes and ARMA(p, q) time-series was
presented in the early 1950s by [4]; it was continued by [5]
and later expanded by [2]. ARMA(p, q) processes
are defined by their autoregressive (AR) parameters
a1, a2, · · · , ap, of order p; moving average (MA)
parameters b1, b2, · · · , bq, of order q; and driving innovations
ut, which are assumed to be independent and identically
distributed (i.i.d.).
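As a minimal illustration of this definition, the ARMA(p, q) recursion xt = a1 xt−1 + · · · + ap xt−p + ut + b1 ut−1 + · · · + bq ut−q can be simulated directly; the sketch below assumes Gaussian i.i.d. innovations and zero pre-sample values, and the function name is our own, not from the paper:

```python
import numpy as np

def simulate_arma(a, b, n, seed=None):
    """Simulate n samples of an ARMA(p, q) process.

    a: AR coefficients a_1..a_p; b: MA coefficients b_1..b_q.
    Innovations u_t are i.i.d. standard Gaussian (an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    p, q = len(a), len(b)
    u = rng.standard_normal(n + q)   # i.i.d. innovations, q pre-sample values
    x = np.zeros(n + p)              # zero-padded pre-sample states
    for t in range(n):
        ar = sum(a[i] * x[p + t - 1 - i] for i in range(p))  # AR part
        ma = sum(b[j] * u[q + t - 1 - j] for j in range(q))  # MA part
        x[p + t] = ar + ma + u[q + t]
    return x[p:]

x = simulate_arma(a=[0.7], b=[0.3], n=500, seed=0)
```

Because |a1| < 1 here, the influence of past samples decays exponentially, which is the short-memory behavior described above.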
The work on long-memory processes also began in the
middle of the 20th century, with the groundwork laid by
[6]. He studied Nile river data and realized that it
manifested long-range dependence. In the following decades,
plenty of other geophysical, climatological, and financial
records have been described by similar long-term
characteristics [7–9].
For modeling time-series with long memory, there are
two types of formulations that have attracted the interest
of practitioners [8]. They arise naturally from limit theorems
and classic models. In the first formulation,
long-memory processes are described as stationary increments
of self-similar models, of which the fractional Gaussian
process (fGp) is a prime example. The second formulation
appears in the form of autoregressive fractionally
integrated moving average processes. These models are built
upon ARMA models by introducing non-integer values
of the differencing parameter d, which accounts for the
“integrated” part I of the model. The acronyms ARFIMA
or FARIMA are used to refer to these processes (where
the F refers to the “fractional” component), even if the
ARIMA(p, d, q) notation suffices when fractional values of
d are considered.
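The fractional differencing operator (1 − B)^d admits a binomial expansion whose coefficients satisfy the simple recursion π0 = 1, πk = πk−1 (k − 1 − d)/k; for 0 < d < 1/2 these weights decay hyperbolically rather than exponentially, which is precisely the source of long memory. A short sketch of this expansion (the function name is our own, not from the paper):

```python
import numpy as np

def frac_diff_weights(d, k_max):
    """Coefficients pi_k of (1 - B)^d, truncated at lag k_max.

    Uses the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
    For 0 < d < 1/2 the magnitudes decay hyperbolically (long memory).
    """
    w = np.empty(k_max + 1)
    w[0] = 1.0
    for k in range(1, k_max + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(d=0.4, k_max=100)   # w[1] = -d; slow hyperbolic decay
```

Applying these weights to an ARMA(p, q) series yields the ARFIMA(p, d, q) model described above.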
Both short- and long-memory processes (modeled by
ARMA, FARIMA, or other models) are commonly used in
practice to describe all kinds of time-vary (...truncated)