# Séries temporelles notions de base .pdf


**Séries temporelles notions de base.pdf**

This PDF 1.4 document was generated by LaTeX with beamer class version 3.07 / pdfTeX-1.40.3, and was uploaded to fichier-pdf.fr on 22/09/2017 at 20:46.

Document size: 364 KB (47 pages).


### Document preview

Time Series Analysis

Basic Concepts

Rachidi Kotchoni (rachidi.kotchoni@u-paris10.fr)

Université Paris Ouest Nanterre La Défense

September 30, 2016

R. Kotchoni ()

Time Series Analysis

September 30, 2016

1 / 47

Types of Time Series

A time series is a variable that describes a statistical entity over time.

The value taken by X at time t is denoted X_t, t = 0, 1, ..., ∞.

Two types of time series:

- Stock: level of a variable recorded at a given point in time. Examples: the price of a stock; the wealth of an agent; the stock of capital in a given economy.
- Flow: variation of a stock over a period. Examples: GDP growth rate; return on a financial asset; investment during a year in a given economy.


Types of Time Series

Suppose we have monthly observations (X_t) and want to obtain quarterly data.

- If X_t is a flow, we simply take its sum within each quarter.
- If X_t is a stock, we may pick the observations at the beginning, in the middle or at the end of every quarter. Alternatively, we can take the average value of X_t over the quarter.

Example: if X_{t,j} is the log-return on an asset at day j of month t:

$$X_{t,j} = \log \frac{P_{t,j}}{P_{t,j-1}},$$

the monthly return is:

$$\tilde{X}_t = \sum_{j=1}^{m_t} \left(\log P_{t,j} - \log P_{t,j-1}\right) = \log P_{t,m_t} - \log P_{t,0},$$

where m_t is the number of days in month t (and P_{t,0} denotes the last price of the previous month).
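As a quick numerical check, here is a minimal Python sketch (with hypothetical prices) verifying that the sum of daily log-returns telescopes to the log price ratio over the month:

```python
import numpy as np

# Hypothetical daily closing prices over one month (m_t = 5 trading days).
prices = np.array([100.0, 101.0, 99.5, 102.0, 103.5])

# Daily log-returns X_{t,j} = log(P_{t,j} / P_{t,j-1}).
daily_returns = np.diff(np.log(prices))

# The monthly return is the sum of the daily log-returns...
monthly_return = daily_returns.sum()

# ...which telescopes to the log of the first-to-last price ratio.
assert np.isclose(monthly_return, np.log(prices[-1] / prices[0]))
print(monthly_return)
```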


Regularly Sampled Time Series

Time series that are observed at fixed-length intervals are said to be regular.

- Intra-daily series: e.g., observed every 5 minutes
- Daily series: one observation per day
- Weekly series: one observation per week

Likewise: monthly series; quarterly series; yearly series.

Examples:

- quarterly GDP of Canada
- monthly Consumer Price Index of Benin
- daily close price of the S&P 500 (US stock market index)


Irregularly Sampled Time Series

Stock transaction prices are typically observed at irregular time intervals.

Example: intra-daily transaction prices of the IBM stock.

- Ask price: the price demanded for an immediate "sell".
- Bid price: the price offered for an immediate "buy".

The "bid price" is always below the "ask price".

A transaction occurs when a seller chooses to sell at the buyer's price (bid), or when a buyer chooses to buy at the seller's price (ask).

P_{t,j} = price of transaction #j during day t.

If several transactions occur at the same second, the recorded price will be an average of these transactions' prices.


Irregularly Sampled Time Series

Transactions occur at random time intervals: the time elapsed between P_{t,j} and P_{t,j+1} is a random variable.

For certain stocks, hundreds or thousands of transactions occur every day; for some other stocks, there are only a few.

The number of transactions on an asset is a proxy for its level of liquidity. The difference between the "ask" and the "bid" is also a measure of liquidity.


Frequency of a Series

High frequency data: intra-daily, daily, weekly (or even monthly for macroeconomists).

Low frequency data: one observation per month, quarter or year.

The frequency of a regular series is:

$$\text{freq} = \frac{1 \text{ unit of time}}{\text{time between 2 consecutive obs.}}$$

If the unit of time is a day, month or year, then freq gives the number of observations per day, month or year.


Features of time series

Time series data are usually not independent and identically distributed:

- Correlation over time: x_t can often be used to predict the behavior of x_{t+1}.
- Trend: x_t may be increasing or decreasing over time, so that E(x_t) ≠ E(x_{t+1}).
- Volatility: the variance of x_t may be time varying.
- Conditional heteroscedasticity: the variance of x_t conditional on past information may be time varying.


Features of time series

- Cycles: the trajectory of x_t may display predictable swings over time.
- Seasonality: usually refers to cycles within a year. For instance, the sales of toy shops are high in Q4 and low in Q1.
- Long cycles: usually refers to cycles of several years, like economic expansions and recessions.
- Erratic fluctuations: fluctuations that are hard to predict.


Why Study Time Series?

1. Find the model that best describes an economic series.
2. Forecast future realizations of economic time series.
3. Test economic theories. Examples of theories are: the Phillips curve; the consumption CAPM; the Efficient Market Hypothesis; the Expectations Hypothesis; the Purchasing Power Parity; etc.
4. Etc.


Trend and Seasonality

Let y_t be a quarterly series in which we suspect the presence of a trend, time dependence and seasonality.

We may specify the following model for y_t:

$$y_t - \mu_t = \rho\,(y_{t-1} - \mu_{t-1}) + \varepsilon_t \quad \text{with} \quad \mu_t = \alpha_0 + \alpha_1 t + \delta_1 Q_{t,1} + \delta_2 Q_{t,2} + \delta_3 Q_{t,3}, \quad \forall t$$

- μ_t is the mean of y_t
- α_0: a constant
- α_1 t: a linear trend
- ρ: autocorrelation or time dependence
- δ_1 Q_{t,1}, δ_2 Q_{t,2} and δ_3 Q_{t,3}: seasonality
- ε_t: error term


Trend and Seasonality

In general, the trend can be any deterministic function of time g(t):

- linear trend: g(t) = α_1 t
- logarithmic trend: g(t) = α_1 log t
- quadratic trend: g(t) = α_1 t²

Q_{t,k}, k = 1, 2, 3, 4, are seasonal dummy variables:

$$Q_{t,k} = \begin{cases} 1 & \text{if } t \text{ is the } k\text{-th quarter of the current year} \\ 0 & \text{otherwise} \end{cases}$$

Q_{t,4} is not included to avoid multicollinearity.

δ_1, δ_2 and δ_3 are called seasonal coefficients.
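For concreteness, the seasonal dummies can be built as follows (a sketch; the quarter labels assume the sample starts in Q1):

```python
import numpy as np

# Two years of quarterly observations, t = 1, ..., 8.
t = np.arange(1, 9)
quarter = (t - 1) % 4 + 1                      # calendar quarter of observation t

# Q[t-1, k-1] = 1 if observation t falls in quarter k, 0 otherwise.
Q = (quarter[:, None] == np.arange(1, 5)[None, :]).astype(float)

# Only Q1, Q2 and Q3 enter the regression; Q4 is the omitted base category.
X_seasonal = Q[:, :3]
print(X_seasonal)
```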


De-trending and de-Seasonalization

Substitute μ_t and μ_{t−1} in the expression of y_t:

$$y_t = (\alpha_0 + \alpha_1 t + \delta_1 Q_{t,1} + \delta_2 Q_{t,2} + \delta_3 Q_{t,3}) + \rho\left(y_{t-1} - \alpha_0 - \alpha_1 (t-1) - \delta_1 Q_{t-1,1} - \delta_2 Q_{t-1,2} - \delta_3 Q_{t-1,3}\right) + \varepsilon_t.$$

This yields an equation that can be estimated by OLS:

$$y_t = \hat\beta_0 + \hat\beta_1 t + \hat\rho\, y_{t-1} + \hat\delta_1 Q_{t,1} + \hat\delta_2 Q_{t,2} + \hat\delta_3 Q_{t,3} + \hat\beta_2 Q_{t-1,1} + \hat\beta_3 Q_{t-1,2} + \hat\beta_4 Q_{t-1,3} + \hat\varepsilon_t$$

De-trending and de-Seasonalization

α_0 and α_1 are estimated as:

$$\hat\beta_1 = \hat\alpha_1 (1 - \hat\rho) \quad \text{and} \quad \hat\beta_0 = (1 - \hat\rho)\,\hat\alpha_0 + \hat\rho\,\hat\alpha_1$$

After estimating the coefficients, one computes the de-trended and de-seasonalized series as

$$\tilde y_t = y_t - \hat\mu_t$$

where

$$\hat\mu_t = \hat\alpha_0 + \hat\alpha_1 t + \hat\delta_1 Q_{t,1} + \hat\delta_2 Q_{t,2} + \hat\delta_3 Q_{t,3}$$
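As an illustration, de-trending and de-seasonalization can be sketched on simulated data. This is a simplified two-step variant (first fit the deterministic component μ_t by OLS, which remains consistent even with an autocorrelated error, then estimate ρ from the filtered series); all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
t = np.arange(1, T + 1)

# Hypothetical true parameters, chosen only for illustration.
alpha0, alpha1, rho = 2.0, 0.05, 0.6
delta = np.array([1.0, -0.5, 0.8])            # seasonal coefficients (Q4 is the base)

# Seasonal dummies: Q[i, k] = 1 if observation t = i + 1 falls in quarter k + 1.
Q = np.zeros((T, 3))
for k in range(3):
    Q[(t - 1) % 4 == k, k] = 1.0

mu = alpha0 + alpha1 * t + Q @ delta

# Simulate y_t - mu_t = rho * (y_{t-1} - mu_{t-1}) + eps_t.
eps = rng.normal(size=T)
y = np.empty(T)
y[0] = mu[0] + eps[0]
for i in range(1, T):
    y[i] = mu[i] + rho * (y[i - 1] - mu[i - 1]) + eps[i]

# Step 1: estimate mu_t by OLS on the deterministic part only.
X = np.column_stack([np.ones(T), t, Q])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
mu_hat = X @ b

# De-trended and de-seasonalized series.
y_tilde = y - mu_hat

# Step 2: estimate rho from the filtered series.
rho_hat = (y_tilde[1:] @ y_tilde[:-1]) / (y_tilde[:-1] @ y_tilde[:-1])
print(b, rho_hat)
```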


De-trending and de-Seasonalization

This procedure can be adapted to the case of an autoregressive process of order p:

$$y_t - \mu_t = \sum_{i=1}^{p} \rho_i\,(y_{t-i} - \mu_{t-i}) + \varepsilon_t \quad \text{with} \quad \mu_t = \alpha_0 + \alpha_1 t + \delta_1 Q_{t,1} + \delta_2 Q_{t,2} + \delta_3 Q_{t,3}.$$

Simply substitute μ_t, ..., μ_{t−p} in the expression of y_t and estimate the resulting equation by OLS.

Note that it is possible to filter μ_t nonparametrically as well.


Strong Stationarity

y_t is said to be strongly stationary if the distribution of (y_t, y_{t+1}, ..., y_{t+p}) is the same for all t, ∀ p ∈ N.

- This definition does not require that the moments of y_t exist.
- It is hard to use empirically.
- It can be used to check whether a specified theoretical model is stationary.

Moments of a random variable X: any linear combination of quantities of the type E(X^k), k ∈ N.


Famous moments

- First moment: μ = E(X)
- Second moments: m_2 = E(X²) (non-centered); μ_2 = E[(X − μ)²] (centered)
- Third moments: m_3 = E(X³) (non-centered); μ_3 = E[(X − μ)³] (centered)
- Skewness: S = μ_3 / σ³
- Fourth moments: m_4 = E(X⁴) (non-centered); μ_4 = E[(X − μ)⁴] (centered)
- Kurtosis: K = μ_4 / σ⁴
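A small Monte Carlo sketch of these quantities: for a Gaussian sample, the skewness should be close to 0 and the kurtosis close to 3.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=200_000)

mu = x.mean()
sigma = x.std()
mu3 = ((x - mu) ** 3).mean()    # third centered moment
mu4 = ((x - mu) ** 4).mean()    # fourth centered moment

S = mu3 / sigma ** 3            # skewness: ~0 for a Gaussian
K = mu4 / sigma ** 4            # kurtosis: ~3 for a Gaussian
print(S, K)
```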

Covariance Stationarity

y_t is said to be second-order stationary or covariance stationary if:

- E(y_t) = μ (constant ∀ t)
- Var(y_t) = σ² (constant ∀ t)
- Cov(y_t, y_{t−h}) = γ_h (depends only on h)

If y_t is strongly stationary and has finite second moments, then it is also covariance stationary.

A series that has a trend or a seasonality is not stationary. A series that is heteroscedastic is not stationary.

"Stationary" with no other precision means "covariance stationary".


Conditional Heteroscedasticity

A series can be stationary but conditionally heteroscedastic:

- constant mean: E(y_t) = μ for all t;
- constant unconditional variance: Var(y_t) = σ² for all t;
- stable autocovariance structure: Cov(y_t, y_{t−h}) = γ_h for all t;
- but a time-varying conditional variance: Var(y_t | y_{t−1}, y_{t−2}, ..., y_0) = σ²_t.

In this case, by the law of total variance, we have:

$$\mathrm{Var}(y_t) = E\left(\mathrm{Var}(y_t \mid y_{t-1}, y_{t-2}, \ldots, y_0)\right) + \mathrm{Var}\left(E(y_t \mid y_{t-1}, y_{t-2}, \ldots, y_0)\right) = E(\sigma_t^2) + \mathrm{Var}(\mu) = E(\sigma_t^2).$$
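An ARCH(1)-type process (hypothetical parameters) illustrates the idea: the conditional variance moves with the last shock, while the unconditional variance is the constant E(σ²_t) = w/(1 − a).

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200_000

# ARCH(1)-type process (hypothetical parameters): y_t = mu + eps_t with
# eps_t = sigma_t * z_t and conditional variance sigma_t^2 = w + a * eps_{t-1}^2.
mu_y, w, a = 0.0, 1.0, 0.5
z = rng.normal(size=T)
y = np.empty(T)
eps_prev = 0.0
for i in range(T):
    sigma2_t = w + a * eps_prev ** 2     # time-varying conditional variance
    eps_prev = np.sqrt(sigma2_t) * z[i]
    y[i] = mu_y + eps_prev

# Constant unconditional variance: E(sigma_t^2) = w / (1 - a) = 2 here.
print(y.var())
```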

Autocovariance

Let y_t be a covariance stationary series.

The autocovariance of order h of y_t is:

$$\gamma(h) = \mathrm{Cov}(y_t, y_{t-h}) \equiv \gamma_h.$$

Note that:

$$\gamma_{-h} = \mathrm{Cov}(y_t, y_{t+h}) = \gamma_h \quad \text{and} \quad \gamma(0) = \mathrm{Cov}(y_t, y_t) = \mathrm{Var}(y_t) = \sigma^2 \equiv \gamma_0.$$

Autocorrelation

The autocorrelation of order h is:

$$\rho(h) = \frac{\gamma_h}{\gamma_0} \equiv \rho_h.$$

By definition, we have ρ_0 = 1.
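The sample counterparts of γ_h and ρ_h can be computed directly; a minimal sketch:

```python
import numpy as np

def autocov(y, h):
    """Sample autocovariance: (1/T) * sum_{t=h+1}^{T} (y_t - ybar)(y_{t-h} - ybar)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    return ((y[h:] - ybar) * (y[:T - h] - ybar)).sum() / T

def autocorr(y, h):
    """Sample autocorrelation: gamma_hat_h / gamma_hat_0."""
    return autocov(y, h) / autocov(y, 0)

# White noise has autocorrelation 1 at lag 0 and ~0 at any other lag.
rng = np.random.default_rng(3)
z = rng.normal(size=100_000)
print(autocorr(z, 0), autocorr(z, 1))
```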

Partial Autocorrelation

The partial autocorrelation of order h of y_t, denoted ρ̃_h, is the correlation between y_t and y_{t−h} that does not transit through the intermediate observations y_{t−1}, ..., y_{t−h+1}.

It is the "response" of y_t to y_{t−h} when we control for the effects of (y_{t−1}, ..., y_{t−h+1}).

It can be obtained via the following OLS regression:

$$y_t = \beta_0 + \beta_1 y_{t-1} + \ldots + \beta_{h-1}\, y_{t-h+1} + \tilde\rho_h\, y_{t-h} + \varepsilon_t$$

By definition, ρ̃_0 = ρ_0 = 1 and ρ̃_1 = ρ_1.

To find ρ̃_2, ..., ρ̃_h, one needs to estimate separate regressions at different lags and pick the last coefficient each time.
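The regression-based definition translates directly into code. A sketch that estimates ρ̃_h as the last OLS coefficient; for an AR(1), the PACF is ≈ ρ at lag 1 and ≈ 0 beyond.

```python
import numpy as np

def pacf(y, h):
    """Partial autocorrelation of order h: the last OLS coefficient in a
    regression of y_t on a constant and y_{t-1}, ..., y_{t-h}."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack([np.ones(T - h)] + [y[h - i:T - i] for i in range(1, h + 1)])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return beta[-1]

# For an AR(1) with rho = 0.6, pacf(y, 1) ~ 0.6 and pacf(y, 2) ~ 0.
rng = np.random.default_rng(4)
T = 50_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + eps[t]

print(pacf(y, 1), pacf(y, 2))
```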

Autoregressive Model of Order 1

Consider the autoregressive model of order 1, denoted AR(1):

$$y_t = \mu + \rho\, y_{t-1} + \varepsilon_t$$

where ε_t is a "white noise" with mean 0 and variance σ²_ε.

- Weak white noise: ε_t is uncorrelated with its past and future realizations: E(ε_t ε_{t−h}) = 0 for all h ≠ 0.
- Strong white noise: ε_t is IID on top of being a weak white noise.


Autoregressive Model of Order 1

The AR(1) model implies that:

$$y_{t-1} = \mu + \rho\, y_{t-2} + \varepsilon_{t-1}$$

Substituting for y_{t−1} into y_t yields:

$$y_t = \mu + \rho\,(\mu + \rho\, y_{t-2} + \varepsilon_{t-1}) + \varepsilon_t = \mu(1 + \rho) + \rho^2 y_{t-2} + \varepsilon_t + \rho\,\varepsilon_{t-1}$$

But we also have:

$$y_{t-2} = \mu + \rho\, y_{t-3} + \varepsilon_{t-2}$$

Autoregressive Model of Order 1

Substitute again y_{t−2} in this expression:

$$y_t = \mu(1+\rho) + \rho^2(\mu + \rho\, y_{t-3} + \varepsilon_{t-2}) + \varepsilon_t + \rho\,\varepsilon_{t-1} = \mu \sum_{i=0}^{2}\rho^i + \rho^3 y_{t-3} + \sum_{i=0}^{2}\rho^i\,\varepsilon_{t-i}$$

By substituting recursively y_{t−3}, y_{t−4}, etc., we obtain:

$$y_t = \mu \sum_{i=0}^{t-1}\rho^i + \rho^t y_0 + \sum_{i=0}^{t-1}\rho^i\,\varepsilon_{t-i}$$

Autoregressive Model of Order 1

Let us consider the case ρ ≠ 1. We have:

$$E(y_t) = \mu\,\frac{1-\rho^t}{1-\rho} + \rho^t\, E(y_0)$$

$$\mathrm{Var}(y_t) = \rho^{2t}\,\mathrm{Var}(y_0) + \sum_{i=0}^{t-1}\rho^{2i}\sigma_\varepsilon^2 = \rho^{2t}\,\mathrm{Var}(y_0) + \sigma_\varepsilon^2\,\frac{1-\rho^{2t}}{1-\rho^2}.$$

Note that |ρ| ≠ 1 is necessary for these expressions to be well defined.


Autoregressive Model of Order 1

Let us find the limit of E(y_t) and Var(y_t) as t → ∞.

When |ρ| < 1, we have:

$$\lim_{t\to\infty} E(y_t) = \frac{\mu}{1-\rho} \quad \text{and} \quad \lim_{t\to\infty}\mathrm{Var}(y_t) = \frac{\sigma_\varepsilon^2}{1-\rho^2}.$$

Moreover,

$$\mathrm{Cov}(y_t, y_{t-h}) = \rho^h\,\mathrm{Var}(y_t).$$

Hence, the AR(1) is second-order stationary if and only if |ρ| < 1. In this case, we have:

$$E(y_t) = \frac{\mu}{1-\rho} \quad \text{and} \quad \mathrm{Var}(y_t) = \frac{\sigma_\varepsilon^2}{1-\rho^2}.$$
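A simulation check of the stationary moments (hypothetical parameter values μ = 1, ρ = 0.8, σ_ε = 1, so E(y) = 5 and Var(y) ≈ 2.78):

```python
import numpy as np

rng = np.random.default_rng(5)
T, mu_c, rho, sigma_eps = 200_000, 1.0, 0.8, 1.0

# Simulate an AR(1): y_t = mu + rho * y_{t-1} + eps_t.
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = mu_c + rho * y[t - 1] + sigma_eps * eps[t]

# Stationary moments: E(y) = mu / (1 - rho), Var(y) = sigma^2 / (1 - rho^2).
print(y.mean(), y.var())
```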

Explosive Process

When |ρ| > 1, we have

$$\lim_{t\to\infty} |E(y_t)| = \infty \quad \text{and} \quad \lim_{t\to\infty}\mathrm{Var}(y_t) = \infty.$$

In this case, the AR(1) is explosive and therefore non-stationary.


Stochastic Trend

Unit Root

Let us now consider the case ρ = 1 (unit root).

We have:

$$y_t = \mu t + y_0 + \sum_{i=0}^{t-1}\varepsilon_{t-i}$$

so that, taking y_0 as given:

$$E(y_t) = \mu t \quad \text{and} \quad \mathrm{Var}(y_t) = t\,\sigma_\varepsilon^2$$

The series behaves as though it has a linear trend, but its variance diverges as t grows. Hence the term "stochastic trend".
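Simulating many random-walk paths shows the cross-sectional variance growing linearly with t:

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, T, mu_c = 2000, 400, 0.1

# Simulate many unit-root paths y_t = mu + y_{t-1} + eps_t with y_0 = 0.
eps = rng.normal(size=(n_paths, T))
y = np.cumsum(mu_c + eps, axis=1)

# Across paths, Var(y_t) = t * sigma_eps^2: ~100 at t = 100, ~400 at t = 400.
print(y[:, 99].var(), y[:, 399].var())
```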

Stochastic Trend

Unit Root

Likewise, when ρ = −1 we have:

$$y_t = \mu \sum_{i=0}^{t-1}(-1)^i + (-1)^t y_0 + \sum_{i=0}^{t-1}(-1)^i\,\varepsilon_{t-i}$$

Hence:

$$E(y_t \mid y_0) = y_0 \ \text{ and } \ \mathrm{Var}(y_t \mid y_0) = t\,\sigma_\varepsilon^2 \quad \text{if } t \text{ is even,}$$
$$E(y_t \mid y_0) = \mu - y_0 \ \text{ and } \ \mathrm{Var}(y_t \mid y_0) = t\,\sigma_\varepsilon^2 \quad \text{if } t \text{ is odd,}$$

so that y_t is non-stationary.

When economic time series have a unit root, it is almost always ρ = 1 rather than ρ = −1.

Stochastic Trend

Testing for Unit Root

Several tests exist in the literature to detect the presence of a unit root, e.g., the Dickey-Fuller test.

Suppose ρ = 1, so that

$$y_t = \mu + y_{t-1} + \varepsilon_t$$

The first difference Δy_t = y_t − y_{t−1} is therefore stationary:

$$\Delta y_t = \mu + \varepsilon_t$$
$$E(\Delta y_t) = \mu; \quad \mathrm{Var}(\Delta y_t) = \sigma_\varepsilon^2; \quad \mathrm{Cov}(\Delta y_t, \Delta y_{t-h}) = 0 \ \text{for all } h \neq 0$$


Stochastic Trend

Testing for Unit Root

The Dickey-Fuller test exploits the idea that if ρ = 1, the coefficient α should not be significant in the following regression:

$$\Delta y_t = \mu + \alpha\, y_{t-1} + \varepsilon_t$$

Versions of the test exist where a trend and/or longer lags are added on the RHS:

$$\Delta y_t = \alpha_0 + \mu_1 t + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \ldots + \alpha_p y_{t-p} + \varepsilon_t$$

In all cases, the test is based on the slope coefficient of the first lag y_{t−1}.

The distribution of the test statistic depends on whether a trend is included and whether lags of y_t are added.
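The basic regression can be sketched as follows; this only computes the t-statistic of α (it does not provide the Dickey-Fuller critical values, which must be taken from tables, not from the Student distribution):

```python
import numpy as np

def dickey_fuller_stat(y):
    """t-statistic of alpha in the regression dy_t = mu + alpha * y_{t-1} + e_t.
    Sketch only: compare with Dickey-Fuller critical values, not Student ones."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)       # OLS covariance matrix of beta
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(7)
walk = np.cumsum(rng.normal(size=1000))     # unit-root series
stationary = np.zeros(1000)
eps = rng.normal(size=1000)
for t in range(1, 1000):
    stationary[t] = 0.5 * stationary[t - 1] + eps[t]

stat_walk = dickey_fuller_stat(walk)              # small in absolute value
stat_stationary = dickey_fuller_stat(stationary)  # large and negative
print(stat_walk, stat_stationary)
```

In practice one would use a full implementation such as `adfuller` in the statsmodels package rather than this sketch.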


Integration of Order d

If the data generating process is more complicated than an AR(1), it can be necessary to difference y_t more than once before obtaining a stationary series.

Δ^d y_t is the series y_t differenced d times:

$$\Delta y_t = y_t - y_{t-1}$$
$$\Delta^2 y_t = \Delta(\Delta y_t) = \Delta(y_t - y_{t-1}) = (y_t - y_{t-1}) - (y_{t-1} - y_{t-2}) = y_t - 2y_{t-1} + y_{t-2}$$
$$\Delta^3 y_t = \Delta(\Delta^2 y_t) = \ldots\ \text{etc.}$$

If Δ^d y_t is stationary while Δ^n y_t is not for n < d, then we say that y_t is integrated of order d, i.e., y_t has a unit root with order of multiplicity d.
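For example, with NumPy, differencing twice removes a quadratic trend:

```python
import numpy as np

y = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # quadratic in t

d1 = np.diff(y)         # first difference: still trending
d2 = np.diff(y, n=2)    # second difference: constant
print(d1, d2)
```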


Autoregressive Model of Order p: AR(p)

The AR(p) model is of the form:

$$y_t = c + \sum_{i=1}^{p}\rho_i\, y_{t-i} + \varepsilon_t$$

where ε_t is a white noise.

Let us define the "lag" operator as L y_t = y_{t−1}. Then we have:

$$L^2 y_t = L(L y_t) = L y_{t-1} = y_{t-2} \quad \text{and} \quad L^p y_t = y_{t-p}.$$

Stationarity of an AR(p)

Let us define P(L) = 1 − ∑_{i=1}^p ρ_i L^i. Then the AR(p) becomes:

$$P(L)\,y_t = c + \varepsilon_t$$

If there exists a polynomial P^{-1}(L) = ∑_{i=0}^∞ θ_i L^i such that ∑_{i=0}^∞ θ_i² < ∞ and P^{-1}(L)P(L) = 1, then P(L) is invertible.

For an AR(1), P(L) = 1 − ρL. If |ρ| < 1, we have:

$$P^{-1}(L) = \sum_{i=0}^{\infty}\rho^i L^i$$

If |ρ| ≥ 1, then P^{-1}(L) does not exist.

P(L) is invertible if all the solutions of P(x) = 0 are outside the unit circle.
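The root condition is easy to check numerically; a sketch using np.roots:

```python
import numpy as np

def is_stationary_ar(rho):
    """Stationarity check for an AR(p) with coefficients rho_1, ..., rho_p:
    all roots of P(x) = 1 - rho_1 x - ... - rho_p x^p must lie strictly
    outside the unit circle."""
    # np.roots expects coefficients ordered from the highest degree down.
    coeffs = [-r for r in rho[::-1]] + [1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary_ar([0.5]))        # AR(1) with |rho| < 1
print(is_stationary_ar([1.0]))        # unit root
print(is_stationary_ar([0.5, 0.3]))   # stationary AR(2)
```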


Stationarity of an AR(p)

If P(L) is invertible, we can write:

$$P(L)\,y_t = c + \varepsilon_t \ \Rightarrow\ y_t = P^{-1}(L)\,(c + \varepsilon_t) = \sum_{i=0}^{\infty}\theta_i L^i (c + \varepsilon_t) = c\sum_{i=0}^{\infty}\theta_i + \sum_{i=0}^{\infty}\theta_i\,\varepsilon_{t-i},$$

with the convention that L^i c = c.

If ε_t is a white noise with mean 0 and variance σ²_ε, then:

$$E(y_t) = c\sum_{i=0}^{\infty}\theta_i \quad \text{and} \quad \mathrm{Var}(y_t) = \sigma_\varepsilon^2 \sum_{i=0}^{\infty}\theta_i^2$$

The AR(p) model P(L)y_t = c + ε_t is stationary if and only if the polynomial P(L) is invertible.


Mean of a stationary AR(p)

Consider a stationary AR(p) given by:

$$y_t = c + \sum_{i=1}^{p}\rho_i\, y_{t-i} + \varepsilon_t$$

Take the expectation on both sides of the equality:

$$E(y_t) = c + \sum_{i=1}^{p}\rho_i\, E(y_{t-i}) + E(\varepsilon_t),$$

where E(ε_t) = 0 for all t.

Moreover, E(y_t) = E(y_{t−i}) = μ because of stationarity. Hence:

$$\mu = c + \sum_{i=1}^{p}\rho_i\,\mu \ \Rightarrow\ \mu = \frac{c}{1 - \sum_{i=1}^{p}\rho_i}$$

Mean of a stationary AR(p)

Stationarity requires that ∑_{i=1}^p ρ_i ≠ 1; indeed, ∑_{i=1}^p ρ_i = 1 implies the existence of a unit root.

With no loss of generality, we can write the AR(p) as:

$$y_t - \mu = \sum_{i=1}^{p}\rho_i\,(y_{t-i} - \mu) + \varepsilon_t,$$

using the fact that c = μ − ∑_{i=1}^p ρ_i μ.

Autocovariances of a stationary AR(p)

Multiply both sides of the equality by (y_t − μ) and take the expectation:

$$E\left[(y_t - \mu)^2\right] = \sum_{i=1}^{p}\rho_i\, E\left[(y_{t-i} - \mu)(y_t - \mu)\right] + E\left[\varepsilon_t (y_t - \mu)\right]$$

But note that:

$$E\left[(y_t - \mu)^2\right] = \mathrm{Var}(y_t) = \gamma_0 \quad \text{and} \quad E\left[(y_{t-i} - \mu)(y_t - \mu)\right] = \mathrm{Cov}(y_{t-i}, y_t) = \gamma_i$$

Likewise:

$$E\left[\varepsilon_t (y_t - \mu)\right] = \sum_{i=1}^{p}\rho_i\, E\left[\varepsilon_t (y_{t-i} - \mu)\right] + E(\varepsilon_t^2) = \sigma_\varepsilon^2,$$

since ε_t is uncorrelated with lags of y_t.


Autocovariances of a stationary AR(p)

Finally, we have:

$$\gamma_0 = \sum_{i=1}^{p}\rho_i\,\gamma_i + \sigma_\varepsilon^2.$$

This defines a finite difference equation of order p for the sequence γ_i. Classical methods exist to solve this kind of equation, but they are not needed here. Instead, we will look for a system of linear equations of which (γ_0, γ_1, ..., γ_p) is the solution.

The equation above contains p + 1 unknowns. Hence, we need p other equations.


Autocovariances of a stationary AR(p)

Multiply again the AR(p) by (y_{t−h} − μ) and take the expectation:

$$E\left[(y_t - \mu)(y_{t-h} - \mu)\right] = \sum_{i=1}^{p}\rho_i\, E\left[(y_{t-i} - \mu)(y_{t-h} - \mu)\right] + E\left[\varepsilon_t (y_{t-h} - \mu)\right]$$

for h = 1, ..., p.

This leads to:

$$\gamma_h = \sum_{i=1}^{p}\rho_i\,\gamma_{|i-h|} = \rho_1\gamma_{h-1} + \rho_2\gamma_{h-2} + \ldots + \rho_h\gamma_0 + \rho_{h+1}\gamma_1 + \ldots + \rho_p\gamma_{p-h}$$

Together with the previous equation, we obtain a system of p + 1 equations with p + 1 unknowns.


Example for an AR(2)

For an AR(2), we have to solve:

$$\begin{cases} \gamma_0 = \rho_1\gamma_1 + \rho_2\gamma_2 + \sigma_\varepsilon^2 \\ \gamma_1 = \rho_1\gamma_0 + \rho_2\gamma_1 \\ \gamma_2 = \rho_1\gamma_1 + \rho_2\gamma_0 \end{cases}$$

Solve this system for γ_0, γ_1 and γ_2, assuming ρ_1, ρ_2 and σ²_ε are known.

Empirically, we often have to deal with the inverse problem: estimate ρ_1, ρ_2 and σ²_ε from γ̂_0, γ̂_1 and γ̂_2.
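The direct problem can be solved numerically by rewriting the three equations above as a linear system in (γ_0, γ_1, γ_2); the parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical AR(2) parameters.
rho1, rho2, sigma2 = 0.5, 0.3, 1.0

# Rewrite the three equations as A @ (gamma0, gamma1, gamma2) = b.
A = np.array([
    [1.0,        -rho1, -rho2],   # gamma0 - rho1*gamma1 - rho2*gamma2 = sigma2
    [-rho1, 1.0 - rho2,   0.0],   # gamma1 = rho1*gamma0 + rho2*gamma1
    [-rho2,      -rho1,   1.0],   # gamma2 = rho1*gamma1 + rho2*gamma0
])
b = np.array([sigma2, 0.0, 0.0])

gamma0, gamma1, gamma2 = np.linalg.solve(A, b)
print(gamma0, gamma1, gamma2)
```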


Estimation of AR(p) Model

Consider the last p equations of the previous system:

$$\gamma_h = \rho_1\gamma_{h-1} + \rho_2\gamma_{h-2} + \ldots + \rho_h\gamma_0 + \rho_{h+1}\gamma_1 + \ldots + \rho_p\gamma_{p-h}, \quad h = 1, \ldots, p.$$

In matrix notation, we have:

$$\begin{pmatrix} \gamma_1 \\ \gamma_2 \\ \vdots \\ \gamma_p \end{pmatrix} = \begin{pmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{p-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{p-1} & \gamma_{p-2} & \cdots & \gamma_0 \end{pmatrix} \begin{pmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_p \end{pmatrix}$$

This justifies the estimation of the coefficients ρ_1, ..., ρ_p by OLS.

Estimation of AR(p) Models

OLS estimation of the slope coefficients:

$$\begin{pmatrix} \hat\rho_1 \\ \hat\rho_2 \\ \vdots \\ \hat\rho_p \end{pmatrix} = \begin{pmatrix} \hat\gamma_0 & \hat\gamma_1 & \cdots & \hat\gamma_{p-1} \\ \hat\gamma_1 & \hat\gamma_0 & \cdots & \hat\gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \hat\gamma_{p-1} & \hat\gamma_{p-2} & \cdots & \hat\gamma_0 \end{pmatrix}^{-1} \begin{pmatrix} \hat\gamma_1 \\ \hat\gamma_2 \\ \vdots \\ \hat\gamma_p \end{pmatrix}$$

where

$$\hat\gamma_h = \frac{1}{T}\sum_{t=h+1}^{T}(y_t - \hat\mu)(y_{t-h} - \hat\mu) \quad \text{and} \quad \hat\mu = \frac{1}{T}\sum_{t=1}^{T}y_t.$$

The constant is deduced as ĉ = μ̂ − ∑_{i=1}^p ρ̂_i μ̂.
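The whole estimator can be sketched on simulated data (a Yule-Walker-type moment-matching sketch; the AR(2) parameter values below are hypothetical):

```python
import numpy as np

def fit_ar(y, p):
    """Estimate an AR(p) by solving the autocovariance system above with
    sample autocovariances."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    mu_hat = y.mean()
    gamma = np.array([((y[h:] - mu_hat) * (y[:T - h] - mu_hat)).sum() / T
                      for h in range(p + 1)])
    # Toeplitz matrix built from gamma_0, ..., gamma_{p-1}.
    G = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    rho_hat = np.linalg.solve(G, gamma[1:])
    c_hat = mu_hat * (1.0 - rho_hat.sum())
    return c_hat, rho_hat

# Check on a simulated AR(2): y_t = 1 + 0.5 y_{t-1} + 0.3 y_{t-2} + eps_t.
rng = np.random.default_rng(8)
T = 100_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 1.0 + 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

c_hat, rho_hat = fit_ar(y, 2)
print(c_hat, rho_hat)
```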

Moving Average Model of Order q: MA(q)

The MA(q) model is of the form:

$$y_t = c + \varepsilon_t - \sum_{j=1}^{q}\theta_j\,\varepsilon_{t-j}$$

where ε_t is a white noise.

Let us define Q(L) = 1 − ∑_{j=1}^q θ_j L^j. Then the MA(q) becomes:

$$y_t = c + Q(L)\,\varepsilon_t$$

The MA(q) model is always stationary for finite q:

$$E(y_t) = c \quad \text{and} \quad \mathrm{Var}(y_t) = \sigma_\varepsilon^2\left(1 + \sum_{j=1}^{q}\theta_j^2\right).$$
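A simulation check of the MA(1) moments (hypothetical values c = 2, θ₁ = 0.7, σ_ε = 1, so E(y) = 2 and Var(y) = 1.49):

```python
import numpy as np

rng = np.random.default_rng(9)
T, c, theta1 = 500_000, 2.0, 0.7

# MA(1): y_t = c + eps_t - theta1 * eps_{t-1}.
eps = rng.normal(size=T + 1)
y = c + eps[1:] - theta1 * eps[:-1]

# E(y) = c and Var(y) = (1 + theta1^2) * sigma_eps^2.
print(y.mean(), y.var())
```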

Invertibility of an MA(q)

The MA(q) model is invertible if the polynomial Q(L) is invertible. Q(L) is invertible if all the solutions of Q(x) = 0 are outside the unit circle.

In this case, there exists Q^{-1}(L) = ∑_{i=0}^∞ ρ_i L^i such that Q(L)Q^{-1}(L) = 1.

An invertible MA(q) always admits an AR(∞) representation:

$$\sum_{i=0}^{\infty}\rho_i\,(y_{t-i} - c) = \varepsilon_t$$

Likewise, a stationary AR(p) admits an MA(∞) representation.

The MA(q) model cannot be estimated by OLS. Instead, one may use maximum likelihood or the method of moments.


ARMA Models

Stationarity and Invertibility

The ARMA(p,q) model is of the following form:

$$y_t = c + \sum_{i=1}^{p}\rho_i\, y_{t-i} + \varepsilon_t - \sum_{j=1}^{q}\theta_j\,\varepsilon_{t-j}$$

Using our previous notation, we have:

$$P(L)\,y_t = c + Q(L)\,\varepsilon_t$$

where

$$P(L) = 1 - \sum_{i=1}^{p}\rho_i L^i \quad \text{and} \quad Q(L) = 1 - \sum_{j=1}^{q}\theta_j L^j$$

The ARMA(p,q) model is stationary if P(L) is invertible. The ARMA(p,q) model is invertible if Q(L) is invertible.

ARMA models may be estimated by maximum likelihood.
