# What does cointegration mean?

## Cointegration and error correction in the context of time series analysis

### TABLE OF CONTENTS

List of abbreviations

List of symbols

1. Introduction

2. Basic concepts

2.1 Stationarity

2.2 Integration

2.3 Spurious regression

3. Cointegration and error correction

3.1 The concept of cointegration

3.2 Granger representation theorem

3.3 The error correction model

3.4 Tests for cointegration

4. Further developments and relevance for research and practice

4.1 Further developments of the cointegration concept

4.2 The importance of cointegration for research

5. Conclusion

Bibliography

Appendix

### LIST OF ABBREVIATIONS

Figure not included in this excerpt

### 1 Introduction

Time series are of central importance in research. They show the course of chronologically ordered observations and are analyzed in order to better predict future events.^{1} Furthermore, two or more time series can be examined for relationships, from which forecasts can be derived. Looking at the time series of consumption and income, one intuitively sees that the two are related: a change in one variable leads to a change in the other. The classical linear regression model can be used to examine a possible relationship between such variables. However, this requires that the time series be stationary. Since economic time series are typically non-stationary, differences have to be taken in order to achieve stationarity; otherwise spurious relationships can arise. Taking differences, however, leads to a loss of information, so this method does not yield reliable statements about the levels. R. F. Engle and C. W. J. Granger developed a framework which, under certain conditions, allows non-stationary time series to be used within the classical linear regression model without loss of information. The two researchers also recognized that every cointegration model can be represented as an error correction model and vice versa. This enabled research to analyze both long-run and short-run relationships between time series. Their paper "Co-Integration and Error Correction: Representation, Estimation, and Testing" is the main subject of this work.

First, the basic concepts of stationarity, integration and spurious regression are clarified. The third chapter deals with cointegration and error correction: the cointegration concept is presented in the first section, followed by a brief consideration of the Granger representation theorem; the error correction model is then presented in detail, and test procedures for the existence of cointegration are discussed. Chapter 4 provides information about extensions of the cointegration concept. Furthermore, the importance of cointegration and its areas of application are discussed.

### 2. Basic concepts

### 2.1 Stationarity

Most economic data that one encounters over time are non-stationary and trending.^{2} They follow a stochastic trend, which means that the variability of the time series increases over time. Non-stationarity therefore means that a variable has no clear tendency to return to a constant value or to a linear trend. This is an important property of time series. In order to work sensibly with linear processes, an additional property must be required, the so-called stationarity.^{3}

There are two different grades of this concept. One speaks of strict stationarity when all properties of a segment of the process remain constant over time. However, this concept is rarely used in practice; for this reason, weak stationarity is assumed in the following. A stochastic process is weakly stationary if its expected value is finite and independent of time, which means that there is no sign of a trend. In addition, the autocovariances are finite and depend only on the time difference, not on the individual points in time. As a result, the variance is constant and independent of time. As an example, consider a stochastic process ɛt.^{4} ɛt is referred to as a purely random process if the following properties hold: the expected value E(ɛt) must be zero, the variance V(ɛt) = σ² must be constant and finite, and the random variables of the process must be mutually uncorrelated. If these properties are given, the process is weakly stationary. Such a purely random process ɛt is also referred to as "white noise".
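As an illustration (not taken from the paper), the white-noise properties above can be checked empirically on a simulated series; the sample size and seed are arbitrary choices:

```python
import numpy as np

# Simulate a purely random process: E(eps_t) = 0, constant variance,
# mutually uncorrelated draws -- the textbook "white noise" case.
rng = np.random.default_rng(0)
eps = rng.normal(loc=0.0, scale=1.0, size=10_000)

sample_mean = eps.mean()                          # close to E(eps_t) = 0
sample_var = eps.var()                            # close to sigma^2 = 1
lag1_corr = np.corrcoef(eps[:-1], eps[1:])[0, 1]  # close to 0 (no autocorrelation)

print(sample_mean, sample_var, lag1_corr)
```

With a large enough sample, the three estimates settle near 0, 1 and 0 respectively, matching the defining properties of weak stationarity.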

### 2.2 integration

If differences have to be taken in order to achieve stationarity, a time series is said to be integrated.^{5} For the majority of economic time series, taking differences once is sufficient to make them stationary.^{6} Since the initial series is obtained by simply summing up the transformed series, such processes are called integrated of order 1, I(1). This leads to a general definition of the term integration.^{7} A stochastic process xt is called integrated of order d if its d-fold differences ∆^d xt are a stationary process, xt ~ I(d) (x is integrated of order d). In the following, the values d = 0 and d = 1 are considered. There are major differences between time series integrated of order 0, I(0), and time series integrated of order 1, I(1). If a time series xt is integrated of order 0, its variance is finite. For time series that are integrated of order 1, the variance tends towards infinity as t tends towards infinity.
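A minimal numerical sketch of this definition (an illustration, not from the paper): a random walk is the cumulative sum of white noise, so it is I(1), and differencing it once recovers the stationary innovations:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(size=5_000)          # white noise, I(0)

# Summing up the stationary series yields an I(1) process (a random walk),
# mirroring the definition: the initial series is the cumulated transformed one.
x = np.cumsum(eps)                    # x_t = x_{t-1} + eps_t, so x ~ I(1)
dx = np.diff(x)                       # first differences: exactly eps again, I(0)

# The level series wanders and has a far larger sample variance than
# its differences, whose variance stays finite.
print(x.var(), dx.var())
```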

### 2.3 Spurious regression

In a 1974 paper, Granger and Newbold coined the term spurious regression.^{8} The term refers to the fact that regressions between integrated time series that have no true dependencies can, due to non-stationarity, produce artificially statistically significant pseudo-relationships. A well-known example is the relationship between the number of child births and the number of pairs of storks.^{9} Although the two features correlate with one another, there is no causal relationship. In an extensive simulation study, Granger and Newbold regressed independent stochastic processes on one another.^{10} They found that the least-squares estimators of the slope parameters converge towards random variables with non-degenerate distributions, not towards zero. The same applies to the coefficient of determination. The estimated residuals, on the other hand, behave like I(1) processes, and the Durbin-Watson statistics of the residuals converge to zero.

To avoid such pseudo-dependencies, time series analysts in the past took the view that one must not work with the original series. The original series must be transformed to be weakly stationary: to estimate the dynamic relationship between time series, differences are taken until the transformed series no longer show any indication of non-stationarity. The cross-correlogram of these series is then used to identify the relationship. Taking differences filters out the long-run movements, but information is also lost. There are two further problems with this method. On the one hand, there is the possibility that the estimated coefficients are not statistically significant, although a corresponding relationship exists. On the other hand, even if they are statistically significant, they can be strongly biased downwards due to the resulting errors-in-variables problem.

Two options were therefore open to time series econometricians.^{11} Either they run the risk of producing a spurious regression in the levels, or they obtain no significant results from the differences. This dilemma can be avoided with cointegration, because precisely when there is cointegration, there can be no spurious regression.
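The Granger-Newbold experiment is easy to reproduce in miniature. The following sketch (an illustration with arbitrary seed and sample-size choices, not the authors' original study) regresses two independent random walks on each other; the Durbin-Watson statistic of the residuals is close to zero, as described above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2_000

# Two completely independent I(1) processes (random walks).
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

# OLS of y on a constant and x.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

r2 = 1.0 - resid.var() / y.var()                       # often deceptively high
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # near 0 for I(1) residuals
print(f"R^2 = {r2:.3f}, Durbin-Watson = {dw:.3f}")
```

Despite the complete independence of the two series, the regression typically reports a sizeable R², while the tiny Durbin-Watson statistic reveals the highly persistent, I(1)-like residuals.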

### 3. Cointegration and error correction

### 3.1 The concept of cointegration

Cointegration occurs when two or more I(1) variables share a common development over the long run.^{12} Apart from temporary fluctuations, they do not drift apart. This situation is also known as a statistical equilibrium; in practice, it can be interpreted as a long-run economic relationship. An example is the price of a good on two geographically separate markets: although the prices diverge in the short run, they can be observed to converge in the long run.

The Nobel Prize winners Engle and Granger define cointegration in their 1987 paper as follows: the components of a vector xt are cointegrated of order (d, b), xt ~ CI(d, b), if and only if all components of xt are integrated of order d and there is (at least) one linear combination zt of these variables that is integrated of order d − b, where d ≥ b > 0, i.e. if (3.1) α′xt = zt ~ I(d − b) holds. The vector α is called the cointegration vector.

The following is a simple, static regression relationship between two I(1) variables. Let x and y be two I(1) processes. In general, a linear combination of x and y is again an I(1) process, unless there exists a parameter a such that the linear combination (3.2) xt − a yt = zt is I(0), i.e. stationary. In that case x and y are cointegrated. The corresponding equilibrium relation is given by (3.3) x = a y. The vector α′ = (1, −a) is the cointegration vector, and the process z is the equilibrium error: it describes the deviations from equilibrium. Since z is a stationary process with finite variance, the equilibrium error cannot become arbitrarily large; one observes the system returning to its path again and again. Thus the equilibrium relation (3.3) x = a y represents an attractor.
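The two-variable case can be simulated directly (an illustrative sketch; the parameter a = 2 and the AR(1) equilibrium error are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
a = 2.0                               # hypothetical cointegration parameter

# y is a random walk (I(1)); z is a stationary AR(1) "equilibrium error".
y = np.cumsum(rng.normal(size=n))
z = np.empty(n)
z[0] = 0.0
for t in range(1, n):
    z[t] = 0.5 * z[t - 1] + rng.normal()

# x inherits the stochastic trend of y, so x is I(1) as well ...
x = a * y + z

# ... but the linear combination x - a*y = z is stationary:
# x and y are cointegrated with cointegration vector (1, -a).
combo = x - a * y
print(combo.var(), x.var())
```

The variance of the combination stays small and bounded, while the variance of the level series keeps growing: exactly the attractor behaviour described above.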

### 3.2 Granger representation theorem

The Granger representation theorem establishes important properties of cointegration relationships.^{13} It states that there is always an error correction representation for every cointegration relationship. Since it first appeared in Granger (1981) and was proved in Granger (1983), it is called the Granger representation theorem. The theorem says the following: if an N × 1 vector xt is cointegrated of order CI(1, 1) with cointegration rank r, the system has, in addition to the autoregressive representation (3.4) A(B) xt = d(B) ɛt, also an error correction representation, (3.5) A*(B)(1 − B) xt = −ɣ zt−1 + d(B) ɛt, with (3.6) A(1) = ɣ α′, where zt = α′xt. Here ɣ and α are N × r matrices of rank r, with 0 < r < N.

Corresponding to the rank r of the matrices ɣ and α, the system of N I(1) variables contains r linearly independent cointegration vectors, as well as N − r independent stochastic trends.

### 3.3 The error correction model

Error correction mechanisms are widely used in economics. The idea behind them is the following.^{14} A portion of the disequilibrium of one period, i.e. the equilibrium error, is corrected in the next period. For example, the price change in one period can depend on the level of excess demand in the previous period. For a multivariate system, a general error correction representation can be defined with respect to the backshift operator B.^{15} The filter B, with Bxt = xt−1, shifts the time series by one time unit. The difference filter is then ∆ = 1 − B, and thus (3.7) ∆xt = xt − xt−1 = (1 − B) xt.
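The adjustment idea can be illustrated with a small simulation (a sketch with hypothetical parameters, not from the paper): x corrects a fixed fraction of last period's equilibrium error x − y, and that fraction can be recovered by regressing ∆x on the lagged error. Note the sign convention: here the coefficient multiplies the lagged error directly, so a negative value means error-correcting behaviour.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
gamma = -0.2                          # hypothetical adjustment speed

# y is a random walk; x tracks y but corrects a fraction of last
# period's equilibrium error z = x - y in every step.
y = np.cumsum(rng.normal(size=n))
x = np.empty(n)
x[0] = y[0]
for t in range(1, n):
    z_prev = x[t - 1] - y[t - 1]      # last period's equilibrium error
    x[t] = x[t - 1] + gamma * z_prev + (y[t] - y[t - 1]) + 0.5 * rng.normal()

# Estimate the adjustment coefficient: OLS of delta-x on the lagged error.
dx = np.diff(x)
z_lag = (x - y)[:-1]
gamma_hat = np.sum(z_lag * dx) / np.sum(z_lag * z_lag)
print(f"true adjustment = {gamma}, estimated = {gamma_hat:.3f}")
```

The estimate lands close to the true adjustment speed: a fifth of each period's disequilibrium is worked off in the next period.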

A time series xt has an error correction representation if it can be represented as:

(3.8) A(B)(1 − B) xt = −ɣ zt−1 + ɛt, where (3.9) zt = α′xt and ɣ ≠ 0.^{16} In this representation, only the disequilibrium of the previous period appears as an explanatory variable. To deepen the explanation of the error correction model, one can start from the following cointegration model:

(3.10) xt = m + n yt + zt.^{17}

There is a linear relationship between the levels, where m and n are parameters. Instead of a specification in levels, an analogous specification in differences can be chosen, which is represented as follows: (3.11) ∆xt = n ∆yt + ɛt, with ɛt = ∆zt.

As mentioned above, taking differences loses the information about the relationship between the levels, but one obtains a stationary estimation equation.
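For comparison (an illustrative sketch with made-up parameters), the difference specification (3.11) can be estimated on simulated data: the slope n survives differencing, while the level parameter m drops out of the differenced equation entirely, which is exactly the information loss just described.

```python
import numpy as np

rng = np.random.default_rng(11)
n_obs = 5_000
m, slope = 3.0, 1.5                   # hypothetical level parameters (m, n)

y = np.cumsum(rng.normal(size=n_obs))           # I(1) regressor
z = rng.normal(scale=0.5, size=n_obs)           # stationary equilibrium error
x = m + slope * y + z                           # levels model x_t = m + n*y_t + z_t

# Differenced regression: delta-x on delta-y (no intercept).
dx, dy = np.diff(x), np.diff(y)
slope_hat = np.sum(dy * dx) / np.sum(dy * dy)

# The slope is recovered; the level parameter m does not appear
# in the differenced equation at all.
print(f"estimated slope = {slope_hat:.3f} (true: {slope})")
```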

**[...]**

^{1}See Kirchgässner / Wolters (2006), p. 1.

^{2}See Hassler (2003), p. 811.

^{3}See Möller (2003), p. 48.

^{4}See Kirchgässner / Wolters (2006), p. 13.

^{5}See Hassler (2003), p. 812.

^{6}See Kirchgässner / Wolters (2006), p. 179.

^{7}See Engle / Granger (1987), p. 252.

^{8}See Granger / Newbold (1974), p. 111.

^{9} See Figure 1 in the Appendix

^{10}See Kirchgässner / Wolters (2006), p. 179.

^{11}See Hassler (2003), p. 813.

^{12}See Engle / Granger (1987), p. 253.

^{13}See Engle / Granger (1987), pp. 255-256.

^{14}See Engle / Granger (1987), p. 254.

^{15}See Schlittgen / Streitberg (2001), p. 43.

^{16}See Engle / Granger (1987), p. 254.

^{17}See Jerger (1993), pp. 131-134.
