Applications of Linear Algebra and Eigenvalues
Some additional material and references
\[ A \equiv \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.8 \\ \end{bmatrix} \]
Iterate \(x_{t+1} = A x_t\) from \(x_0\) for \(t = 1, \ldots, 200\)
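A minimal sketch of this first experiment; \(x_0 = (1, 1)\) and \(t = 200\) are assumptions consistent with the printed output:

import numpy as np
from numpy.linalg import inv, solve, eigvals, norm, cond  # linear algebra routines used throughout

A = np.array([[0.9, 0.1],
              [0.5, 0.8]])
x_0 = np.array([1.0, 1.0])  # assumed initial condition
t = 200

print(f"rho(A) = {np.max(np.abs(eigvals(A)))}")
print(f"x_{t} = {np.linalg.matrix_power(A, t) @ x_0}")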
rho(A) = 1.079128784747792
x_200 = [3406689.32410673 6102361.18640516]
With a different \(A\) such that \(\rho(A) < 1\), the same iteration instead decays toward zero:

rho(A) = 0.9449489742783178
x_200 = [6.03450418e-06 2.08159603e-05]
Finally, build an \(A\) with \(\rho(A) = 1\) from chosen eigenvectors and eigenvalues:

Q = np.array([[-0.85065081, -0.52573111], [0.52573111, -0.85065081]])
print(f"check orthogonal: dot(x_1,x_2) approx 0: {np.dot(Q[:,0], Q[:,1])}")
Lambda = [1.0, 0.8] # choose eigenvalues so that max_n |lambda_n| = 1
A = Q @ np.diag(Lambda) @ inv(Q)
print(f"rho(A) = {np.max(np.abs(eigvals(A)))}")
print(f"x_{t} = {np.linalg.matrix_power(A, t) @ x_0}")
check orthogonal: dot(x_1,x_2) approx 0: 0.0
rho(A) = 1.0
x_200 = [ 0.27639321 -0.17082039]
\[ \begin{aligned} E_{t+1} &= (1-\alpha) E_t + \phi U_t\\ U_{t+1} &= \alpha E_t + (1-\phi) U_t \end{aligned} \]
\[ \underbrace{\begin{bmatrix} E_{t+1}\\U_{t+1}\end{bmatrix}}_{X_{t+1}} = \underbrace{\begin{bmatrix} 1-\alpha & \phi \\ \alpha & 1-\phi \end{bmatrix}}_{ A} \underbrace{\begin{bmatrix} E_{t}\\U_{t}\end{bmatrix}}_{X_t} \]
Simulate by iterating \(X_{t+1} = A X_t\) from \(X_0\) until \(T=100\)
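A sketch of the simulation, including the `simulate` helper used below; the values \(\alpha = 0.05\), \(\phi = 0.1\), and \(X_0 = (900000, 100000)\) are assumptions consistent with the printed results:

alpha, phi = 0.05, 0.1   # assumed separation and job-finding rates
A = np.array([[1 - alpha, phi],
              [alpha, 1 - phi]])
X_0 = np.array([900_000.0, 100_000.0])  # assumed initial (E_0, U_0)
T = 100

def simulate(A, X_0, T):
    # iterate X_{t+1} = A X_t, storing the path X_0, ..., X_T in columns
    X = np.empty((len(X_0), T + 1))
    X[:, 0] = X_0
    for t in range(T):
        X[:, t + 1] = A @ X[:, t]
    return X

X = simulate(A, X_0, T)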
Find \(X_{\infty}\) by iterating \(X_{t+1} = A X_t\) many times from some \(X_0\)?
Alternatively, note that a steady state \(\bar{X} = A \bar{X}\) is the same as
\[ 1 \times \bar{X} = A \bar{X} \]
so \(\bar{X}\) is an eigenvector of \(A\) associated with the eigenvalue \(1\). (Because the columns of \(A\) sum to one, \(1\) is always an eigenvalue and the total \(E_t + U_t\) is preserved.)
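A sketch of that calculation; rescaling the eigenvector to a total of 1,000,000 workers is an assumption consistent with the printed \(\bar{X}\):

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors
print(f"real eigenvalues = {np.real(eigenvalues)}")
print("eigenvectors in columns of =\n", Q)
print(f"first eigenvalue = 1? {np.isclose(eigenvalues[0], 1.0)}")
# rescale the lambda = 1 eigenvector so that E + U = 1,000,000 (assumed total)
X_bar = 1_000_000 * Q[:, 0] / Q[:, 0].sum()
print(f"X_bar = {X_bar}")
print(f"X_{T} = {X[:, T]}")   # the simulated path ends near X_bar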
real eigenvalues = [1. 0.85]
eigenvectors in columns of =
[[ 0.89442719 -0.70710678]
[ 0.4472136 0.70710678]]
first eigenvalue = 1? True
X_bar = [666666.66666667 333333.33333333]
X_100 = [666666.6870779 333333.31292209]
Lambda_fast = np.array([1.0, 0.4])
A_fast = Q @ np.diag(Lambda_fast) @ inv(Q) # same eigenvectors
print("A_fast =\n", A_fast)
print(f"alpha_fast/alpha = {A_fast[1,0]/A[1,0]:.2g}, \
phi_fast/phi = {A_fast[0,1]/A[0,1]:.2g}")
X_fast = simulate(A_fast, X_0, T)
print(f"X_{T} = {X_fast[:,T]}")
A_fast =
[[0.8 0.4]
[0.2 0.6]]
alpha_fast/alpha = 4, phi_fast/phi = 4
X_100 = [666666.66666667 333333.33333333]
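The speedup comes from the second eigenvalue: the deviation of \(X_t\) from \(\bar{X}\) shrinks by a factor \(|\lambda_2|\) each period. A quick check of the two contraction rates used above:

# contraction of the transient component after T = 100 periods
print(f"0.85**100 = {0.85**100:.3g}, 0.4**100 = {0.4**100:.3g}")

With \(\lambda_2 = 0.4\) the transient is far below the printed precision after 100 periods, which is why \(X_{100}\) matches \(\bar{X}\) to every digit shown.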
Assume dividends follow \(y_{t+1} = G y_t\) for \(t=0,1,\ldots\) and \(y_0\) is given
If \(G > 0\) and dividends are discounted at factor \(0 < \beta < 1\), then \(p_t = \sum_{s=0}^{\infty} \beta^s y_{t+s} = \frac{y_t}{1-\beta G}\), provided \(\beta G < 1\)
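A quick numerical check of this scalar formula against a truncated sum (\(\beta = 0.9\), \(G = 1.05\), and \(y_t = 1\) are assumed example values with \(\beta G < 1\)):

beta, G, y_t = 0.9, 1.05, 1.0   # assumed example values
p_closed = y_t / (1 - beta * G)
p_truncated = sum(beta**s * G**s * y_t for s in range(1000))  # partial sum of the series
print(f"closed form = {p_closed:.6f}, truncated sum = {p_truncated:.6f}")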
More generally, if \(x_{t+1} = A x_t\) with \(x_t \in \mathbb{R}^N\), \(A \in \mathbb{R}^{N \times N}\), and \(y_t = G x_t\) for a row vector \(G \in \mathbb{R}^{1 \times N}\), then
\[ \begin{aligned} p_t &= y_t + \beta y_{t+1} + \beta^2 y_{t+2} + \ldots = G x_t + \beta G A x_t + \beta^2 G A^2 x_t + \ldots\\ &= \sum_{s=0}^{\infty} \beta^s G A^s x_t\\ &= G (I - \beta A)^{-1} x_t, \quad \text{if } \rho(A) < 1/\beta \end{aligned} \]
where \(\rho(A) \equiv \max_n |\lambda_n(A)|\) is the spectral radius of \(A\)
Here is an example with \(1 < \rho(A) < 1/\beta\). Try it with different \(A\).
beta = 0.9
A = np.array([[0.85, 0.1], [0.2, 0.9]])
G = np.array([[1.0, 1.0]]) # row vector
x_0 = np.array([1.0, 1.0])
p_t = G @ solve(np.eye(2) - beta * A, x_0)
#p_t = G @ inv(np.eye(2) - beta * A) @ x_0 # alternative
rho_A = np.max(np.abs(eigvals(A)))  # spectral radius = max_n |lambda_n|
print(f"p_t = {p_t[0]:.4g}, spectral radius = {rho_A:.4g}, 1/beta = {1/beta:.4g}")
p_t = 24.43, spectral radius = 1.019, 1/beta = 1.111
Using `inv` here is a bad idea due to poor conditioning.

Polynomial interpolation: given data \((x_0, y_0), \ldots, (x_N, y_N)\), find coefficients \(c_0, \ldots, c_N\) of a degree-\(N\) polynomial such that
\[ \begin{aligned} c_0 + c_1 x_0 + c_2 x_0^2 + \ldots + c_N x_0^N &= y_0\\ \dots &= \dots\\ c_0 + c_1 x_N + c_2 x_N^2 + \ldots + c_N x_N^N &= y_N\\ \end{aligned} \]
\[ A \equiv \begin{bmatrix} 1 & x_0 & x_0^2 & \ldots & x_0^N\\ \vdots & \vdots &\vdots & \vdots & \vdots\\ 1 & x_N & x_N^2 & \ldots & x_N^N \end{bmatrix} \]
\[ A c = y \]
N = 5
x = np.linspace(0.0, 10.0, N + 1)
y = np.exp(x) # example function to interpolate
A = np.array([[x_i**n for n in range(N + 1)] for x_i in x]) # or np.vander
c = solve(A, y)
c_inv = inv(A) @ y
print(f"error = {norm(A @ c - y, np.inf)}, \
error using inv(A) = {norm(A @ c_inv - y, np.inf)}")
print(f"cond(A) = {cond(A)}")
error = 1.574562702444382e-11, error using inv(A) = 1.1932570487260818e-09
cond(A) = 564652.3214000753
N = 10
x = np.linspace(0.0, 10.0, N + 1)
y = np.exp(x) # example function to interpolate
A = np.array([[x_i**n for n in range(N + 1)] for x_i in x]) # or np.vander
c = solve(A, y)
c_inv = inv(A) @ y # Solving with inv(A) instead of solve(A, y)
print(f"error = {norm(A @ c - y, np.inf)}, \
error using inv(A) = {norm(A @ c_inv - y, np.inf)}")
print(f"cond(A) = {cond(A)}")
error = 5.334186425898224e-10, error using inv(A) = 6.22717197984457e-06
cond(A) = 4462824600234.486
N = 20
x = np.linspace(0.0, 10.0, N + 1)
y = np.exp(x) # example function to interpolate
A = np.array([[x_i**n for n in range(N + 1)] for x_i in x]) # or np.vander
c = solve(A, y)
c_inv = inv(A) @ y # Solving with inv(A) instead of solve(A, y)
print(f"error = {norm(A @ c - y, np.inf)}, \
error using inv(A) = {norm(A @ c_inv - y, np.inf)}")
print(f"cond(A) = {cond(A):.4g}")
error = 6.784830475226045e-10, error using inv(A) = 31732.823760853855
cond(A) = 1.697e+24
Prefer `solve`, which is faster and can often handle ill-conditioned problems. Rarely use `inv`, and only when you know the problem is well-conditioned.

Another remedy for the conditioning problem is to change the polynomial basis: compare the Chebyshev basis with the monomial basis on the same grid.

N = 40
x = np.linspace(-1, 1, N+1) # Or any other range of x values
A = np.array([[np.polynomial.Chebyshev.basis(n)(x_i) for n in range(N+1)] for x_i in x])
A_monomial = np.array([[x_i**n for n in range(N + 1)] for x_i in x]) # or np.vander
print(f"cond(A) = {cond(A):.4g}, cond(A_monomial) = {cond(A_monomial):.4g}")
cond(A) = 3.64e+09, cond(A_monomial) = 2.926e+18
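To close the example, the better-conditioned basis can be used for the interpolation itself. A minimal sketch, assuming (as in the earlier cells) np.exp as the function to interpolate, now on \([-1, 1]\):

y = np.exp(x) # example function to interpolate on [-1, 1]
c_cheb = solve(A, y) # coefficients in the Chebyshev basis
c_mono = solve(A_monomial, y) # coefficients in the monomial basis
print(f"Chebyshev residual = {norm(A @ c_cheb - y, np.inf):.3g}, \
monomial residual = {norm(A_monomial @ c_mono - y, np.inf):.3g}")

As in the earlier cells, the residual tends to grow with the condition number, so the Chebyshev coefficients should come out noticeably more reliably at this size.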