Introduction to Linear Algebra, Chapter 8: Linear Transformations

[TOC]
Chapter 8 of *Introduction to Linear Algebra*: Linear Transformations.

8.1 The Idea of a Linear Transformation

Section summary

  1. A linear transformation $T$ takes vectors $\boldsymbol{v}$ to vectors $T(\boldsymbol{v})$. Linearity requires $T(c \boldsymbol{v}+d \boldsymbol{w})=c T(\boldsymbol{v})+d T(\boldsymbol{w})$. Note $T(\mathbf{0})=\mathbf{0}$ so $T(\boldsymbol{v})=\boldsymbol{v}+\boldsymbol{u}_{0}$ is not linear.
  2. The input vectors $\boldsymbol{v}$ and outputs $T(\boldsymbol{v})$ can be in $\mathbf{R}^{\boldsymbol{n}}$ or matrix space or function space.
  3. If $A$ is $m \times n$, $T(\boldsymbol{x})=A \boldsymbol{x}$ is linear from the input space $\mathbf{R}^{\boldsymbol{n}}$ to the output space $\mathbf{R}^{\boldsymbol{m}}$.
  4. The derivative $T(f)=\frac{d f}{d x}$ is linear. The integral $T^{+}(f)=\int_{0}^{x} f(t) d t$ is its pseudoinverse.
  5. The product $S T$ of two linear transformations is still linear: $(S T)(\boldsymbol{v})=S(T(\boldsymbol{v}))$.

Definition of a linear transformation:

A transformation $T$ assigns an output $T(\boldsymbol{v})$ to each input vector $\boldsymbol{v}$ in $V$. The transformation is linear if it meets these requirements for all $\boldsymbol{v}$ and $\boldsymbol{w}$ :
(a) $T(\boldsymbol{v}+\boldsymbol{w})=T(\boldsymbol{v})+T(\boldsymbol{w})$
(b) $T(c \boldsymbol{v})=c T(\boldsymbol{v})$ for all $c$

Combining (a) and (b):
Linear transformation
$T(c \boldsymbol{v} + d \boldsymbol{w})$ must equal $cT(\boldsymbol{v}) + dT(\boldsymbol{w})$.

Shift is not linear
$T(\boldsymbol{v} + \boldsymbol{w}) = \boldsymbol{v} + \boldsymbol{w} + \boldsymbol{u}_0$ is not $T(\boldsymbol{v}) + T(\boldsymbol{w}) = (\boldsymbol{v} + \boldsymbol{u}_0) + (\boldsymbol{w} + \boldsymbol{u}_0)$.

Identity transformation
The special case $\boldsymbol{u}_0 = \mathbf{0}$, so that $T(\boldsymbol{v}) = \boldsymbol{v}$.

Affine
The linear-plus-shift transformation $T(\boldsymbol{v}) = A \boldsymbol{v} + \boldsymbol{u}_0$.

If the output involves squares or products or lengths, $v_1^2$ or $v_1 v_2$ or $\|\boldsymbol{v}\|$, then $T$ is not linear.
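These requirements can be checked numerically. Below is a minimal sketch (my own illustration, not from the book, assuming numpy) that tests $T(c\boldsymbol{v}+d\boldsymbol{w}) = cT(\boldsymbol{v})+dT(\boldsymbol{w})$ on random inputs, for a matrix map and for a shift map:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_linear(T, n, trials=100):
    """Test T(c*v + d*w) == c*T(v) + d*T(w) on random vectors and scalars."""
    for _ in range(trials):
        v, w = rng.standard_normal(n), rng.standard_normal(n)
        c, d = rng.standard_normal(2)
        if not np.allclose(T(c * v + d * w), c * T(v) + d * T(w)):
            return False
    return True

A = rng.standard_normal((3, 2))
u0 = np.array([1.0, 0.0, 0.0])

print(is_linear(lambda v: A @ v, 2))       # True : T(v) = A v
print(is_linear(lambda v: A @ v + u0, 2))  # False: the shift breaks T(0) = 0
```

A random-sampling test like this can only refute linearity, never prove it, but it catches the shift map immediately.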

Lines to Lines, Triangles to Triangles, Basis Tells All


The figure in the book illustrates: lines to lines, equal spacing to equal spacing, and $\boldsymbol{u} = \mathbf{0}$ to $T(\boldsymbol{u}) = \mathbf{0}$.

Linearity: $\boldsymbol{u}=c_{1} \boldsymbol{v}_{1}+c_{2} \boldsymbol{v}_{2}+\cdots+c_{n} \boldsymbol{v}_{n}$ must transform to $T(\boldsymbol{u})=c_{1} T\left(\boldsymbol{v}_{1}\right)+c_{2} T\left(\boldsymbol{v}_{2}\right)+\cdots+c_{n} T\left(\boldsymbol{v}_{n}\right)$.

In other words:
Suppose you know $T(\boldsymbol{v}_i)$ for all basis vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_n$. Then you know $T(\boldsymbol{u})$ for every vector $\boldsymbol{u}$ in the space.
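A small sketch of "basis tells all" (made-up basis and made-up outputs, assuming numpy): expand $\boldsymbol{u}$ in the basis, then combine the known outputs $T(\boldsymbol{v}_i)$ with the same coefficients.

```python
import numpy as np

V = np.array([[1.0, 1.0],        # basis vectors v1 = (1,0), v2 = (1,1) as columns
              [0.0, 1.0]])
T_of_V = np.array([[1.0, 0.0],   # made-up outputs: columns are T(v1), T(v2)
                   [2.0, 1.0],
                   [0.0, 3.0]])

u = np.array([3.0, 2.0])
c = np.linalg.solve(V, u)        # u = c1*v1 + c2*v2
T_of_u = T_of_V @ c              # T(u) = c1*T(v1) + c2*T(v2)
print(T_of_u)                    # [1. 4. 6.]
```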

Take calculus as an example:
differentiation is a linear transformation, and integration is its pseudoinverse.
Using polynomials $a + bx + cx^2$ with three coefficients as the example,
the matrix of $d/dx$ is

$$
\boldsymbol{A}=\left[\begin{array}{lll}
0 & 1 & 0 \\
0 & 0 & 2
\end{array}\right]=\text { matrix form of the derivative } T=\frac{d}{d x}
$$

$$
\begin{array}{ll}
\text { Input } \boldsymbol{u} \quad a + bx + cx^2 & \text { Multiplication } A \boldsymbol{u}=\left[\begin{array}{lll}
0 & 1 & 0 \\
0 & 0 & 2
\end{array}\right]\left[\begin{array}{l}
a \\
b \\
c
\end{array}\right]=\left[\begin{array}{c}
b \\
2 c
\end{array}\right] \quad \text { Output } \frac{\boldsymbol{d u}}{\boldsymbol{d} x}=b+2 c x
\end{array}
$$

Integration example: $\int_{0}^{x}(D+E x) d x=D x+\frac{1}{2} E x^{2}$

$$
\begin{array}{ll}
\text { Input } \boldsymbol{v} \quad D+Ex & \text { Multiplication } A^{+} \boldsymbol{v}=\left[\begin{array}{ll}
0 & 0 \\
1 & 0 \\
0 & \frac{1}{2}
\end{array}\right]\left[\begin{array}{l}
D \\
E
\end{array}\right]=\left[\begin{array}{c}
0 \\
D \\
\frac{1}{2} E
\end{array}\right] \quad \begin{array}{c}
\text { Output }=\text { Integral of } \boldsymbol{v} \\
T^{+}(\boldsymbol{v})=D x+\frac{1}{2} E x^{2}
\end{array}
\end{array}
$$
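A quick check of those two coefficient matrices (a numpy sketch; each polynomial is encoded by its coefficient vector):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],      # derivative: a + b x + c x^2 -> b + 2c x
              [0.0, 0.0, 2.0]])
A_plus = np.array([[0.0, 0.0],      # integral: D + E x -> D x + (1/2) E x^2
                   [1.0, 0.0],
                   [0.0, 0.5]])

u = np.array([5.0, 4.0, 3.0])       # 5 + 4x + 3x^2
print(A @ u)                        # [4. 6.]   i.e. 4 + 6x
v = np.array([7.0, 2.0])            # 7 + 2x
print(A_plus @ v)                   # [0. 7. 1.] i.e. 7x + x^2
```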

Beyond the polynomials $x^n$, differentiation of $\cos x$, $\sin x$, and $e^x$ is also a linear transformation.

Transformations have a language of their own.
Range of $T$ = set of all outputs $T(\boldsymbol{v})$. Range corresponds to column space.
Kernel of $T$ = set of all inputs for which $T(\boldsymbol{v}) = \mathbf{0}$. Kernel corresponds to nullspace.

8.2 The Matrix of a Linear Transformation

Section summary

  1. We know all $T(\boldsymbol{v})$ if we know $T(\boldsymbol{v}_1), … , T(\boldsymbol{v}_n )$ for an input basis $\boldsymbol{v}_1, … , \boldsymbol{v}_n$ : use linearity.
  2. Column $j$ in the “matrix for $T$” comes from applying $T$ to the input basis vector $\boldsymbol{v}_j$.
  3. Write $T( \boldsymbol{v}_j) = a_{1 j} \boldsymbol{w}_1 + · · · + a_{m j} \boldsymbol{w}_m$ in the output basis of $\boldsymbol{w}$'s. Those $a_{ij}$ go into column $j$.
  4. The matrix for $T(\boldsymbol{x}) = A \boldsymbol{x}$ is $A$, if the input and output bases = columns of $I_{n \times n}$ and $I_{m \times m}$.
  5. When the bases change to $\boldsymbol{v}$’s and $\boldsymbol{w}$’s, the matrix for the same $T$ changes from $A$ to $W^{- 1} AV$.
  6. Best bases: $V = W =$ eigenvectors and $V, W =$ singular vectors give the diagonal $\Lambda$ and $\varSigma$.

Using a matrix $A$ to describe a linear transformation between vector spaces: The next pages assign a matrix $A$ to every linear transformation $T$. For ordinary column vectors, the input $\boldsymbol{v}$ is in $V = \mathbf{R}^n$ and the output $T(\boldsymbol{v})$ is in $W = \mathbf{R}^m$. The matrix $A$ for this transformation will be $m$ by $n$. Our choice of bases in $V$ and $W$ will decide $A$: different bases for the input and output spaces give different matrices $A$ for the same transformation.

The same transformation $T$ is represented by other matrices. A main theme of linear algebra is to choose the bases that give the best matrix (a diagonal matrix) for $T$.

The input is first expressed in the chosen basis of $V$, then multiplied by the transformation matrix $A$.
Suppose we know $T(\boldsymbol{v})$ for the input basis vectors $\boldsymbol{v}_{1}$ to $\boldsymbol{v}_{n}$. Columns 1 to $n$ of the matrix will contain those outputs $T\left(\boldsymbol{v}_{1}\right)$ to $T\left(\boldsymbol{v}_{n}\right)$. $A$ times $\boldsymbol{c}$ = matrix times vector = combination of those $n$ columns. $A \boldsymbol{c}$ is the correct combination $c_{1} T\left(\boldsymbol{v}_{1}\right)+\cdots+c_{n} T\left(\boldsymbol{v}_{n}\right)=T(\boldsymbol{v})$.

Change of Basis

A linear transformation includes change of basis as a special case.
Suppose the same vector $\boldsymbol{u}$ is written in the input basis of $\boldsymbol{v}$’s and the output basis of $\boldsymbol{w}$’s, and suppose $T(\boldsymbol{v}) = \boldsymbol{v}$ is the identity transformation. In the special case where the change-of-basis matrix is $I$, each $\boldsymbol{v}_i = \boldsymbol{w}_i$. When it is not $I$, the new coefficients are found as follows.

$$ \begin{aligned} &\boldsymbol{u}=c_{1} \boldsymbol{v}_{1}+\cdots+c_{n} \boldsymbol{v}_{n} \\\\ &\boldsymbol{u}=d_{1} \boldsymbol{w}_{1}+\cdots+d_{n} \boldsymbol{w}_{n} \end{aligned} \text { is }\left[\boldsymbol{v}_{1} \cdots \boldsymbol{v}_{n}\right]\left[\begin{array}{c} c_{1} \\\\ \vdots \\\\ c_{n} \end{array}\right]=\left[\boldsymbol{w}_{1} \cdots \boldsymbol{w}_{n}\right]\left[\begin{array}{c} d_{1} \\\\ \vdots \\\\ d_{n} \end{array}\right] \text { and } V \boldsymbol{c} = W \boldsymbol{d} $$

The coefficients $\boldsymbol{d}$ in the new basis of $\boldsymbol{w}$’s are $\boldsymbol{d} = W^{- 1} V \boldsymbol{c}$. Then the change-of-basis matrix $B$ is $W^{- 1} V$.
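In code, $\boldsymbol{d} = W^{-1}V\boldsymbol{c}$ is one linear solve (a sketch with made-up bases $V$ and $W$, assuming numpy):

```python
import numpy as np

V = np.array([[1.0, 0.0],      # input basis vectors v1, v2 as columns
              [1.0, 1.0]])
W = np.array([[2.0, 1.0],      # output basis vectors w1, w2 as columns
              [0.0, 1.0]])
c = np.array([3.0, 1.0])       # coefficients of u in the v-basis

d = np.linalg.solve(W, V @ c)  # d = W^{-1} V c, without forming the inverse
print(np.allclose(V @ c, W @ d))   # True: both expansions give the same u
```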

Construction of the Matrix

The discussion above covered one special linear transformation (the identity). Now generalize, going from the change-of-basis matrix $B$ to the matrix $A$ of a general transformation.

Key rule: The $j$th column of $A$ is found by applying $T$ to the $j$th basis vector $\boldsymbol{v}_{j}$: $T\left(\boldsymbol{v}_{j}\right)=$ combination of output basis vectors $=a_{1 j} \boldsymbol{w}_{1}+\cdots+a_{m j} \boldsymbol{w}_{m}$.

The matrix $A$ tells us what $T$ does. Every linear transformation from $V$ to $W$ can be converted to a matrix. This matrix depends on the bases.

Example 1, differentiation:
The input basis of $\boldsymbol{v}$’s is $1, x, x^{2}, x^{3}$.
The output basis of $\boldsymbol{w}$’s is $1, x, x^{2}$. Then $T$ takes the derivative: $T(\boldsymbol{v})=\frac{d v}{d x}$ and $A=$ “derivative matrix”.

$$
\text{ If } \boldsymbol{v} = c_{1}+c_{2} x+c_{3} x^{2}+c_{4} x^{3} \\
\text{ then } \frac{d \boldsymbol{v}}{d x} = 1 c_{2} + 2 c_{3} x + 3 c_{4} x^{2} \\
A \boldsymbol{c} = \left[\begin{array}{llll}0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3\end{array}\right]\left[\begin{array}{l}c_{1} \\ c_{2} \\ c_{3} \\ c_{4}\end{array}\right]=\left[\begin{array}{c}c_{2} \\ 2 c_{3} \\ 3 c_{4}\end{array}\right]
$$

Example 2, integration:

For the integral $T^{+}(\boldsymbol{v})$, the first basis function is $1$. Its integral is the second basis function $x$. So the first column of the “integral matrix” $A^{+}$ is $(0,1,0,0)$.

$$
\begin{array}{l}\text { The integral of } d_{1}+d_{2} x+d_{3} x^{2} \\ \text { is } \quad d_{1} x+\frac{1}{2} d_{2} x^{2}+\frac{1}{3} d_{3} x^{3}\end{array} \quad A^{+} \boldsymbol{d}=\left[\begin{array}{ccc}0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{3}\end{array}\right]\left[\begin{array}{l}d_{1} \\ d_{2} \\ d_{3}\end{array}\right]=\left[\begin{array}{c}0 \\ d_{1} \\ \frac{1}{2} d_{2} \\ \frac{1}{3} d_{3}\end{array}\right]
$$

The difference between the two orders of differentiating and integrating can be explained with these matrices:

If you integrate a function and then differentiate, you get back to the start. So $AA^+ = I$.
But if you differentiate before integrating, the constant term is lost. So $A^+ A$ is not $I$.
The integral of the derivative of $1$ is zero:
$T^+ T(1) =$ integral of zero function $= 0$.
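Those two statements can be verified with the derivative and integral matrices from the two examples above (a numpy sketch):

```python
import numpy as np

# Derivative on cubics (4 coefficients -> 3) and integration back (3 -> 4)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])
A_plus = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.5, 0.0],
                   [0.0, 0.0, 1.0 / 3.0]])

print(np.allclose(A @ A_plus, np.eye(3)))   # True : integrate, then differentiate
print(np.allclose(A_plus @ A, np.eye(4)))   # False: the constant term is lost
```

The first row of $A^+A$ is all zeros, which is exactly the lost constant term.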

Matrix Products AB Match Transformations TS

Composition of linear transformations corresponds to a product of matrices:

Multiplication The linear transformation $T S$ starts with any vector $\boldsymbol{u}$ in $\mathbf{U}$ , goes to $S(\boldsymbol{u})$ in $\mathbf{V}$ and then to $T(S(\boldsymbol{u}))$ in $\mathbf{W}$. The matrix $A B$ starts with any $\boldsymbol{x}$ in $\mathbf{R}^{p}$, goes to $B \boldsymbol{x} $ in $ \mathbf{R}^{n} $ and then to $ A B \boldsymbol{x} $ in $ \mathbf{R}^{m} $. The matrix $ A B $ correctly represents $ T S $ :
$$
T S: \quad U \rightarrow V \rightarrow W \quad A B: \quad(m \text{ by } n)(n \text{ by } p)=(m \text{ by } p)
$$

This explains operations on trigonometric functions.
Example 1:
$S$ rotates the plane by $ \theta $ and $T$ also rotates by $\theta$. Then $ T S $ rotates by $ 2 \theta $. This transformation $ T^{2} $ corresponds to the rotation matrix $ A^{2} $ through $ 2 \theta $:
$$
T=S \quad A=B \quad T^{2}=\text { rotation by } 2 \theta \quad A^{2}=\left[\begin{array}{rr}
\cos 2 \theta & -\sin 2 \theta \\
\sin 2 \theta & \cos 2 \theta
\end{array}\right]
$$

By matching (transformation)$^2$ with (matrix)$^2$, we pick up the formulas for $\cos 2\theta$ and $\sin 2\theta$. Multiply $A$ times $A$:

$$
\left[\begin{array}{rr}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta
\end{array}\right]\left[\begin{array}{rr}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta
\end{array}\right]=\left[\begin{array}{cc}
\cos ^{2} \theta-\sin ^{2} \theta & -2 \sin \theta \cos \theta \\
2 \sin \theta \cos \theta & \cos ^{2} \theta-\sin ^{2} \theta
\end{array}\right]
$$
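A numerical confirmation of the double-angle matching (a sketch; `rotation` is my own helper name):

```python
import numpy as np

def rotation(theta):
    """Plane rotation through the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 0.7
A = rotation(theta)
print(np.allclose(A @ A, rotation(2 * theta)))   # True: A^2 rotates by 2*theta
# Entry (1,0) of A @ A is 2 sin(theta) cos(theta) = sin(2*theta)
print(np.isclose((A @ A)[1, 0], np.sin(2 * theta)))   # True
```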

Example 2:
$S$ rotates by the angle $\theta$ and $T$ rotates by $-\theta$. Then $TS = I$ leads to $AB = I$. In this case $T(S(\boldsymbol{u}))$ is $\boldsymbol{u}$. We rotate forward and back. For the matrices to match, $AB\boldsymbol{x}$ must be $\boldsymbol{x}$. The two matrices are inverses. Check this by putting $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$ into the backward rotation matrix $A$:

$$
A B=\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right]\left[\begin{array}{rr}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta
\end{array}\right]=\left[\begin{array}{cc}
\cos ^{2} \theta+\sin ^{2} \theta & 0 \\
0 & \cos ^{2} \theta+\sin ^{2} \theta
\end{array}\right]=I
$$

Choosing the Best Bases

The best matrix is of course a diagonal matrix. How do we get it?

For a square matrix: if there are $n$ independent eigenvectors, choose those as the input and output basis. In this good basis, the matrix for $T$ is the diagonal eigenvalue matrix $\Lambda$.

For a general matrix: the SVD says that $U^{- 1} AV = \varSigma$. The right singular vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_n$ will be the input basis. The left singular vectors $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_m$ will be the output basis.

So we can say that $\varSigma$ is “isometric” to $A$.
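A quick numpy check that the singular vector bases diagonalize $A$ (note that `np.linalg.svd` returns $V^T$, not $V$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A)        # A = U @ Sigma @ Vt
Sigma = U.T @ A @ Vt.T             # = U^{-1} A V, since U and V are orthogonal
print(np.allclose(Sigma[:, :3], np.diag(s)))   # True: sigma_1..sigma_3 on the diagonal
print(np.allclose(Sigma[:, 3], 0))             # True: the rest is zero
```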

8.3 The Search for a Good Basis

Section summary

  1. With a new input basis $B_{in}$ and output basis $B_{out}$, every matrix $A$ becomes $B_{out}^{-1} AB_{in}$.
  2. $B_{in}$ = $B_{out}$ = “generalized eigenvectors of $A$” produces the Jordan form $J = B^{- 1} AB$.
  3. The Fourier matrix $F = B_{in} = B_{out}$ diagonalizes every circulant matrix (use the FFT).
  4. Sines and cosines, Legendre and Chebyshev: those are great bases for function space.

Here are four important choices for vectors and three choices for functions.

  1. $B_{in} = B_{out} =$ eigenvector matrix $X$. Then $X^{- 1} AX = \Lambda =$ eigenvalue matrix.
    Requirement: $A$ must be a square matrix with $n$ independent eigenvectors: ”$A$ must be diagonalizable.” We get $\Lambda$ when $B_{in} = B_{out}$ is the eigenvector matrix $X$.
  2. $B_{in} = V$ and $B_{out} = U$: singular vectors of $A$. Then $U^{- 1} AV = \varSigma$ is diagonal.
    $\varSigma$ is the singular value matrix (with $\sigma_1, \ldots, \sigma_n$ on its diagonal) when $B_{in}$ and $B_{out}$ are the singular vector matrices $V$ and $U$. Recall that those columns of $B_{in}$ and $B_{out}$ are orthonormal eigenvectors of $A^T A$ and $AA^T$. Then $A = U \varSigma V^T$ gives $\varSigma = U^{- 1} AV$.
  3. $B_{in} =B_{out}=$ generalized eigenvectors of $A$. Then $B^{- 1} AB$= Jordan form $J$.
    The Jordan form is illustrated below:

$$
\textbf { Jordan matrix } \quad J=\left[\begin{array}{llll}
2 & & & \\
& 2 & & \\
& & {\left[\begin{array}{ll}
3 & 1 \\
0 & 3
\end{array}\right]}
\end{array}\right] \begin{array}{l}
\text { Two } 1 \text { by } 1 \text { blocks } \\
\text { One } 2 \text { by } 2 \text { block } \\
\text { Three eigenvectors } \\
\text { Eigenvalues } 2,2,3,3
\end{array}
$$

A repeated eigenvalue can produce an off-diagonal block, with a $1$ directly above the repeated eigenvalue on the diagonal.
The equation $(J-3I)\boldsymbol{x}_4 = \boldsymbol{x}_3$ connects the two repeated eigenvalues $3$: it links the generalized eigenvector $\boldsymbol{x}_4$ to the true eigenvector $\boldsymbol{x}_3$.

Derivation:
From $B\boldsymbol{x}_3 = \boldsymbol{b}_3$ and $B\boldsymbol{x}_4 = \boldsymbol{b}_4$:

$$ \left(B J B^{-1}-3 I\right) \boldsymbol{b}_{4}=B J \boldsymbol{x}_{4}-3 B \boldsymbol{x}_{4}=B(J-3 I) \boldsymbol{x}_{4}=B \boldsymbol{x}_{3}=\boldsymbol{b}_{3} $$

Definition:

Jordan form If $A$ has $ s $ independent eigenvectors, it is similar to a matrix $ J $ that has $ s $ Jordan blocks $ J_{1} \ldots, J_{s} $ on its diagonal. Some matrix $B$ puts $ A $ into Jordan form:

$$
\textbf { Jordan form } \quad B^{-1} A B=\left[\begin{array}{ccc}J_{1} & & \\ & \ddots & \\ & & J_{s}\end{array}\right]=J .
$$

Each block $ J_{i} $ has one eigenvalue $ \lambda_{i} $, one eigenvector, and $1$’s just above the diagonal:

$$
\textbf { Jordan block } \quad J_{i}=\left[\begin{array}{ccc}\lambda_{i} & 1 & \\ & \cdot & \cdot \\ & \cdot & 1 \\ & & \lambda_{i}\end{array}\right] .
$$

Matrices are similar if and only if they share the same Jordan form $J$.

Application:
The Jordan form solves the differential equation $d \boldsymbol{u} / d t = A \boldsymbol{u}$ for any square matrix $A = BJB^{-1}$. The solution $e^{A t} \boldsymbol{u}(0)$ becomes $\boldsymbol{u}(t)=B e^{J t} B^{-1} \boldsymbol{u}(0)$. $J$ is triangular and its matrix exponential $e^{J t}$ involves $e^{\lambda t}$ times powers $1, t, \ldots, t^{s-1}$.

Limitation: in actual computations the Jordan form is not at all popular, because its calculation is not stable. A slight change in $A$ will separate the repeated eigenvalues and remove the off-diagonal $1$’s, switching the Jordan form to a diagonal $\Lambda$.
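For exact symbolic computation the Jordan form is still practical, e.g. via SymPy's `jordan_form`. A sketch with a small made-up defective matrix (repeated eigenvalue, one eigenvector):

```python
from sympy import Matrix

# Defective matrix: eigenvalue 4 is repeated but has only one eigenvector
A = Matrix([[3, 1],
            [-1, 5]])
B, J = A.jordan_form()        # returns B and J with A = B * J * B^{-1}
print(J)                      # Matrix([[4, 1], [0, 4]]) : one 2-by-2 Jordan block
print(B * J * B.inv() == A)   # True, exactly (symbolic arithmetic)
```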

  4. $B_{in} =B_{out}=$ Fourier matrix $F$. Then $F\boldsymbol{x}$ is a Discrete Fourier Transform of $\boldsymbol{x}$.

For example, here is the $4 \times 4$ cyclic permutation (row-shift) matrix:

$$
\text { If } \lambda^{4}=1 \text { then } P x=\left[\begin{array}{llll}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{array}\right]\left[\begin{array}{l}
1 \\
\lambda \\
\lambda^{2} \\
\lambda^{3}
\end{array}\right]=\lambda\left[\begin{array}{l}
1 \\
\lambda \\
\lambda^{2} \\
\lambda^{3}
\end{array}\right]=\lambda \boldsymbol{x}
$$

The eigenvector matrix $F$ diagonalizes the permutation matrix $P$:

$$
\begin{array}{l}
\text { Eigenvalue } \\
\text { matrix } \Lambda
\end{array}\left[\begin{array}{llll}
1 & & & \\
& i & & \\
& & -1 & \\
& & & -i
\end{array}\right] \quad \begin{array}{l}
\text { Eigenvector } \\
\text { matrix is } \\
\text { Fourier } \\
\text { matrix } F
\end{array} \quad\left[\begin{array}{llrr}
1 & 1 & 1 & 1 \\
1 & i & -1 & -i \\
1 & i^{2} & 1 & (-i)^{2} \\
1 & i^{3} & -1 & (-i)^{3}
\end{array}\right]
$$

What other matrices beyond $P$ have this same eigenvector matrix $F$?

  1. $P^2$ and $P^3$ and $P^4=I$.
  2. Any combination: $C = c_0 I + c_1 P + c_2 P^2 + c_3 P^3$.
$$ \text { Circulant matrix } C=\left[\begin{array}{cccc} c_{0} & c_{1} & c_{2} & c_{3} \\\\ c_{3} & c_{0} & c_{1} & c_{2} \\\\ c_{2} & c_{3} & c_{0} & c_{1} \\\\ c_{1} & c_{2} & c_{3} & c_{0} \end{array}\right] $$
$C$ has eigenvectors in the Fourier matrix $F$, and four eigenvalues $c_{0}+c_{1} \lambda+c_{2} \lambda^{2}+c_{3} \lambda^{3}$ from the four numbers $\lambda=1, i, -1, -i$. The eigenvalue from $\lambda=1$ is $c_{0}+c_{1}+c_{2}+c_{3}$.

Although this example is four-dimensional, $\lambda^4 = 1$ gives $\lambda^5, \lambda^6, \lambda^7, \lambda^8 = \lambda, \lambda^2, \lambda^3, \lambda^4$, so the conclusion extends to higher dimensions.
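A sketch verifying that the columns of $F$ are eigenvectors of any such circulant $C$, built here as $c_0I + c_1P + c_2P^2 + c_3P^3$ with made-up coefficients:

```python
import numpy as np

# Cyclic shift P, and a circulant C = c0*I + c1*P + c2*P^2 + c3*P^3
P = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
c = np.array([4.0, 3.0, 2.0, 1.0])           # made-up c0, c1, c2, c3
C = sum(ck * np.linalg.matrix_power(P, k) for k, ck in enumerate(c))

for lam in [1, 1j, -1, -1j]:                 # the fourth roots of 1
    x = lam ** np.arange(4)                  # Fourier column (1, lam, lam^2, lam^3)
    eig = c @ lam ** np.arange(4)            # c0 + c1*lam + c2*lam^2 + c3*lam^3
    print(np.allclose(C @ x, eig * x))       # True each time
```

The same eigenvalues come from one FFT of the coefficient vector, which is why circulants and the FFT belong together.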

A key feature of a circulant matrix is its constant diagonals (each diagonal repeats one value, such as $c_0$); they correspond to constant coefficients in differential equations.

The point is that equations with constant coefficients have simple solutions like $e^{\lambda t}$. You discover $\lambda$ by substituting $e^{\lambda t}$ into the differential equation. That number $\lambda$ is like an eigenvalue. For $u = \cos t$ and $u = \sin t$ the number is $\lambda = i$. Euler’s great formula $e^{it} = \cos t + i \sin t$ introduces complex numbers.

Bases for Function Space

When asked for a basis of function space, the first thought is probably the polynomials $1, x, x^2, \ldots$. But this is a poor basis: the powers $x^n$ are just barely independent. $x^{10}$ is almost a combination of the other basis vectors $1, x, \ldots, x^9$.

To study orthogonality of a basis we still use the same tool: does $B^T B$ equal $I$? For $1, x, x^2, \ldots$ this produces the evil Hilbert matrix.
Because functions are continuous, the inner product of vectors
$\boldsymbol{x}^T \boldsymbol{y} = x_1 y_1 + \cdots + x_n y_n$
turns into an integral:

$$
\begin{aligned}
\text { Inner product }(\boldsymbol{f}, \boldsymbol{g}) &=\int f(x) g(x) d x \\
\text { Complex inner product }(\boldsymbol{f}, \boldsymbol{g}) &=\int \overline{f(x)} g(x) d x, \bar{f}=\text { complex conjugate } \\
\text { Weighted inner product }(\boldsymbol{f}, \boldsymbol{g})_{\boldsymbol{w}} &=\int \boldsymbol{w}(x) \overline{f(x)} g(x) d x, \boldsymbol{w}=\text { weight function }
\end{aligned}
$$

Choosing the symmetric interval $[-1, 1]$ makes odd powers orthogonal to even powers:

$$
\text { Interval }[-1,1] \quad \int_{-1}^{1} x^{2} x^{5} d x=0 \quad \int_{-1}^{1} \operatorname{even}(x) \operatorname{odd}(x) d x=0
$$
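The even-times-odd integral can be confirmed symbolically (a sketch using SymPy):

```python
from sympy import symbols, integrate

x = symbols('x')
# even(x) times odd(x) integrates to zero on [-1, 1]
print(integrate(x**2 * x**5, (x, -1, 1)))   # 0
# even times even does not
print(integrate(x**2 * x**4, (x, -1, 1)))   # 2/7
```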

Orthogonal Bases for Function Space

  1. The Fourier basis, $\sin x, \cos x, \sin 2 x, \cos 2 x, \ldots$
  2. The Legendre basis, $1, x, x^2 - \frac{1}{3}, x^3 - \frac{3}{5} x, \ldots$
  3. The Chebyshev basis, $1, x, 2 x^{2}-1, 4 x^{3}-3 x, \ldots$

Fourier basis

This basis is especially good for functions $f (x)$ that are themselves periodic : $f (x + 2\pi) = f (x)$.

The Fourier transform connects $f (x)$ to the coefficients $a_k$ and $b_k$ in its Fourier series:

$$
\text { Fourier series } f(x)=a_{0}+b_{1} \sin x+a_{1} \cos x+b_{2} \sin 2 x+a_{2} \cos 2 x+\cdots
$$

系数计算过程:

$$
\text { Fourier coefficient } \quad a_{3}=\frac{(f(x), \cos 3 x)}{(\cos 3 x, \cos 3 x)}=\frac{\int f(x) \cos 3 x d x}{\int \cos 3 x \cos 3 x d x}
$$

just like the formula $\boldsymbol{b}^T \boldsymbol{a} / \boldsymbol{a}^T \boldsymbol{a}$ for projecting a vector $\boldsymbol{b}$ onto the line through $\boldsymbol{a}$.

Fourier series is just linear algebra in function space.

Legendre polynomials

An application of the Gram-Schmidt process: orthogonalize the non-orthogonal $1, x, x^2, \ldots$:

$$
\frac{\left(x^{2}, 1\right)}{(1,1)}=\frac{\int x^{2} d x}{\int 1 d x}=\frac{2 / 3}{2}=\frac{1}{3} \quad \text { Gram-Schmidt gives } x^{2}-\frac{1}{3}=\text { Legendre }
$$

$$
\frac{\left(x^{3}, x\right)}{(x, x)}=\frac{\int x^{4} d x}{\int x^{2} d x}=\frac{2 / 5}{2 / 3}=\frac{3}{5} \quad \text { Gram-Schmidt gives } x^3 - \frac{3}{5} x = \text { Legendre }
$$
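The two Gram-Schmidt steps above, done symbolically with SymPy (`inner` is my own helper for the $[-1,1]$ inner product):

```python
from sympy import symbols, integrate

x = symbols('x')

def inner(f, g):
    """Inner product (f, g) on the interval [-1, 1]."""
    return integrate(f * g, (x, -1, 1))

p2 = x**2 - inner(x**2, 1) / inner(1, 1)        # subtract projection onto 1
p3 = x**3 - inner(x**3, x) / inner(x, x) * x    # subtract projection onto x
print(p2)                           # x**2 - 1/3
print(p3)                           # x**3 - 3*x/5
print(inner(p2, 1), inner(p3, x))   # 0 0 : orthogonal, as claimed
```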

Chebyshev polynomials

Start from $1, \cos \theta, \cos 2\theta, \cos 3\theta, \ldots$ and substitute $x = \cos \theta$:

$$
\begin{array}{l}
\text { Chebyshev } \quad 2 x^{\mathbf{2}} -1 = 2(\cos \theta)^{2}-1=\cos 2 \theta\\
\text { to Fourier } \quad 4 x^{3}- 3 x = 4(\cos \theta)^{3}-3(\cos \theta)=\cos 3 \theta
\end{array}
$$

The $n$th-degree Chebyshev polynomial $T_n(x)$ converts to Fourier’s $\cos n\theta$: $\cos n\theta = T_n(\cos \theta)$.
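This identity is easy to spot-check with numpy's built-in Chebyshev class (a sketch evaluating $T_n(\cos\theta)$ against $\cos n\theta$ at sample angles):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

theta = np.linspace(0.0, np.pi, 7)
x = np.cos(theta)
for n in range(5):
    T_n = cheb.Chebyshev.basis(n)               # Chebyshev polynomial T_n(x)
    print(np.allclose(T_n(x), np.cos(n * theta)))   # True for every n
```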

Application:
These polynomials are the basis for a big software project called “chebfun”. Every function $f(x)$ is replaced by a super-accurate Chebyshev approximation. Then you can integrate $f(x)$, solve $f(x) = 0$, and find its maximum or minimum. More than that, you can solve differential equations involving $f(x)$, fast and to high accuracy.

https://www.chuxin911.com/Introdunction_to_Linear_Algebra_Chapter8_20220220/

Author: cx
Published: 2022-02-20
Updated: 2022-12-02