Linear Algebra Lecture 10
2018-02-18 02:13
1. Four Fundamental Subspaces
The four subspaces:
Column space C(A)
Null space N(A)
Row space = all combinations of the rows of A = all combinations of the columns of A^T = C(A^T)
Null space of A^T = the left null space of A = N(A^T)
When A is m×n:
C(A) is in R^m
N(A) is in R^n
C(A^T) is in R^n
N(A^T) is in R^m
Basis of A transpose
A row-reduces to R:

A = ⎡1 2 3 1⎤        R = ⎡1 0 1 1⎤
    ⎢1 1 2 1⎥   →        ⎢0 1 1 0⎥  =  [I F; 0 0]
    ⎣1 2 3 1⎦            ⎣0 0 0 0⎦
The column space changes after we do row reduction: the column space of R is not the column space of A, C(R) ≠ C(A); they are different column spaces.
The row space of A and the row space of R are all combinations of the same rows, so a basis for the row space of R is also a basis for the row space of the original A.
For the row space of A (or of R), a basis is the first r (= rank) rows of R. It's the best basis: just as the columns of the identity matrix are the best basis for R^n, the rows of R are the best basis for the row space, best in the sense of being as clean as I can make it.
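As a quick numerical sanity check (a sketch using NumPy, not part of the lecture), we can confirm that the first r rows of R lie in the row space of A: stacking them onto A does not increase the rank.

```python
import numpy as np

# The lecture's matrix A and its reduced row echelon form R
A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
R = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 0]])

r = np.linalg.matrix_rank(A)       # rank r = 2
basis = R[:r]                      # first r rows of R

# Stacking the candidate basis onto A leaves the rank unchanged,
# so these rows lie in (and, having r of them, span) the row space of A.
print(r)                                             # 2
print(np.linalg.matrix_rank(np.vstack([A, basis])))  # 2
```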
Null space of A transpose
N(A^T) has in it vectors, call them y: if A^T y = 0, then y is in the null space of A transpose.
Take the transpose on both sides:
A^T y = 0 → y^T A = 0^T. Now I have a row vector, y transpose, multiplying A from the left; that's why it is called the left null space.
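For this particular A, row 3 equals row 1, so y = (−1, 0, 1) is in the left null space. A small check (assuming NumPy, not part of the lecture) shows both equivalent forms of the condition:

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])

# Row 3 equals row 1, so this combination of rows gives the zero row:
y = np.array([-1, 0, 1])

print(y @ A)     # [0 0 0 0]  ->  y^T A = 0^T (row vector from the left)
print(A.T @ y)   # [0 0 0 0]  ->  equivalently A^T y = 0
```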
Basis of left null space
Simplifying A to R should have revealed the left null space too. Going from A to R took some steps, and I'm interested in what those steps were.
Use Gauss-Jordan, where you tack the identity matrix onto A: [A_{m×n} I_{m×m}].
Then take the reduced row echelon form of this long matrix: rref([A_{m×n} I_{m×m}]) → [R_{m×n} E_{m×m}].
E is just going to contain a record of what we did: whatever steps it took to get A to become R, we applied at the same time to the identity matrix.
So we started with the identity matrix; all this row reduction amounted to multiplying on the left by some matrix, a series of elementary matrices that altogether give one matrix, and that matrix is E.
E [A_{m×n} I_{m×m}] = [R_{m×n} E_{m×m}]
EA = R
When A was square and invertible, EA = I, so E was A^{-1}.
Now A is rectangular; it hasn't got an inverse. So follow Gauss-Jordan to find E:
⎡1 2 3 1 | 1 0 0⎤       ⎡1 0 1 1 | -1  2  0⎤
⎢1 1 2 1 | 0 1 0⎥  →    ⎢0 1 1 0 |  1 -1  0⎥
⎣1 2 3 1 | 0 0 1⎦       ⎣0 0 0 0 | -1  0  1⎦

E = ⎡-1  2  0⎤
    ⎢ 1 -1  0⎥
    ⎣-1  0  1⎦
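We can verify the key identity EA = R with the lecture's matrices (a quick NumPy check, not part of the lecture itself):

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
E = np.array([[-1,  2, 0],
              [ 1, -1, 0],
              [-1,  0, 1]])
R = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 0]])

# E records the row operations, so multiplying A by E on the left gives R.
print(np.array_equal(E @ A, R))   # True: EA = R
```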
The dimension of the left null space is supposed to be m − r.
There is one combination of those three rows that produces the zero row (here m − r = 3 − 2 = 1). If I am looking for the left null space, I am looking for combinations of rows that give the zero row; the last row of E, (−1, 0, 1), records exactly that combination: minus row 1 plus row 3 is the zero row.
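To see that the last m − r rows of E form a basis for the left null space, check that each of them combines the rows of A to the zero row (a sketch using NumPy with the lecture's matrices):

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
E = np.array([[-1,  2, 0],
              [ 1, -1, 0],
              [-1,  0, 1]])

m, r = 3, 2
left_null_basis = E[r:]        # last m - r = 1 row of E

# This row, applied to the rows of A, produces the zero row,
# so it lies in N(A^T); since dim N(A^T) = m - r = 1, it is a basis.
print(left_null_basis @ A)     # [[0 0 0 0]]
```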
Basis and dimension of the four subspaces
| Four subspaces | C(A) | N(A) | C(A^T) | N(A^T) |
|---|---|---|---|---|
| Basis | pivot columns | special solutions | first r rows of R | last m−r rows of E |
| Dimension | r | n−r | r | m−r |
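The dimension counts in the table can be confirmed for the lecture's matrix (a NumPy sketch; only the rank is computed, the rest follows from the formulas):

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
m, n = A.shape                  # m = 3, n = 4
r = np.linalg.matrix_rank(A)    # r = 2

print(r)        # dim C(A)   = r     = 2
print(n - r)    # dim N(A)   = n - r = 2
print(r)        # dim C(A^T) = r     = 2
print(m - r)    # dim N(A^T) = m - r = 1
```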