<h2>Matrix inverses, ranks, and determinants</h2><P>
<a name="096c_1a70">We define the <I><B>inverse</I></B> of an <I>n <img src="../images/mult.gif"> n</I> matrix <I>A</I> to be the <I>n <img src="../images/mult.gif"> n</I> matrix, denoted <I>A</I><SUP>-1</SUP> (if it exists), such that <I>AA</I><SUP>-1</SUP> = <I>I<SUB>n</SUB> </I>= <I>A</I><SUP>-1</SUP> <I>A</I>. For example,<P>
<img src="735_a.gif"><P>
<a name="096c_1a71"><a name="096c_1a72"><a name="096c_1a73">Many nonzero <I>n <img src="../images/mult.gif"> n </I>matrices do not have inverses. A matrix without an inverse is is called <I><B>noninvertible</I></B>, or <I><B>singular</I></B><I>.</I> An example of a nonzero singular matrix is<P>
<img src="735_b.gif"><P>
<a name="096c_1a74">If a matrix has an inverse, it is called <I><B>invertible</I></B>, or <I><B>nonsingular</I></B>. Matrix inverses, when they exist, are unique. (See Exercise 31.1-4.) If <I>A </I>and <I>B </I>are nonsingular <I>n <img src="../images/mult.gif"> n</I> matrices, then<P>
<pre>(<I>BA</I>)<SUP>-1</SUP> = <I>A</I><SUP>-1</SUP><I>B</I><SUP>-1</SUP>.</pre><P>
<h4><a name="096c_1a83">(31.6)<a name="096c_1a83"></sub></sup></h4><P>
The inverse operation commutes with the transpose operation:<P>
<pre>(<I>A</I><SUP>-1</SUP>)<SUP>T</SUP> = (<I>A</I><SUP>T</SUP>)<SUP>-1 </SUP>.</sub></sup></pre><P>
<a name="096c_1a75"><a name="096c_1a76"><a name="096c_1a77">The vectors <I>x</I><SUB>1</SUB>, <I>x</I><SUB>2</SUB>, . . . , <I>x<SUB>n</I></SUB> are <I><B>linearly dependent</I></B> if there exist coefficients <I>c</I><SUB>1</SUB>,<SUB> </SUB><I>c</I><SUB>2</SUB>, . . . , <I>cn,</I> not all of which are zero, such that <I>c</I><SUB>1</SUB><I>x</I><SUB>1 </SUB>+ <I>c</I><SUB>2</SUB><I>x</I><SUB>2</SUB> + . . . + <I>c<SUB>n</SUB>x<SUB>n</I></SUB> = 0. For example, the vectors <I>x</I><SUB>1</SUB> = ( 1 2 3 )<SUP>T</SUP>, <I>x</I><SUB>2</SUB> = ( 2 6 4 )<SUP>T</SUP>, and <I>x</I><SUB>3</SUB> = ( 4 11 9 )<SUP>T</SUP> are linearly dependent, since 2<I>x</I><SUB>1</SUB> + 3<I>x</I><SUB>2</SUB> - 2<I>x</I><SUB>3</SUB> = 0. If vectors are not linearly dependent, they are <I><B>linearly independent.</I></B> For example, the columns of an identity matrix are linearly independent.<P>
<a name="096c_1a78"><a name="096c_1a79"><a name="096c_1a7a"><a name="096c_1a7b"><a name="096c_1a7c">The<I><B> column rank </I></B>of a nonzero <I>m <img src="../images/mult.gif"> n</I> matrix <I>A</I> is the size of the largest set of linearly independent columns of <I>A</I>. Similarly, the <I><B>row rank </I></B>of <I>A</I> is the size of the largest set of linearly independent rows of <I>A</I>. A fundamental property of any matrix <I>A</I> is that its row rank always equals its column rank, so that we can simply refer to the<I><B> rank </I></B>of <I>A</I>. The rank of an <I>m</I> <img src="../images/mult.gif"> <I>n </I>matrix is an integer between 0 and min(<I>m</I>, <I>n</I>), inclusive. (The rank of a zero matrix is 0, and the rank of an <I>n</I> <img src="../images/mult.gif"> <I>n</I> identity matrix is <I>n</I>.) An alternate, but equivalent and often more useful, definition is that the rank of a nonzero <I>m</I> <img src="../images/mult.gif"> <I>n</I> matrix <I>A</I> is the smallest number <I>r</I> such that there exist matrices <I>B</I> and <I>C</I> of respective sizes <I>m </I><img src="../images/mult.gif"> <I>r</I> and <I>r</I> <img src="../images/mult.gif"> <I>n</I> such that<P>
<pre><I>A</I> = <I>BC</I> .</pre><P>
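The alternate definition is easy to see in action. In the NumPy sketch below (an illustration with made-up matrices, not from the text), a 3 × 3 matrix built as the product of a 3 × 2 matrix and a 2 × 3 matrix has rank exactly 2:<P>
<pre>
import numpy as np

# A = BC with B of size 3x2 and C of size 2x3, so rank(A) is at most 2.
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])
C = np.array([[1, 2, 3],
              [4, 5, 6]])
A = B @ C

print(np.linalg.matrix_rank(A))  # 2: no factorization with a smaller r exists
</pre><P>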
<a name="096c_1a7d"><a name="096c_1a7e">A square <I>n </I><img src="../images/mult.gif"> <I>n</I> matrix has <I><B>full rank </I></B>if its rank is <I>n</I>. A fundamental property of ranks is given by the following theorem.<P>
<a name="096c_1a84">Theorem 31.1<a name="096c_1a84"><P>
A square matrix has full rank if and only if it is nonsingular. <P>
An <I>m</I> × <I>n</I> matrix has <I><B>full column rank</B></I> if its rank is <I>n</I>.<P>
<a name="096c_1a7f">A<I><B> null vector </I></B>for a matrix <I>A</I> is a nonzero vector <I>x</I> such that <I>Ax</I> = 0. The following theorem, whose proof is left as Exercise 31.1-8, and its corollary relate the notions of column rank and singularity to null vectors.<P>
<a name="096c_1a85">Theorem 31.2<a name="096c_1a85"><P>
A matrix <I>A</I> has full column rank if and only if it does not have a null vector. <P>
<a name="096c_1a86">Corollary 31.3<a name="096c_1a86"><P>
A square matrix <I>A</I> is singular if and only if it has a null vector. <P>
<a name="096c_1a80"><a name="096c_1a81">The <I>ij</I>th<I><B> minor</I></B> of an <I>n</I> <img src="../images/mult.gif"> <I>n</I> matrix <I>A</I>, for <I>n</I> > 1, is the (<I>n</I> - 1) <img src="../images/mult.gif"> (<I>n</I> - 1) matrix A<SUB>[<I>ij</I>] </SUB>obtained by deleting the <I>i</I>th row and <I>j</I>th column of <I>A</I>. The <I><B>determinant</I></B> of an <I>n</I> <img src="../images/mult.gif"> <I>n</I> matrix<I> A</I> can be defined recursively in terms of its minors by<P>
<img src="736_a.gif"><P>
<h4><a name="096c_1a87">(31.7)<a name="096c_1a87"></sub></sup></h4><P>
<a name="096c_1a82">The term (-1)<I><SUP>i+ j </I></SUP>det(<I>A</I><SUB>[<I>ij</I>]</SUB>) is known as the <I><B>cofactor </I></B>of the element <I>a<SUB>ij</I></SUB>.<P>
The following theorems, whose proofs are omitted here, express fundamental properties of the determinant.<P>
<a name="096c_1a88">Theorem 31.4<a name="096c_1a88"><P>
The determinant of a square matrix <I>A</I> has the following properties:<P>
<FONT FACE="Courier New" SIZE=2><img src="../images/dot12.gif"> </FONT>If any row or any column of <I>A</I> is zero, then det (<I>A</I>) = 0.<P>
<FONT FACE="Courier New" SIZE=2><img src="../images/dot12.gif"> </FONT>The determinant of <I>A</I> is multiplied by <img src="../images/lambdauc.gif"> if the entries of any one row (or any one column) of <I>A</I> are all multiplied by <img src="../images/lambdauc.gif">.<P>
<FONT FACE="Courier New" SIZE=2><img src="../images/dot12.gif"> </FONT>The determinant of <I>A</I> is unchanged if the entries in one row (respectively, column) are added to those in another row (respectively, column).<P>
<FONT FACE="Courier New" SIZE=2><img src="../images/dot12.gif"> </FONT>The determinant of <I>A</I> equals the determinant of <I>A</I><SUP>T</SUP>.<P>
<FONT FACE="Courier New" SIZE=2><img src="../images/dot12.gif"> </FONT>The determinant of <I>A</I> is multiplied by - 1 if any two rows (respectively, columns) are exchanged.<P>
Also, for any square matrices <I>A</I> and <I>B</I>, we have det (<I>AB</I>) = det(<I>A</I>) det(<I>B</I>). <P>
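Several of these properties are easy to spot-check numerically (a NumPy illustration, not part of the text):<P>
<pre>
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Exchanging two rows multiplies the determinant by -1.
assert np.isclose(np.linalg.det(A[[1, 0, 2], :]), -np.linalg.det(A))

# det(A) = det(A^T)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
</pre><P>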
<a name="096c_1a89">Theorem 31.5<a name="096c_1a89"><P>
An <I>n</I> × <I>n</I> matrix <I>A</I> is singular if and only if det(<I>A</I>) = 0. <P>
<P>
<h2>Positive-definite matrices</h2><P>
<a name="096d_1a83">Positive-definite matrices play an important role in many applications. An <I>n</I> <img src="../images/mult.gif"> <I>n </I>matrix <I>A</I> is<I><B> positive-definite</I></B> if<I> x</I><SUP>T</SUP> <I>Ax</I> > 0 for all size-<I>n</I> vectors <I>x</I> <FONT FACE="Times New Roman" SIZE=3><img src="../images/noteq.gif"></FONT> 0. For example, the identity matrix is positive-definite, since for any nonzero vector <I>x</I> = ( <I>x</I><SUB>1</SUB><I> x</I><SUB>2</SUB> . . . <I>x<SUB>n</I></SUB>)<SUP>T</SUP>,<P>
<img src="737_a.gif"><P>
As we shall see, matrices that arise in applications are often positive-definite due to the following theorem.<P>
<a name="096d_1a84">Theorem 31.6<a name="096d_1a84"><P>
For any matrix <I>A</I> with full column rank, the matrix <I>A</I><SUP>T</SUP><I>A</I> is positive-definite.<P>
<I><B>Proof</B></I> We must show that <I>x</I><SUP>T</SUP>(<I>A</I><SUP>T</SUP><I>A</I>)<I>x</I> > 0 for any nonzero vector <I>x</I>. For any vector <I>x</I>,<P>
<pre><I>x</I><SUP>T</SUP>(<I>A</I><SUP>T</SUP><I>A</I>)<I>x</I> = (<I>Ax</I>)<SUP>T</SUP>(<I>Ax</I>)   (by Exercise 31.1-3)
             = ||<I>Ax</I>||<SUP>2</SUP>
             ≥ 0 .</pre><P>
<h4><a name="096d_1a85">(31.8)<a name="096d_1a85"></sub></sup></h4><P>
Note that ||<I>Ax</I>||<SUP>2</SUP> is just the sum of the squares of the elements of the vector <I>Ax</I>. Therefore, if ||<I>Ax</I>||<SUP>2</SUP> = 0, every element of <I>Ax</I> is 0, which is to say <I>Ax</I> = 0. Since <I>A</I> has full column rank, <I>Ax</I> = 0 implies <I>x</I> = 0, by Theorem 31.2. Hence, <I>A</I><SUP>T</SUP> <I>A</I> is positive-definite. <P>
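Theorem 31.6 can likewise be observed numerically (a NumPy sketch with a random matrix, not part of the text): a random 6 × 3 Gaussian matrix has full column rank with probability 1, and every sampled quadratic form <I>x</I><SUP>T</SUP>(<I>A</I><SUP>T</SUP><I>A</I>)<I>x</I> comes out positive:<P>
<pre>
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))   # full column rank with probability 1
M = A.T @ A

for _ in range(1000):
    x = rng.standard_normal(3)
    assert x @ M @ x > 0          # x^T (A^T A) x = ||Ax||^2

np.linalg.cholesky(M)             # succeeds, confirming positive-definiteness
</pre><P>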
Other properties of positive-definite matrices will be explored in Section 31.6.<P>
<P>
<h2><a name="096e_1a88">Exercises<a name="096e_1a88"></h2><P>
<a name="096e_1a89">31.1-1<a name="096e_1a89"><P>
<a name="096e_1a84">Prove that the product of two lower-triangular matrices is lower-triangular. Prove that the determinant of a (lower- or upper-) triangular matrix is equal to the product of its diagonal elements. Prove that the inverse of a lower-triangular matrix, if it exists, is lower-triangular.<P>
<a name="096e_1a8a">31.1-2<a name="096e_1a8a"><P>
<a name="096e_1a85">Prove that if<I> P</I> is an <I>n</I> <img src="../images/mult.gif"> <I>n</I> permutation matrix and <I>A</I> is an <I>n </I><img src="../images/mult.gif"> n matrix, then <I>PA</I> can be obtained from <I>A</I> by permuting its rows, and <I>AP</I> can be obtained from <I>A </I>by permuting its columns. Prove that the product of two permutation matrices is a permutation matrix. Prove that if<I> P</I> is a permutation matrix, then <I>P</I> is invertible, its inverse is <I>P</I><SUP>T</SUP>, and <I>P</I><SUP>T</SUP> is a permutation matrix.<P>
<a name="096e_1a8b">31.1-3<a name="096e_1a8b"><P>
<a name="096e_1a86">Prove that (<I>AB</I>)<SUP>T </SUP>= <I>B</I><SUP>T </SUP><I>A</I><SUP>T </SUP>and that <I>A</I><SUP>T </SUP><I>A</I> is always a symmetric matrix.<P>
<a name="096e_1a8c">31.1-4<a name="096e_1a8c"><P>
Prove that if <I>B</I> and <I>C</I> are inverses of <I>A</I>, then <I>B</I> = <I>C.</I><P>
<a name="096e_1a8d">31.1-5<a name="096e_1a8d"><P>
Let <I>A</I> and <I>B</I> be <I>n</I> × <I>n</I> matrices such that <I>AB</I> = <I>I</I>. Prove that if <I>A</I>' is obtained from <I>A</I> by adding row <I>j</I> into row <I>i</I>, then the inverse <I>B</I>' of <I>A</I>' can be obtained by subtracting column <I>i</I> from column <I>j</I> of <I>B</I>.<P>
<a name="096e_1a8e">31.1-6<a name="096e_1a8e"><P>
Let <I>A</I> be a nonsingular <I>n</I> × <I>n</I> matrix with complex entries. Show that every entry of <I>A</I><SUP>-1</SUP> is real if and only if every entry of <I>A</I> is real.<P>
<a name="096e_1a8f">31.1-7<a name="096e_1a8f"><P>
Show that if <I>A</I> is a nonsingular symmetric matrix, then <I>A</I><SUP>-1</SUP> is symmetric. Show that if <I>B</I> is an arbitrary (compatible) matrix, then <I>BAB</I><SUP>T</SUP> is symmetric.<P>
<a name="096e_1a90">31.1-8<a name="096e_1a90"><P>
Show that a matrix <I>A</I> has full column rank if and only if <I>Ax</I> = 0 implies <I>x</I> = 0. (<I>Hint</I>: Express the linear dependence of one column on the others as a matrix-vector equation.)<P>
<a name="096e_1a91">31.1-9<a name="096e_1a91"><P>
Prove that for any two compatible matrices <I>A</I> and <I>B</I>,<P>
rank(<I>AB</I>) ≤ min(rank(<I>A</I>), rank(<I>B</I>)) ,<P>
where equality holds if either <I>A</I> or<I> B</I> is a nonsingular square matrix. (<I>Hint</I>: Use the alternate definition of the rank of a matrix.)<P>
<a name="096e_1a92">31.1-10<a name="096e_1a92"><P>
<a name="096e_1a87">Given numbers <I>x</I><SUB>0</SUB>, <I>x</I><SUB>1</SUB>, . . . , <I>x<SUB>n</I>-1</SUB>, prove that the determinant of the <I><B>Vandermonde matrix</B></I><P>
<img src="738_a.gif"><P>
is<P>
<img src="738_b.gif"><P>
(<I>Hint:</I> Multiply column <I>i</I> by -<I>x</I><SUB>0</SUB> and add it to column <I>i</I> + 1 for <I>i</I> = <I>n</I> - 1, <I>n</I> - 2, . . . , 1, and then use induction.)<P>
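While the proof is left as the exercise, the formula itself can be spot-checked numerically (a NumPy illustration with arbitrary sample points, not part of the text):<P>
<pre>
import numpy as np
from itertools import combinations
from math import prod

x = [2.0, 3.0, 5.0, 7.0]              # arbitrary sample points
n = len(x)

V = np.vander(x, increasing=True)     # row i is (1, x_i, x_i^2, ..., x_i^(n-1))

lhs = np.linalg.det(V)
rhs = prod(x[k] - x[j] for j, k in combinations(range(n), 2))
assert np.isclose(lhs, rhs)           # product over all pairs with j below k
</pre><P>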
<P>
<P>