When doing Gaussian elimination, a zero-pivot failure can often be dealt with by a row exchange (or a column exchange). A row exchange must swap the whole augmented row, right-hand side included; a column exchange reorders the unknowns, so that reordering has to be tracked to unscramble the solution at the end.
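A minimal sketch of elimination with row exchanges on the augmented matrix (function name and the pivoting-by-largest-entry choice are my own, not from the notes):

```python
import numpy as np

def solve_with_pivoting(A, b):
    """Gaussian elimination with row exchanges, done on the augmented matrix [A | b]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.asarray(b, dtype=float).reshape(-1, 1)])  # augmented matrix
    for k in range(n):
        # row exchange: bring the largest pivot candidate up to row k
        p = k + np.argmax(np.abs(M[k:, k]))
        if M[p, k] == 0:
            raise ValueError("matrix is singular")
        M[[k, p]] = M[[p, k]]              # swap the WHOLE augmented rows
        for i in range(k + 1, n):
            m = M[i, k] / M[k, k]          # multiplier
            M[i, k:] -= m * M[k, k:]       # eliminate below the pivot
    # back substitution on the upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x
```

With A = [[0, 1], [1, 1]] the very first pivot is zero, and the row exchange rescues the elimination.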

The inverse of an elementary matrix is the same matrix with the sign of its multiplier flipped, so a product like (E21)^-1 (E32)^-1 is easy to compute just by keeping track of the multipliers.

The efficiency comes from two facts:
1. inverting an elementary matrix only flips the sign of the multiplier
2. because the inverses multiply in the reverse order, the final product can be written down directly by keeping track of the multipliers

The reason for fact 2: during elimination a row is only ever modified by subtracting multiples of rows above it, and once a row has been fully transformed it is never used to transform other rows. This is what gives the product of the inverses its peculiar property.
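A small numeric check of both facts, using 3×3 elementary matrices of my own choosing:

```python
import numpy as np

def E(n, i, j, m):
    """Elementary matrix that subtracts m * (row j) from row i (0-indexed)."""
    M = np.eye(n)
    M[i, j] = -m
    return M

# fact 1: the inverse is the same matrix with the multiplier's sign flipped
E21 = E(3, 1, 0, 2.0)           # row 2 -= 2 * row 1
E21_inv = E(3, 1, 0, -2.0)      # same transformation, sign reversed
assert np.allclose(E21 @ E21_inv, np.eye(3))

# fact 2: in the reverse order the multipliers drop straight into place
E32_inv = E(3, 2, 1, -3.0)      # inverse of "row 3 -= 3 * row 2"
L = E21_inv @ E32_inv
print(L)                        # multipliers 2 and 3 sit untouched in the lower triangle

# the forward order mixes them: entry (3,1) becomes 2*3 = 6 instead of 0
print(E32_inv @ E21_inv)
```

The reverse-order product is exactly [[1,0,0],[2,1,0],[0,3,1]], while the forward order picks up a spurious 6 in the (3,1) slot.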
While building the L matrix from the multipliers, the elementary matrices appear in the reverse order:

En,n-1 … E2,1 A = U  ⇒  A = (E2,1)^-1 … (En,n-1)^-1 U  ⇒  L = (E2,1)^-1 … (En,n-1)^-1

Consider (En,n-2)^-1 (En,n-1)^-1. Each row of the product is a linear combination of the rows of (En,n-1)^-1, with coefficients taken from the rows of (En,n-2)^-1; every row except the last stays the same. For the last row, the multiplier in (En,n-2)^-1 multiplies row n-2 of (En,n-1)^-1 and the result is added to row n. Row n-2 of (En,n-1)^-1 has no entry in column n-1, so this lands in a different column from the multiplier of (En,n-1)^-1 and the two never combine. Applying this reasoning recursively, the multipliers don't "disturb" each other, and L can be built by writing each multiplier directly into its position (making sure to flip the sign relative to the E's).
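The whole mechanism in one sketch (no pivoting; the 3×3 example matrix is my own): every multiplier m used during elimination is written, sign flipped relative to its E, straight into L.

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization by recording each multiplier directly into L."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    L = np.eye(n)
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]     # multiplier used in E_ik (which carries -m)
            L[i, k] = m               # drops straight into L, no interaction
            U[i, k:] -= m * U[k, k:]  # row i -= m * row k
    return L, U

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)          # the recorded multipliers rebuild A exactly
```

No inverse matrices are ever formed: tracking the multipliers is all it takes.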
