% vim: set ts=2 sw=2 et tw=80:
\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=2cm]{geometry}
\usepackage{amstext}
\usepackage{amsmath}
\usepackage{array}

\newcommand{\lra}{\Leftrightarrow}
\newcolumntype{L}{>{$}l<{$}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\title{Lecture notes 6 -- Introduction to Computational Science}
\author{Michael Multerer \\ Copied by: Claudio Maggioni}
\begin { document}
\maketitle
\section * { 2.5: Partial pivoting}
Obviously, Algorithm 2.8 fails if one of the pivots becomes zero. In this case, we need to choose a different pivot element.
\paragraph{Simple approach:} Column pivoting: choose the pivot $a^{(i)}_{k,i}$ such that $|a^{(i)}_{k,i}| = \max_{i \leq l \leq n} |a^{(i)}_{l,i}|$.
In order to move the pivot element and the corresponding row into place, we switch rows $k$ and $i$ by a \textit{permutation matrix}:
\[
\bar{P}_i := \begin{bmatrix}
1 \\
& \ddots \\
& & 1 \\
& & & 0 & & & 1 \\
& & & & 1 \\
& & & & & \ddots \\
& & & 1 & & & 0 \\
& & & & & & & \ddots \\
& & & & & & & & 1
\end{bmatrix},
\]
where the zero diagonal entries appear in rows $i$ and $k$ and the off-diagonal ones at positions $(i,k)$ and $(k,i)$.
The following rules apply:
\begin { enumerate}
\item Multiplication by $\bar{P}_i$ from the left $\Rightarrow$ interchange rows $i$ and $k$
\item Multiplication by $\bar{P}_i$ from the right $\Rightarrow$ interchange columns $i$ and $k$
\item $(\bar{P}_i)^2 = \bar{P}_i \cdot \bar{P}_i = \bar{I}$
\end { enumerate}
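For instance (a small example for illustration), for $n = 3$, $i = 1$ and $k = 3$, left multiplication by $\bar{P}_1$ swaps rows 1 and 3, and applying it twice gives back the identity:
\[
\bar{P}_1 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix},
\qquad
\bar{P}_1 \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}
= \begin{bmatrix} 5 & 6 \\ 3 & 4 \\ 1 & 2 \end{bmatrix},
\qquad
\bar{P}_1^2 = \bar{I}.
\]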
Then performing an LU decomposition with pivoting can be
written in matrix notation as
$$ \bar{A}_{i+1} = \bar{L}_i \cdot \bar{P}_i \cdot \bar{A}_i \qquad (*) $$
\paragraph{Note:} For $j < i$, it holds that $\bar{P}_i \bar{L}_j = \widetilde{L}_j \bar{P}_i$, where $\widetilde{L}_j$ is the same matrix as $\bar{L}_j$ except that the entries $\hat{l}^{(j)}_i$ and $\hat{l}^{(j)}_k$ are interchanged:
\[
\bar{L}_j := \begin{bmatrix}
1\\
& \ddots \\
& & 1\\
& & \hat{l}^{(j)}_{j+1} & 1\\
& & \hat{l}^{(j)}_i & & \ddots \\
& & \hat{l}^{(j)}_k & & & \ddots \\
& & \hat{l}^{(j)}_n & & & & 1
\end{bmatrix}
\qquad
\widetilde{L}_j := \begin{bmatrix}
1\\
& \ddots \\
& & 1\\
& & \hat{l}^{(j)}_{j+1} & 1\\
& & \hat{l}^{(j)}_k & & \ddots \\
& & \hat{l}^{(j)}_i & & & \ddots \\
& & \hat{l}^{(j)}_n & & & & 1
\end{bmatrix}
\]
Resolving (*) then yields:
$$ \bar{A}_{i+1} = \bar{L}_i \bar{P}_i \bar{L}_{i-1} \bar{P}_{i-1} \ldots \bar{L}_1 \bar{P}_1 \bar{A} $$
or
$$ \bar { L } _ { n - 1 } \bar { P } _ { n - 1 } \bar { L } _ { n - 2 } \bar { P } _ { n - 2 } \ldots \bar { L } _ 1 \bar { P } _ 1 \bar { A } = \bar { U } $$
Now we can exploit $ \bar{P}_2 \bar{L}_1 \bar{P}_1 = \widetilde{L}_1 \bar{P}_2 \bar{P}_1 $ and so on. This yields:
$$ \bar { P } \bar { A } = \bar { L } \bar { U } $$
with
$$ \bar { P } = \bar { P } _ { n - 1 } \bar { P } _ { n - 2 } \ldots \bar { P } _ 1 $$
and
$$ \bar{L} = \widetilde{L}^{-1}_1 \ldots \widetilde{L}^{-1}_{n-1} $$
and
\begin{align*}
\widetilde{L}_{n-1} &= \bar{L}_{n-1}, \\
\widetilde{L}_{n-2} &= \bar{P}_{n-1} \bar{L}_{n-2} \bar{P}_{n-1}, \\
&\ \ \vdots \\
\widetilde{L}_1 &= \bar{P}_{n-1} \bar{P}_{n-2} \ldots \bar{P}_2 \bar{L}_1 \bar{P}_2 \ldots \bar{P}_{n-2} \bar{P}_{n-1}.
\end{align*}
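As a quick numerical sanity check of $\bar{P}\bar{A} = \bar{L}\bar{U}$, one can compare against a library routine. The following is a minimal sketch using \texttt{scipy.linalg.lu}; SciPy's routine factors $\bar{A} = \bar{P}_s \bar{L} \bar{U}$, so its permutation matrix is the transpose of the $\bar{P}$ used here:
\begin{verbatim}
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))    # generic (almost surely non-singular) test matrix

P_scipy, L, U = lu(A)              # SciPy convention: A = P_scipy @ L @ U
P = P_scipy.T                      # hence P @ A = L @ U as in the notes

print(np.allclose(P @ A, L @ U))   # True (up to rounding)
print(np.allclose(np.tril(L), L))  # L is unit lower triangular
print(np.allclose(np.triu(U), U))  # U is upper triangular
\end{verbatim}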
\paragraph { Note:} If $ \bar { A } \in R ^ { n \times n } $ is non-singular, the pivoted LU decomposition $ \bar { P } \bar { A } = \bar { L } \bar { U } $ always exists.
\\ \\
We can easily add column pivoting to Algorithm 2.8:
\paragraph { Algorithm 2.9} (Outer product LU decomposition with column pivoting) \\
input: matrix $ \bar { A } = [ a _ { i,j } ] ^ n _ { i,j = 1 } \in R ^ { n \times n } $ \\
output: pivoted LU decomposition $ \bar { L } \bar { U } = \bar { P } \bar { A } $
\begin { enumerate}
\item Set $ \bar { A } _ 1 = \bar { A } , \bar { p } = [ 1 , 2 , \ldots ,n ] $
\item For $ i = 1 , 2 , \ldots ,n $ \begin { itemize}
\item compute: $k = \argmax_{i \leq j \leq n} |a^{(i)}_{p_j,i}|$ \quad \% find pivot
\item swap: $p_i \leftrightarrow p_k$
\item $ \bar { l } _ i : = \bar { a } ^ { ( i ) } _ { :,i } / a ^ { ( i ) } _ { p _ i,i } $
\item $ \bar { u } _ i : = a ^ { ( i ) } _ { p _ i,: } $
\item compute: $ \bar { A } _ { i + 1 } = \bar { A } _ i - \bar { l } _ i \cdot \bar { u } _ i $
\end { itemize}
\item set $ \bar { P } : = [ \bar { e } _ { p _ 1 } , \bar { e } _ { p _ 2 } , \ldots , \bar { e } _ { p _ n } ] ^ T $ \% $ \bar { e } _ i $ is i-th unit vector
\item set $ \bar { L } = \bar { P } [ \bar { l } _ 1 , \bar { l } _ 2 , \cdots , \bar { l } _ n ] $
\end { enumerate}
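A direct transcription of Algorithm 2.9 into Python/NumPy could look as follows (a minimal sketch with 0-based indexing; the function and variable names are chosen here and are not part of the lecture):
\begin{verbatim}
import numpy as np

def lu_column_pivoting(A):
    """Outer product LU decomposition with column pivoting.

    Returns P, L, U with L @ U == P @ A (up to rounding)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Ai = A.copy()                 # running matrix A_i
    p = np.arange(n)              # permutation vector p
    Lcols = np.zeros((n, n))      # columns l_i (in original row order)
    U = np.zeros((n, n))          # rows u_i

    for i in range(n):
        # find pivot among the not yet eliminated rows p[i:]
        k = i + np.argmax(np.abs(Ai[p[i:], i]))
        p[i], p[k] = p[k], p[i]                    # swap p_i <-> p_k
        Lcols[:, i] = Ai[:, i] / Ai[p[i], i]       # l_i = a_{:,i} / a_{p_i,i}
        U[i, :] = Ai[p[i], :]                      # u_i = a_{p_i,:}
        Ai = Ai - np.outer(Lcols[:, i], U[i, :])   # A_{i+1} = A_i - l_i u_i

    P = np.eye(n)[p, :]           # P = [e_{p_1}, ..., e_{p_n}]^T
    L = P @ Lcols                 # reorder rows: L is unit lower triangular
    return P, L, U

# small test with a zero in the (1,1) position, so pivoting is required
A = np.array([[0., 2., 1.], [1., 1., 1.], [2., 1., 0.]])
P, L, U = lu_column_pivoting(A)
print(np.allclose(L @ U, P @ A))  # True
\end{verbatim}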
\paragraph { Example 2.10} \textit { (omitted)}
\section * { 2.6: Cholesky decomposition}
If $\bar{A}$ is symmetric and positive definite, i.e., all eigenvalues of $\bar{A}$ are greater than zero or, equivalently, $\bar{x}^T \bar{A} \bar{x} > 0$ for all $\bar{x} \neq \bar{0}$, we can compute a symmetric decomposition of $\bar{A}$.
\paragraph{Note:} If $\bar{A}$ is symmetric and positive definite, then the \textit{Schur complement} $\bar{S} := \bar{A}_{2:n,2:n} - (\bar{a}_{2:n,1}/a_{1,1})\,\bar{a}^T_{2:n,1}$ is symmetric and positive definite as well. In particular, it holds that $s_{i,i} > 0$ and $a_{i,i} > 0$!
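One way to see this is the block factorization
\[
\bar{A} = \begin{bmatrix} a_{1,1} & \bar{a}^T_{2:n,1} \\ \bar{a}_{2:n,1} & \bar{A}_{2:n,2:n} \end{bmatrix}
= \begin{bmatrix} 1 & \bar{0}^T \\ \bar{a}_{2:n,1}/a_{1,1} & \bar{I} \end{bmatrix}
\begin{bmatrix} a_{1,1} & \bar{0}^T \\ \bar{0} & \bar{S} \end{bmatrix}
\begin{bmatrix} 1 & \bar{a}^T_{2:n,1}/a_{1,1} \\ \bar{0} & \bar{I} \end{bmatrix}.
\]
Since the outer factors are invertible and transposes of each other, $\bar{x}^T \bar{A} \bar{x} > 0$ for all $\bar{x} \neq \bar{0}$ is equivalent to positive definiteness of the block diagonal middle factor, i.e.\ $a_{1,1} > 0$ and $\bar{S}$ positive definite.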
\paragraph { Definition 2.11} A decomposition $ \bar { A } = \bar { L } \bar { L } ^ T $ with a lower triangular matrix $ \bar { L } $ with positive diagonal elements is called \textit { Cholesky decomposition of $ \bar { A } $ } .
\paragraph{Note:} A Cholesky decomposition exists if $\bar{A}$ is symmetric and positive definite.
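In practice this can also be used as a test for positive definiteness: a library routine such as NumPy's \texttt{numpy.linalg.cholesky} simply fails when the input is not positive definite. A minimal sketch:
\begin{verbatim}
import numpy as np

B = np.array([[4., 2., 0.], [2., 5., 3.], [0., 3., 6.]])  # SPD example
L = np.linalg.cholesky(B)            # lower triangular, L @ L.T == B

try:
    np.linalg.cholesky(-B)           # -B is negative definite
except np.linalg.LinAlgError:
    print("not positive definite")
\end{verbatim}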
\paragraph{Algorithm 2.12} (Outer product Cholesky decomposition) \\
input: matrix $ \bar { A } $ symmetric and positive definite \\
output: Cholesky decomposition $ \bar { A } = \bar { L } \bar { L } ^ T =
[\bar { l} _ 1,\bar { l} _ 2,\ldots ,\bar { l} _ n] [\bar { l} _ 1,\bar { l} _ 2,\ldots ,\bar { l} _ n]^ T$
\begin { enumerate}
\item set: $ \bar { A } _ 1 : = \bar { A } $
\item for $ i = 1 , 2 , \ldots ,n $ \begin { itemize}
\item set: $ \bar { l } _ i : = a ^ { ( i ) } _ { :,i } / \sqrt { a ^ { ( i ) } _ { i,i } } $
\item set: $ \bar { A } _ { i + 1 } : = \bar { A } _ { i } - \bar { l } _ i \bar { l } ^ T _ i $
\end { itemize}
\item set: $ \bar { L } = [ \bar { l } _ 1 , \bar { l } _ 2 , \ldots , \bar { l } _ n ] $
\end { enumerate}
The computational cost is $ \frac { 1 } { 6 } n ^ { 3 } + O ( n ^ { 2 } ) $ and thus only half the cost of LU decomposition.
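A minimal NumPy sketch of Algorithm 2.12 (again with 0-based indexing and freely chosen names) reads:
\begin{verbatim}
import numpy as np

def cholesky_outer(A):
    """Outer product Cholesky decomposition.

    Returns lower triangular L with L @ L.T == A for a symmetric
    positive definite A (up to rounding)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Ai = A.copy()                                # running matrix A_i
    L = np.zeros((n, n))
    for i in range(n):
        L[:, i] = Ai[:, i] / np.sqrt(Ai[i, i])   # l_i = a_{:,i} / sqrt(a_{i,i})
        Ai = Ai - np.outer(L[:, i], L[:, i])     # A_{i+1} = A_i - l_i l_i^T
    return L

# small symmetric positive definite test matrix
B = np.array([[4., 2., 0.], [2., 5., 3.], [0., 3., 6.]])
L = cholesky_outer(B)
print(np.allclose(L @ L.T, B))  # True
\end{verbatim}
No pivoting is needed here, since the note above guarantees that the diagonal entries $a^{(i)}_{i,i}$ stay positive.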
\paragraph { Example 2.13} \textit { (omitted)}
\end { document}