mp5: annotated 4.2 solution

This commit is contained in:
Claudio Maggioni (maggicl) 2020-11-29 23:11:28 +01:00
parent 9deb8a3238
commit 5bbd92b438
3 changed files with 9 additions and 44 deletions

Binary file not shown.


@@ -72,7 +72,6 @@ second derivative we have:
since $A$ is positive definite. Therefore, we can say that the absolute minimum
of $f(x)$ is attained at the solution of $Ax = b$.
\section{Conjugate Gradient [40 points]}
\subsection{Write a function for the conjugate gradient solver
\texttt{[x,rvec]=myCG(A,b,x0,max\_itr,tol)}, where \texttt{x}
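A minimal sketch of such a solver (the update formulas are the standard CG recurrences; the stopping rule and the residual bookkeeping are my assumptions):

```matlab
function [x, rvec] = myCG(A, b, x0, max_itr, tol)
    x = x0;
    r = b - A * x;                       % initial residual
    p = r;                               % first search direction
    rvec = zeros(max_itr, 1);
    for k = 1:max_itr
        Ap = A * p;
        alpha = (r' * r) / (p' * Ap);    % step length along p
        x = x + alpha * p;
        r_new = r - alpha * Ap;
        rvec(k) = norm(r_new)^2;         % squared residual 2-norm per iteration
        if norm(r_new) <= tol * norm(b)  % assumed relative stopping criterion
            rvec = rvec(1:k);
            return
        end
        beta = (r_new' * r_new) / (r' * r);
        p = r_new + beta * p;            % next A-conjugate search direction
        r = r_new;
    end
end
```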
@@ -98,8 +97,10 @@ The plot of the squared residual 2-norms over all iterations can be found in Fig
condition number and convergence rate.}
The eigenvalues of $A$ can be found in Figure \ref{fig:plot2}. The reciprocal condition number of $A$ reported by \texttt{rcond(...)} is $\approx 3.2720 \cdot 10^{-7}$ (\texttt{rcond} returns a value between 0 and 1): this is small, but still well above the denormalized range (i.e. it is not $< \text{eps}$), and thus acceptable for the Conjugate Gradient algorithm. This conditioning is also reflected in the eigenvalue plot, which shows only a mild increase across the first eigenvalues when sorted in increasing order.
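A sketch of how these quantities can be reproduced (assuming \texttt{A} is already loaded; \texttt{eig} on a large matrix can be expensive):

```matlab
lambda = sort(eig(full(A)));   % eigenvalues in increasing order
semilogy(lambda);
xlabel('index'); ylabel('eigenvalue');
r = rcond(full(A));            % reciprocal condition number estimate
```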
\begin{figure}[h]
@@ -118,9 +119,14 @@ Matlab documentation). Solve the system with both solvers using
$max\_iter=200$, $tol=10^{-6}$. Plot the convergence (residual
vs iteration) of each solver and display the original and final
deblurred image.}
The requested convergence plots and images are already rendered in the report.
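As a sketch, the two solver runs and the convergence comparison could look like this (assuming \texttt{A}, \texttt{b}, and \texttt{myCG} are in scope):

```matlab
IL = ichol(A, struct('type', 'nofill'));              % preconditioner for pcg
[x_cg, rvec_cg] = myCG(A, b, zeros(size(b)), 200, 1e-6);
[x_pcg, ~, ~, ~, rvec_pcg] = pcg(A, b, 1e-6, 200, IL, IL');
semilogy(rvec_cg); hold on; semilogy(rvec_pcg); hold off;
legend('myCG', 'pcg'); xlabel('iteration'); ylabel('residual norm');
```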
\subsection{When would \texttt{pcg} be worth the added computational cost?
What about if you are deblurring lots of images with the same
blur operator?}
\texttt{pcg} is worth the added cost when deblurring many images with the same blur operator, since the expensive \texttt{ichol} factorization is computed once and reused for every image; for a single image, \texttt{myCG} is preferable because it avoids the cost of \texttt{ichol}.
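This trade-off can be sketched as follows; \texttt{images} is a hypothetical cell array of blurred images that share the same operator \texttt{A}:

```matlab
IL = ichol(A, struct('type', 'nofill'));   % factorization cost paid once
for i = 1:numel(images)
    b = images{i}';                        % vectorize row by row, as above
    b = b(:);
    x = pcg(A, b, 1e-6, 200, IL, IL');     % IL is reused for every image
end
```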
\end{document} \end{document}


@@ -1,41 +0,0 @@
close all;
clear; clc;
%% Load Default Img Data
load('blur_data/B.mat');
B=double(B);
load('blur_data/A.mat');
A=double(A);
% Show Image
figure
im_l=min(min(B));
im_u=max(max(B));
imshow(B,[im_l,im_u])
title('Blurred Image')
% Vectorize the image (row by row)
b=B';
b=b(:);
% Incomplete Cholesky factorization used as preconditioner
% (this line was commented out even though IL is used below)
IL = ichol(A, struct('type', 'nofill', 'diagcomp', 0));
% Preconditioned starting guess x0 = (IL*IL')^{-1} b
y = IL \ b;
x0 = IL' \ y;
[x, rvec] = myCG(A, b, x0, 200, 1e-6);  % was diag(IL); x0 is the intended guess
semilogy(rvec);
% Pass the factors so pcg actually uses the preconditioner
[X2, flag, ~, ~, rvec2] = pcg(A, b, 1e-6, 200, IL, IL');
%% Validate Test values
load('test_data/A_test.mat');
load('test_data/x_test_exact.mat');
load('test_data/b_test.mat');
% res = ||x^* - A^{-1} b||
res = x_test_exact - A_test \ b_test;  % backslash is more accurate than inv()
norm(res)
%(Now do it with your CG and Matlab's PCG routine!!!)
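The comment above can be carried out as follows; the tolerance and iteration cap mirror the earlier calls and are otherwise my assumption:

```matlab
% Validate both solvers against the exact test solution (a sketch)
[x_cg, ~] = myCG(A_test, b_test, zeros(size(b_test)), 200, 1e-6);
x_pcg = pcg(A_test, b_test, 1e-6, 200);
norm(x_test_exact - x_cg)    % should be small if myCG converged
norm(x_test_exact - x_pcg)
```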