\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage{float}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{amsmath}
\usepackage{pgfplots}
\pgfplotsset{compat=newest}
\usetikzlibrary{plotmarks}
\usetikzlibrary{arrows.meta}
\usepgfplotslibrary{patchplots}
\usepackage{grffile}
\usepgfplotslibrary{external}
\tikzexternalize
\usepackage[margin=2.5cm]{geometry}
% To compile:
% sed -i 's#title style={font=\\bfseries#title style={yshift=1ex, font=\\tiny\\bfseries#' *.tex
% luatex -enable-write18 -shellescape main.tex
\pgfplotsset{every x tick label/.append style={font=\tiny, yshift=0.5ex}}
\pgfplotsset{every title/.append style={font=\tiny, align=center}}
\pgfplotsset{every y tick label/.append style={font=\tiny, xshift=0.5ex}}
\pgfplotsset{every z tick label/.append style={font=\tiny, xshift=0.5ex}}
\setlength{\parindent}{0cm}
\setlength{\parskip}{0.5\baselineskip}
\title{Optimization methods -- Homework 2}
\author{Claudio Maggioni}
\begin{document}
\maketitle
\section{Exercise 1}
\subsection{Implement the matrix $A$ and the vector $b$, for the moment, without taking into consideration the
boundary conditions. As you can see, the matrix $A$ is not symmetric. Does an energy function of
the problem exist? Consider $N = 4$ and show your answer, explaining why it can or cannot exist.}
No: an energy function does not exist. Since $A$ is not symmetric
(even though it is positive definite), the quadratic form used by the conjugate gradient
method, $\phi(x) = \frac12 x^T A x - b^T x$, is not a valid energy function for this problem:
its gradient is $\frac12 (A + A^T) x - b$ rather than $Ax - b$, so its minimizer does not
necessarily coincide with the solution of $Ax = b$.
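As a concrete check for $N = 4$, here is a minimal MATLAB sketch, assuming the construction
implied by the next subsection (interior rows from the $[-1, 2, -1]$ finite-difference
stencil, first and last rows replaced by identity rows for the boundary conditions; the
exact construction in the assignment may differ):
\begin{verbatim}
% Hedged sketch for N = 4: interior rows use the [-1 2 -1] stencil,
% boundary rows are identity rows enforcing x(1) = 0 and x(N) = 0.
N = 4;
A = diag(2 * ones(N, 1)) ...
  + diag(-ones(N - 1, 1), 1) + diag(-ones(N - 1, 1), -1);
A(1, :) = 0; A(1, 1) = 1;   % row enforcing x(1) = 0
A(N, :) = 0; A(N, N) = 1;   % row enforcing x(N) = 0
issymmetric(A)              % false: A(2,1) = -1 but A(1,2) = 0
\end{verbatim}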
\subsection{Once the new matrix has been derived, write the energy function related to the new problem
and the corresponding gradient and Hessian.}
The boundary conditions $x_1 = x_n = 0$ are already enforced: since $b_1 = b_n = 0$,
the first row of the system gives $x_1 = b_1 = 0$, and the same argument on the last
row gives $x_n = 0$.

The energy function is therefore $\phi(x) = \frac12 x^T \overline{A} x - b^T x$, with
$\overline{A}$ and $b$ defined above. Its gradient is $\nabla \phi(x) = \overline{A} x - b$
and its Hessian is $\nabla^2 \phi(x) = \overline{A}$.
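A minimal MATLAB sketch of these objects, assuming $\overline{A}$ is obtained from the
sketch above by zeroing the couplings into the boundary unknowns (an assumption; those
terms vanish anyway because $x_1 = x_n = 0$):
\begin{verbatim}
% Hedged sketch: Abar symmetrizes A by dropping the couplings into
% the boundary unknowns, which are fixed at zero.
Abar = A;
Abar(2, 1) = 0;
Abar(N - 1, N) = 0;
b = [0; rand(N - 2, 1); 0]; % placeholder RHS, zero boundary entries
phi  = @(x) 0.5 * (x' * Abar * x) - b' * x;  % energy function
grad = @(x) Abar * x - b;    % gradient; the Hessian is Abar itself
\end{verbatim}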
\subsection{Write the Conjugate Gradient algorithm in the PDF and implement it in MATLAB code in a function
called \texttt{CGSolve}.}
The implementation follows the algorithm on page 112 of the textbook (page 133 of the PDF).
The solution of this task can be found in Section 1.3 of the script \texttt{main.m}.
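A minimal sketch of such an implementation is given below; the actual signature and
outputs of \texttt{CGSolve} in \texttt{main.m} may differ.
\begin{verbatim}
function [x, objs, rnorms] = CGSolve(A, b, x0, tol, maxit)
% Conjugate Gradient sketch following the algorithm referenced above.
% Records, per iteration, the energy function value and the norm of
% the residual (i.e. the gradient of the energy function).
x = x0;
r = A * x - b;              % residual = gradient at x
p = -r;                     % first search direction
objs = zeros(maxit + 1, 1); rnorms = zeros(maxit + 1, 1);
for k = 0:maxit
    objs(k + 1) = 0.5 * (x' * A * x) - b' * x;
    rnorms(k + 1) = norm(r);
    if rnorms(k + 1) < tol
        break;              % converged: residual norm below tol
    end
    Ap = A * p;
    alpha = (r' * r) / (p' * Ap);  % exact step length along p
    x = x + alpha * p;
    rnew = r + alpha * Ap;         % residual update (eq. 5.10)
    beta = (rnew' * rnew) / (r' * r);
    p = -rnew + beta * p;          % next A-conjugate direction
    r = rnew;
end
objs = objs(1:k + 1); rnorms = rnorms(1:k + 1);
end
\end{verbatim}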
\subsection{Solve the Poisson problem.}
The solution of this task can be found in Section 1.4 of the script \texttt{main.m}.
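Assuming the objects defined in the sketches above, the call could look like this
(the tolerance $10^{-8}$ matches the termination criterion discussed in Exercise 2):
\begin{verbatim}
% Hedged usage sketch: solve the symmetrized system with CGSolve.
[x, objs, rnorms] = ...
    CGSolve(Abar, b, zeros(N, 1), 1e-8, size(Abar, 1));
\end{verbatim}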
\subsection{Plot the value of the energy function and the norm of the gradient (here,
use \texttt{semilogy}) as functions of the iterations.}
The solution of this task can be found in Section 1.5 of the script \texttt{main.m}.
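A sketch of how these plots could be produced from the outputs of the \texttt{CGSolve}
sketch above:
\begin{verbatim}
figure; plot(1:numel(objs), objs);          % energy function value
figure; semilogy(1:numel(rnorms), rnorms);  % gradient norm, log y
\end{verbatim}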
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{obvalues}}
\caption{Objective function values w.r.t. iteration number}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{gnorms}}
\caption{Norm of the gradient w.r.t. iteration number \\ (y-axis is log scaled)}
\end{subfigure}
\caption{Plots for Exercise 1.4.}
\end{figure}
\subsection{Finally, explain why the Conjugate Gradient method is a Krylov subspace method.}
Because Theorem 5.3 holds: every CG iterate satisfies
$x_k \in x_0 + \operatorname{span}\{r_0, Ar_0, \ldots, A^{k-1} r_0\}$, i.e. the method
searches over Krylov subspaces of growing dimension. The theorem holds mainly because of
the residual recurrence (5.10, page 106 [127]):
\[r_{k+1} = r_k + \alpha_k A p_k\]
which shows, by induction, that each new residual (and hence each new search direction)
lies in the next Krylov subspace.
\section{Exercise 2}
Consider the linear system $Ax = b$, where the matrix $A$ is constructed in four different ways:
\begin{itemize}
\item $A$ = \texttt{diag([1:10])}
\item $A$ = \texttt{diag(ones(1,10))}
\item $A$ = \texttt{diag([1, 1, 1, 3, 4, 5, 5, 5, 10, 10])}
\item $A$ = \texttt{diag([1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0])}
\end{itemize}
\subsection{How many distinct eigenvalues does each matrix have?}
Since each matrix is diagonal, its eigenvalues are exactly its diagonal entries, so the
number of distinct eigenvalues equals the number of distinct elements on the diagonal.
In order, the matrices have 10, 1, 5, and 10 distinct eigenvalues, respectively.
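This can be verified directly in MATLAB with the four matrices from the list above:
\begin{verbatim}
As = {diag(1:10), diag(ones(1, 10)), ...
      diag([1, 1, 1, 3, 4, 5, 5, 5, 10, 10]), ...
      diag([1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0])};
for i = 1:numel(As)
    fprintf('A%d: %d distinct eigenvalues\n', i, ...
            numel(unique(eig(As{i}))));
end
\end{verbatim}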
\subsection{Construct a right-hand side $b$ = \texttt{rand(10,1)} and apply the
Conjugate Gradient method to solve the system for each $A$.}
The solution of this task can be found in Section 2.2 of the \texttt{main.m} MATLAB script.
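A hedged usage sketch, reusing the \texttt{As} cell array and the \texttt{CGSolve}
sketch from Exercise 1:
\begin{verbatim}
b = rand(10, 1);   % random right-hand side, as in the assignment
for i = 1:numel(As)
    [x, ~, rnorms] = CGSolve(As{i}, b, zeros(10, 1), 1e-8, 10);
    fprintf('A%d: %d iterations\n', i, numel(rnorms) - 1);
end
\end{verbatim}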
\subsection{Compute the logarithm of the energy norm of the error for each matrix
and plot it with respect to the number of iterations.}
The solution of this task can be found in Section 2.3 of the \texttt{main.m} MATLAB script.
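The logged quantity is the energy norm of the error, $\|e_k\|_A = \sqrt{e_k^T A e_k}$
with $e_k = x_k - x^*$. A sketch of its computation for one of the matrices, reusing
\texttt{As} and \texttt{b} from above and assuming \texttt{CGSolve} is extended to also
return its iterates in a cell array \texttt{xs} (this extra output is an assumption):
\begin{verbatim}
A = As{1};        % matrix under study
x_star = A \ b;   % reference solution
log_err = zeros(numel(xs), 1);
for k = 1:numel(xs)
    e = xs{k} - x_star;
    log_err(k) = log10(sqrt(e' * A * e)); % log energy norm of e
end
plot(1:numel(xs), log_err);
\end{verbatim}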
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{a1}}
\caption{First matrix}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{a2}}
\caption{Second matrix}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{a3}}
\caption{Third matrix}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{\input{a4}}
\caption{Fourth matrix}
\end{subfigure}
\caption{Plots of the logarithm of the energy norm of the error per iteration. Points where the logarithm is $-\infty$ (i.e.\ zero error) are not shown.}
\end{figure}
\subsection{Comment on the convergence of the method for the different matrices. What can you say observing
the number of iterations obtained and the number of clusters of the eigenvalues of the related
matrix?}
The method converges quickly for each matrix. The fastest convergence occurs for $A_2$:
it is the identity matrix, which makes the problem $Ax = b$ trivial, and CG reaches the
exact solution in a single iteration.

For all the other matrices, the energy norm of the error decreases exponentially as the
iterations increase, eventually reaching $0$ in the cases where the method converges
exactly (namely matrices $A_1$ and $A_3$).

Except for the fourth matrix, the number of iterations is exactly equal to the number of
distinct eigenvalues of the matrix, in line with the theory: in exact arithmetic, CG
terminates in at most as many iterations as there are distinct eigenvalues (or, more
loosely, clusters of eigenvalues). The exception for $A_4$ is due to the tolerance-based
termination condition triggering at an earlier iteration: its ten eigenvalues are tightly
clustered in $[1.1, 2.0]$, so we find an approximation of $x$ with residual norm below
$10^{-8}$ before the tenth iteration.
\end{document}