hw5: done 1.1, 2, 3
This commit is contained in:
parent 259751dc3e
commit 96d5e805a0
4 changed files with 126 additions and 13 deletions
@ -16,6 +16,54 @@ header-includes:
---
\maketitle

# Exercise 1

## Exercise 1.1

### The Simplex method

The simplex method solves constrained minimization problems with a linear
cost function and linearly-defined equality and inequality constraints. The main
idea of the simplex method is to consider only the basic feasible points of the
feasible-region polytope and to iteratively navigate between them, hopping from
one neighbouring vertex to the next in search of the point that minimizes the
cost function.

Although the simplex method is efficient enough for most practical
applications, it has exponential worst-case complexity: it has been proven that
a carefully crafted $n$-dimensional problem can have up to $2^n$ polytope
vertices to visit, making the method inefficient on such problems.
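
As an illustration of how such a solver is used in practice, the LP from
Exercise 2 below could be handed to MATLAB's `linprog` with a simplex-type
algorithm selected. This is only a minimal sketch, assuming the Optimization
Toolbox is available; the algorithm name varies slightly across releases
(`'dual-simplex'` in older ones, `'dual-simplex-highs'` in recent ones).

```matlab
% Minimal sketch: solve the Exercise 2 LP with linprog's dual-simplex algorithm.
f  = [4; 3];                    % cost coefficients of 4*x1 + 3*x2
A  = [2 3; -3 2; 0 2; 2 1];     % inequality constraints A*x <= b
b  = [6; 3; 5; 4];
lb = [0; 0];                    % x1 >= 0, x2 >= 0

opts = optimoptions('linprog', 'Algorithm', 'dual-simplex');
[x, fval] = linprog(f, A, b, [], [], lb, [], opts);   % expect x = [0; 0], fval = 0
```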

### Interior-point method

The interior-point method aims to have a better worst-case complexity than the
simplex method while still retaining acceptable performance in practice. Instead
of performing many inexpensive iterations walking along the polytope boundary,
the interior-point method takes Newton-like steps travelling through "interior" points of
the feasible region (hence the name of the method), thus reaching the
constrained minimizer in fewer iterations. Additionally, the interior-point
method is easier to implement in a parallelized fashion.
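
Switching the same solver to its interior-point algorithm only requires
changing one option; again a minimal sketch under the same Optimization
Toolbox assumption:

```matlab
% Same Exercise 2 LP, solved with linprog's interior-point algorithm.
f  = [4; 3];   A = [2 3; -3 2; 0 2; 2 1];   b = [6; 3; 5; 4];   lb = [0; 0];
opts_ip = optimoptions('linprog', 'Algorithm', 'interior-point');
[x_ip, fval_ip] = linprog(f, A, b, [], [], lb, [], opts_ip);
% Both algorithms should agree on the minimizer x = [0; 0] up to tolerance.
```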

### Penalty method

The penalty method allows a linearly constrained minimization problem with
equality constraints to be converted into an unconstrained minimization problem,
so that conventional unconstrained minimization algorithms can be used to
solve it. Namely, the penalty method builds a new unconstrained
objective function which is the sum of:

- The original objective function;
- An additional term for each constraint, which is positive when the current
  point $x$ violates that constraint and zero otherwise.

With some fine-tuning of the coefficients of these new "penalty" terms, it is
possible to build an equivalent unconstrained minimization problem whose
minimizer is also the constrained minimizer of the original problem.
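
As a sketch of the idea, the equality-constrained quadratic problem from
Exercise 3 ($\min \frac12 x^TGx + c^Tx$ subject to $Ax = b$) can be handled with
the quadratic penalty $Q(x;\mu) = \frac12 x^TGx + c^Tx + \frac{\mu}{2}\lVert Ax -
b\rVert^2$: as $\mu$ grows, the unconstrained minimizer of $Q$ approaches the
constrained minimizer. The minimal MATLAB sketch below reuses the $G$, $c$, $A$,
$b$ of that exercise and exploits the fact that, for a quadratic objective, $Q$
has a closed-form minimizer.

```matlab
% Quadratic-penalty sketch for  min 0.5*x'*G*x + c'*x  s.t.  A*x = b.
% Setting grad Q = G*x + c + mu*A'*(A*x - b) = 0 gives a linear system in x.
G = [6 2 1; 2 5 2; 1 2 4];
c = [-8; -3; -3];
A = [1 0 1; 0 1 1];
b = [3; 0];

for mu = [1 10 100 1000 10000]
    x_mu = (G + mu * (A' * A)) \ (mu * A' * b - c);   % minimizer of Q(x; mu)
    fprintf('mu = %6g   x = [% .4f % .4f % .4f]   ||A*x - b|| = %.2e\n', ...
            mu, x_mu, norm(A * x_mu - b));
end
```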

## Exercise 1.2

## Exercise 1.3

# Exercise 2

## Exercise 2.1
@ -55,7 +103,7 @@ index set $1, 2, \ldots, n$ such that:
indices in $\beta$ are linearly independent from each other.

The geometric interpretation of basic feasible points is that all of them
are vertices of the polytope that bounds the feasible region. We will use this
proven property to manually solve the constrained minimization problem presented
in this section with the aid of the graphical plot of the feasible region in
figure \ref{fig:a}.
@ -85,7 +133,7 @@ Figure 1 taken from the book.-->

Since the geometrical interpretation of the definition of basic feasible points
states that these points are none other than the vertices of the feasible region,
we first look at the plot above and identify these points (i.e. the vertices of the
bright green, non-transparent region). Then, we look at which constraint boundaries
cross at these vertices, and we formulate an algebraic expression to find these points. In
clockwise order, we have:
@ -130,7 +178,8 @@ x^*_3 = \frac{1}{13} \cdot \begin{bmatrix}3\\24\end{bmatrix} \;\;\; f(x^*_3) = 4
\cdot \frac{3}{13} + 3 \cdot \frac{24}{13} = \frac{84}{13}$$$$
x^*_4 = \frac12 \cdot \begin{bmatrix}3\\2\end{bmatrix} \;\;\; f(x^*_4) = 4 \cdot
\frac32 + 3 \cdot 1 = 9$$$$
x^*_5 = \begin{bmatrix}2\\0\end{bmatrix} \;\;\; f(x^*_5) = 4 \cdot 2 + 3 \cdot 0
= 8$$

Therefore, $x^* = x^*_1 = \begin{bmatrix}0 & 0\end{bmatrix}^T$ is the global
constrained minimizer.
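
The same enumeration can be done programmatically: intersect every pair of
constraint boundaries, keep only the feasible intersections, and evaluate the
cost at each of them. A minimal MATLAB sketch of this check (the constraint
data matches the plotting script at the end of this homework):

```matlab
% Enumerate the basic feasible points (polytope vertices) by intersecting
% pairs of constraint boundaries and keeping only the feasible intersections.
Ab = [2 3; -3 2; 0 2; 2 1; -1 0; 0 -1];   % all constraints written as Ab*x <= bb
bb = [6; 3; 5; 4; 0; 0];                  % last two rows encode x1 >= 0, x2 >= 0

best = inf;
for i = 1:size(Ab, 1)
    for j = i+1:size(Ab, 1)
        M = Ab([i j], :);
        if abs(det(M)) < 1e-12, continue; end     % parallel boundaries: no vertex
        v = M \ bb([i j]);
        if all(Ab * v <= bb + 1e-9)               % keep only feasible intersections
            obj = 4 * v(1) + 3 * v(2);
            fprintf('x = (%.4f, %.4f)   f = %.4f\n', v(1), v(2), obj);
            if obj < best, best = obj; xbest = v; end
        end
    end
end
fprintf('minimizer: (%.4f, %.4f) with f = %.4f\n', xbest(1), xbest(2), best);
```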
@ -138,6 +187,73 @@ constrained minimizer.
# Exercise 3

## Exercise 3.1
<!--
I consider the given problem, which is exactly the same as one of the problems
of the previous assignment (Homework 4):

$$\min_{x} f(x) = 3x^2_1 + 2x_1x_2 + x_1x_3 +
2.5x^2_2 + 2x_2x_3 + 2x^2_3 - 8x_1 - 3x_2 - 3x_3
$$$$\text{ subject to } x_1 + x_3 = 3 \;\;\; x_2 + x_3 = 0$$

defining $x$ as $(x_1,\,x_2,\,x_3)^T$, that can be written in the form of a
quadratic minimization problem:

$$\min_{x} f(x) = \dfrac{1}{2} \langle x,\, Gx\rangle + \langle x,\, c\rangle \\
\text{ subject to } Ax = b$$

Where $G\in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix,
$x$, $c \in \mathbb{R}^n$. The equality constraints are defined in terms of the
matrix $A\in \mathbb{R}^{m\times n}$, with $m \leq n$ and vector $b \in
\mathbb{R}^m$. Here, matrix $A$ has full rank.
-->

Yes, the problem can be solved with _Uzawa_'s method since the problem can be
reformulated as a saddle point system. The KKT conditions of the problem can be
written as the following linear system:

$$\begin{bmatrix}G & -A^T\\A & 0 \end{bmatrix} \begin{bmatrix}
x^*\\\lambda^* \end{bmatrix} = \begin{bmatrix} -c\\b \end{bmatrix}.$$
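
For a quick numerical check, this KKT system can also be solved directly with a
backslash solve; a minimal sketch reusing the $G$, $c$, $A$, $b$ values from the
script at the end of this homework:

```matlab
% Sanity check: solve the KKT system [G -A'; A 0][x; lambda] = [-c; b] directly.
G = [6 2 1; 2 5 2; 1 2 4];   c = [-8; -3; -3];
A = [1 0 1; 0 1 1];          b = [3; 0];

sol   = [G -A'; A zeros(2)] \ [-c; b];
xstar = sol(1:3);    % constrained minimizer x*
lstar = sol(4:5);    % Lagrange multipliers lambda*
```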

If we then express the minimizer $x^*$ in terms of $x$, an approximation of it,
and $p$, a search step (i.e. $x^* = x + p$), we obtain the following system,
where $g = c + Gx$ and $h = Ax - b$:

$$\begin{bmatrix}
G & A^T\\
A & 0
\end{bmatrix}
\begin{bmatrix}
-p\\
\lambda^*
\end{bmatrix} =
\begin{bmatrix}
g\\
h
\end{bmatrix}$$

This is the system that _Uzawa_'s method will solve. Therefore, recalling the
values of $A$ and $G$ computed in the previous assignment, we need to check
whether the matrix

$$K = \begin{bmatrix}G & A^T \\ A & 0\end{bmatrix} = \begin{bmatrix}
6 & 2 & 1 & 1 & 0 \\
2 & 5 & 2 & 0 & 1 \\
1 & 2 & 4 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 \\
\end{bmatrix}$$

has non-zero positive and negative eigenvalues. We compute the eigenvalues of this
matrix with MATLAB, and we find:

$$\begin{bmatrix}
-0.4818\\
-0.2685\\
2.6378\\
4.3462\\
8.7663\end{bmatrix}$$

Therefore, the system is indeed a saddle point system and it can be solved with
_Uzawa_'s method.
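
For reference, the classic Uzawa iteration for this system alternates an exact
solve in $x$ with a gradient step on the multipliers: given $\lambda_k$, solve
$Gx_k = A^T\lambda_k - c$ and then update $\lambda_{k+1} = \lambda_k +
\omega\,(b - Ax_k)$, which converges for $0 < \omega < 2/\lambda_{\max}(AG^{-1}A^T)$.
The sketch below is a hypothetical stand-in for the `uzawa.m` routine called in
the script at the end of this homework; it mimics the interface of that call,
but the actual implementation may differ.

```matlab
function [x, lambda] = uzawa_sketch(G, c, A, b, x0, lambda0, tol, maxit)
% UZAWA_SKETCH  Minimal sketch of Uzawa's method for the saddle point system
%   [G -A'; A 0] * [x; lambda] = [-c; b].
    x = x0;                                % initial guess (overwritten by the exact solve)
    lambda = lambda0;
    omega = 1 / max(eig(A * (G \ A')));    % conservative step size
    for k = 1:maxit
        x = G \ (A' * lambda - c);         % exact minimization in x for fixed lambda
        r = b - A * x;                     % constraint residual
        lambda = lambda + omega * r;       % gradient ascent on the multipliers
        if norm(r) < tol
            break
        end
    end
end
```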

## Exercise 3.2
Binary file not shown.
@ -58,21 +58,18 @@ legend('2x1 + 3x2 <= 6', '-3x1 + 2x2 <= 3', '2x2 <= 5', ...
'2x1 + x2 <= 4', 'x1 > 0 and x2 > 0', 'feasible region');
hold off

%% Evaluate the objective f = 4*x1 + 3*x2 at each vertex of the feasible region

for i=1:5
    obj = 4 * px(i) + 3 * py(i);
    fprintf("x1=%g x2=%g y=%g\n", px(i), py(i), obj);
end

%% Exercise 3.1

G = [6 2 1; 2 5 2; 1 2 4];
c = [-8; -3; -3];
A = [1 0 1; 0 1 1];
b = [3; 0];

K = [G A'; A zeros(2)];
eig(K)

%% Exercise 3.2

[x, lambda] = uzawa(G, c, A, b, [0;0;0], [0;0], 1e-8, 100);
display(x);
display(lambda);