hw1: final touch on training procedure description

This commit is contained in:
Claudio Maggioni 2021-05-07 17:12:59 +02:00
parent 85090ae316
commit 68b64da919
3 changed files with 5 additions and 0 deletions

.gitignore

@@ -422,3 +422,5 @@ TSWLatexianTemp*
# option is specified. Footnotes are then stored in a file with suffix Notes.bib.
# Uncomment the next line to have this generated file ignored.
#*Notes.bib
*.zip


@@ -112,6 +112,9 @@ The model I've chosen is a feed-forward neural network with 2 hidden layers:
Finally, the output neuron has a linear activation function. As described
before, the validation set was used to manually tailor the NN's architecture.
The network was trained using the `adam` optimizer with a maximum of 5000
epochs and an early stopping procedure with a patience of 120 epochs.
The fitted model has these final performance parameters:
- Final MSE on validation set $\approx 0.0143344$
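The training procedure the diff describes (two hidden layers, a linear output activation, the `adam` optimizer, a 5000-epoch cap, and early stopping with a patience of 120 epochs) can be sketched as follows. This is a minimal illustration, not the report's actual code: scikit-learn's `MLPRegressor` stands in for whatever framework the original used, and the layer sizes and toy data are assumptions.

```python
# Sketch of the described training setup. Layer sizes and data are
# placeholders; only the optimizer, epoch cap, and patience come from
# the report text.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))     # toy inputs (assumed)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]       # toy regression target (assumed)

model = MLPRegressor(
    hidden_layer_sizes=(20, 20),  # two hidden layers; sizes are assumptions
    solver="adam",                # optimizer named in the text
    max_iter=5000,                # maximum number of epochs
    early_stopping=True,          # hold out an internal validation split
    n_iter_no_change=120,         # early-stopping patience of 120 epochs
    validation_fraction=0.1,
    random_state=0,
)
# MLPRegressor's output unit uses the identity (linear) activation,
# matching the linear output neuron described above.
model.fit(X, y)
```

Early stopping here monitors the internal validation score and halts once it fails to improve for 120 consecutive epochs, so training rarely reaches the 5000-epoch cap.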