created directory

Tommaso Rodolfo Masera 2018-12-13 14:37:39 +01:00
parent 7e07f017e2
commit a23d00a981
3 changed files with 4 additions and 3 deletions



@@ -76,14 +76,15 @@ Number of \emph{Tag} bits = $32\ \text{bits} - 2\ \text{bits} - 3\ \text{bits} - 2\ \text{bits} = 25\ \text{bits}$
\paragraph{}
The floating point standard normalizes every binary number to the form $1.xxxx_2$ and stores only the fraction: since the normalized format guarantees that the bit before the binary point is always a $1$, that bit never needs to be stored, saving one bit of memory.
The stored significand therefore ranges from $1.0_2$ ($1_{10}$) to $1.111\ldots_2$ (almost $2_{10}$). \\
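For instance (a small worked example, not taken from the exercise itself), consider $5.75_{10}$:
\[
5.75_{10} = 101.11_2 = 1.0111_2 \times 2^2
\]
In single precision only the fraction bits $0111\,0\ldots0$ and the biased exponent $2 + 127 = 129 = 10000001_2$ are stored; the leading $1$ is implied by the format.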
The reason for having three different precision formats is, indeed, to have more precise numbers to work with. This is especially useful with big numbers: one more digit in a big number makes a much larger difference than in a small one, so the extra fraction bits of the wider formats matter most there.
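To make this concrete (the figures below assume the standard IEEE 754 single and double precision formats): for values between $2^{24}$ and $2^{25}$, single precision, with its 23 fraction bits, can only represent numbers spaced $2^{24-23} = 2$ apart, so $2^{24} + 1 = 16\,777\,217$ is not representable; double precision, with 52 fraction bits, still resolves a spacing of $2^{24-52} = 2^{-28}$ at the same magnitude.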
\subsection{Exercise 2.2}
\paragraph{}
Without denormalized numbers, underflow would usually result in a flush to a value of 0. \\
But, with denormalization, instead of a ``flush to 0'', you get a gradual loss of precision for values approaching 0.
On the other hand, denormalized numbers are not a valid solution to overflow: the gradual loss of precision does not apply towards infinity, since there is no finite upper bound to the floating point numbers. In addition, such a denormalization would not make sense with the IEEE 754 specification of the mantissa bits: each mantissa bit falls to the right of the binary point, not to the left.
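As a concrete illustration (assuming single precision): the smallest normalized value is $1.0_2 \times 2^{-126}$, and denormalized numbers extend the range down to $2^{-23} \times 2^{-126} = 2^{-149}$, losing one significant bit for each halving step towards $0$. At the other end, the largest finite value is $(2 - 2^{-23}) \times 2^{127}$, beyond which results jump directly to $\infty$; there is no analogous gradual transition.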
\subsection{Exercise 2.3}

Homework 11/hw11_ex3.ods (new binary file, not shown)