\documentclass[12pt]{article}

\usepackage[margin=3cm]{geometry}

\title{Computer Architecture -- Assignment 10}
\author{Claudio Maggioni \and Tommaso Rodolfo Masera}
\date{}

\begin{document}
\maketitle

\section{Exercise 1}

\subsection{Exercise 1.1}

\begin{itemize}
\item[] Cache line size: 32 bytes
\item[] Cache memory size: 16384 bytes
\item[] Number of cache lines in cache memory: $16384 / 32 = 512$
\end{itemize}
\subsection{Exercise 1.2}

\subsubsection{Byte:}

\paragraph{}
Bytes per word = $32 / 8 = 4$ (a word is 32 bits, a byte is 8 bits)

\paragraph{}
Number of \emph{Byte} bits = $\log_2(4) = 2$ bits

\subsubsection{Word:}

\paragraph{}
Words per cache line = $32 / 4 = 8$ (a cache line is 32 bytes, a word is 4 bytes)

\paragraph{}
Number of \emph{Word} bits = $\log_2(8) = 3$ bits

\subsubsection{Line:}

\paragraph{}
Number of \emph{Line} bits = $\log_2(512) = 9$ bits

\subsubsection{Tag:}

\paragraph{}
Number of \emph{Tag} bits = $32 - 9 - 3 - 2 = 18$ bits
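\paragraph{}
As a sanity check, the four fields of the direct-mapped address add up to the full 32-bit address:
\[
\underbrace{18}_{\mbox{\scriptsize Tag}} + \underbrace{9}_{\mbox{\scriptsize Line}} + \underbrace{3}_{\mbox{\scriptsize Word}} + \underbrace{2}_{\mbox{\scriptsize Byte}} = 32\ \mbox{bits}
\]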
\subsection{Exercise 1.3}

\subsubsection{Byte and Word:}

\paragraph{}
The numbers of \emph{Byte} and \emph{Word} bits do not change.

\subsubsection{Line:}

\paragraph{}
Number of sets = 4

\paragraph{}
Number of \emph{Set} bits = $\log_2(4) = 2$ bits

\subsubsection{Tag:}

\paragraph{}
Number of \emph{Tag} bits = $32 - 2 - 3 - 2 = 25$ bits
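\paragraph{}
As a sanity check, the four fields of the set-associative address also add up to the full 32-bit address:
\[
\underbrace{25}_{\mbox{\scriptsize Tag}} + \underbrace{2}_{\mbox{\scriptsize Set}} + \underbrace{3}_{\mbox{\scriptsize Word}} + \underbrace{2}_{\mbox{\scriptsize Byte}} = 32\ \mbox{bits}
\]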
\section{Exercise 2}

\subsection{Exercise 2.1}

\paragraph{}
The IEEE 754 floating point standard normalizes every binary number to the form $1.XXXX$ and stores only the fraction: since the normalized format guarantees that the bit before the binary point is always 1, that bit can be left implicit, saving one bit of storage.
The stored significand therefore ranges from $1.0_2$ ($1_{10}$) to $1.1111\ldots_2$ (just under $2_{10}$).

The reason for having three formats of different precision is, indeed, to have more precise numbers to work with. This is especially useful with big numbers: a larger format devotes more bits to the significand, and one more significant digit makes a much bigger absolute difference in a big number than in a small one.
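\paragraph{}
In summary, a normalized single-precision value with sign bit $s$, stored exponent $e$ and fraction $f$ decodes as
\[
(-1)^s \times 1.f_2 \times 2^{\,e - 127}
\]
where 127 is the single-precision exponent bias.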
\subsection{Exercise 2.2}

\paragraph{}
Without denormalized numbers, underflow would simply flush the result to 0.
With denormalization, instead of a ``flush to 0'', you get a gradual loss of precision for values that approach 0.

On the other hand, denormalized numbers are not a valid solution to overflow: since there is no finite upper bound to floating point numbers, there is nothing to approach gradually, so a symmetric ``gradual loss of precision towards infinity'' does not apply. In addition, such a denormalization would not make sense with the IEEE 754 specification of mantissa bits: each mantissa bit falls to the right of the binary point, and not to the left.
\subsection{Exercise 2.3}

\paragraph{A) Conversion of 01000001010110000000000000000000 to decimal:}

\begin{itemize}
\item[] Sign bit: 0
\item[] Exponent bits: 10000010
\item[] Mantissa (with the implicit leading 1): 1.10110000000000000000000
\end{itemize}

The exponent bits map to $+3$ ($130 - 127$) and the set mantissa bits evaluate to $2^{-1}$, $2^{-3}$ and $2^{-4}$.

Multiplying the mantissa by $2^3$ shifts the binary point three places to the right: $1.10110000000000000000000_2 \times 2^3 = 1101.1_2$.

We then convert the binary number to decimal:
\[
1101.1_2 = 2^3 + 2^2 + 2^0 + 2^{-1} = 8 + 4 + 1 + 0.5 = 13.5
\]
\paragraph{B) Conversion of 01000100001001101001000000000000 to decimal:}

\begin{itemize}
\item[] Sign bit: 0
\item[] Exponent bits: 10001000
\item[] Mantissa (with the implicit leading 1): 1.01001101001000000000000
\end{itemize}

The exponent bits map to $+9$ ($136 - 127$) and the set mantissa bits evaluate to $2^{-2}$, $2^{-5}$, $2^{-6}$, $2^{-8}$ and $2^{-11}$.

Multiplying the mantissa by $2^9$ shifts the binary point nine places to the right: $1.01001101001000000000000_2 \times 2^9 = 1010011010.01_2$.

We then convert the binary number to decimal:
\[
1010011010.01_2 = 2^9 + 2^7 + 2^4 + 2^3 + 2^1 + 2^{-2} = 512 + 128 + 16 + 8 + 2 + 0.25 = 666.25
\]
\subsection{Exercise 2.4}

In order to compute the number of IEEE 754 single-precision floating point numbers between 0 and 1 (both included), we first note that the sign bit is constantly 0 (positive), and therefore it does not contribute to the count of possible combinations.

Then we count the number of denormalized numbers (including 0), which is $2^{23} = 8388608$.

After that, we count the valid exponents from $2^{-126}$ to $2^{-1}$ included, which are 126. All of these exponents generate a number $< 1$ even with the highest possible mantissa. The number of normalized values with these exponents is therefore $126 \times 2^{23} = 1056964608$.

Finally, we consider the number 1 itself (0x3f800000) and sum all the combinations: $8388608 + 1056964608 + 1 = 1065353217$.

Therefore, there are 1065353217 IEEE 754 single-precision floating point numbers between 0 and 1 (both included).
\end{document}