Partial lesson 2019-04-09

**Max heap property**: for all i > 1, A[parent(i)] >= A[i]
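
With the usual array layout for a heap (1-based indices, so parent(i) = i // 2), this property can be checked directly. A minimal sketch, assuming 1-based indexing with slot 0 unused (the helper names are illustrative, not from the lesson):

```python
def parent(i):
    # 1-based indexing: the parent of node i lives at index i // 2
    return i // 2

def is_max_heap(A):
    # A[0] is an unused placeholder so that indices start at 1;
    # check A[parent(i)] >= A[i] for every i > 1
    return all(A[parent(i)] >= A[i] for i in range(2, len(A)))
```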

# Some data structures

A way to organize information.

A data structure has data and meta-data (like size, length).

## Queue (FIFO)

### Structure

- Based on array
- `length`
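
A minimal sketch of such an array-based FIFO queue (the class and method names are my own, not from the lesson):

```python
class Queue:
    # FIFO queue backed by a fixed-size array, with an explicit `length`
    def __init__(self, capacity):
        self.array = [None] * capacity
        self.head = 0      # index of the next element to dequeue
        self.length = 0    # number of stored elements

    def enqueue(self, x):
        if self.length == len(self.array):
            raise OverflowError("queue is full")
        tail = (self.head + self.length) % len(self.array)
        self.array[tail] = x
        self.length += 1

    def dequeue(self):
        if self.length == 0:
            raise IndexError("queue is empty")
        x = self.array[self.head]
        self.head = (self.head + 1) % len(self.array)
        self.length -= 1
        return x
```

The head index wraps around modulo the capacity, so the array is reused as a circular buffer.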

## Dictionary

Data structure for fast search.

A way to implement a dictionary is a *Direct-access table*.

### API of dictionary

- `Insert(D, k)` inserts a key `k` into dictionary `D`
- `Delete(D, k)` removes key `k`

Many different implementations exist.

## Direct-access tables

- universe of keys = {1, 2, ..., M}
- array `T` of size M
- each key has its own position in `T`

## The 'dumb' approach

```python
def Insert(D, x):
    D[x.key] = x  # assumed body (elided in the source): each key gets its own slot
```

## Chained hash table

```python
def Chained_hash_search(T, k):
    return List_search(T[hash(k)], k)
```

Elements are spread evenly across the table if the hash function is good.

*alpha* = *n / |T|* is the average length of the linked lists inside the table (where *n* is the number of elements in the table and *|T|* is the size of the table).

A good hash table implementation keeps *alpha* O(1): if *n*, the number of elements that we want to store in the hash table, grows, then *|T|* must also grow. *alpha* represents the average time complexity of both insertion and search.
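
To make *alpha* concrete, here is a minimal chained hash table sketch (the names and the modulo hash are assumptions, not the lesson's code; plain Python lists stand in for the linked lists):

```python
class ChainedHashTable:
    def __init__(self, size):
        self.T = [[] for _ in range(size)]  # one chain per slot
        self.n = 0                          # number of stored elements

    def hash(self, k):
        return k % len(self.T)              # assumed hash function

    def insert(self, k):
        self.T[self.hash(k)].append(k)      # O(1), like Chained-hash-insert
        self.n += 1

    def search(self, k):
        return k in self.T[self.hash(k)]    # scans one chain: expected O(alpha)

    def alpha(self):
        return self.n / len(self.T)         # load factor n / |T|
```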

## Growing a Chained hash table

In order to grow a table, a new table must be created. The hash function (or its range parameters) must be changed as well.

### Rehashing

*Rehashing* is the process of putting all the elements of the old table into the new table according to the new hash function. The complexity is O(n), since `Chained-hash-insert` is constant.

### Growing the table by a constant amount

If the table is grown by a constant amount every time the table *overflows*, then the total cost of *n* insertions is O(n^2) due to all the *rehashing* needed.

### Growing the table by doubling the size

If the table size is doubled when *overflowing*, then the total cost of *n* insertions becomes linear again (amortized O(1) per insertion), at the price of some extra memory.
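
A standalone sketch of the doubling strategy (hypothetical code: chains are Python lists and the hash is a simple modulo, so changing the table size also changes the hash range):

```python
def rehash(old_table):
    # Double the size and re-insert every element under the new hash function
    new_size = 2 * len(old_table)
    new_table = [[] for _ in range(new_size)]
    for chain in old_table:
        for k in chain:                     # O(n) total over all chains
            new_table[k % new_size].append(k)
    return new_table

def insert(table, k):
    # Double the table when the load factor alpha would exceed 1
    # (a real implementation would track n instead of recounting it)
    n = sum(len(chain) for chain in table)
    if n + 1 > len(table):
        table = rehash(table)
    table[k % len(table)].append(k)
    return table
```

Starting from a table of size 2 and inserting 5 keys triggers two doublings (2 → 4 → 8), yet each key is moved only O(log n) times overall, which is where the linear total cost comes from.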