The analysis of the Google 2019 Borg cluster traces was conducted using Apache
Spark and its Python 3 API (pyspark). Spark was used to execute a series of
queries performing various sums and aggregations over the entire dataset
provided by Google.

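The exact Spark configuration is not shown in this section; the code sketches
below assume a minimal setup along these lines, where the application name is
purely illustrative:

    from pyspark.sql import SparkSession

    # Minimal, illustrative setup; the application name and any cluster
    # options are assumptions, not taken from the actual analysis code.
    spark = SparkSession.builder \
        .appName("borg-2019-trace-analysis") \
        .getOrCreate()
    sc = spark.sparkContext   # entry point for the RDD API used below
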
In general, each query follows the same Map-Reduce template: traces are first
read, parsed and filtered by performing selections, projections and the
computation of new derived fields. The trace records are then often grouped by
one of their fields, clustering related data together, before a reduce or fold
operation is applied to each group.

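As an illustration of this template (the file path, field names and the
aggregate computed here are hypothetical, not taken from the actual queries),
a query roughly has the following shape:

    import json

    # Read and parse one JSONL trace file into an RDD of dicts.
    lines = sc.textFile("collection_events-000000000000.json.gz")
    records = lines.map(json.loads)

    # Filter, project, group and reduce: here, count events per collection.
    events_per_collection = (records
        .filter(lambda r: "collection_id" in r)        # selection
        .map(lambda r: (r["collection_id"], 1))        # projection / keying
        .reduceByKey(lambda a, b: a + b))              # fold within each group
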
Most of the input data is in JSONL format and adheres to a schema Google
provided in the form of a protocol buffer specification, located here:

https://github.com/google/cluster-data/blob/master/clusterdata_trace_format_v3.proto

One of the main quirks of the traces is that fields that have a "zero" value
(i.e. a value like 0 or the empty string) are often omitted from the JSON
object records. When reading the traces in Apache Spark it is therefore
necessary to check for this possibility and populate those zero fields when
they are omitted.

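A sketch of this step is shown below; the chosen fields and their zero values
are an illustrative subset, not the full schema:

    ZERO_DEFAULTS = {          # illustrative subset of fields and zero values
        "priority": 0,
        "alloc_collection_id": 0,
        "user": "",
    }

    def with_defaults(record):
        """Populate the fields the trace omits because their value is zero."""
        filled = dict(record)
        for field, default in ZERO_DEFAULTS.items():
            filled.setdefault(field, default)
        return filled

    # records is an RDD of dicts obtained from json.loads, as sketched above
    records = records.map(with_defaults)
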
Most queries use only two or three fields of each trace record, while the
original records are often made up of a couple of dozen fields. In order to
save memory during the query, a projection is often applied to the data by
means of a .map() operation over the entire trace set, performed using Spark's
RDD API.

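For example, a query that only needs the collection id, the timestamp and the
event type of each record could project the parsed records down to plain
tuples (the particular selection of fields here is illustrative):

    # Keep only the fields the query actually uses, as a small tuple,
    # instead of carrying the full record around.
    projected = records.map(lambda r: (r["collection_id"],
                                       r["time"],
                                       r["type"]))
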
Another operation that often has to be performed before the Map-Reduce core of
each query is a record filtering step, usually motivated by the presence of
incomplete data (i.e. records containing fields whose values are unknown).
This filtering is performed using the .filter() operation of Spark's RDD API.

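A sketch of such a filtering step, with an illustrative notion of what makes a
record usable:

    # Discard records whose relevant fields are missing or unknown before
    # the Map-Reduce core of the query runs. The condition is illustrative.
    def is_complete(record):
        return (record.get("collection_id") is not None
                and record.get("time") is not None)

    filtered = records.filter(is_complete)
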
The core of each query is often a groupBy() followed by a map() operation on
the aggregated data. The groupBy() groups the set of all records into several
subsets of records that have something in common. Each of these subsets is
then reduced to a single record with a .map() operation. The motivation behind
this computation is often the analysis of a time series built from the traces
of several different programs. This is implemented by groupBy()-ing records by
program id, and then map()-ing each program's set of traces by sorting its
records by time and computing the desired property in the form of a single
record.

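A sketch of this core step is shown below; the grouping key, the sorting field
and the summary computed per program are illustrative:

    # Group traces by program id, sort each group's records by time, and
    # reduce the group to a single summary record.
    def summarize(trace_records):
        events = sorted(trace_records, key=lambda r: r["time"])
        return {
            "first_event": events[0]["time"],
            "last_event": events[-1]["time"],
            "n_events": len(events),
        }

    per_program = (records
        .groupBy(lambda r: r["collection_id"])   # one group per program
        .mapValues(summarize))                   # reduce each group to a record
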
Intermediate results are sometimes saved in the Parquet format, so that they
can be computed once beforehand and then reused by later queries.
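For example (paths and column names are assumptions), an intermediate RDD like
the per-program summary above can be converted to a DataFrame, written out as
Parquet, and read back by a later query:

    from pyspark.sql import Row

    # Write the intermediate result once...
    df = per_program.map(lambda kv: Row(collection_id=kv[0], **kv[1])).toDF()
    df.write.mode("overwrite").parquet("intermediate/per_program.parquet")

    # ...and later queries start from the saved Parquet file instead of
    # recomputing it from the raw traces.
    cached = spark.read.parquet("intermediate/per_program.parquet")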