commit 5f30b3b71b
parent a6662bc950

done

2 changed files with 11 additions and 2 deletions
@@ -419,5 +419,14 @@ test is inconclusive.
 ## Practical Usefulness
 
-Discuss the practical usefulness of the obtained classifiers in a
-realistic bug prediction scenario (1 paragraph).
+The evaluation shows that all classifiers except the Naive Bayes classifier outperform the biased classifier in overall
+performance (represented by the F1 score). Since the evaluation was not performed on a dedicated test set, all classifiers may be
+subject to training bias, which might yield better metrics than a similar evaluation performed on completely new data. However,
+given the evaluation sample size (i.e. the number of training runs), this effect is minimal.
+
+Given this premise, I can say with reasonable confidence that the *DT*, *MLP*, *RP* and *SVP* classifiers would be useful for
+predicting potential bugs in the Google JSComp project, i.e. the source of the dataset used. Given the literature presented in
+the lecture, it is not certain whether the same trained classifier would yield acceptable results for source code from another
+project, because coding conventions, the bug tracking process, or simply the definition of what constitutes a bug may vary from
+project to project. A solution to this problem might be to train the classifiers on a dataset from the project where bug
+prediction is needed.
 
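As an aside on the training-bias caveat in the added paragraph: below is a minimal sketch of how the same evaluation could be repeated on a dedicated held-out test set, assuming scikit-learn-style classifiers. The synthetic data is a hypothetical stand-in for the real JSComp feature matrix and bug labels, which are not part of this commit; F1 is the usual harmonic mean of precision and recall.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Imbalanced synthetic data standing in for the real per-class code
# metrics (features) and bug labels of the JSComp dataset.
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.8, 0.2], random_state=0)

# Dedicated held-out split: the test samples are never seen during
# training, so the F1 score below is free of the training bias
# discussed above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# F1 on unseen data estimates real predictive usefulness rather than
# how well the classifier fit its own training runs.
print("held-out F1:", f1_score(y_test, clf.predict(X_test)))
```

Stratifying the split preserves the typically small fraction of buggy samples in both halves, which keeps the F1 estimate meaningful on imbalanced bug data.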
BIN report/main.pdf
Binary file not shown.