Comments on "Researcher Bias: The Use of Machine Learning in Software Defect Prediction"

Authors: Chakkrit Tantithamthavorn, Shane McIntosh, Ahmed E. Hassan, and Kenichi Matsumoto

Venue: IEEE Transactions on Software Engineering (TSE), Vol. 42, No. 11, pp. 1092-1094, 2016

Year: 2016

Abstract: Shepperd et al. find that the reported performance of a defect prediction model shares a strong relationship with the group of researchers who construct the models. In this paper, we perform an alternative investigation of Shepperd et al.'s data. We observe that (a) research group shares a strong association with other explanatory variables (i.e., the dataset and metric families that are used to build a model); (b) the strong association among these explanatory variables makes it difficult to discern the impact of the research group on model performance; and (c) after mitigating the impact of this strong association, we find that the research group has a smaller impact than the metric family. These observations lead us to conclude that the relationship between the researcher group and the performance of a defect prediction model is more likely due to the tendency of researchers to reuse experimental components (e.g., datasets and metrics). We recommend that researchers experiment with a broader selection of datasets and metrics to combat potential bias in their results.
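To make the analysis described in the abstract concrete, the following is a minimal sketch of the two checks it mentions: measuring how strongly the categorical explanatory variables (research group, dataset family, metric family) are associated with one another, and partitioning the variance in reported model performance among those variables. This is not the authors' own analysis code; the file name shepperd_data.csv, the column names, and the specific choices of Cramér's V and an OLS/Type II ANOVA partitioning are illustrative assumptions.

# A rough illustration on a flat CSV with one row per reported defect prediction
# result and hypothetical column names; not the paper's actual analysis scripts.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm
import statsmodels.formula.api as smf

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Strength of association between two categorical variables (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

df = pd.read_csv("shepperd_data.csv")  # placeholder path, not from the paper

# Observation (a): how strongly is the research group tied to the other explanatory variables?
for other in ("DatasetFamily", "MetricFamily"):
    print(other, round(cramers_v(df["ResearcherGroup"], df[other]), 2))

# Observation (c): after modelling all three factors together, how much variance in the
# reported performance (assumed here to be MCC) does each factor explain?
model = smf.ols("MCC ~ C(ResearcherGroup) + C(DatasetFamily) + C(MetricFamily)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table of per-factor sums of squares

If research group is strongly associated with dataset and metric family (high Cramér's V), its apparent effect on performance is hard to separate from theirs, which is the collinearity concern the abstract raises.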

BibTeX:

@article{tantithamthavorn2016comments,
    author  = {Chakkrit Tantithamthavorn and Shane McIntosh and Ahmed E. Hassan and Kenichi Matsumoto},
    title   = {Comments on ``Researcher Bias: The Use of Machine Learning in Software Defect Prediction''},
    journal = {IEEE Transactions on Software Engineering},
    volume  = {42},
    number  = {11},
    pages   = {1092--1094},
    year    = {2016}
}

Plain Text:

Chakkrit Tantithamthavorn, Shane McIntosh, Ahmed E. Hassan, and Kenichi Matsumoto, "Comments on 'Researcher Bias: The Use of Machine Learning in Software Defect Prediction'," IEEE Transactions on Software Engineering, vol. 42, no. 11, pp. 1092-1094, 2016.