On the Robustness of Active Learning

11 pages • Published: December 10, 2019

Abstract

Active Learning is concerned with the question of how to identify the most useful samples for training a Machine Learning algorithm. When applied correctly, it can be a powerful tool to counteract the immense data requirements of Artificial Neural Networks. However, we find that it is often applied without sufficient care and domain knowledge. As a consequence, unrealistic hopes are raised and transferring experimental results from one dataset to another becomes unnecessarily hard.

In this work we analyse the robustness of different Active Learning methods with respect to classifier capacity, exchangeability and type, as well as hyperparameters and falsely labelled data. Experiments reveal possible biases towards the architecture used for sample selection, resulting in suboptimal performance for other classifiers. We further propose the new "Sum of Squared Logits" method based on the Simpson diversity index and investigate the effect of using the confusion matrix for balancing in sample selection.

Keyphrases: active learning, computer vision, data analytics, hierarchical networks, image classification

In: Diego Calvanese and Luca Iocchi (editors). GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence, vol 65, pages 152-162.
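The abstract does not spell out how the proposed Simpson-index-based score is computed, so the following is only a minimal sketch of one plausible reading: the per-sample acquisition score is the sum of squared class probabilities (the Simpson diversity index of the predictive distribution), and samples with the lowest score, i.e. the most uniform predictions, are queried first. Using softmax probabilities instead of raw logits, the function names, and the selection rule are all assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def sum_of_squared_probs(probs: np.ndarray) -> np.ndarray:
    """Simpson-index-style score per sample: sum_i p_i^2 over the
    predicted class distribution. Peaked (confident) predictions score
    close to 1; near-uniform (uncertain) predictions score close to 1/C.
    (Assumes `probs` holds softmax outputs, shape [n_samples, n_classes].)"""
    return np.sum(np.square(probs), axis=1)

def select_queries(probs: np.ndarray, budget: int) -> np.ndarray:
    """Hypothetical acquisition rule: pick the `budget` unlabelled samples
    with the lowest sum of squared probabilities (most uncertain)."""
    scores = sum_of_squared_probs(probs)
    return np.argsort(scores)[:budget]

# Example: three unlabelled samples, three classes
probs = np.array([
    [0.98, 0.01, 0.01],   # confident   -> score ~0.96
    [0.34, 0.33, 0.33],   # uncertain   -> score ~0.33
    [0.60, 0.30, 0.10],   # in between  -> score ~0.46
])
print(select_queries(probs, budget=1))  # -> [1]
```

Under this reading, the score plays the same role as entropy or least-confidence sampling: it is a scalar uncertainty measure over the softmax output, with the Simpson index merely providing a different weighting of the class probabilities.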