Abstract:
Text documents are often mapped to vectors of binary values, where 1 indicates the presence of a word and 0 indicates its absence. These vectors are then used to train predictive models. In tree-based ensemble models, predictions from some decision trees may be based purely on absent words. Such predictions should be trusted less, as the absence of a word can be interpreted in multiple ways. In this work, we propose to improve the comprehensibility and accuracy of ensemble models by distinguishing between word presence and absence. The proposed method weights predictions according to word presence. Experimental results on 35 real text datasets indicate that our method outperforms state-of-the-art ensemble methods on a variety of text classification tasks.
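To make the encoding concrete, the sketch below illustrates the binary bag-of-words representation referred to in the abstract, using scikit-learn's CountVectorizer with binary=True. It is an assumed, minimal illustration of the input representation only, not the authors' implementation or weighting method.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog barked"]

# binary=True records only word presence (1) or absence (0), not counts
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())
print(X.toarray())
# Each row is a document; 1 marks a word that occurs in it, 0 marks an absent word.
```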