13/04/2021

Predictive power of nearest neighbors algorithm under random perturbation

Yue Xing, Qifan Song, Guang Cheng

Keywords:

Abstract: This work investigates the predictive performance of the classical k-Nearest Neighbors (k-NN) algorithm when the testing data are corrupted by random perturbation. We carefully characterize the impact of the corruption level on the asymptotic regret and reveal a phase-transition phenomenon: when the corruption level ω of the random perturbation is below a critical order (i.e., the small-ω regime), the asymptotic regret remains the same; when it is beyond that order (i.e., the large-ω regime), the asymptotic regret deteriorates polynomially. More importantly, the regret of the k-NN classifier heuristically matches the rate of the minimax regret for randomly perturbed testing data, which implies the strong robustness of k-NN against random perturbation of the testing data. In fact, we show that the classical k-NN achieves predictive performance no worse than that of NN classifiers trained via the popular noise-injection strategy. Our numerical experiments also illustrate that combining a k-NN component with modern learning algorithms inherits the strong robustness of k-NN. As a technical by-product, we prove that, under different model assumptions, the pre-processed 1-NN proposed in Xue and Kpotufe (2017) achieves at most a sub-optimal rate when the data dimension d > 4, even if k is chosen optimally in the pre-processing step.
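To make the setting concrete, here is a minimal sketch of the kind of evaluation the abstract describes: a classical k-NN classifier fit on clean training data, a noise-injection baseline fit on perturbed copies of the training data, and both evaluated on testing data corrupted at level ω. Everything below (the simulated distribution, the choice of k, the ω grid, the injection parameters) is illustrative and not the authors' actual experimental setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy binary classification problem in d dimensions (illustrative only).
d, n_train, n_test = 2, 2000, 1000
X_train = rng.uniform(-1, 1, size=(n_train, d))
y_train = (X_train.sum(axis=1) + 0.3 * rng.standard_normal(n_train) > 0).astype(int)
X_test = rng.uniform(-1, 1, size=(n_test, d))
y_test = (X_test.sum(axis=1) > 0).astype(int)

# Classical k-NN fit on clean training data.
knn = KNeighborsClassifier(n_neighbors=25).fit(X_train, y_train)

# Noise-injection baseline: 1-NN fit on randomly perturbed copies of the
# training data (a common augmentation strategy; parameters are arbitrary).
omega_train, n_copies = 0.1, 5
X_aug = np.vstack([X_train + omega_train * rng.standard_normal(X_train.shape)
                   for _ in range(n_copies)])
nn_inj = KNeighborsClassifier(n_neighbors=1).fit(X_aug, np.tile(y_train, n_copies))

# Evaluate both classifiers as the test-time corruption level omega grows.
for omega in [0.0, 0.01, 0.05, 0.1, 0.3, 0.5]:
    X_pert = X_test + omega * rng.standard_normal(X_test.shape)
    print(f"omega={omega:4.2f}  k-NN acc={knn.score(X_pert, y_test):.3f}  "
          f"noise-injected 1-NN acc={nn_inj.score(X_pert, y_test):.3f}")
```

On such a toy problem, accuracy typically stays flat for small ω and then degrades as ω grows, mirroring the phase transition described in the abstract.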

[Embedded video] The talk and the respective paper are published at the AISTATS 2021 virtual conference.

