Abstract:
We present and study approximate notions of dimensional and margin complexity, which require only approximating, rather than exactly representing, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but, unlike the exact variants, are also necessary. They are thus better suited for discussing the limitations of linear and kernel methods.