04/08/2021

Open Problem: Are all VC classes learnable with computable learners?

Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner

Abstract: The independence from set theory of certain learnability results [Ben-David et al. 2017, 2019] motivated us to consider learnability by algorithms with computable output predictors (both learners and predictors are then representable as finite objects). In [Agarwal et al. 2020] we introduced the notion of CPAC learnability by adding basic computability requirements to the PAC learning framework. As a first step, we showed there that, in contrast with the well-studied case of learnability without such restrictions, in this framework proper learnability of hypothesis classes is not implied by the finiteness of their VC-dimension. A major remaining open question is whether a similar result holds even for improper learning. Namely, does there exist a computable concept class of finite VC-dimension (consisting of computable classifiers) such that no computable learner can PAC learn it (even if the learner is not restricted to output a hypothesis that is a member of the class)?
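The abstract's central quantity, the VC-dimension, can be checked by brute force for small finite classes. The sketch below is not from the paper; it is an illustrative (and hypothetically named) implementation of the shattering definition: a class shatters a point set if every binary labeling of the points is realized by some hypothesis.

```python
from itertools import combinations

def shatters(hypotheses, points):
    # A class shatters a point set if every 0/1 labeling of the
    # points is realized by some hypothesis in the class.
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    # Largest d such that some d-element subset of the domain is shattered.
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(hypotheses, s) for s in combinations(domain, k)):
            d = k
    return d

# Example: threshold classifiers h_t(x) = 1 iff x >= t on the domain
# {0, 1, 2, 3}. Any single point is shattered, but for two points x < y
# no threshold realizes the labeling (1, 0), so the VC-dimension is 1.
thresholds = [lambda x, t=t: int(x >= t) for t in range(5)]
```

For example, `vc_dimension(thresholds, [0, 1, 2, 3])` returns 1, matching the classical fact that one-dimensional thresholds have VC-dimension 1. The open problem asks about classes where this quantity is finite yet no computable learner succeeds.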

Talk and the respective paper are published at the COLT 2021 virtual conference.
