04/08/2021

Learning and testing junta distributions with subcube conditioning

Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten

Keywords:

Abstract: We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution, where a distribution $p$ is a $k$-junta if its probability mass function $p(x)$ depends on a subset of at most $k$ variables. The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning (Bhattacharyya et al. 2018; Canonne et al. 2019). We give two applications:

- An algorithm for learning $k$-junta distributions with $\tilde{O}(k/\epsilon^2) \log n + O(2^k/\epsilon^2)$ subcube conditioning queries, and
- An algorithm for testing $k$-junta distributions with $\tilde{O}((k + \sqrt{n})/\epsilon^2)$ subcube conditioning queries.

All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour et al. 2016.
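To make the query model concrete, here is a minimal Python sketch (not taken from the paper) of how a subcube conditioning oracle can be simulated for a $k$-junta distribution: a query fixes some coordinates of $\{-1,1\}^n$ and the oracle returns a sample conditioned on the resulting subcube. The function names and the toy distribution below are illustrative assumptions, not the authors' construction or algorithm.

```python
import random

# Sketch of a subcube-conditioning oracle for a k-junta distribution on
# {-1,1}^n: the relevant coordinates follow junta_pmf, all other
# coordinates are uniform and independent of them.
def make_junta_oracle(n, junta_coords, junta_pmf):
    """junta_coords: the (at most k) relevant coordinates.
    junta_pmf: dict mapping tuples in {-1,1}^k to probabilities summing to 1."""
    def sample_conditioned(rho):
        # rho: length-n restriction over {-1, 1, '*'}; '*' marks free coordinates.
        assert len(rho) == n
        # Keep only junta settings consistent with the fixed coordinates of rho;
        # sampling proportionally to their probabilities renormalizes them.
        consistent, weights = [], []
        for setting, prob in junta_pmf.items():
            if all(rho[c] in ('*', v) for c, v in zip(junta_coords, setting)):
                consistent.append(setting)
                weights.append(prob)
        if not consistent:
            raise ValueError("subcube has probability zero")
        setting = random.choices(consistent, weights=weights)[0]
        # Assemble the sample: fixed coordinates come from rho, junta coordinates
        # from the drawn setting, and the remaining free coordinates are uniform.
        x = [random.choice((-1, 1)) if b == '*' else b for b in rho]
        for c, v in zip(junta_coords, setting):
            x[c] = v
        return x
    return sample_conditioned

# Toy example: a 2-junta on {-1,1}^5 depending only on coordinates 0 and 3.
oracle = make_junta_oracle(
    n=5,
    junta_coords=[0, 3],
    junta_pmf={(1, 1): 0.4, (1, -1): 0.1, (-1, 1): 0.1, (-1, -1): 0.4},
)
print(oracle(['*', '*', '*', 1, '*']))  # sample conditioned on x_3 = 1
```

In this model, a standard random sample is the special case where every coordinate of the restriction is free; the paper's results quantify how much conditioning on proper subcubes helps for learning and testing juntas.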

The talk and the paper appear at the COLT 2021 virtual conference.
