Abstract:
We consider the problem of learning Markov Random Fields (including
the prototypical example, the Ising model) under the constraint of
differential privacy. Our learning goals include both
\emph{structure learning}, in which we try to estimate the
underlying graph structure of the model, and the harder goal of
\emph{parameter learning}, in which we additionally estimate the
parameter on each edge. We provide algorithms and lower bounds for
both problems under a variety of privacy constraints --
namely pure, concentrated, and approximate differential privacy.
While both learning goals enjoy roughly the same complexity in the
non-private setting, we show that this is not the case under differential
privacy. In particular, only structure learning under approximate
differential privacy maintains the non-private logarithmic
dependence on the dimensionality of the data, while a change in
either the learning goal or the privacy notion would necessitate a
polynomial dependence. Consequently, the privacy constraint imposes
a strong separation between these two learning problems in the
high-dimensional data regime.