13/04/2021

Unconstrained MAP inference, exponentiated determinantal point processes, and exponential inapproximability

Naoto Ohsaka

Keywords:

Abstract: We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter p. We prove the following complexity-theoretic hardness results that explain the difficulty in approximating unconstrained MAP inference and the normalizing constant for E-DPPs. (1) Unconstrained MAP inference for an n \times n matrix is NP-hard to approximate within a 2^{\beta n}-factor, where \beta = 10^{-10^{13}}. This result improves upon a (9/8-\epsilon)-factor inapproximability given by Kulesza and Taskar (2012). (2) The normalizing constant for E-DPPs of any (fixed) constant exponent p \geq \beta^{-1} = 10^{10^{13}} is NP-hard to approximate within a 2^{\beta pn}-factor. This gives a(nother) negative answer to open questions posed by Kulesza and Taskar (2012); Ohsaka and Matsuoka (2020).
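To make the two quantities in the abstract concrete, here is a minimal brute-force sketch (not from the paper; the kernel, function name, and exponent p are illustrative assumptions). It enumerates all principal submatrices of a small PSD matrix L to compute the unconstrained MAP value max_S det(L_S) and the E-DPP normalizing constant Z_p = sum_S det(L_S)^p; the exponential enumeration is exactly what the hardness results suggest cannot be avoided, even approximately.

```python
# Brute-force sketch (illustrative only, not the paper's method) of:
#   * unconstrained MAP inference: max over subsets S of det(L_S)
#   * the E-DPP normalizing constant: Z_p = sum over subsets S of det(L_S)^p
from itertools import combinations
import numpy as np

def brute_force_map_and_Zp(L: np.ndarray, p: float = 2.0):
    n = L.shape[0]
    best_set, best_det = (), 1.0   # det of the empty principal submatrix is 1
    Z_p = 0.0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            det_S = np.linalg.det(L[np.ix_(S, S)]) if S else 1.0
            if det_S > best_det:
                best_set, best_det = S, det_S
            Z_p += det_S ** p
    return best_set, best_det, Z_p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.normal(size=(4, 4))
    L = B @ B.T                      # random PSD kernel, for illustration only
    S, d, Z = brute_force_map_and_Zp(L, p=2.0)
    print("MAP subset:", S, "det:", round(d, 4), "Z_p:", round(Z, 4))
```

This enumeration visits all 2^n subsets, so it is only feasible for tiny n; the abstract's results say that even a 2^{\beta n}-factor approximation of the MAP value, or a 2^{\beta p n}-factor approximation of Z_p for constant p >= \beta^{-1}, is NP-hard.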

