Abstract:
Quantum Neural Networks (QNNs), also known as variational quantum circuits,
are important quantum applications, both because they hold promise similar to
that of classical neural networks and because they are feasible to implement
on near-term noisy intermediate-scale quantum (NISQ) devices. However,
training QNNs is challenging and much less well understood. We conduct a
quantitative investigation of the loss landscapes of QNNs and
identify a class of simple yet extremely hard-to-train QNN instances.
Specifically, we show that for typical under-parameterized QNNs,
there exists a dataset inducing a loss function whose number of spurious
local minima grows exponentially with the number of parameters.
Moreover, we show the optimality of our construction by providing an
almost-matching upper bound on this dependence.
Whereas local minima in classical neural networks arise from non-linear
activations, in quantum neural networks they emerge from quantum
interference.
Finally, we empirically confirm that our constructions are
indeed hard instances in practice for typical gradient-based optimizers,
demonstrating the practical relevance of our findings.
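As a hedged illustration only (this is a toy separable landscape, not the paper's actual QNN construction): if each parameter contributes a periodic profile g with two local minima per period, as parameter-shift-style trigonometric losses can, then the total loss over L parameters has 2^L spurious local minima. The profile g below and all constants are illustrative choices, not taken from the paper.

```python
import numpy as np

def g(t):
    # Toy double-well profile on the circle: strict local minima
    # occur where cos(t) = -1/2.4, i.e. at two angles per period.
    return np.cos(t) + 0.6 * np.cos(2 * t)

L, n = 3, 60                         # 3 parameters, 60 grid points per axis
axis = np.linspace(0, 2 * np.pi, n, endpoint=False)
grids = np.meshgrid(*[axis] * L, indexing="ij")
f = sum(g(t) for t in grids)         # separable loss f(theta) = sum_i g(theta_i)

# Count strict local minima w.r.t. axis-aligned neighbors (periodic wrap).
is_min = np.ones_like(f, dtype=bool)
for ax in range(L):
    is_min &= f < np.roll(f, 1, axis=ax)
    is_min &= f < np.roll(f, -1, axis=ax)

print(int(is_min.sum()))             # 2**L = 8 local minima
```

Doubling L from 3 to 6 would raise the count from 8 to 64, mirroring the exponential dependence on the number of parameters stated in the abstract.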