22/11/2021

Quantum Unsupervised Domain Adaptation: Does Entanglement Help?

Nanqing Dong, Michael C. Kampffmeyer, Irina D. Voiculescu

Keywords: quantum machine learning, quantum neural network, unsupervised domain adaptation

Abstract: Fueled by recent breakthroughs in quantum theory and hardware, it has been demonstrated that computer vision (CV) tasks can be solved on quantum computers via the emerging field of quantum machine learning (QML). In contrast to classical computing, quantum computing relies on entanglement between qubits to communicate. While many conventional machine learning (ML) tasks have been well studied, there remain ML problems to be solved on quantum data. One of these challenges is domain shift on quantum data. In this work, we aim to understand the role of entanglement in mitigating domain shift in a quantum domain. We formulate quantum unsupervised domain adaptation (QUDA) for the first time and, to address the domain shift present in quantum data, propose quantum adversarial domain adaptation (QADA). Constrained by the capacity of current quantum devices, we lay the groundwork for a computation-efficient quantum system that implements QADA in a simple manner. QADA integrates two quantum neural networks (QNNs): a quantum classifier and a quantum discriminator. The two QNNs are trained on a hybrid classical-quantum platform with an adversarial strategy. We evaluate QADA on unsupervised domain adaptation tasks between the MNIST and SVHN image datasets under a quantum setting. Our simulated experiments show that QADA can mitigate domain shift on a quantum device, which further validates that entanglement in a quantum circuit model can be used to achieve QUDA.
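The abstract's central resource is entanglement between qubits in a quantum circuit model. As a minimal illustrative sketch (not the paper's QADA implementation), the following NumPy statevector simulation shows how a Hadamard followed by a CNOT entangles two qubits into a Bell state, so that measuring one qubit fixes the other:

```python
import numpy as np

# Illustrative 2-qubit statevector simulation of entanglement.
# Basis ordering: |q0 q1> maps to index 2*q0 + q1.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # control = q0, target = q1

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # superposition on q0
state = CNOT @ state                           # entangle q0 with q1

# Resulting Bell state (|00> + |11>)/sqrt(2): outcomes 00 and 11,
# each with probability 0.5, and never 01 or 10.
probs = np.abs(state) ** 2
print(np.round(probs, 3))
```

In a QNN such as the paper's quantum classifier or discriminator, layers of parameterized single-qubit rotations interleaved with entangling gates like this CNOT are what allow qubits to share information across the circuit.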

Talk and the respective paper are published at BMVC 2021 virtual conference.

