Abstract:
We consider the task of solving generic inverse problems, where one wishes
to determine the hidden parameters of a natural system that will give rise to a
particular set of measurements. Recently, many new approaches based on deep
learning have emerged, yielding promising results. We conceptualize these models
as different schemes for efficiently, but randomly, exploring the space of possible
inverse solutions. As a result, the accuracy of each approach should be evaluated
as a function of time rather than by a single estimated solution, as is often done now.
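For instance (an illustrative sketch, not code from the paper; the function and
variable names are ours), a model that proposes a stream of candidate inverse
solutions can be scored by the error of the best candidate found within a given
evaluation budget:

```python
def error_vs_budget(candidates, error):
    # Best-so-far error after each proposed inverse solution; `error` maps
    # a candidate x to its re-simulation error against the target measurement.
    best, curve = float("inf"), []
    for x in candidates:
        best = min(best, error(x))
        curve.append(best)  # accuracy as a function of time (number of proposals)
    return curve
```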
Using this metric, we compare several state-of-the-art inverse modeling approaches
on four benchmark tasks: two existing tasks, a new 2-dimensional sinusoid task,
and a challenging modern task of meta-material design. Finally, inspired by our
conception of the inverse problem, we explore a simple solution that uses a deep
neural network as a surrogate (i.e., approximation) for the forward model, and
then uses backpropagation with respect to the model input to search for good
inverse solutions. Variations of this approach, which we term the neural adjoint
(NA), have been explored recently on specific problems, and here we evaluate the
approach comprehensively on our benchmark. We find that the addition of a simple,
novel loss term, which we term the boundary loss, dramatically improves the NA’s
performance; consequently, the NA achieves the best (or nearly the best) performance
in all of our benchmark scenarios.
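To make the NA search concrete, below is a minimal illustrative sketch in
PyTorch, not the authors’ released implementation. The names f_hat, x_min,
x_max, steps, and lr are our placeholders: f_hat is a pre-trained surrogate
of the forward model, and x_min/x_max are per-dimension bounds of its training
data. The boundary-loss form shown (a ReLU penalty on the distance outside the
training range) is one plausible reading of the term described above.

```python
import torch

def neural_adjoint(f_hat, y_target, x_min, x_max, steps=300, lr=0.01):
    # Neural-adjoint search: gradient descent on the *input* of a frozen,
    # pre-trained surrogate forward model f_hat (x -> y).
    for p in f_hat.parameters():  # freeze the surrogate; only x is updated
        p.requires_grad_(False)

    # Random initialization inside the surrogate's training domain.
    x = (x_min + (x_max - x_min) * torch.rand_like(x_min)).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)

    mid = (x_max + x_min) / 2   # center and half-width of the
    half = (x_max - x_min) / 2  # training-data range, per dimension

    for _ in range(steps):
        opt.zero_grad()
        fit_loss = torch.mean((f_hat(x) - y_target) ** 2)
        # Boundary loss (one plausible form): penalize coordinates that drift
        # outside the training-data range, where the surrogate is unreliable.
        bnd_loss = torch.relu(torch.abs(x - mid) - half).sum()
        (fit_loss + bnd_loss).backward()
        opt.step()
    return x.detach()
```

Since the abstract frames these models as stochastic explorers, in practice one
would rerun such a search from many random initializations and report the error
of the best solution found as a function of the number of attempts.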