Abstract:
Neuromorphic computing offers the potential for an unprecedented level of parallelism at a local scale. Although in their infancy, current first-generation neuromorphic processing units (NPUs) deliver as many as 128K artificial neurons in a package smaller than current laptop CPUs while demanding significantly less energy. Neuromorphic systems consisting of such NPUs and offering a total of 100 million neurons are anticipated in 2020. NPUs were envisioned to accelerate machine learning, and designing neuromorphic algorithms that leverage the benefits of NPUs in other domains remains an open challenge. We design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are packet-passing algorithms relying on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is paramount, and we prove a polynomial-factor advantage even when we assume an NPU with a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
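To make the packet-passing idea concrete, the following is a minimal sketch, not the paper's construction: it assumes a spike-timing encoding in which each vertex is a neuron, each edge is a synapse whose delay equals its (positive integer) weight, and the time of the first spike arriving at a neuron equals its shortest-path distance from the source. The function name `spiking_sssp` and all parameters are illustrative.

```python
# Illustrative only: a discrete-time simulation of spike propagation, where
# synaptic delays stand in for edge weights. This is an assumed encoding for
# exposition, not the NPU algorithm analyzed in the paper.
from collections import defaultdict

def spiking_sssp(edges, source):
    """Single-source shortest paths via spike propagation.

    edges: iterable of (u, v, w) triples with positive integer weights w.
    Returns {vertex: first spike time}, which equals the shortest-path
    distance from `source` for every reachable vertex.
    """
    adj = defaultdict(list)          # outgoing synapses per neuron
    total_weight = 0
    for u, v, w in edges:
        adj[u].append((v, w))
        total_weight += w
    max_time = total_weight          # crude upper bound on any shortest path

    pending = defaultdict(set)       # spikes scheduled to arrive at each step
    pending[0].add(source)
    first_spike = {}                 # neuron -> time of its first (only) spike

    for t in range(max_time + 1):
        for v in pending.pop(t, ()):
            if v in first_spike:
                continue             # neuron already fired; ignore later arrivals
            first_spike[v] = t       # fire once, at the first arriving spike
            for nxt, w in adj[v]:    # forward the spike along each synapse
                pending[t + w].add(nxt)
    return first_spike

if __name__ == "__main__":
    graph = [(0, 1, 2), (0, 2, 5), (1, 2, 1), (2, 3, 2)]
    print(spiking_sssp(graph, source=0))   # {0: 0, 1: 2, 2: 3, 3: 5}
```

In this sketch, computation is carried entirely by the movement of spikes through the network rather than by a central processor, which is the sense in which the abstract describes the algorithms as relying on data movement for computation.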