Abstract:
RISC-V is an open-source instruction set architecture that is now being examined as a universal standard for unifying heterogeneous platforms. However, current research focuses primarily on the design and fabrication of RISC-V-based general-purpose processors, even though, in the era of the Internet of Things (IoT), the fusion of heterogeneous platforms must also take application-specific processors into account. Accordingly, this paper proposes a collaborative RISC-V multi-core system for Deep Neural Network (DNN) accelerators. To the best of our knowledge, this is the first work to formulate a multi-core scheduling architecture for DNN acceleration and to explore RISC-V as the ISA of a multi-core system that bridges the gap between memory and the DNN processor in order to increase overall system throughput. Our experiments implement a four-stage design of the RISC-V core and further show that a multi-core design, combined with an appropriate scheduling algorithm, effectively reduces runtime and raises throughput. Moreover, the experiments offer constructive guidance on the ideal ratio of cores to Process Engines (PEs), which significantly assists in building highly efficient AI Systems-on-Chip (SoCs) in resource-aware situations.