Published: 
By High Performance Low Power Lab (HPLP - Stan)
RISCV

Vaibhav Verma gave a talk titled "AI-RISC - Custom Extensions to RISC-V for Energy-efficient AI Inference at the Edge of IoT" at the RISC-V Summit 2021, which was co-located with DAC 2021. You can listen to the talk here.

Numerous hardware accelerators have been proposed to meet the performance and energy-efficiency requirements of AI applications. However, these accelerators have been developed in separate silos, with little to no infrastructure for integrating them into the top-level system stack. We present AI-RISC as a solution to bridge this research gap. AI-RISC is a hardware/software codesign methodology in which AI accelerators are integrated into the RISC-V processor pipeline at a fine granularity and treated as regular functional units during instruction execution. AI-RISC also extends the RISC-V ISA with custom instructions that directly target these AI functional units (AFUs), resulting in a tight integration of AI accelerators with the processor. AI-RISC adopts a two-step compilation strategy in which the open-source TVM compiler serves as the front end and a custom LLVM-based C compiler serves as the back end, along with complete SDK generation. AI-RISC enables a RISC-V-based processor that supports both AI and non-AI workloads for edge applications, flexibly hot-swaps AFUs when better hardware becomes available, and scales with new instructions as AI algorithms evolve.
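To give a rough feel for what "custom instructions that directly target AFUs" can look like at the software level, here is a minimal, hypothetical sketch in C. The opcode (custom-0 = 0x0b), the funct3/funct7 values, and the packed int8 dot-product semantics are assumptions made purely for illustration; the actual AI-RISC encodings and AFU operations are defined by its ISA extension and are normally emitted by the TVM + LLVM toolchain rather than written by hand.

```c
#include <stdint.h>

/* Assumed AFU operation: rd = dot product of the four packed int8 lanes
 * in rs1 and rs2. Encoded as an R-type instruction on the custom-0
 * opcode using the GNU assembler's .insn directive. */
static inline int32_t afu_dot4(uint32_t packed_a, uint32_t packed_b)
{
    int32_t rd;
    __asm__ volatile (
        ".insn r 0x0b, 0x0, 0x0, %0, %1, %2"   /* hypothetical encoding */
        : "=r"(rd)
        : "r"(packed_a), "r"(packed_b));
    return rd;
}

/* Example use: accumulate an int8 dot product one 32-bit word at a time. */
int32_t dot_int8(const uint32_t *a, const uint32_t *b, int n_words)
{
    int32_t acc = 0;
    for (int i = 0; i < n_words; ++i)
        acc += afu_dot4(a[i], b[i]);
    return acc;
}
```

In an AI-RISC-style flow, a programmer would typically not write such intrinsics by hand: the point of the two-step TVM-plus-LLVM compilation strategy is to lower high-level AI operators onto these AFU instructions automatically.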