Accelerated Machine Learning at the Edge for Low Latency Inference

Project Start Date
Project End Date


You will explore how machine learning accelerators on FPGAs can be used for low-latency inference, with particular attention to a connected setting in which clients send data over the network to an inference engine implemented on the FPGA. You should have some familiarity with FPGAs and FPGA design tools. You will use existing tools to design the ML accelerator, which must then be integrated so that it can receive network data. You will explore alternative approaches for passing network packet data into the accelerator, with the aim of demonstrating the benefits of direct accelerator ingestion.
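As a starting point for the client side, the sketch below shows one way a client might serialize an inference request before sending it to the FPGA endpoint. The wire format (a 4-byte little-endian feature count followed by float32 features) and the `pack_request` helper are illustrative assumptions, not part of any specific accelerator toolchain; the real format will depend on how the accelerator's ingestion path is designed.

```python
import struct

def pack_request(features):
    # Hypothetical wire format: 4-byte little-endian count, then
    # each feature as a little-endian float32. This is an assumed
    # format for illustration only.
    header = struct.pack("<I", len(features))
    body = struct.pack(f"<{len(features)}f", *features)
    return header + body

payload = pack_request([0.1, 0.2, 0.3])
# 4-byte header + 3 * 4-byte floats = 16 bytes
print(len(payload))
```

A payload like this could then be sent to the inference engine over UDP with `socket.sendto`, keeping per-request overhead small, which matters when the goal is to measure end-to-end latency.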

Project Deliverables: 

A small testbed for edge acceleration of ML workloads implemented on an FPGA, used to run experiments that demonstrate the benefits of this approach in terms of latency and power consumption.