Hi Xilinx/Avnet people!
Could someone please provide us with a configurable QNN/FINN IP block to use in our contest designs? I'd rather not have to regenerate the IP cores through HLS when the NN projects already seem to be working quite well. This is especially cumbersome because building the projects from GitHub currently requires a Linux system.
I want to be able to drop in a Convolutional Neural Network IP and select the configuration for the network topology (variable numbers of convolutional, max-pool, and fully connected layers), the quantization level (cnvW1A1, cnvW1A2, cnvW2A2), and the array sizes. Then I could train my own networks and load the weights using the Pynq drivers.
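To make the request concrete, here is a minimal sketch of the kind of configuration surface I have in mind. None of these class or parameter names exist in FINN or BNN-PYNQ today; they are purely illustrative of the "pick a topology and quantization, then load weights" workflow:

```python
from dataclasses import dataclass

# Quantization levels from the existing BNN-PYNQ-style naming:
# weight bits / activation bits.
QUANT_MODES = ("cnvW1A1", "cnvW1A2", "cnvW2A2")

@dataclass
class CnnIpConfig:
    """Hypothetical config for a drop-in, parameterizable CNN IP block."""
    conv_layers: int = 6               # number of convolutional layers
    max_pool_layers: int = 2           # number of max-pool layers
    fc_layers: int = 3                 # number of fully connected layers
    quant_mode: str = "cnvW1A1"        # quantization level
    input_shape: tuple = (3, 32, 32)   # channels, height, width

    def __post_init__(self):
        # Reject quantization modes the IP doesn't support.
        if self.quant_mode not in QUANT_MODES:
            raise ValueError(f"unsupported quantization: {self.quant_mode}")

# Imagined flow: choose a topology, hand the config to a PYNQ driver,
# then load weights exported from your own training run.
cfg = CnnIpConfig(conv_layers=4, quant_mode="cnvW1A2")
print(cfg.quant_mode)  # -> cnvW1A2
```

The point is that everything a user should need to touch fits in a handful of Python-level parameters, with no HLS regeneration in the loop.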
Yes, I understand this is a bit of work (although I think the hard work has already been done), but letting us use neural networks nearly out of the box would really help increase the adoption rate of FPGA boards. Otherwise, the amount of knowledge needed to get even a simple project up and running is just too much.
What I mean is proficiency in:
- Deep Learning
- HLS, to regenerate the IP cores
- A Linux build environment
With a drop-in DNN IP, people who know just Python, deep learning, and Vivado could start working on accelerated designs, which greatly expands the talent pool.