Proposals for implementing high-performance SVMs on hardware using reconfigurable computing
SGD, SVM, FPGA, Reconfigurable Computing
Stochastic Gradient Descent (SGD) algorithms scale well and are therefore good candidates for training machine learning models in applications such as massive data mining. Nevertheless, even with SGD, training time can become long depending on the data set being analyzed. For this reason, accelerators such as Field Programmable Gate Arrays (FPGAs) are employed. In this work, we explore FPGA hardware implementations of a fully parallel Support Vector Machine (SVM) trained with stochastic gradient descent. We also study hardware implementation techniques such as stochastic computing, quantization, and binarization, analyzing the effect of each on throughput, power consumption, hardware resource usage, and statistical accuracy. One of the proposed FPGA implementations achieves speedups of up to 319x over implementations found in the literature while requiring fewer hardware resources. The results show that the proposed architecture is a viable solution for problems with high computational demands, such as those present in big data analysis.
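To make the training scheme concrete, the sketch below shows the software analogue of what the paper accelerates in hardware: a linear SVM trained by SGD on the hinge loss with L2 regularization, followed by a simple fixed-point weight quantization of the kind the hardware variants would rely on. This is a minimal illustrative sketch, not the paper's implementation; the function names and hyperparameters (lr, lam, n_epochs, frac_bits) are assumptions.

```python
import numpy as np

def sgd_svm_train(X, y, lr=0.01, lam=1e-4, n_epochs=10, seed=0):
    """Linear SVM via SGD on hinge loss + L2 penalty.
    X: (n_samples, n_features); y: labels in {-1, +1}.
    Hyperparameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_epochs):
        for i in rng.permutation(n):  # one sample per update (stochastic)
            margin = y[i] * (X[i] @ w + b)
            # Subgradient step on lam/2 * ||w||^2 + max(0, 1 - margin)
            if margin < 1.0:
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

def quantize_fixed_point(w, frac_bits=8):
    """Hypothetical fixed-point quantization: snap each weight
    to the nearest multiple of 2**-frac_bits."""
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale

def svm_predict(X, w, b):
    """Classify samples by the sign of the decision function."""
    return np.sign(X @ w + b)
```

In a hardware setting, the per-sample update loop is what gets parallelized across processing elements, and quantization (or binarization, the 1-bit extreme of the same idea) reduces the datapath width of each multiply-accumulate.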