JUNE 18–22, 2017

Presentation Details

Name: Incremental SVM on Intel Xeon Phi Processors
Time: Wednesday, June 21, 2017
08:30 am - 09:00 am
Room:   Analog 1+2
Messe Frankfurt
Breaks:   08:00 am - 09:00 am Welcome Coffee
Speaker:   Yida Wang, Intel
Abstract:   Support vector machines (SVMs) are conventionally batch trained. Such implementations can be very inefficient for online streaming applications demanding real-time guarantees, as the inclusion of each new data point requires retraining the model from scratch. This paper focuses on a high-performance implementation of an accurate incremental SVM algorithm on Intel Xeon Phi processors that efficiently updates the trained SVM model with streaming data. We propose a novel cycle break heuristic to fix an inherent drawback of the algorithm that leads to a deadlock scenario, which is unacceptable in real-world applications. We further employ intelligent caching of dynamically changing data and various programming optimizations to speed up the incremental SVM algorithm. Experiments on a number of real-world datasets show that our implementation achieves high performance on Intel Xeon Phi processors (1.3-2.1x faster than Intel Xeon processors) and is up to 2.1x faster than existing high-performance incremental algorithms while achieving comparable accuracy.
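To illustrate the streaming setting the abstract describes, the sketch below contrasts incremental updates with batch retraining using a Pegasos-style online linear-SVM step. This is a stand-in for exposition only, not the paper's incremental algorithm or its cycle break heuristic: each arriving point updates the weight vector in O(d) time instead of triggering a full retrain. All names here (`pegasos_update`, the toy stream) are illustrative.

```python
# Hedged sketch: Pegasos-style online hinge-loss update, used only to
# illustrate per-point incremental training; NOT the paper's algorithm.
import random

def pegasos_update(w, x, y, t, lam=0.01):
    """One streaming update: shrink w, then take a hinge-loss step if
    (x, y) violates the margin. t is the 1-based step count."""
    eta = 1.0 / (lam * t)                               # decaying step size
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    w = [(1.0 - eta * lam) * wi for wi in w]            # regularization shrink
    if margin < 1.0:                                    # inside the margin
        w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Toy linearly separable stream: label is the sign of the first coordinate.
random.seed(0)
points = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(500)]
stream = [(x, 1 if x[0] > 0 else -1) for x in points]

w = [0.0, 0.0]
for t, (x, y) in enumerate(stream, start=1):
    w = pegasos_update(w, x, y, t)                      # one cheap update per point

correct = sum(1 for x, y in stream
              if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) == y)
print("accuracy:", correct / len(stream))
```

The contrast with batch training is the inner loop: a batch SVM would re-solve the whole optimization over all points seen so far at every arrival, while the incremental formulation touches only the current model state, which is what makes real-time guarantees feasible on streaming data.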