Kernel techniques have long been used in SVMs to handle linearly inseparable
problems by transforming data to a high-dimensional space, but training and testing
large data sets with kernels is often time-consuming. In contrast, linear SVMs
without kernels can be trained and tested efficiently on much larger data sets. In this work, we apply
fast linear-SVM methods to the explicit form of polynomially mapped data and
investigate implementation issues. The approach enjoys fast training and testing,
and can sometimes achieve accuracy close to that of using highly nonlinear kernels.
Empirical experiments show that the proposed method is useful for certain
large-scale data sets. We successfully apply the proposed method to a natural
language processing (NLP) application, improving the testing accuracy while
meeting training/testing speed requirements.
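The core idea above can be sketched in a few lines: map each input through an explicit low-degree polynomial feature mapping, then train an ordinary linear SVM on the mapped data. The following is a minimal illustrative sketch, not the paper's implementation; the degree-2 mapping, the simple subgradient trainer, and the XOR-style toy data are all assumptions chosen to keep the example self-contained.

```python
import math

def poly2_map(x):
    """Explicit degree-2 polynomial mapping phi(x) for a 2-D input,
    chosen so that phi(u) . phi(v) = (1 + u . v)^2."""
    x1, x2 = x
    s = math.sqrt(2.0)
    return [1.0, s * x1, s * x2, x1 * x1, x2 * x2, s * x1 * x2]

def train_linear_svm(X, y, lam=0.01, eta=0.1, epochs=200):
    """Toy linear SVM: full-batch subgradient descent on the
    L2-regularized hinge loss (illustrative, not LIBLINEAR)."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [lam * wi for wi in w]  # gradient of the regularizer
        for xi, yi in zip(X, y):
            # hinge-loss subgradient -y*x for margin violations
            if yi * sum(wi * xij for wi, xij in zip(w, xi)) < 1.0:
                grad = [g - yi * xij for g, xij in zip(grad, xi)]
        w = [wi - eta * g for wi, g in zip(w, grad)]
    return w

# XOR-style data: linearly inseparable in the original 2-D space,
# but separable after the explicit degree-2 mapping.
X_raw = [(-1, -1), (1, 1), (-1, 1), (1, -1)]
y     = [1, 1, -1, -1]

X_mapped = [poly2_map(x) for x in X_raw]
w = train_linear_svm(X_mapped, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
         for x in X_mapped]
print(preds)
```

Because the mapping is explicit, both training and prediction stay in the fast linear-SVM regime; the cost is that the mapped dimensionality grows with the polynomial degree, which is why low degrees are attractive.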