
Example of training time reduction for a classifier

Technical Articles (decision tree, gradient boosting decision tree)

May dd, 2022
Shoichiro Yokotani, Application Development Expert
AI Platform division

In this column, I present an example of analysis using Frovedis machine learning algorithms.

Supervised machine learning algorithms can be categorized into two types: regression and classification. In this article, we take the latter, classification, as an example and run a sample using the Frovedis learning algorithms. We also compare the training time of the Frovedis and scikit-learn versions.

Classifiers are applied to datasets in which the output y is discrete and there are many input variables. For example, the Credit Card Fraud Detection dataset used in this study has 29 features and a binary output y, Not Fraud or Fraud. We will use this dataset to perform two-class classification with machine learning algorithms.
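As a minimal sketch of how such a dataset can be prepared (the file name creditcard.csv and the column names follow the public Kaggle version of the dataset and are assumptions here):

import pandas as pd
from sklearn.model_selection import train_test_split

# Credit Card Fraud Detection data: "Class" is the binary label
# (0 = Not Fraud, 1 = Fraud); the remaining columns are the input features.
df = pd.read_csv("creditcard.csv")
X = df.drop(columns=["Class"]).values
y = df["Class"].values

# hold out a test set; stratify so the (very rare) fraud cases appear in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)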

Typical machine learning algorithms for classification include logistic regression, linear support vector machines, classification trees, random forests (an ensemble of classification trees), and gradient boosting classification trees. In this column, we focus on two-class classification using classification trees and gradient boosting classification trees.

Gradient boosting decision trees can be used for classification. The scikit-learn version of gradient boosting decision trees does not run in parallel, whereas the Frovedis version builds each decision tree in parallel, so it is expected to reduce training time compared to scikit-learn on very large datasets.
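A minimal sketch of training the Frovedis gradient boosting classifier is shown below. Frovedis exposes a scikit-learn-compatible Python interface; the module paths, the FROVEDIS_SERVER environment variable, and the MPI rank count are assumptions based on a typical Frovedis installation and should be adjusted to your environment.

import os
from frovedis.exrpc.server import FrovedisServer
from frovedis.mllib.ensemble import GradientBoostingClassifier

# start the Frovedis server on the vector engine (8 MPI ranks assumed)
FrovedisServer.initialize("mpirun -np 8 " + os.environ["FROVEDIS_SERVER"])

gbt = GradientBoostingClassifier(n_estimators=100, max_depth=4)
gbt.fit(X_train, y_train)              # training runs on the Frovedis server
print("test accuracy:", gbt.score(X_test, y_test))

gbt.release()                          # free the model held on the Frovedis server
FrovedisServer.shut_down()

Switching back to scikit-learn typically only requires changing the import to sklearn.ensemble.GradientBoostingClassifier; the fit, predict, and score calls stay the same.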


Sample code: credit_classify

In these samples using the Credit Card Fraud Detection dataset, we first run the classification tree with both the Frovedis and scikit-learn versions, and then the analysis with the gradient boosting classification tree. Finally, the classification results are plotted after PCA feature reduction. The training times of the Frovedis and scikit-learn learning algorithms are compared in the table below.

Learning algorithm     Frovedis (sec)   scikit-learn (sec)   Speed-up
Classification tree    0.26             7.81                 x30.0
Gradient boosting      16.84            1398.77              x83.1
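The timings above can be reproduced by wrapping the same fit() call with a timer for each implementation, and the final plot reduces the features to two principal components. The following is a sketch of that pattern for the scikit-learn version; absolute numbers depend on the SX-Aurora TSUBASA and x86 hardware used.

import time
from sklearn.ensemble import GradientBoostingClassifier as SkGradientBoosting
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# time a single training run (the same pattern is applied to the Frovedis version)
t0 = time.time()
sk_gbt = SkGradientBoosting(n_estimators=100, max_depth=4)
sk_gbt.fit(X_train, y_train)
print("scikit-learn training: {:.2f} sec".format(time.time() - t0))

# visualize the predicted classes on the first two principal components
X2 = PCA(n_components=2).fit_transform(X_test)
pred = sk_gbt.predict(X_test)
plt.scatter(X2[pred == 0, 0], X2[pred == 0, 1], s=2, label="Not Fraud")
plt.scatter(X2[pred == 1, 0], X2[pred == 1, 1], s=8, label="Fraud")
plt.legend()
plt.show()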

Hyperparameter optimization and cross-validation on large datasets, which require training repeatedly with different parameter settings, can be very time consuming. When data with new characteristics is frequently added, this time-consuming retraining has to be repeated.
By using Frovedis' parallelized algorithms on SX-Aurora TSUBASA, high-performance trained models can be produced frequently and quickly, reducing the cost of system development and maintenance.
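As a sketch of the parameter-search workflow described above, scikit-learn's GridSearchCV can drive the repeated training; since the Frovedis estimators follow the scikit-learn interface, the same pattern is expected to apply to them as well (that compatibility is an assumption here, and the grid values are illustrative).

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier

# repeated training over a small parameter grid with 3-fold cross-validation;
# this is exactly the kind of workload where faster training pays off directly
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 4]}
search = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=3)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)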