Supervised models
This page gathers pre-trained supervised models for several text classification datasets.
Description
The regular models are trained using the procedure described in [1]. They can be reproduced with the classification-results.sh script in our GitHub repository. The quantized models are built using the same supervised settings and adding the following flags to the quantize subcommand:
```
-qnorm -retrain -cutoff 100000
```
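As a concrete illustration, the full sequence looks roughly like the sketch below. The file names (ag_news.train, ag_news) are placeholders, and the training command is shown with default hyper-parameters for brevity; the released models use the settings from classification-results.sh as described in [1].

```sh
# train a regular supervised model (placeholder file names, default hyper-parameters)
./fasttext supervised -input ag_news.train -output ag_news

# compress it: norm-aware quantization, retrain on the training data,
# and keep only the 100k most important features -> produces ag_news.ftz
./fasttext quantize -input ag_news.train -output ag_news -qnorm -retrain -cutoff 100000
```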
Table of models
Each entry gives the test accuracy and the size of the model. You can click on a table cell to download the corresponding model; an example of evaluating a downloaded model is given after the tables.
| dataset | ag news | amazon review full | amazon review polarity | dbpedia |
|---|---|---|---|---|
| regular | 0.924 / 387MB | 0.603 / 462MB | 0.946 / 471MB | 0.986 / 427MB |
| compressed | 0.92 / 1.6MB | 0.599 / 1.6MB | 0.93 / 1.6MB | 0.984 / 1.7MB |
| dataset | sogou news | yahoo answers | yelp review polarity | yelp review full |
|---|---|---|---|---|
| regular | 0.969 / 402MB | 0.724 / 494MB | 0.957 / 409MB | 0.639 / 412MB |
| compressed | 0.968 / 1.4MB | 0.717 / 1.6MB | 0.957 / 1.5MB | 0.636 / 1.5MB |
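A downloaded model can be used directly with the fastText command line. The sketch below assumes placeholder file names for a downloaded regular model (ag_news.bin), its compressed counterpart (ag_news.ftz), and a labeled test set in the fastText __label__ format (ag_news.test):

```sh
# evaluate a downloaded regular model on a labeled test set
./fasttext test ag_news.bin ag_news.test

# the compressed .ftz models are used exactly the same way
./fasttext test ag_news.ftz ag_news.test

# predict the label of a sentence read from stdin
echo "some breaking news headline" | ./fasttext predict ag_news.ftz -
```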
References
If you use these models, please cite the following papers:
[1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
[2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, FastText.zip: Compressing text classification models
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, Herv{\'e} and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}