Automated machine learning (AutoML) frameworks are gaining popularity among data scientists as they dramatically reduce the manual work devoted to constructing ML pipelines while obtaining similar, and sometimes even better, results than manually built models. Such frameworks intelligently search among millions of possible ML pipeline configurations to retrieve a pipeline that is optimal in terms of predictive accuracy. However, when the training dataset is large, constructing and evaluating even a single ML pipeline takes longer, which makes the overall AutoML running time prohibitively high. To address this, in this work we demonstrate SubStrat, an AutoML optimization strategy that tackles the dataset size rather than the configuration search space. SubStrat wraps existing AutoML tools, and instead of executing them directly on the large dataset, it uses a genetic-based algorithm to find a small yet representative data subset that preserves the characteristics of the original one. SubStrat then employs the AutoML tool on the generated subset, resulting in an intermediate ML pipeline, which is later refined by executing a restricted, much shorter, AutoML process on the large dataset. We demonstrate SubStrat on AutoSklearn, TPOT, and H2O, three popular AutoML frameworks, using several real-life datasets.
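The genetic subset search described in the abstract can be sketched roughly as follows. This is a minimal illustration, not SubStrat's actual algorithm: the fitness function here (matching the full dataset's column means) is a hypothetical stand-in for SubStrat's measure of dataset characteristics, and the mutation/selection operators are simplified.

```python
import random

def genetic_subset(data, subset_size, pop_size=20, generations=30, seed=0):
    """Evolve a row-index subset whose column means track the full dataset's.

    Illustrative sketch only: the real SubStrat fitness over dataset
    "characteristics" and its genetic operators differ.
    """
    rnd = random.Random(seed)
    n_rows, n_cols = len(data), len(data[0])
    full_means = [sum(row[j] for row in data) / n_rows for j in range(n_cols)]

    def fitness(idx):
        # Lower is better: squared distance between subset and full column means.
        means = [sum(data[i][j] for i in idx) / len(idx) for j in range(n_cols)]
        return sum((m - f) ** 2 for m, f in zip(means, full_means))

    # Each individual is a list of distinct row indices.
    pop = [rnd.sample(range(n_rows), subset_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            # Mutation: swap one selected row for a currently unselected one.
            out = rnd.randrange(subset_size)
            selected = set(child)
            cand = rnd.randrange(n_rows)
            while cand in selected:
                cand = rnd.randrange(n_rows)
            child[out] = cand
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Usage: reduce a 200-row synthetic dataset to 20 representative rows.
gen = random.Random(1)
data = [[gen.gauss(0, 1) for _ in range(3)] for _ in range(200)]
idx = genetic_subset(data, subset_size=20)
subset = [data[i] for i in idx]
```

In the full pipeline, the AutoML tool would then be run on `subset` to obtain an intermediate ML pipeline, followed by a short, restricted AutoML pass on `data` to refine it.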
|Title of host publication||CIKM 2022 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management|
|Publisher||Association for Computing Machinery|
|Number of pages||5|
|State||Published - 17 Oct 2022|
|Event||31st ACM International Conference on Information and Knowledge Management, CIKM 2022 - Atlanta, United States|
Duration: 17 Oct 2022 → 21 Oct 2022
|Name||International Conference on Information and Knowledge Management, Proceedings|
|Conference||31st ACM International Conference on Information and Knowledge Management, CIKM 2022|
|Period||17/10/22 → 21/10/22|
|Bibliographical note||Publisher Copyright: © 2022 ACM.|
Keywords:
- automated machine learning (AutoML)
- data reduction