Gradient Boosting: a Silver Bullet in Forecasting | by Davide Burba | Jul, 2023

We show that gradient boosting is very powerful for time-series forecasting, and we try to explain why.


Time-series forecasting is a crucial task in many domains, including finance, sales, and weather prediction. While classical time-series models and deep learning techniques have been widely used for this purpose, there is growing evidence that gradient boosting often outshines other methods.

Gradient boosting is a machine learning technique that builds predictive models by combining an ensemble of weak learners in a sequential manner. It aims to create a strong learner by iteratively minimizing the errors made by the previous models. The core idea is to fit subsequent models to the residuals of the previous models, gradually improving predictions with each iteration.
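To make the residual-fitting idea concrete, here is a minimal from-scratch sketch for squared loss, where the negative gradient is exactly the residual. The stump learner and the toy step-function data are illustrative choices, not part of any particular library:

```python
import numpy as np

def fit_stump(x, y):
    """Fit a depth-1 regression tree (stump): find the split on x that
    minimises squared error, predicting the mean on each side."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lmean, rmean = best
    return lambda xs: np.where(xs <= t, lmean, rmean)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Squared-loss gradient boosting: each round fits a stump to the
    current residuals (the negative gradient) and adds a damped copy."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        residuals = y - pred  # negative gradient for squared loss
        stump = fit_stump(x, residuals)
        pred += lr * stump(x)
        stumps.append(stump)
    return lambda xs: y.mean() + lr * sum(s(xs) for s in stumps)

# Toy check: boosting should fit a noiseless step function closely.
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 1.0, 3.0)
model = gradient_boost(x, y)
print(np.abs(model(x) - y).max())  # the error shrinks as n_rounds grows
```

Each added stump only needs to do slightly better than chance on the residuals; the learning rate `lr` damps its contribution so that errors are corrected gradually rather than overfit in one step.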


LightGBM and XGBoost are two prominent libraries that implement gradient boosting algorithms. They have gained popularity due to their efficiency, scalability, and exceptional performance.

While gradient boosting was not designed specifically for time-series data, we can use it for forecasting via a feature engineering step. You can check this article for a concrete example.
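A common version of that feature engineering step is to turn the series into a supervised table of lagged values. The sketch below (the `make_supervised` helper and the toy series are illustrative assumptions, not from the original article) shows the idea:

```python
import numpy as np

def make_supervised(series, n_lags=3):
    """Turn a 1-D series into a tabular dataset: each row holds the
    previous `n_lags` values, and the target is the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

series = np.arange(10, dtype=float)  # toy series: 0, 1, ..., 9
X, y = make_supervised(series, n_lags=3)
print(X[0], y[0])  # first row: the lags [0. 1. 2.] predict the target 3.0

# The resulting (X, y) table can be fed to any tabular regressor, e.g.:
# import lightgbm as lgb
# model = lgb.LGBMRegressor().fit(X, y)
```

In practice the lag features are usually enriched with rolling statistics and calendar features, but the transformation above is the core trick that lets a tabular model like LightGBM or XGBoost forecast a time series.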

We can examine the winning solutions of forecasting competitions to identify the most powerful models in a given domain. Winning solutions are sometimes criticised for being too complex and hard to reproduce in a production environment. However, when a particular model consistently appears in the winning solutions across different competitions, it demonstrates that the model can tackle complex challenges effectively.
