14/09/2020

End-to-End Learning for Prediction and Optimization with Gradient Boosting

Takuya Konishi, Takuro Fukunaga

Keywords: combinatorial optimization, boosting/ensemble methods

Abstract: Mathematical optimization is a fundamental tool in decision making. However, it is often difficult to obtain an accurate formulation of an optimization problem due to uncertain parameters. Machine learning frameworks are attractive for addressing this issue: we predict the uncertain parameters and then solve the optimization problem based on the prediction. Recently, end-to-end learning approaches that handle the successive prediction and optimization problems jointly have received attention in both the optimization and machine learning communities. In this paper, we focus on gradient boosting, a powerful ensemble method, and develop an end-to-end learning algorithm that directly maximizes performance on the downstream optimization problem. Our algorithm extends existing gradient-based optimization through implicit differentiation to second-order optimization, enabling efficient learning of gradient boosting. We also conduct computational experiments to analyze how well end-to-end approaches work and to show the effectiveness of our approach.
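The abstract describes the predict-then-optimize setting: a model predicts uncertain problem parameters, and training minimizes the loss incurred by the decisions made with those predictions, using first- and second-order derivatives obtained through implicit differentiation of the optimizer's solution. The sketch below is not the authors' algorithm; it is a minimal Python illustration of that idea, assuming a toy quadratic problem min_z 0.5*q*z^2 - c*z whose solution z*(c) = c/q has an exact implicit derivative dz*/dc = 1/q, and a Newton-style (second-order) boosting loop. All names (q, decision_loss_grad_hess, n_stages, lr) are illustrative.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
q = 2.0  # curvature of the toy objective 0.5*q*z^2 - c*z

# Synthetic data: features x, true (uncertain) parameter c.
X = rng.uniform(-1, 1, size=(500, 1))
c_true = 3.0 * X[:, 0] + 0.3 * rng.standard_normal(500)

def decision_loss_grad_hess(c_hat, c_true):
    """Decision loss L(c_hat) = 0.5*q*z*^2 - c_true*z* with z* = c_hat/q.
    Via the implicit derivative dz*/dc_hat = 1/q, the gradient is
    (c_hat - c_true)/q and the Hessian is the constant 1/q."""
    z = c_hat / q
    loss = 0.5 * q * z**2 - c_true * z
    grad = (c_hat - c_true) / q
    hess = np.full_like(c_hat, 1.0 / q)
    return loss, grad, hess

# Second-order ("Newton") boosting: each stage fits a tree to the
# Newton target -grad/hess, with the Hessians as sample weights --
# the standard second-order approximation used in gradient boosting.
n_stages, lr = 50, 0.3
F = np.zeros(len(X))  # current ensemble prediction of c
trees = []
for _ in range(n_stages):
    loss, g, h = decision_loss_grad_hess(F, c_true)
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, -g / h, sample_weight=h)
    F += lr * tree.predict(X)
    trees.append(tree)

print("final mean decision loss:", decision_loss_grad_hess(F, c_true)[0].mean())

Here the optimization problem is solved in closed form, so the implicit derivative is exact; for general constrained problems, the derivative of the optimal solution with respect to the predicted parameters must be obtained by implicitly differentiating the optimality conditions, which is the setting the paper addresses.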

The talk and the paper were published at the ECML PKDD 2020 virtual conference.

