End-to-End Learning for Constrained Optimization

Abstract:

In this thesis, we study the integration of constrained optimization algorithms with predictive modeling via deep neural networks. The use of such techniques spans several application areas, which we broadly divide into two categories. In Learning to Optimize, the goal is to train neural networks to solve, or to aid in the solution of, constrained optimization problems. In Predict-and-Optimize, the goal is to learn the unknown coefficients of such problems from exogenous data. Both approaches stem from efforts to enhance optimization modeling technology for operations research and decision-making tasks.

This thesis contributes to both areas and combines techniques from each to enhance the expressive and computational capabilities of models that learn to make decisions. Within the Learning to Optimize scope, our contributions show how to use predictive modeling to estimate optimal solutions via Lagrangian dual functions, non-Euclidean metrics, and optimal data generation schemes. We also show how the Predict-and-Optimize paradigm can employ constrained optimization to enhance performance in prominent machine learning tasks such as learning to rank and ensemble learning. Finally, we study the overlap of these two fields by adapting techniques from one scope to solve problems in the other.

Committee:

  • Jundong Li, Committee Chair (CS, ECE/SEAS, SDS/UVA)
  • Ferdinando Fioretto, Advisor (CS/SEAS/UVA)
  • Madhav Marathe (CS, Biocomplexity/SEAS/UVA)
  • Anil Vullikanti (CS, Biocomplexity/SEAS/UVA)
  • Bartolomeo Stellato (ENG, Princeton University)