Article two of five

Alkiviadis Vazacopoulos

In our first article, we identified four scenarios in which Machine Learning can cooperate with Mathematical Optimization. In this article, we consider Scenario D and present promising results from a real-world application.

This scenario is defined as:
ML can be used to help MOPT solvers perform better: not only by finding solutions faster, but also by finding more good solutions or by identifying structure in MOPT problems.

A very interesting example of Scenario D is the work that Tuomas Sandholm presented in his talk "Machine Learning in MIP (Mixed Integer Programming)" at Egon Balas's symposium on October 27-28, 2020.

This work was performed between 2000 and 2010, while he was the founder and CEO of CombineNet. It used Machine Learning to improve the solving time of MIPs. The problems arise in the design of algorithms and mechanisms for combinatorial auctions, where the solver must determine the auction winners and execute package bids. The objective was to maximize revenue subject to several constraints, such as budget and rule-based constraints.
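To make the clearing problem concrete, here is a minimal, illustrative sketch of winner determination for package bids: choose a set of non-overlapping bids that maximizes revenue. The toy auction data and brute-force enumeration are my own illustration (real clearing engines formulate this as a MIP and solve it with branch and bound), not CombineNet's implementation.

```python
from itertools import combinations

def winner_determination(bids):
    """Choose a revenue-maximizing set of non-overlapping package bids.

    bids: list of (items, price), where items is a frozenset of item ids.
    Brute-force enumeration for illustration only; production systems
    solve this as a MIP.
    """
    best_revenue, best_set = 0, []
    n = len(bids)
    # Enumerate every subset of bids (exponential; fine for tiny examples).
    for r in range(1, n + 1):
        for combo in combinations(range(n), r):
            items_seen = set()
            feasible = True
            for i in combo:
                if items_seen & bids[i][0]:  # an item sold twice -> infeasible
                    feasible = False
                    break
                items_seen |= bids[i][0]
            if feasible:
                revenue = sum(bids[i][1] for i in combo)
                if revenue > best_revenue:
                    best_revenue, best_set = revenue, list(combo)
    return best_revenue, best_set

# Hypothetical toy auction: three items A, B, C and four package bids.
bids = [
    (frozenset({"A"}), 6),
    (frozenset({"B"}), 5),
    (frozenset({"A", "B"}), 10),
    (frozenset({"B", "C"}), 8),
]
revenue, winners = winner_determination(bids)  # accepts bids 0 and 3 for revenue 14
```

Note that accepting the two smaller bids (A alone, plus the B-C package) beats the larger A-B package bid, which is exactly why package bidding requires optimization rather than greedy allocation.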

CombineNet ran over 800 auctions totaling over 60 billion US dollars, ranging from 2B USD to 7B USD per auction. They created savings of more than 12.6% for customers and suppliers, and they solved problems 100 times larger than competitors. Notably, this real-world work predates much of the academic research in this area.

MIP solvers can handle these problems adequately, but CombineNet wanted to run thousands of scenarios, hence the need for shorter solution times.

The goal was to design faster-clearing algorithms, and they achieved it by tailoring branch-and-bound search strategies to speed up the solution process. Preprocessing, cutting planes, and heuristics all help speed up the algorithms further.
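The core of such a clearing engine is branch-and-bound search: recursively accept or reject each bid, and prune any branch whose optimistic revenue bound cannot beat the best solution found so far. The following is a minimal sketch of that idea on the same package-bid representation as above; it is my own simplified illustration, not CombineNet's algorithm, and it omits the preprocessing, cutting planes, and heuristics a real solver would add.

```python
def bb_clear(bids):
    """Minimal depth-first branch and bound for package-bid clearing.

    bids: list of (items, price), items a frozenset of item ids.
    Illustrative only; production solvers layer preprocessing, cutting
    planes, and primal heuristics on top of this basic search.
    """
    bids = sorted(bids, key=lambda b: -b[1])  # try high-value bids first
    best = [0]  # incumbent revenue

    def bound(i, revenue):
        # Optimistic bound: pretend every remaining bid can be accepted
        # with no item conflicts.
        return revenue + sum(p for _, p in bids[i:])

    def search(i, taken_items, revenue):
        best[0] = max(best[0], revenue)
        if i == len(bids) or bound(i, revenue) <= best[0]:
            return  # prune: even the optimistic bound cannot beat the incumbent
        items, price = bids[i]
        if not (taken_items & items):        # branch 1: accept bid i
            search(i + 1, taken_items | items, revenue + price)
        search(i + 1, taken_items, revenue)  # branch 2: reject bid i

    search(0, frozenset(), 0)
    return best[0]

bids = [
    (frozenset({"A"}), 6),
    (frozenset({"B"}), 5),
    (frozenset({"A", "B"}), 10),
    (frozenset({"B", "C"}), 8),
]
optimal_revenue = bb_clear(bids)  # 14, same optimum as exhaustive enumeration
```

Tighter bounds (for example, from a linear-programming relaxation) prune far more of the tree, which is where much of the engineering effort in fast clearing algorithms goes.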

Twelve algorithm configurations were selected manually. Their database contained more than 80,000 instances, which were used to train the machine learning algorithm.

They used 50 handcrafted instance features that are quick to compute, with hundreds of scenarios per instance. The machine learning algorithm decided which configuration to use when running the instance with the optimizer. This approach yielded a speedup of two to three times.
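The idea can be sketched as a simple portfolio selector: map each instance to a feature vector, then pick the configuration that performed best on the most similar past instance. The feature names, values, and the 1-nearest-neighbor rule below are all my own illustrative assumptions; the source does not describe which model or features CombineNet actually used beyond their number.

```python
import math

# Hypothetical training data: per-instance features (number of bids,
# number of items, constraint density) paired with the configuration id
# that solved that instance fastest. All values are invented for
# illustration; CombineNet used ~50 handcrafted features.
TRAINING = [
    ((1200, 300, 0.05), "config_3"),
    ((9000, 150, 0.40), "config_7"),
    ((500,  800, 0.02), "config_1"),
    ((8500, 200, 0.35), "config_7"),
]

def pick_configuration(features, training=TRAINING):
    """1-nearest-neighbor selection of a solver configuration.

    The simplest possible portfolio selector: run the optimizer with
    whichever configuration worked best on the most similar past instance.
    """
    return min(training, key=lambda t: math.dist(features, t[0]))[1]

choice = pick_configuration((8800, 180, 0.38))  # nearest neighbors are config_7 instances
```

In practice the features would be normalized so that no single scale dominates the distance, and a richer model (decision trees, regression on predicted runtimes) would replace nearest-neighbor lookup.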

Tuomas notes that the modest speed improvement might be attributed to the small number of instance features and/or the small number of recommended configurations. Thus, creating more configurations and/or using a larger number of model features becomes a very interesting problem for research.