Blog

Trust but verify - how to test optimization models


TL;DR

😓 Testing optimization models is difficult because the model only makes sense as a whole. Testing individual constraints or variables adds little to no value.

โœ‚๏ธ Separate out logic such as set generation for constraints so that they can easily be tested.

🧪 Use property-based testing (e.g. with hypothesis in Python) for the model code.

๐Ÿ‘จโ€๐Ÿ’ผ The properties for the property-based testing should be provided directly by the business, e.g. "the solution should always have 4 routes to CPH".

🤖 LLMs do a great job of writing boilerplate hypothesis code; use them!
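To make the idea concrete, here is a minimal sketch of such a business property. The model, airport names, and function names are all hypothetical stand-ins; in practice you would let hypothesis generate the random instances (and shrink failing ones), but the stdlib random module shows the same shape:

```python
import random

def solve_toy_model(demands):
    """Hypothetical stand-in for a real optimization model: it serves
    each non-CPH demand and always reserves exactly 4 routes to CPH."""
    solution = [("CPH", i) for i in range(4)]
    solution += [(dest, i) for i, dest in enumerate(demands) if dest != "CPH"]
    return solution

def has_four_cph_routes(solution):
    """Business-provided property: the solution always has 4 routes to CPH."""
    return sum(1 for dest, _ in solution if dest == "CPH") == 4

# Poor man's property-based test: with hypothesis, a @given-decorated
# test would produce these random instances for us.
rng = random.Random(0)
airports = ["AMS", "FRA", "OSL", "CPH"]
for _ in range(100):
    demands = [rng.choice(airports) for _ in range(rng.randint(0, 20))]
    assert has_four_cph_routes(solve_toy_model(demands))
```

The key point is that the property is checked against whole solutions, not individual constraints, which is exactly where testing optimization models pays off.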

pyOptInterface - what can it do?


TL;DR

🧮 pyOptInterface is a new Python modeling framework, first released in April 2024 and currently at version 0.4.1.

๐Ÿ‘ It is well-written, and really quite fast. It has decent documentation, and also supports conic and nonlinear programming.

💡💡💡💡💡💡💡💡💡💡💡💡💡

🙌 The internet is AWESOME! Based on my testing, I thought the constraint handling in pyOptInterface was bad. But then, I posted my article and got awesome feedback from Robert Schwarz about how to do this well. So now, I am a big fan 😊

๐Ÿ‚ I don't want to ignore my mishap, but I also don't think it adds value. So I am changing the article from ground up but with references to the original when needed. If you are interested, just go to this commit in the repo.

LLM-ify me - Optimization edition


TL;DR

💬 LLMs such as ChatGPT are everywhere, and are getting better and better at all sorts of tasks, such as coding. What about optimization, though?

🤌 In their current form, they can't optimize, but they will pretend that they can.

📠 The clearest use case is as an interface for developing and maintaining optimization code.

🎛 A sneaky use case is the ability to migrate code from e.g. one modeling framework to another, which works surprisingly well. This should reduce vendor lock-in for e.g. proprietary libraries or languages.

๐Ÿ— They can also be used to convert papers into code, but so far does not seem to work perfectly yet.

Goodbye python-mip?


TL;DR

🕚 python-mip's latest release, 1.15.0, came in January 2023, and the latest commit to master was in May 2024.

💘 With over 90 forks and 500 stars, it is one of the bigger modeling frameworks. Yet nobody is maintaining it.

😦 I believe that without corporate support of open-source projects, optimization will likely not have a thriving open-source community in the long term.

It's too much for you, init. The create pattern


TL;DR

🦾 When you have calculations to perform to instantiate your object, don't use __init__, but use a create classmethod instead.

📚 Combined with pydantic, this makes for very readable and reliable code.

📇 The advantages are expected behavior, closeness to the object, and the ability to use pydantic.
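A minimal sketch of the pattern, using a frozen stdlib dataclass in place of the pydantic model the post works with (the class and field names are illustrative, not from the post):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    origin: str
    destination: str
    distance_km: float

    @classmethod
    def create(cls, origin: str, destination: str,
               leg_distances_km: list[float]) -> "Route":
        # All derived calculations live here, not in __init__, so the
        # generated __init__ stays a dumb field assignment.
        return cls(origin=origin, destination=destination,
                   distance_km=sum(leg_distances_km))

route = Route.create("AMS", "CPH", [120.5, 515.0])
print(route.distance_km)  # 635.5
```

Because __init__ keeps its expected "assign the fields" behavior, the object can still be constructed directly (e.g. when deserializing), while create carries the business logic.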

How should I benchmark optimization solvers?


TL;DR

😈 Speed is the most important factor when it comes to mathematical optimization solvers.

🫘 Benchmarking requires three things: reliable hardware, representative test data, and a robust testing setup.

🗄 Organize your models as .mps or .lp files in a folder, and solve each model using a CLI with multiple random seeds.

🚀 When done right, a benchmarking exercise will improve the overall testing and deployment of your optimization project
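The folder-of-models-times-seeds setup can be sketched like this. The solver name and its --random-seed flag are hypothetical placeholders; swap in your solver's actual CLI:

```python
from pathlib import Path

def build_benchmark_commands(model_dir, solver_cli, seeds):
    """Pair every .mps/.lp model in model_dir with every random seed.

    solver_cli and the --random-seed flag are hypothetical; adapt them
    to your solver's actual command-line interface.
    """
    models = sorted(p for p in Path(model_dir).iterdir()
                    if p.suffix in {".mps", ".lp"})
    return [[solver_cli, str(model), f"--random-seed={seed}"]
            for model in models
            for seed in seeds]

# Each command can then be run (and timed) with subprocess.run(),
# logging the wall-clock time per (model, seed) pair.
```

Running every model under several seeds averages out the performance variability that a single seed would hide.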