
Blog

LLM-ify me - Optimization edition


TL;DR

💬 LLMs such as ChatGPT are everywhere, and are getting better and better at all sorts of tasks such as coding. What about optimization though?

🤌 In their current form, they can't optimize, but they will pretend that they can.

📠 The clearest use case is as an interface for developing and maintaining optimization code.

🎛 A sneaky use case is the ability to migrate code from e.g. one modeling framework to another, which works surprisingly well (see the sketch after this list). This should reduce vendor lock-in for e.g. proprietary libraries or languages.

🏗 They can also be used to convert papers into code, but so far this does not seem to work perfectly.
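To make the migration point concrete, here is an illustrative sketch, not taken from the post: the same tiny knapsack model written in python-mip and then in PuLP, which is roughly the kind of mechanical translation an LLM can perform. The data and variable names are invented.

```python
# A tiny knapsack model in python-mip...
from mip import BINARY, Model, maximize, xsum

values, weights, capacity = [10, 13, 7], [4, 6, 3], 8

m = Model()
x = [m.add_var(var_type=BINARY) for _ in values]
m.objective = maximize(xsum(v * xi for v, xi in zip(values, x)))
m += xsum(w * xi for w, xi in zip(weights, x)) <= capacity
m.optimize()

# ...and the same model after "migration" to PuLP.
import pulp

prob = pulp.LpProblem("knapsack", pulp.LpMaximize)
y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(len(values))]
prob += pulp.lpSum(v * yi for v, yi in zip(values, y))  # objective
prob += pulp.lpSum(w * yi for w, yi in zip(weights, y)) <= capacity
prob.solve()
```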

Goodbye python-mip?


TL;DR

🕚 python-mip's latest release, 1.15.0, came in January 2023, and the latest commit to master was in May 2024.

💘 With over 90 forks and 500 stars, it is one of the bigger modeling frameworks. Yet no one is maintaining it.

😦 Without corporate support for open-source projects, I don't believe optimization will have a thriving open-source community in the long term.

It's too much for you, init. The create pattern


TL;DR

🦾 When you need to perform calculations to instantiate your object, don't use __init__; use a create classmethod instead (sketched below).

📚 Combined with pydantic, this makes for very readable and reliable code.

📇 The advantages: expected behavior, closeness to the object, and the ability to use pydantic.
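A minimal sketch of the pattern (the Route class and its fields are hypothetical examples, not from the post): the derived calculation lives in create, while pydantic keeps validating the plain fields through its own __init__.

```python
from pydantic import BaseModel


class Route(BaseModel):
    stops: list[str]
    total_distance: float

    @classmethod
    def create(cls, stops: list[str], distances: dict[tuple[str, str], float]) -> "Route":
        # The derived calculation lives here, not in __init__,
        # so pydantic's field validation stays untouched.
        total = sum(distances[a, b] for a, b in zip(stops, stops[1:]))
        return cls(stops=stops, total_distance=total)


route = Route.create(["depot", "a", "b"], {("depot", "a"): 2.0, ("a", "b"): 3.5})
print(route.total_distance)  # 5.5
```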

How should I benchmark optimization solvers?


TL;DR

😈 Speed is the most important factor when it comes to mathematical optimization solvers.

🫘 Benchmarking requires three things: reliable hardware, representative test data, and a robust testing setup.

🗄 Organize your models as .mps or .lp files in a folder, and solve each model using a CLI with multiple random seeds (a minimal driver is sketched after this list).

🚀 When done right, a benchmarking exercise will improve the overall testing and deployment of your optimization project.
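A minimal sketch of such a driver script, under one loud assumption: "mysolver" and its --seed flag are placeholders, to be replaced with your actual solver binary and its real seed and logging options.

```python
import subprocess
from pathlib import Path

MODELS_DIR = Path("models")  # folder holding your .mps / .lp files
SEEDS = [1, 2, 3]            # run each model under several random seeds

models = sorted(MODELS_DIR.glob("*.mps")) + sorted(MODELS_DIR.glob("*.lp"))
for model in models:
    for seed in SEEDS:
        # "mysolver" and "--seed" are hypothetical placeholders;
        # swap in your solver's real CLI and options here.
        subprocess.run(["mysolver", str(model), f"--seed={seed}"], check=True)
```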