pydantic validates your inputs, or does it?
TL;DR
🎊 `pydantic` calls itself the most widely used data validation library for Python
😧 Yet, it validates neither default values nor assignments by default. That's ... unexpected
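A minimal sketch of what this means in practice, using pydantic v2's opt-in switches (`validate_default` on the field and `validate_assignment` in the model config); the `Settings` model is a hypothetical example:

```python
from pydantic import BaseModel, ConfigDict, Field, ValidationError

class Settings(BaseModel):
    # Assignment validation is off by default; opt in explicitly.
    model_config = ConfigDict(validate_assignment=True)

    # Default validation is also off by default; without validate_default=True,
    # a nonsensical default such as -1 would slip through silently.
    timeout_s: int = Field(default=10, gt=0, validate_default=True)

settings = Settings()
try:
    settings.timeout_s = -5  # rejected, but only because of validate_assignment
except ValidationError as exc:
    print(exc)
```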
💬 LLMs such as ChatGPT are everywhere, and are getting better and better at all sorts of tasks such as coding. What about optimization though?
🤌 In their current form, they can't optimize, but they will pretend like they can.
📠 The clearest use case is as an interface for developing and maintaining optimization code.
🎛 A sneaky use case is the ability to migrate code from e.g. one modeling framework to another, which works surprisingly well. This should reduce vendor lock-in for e.g. proprietary libraries or languages.
🏗 They can also be used to convert papers into code, but so far this does not seem to work perfectly.
🕚 `python-mip`'s latest release, 1.15.0, came in January 2023, and the latest commit to master was in May 2024.
💘 With over 90 forks and 500 stars, it is one of the bigger modeling frameworks. Yet nobody is maintaining it.
😦 I believe that without corporate support of open-source projects, optimization will likely not have a thriving open-source community in the long term.
🦾 When you have calculations to perform to instantiate your object, don't use `__init__`, but use a `create` classmethod instead.
📚 Combined with `pydantic`, this makes for very readable and reliable code.
📇 The advantages are: expected behavior, closeness to the object, and the ability to use `pydantic`.
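A sketch of the pattern, with a made-up `Route` model (the names and fields are illustrative, not taken from the post):

```python
from pydantic import BaseModel

class Route(BaseModel):
    # Only stores already-computed values; pydantic's generated
    # __init__ (and its validation) stays untouched.
    stops: list[str]
    total_distance_km: float

    @classmethod
    def create(cls, stops: list[str], distances_km: dict[tuple[str, str], float]) -> "Route":
        # Do the derived calculation here instead of overriding __init__.
        total = sum(distances_km[pair] for pair in zip(stops, stops[1:]))
        return cls(stops=stops, total_distance_km=total)

route = Route.create(
    stops=["a", "b", "c"],
    distances_km={("a", "b"): 3.0, ("b", "c"): 4.5},
)
print(route.total_distance_km)  # 7.5
```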
🥘 With Python and `beautifulsoup4`, web scraping is really easy
🖥 The developer console is a big help
🏖 The code itself is pretty simple
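For illustration, a minimal scraper along those lines; the URL and the CSS selector are assumptions you would replace with whatever the developer console reveals for your target page:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical target; inspect the real page in the developer console first.
URL = "https://example.com/articles"

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# "article h2 a" is a placeholder selector taken from the inspected markup.
for link in soup.select("article h2 a"):
    print(link.get_text(strip=True), link.get("href"))
```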
🫵 You shouldn't run a single model and call it a benchmark
🙊 `JuMP` is slower than `pyomo`? Really?
😤 At least the `gurobipy` code is not something I would write
😈 Speed is the most important factor when it comes to mathematical optimization solvers
🫘 Benchmarking requires three things: reliable hardware, representative test data, robust testing setup
🗄 Organize your models as `.mps` or `.lp` files in a folder, and solve each model using a CLI with multiple random seeds (see the sketch after this list)
🚀 When done right, a benchmarking exercise will improve the overall testing and deployment of your optimization project
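A sketch of that folder-plus-CLI loop. Gurobi's `gurobi_cl` is used here purely as an example of a solver CLI that accepts `parameter=value` arguments before the model file (`Seed` is a Gurobi parameter); the `models` folder name and the seed list are assumptions:

```python
import pathlib
import subprocess

MODEL_DIR = pathlib.Path("models")  # hypothetical folder of .mps/.lp files
SEEDS = [0, 1, 2]                   # multiple seeds to smooth out variance

models = sorted(MODEL_DIR.glob("*.mps")) + sorted(MODEL_DIR.glob("*.lp"))
for model in models:
    for seed in SEEDS:
        log_path = MODEL_DIR / f"{model.stem}.seed{seed}.log"
        with log_path.open("w") as log:
            # gurobi_cl takes "parameter=value" pairs before the model file.
            subprocess.run(
                ["gurobi_cl", f"Seed={seed}", str(model)],
                stdout=log,
                stderr=subprocess.STDOUT,
                check=False,  # record failures in the log instead of aborting
            )
```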