I realised I haven’t written much about my academic work (beyond the actual, you know, publications). So let’s change that.
Many of you know I am a big fan of genetic algorithms, and I really believe they are severely underrated, though often for understandable reasons. You can get good solutions to some very complicated problems using evolutionary approaches, and discover more about the problem itself along the way.
And they often require little to no data. While neural networks, LLMs, decision trees, clustering, and the like need ample amounts of data to unlock their secrets, a genetic algorithm needs only a way of creating new candidate solutions and a way of comparing how good two solutions are relative to each other.
Of course, that ‘only’ does some heavy lifting. It sounds easy, but compared to the data-hungry algorithms I referenced earlier, you trade the need for data for expert knowledge of your problem. If you don’t understand what a good solution looks like and can’t quantify it, you will not get good results.
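To make that concrete, here is a minimal sketch of what I mean (plain Python, no libraries). The only problem-specific pieces you supply are a function that produces a random candidate, a function that mutates one, and a fitness function that scores it; the names and numbers below are illustrative, not any particular framework’s API.

```python
import random

def evolve(random_solution, mutate, fitness,
           population_size=50, generations=200):
    """Generic GA loop: all the problem knowledge lives in the three callables."""
    population = [random_solution() for _ in range(population_size)]
    for _ in range(generations):
        # Rank candidates by the expert-supplied fitness function (higher is better).
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Refill the population by mutating randomly chosen survivors.
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)
```

Swap in crossover, tournament selection, elitism, whatever you like; the shape of the loop stays the same.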
To give some practical examples of GAs in the wild:
- traveling salesperson problem: it’s easy to come up with new routes, and it’s easy to compare which of two routes is better (the expert knowledge here is summing up the distances) => perfect for a GA to tackle (see the sketch after this list)
- antenna design: it’s easy to come up with a new antenna structure, and it’s not that difficult to check how good the reception is at the frequencies you care about (though building the simulation requires expert knowledge) => perfect for a GA to tackle
- balancing StarCraft 1 to make zergling rush strategies weaker: it’s easy to come up with patches that change unit stats, and it’s easy to then run tens of games to see if your changes had the desired effect (though it’s computationally expensive. Very expensive. Weeks of running just for some small changes. Ask me how I know) => perfect for a GA to tackle
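To show how little problem-specific code the traveling salesperson example actually needs, here is a rough sketch that plugs into the `evolve` loop above. The coordinates are made up, a route is just a permutation of city indices, and the ‘expert knowledge’ really is just summing up the distances.

```python
import math
import random

# Made-up city coordinates; a route is a permutation of their indices.
CITIES = [(0, 0), (2, 3), (5, 1), (6, 4), (8, 0), (3, 7)]

def route_length(route):
    """Total tour length, returning to the start city."""
    return sum(
        math.dist(CITIES[route[i]], CITIES[route[(i + 1) % len(route)]])
        for i in range(len(route))
    )

def random_route():
    route = list(range(len(CITIES)))
    random.shuffle(route)
    return route

def mutate(route):
    """Create a new route by swapping two cities."""
    a, b = random.sample(range(len(route)), 2)
    new_route = route[:]
    new_route[a], new_route[b] = new_route[b], new_route[a]
    return new_route

# Shorter routes are better, so fitness is the negated length.
best = evolve(random_route, mutate, fitness=lambda r: -route_length(r))
print(best, route_length(best))
```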
And the coolest part? GAs do not need to exist in a vacuum. Mix and match their strengths with other algorithms to supercharge your work. Some of my research was in combining ML algorithms in interesting ways. But that’s for future posts.