Antifragile: Why AI fails and the alternative

Quick Summary

We all want to predict the future. Once we know what will happen, we can prepare, take advantage of the situation, and become stronger. For example, if you knew which stocks would go up or down, you could make a lot of money. The ability to predict is the foundation of science: we experiment to find models that can forecast the future. Nassim Taleb destroyed this idea in his book “The Black Swan”: the most impactful events, the game changers, are unpredictable, although we often use post-rationalization to make it look, after the fact, as if we could have foreseen them. The 9/11 attacks are a classic example.

In “Antifragile: Things That Gain from Disorder”, Taleb explains what we can do when we cannot predict the future but know it will change everything, big time. The answer is in the title: we should build antifragile systems. We know there will be shocks. Instead of trying to predict the shocks and soften their consequences, we build systems that take advantage of them! Systems that get stronger thanks to the shocks.

Personal Impact

Taleb speaks directly to me. While doing research on Ankou and Entropic, I was often confronted with the question of how to deal with extremely rare events. In 2019, I got passionate about “complex systems” theory: the study of systems with so many inner interactions that their behavior cannot be well understood. I tried to read as much as possible on the subject: Geoffrey West, John Holland, Stuart Kauffman, and others. I tried to understand: what can we do in this situation?

Basically, we cannot use the usual scientific approach of decomposing a system into parts and then scaling up this understanding to predict the behavior of the whole. Complex systems can only be characterized with high-level statistical descriptors like entropy. We are limited to description; we cannot predict. And Taleb’s insight is a similar one: complex systems, due to their many interactions, have hidden, unforeseeable, and non-linear behaviors. Their causal opacity and non-linearity make them perfect Black Swan generators. And guess what: almost everything we deal with nowadays is a complex system. Socio-economic systems, most of the products we engineer (that’s why we never seem to fix all the bugs), and of course nature itself.
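To make “statistical descriptor” concrete, here is a minimal sketch of one such descriptor: the Shannon entropy of the states a system is observed in. The trace data is hypothetical; the point is only that we summarize behavior statistically instead of predicting it.

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy (in bits) of a sequence of observed states."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical trace of states observed in some complex system:
# we can describe how disordered its behavior is, not what comes next.
trace = ["A", "B", "A", "C", "A", "B", "D", "A"]
print(f"{shannon_entropy(trace):.2f} bits")  # 1.75 bits
```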

Solution

We should stop trying to be smart. Stop trying to predict how systems will react to this or that shock. We should go back to our real goal, the payoff: what we get in the end. If we put ourselves in an antifragile position, we can benefit from future shocks without needing to plan for them.

The Importance of Options

“Optionality is a substitute for intelligence.”

“The option is an agent of antifragility.”

So how do we make ourselves antifragile? We give ourselves options. We often try to cut everything that seems unnecessary or redundant in order to perform better on some given metric. But maybe that metric won’t be so important tomorrow. By keeping options, you give yourself more paths to explore in the future, when you have more information.

Taleb likes to use the “rational flâneur” metaphor: when you travel, instead of having a two-week plan of how you’ll visit this city or that country, you plan only a few days ahead. This way you have the freedom to plan the rest of the trip as you acquire more information on the spot. You don’t know what kind of information you will acquire; you don’t know what unexpected events may occur. By keeping a maximum of options available, you make sure that whatever happens, you’ll be able to turn it to your benefit.

This “options” solution can be reformulated as a “trial and error” policy. All the things you can try are your options. Notice that you still need some rationality: to pick options that have potential, and to recognize a successful trial. Taleb is saying that we rely too much on models and top-down logic, not that science and formalization are never useful. We still need some, just much less of it.

And in the end, this is exactly the evolutionary heuristic we use in fuzzers! Each seed is an option the fuzzer has. The fitness function is how we choose the promising “good options” for the future.
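A minimal sketch of that loop, in Python for readability (run_target and the mutation operator here are hypothetical stand-ins for a real instrumented target and a real mutation engine):

```python
import random

def mutate(data: bytes) -> bytes:
    """Deliberately dumb mutation: overwrite one random byte."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def fuzz(initial_seeds, run_target, iterations=10_000):
    """Evolutionary fuzzing loop: each seed in the corpus is an option.

    run_target(data) is assumed to return the set of coverage points
    the target hit; coverage acts as the fitness function.
    """
    corpus = list(initial_seeds)        # the options we keep open
    global_coverage = set()
    for _ in range(iterations):
        seed = random.choice(corpus)    # pick an option...
        mutant = mutate(seed)           # ...and run a trial
        coverage = run_target(mutant)
        if coverage - global_coverage:  # keep only winning trials
            global_coverage |= coverage
            corpus.append(mutant)       # a new option for the future
    return corpus
```

The fuzzer never predicts which input will trigger a bug; it just keeps the options whose trials paid off.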

Via Negativa

“Rule No. 1: Never lose money. Rule No. 2: Don’t forget rule No. 1.” Warren Buffett

The other way Taleb suggests achieving antifragility is to “remove the bad stuff”. It is not easy to predict what is going to happen, and it may be even harder to take advantage of it. However, it is often quite simple to detect fragility. The via negativa way is: detect the bad and remove it. You are then left with either the robust, which does not react much to shocks, negatively or positively, or the antifragile.

“We know a lot more about what is wrong than what is right. Negative knowledge is more robust to error than positive knowledge.”
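Detecting fragility can be made surprisingly mechanical. Taleb’s heuristic is that fragile things respond concavely to shocks: they lose more from a shock in one direction than they gain from the same shock in the other. A minimal sketch of that probe, with a made-up payoff function:

```python
def is_fragile(payoff, x, delta):
    """Convexity probe: shock the input both ways around x.

    If the average shocked payoff falls below the unshocked one, the
    response is locally concave: volatility hurts, i.e., fragile.
    """
    curvature = payoff(x + delta) + payoff(x - delta) - 2 * payoff(x)
    return curvature < 0

# Made-up example: a system that degrades sharply past a capacity limit.
def payoff(load):
    return load if load <= 100 else 100 - 10 * (load - 100)

print(is_fragile(payoff, 95, 10))  # True: overload losses dominate
print(is_fragile(payoff, 50, 10))  # False: linear region, robust
```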

This is what he calls the “barbell” strategy. You have the robust part, which you cannot lose; it is your insurance. And you have the antifragile part, which is riskier but, being diversified, gives you exposure to many different kinds of shocks and makes your profit/payoff. This is not very sophisticated, but in the end, less is more. This technique was lightly sketched in “The Black Swan”, but without the fragility framework it was hard to understand why it would work.
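To illustrate the asymmetry (all numbers here are made up, not from the book): put most of the capital somewhere it cannot be lost, and spread the rest over many small long-shot bets. The downside is capped; the upside is open.

```python
import random

def barbell_payoff(capital, safe_fraction=0.9, n_bets=10,
                   win_prob=0.05, win_multiple=50):
    """Toy barbell: 90% kept safe, 10% split across long-shot bets.

    Losing bets expire worthless; a single win repays many times over.
    All parameters are illustrative assumptions.
    """
    safe = capital * safe_fraction            # the insurance part
    stake = capital * (1 - safe_fraction) / n_bets
    wins = sum(stake * win_multiple for _ in range(n_bets)
               if random.random() < win_prob)  # the antifragile part
    return safe + wins

random.seed(0)
print([barbell_payoff(1000) for _ in range(5)])
# Worst case is 900 (the safe part); each winning bet adds 500 on top.
```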

Conclusion

Do not be a fool. Do not over-optimize your system thinking you have everything under control. With a thin margin of safety, you risk losing it all. You are never fully safe from a Black Swan, unless you set up your system to gain from it.