#english #mental_model

September 2024

The [map-territory mental model](https://fs.blog/map-and-territory/) is about understanding the difference between our perceptions (the ‘map’) and actual reality (the ‘territory’). The map of reality is not reality. Even the best maps are imperfect, because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists or that will only exist in the future.

This is important to remember as we think through problems and try to make better decisions. The description of the thing is not the thing itself. As the statistician George Box put it, all models are wrong, but some are useful. And yet the only way we can process the complexity of reality is through abstraction.

Frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will often use an incorrect model simply because we feel any model is preferable to no model. This tendency is obviously problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risk. One of [Nassim Taleb](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb)’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would not have been using the coming “worst case” as your worst case, because it hadn’t happened yet. The tails are very fat in finance: improbable and consequential events happen far more often than naive statistics say they should (see the simulation sketch at the end of this note). There is also a severe but often unrecognised recursiveness problem: the models themselves influence the outcomes they are trying to predict.

How do we do better? The first step is to realise that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must remain vigilant, stepping back to understand the context in which a map is useful and where the cliffs might lie.

Another key is building systems that are robust to model error: simpler heuristics, back-up systems, and margins of safety operating at multiple levels. Extra cash rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimising to a model, accept the limits of your clairvoyance. This is what [Warren Buffett](https://en.wikipedia.org/wiki/Warren_Buffett) has done with Berkshire Hathaway.

When the map and the terrain differ, follow the terrain.
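The fat-tails point is easy to check by simulation. Here is a minimal sketch in Python with NumPy; the Student-t distribution with 3 degrees of freedom is an arbitrary stand-in for a fat-tailed return series, not a calibrated market model. It compares how often 4-sigma events occur in a thin-tailed normal sample versus a fat-tailed one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # simulated "daily returns" per distribution

normal = rng.standard_normal(n)
fat = rng.standard_t(df=3, size=n)  # illustrative fat-tailed stand-in

# Normalise each sample by its own standard deviation so both
# exceedance frequencies are measured in the same units of "sigma".
for name, sample in [("normal", normal), ("student-t(3)", fat)]:
    sigma = sample.std()
    freq = np.mean(np.abs(sample) > 4 * sigma)
    print(f"{name:>12}: P(|x| > 4 sigma) ~ {freq:.2e}")

# Under a true normal, P(|x| > 4 sigma) is about 6.3e-5, roughly one
# trading day in 60 years. The fat-tailed sample typically shows an
# exceedance frequency well over an order of magnitude higher.
```

The point is not the specific numbers but the gap: a risk model calibrated on the thin-tailed map will systematically understate how often the territory produces extreme days.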