
LLMs, much like human societies, are dynamic systems. With every retraining cycle, they ingest an updated feed from the ever-changing internet, incorporating ever-larger slices of the digital world. Many models are also shaped by explicit human feedback, e.g., Reinforcement Learning from Human Feedback (RLHF). Even small changes to the inputs or the training process can lead to unpredictable, large-scale effects on the model’s overall output and behavior.
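To see how touchy such training can be, here is a toy sketch, entirely my own, with arbitrary sizes, seeds, and data: two identical small networks are trained on the same examples and differ only in their random initialization. Both fit the training set about equally well, yet they disagree once you probe them outside it.

```python
import numpy as np

def train_tiny_net(X, y, seed, steps=3000, lr=0.1):
    """Fit a one-hidden-layer tanh network with plain gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], 32))
    W2 = rng.normal(0.0, 0.5, (32, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        # Gradients of the mean squared error through both layers.
        gW2 = h.T @ err / len(X)
        gW1 = X.T @ ((err @ W2.T) * (1 - h * h)) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def predict(net, X):
    W1, W2 = net
    return np.tanh(X @ W1) @ W2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, (200, 3))
y_train = np.sin(2 * X_train.sum(axis=1, keepdims=True))

# Two runs identical in every respect except the initialization seed.
net_a = train_tiny_net(X_train, y_train, seed=1)
net_b = train_tiny_net(X_train, y_train, seed=2)

mse_a = np.mean((predict(net_a, X_train) - y_train) ** 2)
mse_b = np.mean((predict(net_b, X_train) - y_train) ** 2)

# Probe far outside the training range, where the data no longer pins the fits down.
X_probe = rng.uniform(-4, 4, (1000, 3))
gap = np.abs(predict(net_a, X_probe) - predict(net_b, X_probe)).max()

print(f"train MSE: A={mse_a:.4f}  B={mse_b:.4f}")
print(f"max disagreement on out-of-range inputs: {gap:.3f}")
```

The analogy is loose, of course; a 128-parameter toy is not a frontier model, and the exact numbers will vary with the seeds. But even here, the “same” training recipe yields models that agree where the data constrains them and drift apart where it does not.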
The dynamic, complex nature of LLMs finds a compelling parallel in Friedrich Hayek’s 1974 Nobel lecture, “The Pretence of Knowledge,” which examined economic and political systems characterized by billions of variables and interconnections, a structure much like that of an LLM. Hayek concluded that while we can understand the general principles governing such complex systems and predict the abstract consequences of interventions, we cannot predict their specific outcomes or the precise states of all their elements. In his broader theory of complexity, Hayek distinguishes two types of order:
- Taxis: A consciously designed, centrally constructed order. Traditional software falls into this category.
- Kosmos: A complex, emergent, spontaneous, and functional order that arises from decentralized interactions and vast, unstructured data. LLMs function as kosmos.
The power of LLMs lies in their emergent behavior, which is far more complex than anything a human could consciously design. Treating LLMs as a form of kosmos is the key conceptual breakthrough that has accelerated AI development.
Read the whole thing, despite my “accidentally” having had ChatGPT create an illustration of the wrong Hayek to accompany this link. (And I have no idea why there’s text on the back of her laptop. Was it over when ChatGPT bombed Pearl Harbor?)