$%!T in $%!T out: Is it turtles all the way down?
"Do generative AI algorithms contribute needed serendipity to the design process—or simply randomness—or worse, chaos?" — GenAICHI 2022
Something about AI systems has been bugging me…
And what does this have to do with turtles?
A wise man was asked about the nature of the world. He replied that the world was supported by a giant World Turtle, which in turn had to be supported by another turtle, and so on. He was asked again and again what supported the next turtle, until he finally replied, "It's turtles all the way down."
The turtle story is an old analogy for infinite regress: the idea that something is supported by another thing, which is in turn supported by something else, and so on, ad infinitum. It gets used to describe plenty of real situations where there is no ultimate foundation or explanation, and things are simply supported by other things, with no end to the chain of support. (Cough capitalism.)
We (I mean Wikipedia) define a system as a group of interacting or interrelated elements that act according to a set of rules to form a unified whole. We are biological systems, we are surrounded by and participating in the systems we created (political, family, neighborhood, and financial systems), all encapsulated within systems we didn’t create (ecological, solar, galactic, and so on).
Chaos
When you take a system and get all “Human Centipede” with it, as in making the output directly influence the input, each iteration amplifies tiny differences in the initial conditions exponentially, leading to unexpected and seemingly random behavior. Systems that are recursive in function and self-referential in data… well, they love chaos.
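Here is a minimal sketch of that sensitivity (my own illustration, not anything from an AI pipeline), using the logistic map, a textbook chaotic system. The parameter r = 3.9 and the two starting values are just illustrative choices: two inputs that differ by one part in a million end up nowhere near each other once the output keeps getting fed back in as the input.

```python
# A minimal sketch of how feeding output back in as input blows up tiny
# differences: the logistic map, a textbook chaotic system.
# The parameter r = 3.9 and the two starting values are illustrative choices.

def logistic_map(x, r=3.9):
    """One iteration: this output becomes the next input."""
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # two inputs that differ by one part in a million
for step in range(1, 31):
    a, b = logistic_map(a), logistic_map(b)
    if step % 10 == 0:
        print(f"step {step}: a = {a:.6f}, b = {b:.6f}, gap = {abs(a - b):.6f}")

# After a few dozen iterations the two trajectories bear no resemblance to each
# other -- "seemingly random behavior" born from a microscopic difference in input.
```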
Take predictive policing as an example: imagine there has been a history of biased policing in certain communities. Algorithms trained on police records from those areas would naturally predict more crime in those communities, leading to more policing and potentially more arrests, and the cycle repeats. We know this happens and are trying to make these systems less biased.
However, the self-reinforcing feedback loop of biased policing leading to biased crime predictions, which leads to more biased policing, would likely amplify the bias over time; even if a community starts out with a minuscule amount of bias, it can grow quite significant after several iterations.
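To make that compounding concrete, here is a toy simulation I put together; it is not modeled on any real predictive-policing system, and all the numbers are invented. Two areas have identical true crime rates, the historical records carry a 51/49 skew, and the policy is to send most patrols wherever the records look worst, with patrols only recording the crime they are present to see.

```python
# Toy simulation of the feedback loop described above. Assumptions (all mine):
# both areas have identical true crime, patrols only record crime they witness,
# and the "hotter" area in the records gets 70% of patrols each year.

true_crime = 100                   # identical yearly incidents in area A and area B
records = {"A": 51.0, "B": 49.0}   # tiny historical skew: 51% vs 49% of recorded crime

for year in range(1, 11):
    # "Prediction": rank the areas by recorded crime; the hotter one gets more patrols.
    hot, cold = sorted(records, key=records.get, reverse=True)
    patrol_share = {hot: 0.7, cold: 0.3}
    # Recorded crime only grows where patrols are actually present to observe it.
    for area in records:
        records[area] += true_crime * patrol_share[area]
    share_a = records["A"] / sum(records.values())
    print(f"year {year}: area A holds {share_a:.1%} of all recorded crime")

# The initial two-point gap widens every year even though the true crime rates
# never differ: the prediction keeps generating the data that confirms it.
```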
Incompleteness
Gödel's incompleteness theorem, as I understand it, says that a formal system is incomplete if there are true statements that cannot be derived from the axioms or rules of the system.
Take Midjourney for example. Its model contains a multitude of axioms, patterns, and rules for generating art, but it is unlikely that, as a system for generating illustrations, it will ever be complete. AI models, as we currently train them, are likely to produce similarly incomplete systems.
In the short run, meh, whatever.
In the long run (and I don’t know if long means decades, centuries, or millennia…), are we limiting ourselves into a finite subset of possibilities?
Will we have an era of self-referential mundanity?
Is it going to be turtles all the way down?
Great thoughts Eric. This reminds me of what I call "The Library Problem." In other words, that digital media has tended to kill the useful, curated serendipity of wandering around in an old-school library looking for something. If LLMs now become part of search engines, will we lose the interesting accidents we often run into as we search without "intelligence"? And as you ask, will a bad kind of chaos be introduced with it? My take is that we need multiple, simultaneous "dumb-smart" perspectives to make use of so we don't get trapped in the turtles all the way down.