AGI as Distributed Intelligence
The rise of AI as network omniscience, not independent intelligence
Andrea Wulf’s book The Invention of Nature, about the life and work of Alexander von Humboldt, is a fascinating read that has one thinking about the organic world in a new way. “In a mechanical system the parts shaped the whole while in an organic system the whole shaped the parts.” As we examine the frontiers of tech and AI, we might take a look inward, and outward at nature.
We talk a lot about the goal of getting to Artificial General Intelligence (AGI), the notion that AI can be context independent. We’ve built AI that can be narrowly generative within a set of parameters or paradigms (make me an illustration of a thing, in the style of Jean-Michel Basquiat if he were an Impressionist), but this is still largely context dependent, and based on a specific learning model. Of course, by many definitions of AGI from a decade or two back, we’ve already arrived. It’s hard to remember how far we’ve moved the goalposts, so to speak.
But when we look at human intelligence, it’s interesting to note that it’s also context dependent. A human being removed from society and placed in the jungle, unexposed to the data set of society on which to “train,” might not be so dissimilar from a human of a thousand years ago. Our raw DNA and intellectual horsepower haven’t likely changed very much, but the data sets on which we learn and train, as humans immersed in society, have. The human in the jungle a thousand years ago and the human today, without an ounce of societal training data, might come up with some basic tools, figure out how to hunt, build fire, or survive, but independent of society they’d not be doing much more than our ancestors 50 or 100 generations back. In other words, human intelligence is also very context specific, heavily informed by training data rather than raw intelligence. It’s less about compute power than about quality of training data. Each human can be narrowly smart, informed by a set of experiences and circumstances. Where humans are broadly, or “generally” in the AI sense, intelligent is as a collective, as a species, as a society rich with training data. Individual human intelligence is nowhere near omniscient, but one could argue that the human species, taken as a whole, might be “generally” intelligent or omniscient.
In other words, as a society we have Human General Intelligence, or HGI. Individual Human Intelligence, or HI, is not at all generally intelligent; it is very context dependent.
Taking this concept further to think about AI, I might argue that we aren’t going to have a single-source, unified Artificial General Intelligence, but that it might evolve similarly to, and perhaps in parallel with, Human General Intelligence. We will have hundreds of very context-dependent AIs, like smart humans that specialize in this or that, and the arrival of AGI will be much like the arrival of HGI: it will be a collective intelligence, the whole being greater than the sum of the parts. It’s not as if, suddenly in the course of human history, one human became all-knowing; but it is possible to argue that humanity, as a collective, has become generally all-knowing.
Right now we’re debating OpenAI versus Google versus Microsoft as platform wars, but the future world of AI might be far more composite, disparate, and atomized, with large language models (LLMs) specific to thousands of topics or disciplines and AI trained specifically, in a context-dependent way, for each one. So the arrival of AGI won’t be some big bang moment, but the slow accumulation of expertise across hundreds of domains, the aggregation of which, over time, might be called generally intelligent.
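To make the idea a little more concrete, here is a minimal, purely illustrative sketch, in Python, of what such a composite system might look like: many narrow, context-dependent specialists and a simple router that picks one per query. All the names here (DomainModel, CompositeIntelligence, the keyword routing) are hypothetical stand-ins for whatever real domain-specific LLMs and routing mechanisms emerge; it is a thought experiment in code, not a reference to any actual product or API.

```python
# A hypothetical sketch of "composite AGI": many narrow, context-dependent
# specialists, each trained on its own domain, with a naive router choosing
# which one answers a given query. Names and logic are illustrative only.

from dataclasses import dataclass, field


@dataclass
class DomainModel:
    """Stand-in for one narrow AI trained on a single domain's data set."""
    domain: str
    keywords: set[str]

    def answer(self, query: str) -> str:
        return f"[{self.domain} specialist] response to: {query!r}"


@dataclass
class CompositeIntelligence:
    """The whole shaped by the parts: a registry of specialists, not one monolith."""
    specialists: list[DomainModel] = field(default_factory=list)

    def register(self, model: DomainModel) -> None:
        self.specialists.append(model)

    def answer(self, query: str) -> str:
        # Naive routing: pick the specialist whose keywords overlap the query most.
        words = set(query.lower().split())
        best = max(self.specialists, key=lambda m: len(m.keywords & words), default=None)
        if best is None:
            return "No specialist available."
        return best.answer(query)


if __name__ == "__main__":
    agi = CompositeIntelligence()
    agi.register(DomainModel("botany", {"plant", "leaf", "species"}))
    agi.register(DomainModel("geology", {"rock", "volcano", "mineral"}))
    print(agi.answer("what species is this plant with a waxy leaf"))
```

The point of the sketch is the structure, not the routing heuristic: the intelligence lives in the registry of specialists, and anything “general” emerges only from the aggregate.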
I had this debate recently with my brilliant friend Philip Buerger, whom I might credit with the original thought. The rise of AGI might arrive much like an organic structure, as laid out by von Humboldt, namely that “in a mechanical system the parts shaped the whole while in an organic system the whole shaped the parts.” The whole, or the idea of AGI, will be informed and shaped by the sum of many parts.
As such, the rise of OpenAI is an interesting first wave, but we might look one level deeper at the many private data sets that will become the backbones of hundreds of disparate LLMs on which a plurality of independent AI systems grow. Just as we have “gaggles” of geese, “floats” of crocodiles, or “murders” of crows, perhaps we’ll create a term to announce the arrival of a “society” or a “kernel” or a “cache” or an “analog” of AIs, as an organic structure, and together it might be known as AGI.
This way of seeing AGI makes sense to me. Just as neural networks make up each individual AI, those AIs in turn become nodes in a broader network of specialists, much like humans in society.
For me, though, it accelerates the need to think about what it means to be human (say, by rewarding and discussing great literature) and to think about governance. Governing many AIs may well prove harder than governing one big one.