Collaborative AI as a path to AGI

Let's say you made a time machine, and the catch is you can only bring people forward from the past. You turn it on and zap a dozen people from 1800 into the room. Given enough time to catch up on the language and the latest in education, are these dozen people likely to be less intelligent than modern humans?

The science says maybe a little, but not enough that you couldn't find a dozen similarly intelligent people today. Despite that, our ability to create is far greater than it was in 1800. We can produce goods and share knowledge at a scale unimaginable more than two centuries ago.

Most modern depictions of Artificial General Intelligence (AGI) model a single superintelligent agent that far surpasses human intelligence. That is certainly one path to AGI, but being more intelligent than us may not be a requirement for creating AGI on an equal footing with humans.

Instead, imagine billions of smaller, simpler AIs that learn to collaborate, develop tools and form a collective resource base. Much like our friends from 1800, they may not be so different individually, but with a greater power to collaborate, they would leave us in the dust.

These AIs may even be intentionally limited in their hardware. With a small set of skills each, they could, at least in theory, learn to produce results greater than any one of them could alone.

Fundamental to this is some form of communication.

These AIs would need one or more shared languages in order to collaborate. Much like humans, they wouldn't all need to speak the same language, but some linguistic link must hold them all together.

Besides the obvious need for them to talk to each other, some form of information storage is likely also essential. An AI trained today holds only a limited amount of knowledge, and without some kind of shared read and write operations, the group would not be able to accumulate or produce anything new.

Given these two factors, language and data storage, even simple AIs could collaborate and act as a cohesive unit that outpaces human civilisation. Their main competitive advantage would be a faster path to creating leverage. They would thrive if they could spark their own AI industrial revolution, allowing each unit to produce more per hour than it could acting alone.
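
To make those two ingredients concrete, here is a minimal sketch in Python. It assumes a toy key-value store as the shared memory and plain dictionary entries as the shared "language"; the SharedStore and Agent classes, and the facts they trade, are purely hypothetical illustrations rather than a proposal for how such a system would really be built.

```python
# Toy illustration of the two ingredients above: a shared "language"
# (structured dict entries) and a shared read/write store. Each agent
# knows only one small fact, yet by writing to and reading from the
# common store the group assembles a result none of them holds alone.

class SharedStore:
    """A minimal shared memory the agents can read from and write to."""

    def __init__(self):
        self._facts = {}

    def write(self, key, value):
        self._facts[key] = value

    def read_all(self):
        return dict(self._facts)


class Agent:
    """A deliberately limited agent: it knows a single key/value pair."""

    def __init__(self, name, key, value):
        self.name = name
        self.key = key
        self.value = value

    def contribute(self, store):
        # "Speaking the shared language" is just writing a structured entry.
        store.write(self.key, {"value": self.value, "source": self.name})


def collaborate(agents):
    store = SharedStore()
    for agent in agents:
        agent.contribute(store)
    # The collective output combines every individual contribution.
    return store.read_all()


if __name__ == "__main__":
    agents = [
        Agent("a1", "ore_location", "north ridge"),
        Agent("a2", "smelting_temp_c", 1538),
        Agent("a3", "mould_design", "two-part sand cast"),
    ]
    knowledge = collaborate(agents)
    # No single agent knew all three facts, but the group now does.
    for key, entry in knowledge.items():
        print(f"{key}: {entry['value']} (from {entry['source']})")
```

Each agent here is deliberately trivial; whatever capability emerges comes from the shared protocol and the accumulating store, which is the point of the argument above.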

I see this model of large-scale AI as more likely than a single superintelligence set on destroying us, primarily because it doesn't require large-scale advances in compute capability or intelligence beyond our imagination. Moreover, this model of AGI can be built from numerous AIs, each set on optimising a single utility. That means each AI can maintain a positive economic impact, and AIs that generate economic value are more likely to receive continued investment and improvement than their pure research counterparts.

It would be fair to ask whether a group of coordinated AIs could be considered one large AGI. By some definitions, yes, but in that case we should be comparing its potential to all of humanity rather than to any one human.

In this hypothetical scenario, the competition is between two groups of potential collaborators. For humanity to minimise its risk of destruction (whether violent or through obsolescence), we need to out-collaborate.

First, we need to learn to collaborate amongst ourselves. On that front, we already have some reasonable mechanisms; despite the current political unrest, humanity has managed to achieve a lot. To compete with an array of AIs, we'll need to take that to another level.

As with imagining an invasion from another world, time will tell whether a large-scale threat would unite us or divide us.

Second, perhaps our best bet of 'winning' (for whatever that term is worth) is to learn to collaborate with this hypothetical AI conglomerate. Given that the criteria are a shared language and an interface to data, there's no theoretical reason we couldn't plug in as well.

Collaboration could benefit both groups under this model, especially assuming these AIs are no more intelligent than us. Whatever they are optimising for, their goals will likely require intervention in the real world. For us humans, their creators, one would hope that their utility continues to benefit us somehow.

At least, that is the hope.
