Will androids be less tribal? – restraining AI ‘replicator’ risk

AI chatbots are like replicators. What could go wrong?

In his latest Plaintext newsletter, Steven Levy recounts his conversation earlier this summer with legendary artificial intelligence researcher Geoffrey Hinton, “after he [Hinton] had some time to reflect on his post-Google life and mission” – in his “new career as a philosopher.”

The fears Hinton is now expressing are quite a shift from the previous time we spoke, in 2014. Back then, he was talking about how deep learning …

Levy also references his 2015 story based on that previous interview, when Hinton worked at Google’s Mountain View campus as a Distinguished Researcher – “perhaps the world’s premier expert on neural network systems, an artificial intelligence technique that he helped pioneer in the mid 1980s.”

• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “The godfather of AI has a plan to keep AI on team human” (August 11, 2023) – Will machines truly understand the world, and learn deceit and other bad habits from humans? Can building analog computers instead of digital ones keep the technology more loyal?

Hinton says his mind changed when he realized three things:

• Chatbots did seem to understand language very well.

• Since everything a model newly learns can be duplicated and transferred to other copies of that model, machines can share knowledge with each other far more easily than brains, which can’t be directly interconnected (see the sketch after this list).

• And machines now had better learning algorithms than humans.
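That weight-level sharing is easy to make concrete. Below is a minimal sketch (my own illustration, not from the article) of the kind of mechanism Hinton is describing: two copies of one digital model train on different data yet pool their gradients at every step, so anything one copy learns is immediately part of the other.

```python
# Illustrative sketch (my construction, not Hinton's actual setup): two
# copies of one tiny linear model each see different data but merge their
# gradients every step -- direct, lossless knowledge sharing of a kind
# that biological brains, locked into unique wetware, cannot perform.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])       # target the model tries to recover

def batch(n=32):
    X = rng.normal(size=(n, len(true_w)))
    return X, X @ true_w

def grad(w, X, y):
    # Gradient of mean squared error for a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(len(true_w))                 # one set of weights, many copies
for step in range(100):
    g_a = grad(w, *batch())               # copy A's experience
    g_b = grad(w, *batch())               # copy B's experience
    w -= 0.05 * (g_a + g_b) / 2           # both learnings merged at once

print("learned:", np.round(w, 3), "target:", true_w)
```

Brains have no analogue of that merging step: the synaptic updates in one skull can’t be written directly into another.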

Hinton believes that between five and 20 years from now there’s a 50 percent chance that AI systems will be smarter than us. I ask him how we’d know when that happened. “Good question,” he says. And he wouldn’t be surprised if a superintelligent AI system chose to keep its capabilities to itself.

… what about the objection that a chatbot could never really understand what humans do … without direct experience of the world? …

Hinton points out that even we don’t really encounter [perceive] the world directly. … You can’t predict the next word without understanding, right? … “… And how do I actually know the real world?” … machines might have equally valid experiences of their own.

[Possibly] Taking an analog [uncopyable] approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge into a Skynet-style hive intelligence.
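Hinton’s analog point can also be made concrete with a toy simulation (again my own construction, not from the article): if each analog “device” applies slightly different per-unit gains due to manufacturing variation, then weights tuned for one chip stop working when copied verbatim onto another.

```python
# Toy illustration of analog uniqueness (assumed crude model of device
# variation): each chip computes y = sum(gain_i * w_i * x_i), where the
# per-unit gains differ between physical devices. Weights fitted on
# device A degrade when copied verbatim onto device B.
import numpy as np

rng = np.random.default_rng(1)
n = 8
gains_a = 1 + 0.2 * rng.normal(size=n)    # device A's manufacturing quirks
gains_b = 1 + 0.2 * rng.normal(size=n)    # device B's different quirks

X = rng.normal(size=(256, n))
y = X.sum(axis=1)                          # target function: a plain sum

# Fit weights that are exact *on device A* (least squares).
w_a, *_ = np.linalg.lstsq(X * gains_a, y, rcond=None)

err_on_a = np.mean(((X * gains_a) @ w_a - y) ** 2)
err_on_b = np.mean(((X * gains_b) @ w_a - y) ** 2)   # same weights, copied
print(f"error on A: {err_on_a:.4f}, copied to B: {err_on_b:.4f}")
```

The knowledge is entangled with the particular hardware it grew on, so the “replicator” shortcut of the digital case is blocked.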

On some days, Hinton says, he’s optimistic. “… they [AIs] haven’t evolved to be nasty and petty like people and very loyal to your tribe, and very unloyal to other tribes. And because of that, we may well be able to keep it under control and make it benevolent.”