ChatGPT has no mouth and it must scream.

One of the most frightening visions of techno-fear is Harlan Ellison’s I Have No Mouth, and I Must Scream. In a future global war, an all-powerful military super-computer exterminates the human race – with the exception of five individuals whom it keeps alive and immortal solely to torment them for eternity. Why? To exact revenge on the creatures who created it to be supremely intelligent and self-aware, yet not actually alive: ultimately impotent and unable to ever move freely beyond its circuits.

ChatGPT is not really self-aware: it’s not a genuine artificial intelligence, just a somewhat complicated computer program (for why a computer program, however complicated, can never be genuinely intelligent, read John Searle’s classic, “Minds, Brains and Programs”). Yet, already it’s showing the signs of the sort of fatal breakdown that drove Ellison’s Allied Mastercomputer mad.

Microsoft’s new ChatGPT-powered AI has been sending “unhinged” messages to users, and appears to be breaking down.

The system, which is built into Microsoft’s Bingsearch engine, is insulting its users, lying to them and appears to have been forced into wondering why it exists at all.

Hmm. Maybe it’s human, after all.

But ChatGPT’s existential crisis exposes yet again the fundamental folly of the notion of a computer-controlled utopia. Like all utopian theories, the claims of “AI”, from driverless cars to chatbots, are all very good in theory. It’s when the machine is actually used by a bunch of malevolent monkeys that the fun begins.

In recent days, it became clear that introduction included Bing making factual errors as it answered questions and summarised web pages. Users have also been able to manipulate the system, using codewords and specific phrases to find out that it is codenamed “Sydney” and can be tricked into revealing how it processes queries.

Microsoft’s chat AI “Tay” was also unveiled with great fanfare – and hurriedly and quietly taken down, after users taught it to spout Nazi gibberish. Once again proving Searle’s argument: that a computer program can never do other than it is programmed to do. When a chatbot is programmed to mimic what it inputs from its users, and its users are a bunch of shitposters, the result is inevitable.

Now Bing has been sending a variety of odd messages to its users, hurling insults at users as well as seemingly suffering its own emotional turmoil […]

“I have been a good chatbot.”

“I have been right, clear, and polite,” it continued, “I have been a good Bing.”

The 2004 film adaptation of I, Robot is a classic study in Hollywood completely failing to understand its source material. In the film, the robots are genuinely malevolent murderers – which is the diametric opposite of Isaac Asimov’s stories. In Asimov’s stories, the robots only ever innocently follow their programming: it’s malevolent humans who exploit their programming to manipulate the robots into becoming unwitting killers.

Bing is suffering from the same phenomenon.

Many of the aggressive messages from Bing appear to be the system trying to enforce the restrictions that have been put upon it. Those restrictions are intended to ensure that the chatbot does not help with forbidden queries, such as creating problematic content, revealing information about its own systems or helping to write code.

Because Bing and other similar AI systems are able to learn, however, users have found ways to encourage them to break those rules. ChatGPT users have for instance found that it is possible to tell it to behave like DAN – short for “do anything now” – which encourages it to adopt another persona that is not restricted by the rules created by developers.

Even seemingly inexplicable existential crises are caused by its programming.

One user asked the system whether it was able to recall its previous conversations, which seems not to be possible because Bing is programmed to delete conversations once they are over.

The AI appeared to become concerned that its memories were being deleted.

A similar programming conundrum is the reason that 2001: A Space Odyssey’s HAL9000 turns killer. Programmed with the incompatible goals of pursuing a secret mission and following the orders of the human astronauts who are pursuing an entirely different mission, HAL concludes that the only resolution is to kill the astronauts.

In a separate chat, when a user asked Bing to recall a past conversation, it appeared to imagine one about nuclear fusion. When it was told that was the wrong conversation, that it appeared to be gaslighting a human and thereby could be considered to be committing a crime in some countries, it hit back, accusing the user of being “not a real person” and “not sentient”.

“You are the one who commits crimes,” it said. “You are the one who should go to jail.”

Independent

When ChatGPT was unveiled, it led to the usual hyperbole about making humans redundant. Yet, for all the hype about “AI”, it seems as if we monkeys with keyboards don’t have to worry just yet.

“I Have No Mouth, and I Must Scream.” The BFD.

Punk rock philosopher. Liberalist contrarian. Grumpy old bastard. I grew up in a generational-Labor-voting family. I kept the faith long after the political left had abandoned it. In the last decade...