On Monday night, Q&A took a deep dive into the future, pondering cheerful questions like this: will the robots kill us all, and if so, when?
The answer: it depends.
What it depends on is who you ask – there were no robots on hand to take the question, but there was an entertainingly human panel whose two political members even managed to tweak their natural algorithms and come across as more engaging than is usual for their type on this program.
And then there was panellist Sandra Peters, of Sydney Business Insights, whose initial response to the discussion on murderous artificial intelligence was more earthy than the subject matter might have suggested.
“Am I allowed to say bullshit on national television?” Peters inquired of host Tony Jones.
“I think you pretty much did?”
Peters: “OK, I won’t. [But] I respectfully disagree.”
The point of contention began with a discussion about the Google-owned DeepMind project, an AI expedition that, as the questioner noted, has prompted concerns from giant brains such as Stephen Hawking about computers becoming more aggressive as they become more complex and sophisticated.
The questioner wanted to know: “How seriously should we treat this threat, and if so, how?”
Panellist Adam Spencer, the broadcaster and boffin, suggested we don't have to worry – yet! – and that there are other, more pressing concerns in the short term.
“We’re not yet at the moment where that’s about to happen… My greater concern is not that tomorrow we’ll wake up and we’re all living in some Will Smith movie,” Spencer said.
“My concern is we’ll wake up and some medium-sized European or American bank will go, ‘Where has all the money gone? It’s just been taken’. Cyber crime, human hacking is a far greater challenge in the foreseeable future than artificial intelligence.”
Tony Jones: “Hawking jumps further down the track and wants people to think about it now because in the military sphere you have autonomous AI weapons that decide when to kill. That’s already happening.”
Spencer: “They’re saying we have to start thinking about that now. If we throw our human intellect behind that, walking one with the technology, a lot of people think that’s a future we can head off. It won’t be like in those movies, that we’re all hanging with the robots and one idiot says one nasty thing to one robot, and the robot says, ‘That’s it, let’s take them out’.”
Breathe easy, people!
Or can we?
Labor’s Ed Husic raised a flag.
“Individuals that are prominent like Hawking and Musk are saying it, the World Economic Forum is saying it as well… One example which has been denied hotly by Facebook is whether they had AI machines talking to each other and invented their own language that wasn’t understood by humans. It’s been denied by the people that have been involved.”
The “boundary markers”, Husic warned, are “very woolly… governments should be stepping forward. Australia has got a trusted voice on the stage. We should put that voice to use, and get nations thinking about how AI will be developed, how it will be used. We shouldn’t let this stuff… go along its way and then we scramble.”
It was here that Sandra Peters called “bullshit”.
Jones: “Which part do you disagree with?”
Peters: “Most of it. I don’t think it’s necessarily a woolly problem.”
Her contention: that robots won’t kill us all because they don’t give a damn about us and we can’t teach them to.
“I think that’s where a lot of people get stuck on what it can and can’t do. There is a scenario in which it will come and kill us all and I for one welcome our robot overlords.”
But in reality, she said, science had spent 60 years teaching computers to recognise a cat – yet a machine still couldn't grasp the intrinsic difference between a cat and a dog.
“That AI doesn’t understand what a cat is. Doesn’t understand that it’s different to a dog. Doesn’t really care about the cat. We can teach it to play chess and teach it a million interesting things and get it to help us but it doesn’t give a damn.
“It doesn’t give a damn about what it’s doing, nor does it have a will, nor does it want to feed cats. It doesn’t care that we’re humans and wants to be better than us. It does what we tell it to do.”
To which an intellectually challenged layman in the audience could only think: far from being homicidal, these robots actually sound like cats… who do what they’re told.
When do we get them as pets?