Experts have called for global regulation to prevent out-of-control artificial intelligence systems that could end up “eliminating the whole human race”.

Researchers from Oxford University told MPs on the science and technology committee that just as humans wiped out the dodo, AI machines could eventually pose an “existential threat” to humanity.

The committee “heard how advanced AI could take control of its own programming”, said The Telegraph.

“With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone,” said doctoral student Michael Cohen. If it is smarter than humans “across every domain” it could “presumably avoid sending any red flags while we still could pull the plug”.

Michael Osborne, professor of machine learning at Oxford, said that “the bleak scenario is realistic”. This is because, he explained, “we’re in a massive AI arms race… with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI”.

There are “some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons”, he said, adding that “AI is as comparable a danger as nuclear weapons”.

He hoped that countries across the globe would recognise the “existential threat” from advanced AI and agree treaties that would prevent the development of dangerous systems.

“Similar concerns appear to be shared by many scientists who work with AI,” said The Times, pointing to a survey in September by a team at New York University. It found that more than a third of 327 scientists who work with artificial intelligence agreed it is “plausible” that decisions made by AI “could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.

As the Daily Mail put it: “The doomsday predictions have worrying parallels to the plot of science fiction blockbuster The Matrix, in which humanity is beholden to intelligent machines.”

All in all though, said Time magazine when the New York University research came out, “the fact that ‘only’ 36% of those surveyed see a catastrophic risk as possible could be considered encouraging, since the remaining 64% don’t think the same way”.
