Dr. Geoffrey Hinton has gone from being “The Godfather of AI” to doomsayer, warning that as Big Tech sprints to win the artificial intelligence race, AI could create killer robots and ultimately threaten humanity, The New York Times reports.
Not only can AI spew disinformation with calamitous consequences, but it could be recklessly used in wars, says Hinton, 75, who has devoted his life to creating neural networks.
AI is on the verge of eclipsing human intelligence and eliminating more jobs than currently thought—and could readily be exploited by bad actors, warns the British expatriate, who first embraced neural networks, the mathematical foundation of modern AI, in 1972 as a graduate student at the University of Edinburgh.
On Monday, Hinton publicly voiced his fears about AI, having quit his job at Google so he could speak freely. Hinton joins a growing chorus of scientists and business leaders warning that AI developers, like ChatGPT’s OpenAI, are blindly racing toward unimaginable perils.
“It’s hard to see how you can prevent the bad actors from using it for bad things,” says Hinton, who regrets developing AI, even as it has been celebrated as a possible breakthrough for medical advances and education.
Hinton notified Google on Thursday that he was leaving after a decade at the company, speaking by phone with Sundar Pichai, CEO of Google parent company Alphabet.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” says Hinton, who believes Google ceased being a responsible AI steward a year ago. He declined to share details of his conversation with Pichai.
As far back as the 1980s, most AI research in the U.S. was funded by the Defense Department. Hinton, a Carnegie Mellon computer science professor at the time, was reluctant even then to take Pentagon funding, saying he was adamantly opposed to “robot soldiers.”
In fact, one-third of natural language processing researchers say artificial intelligence could cause a “nuclear-level catastrophe,” according to a recent survey conducted by Stanford University.
AI supporters say the technology is at an inflection point akin to the introduction of the web browser in the early 1990s.
Google chief scientist Jeff Dean said in a statement: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
In March, after OpenAI released a new ChatGPT version, more than 1,000 technology leaders and researchers, including AI pioneer Yoshua Bengio and Tesla CEO Elon Musk, signed an open letter calling for a six-month moratorium on the development of the most powerful AI systems.
They said it poses “profound risks to society and humanity.”
Musk warned that AI has the potential to rapidly devolve into a faux “digital god” that can influence humans to make self-destructive decisions.
Most recently, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence issued their own warning.
Hinton said he did not sign those letters because he first wanted to break ties with Google, which bought an AI company he created, for $44 million, in 2012.
The computer scientist is also anxious about the pace at which AI is developing—and what it is morphing into.
“The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off,” Hinton says. “I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
“Look at how it was five years ago, and how it is now,” Hinton continues. “Take the difference and propagate it forwards. That’s scary.”
Scientists as Saviors
Pausing the development of AI is unrealistic, Hinton says, particularly since Google and Microsoft are fiercely competing to advance it, while other technology companies enter the fray and threaten to upend the competition.
The only possible solution is to trust scientists to cooperate on global regulation of artificial intelligence, Hinton suggests.
But even that is unlikely, Hinton says, since global competitors are naturally averse to sharing information—and AI systems could slip free of human control by writing software code on their own.
Perhaps most telling of AI’s potential menace—and temptation—is Hinton’s penchant for paraphrasing Robert Oppenheimer, who spearheaded the U.S. creation of the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
© 2023 Newsmax Finance. All rights reserved.