A researcher and SETI (Search for Extraterrestrial Intelligence) leader has said that he believes artificial intelligence must soon be regulated by the ruling class. Michael Garrett, a radio astronomer at the University of Manchester and the director of the Jodrell Bank Centre for Astrophysics, says that AI could wipe out humanity within 200 years.

The real problem here is asking the ruling class, which is forever up to nefarious, controlling schemes, to regulate a system it could use to its own advantage. Anyone paying attention knows the rulers do not have humanity’s best interests in mind.

Garrett says the Fermi Paradox could explain why humans have yet to find concrete evidence of intelligent life. The paradox captures a simple question: in a universe so vast and old, how can it be that no other civilizations are sending us radio signals? Could it be that, once civilizations develop AI, it’s a short ride into oblivion for most of them?

In his peer-reviewed paper, published in Acta Astronautica, the journal of the International Academy of Astronautics, he weighs theories about artificial superintelligence against concrete observations from radio astronomy.

According to a report by Popular Mechanics, Garrett explains in the paper that scientists grow more and more uneasy the longer we go without hearing any sign of other intelligent life. “This ‘Great Silence’ presents something of a paradox when juxtaposed with other astronomical findings that imply the universe is hospitable to the emergence of intelligent life,” he writes. “The concept of a ‘great filter’ is often employed – this is a universal barrier and insurmountable challenge that prevents the widespread emergence of intelligent life.”

Right now, AI is nowhere near advanced enough to wipe humanity off the globe, but Garrett estimates that possibility could be only 100 to 200 years away. In Garrett’s model, advanced civilizations (though we’d question how “advanced” we really are, considering most people still believe they need a master and a ruling class) get just a couple hundred years in their AI eras before they blip off the map.

Measured against cosmic distances and the very long course of cosmic time, such tiny timeframes mean almost nothing; the odds of our search window overlapping with another civilization’s signals effectively reduce to zero, which, he says, fits with SETI’s current success rate of 0 percent. “Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations,” Garrett explains.
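
This logic maps onto the classic Drake equation, where the expected number of currently detectable civilizations scales linearly with L, the average lifetime of a signaling civilization. The sketch below is only an illustration of that arithmetic; the parameter values are assumptions chosen for demonstration, not figures taken from Garrett’s paper.

```python
# Illustrative Drake-equation arithmetic: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values below are assumed for illustration only; they are
# not from Garrett's paper, and most are highly uncertain.

def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Expected number of currently detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

params = dict(
    r_star=2.0,  # star formation rate (stars/year), assumed
    f_p=1.0,     # fraction of stars with planets, assumed
    n_e=0.2,     # habitable planets per planet-bearing star, assumed
    f_l=0.5,     # fraction of those where life arises, assumed
    f_i=0.1,     # fraction developing intelligence, assumed
    f_c=0.1,     # fraction producing detectable signals, assumed
)

# N scales linearly with L, so capping a civilization's "loud" phase
# at roughly 200 years (Garrett's estimate) collapses the expected count.
for L in (200, 10_000, 1_000_000):
    print(f"L = {L:>9,} years -> N = {drake_n(lifetime_years=L, **params):.3f}")
```

With these assumed values, an L of 200 years yields an expected count well below one, while a million-year lifetime yields thousands, which is the intuition behind matching a short AI era to SETI’s zero detections.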
