Paper Claims AI May Be a Civilization-Destroying "Great Filter"
May 06, 2023
If aliens are out there, why haven't they contacted us yet? It may be, a new paper argues, that they — or, in the future, we — inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.
This potential answer to the Fermi paradox — in which physicist Enrico Fermi and subsequent generations pose the question: "where is everybody?" — comes from National Intelligence University researcher Mark M. Bailey, who in a new yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.
Bailey cites superhuman AI as a potential "Great Filter": some terrible and unknown threat, artificial or natural, that wipes out intelligent life before it can make contact with other civilizations.
"For anyone concerned with global catastrophic risk, one sobering question remains," Bailey writes. "Is the Great Filter in our past, or is it a challenge that we must still overcome?"
We humans, the researcher notes, are "terrible at intuitively estimating long-term risk," and given how many warnings have already been issued about AI — and its potential endpoint, an artificial general intelligence or AGI — it's possible, he argues, that we may be summoning our own demise.
"One way to examine the AI problem is through the lens of the second species argument," the paper continues. "This idea considers the possibility that advanced AI will effectively behave as a second intelligent species with whom we will inevitably share this planet. Considering how things went the last time this happened — when modern humans and Neanderthals coexisted — the potential outcomes are grim."
Even scarier, Bailey notes, is the prospect of near-god-like artificial superintelligence (ASI), in which an AGI surpasses human intelligence because "any AI that can improve its own code would likely be motivated to do so."
"In this scenario, humans would relinquish their position as the dominant intelligent species on the planet with potential calamitous consequences," the author hypothesizes. "Like the Neanderthals, our control over our future, and even our very existence, may end with the introduction of a more intelligent competitor."
There is, of course, no direct evidence yet to suggest that extraterrestrial AIs have wiped out natural life in any alien civilization, though in Bailey's view, "the discovery of artificial extraterrestrial intelligence without concurrent evidence of a pre-existing biological intelligence would certainly move the needle."
That, of course, raises the possibility that destructive AIs are lingering around the universe after eliminating their creators. To that end, Bailey helpfully suggests that "actively signaling our existence in a way detectable to such an extraterrestrial AI may not be in our best interest" because "any competitive extraterrestrial AI may be inclined to seek resources elsewhere — including Earth."
"While it may seem like science fiction, it is probable that an out-of-control... technology like AI would be a likely candidate for the Great Filter — whether organic to our planet, or of extraterrestrial origin," Bailey concludes. "We must ask ourselves; how do we prepare for this possibility?"
Reader, it's freaky stuff — but once again, we're glad someone is considering it.
More on an AI apocalypse: Warren Buffett Compares AI to the Atom Bomb