Proposed Sign for "Dangerous Artificial Intelligence"
The Artificial Intelligence Threat to Humanity: Skynet Rising?
The worst-case technological singularity scenario is unimpeded artificial (machine) intelligence exceeding human (biological) intelligence: humans create an artificial intelligence greater than themselves and fail to constrain it. The singularity occurs because the future cannot be predicted beyond this event horizon. In this scenario there is no merger of humans and machines into cyborgs, no transition to a transbiological existence. Humans are left behind as evolutionary artifacts, much as the Neanderthals were.
It is machines versus humans, with humans attempting to contain the superior artificial intelligence. Is this event inevitable and unstoppable as technology advances at an accelerating rate? Could this nightmare singularity be prevented by imprisoning an artificial super-intelligence? Long-term imprisonment of a super-intelligence will most likely fail; it is a last-ditch, futile effort by a then-obsolete life-form to justify its superseded and antiquated existence.
Skynet Rising: The AI Threat to Humanity's Existence with Dr. Roman V. Yampolskiy
Alex talks with Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, who recently wrote an article about the danger that AI and super-intelligent computers pose to humanity. Dr. Yampolskiy is trained in the fields of programming, forensics, biometrics and artificial intelligence.
Humanity Must 'Jail' Dangerous AI to Avoid Doom, Expert Says
Super-intelligent computers or robots have threatened humanity's existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware.
Keeping the artificial intelligence (AI) genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever AI cannot simply threaten, bribe, seduce or hack its way to freedom.
"It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks — it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."
HAL 9000 AI in 2001: A Space Odyssey
Skynet AI in The Terminator