July 17, 2015

Dangers of Artificial Intelligence Research to Humans



Science research run amok is an issue I raised with a research scientist at the Broad Institute at MIT, where researchers are investigating how the mind works. I asked, "What if scientists learned how to completely control the mind?" Someone always comes along to use new technology for evil purposes. He said, "You can't stop it." If he is right, the same conclusion applies to AI research. Research scientists are so intent on learning everything they can that they cannot be stopped. A detail from the painting The Blind Leading the Blind, by Pieter Bruegel the Elder, shows how close to oblivion scientists lead us in their never-ending search to know.


The Blind Leading the Blind

[From article]
Artificial intelligence has the potential to be as dangerous to mankind as nuclear weapons, a leading pioneer of the technology has claimed.
Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be 'driving off a cliff' with the rapid development of AI.
He fears the technology could too easily be exploited for use by the military in weapons, putting them under the control of AI systems.
Professor Russell, who is a researcher at the University of California, Berkeley and the Centre for the Study of Existential Risk at Cambridge University, compared the development of AI to the work that was done to develop nuclear weapons.
[. . .]
In an interview with the journal Science for a special edition on Artificial Intelligence, [Professor Russell] said: 'From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy".
'The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.
'Both seem wonderful until one thinks of the possible risks.
[. . .]
Professor Russell, however, cautions that this unchecked development of technology can be dangerous if the consequences are not fully explored and regulation put in place.
[. . .]
'To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, "Let's hope I run out of gas soon!"'
In April Professor Russell raised concerns at a United Nations meeting in Geneva over the dangers of putting military drones and weapons under the control of AI systems.
He joins a growing number of experts who have warned that scenarios like those seen in films such as Terminator, AI and 2001: A Space Odyssey are not beyond the realms of possibility.
[. . .]
Professor Russell said computer scientists needed to modify the goals of their research to ensure human values and objectives remain central to the development of AI technology.

http://www.dailymail.co.uk/sciencetech/article-3165356/Artificial-Intelligence-dangerous-NUCLEAR-WEAPONS-AI-pioneer-warns-smart-computers-doom-mankind.html

'Artificial Intelligence is as dangerous as NUCLEAR WEAPONS': AI pioneer warns smart computers could doom mankind
Expert warns advances in AI mirror research that led to nuclear weapons
He says AI systems could have objectives misaligned with human values
Companies and the military could allow this to get a technological edge
He urges the AI community to put human values at the centre of their work
By RICHARD GRAY FOR MAILONLINE
PUBLISHED: 09:30 EST, 17 July 2015 | UPDATED: 10:12 EST, 17 July 2015
