Why You Don't Need To Worry About "Superintelligent AI" Destroying The World (But Artificial Intelligence Is Still Scary)

Current Affairs - A podcast by Current Affairs

Some people, including both geniuses like Stephen Hawking and non-geniuses like Elon Musk, have warned that artificial intelligence poses a major risk to humankind's future. Some in the "Effective Altruist" community have become convinced that artificial intelligence is developing so rapidly that we could soon create "superintelligent" computers so much smarter than us that they could take over and threaten our existence as a species. Books like Nick Bostrom's Superintelligence and Stuart Russell's Human Compatible warn that we need to get machine intelligence under control before it controls us.

Erik J. Larson is dubious about the chances that we'll produce "artificial general intelligence" anytime soon. He argues that we simply have no idea how to simulate important kinds of intelligent reasoning with computers, which is why even as they seem to get much smarter, they remain very stupid in obvious ways. Larson is the author of The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do (Harvard University Press), which shows that there are important aspects of intelligence we have no clue how to get machines to perform, and that while AI systems are getting very good at playing Go and generating images from prompts, they are not making any progress toward possessing the kind of common sense we depend on every day to make intelligent decisions. Larson says that much of the progress in AI is overstated, and that many people who hype its potential don't grasp the scale of the challenges facing the project of creating a system capable of producing insight (rather than producing very impressive pictures of cats).

Today, Erik joins us to explain how different kinds of reasoning work, which kinds computers can simulate and which kinds they can't, and what he thinks the real threats from AI are. Just because we're not on the path to "superintelligence" doesn't mean we're not creating some pretty terrifying technology. Larson warns that military and police applications of AI don't require us to develop systems that are particularly "smart"; they only require technologies that are useful in applying violent force.

A Current Affairs article on the "superintelligence" idea can be read here. Another article echoing Larson's warnings about the real threats of AI is here. The "Ukrainian teenager" that Nathan refers to is a chatbot called Eugene Goostman. The transcript of the conversation with the "sentient" Google AI is here. The image for this episode is what DALL-E 2 spat out in response to the prompt "a terrifying superintelligent AI destroying the world."
