Should You Fear AI?

AI takeover—the hypothetical event wherein computers or robots take over the world and obliterate humankind—is a common trope in science fiction books and apocalyptic movies. But is superintelligent AI really something we should fear?
In this TED Talk, scientist and philosopher Grady Booch thinks not. While movies like The Matrix, Metropolis, and The Terminator stoke humans’ fears of being supplanted by technology (that is, of developing technology far too advanced for our own good), we forget, in Booch’s view, an important point. Engineers are not looking to build sentient machines; they are looking to build “simple brains” that carry out specific tasks. And even if engineers did manage to build systems with a theory of mind and ethical and moral foundations, he argues, we would teach them our own moral systems, not ones that would try to subvert us. Besides, we can always unplug what we have built.
But is Booch too optimistic about the innocence of superintelligent AI? Is there a technology whose development worries you? Share your comments below, and check out his TED Talk here: