AGI: (M)ending the World – it’s either one or the other (by Alberto Romero)

Imagine this: In front of you there’s a big magical button. You happen to know that, if you press it, there’s an indeterminate but non-zero chance that you’ll solve all the world’s problems right away. Sounds great! There’s a caveat, though. At the other end of the probability distribution lies a similarly tiny but very real possibility that you will, just as instantly, kill everyone.

Do you press it?


Superintelligence: Utopia or apocalypse?

That button is, as you may have imagined, a metaphor for the hypothetical AGI or superintelligence (I'll use the terms interchangeably) we hear about everywhere nowadays. The dichotomous scenario described above is the framing that so-called "AI optimists" and "AI doomers" have submerged us in. Superintelligence will be humanity's blessing or humanity's curse. It'll be a paradisiacal dream or a hellish nightmare. It'll be the panacea that solves all our problems or the doom that ends human civilization.
Public discussions, on social media and in traditional media alike, about superintelligence and the broad range of futures that will open up if—or when, for some people—we manage to create an AGI have captured the conversation; everything else pales in comparison. Debates about current, actual problems that are existentially urgent to many people are relegated to obscurity because they're not as "existentially serious as … [AIs] taking over," as AI pioneer Geoffrey Hinton stated recently.
