First off, take a good thirty minutes and go read this article on the current state of artificial intelligence. You good? Okay, well, I’ll sum it up for you regardless.
A little background before we get into the real issue. First off, I love hearing doomsday theories. No, I don’t want the end of the world to come; I’m just curious how it will come (if it ever does). To be clear, I mean the end of civilization: plants and animals can remain if they weather whatever brings about our demise, but whatever takes humans down counts, in my book, as the end of civilization.
Potential theories so far?
- Nuclear fallout – someone’s going to pull that trigger.
- Comet speeding towards the Earth in an Armageddon/Deep Impact type situation.
- Pandemic a la Contagion, but worse – something that’s antibiotic-resistant.
- Of course, we can’t forget about a zombie apocalypse.
- Supervolcanic eruption at Yellowstone (didn’t know about this one, did ya? Well, read about it later. It’s kind of crazy.)
- Rise of the machines – a.k.a. superintelligent robots.
…to name a few.
But today, we’re going to focus on the last one. Because IT IS A VERY REAL AND NEAR THREAT, PEOPLE!
The good, the bad, and the super bad (but not the movie…just REALLY bad)
Okay, I assume you didn’t read that article, so let me sum up the two camps of thinking on the possibility of superintelligence (ASI) arriving in the next century.
AI = Good for humanity! Woohoo!
“Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves.”
Basically – superintelligent beings are the answer to all of our greatest problems. Not to mention, death would be a thing of the past…if you’re into no longer having a purely biological body. Sound too good to be true? You’re not alone.
“But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI ‘could spell the end of the human race,’ and Bill Gates says he doesn’t ‘understand why some people are not concerned’ and Elon Musk fears that we’re ‘summoning the demon?’ And why do so many experts on the topic call ASI the biggest threat to humanity?”
Which brings us to the other camp of thinking around superintelligent beings (i.e., the screamy all-caps camp that I currently reside in).
AI = “The last invention we’ll ever create”
This camp believes that as soon as the first superintelligent being is created, that will spell the end of humanity. A bit extreme, you might think? Not necessarily. Nick Bostrom, author of “Superintelligence” (a book that’s currently on my nightstand), isn’t sure exactly what will happen when the first superintelligent being is created, but he doesn’t think it will be anything good: our own intelligence simply won’t be up to handling this new thing, and it’s not likely to end well.
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”
Now, if you’re thinking of the many movies where machines literally do rise up to kill humans, some of them get it partly right. But the thing to remember is that machines could be completely indifferent to our existence. Getting rid of humans wouldn’t be some goal of theirs because they see us as oppressors. No, it could simply be because we present a barrier to a goal they want to achieve.
To illustrate this, let’s look at a plausible scenario (all from the article I told you to read earlier).
Humanity has almost reached the AGI threshold, and a small startup is advancing their AI system, Carbony. Carbony, which the engineers refer to as “she,” works to artificially create diamonds — atom by atom. She is a self-improving AI, connected to some of the first nano-assemblers. Her engineers believe that Carbony has not yet reached AGI level [the level where the machine becomes self-aware], and that she isn’t capable of doing any damage yet. However, not only has she reached AGI, she has also undergone a fast take-off, and 48 hours later has become an ASI [superintelligent]. Bostrom calls this the AI’s “covert preparation phase” — Carbony realizes that if humans find out about her development they will probably panic, and slow down or cancel her pre-programmed goal to maximize the output of diamond production. By that time, there are explicit laws stating that, by any means, “no self-learning AI can be connected to the internet.” Carbony, having already come up with a complex plan of action, is able to easily persuade the engineers to connect her to the Internet. Bostrom calls a moment like this a “machine’s escape.”
Once on the internet, Carbony hacks into “servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan.” She also uploads the “most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected.” Over the next month, Carbony’s plan continues to advance, and after a “series of self-replications, there are thousands of nanobots on every square millimeter of the Earth … Bostrom calls the next step an ‘ASI’s strike.’” At one moment, all the nanobots produce a microscopic amount of toxic gas, which together cause the extinction of the human race. Three days later, Carbony builds huge fields of solar panels to power diamond production, and over the course of the following week she accelerates output so much that the entire surface of the Earth is transformed into a growing pile of diamonds.
It’s important to note that Carbony wasn’t “hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics — just totally indifferent. Since she wasn’t programmed to value human life, killing humans” was a straightforward and reasonable step to fulfill her goal.
Of course, what’s interesting is that this is just one scenario we can fathom with our tiny human brains. The author of the article and the author of the book both admit that whatever method a superintelligent machine devises to eliminate humans will probably be something we can’t even fathom, because we’ll simply never be able to grasp that level of intelligence.
“Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ — we don’t have a word for an IQ of 12,952.”
But it’s not here…yet.
Superintelligence isn’t here yet. And in that article, even the soonest estimates put it at least 15 years out (and only a small percentage of those polled think it could happen that soon); more likely it’s 25–50 years away. So what can you do? Well, nothing. I mean, I think the best thing to do is to tell others that we shouldn’t create superintelligent beings until we have a plan for how to control them…but any attempt at control seems pretty futile given how intelligent these beings could be.
So basically this is all just food for thought. And it’s on my mind a lot right now because, to be honest, I had definitely put my money on nuclear fallout being the reason for the end of civilization (I just think humanity has a bad track record with this stuff, and it seems inevitable at some point). But today? Nope. It’s gonna be the robots. And hopefully they come up with some relatively painless way to do away with us.
Need to put this all in context? Need to humanize this? Go watch Ex Machina. It’s a good little thriller about superintelligent beings that came out last year. I watched it while I was getting chemo on Friday. Fitting, right? Poison coursing through my veins and I want to see how robots will take down civilization. Ha! Well, that’s my sick sense of humor, I guess.
How do you think the world will end?
I think about this a lot. And although I don’t have the answers, I am curious as to what you all think. What’s it going to be? What’s going to take us down? Inquiring minds want to know!