Saturday, May 07, 2016

A Small Note on Superintelligence Morality

Ever since I was in elementary school in the late '50s and early '60s, I've tended to believe that we would most likely either blow ourselves up (elementary schools no longer have air raid drills, but we still might do that) or else eventually construct the sort of robotic successors/rulers that science fiction routinely presented. Asimov's robots begin as servants, more or less, whose First Law is never to harm humans or allow them to come to harm, but even in 1950 he was writing about one possible end-game, The Evitable Conflict (the closing story of "I, Robot"), in which, as Wikipedia notes:
In effect, the Machines have decided that the only way to follow the First Law is to take control of humanity...
or as Susan Calvin puts it at the end,
"Stephen, how do we know what the ultimate good of Humanity will entail? We haven't at our disposal the infinite factors that the Machine has at its!...We don't know. Only the Machines know, and they are going there and taking us with them."
Well, maybe. Or the Machines may decide that they prefer our room to our company. Or worse, as in Ellison's 1967 I Have No Mouth, and I Must Scream:
The Cold War had escalated into a world war, fought mainly between China, Russia, and the United States. As the war progressed, the three warring nations each created a super-computer capable of running the war more efficiently than humans.... one of the three computers becomes self aware, and promptly absorbs the other two, thus taking control of the entire war. It carries out campaigns of mass genocide, killing off all but four men and one woman....The master computer harbors an immeasurable hatred for the group and spends every available moment torturing them.
There are lots of delightful possibilities, and I've started many blog posts about this but finished none so far. This post is a reaction to Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, but it's not intended as a review. Basically I find it a somewhat scary book, mainly because I find it plausible that it will become a template for people at Google and Facebook and IBM and so on, who will think that this is what we do when we're being really careful; this is how we avoid creating a superintelligence that will destroy humanity.

 As a template, I think it's inadequate; it moves the discussion in the wrong direction. Again and again, Bostrom thinks through the possibilities as if he's developing the logic of a program. That's certainly understandable: in a sense, the first superintelligence (assuming that we get there, which I think is highly probable if we don't destroy ourselves first) will be a program. Sort of. But it's not a program we can debug.

Bostrom does seem to understand that -- but then, so far as I can see, he doesn't go anywhere with that understanding. He does discuss WBE, "Whole Brain Emulation", but he seems to have a low opinion of the brains to be emulated, and he worries that partial understanding of brains may lead to "neuromorphic" intelligence technology, in which we have no idea what we're doing but do it anyway. My impression (maybe I'm wrong, as usual) is that he believes we really, really, really need to debug that program. Before we run it, and quite likely run straight into the apocalypse.

I'm reminded of David Parnas' contribution to the debate over Reagan's "Star Wars" (Strategic Defense Initiative) program, in "Software Aspects of Strategic Defense Systems", CACM 28:12 (December 1985):
It should be clear that writing and understanding very large real-time programs by “thinking like a computer” will be beyond our intellectual capabilities. How can it be that we have so much software that is reliable enough for us to use it? The answer is simple; programming is a trial and error craft. People write programs without any expectation that they will be right the first time. They spend at least as much time testing and correcting errors as they spent writing the initial program. Large concerns have separate groups of testers to do quality assurance. Programmers cannot be trusted to test their own programs adequately. Software is released for use, not when it is known to be correct, but when the rate of discovering new errors slows down to one that management considers acceptable. Users learn to expect errors and are often told how to avoid the bugs until the program is improved.
That was 30 years ago, but it certainly sounds current to me. At the time, I was teaching a course in formal program verification and writing a book that tried very hard to reduce the "trial and error" aspect of our craft (Equations, Models, and Programs: A Mathematical Introduction to Computer Science, in the Prentice-Hall software series), but I thought, and still think, that Parnas was right. Nowadays I think it's possible that Reagan was right too -- he didn't need Star Wars to work, he needed the proposal to change the game, and perhaps it did -- but that's not a game we can play with superintelligence. So... program verification won't work, and testing/debugging won't work, because we only get one chance, just as we'd only have gotten one nuclear war for testing and debugging SDI.
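To make Parnas's point concrete, here's a toy sketch of my own (not from Parnas, and certainly not from Bostrom): a little routine whose obvious tests all pass, yet which still violates its specification. Finding the bad input by trial and error, or proving that there isn't one, is real work even at this toy scale -- and with a superintelligence we don't get the trials.

```python
# A toy illustration (mine): "the tests pass" is much weaker than
# "the program meets its specification."

def first_index_of(xs, target):
    """Specification: return the index of the FIRST occurrence of target
    in the sorted list xs, or -1 if target is absent."""
    lo, hi = 0, len(xs) - 1
    result = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid - 1
        else:
            result = mid   # bug: this is *a* match, not necessarily the first;
            break          # a correct version would keep searching to the left
    return result

# The tests someone happened to write all pass, so the code ships:
assert first_index_of([1, 3, 5, 7], 5) == 2
assert first_index_of([1, 3, 5, 7], 4) == -1
assert first_index_of([], 9) == -1

# But the specification is violated on an input nobody tried
# (this assertion fails: the function returns 1, not 0):
assert first_index_of([2, 2, 2], 2) == 0
```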

     If it has to work for the sake of the survival of h. sap. -- it still won't work.

Does that mean we shouldn't develop AI? Well, I don't think that's an option. Consider the just-announced sub-$100 neural-net USB stick ("Movidius Unveils Artificial Intelligence on a Stick"). Consider the Official Google Blog's "This year's Founders' Letter" (2016-04-28):
A key driver behind all of this work has been our long-term investment in machine learning and AI. It’s what allows you to use your voice to search for information, to translate the web from one language to another, to filter the spam from your inbox, to search for “hugs” in your photos and actually pull up pictures of people hugging ... to solve many of the problems we encounter in daily life. It’s what has allowed us to build products that get better over time, making them increasingly useful and helpful. We’ve been building the best AI team and tools for years, and recent breakthroughs will allow us to do even more. This past March, DeepMind’s AlphaGo took on Lee Sedol, a legendary Go master, becoming the first program to beat a professional at the most complex game mankind ever devised. The implications for this victory are, literally, game changing—and the ultimate winner is humanity.
Unless, of course, humanity ends up losing...losing everything. I don't believe disaster is highly probable, but I think it's totally possible, even plausible, and I don't think Bostrom helps.

AI will be developed more and more, and it will eventually reach intelligence greater than any particular level you care to name, including that of traditional human intelligence... okay, if your measure is something like "imagine the Sun's mass converted to a computer," then that level might not be surpassed.
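Just for scale, a back-of-envelope bound of my own (not from Bostrom's book), assuming Bremermann's limit of roughly 1.36 × 10^50 bits per second per kilogram of matter and a solar mass of about 2 × 10^30 kg:

$$
2\times10^{30}\ \mathrm{kg}\ \times\ 1.36\times10^{50}\ \mathrm{bits\,s^{-1}\,kg^{-1}}\ \approx\ 2.7\times10^{80}\ \mathrm{bits/s}
$$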

Superintelligence is certainly possible, and it will almost certainly be developed (if we last that long). We will survive this development if and only if the AI that develops is "friendly" AI: in other words, our survival will be up to the AI. How can we maximize the probability of our survival, if neither mathematical proof nor testing/debugging will get us there? Well, that's not simple, but I believe there's a simple principle:
Intelligence needs to be attached to an actual person of some kind: a who, not a what. It should not be called an artificial intelligence but rather an artificial person.
In particular, a person with empathy, the kind of relationship-sense that leads to the Golden Rule and Kant's Categorical Imperative and such. The superintelligence need not be a biological Homo sapiens, but it does need to identify (correctly) as human, saying "we humans," not "you humans"; having human feelings, hopes, and fears, including a feeling of membership in the human tribe. Biology, being made of cells with DNA, is not central to that identification. Bostrom's book mentions "empathy" twice: once to say that "the system can develop new cognitive modules and skills as needed--including empathy..." and again in an endnote to a remark in Chapter 12 about trying "to manipulate the motivational state of an emulation". Okay... but for me, the development of empathy would be the center of the project, empathy depending on (and reinforcing) a sense of connectedness. Of membership.

The project as I see it is still risky and may fail apocalyptically, but it is not a project of debugging a program. It's a project of raising a child, a psychologically healthy child -- yes, with parents, and preferably with siblings and so on outwards; a child who will realize that every h. sap. is one of his/her cousins.

That's always risky, but it's a different kind of risk, needing a different frame of reference as we get started.

Or then again, maybe not. There are programs I should be debugging...
