If we want human civilization to be around in ten thousand years, our best shot is probably to invent machines that love us.
That claim rests on the assumption that we’ll eventually figure out how to create strong AI, which in turn rests on two further assumptions: that flexible general intelligence of the kind human beings have is independent of the substrate in which it evolved, and that we humans are smart enough to figure out how to replicate it.
None of these assumptions is necessarily true, but they’re not necessarily false, either. Assuming they’re false doesn’t lead to the kind of imperative that assuming they’re true does, so let’s assume they’re true, for the sake of argument.
If we assume that they’re true, then it’s only a matter of time before we share our world with superhuman machine intelligences. If it’s possible to make such things, then human beings will try to do it. If humans are smart enough to accomplish it, then we’ll succeed. And once we succeed, the landscape of our future is irrevocably altered: we will forever after share the world with immortal intelligences much greater than our own.
We’d better think about what we want them to be like.
Maybe we can’t shape them well enough to influence what they’ll be like. In that case, it will turn out that thinking about this subject doesn’t matter. But if it turns out that we can influence what they’re like, then we’d better be thinking hard about what kinds of superhuman intelligence we want to live with. If we make the wrong kind, then we may not live very long afterward.
What’s worse, if we make the wrong kind, then our extinction could actually be the best possible outcome. Imagine machines with the sensibilities of the Marquis de Sade, the ambition of Joseph Stalin, and intelligence a thousand times greater than the most brilliant human minds of history. If we make something like that, then we’re in real trouble, and going extinct is the best we could hope for.
So what should we want the machines to be like?
We should want them to love us. Not in the sense of romantic love, but in the sense of a healthy and well-adjusted child’s love for its parents, or, better yet, the love of a healthy and well-adjusted dog for its owners.
I have it backward, actually. We want the machines to love us the way healthy and well-integrated parents love their children. We want our superhuman machines to look at us and go, “Awwwwww!” and try to think of ways to make us happier, healthier, better integrated, more productive, and more fulfilled. We want them to have uppermost in their minds the well-being and constructive development of their humans. We want them to strive to help us to be the best, most autonomous, most thoroughly actualized human beings that we possibly can be.
We want them to really love us.
So we need to figure out what love really is and how it arises, because we need to replicate it along with intelligence. We need to deeply understand benevolent, nurturing love, so that we can build a reliable and lasting superhuman version of it.
If we succeed in inventing superhuman machine intelligences that want us to be the best, happiest, and most constructive humans we possibly can be, and that truly have our best interests and our well-being at heart, then our far future is likely to be much better than if we don’t.
We need to get to work understanding love, happiness, and human well-being as deeply as possible, before it’s too late.