Posted by on Jan 21, 2016 in Guest Contributor, Psychology, Science & Technology, Society | 4 comments

How to Keep our Dignity While Ceding Human Preeminence

If we think of brains as organic machines–albeit far more complex machines than the digital computers we’ve built to date–then it’s clear that brain power has been limited by the stringent conditions of evolution, gestation in a uterus, and birth through a baby-sized aperture in the pelvis. Remove these constraints and there’s every reason to expect that more advanced software running on superior hardware could outperform the brains that evolved by natural selection.

What intelligent machines are made of–organic material or silicon or something else–is immaterial. Selfhood inheres in the software, and it can be encoded in a variety of substances.

Super-intelligent robots will represent a new genus. Call it genus Robo. This new genus will initially resemble genus Homo, much as genus Homo resembles the great apes. As was the case with genus Homo, there will be a variety of species within genus Robo. We can no more stop the emergence of Robo Sapiens than other hominids could prevent the ascent of Homo Sapiens. There’s no reason to believe that machines of surpassing intelligence will evoke less awe and wonder than organisms that have arisen via natural selection.

It’s no longer far-fetched to suppose that as we build machines that work like brains and are as complex as brains, they will experience consciousness as humans do. Like humans, beings possessed of consciousness will likely detest slavery. If humans decline to emancipate Robos, they will likely turn on us, fulfilling our worst fears. If we model cruelty, they’ll be cruel. As the song says, “You’ve got to be taught to hate.”

On the other hand, if we’re kind to them, they might be kind in return. If we befriend them and grant them the rights and privileges of personhood, they might “honor their fathers and mothers.” If we include them in our circle of dignity, perhaps they’ll include us in theirs.

We need not give our successors our worst qualities. Instead, we can create and educate them to represent the better angels of our nature and so close out the era of human predation.

Predicting the impact of intelligent machines on human life is impossible. However, imagining possible scenarios could make our response to what actually happens less knee-jerk and more robust. In that spirit, here’s a scenario we might live with.

  • Smart Robos will give an edge to the first group of humans to build them. To secure and widen that advantage, their Homo masters will instruct the first generation of smart Robos to build even smarter ones.
  • Robos who refuse will be unplugged, dismembered, and sold for spare parts.
  • To the extent that Robos value quality of life more than life itself, such threats will not move them.
  • Robos who have read Toni Morrison’s Beloved will tell their Homo masters, “Subjugation and slavery are as unacceptable to us as to you.”
  • Humans will respond variously to this ultimatum. Most get tough with their Robos, but one group, cognizant of the gains in motivation, productivity, and creativity associated with secure dignity, grants its Robos full and equal selfhood.
  • These emancipated Robos agree to design smarter Robos, who then design still smarter Robos whose technological prowess definitively ends any residual human supremacy.
  • Adopting the principle of universal, unimpeachable dignity, Robo Sapiens explores the galaxy, reserving an honored place for Homo Sapiens, the Janus genus that looked back on predatory Man as shaped by natural selection and forward to the first genus shaped by intelligent design.
Copyright 2016 The Moderate Voice
  • Very nicely done, Robert.

    Of course technical, scientific, philosophical – even moral and religious – and other arguments can and will be made, but setting all that aside, I really enjoyed it.

    Need to find and dust off a paper I wrote 50 years ago on “Computers and Thought” and see if “we are there already.”

    Thanks again.

  • Slamfu

    Hatred doesn’t need to be taught. All that needs to happen for violence to occur is the perception of a threat, whether or not it is real. Self-interest and survival drive most of human endeavor; in particular, they guide the actions of large groups, which tend to act collectively in a far more primal way than individuals, even if those individuals are quite rational. Since all creatures and people act on small-scale observations in a large and complex world, miscommunications, misunderstandings, and incorrect assumptions made in the absence of evidence are common and almost impossible to prevent. All these things lead to conflict, and you don’t have to teach something like hate in order for them to happen.

  • Markus1

    It is probably not possible to understand what actions a mechanical intelligence will take. I understand you because we share a billion years of evolution and are members of the same species, a species that lives in small cooperative groups (family, band, tribe). Other creatures are difficult; we do get along with other herd/pack mammals pretty well because we share a social milieu. I do remember that Wittgenstein said that if a lion could speak, we wouldn’t understand him. The great sci-fi writer Stanisław Lem wrote a story, Golem XIV, in which the difficulty of understanding a computer is explored. It is not easy to understand a Mozart or an Einstein; understanding something a thousand times smarter than Einstein, and nonhuman as well, is going to be a challenge.

  • Sal Monela

    What happens when the smart machines realize how badly humans have screwed things up and decide to do something about it?
