Human Rights for AI?

28/07/2016
Experiential Marketing
New York, USA
INFLUENCER: Jason Alan Snyder, Chief Technology Officer at Momentum Worldwide, looks to the future of artificial intelligence and the ethical dilemmas it may bring

Let’s jump ahead 20 years. Imagine strong artificial intelligence (AI) is integrated into society. Beings with non-biological brains work and play alongside people in every nation. Some of these robots might look like us, speak like us, and act like us. So here’s the question: should they have human rights?

Let’s assume the function of human rights is analogous to the function of evolution. Human rights help to develop and maintain functional, self-improving societies; evolution perpetuates the continual development of functional, reproducible biological systems. So just as humans have evolved, and will continue to evolve, we have to assume human rights will continue to evolve as well. It may take some time, but strong AI will eventually develop sentience and emotion of its own, though the AI experience of sentience and emotion will likely be significantly different from the human experience.

Today machine intelligence is a pervasive part of our lives; it augments our own intelligence. Consider Google, GPS navigation systems, e-mail and so on: all of these are extensions of our intelligence. Soon they will be a part of our minds. And soon after that, they will have minds of their own.

So should a 'thinking' machine have human rights? We may be getting very close to the point of being able to build machines that emulate (or exhibit, depending upon one's perspective) consciousness. Seriously considering what that might imply is useful now, before the reality confronts us. Moreover, thinking through the details of whether to assign rights to seemingly self-aware machines will allow us to examine other messy ethical issues in ways that give us some emotional distance.

The combination of steady advances in hardware sophistication and new advances in cognitive science suggests that breakthroughs in machine consciousness are entirely possible. As "traditional" approaches to AI have faltered, it's quite possible that a breakthrough will come more as an "A-ha!" moment, the realisation of a new paradigm, than as the accumulation of a long history of close-but-not-quite attempts. But even absent a conscious self-awareness setting for your iPhone or Android device, there are good reasons to consider ahead of time what we will and will not accept as "proof" of consciousness, and what limitations there should be on the rights of self-aware non-humans. At the very least, we should be aware of how the idea of self-aware machines can be abused.

Corporations that own computers and robots might seek to encourage a belief in their autonomy. Why? To escape liability for their actions. Insurance pressures might also move us in the direction of computer systems being considered moral agents. Given the close association between rights and responsibilities in legal and ethical theory, such a move might in turn lead to a consideration of legal personhood for computers. The best way to push back against the pressure to treat computers as autonomous is to think carefully about what moral agency for a computer would mean, how we might determine it, and what that determination would imply for our interaction with machines.

The fact that at least parts of any computational AI software will have been written by humans is also worth bearing in mind. If the "ethics engine" and "morality algorithms" ultimately come down to programming decisions, we must be cautious about trusting the machine's statements, just as we have well-founded reasons to be concerned about the reliability of electronic voting systems. One problem is that efforts to make machines more 'sociable' in both behaviour and appearance short-circuit our logical reactions and appeal directly to our emotions.
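To make that point concrete, here is a deliberately crude sketch, in hypothetical Python (no real system's "ethics engine" looks exactly like this), of what such an engine can reduce to in practice: a table of verdicts a developer chose to hard-code. Trusting the machine's moral statements here means trusting those human choices.

    # A crude, hypothetical "ethics engine". Every verdict below is a rule
    # a human developer chose to write; the machine holds no view of its
    # own, so auditing its "morality" means auditing this table.
    ALLOWED_PERSUASION = {
        "inform": True,         # factual claims the developer permits
        "nudge": True,          # soft persuasion the developer permits
        "deceive": False,       # tactics the developer forbids
        "exploit_fear": False,
    }

    def is_ethical(tactic: str) -> bool:
        """Return the developer's pre-programmed verdict for a tactic."""
        # Unknown tactics default to forbidden: another human policy choice.
        return ALLOWED_PERSUASION.get(tactic, False)

    for tactic in ("nudge", "deceive", "subliminal_cue"):
        print(tactic, "->", "ethical" if is_ethical(tactic) else "unethical")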

Why is all this worth considering? My work focuses on the fields of marketing and advertising. These industries focus on persuasion, which means shifting people’s beliefs and feelings, and the reasons for those shifts are largely financially motivated. As the marketing and advertising industry becomes automated with the help of AI, like every other industry, the machine intelligence actively engaged in those efforts requires governance. I have come to refer to this as our entering an era of “ethical persuasion.” Leaving machine intelligence unchecked to persuade both humans and the AI that acts as proxy for more of our financial decisions every day is, in many ways, the canary in the coal mine for AI’s influence on culture.

As we give more proxy to AI every day, perhaps the most frightening part of this dynamic is that emotions may become unnecessary for reproduction in a post-strong-AI world. But they will still likely be useful in preserving human rights. We are far from having the technology to prove whether a strong AI experiences sentience; indeed, we don’t yet have strong AI. So how will we humans know whether a computer is strongly intelligent? We could ask it. But first we have to define our terms, and therein lies the dilemma. Paradoxically, strong AI may be best placed to define those terms itself.
