Photo: DeepMind
Thinking about ‘the human’
I just finished watching a compelling and emotionally moving documentary on the victory of DeepMind's AlphaGo over the world's best Go player, Lee Sedol. That was back in 2016. It seems like such a long time ago now, as if we've been living with this new reality of super-human AI for so long that we take it for granted that machines are better than us at so much, and that there are fewer and fewer places where we simple, mortal humans can still excel.
What is evident, too, is that machines are going to keep getting better and smarter, and that they will weave themselves more and more into our lives: into our workplaces, into our play, and into our bodies.
One thing that struck me looking back on AlphaGo’s triumph is how we are increasingly conceiving of ourselves – that is, what it means to be human – in comparison to our mechanical rivals. We seem to mark out a certain territory, call it ‘human’, and put everything else outside.
Which is fine, except that robots and AI are coming to occupy more and more of the territory we used to think of as exclusively ours. Which means we find ourselves on ever more slippery ground when trying to answer really important questions: What does it mean to be human? What's special about us? And, consequently: on whom do we want to bestow ethical consideration?
One philosopher I enjoy reading, and who I think can help us think about these issues, is David Gunkel, who asks, 'Can machines have rights?' We shouldn't answer this question, Gunkel says, based on whether or not machines successfully attain human-like capabilities. Rather, Gunkel, along with others such as Mark Coeckelbergh, objects to the way we decide who gets rights in the first place.
The problem with the traditional approach to ethics, according to Gunkel, rests with the very way we decide what gets included and what gets excluded. And remember, this is a system that has often been used (and still too often is) to exclude not just machines but others, such as – at various points – women, children, people of different skin colours, and people of different abilities.
Traditionally, Gunkel explains, we decide what to include based on a set of a priori characteristics. This Gunkel calls the 'properties approach': we first define criteria – mark out the territory – and then ask whether a particular entity, a person or a machine, meets those criteria. Such a system, Gunkel demonstrates, is always going to be normative and will always support existing structures of power. And as we've seen with our treatment of robots and AI (and with many others before them), such a system is open to abuse and manipulation. We make the rules, and when it looks like we're losing the game, we change the rules to our advantage.
Alternatively, Gunkel and Coeckelbergh, following the work of Emmanuel Levinas, advocate an approach known as 'social-relational ethics', which doesn't start with a priori criteria. Rather than seeing things as having moral value because of their internal characteristics, social-relational ethics attributes value to things within a social context. Instead of a priori characteristics, things (or people) are deemed to have moral standing because of their a posteriori social interactions.
This approach, when applied to moral consideration, means that moral significance resides neither in the subject nor in the object; what matters is the relation between the two. Relationships are more important than the thing-in-itself.
Social-relational ethics offers a way out of the power struggles inherent in the 'properties approaches' that dominate other moral systems. It bestows rights on the basis of how we relate and respond to things, and to each other. Such an approach is distinctly post-human: where traditional humanist ethics relies on establishing a priori definitions, social-relational ethics does without them.
Social-relational ethics therefore knocks the traditionally-conceived human from its privileged place of power: that human no longer gets to set the rules, to act as the referee who sits in judgment over who is and isn't playing fair, or to change the rules when it doesn't like the way the game is going.
Social-relational ethics would also save us from our constant preoccupation with definitions. It is fluid, dealing with the immediate social relations between entities. By stepping back from the endless battles of boundary-drawing, we might free ourselves from the desperate need to distinguish an 'us' from a 'them', or to demarcate clearly between 'human' and 'machine'.
Such a move would have impacts beyond robotics, too, something my colleagues and I would like to consider in future posts, because technology is going to act more and more directly on human brains and bodies in the very near future. The line between human and machine is going to become less distinct, harder to maintain. Real, living cyborg posthumans are no longer the strange stuff of sci-fi films; they are increasingly to be found among our classmates, our colleagues, our partners, our children. And we need new ways to deal with these humans that don't involve drawing ever more lines and erecting barriers that we can't possibly maintain.
These are just some initial thoughts. As always, we welcome feedback; you can contact the author directly via email or via Twitter. Check the #ethics hashtag for future entries.
Further reading
- Gunkel, D. J. Robot Rights. MIT Press (2018).
- Coeckelbergh, M. Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology 12, 209-221 (2010). https://doi.org/10.1007/s10676-010-9235-5