Emotional AI is tempting and terrifying in equal measure – LBB’s Laura Swinton explored the promise and perils with Huge’s Michael Horn and Wayne Deakin and anthropologist Dr Beth Singler
Artificial intelligence is coming for our emotions… how do we feel about that?
That was the question Huge posed last week at D&AD. They set up a big social experiment at the creative jamboree in London and hosted a panel to examine what it really means for individuals and society when brands and governments can peer into our emotional inner lives by simply pointing a camera at our faces.
The event was initiated in recognition of the exciting opportunities afforded to brands by a potent combination of facial recognition technology and emotion-identifying AI. And, coincidentally, it occurred just as San Francisco announced a ban on municipal use of the tech and a man in Wales mounted legal action against his own biometric data – his face – being recorded against his will. While the business possibilities of this tech are hugely exciting, it’s also a hot topic for the general public.
The timely panel brought together Huge’s Chief Data Officer Michael Horn, EMEA ECD Wayne Deakin and Cambridge University anthropologist and research fellow in AI, Dr Beth Singler, to thrash out the humanising and dehumanising aspects of this emotional AI tech.
From Huge’s perspective, the drive to focus on this particular topic stems from its roots in user-centric design. While it’s tempting to race ahead with new tech, focusing solely on short term tactical advantages, the very human question of how people feel about being exposed to and potentially exploited by it is something that brands shouldn’t overlook.
“The ethos of the agency is around user-centric design principles,” says Michael. “And so from our standpoint every time we engage in a new technology we have an amazing research team walking people through the experience of being exposed to technology and having in-depth interviews afterwards to really understand what it is you inhibit in yourself or what impact it has on you as a human.”
It’s lightyears ahead of the old creativity vs. data debate we’ve seen re-hashed a million times. Combining experience, technology and creativity is Huge’s happy place, after all. Rather it’s about the need to have a more nuanced, less defensive and constantly open discussion about the wider implications of the technology agencies deploy.
In the exhibition space, Huge invited the creative community to take part in their experiment. Attendees could watch a series of ads dealing with the topic of AI and their reactions were monitored. Playing with the idea that the technology is hugely polarising, yet comes with a substantial grey area, the graphics around the event were starkly monochromatic.
“I think that sort of stuff is really hard and that’s part of the reason, graphically, we designed this pop up experiment – people fall into roughly two schools. They take a lot of the cultural memes and understanding of what AI is – it’s dark, it’s evil, it’s surveillance – or they take the lighter, service camp – how many people here use Alexa and Siri – and it’s about context,” explained Wayne, who was keen to discuss the many positive applications of emotional identification, for example AR glasses that help people with autism discern emotions in others, and cars that can tell when drivers are getting sleepy.
To really dig into the complexities of how the tech is impacting society, Dr Beth Singler has been examining the topic from an anthropological viewpoint. And she says many of the negative interpretations of AI are enmeshed in Western assumptions.
“In the Western, Anglophone sphere we’re more influenced by the dystopic stories. And they have to be dystopic for them to be ‘stories’. People might think the tendency towards the dystopic says something inherent about us, but I think it says a lot about story structure. You can’t have a nice utopian story because we would get bored,” says Beth. “It’s also important to think about the other narratives that people don’t notice so much. Pay attention to where cultural differences incline people to think about AI more beneficently.”
One interesting example of this cultural nuance is around the topic of privacy. In more communal cultures or places where, for socio-economic reasons, people have to live more closely together, the concept of what is and isn’t private fluctuates.
And this nuance and complexity means that practitioners should, however optimistically they view the technology, constantly question the ethical parameters of what they’re attempting to do. And that’s particularly true with artificial intelligence, where tech moves like lightning but regulators and lawmakers move like mud.
For Michael and his team, some of the trickier topics they’ve had to navigate include the topic of ‘informed consent’. “Informed consent by definition means that someone must be informed – but there’s such a steep learning curve of understanding not just that there’s a video camera watching you, but that that footage is being enriched, that AI is processing that into identity to recognise you, into emotions,” says Michael. How informed can we really expect people to be?
Another ethical dilemma, related to consent, is how AI monitoring changes public space. However, suggests Beth, these considerations are not really new – it’s just that the rapid advancements have pushed them to the fore.
“It’s interesting, as an anthropologist, thinking about ethical dilemmas. These are questions we have to ask ourselves already – we go into spaces to observe people… at what point do we tell people that we are observing them? And do we continually remind them?” she reflects.
“There’s a lot of debate around the ethics of AI as though it’s an entirely new field, but a lot of those questions have existed continuously for centuries. AI is, for the moment, a disjoint and a disruption for us to reflect more on what we want society to look like, what we want humans to be able to do. It’s certainly a good point for conversation but unfortunately in some cases the technology is rushing so far ahead of where people can keep up with it.”
From a creative standpoint, though, these crunchy ethical concerns are just the merest flavour of the meaty concepts waiting for hungry minds to tussle with. From Wayne’s point of view as a creative and a creative leader, it’s an area that young creatives looking forward to their career and older creatives seeking to keep evolving ought to explore.
“You want to be able to design future products, future interfaces that will be able to adapt visually, narratively, using AI as it comes on board, and I think it’s a really exciting new territory,” he argues. “I would say it’s really important that anybody in this room who is pushing forward in their craft or their field embraces AI and makes it a force for good. Like any technology, we reach a tipping point: will the evil empire get it? And the more people who use AI for good and for doing cool shit that is meaningful and useful, then it becomes a really powerful force.”