Don’t look for big names. Look for events that matter to you.
In my case, this meant searching for new ways of approaching human-computer interaction through the lens of ‘human’ need: events that present the contexts that drive interaction.
Let me explain: I believe interaction is dictated by context. Whether on the street, at home or at work, commuting or shopping, your context, your environment, dictates how and why you use a computer, smartphone, or tablet.
Thinking back on my week at SXSW Interactive, one panel stands out: When Your Internet of Things Knows How You Feel. This presentation comes to mind because it looked beyond our daily interactions. It looked at how our interactions with technology change when our technology starts acting “human.”
In When Your Internet of Things Knows How You Feel, Pamela Pavliscak presents a not-too-distant future where our OS knows how we feel. To lay this bare, she shows a future where technology is invisible, frictionless, and rational. As she does this, she questions our relationship with technology, asking things like: “What is our primary emotion when using technology?” Most of the audience laughs, but some members answer “rage,” “anger,” “discomfort,” “annoyance.” This is exactly what she wants her audience to sit with: if our relationship with technology is predominantly negative, why is that? One way to explain it is through language.
The English language has a much richer vocabulary for negative emotions than for positive ones. Ekman’s Atlas of Emotions identifies five universal emotions: anger, fear, disgust, sadness, and enjoyment. If four of the five are negative, it’s perhaps no surprise that our relationship with the internet and technology isn’t primarily positive.
Another way to understand this relationship is to look at what technology can and can’t do for us. If it can’t do what we want it to do, we get annoyed, frustrated, or even angry. To solve this, researchers and developers are looking for ways to head off that frustration before it even comes up. Through affective computing, they analyze signals such as how we write, what we write, our facial expressions, and our tone of voice on a call with customer service, searching for universal behavior patterns that predict when negative emotions might surface. This might sound like science fiction; it isn’t.
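To make the idea of mining emotion from what we write a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not Pavliscak’s work or any real affective-computing pipeline: the lexicon, function names, and threshold are invented for illustration, and real systems rely on trained models over text, voice, and facial data rather than a handful of keywords.

```python
# Toy illustration of text-based affect detection: a tiny lexicon scorer.
# Hypothetical sketch only; not a real affective-computing system.

import re
from collections import Counter

# Invented mini-lexicon mapping words to Ekman-style emotion labels.
EMOTION_LEXICON = {
    "angry": "anger", "furious": "anger", "rage": "anger",
    "scared": "fear", "worried": "fear",
    "gross": "disgust", "disgusting": "disgust",
    "sad": "sadness", "disappointed": "sadness",
    "love": "enjoyment", "great": "enjoyment", "happy": "enjoyment",
}

def score_emotions(text: str) -> Counter:
    """Count emotion-labeled words appearing in a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)

def likely_frustrated(text: str) -> bool:
    """Flag a message when negative emotion words outweigh positive ones."""
    counts = score_emotions(text)
    negative = sum(counts[e] for e in ("anger", "fear", "disgust", "sadness"))
    return negative > counts["enjoyment"]

if __name__ == "__main__":
    message = "I'm furious, this update is disgusting and I'm so disappointed."
    print(score_emotions(message))     # Counter({'anger': 1, 'disgust': 1, 'sadness': 1})
    print(likely_frustrated(message))  # True
```

Even this toy version shows the basic shape of the idea: turn everyday signals into emotion scores, then act before the frustration boils over.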
Pavliscak finishes her session by looking at how emotionally aware technology will change our relationship with technology and how, in turn, this relationship will change us. This idea isn’t new. Technology has changed the way we think before. Think about how we Google things, or how we Skype, text, and Snap instead of writing thoughtful letters. In case you still doubt whether emotionally aware AI could change you, ask yourself: how would you feel if Alexa gave you a bad grade for insincerely hugging your wife?
At this point, you might be asking yourself: what does this mean for advertising? From my point of view, the opportunities, and risks, that come with this kind of technology are endless. As strategists, we are the eyes and ears of the consumer. We already have mountains of data on what our consumers think and buy, and who they follow. What we don’t know, what we haven’t been able to know until now, is how this makes them feel. This final, crucial piece of the consumer puzzle might push advertising and advertisers to new levels while potentially pushing our ethical boundaries one step too far.