Have you ever thought about what using your computer without a mouse or keyboard would be like? If I’d asked you that question five years or so ago, you’d probably have thought I was mental and that it was a stupid question. Maybe you still do, but considering how quickly technology is advancing, I honestly think it won’t be long before this happens. Just think: we already have facial recognition to log into our laptops, so is it really that hard to believe that before long we’ll be combining gestures with our voices to control the things around us?
Image 1) An example of 2D gestures that could be used to interact with products (Digital Trends).
Siri and the Amazon Echo are perfect examples of how far technology has advanced in such a short space of time – we can now use products without touching them at all, just by speaking to them. Who would’ve thought it, eh? We get these smart products to give us the answers we’re after, or to carry out the tasks we’re now too lazy to do ourselves (we all know that one person who now says “Alexa, turn on my TV” rather than just pressing a button). It works because the latest technology has been designed into an intuitive user experience: your tasks are carried out seamlessly with minimal effort. Why move when you can just speak?
Image 2) Amazon Echo dot
Image 3) iPhone 6 with Siri activated
Considering how quickly voice-activated products have come into our world, I believe it will be sooner than we all realise that gesture-controlled, smart(er) devices will be in our homes, workplaces and lives in general. For this additional ‘input’ to be incorporated into our everyday lives, a new way of designing user experiences for it will need to be considered.
For example, the gestures you would make if put in front of a large gesture-controlled screen today would likely be the same as those you use on your mobile (minus the touch). However, how you use your mobile and how the person sitting next to you uses theirs could be completely different (e.g. Android vs Apple users).
Looking at image 4 below, most people use one hand to zoom in on a mobile, but as screen size increases, wouldn’t it make sense to use two? If so, does that mean two hands would make sense on a gesture-controlled screen? Or one, because it lets us put minimal effort into controlling our screens? There’s no right or wrong answer yet, but if gesture-controlled devices are to take off, there will need to be.
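To make the gesture itself concrete, here’s a minimal sketch (the function name is my own, not from any particular framework) of the maths behind a pinch-zoom: whether the two points come from two fingers of one hand, one finger of each hand, or two tracked hands in mid-air, the gesture reduces to the same thing – the ratio of how far apart the points are at the end versus the start.

```python
import math


def pinch_zoom_factor(start_points, end_points):
    """Return the zoom factor implied by a two-point pinch gesture.

    Each argument is a pair of (x, y) positions. The zoom factor is
    the spread between the points at the end of the gesture divided
    by the spread at the start: >1 zooms in, <1 zooms out.
    """
    def spread(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    return spread(end_points) / spread(start_points)


# Points move apart: spread grows from 100px to 200px, so 2x zoom in.
print(pinch_zoom_factor([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # 2.0
```

This is part of why the one-hand vs two-hands question is a design decision rather than a technical one: the underlying calculation doesn’t care where the two points come from, only the user’s comfort and effort do.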
(UX Director at Exippl) stated in her blog for InVision, “We’ll need to design truly multimodal experiences, combining various inputs in a seamless flow. We should add “gesture-friendly” to our vocabulary.”
Being designers in this decade allows us to be truly innovative by merging all user inputs and designing appropriately to create the best user experience for our customers. “People are seeking out experiences – not technologies.” (Bobby Gill, 2016)
Some companies have already been working on inventing gesture-control interactions (seen below); however, that’s not to say theirs will be the only ones, or the most intuitive. We’re currently in a period of fluidity and excitement where new interactions that break normal behaviours can be introduced into our lives. The opportunity to be innovative and blue-sky is now. Responding to user needs and working with their inputs is what’s driving innovation in our digital and ‘smart’ world. Who knows, maybe gesture-controlled devices won’t take off as well as voice-activated ones did, but if they do, they will only be successful if new user experiences are explored, tested and built.
Gestures from left to right: pressing a button, turning a dial, moving a slider and panning a page