What the future of user interfaces looks like

By Christian Chesher

The term “zero UI” (zero user interface) has been thrown around quite a bit the past few years, but probably should not be taken too literally. Some kind of user interface (UI) will always be needed if we want people to interact with digital systems.

But more UIs are moving away from familiar, screen-based forms. Here are some ideas on how UIs are changing, what some of the barriers associated with these changes are, and what the future of UI looks like.

Other types of user interfaces:

1. Ambient

Ambient interfaces are electronic environments that respond to the presence of people. An ambient interface has multiple elements that make up the experience of using it, while remaining subtle. One example everyone will be familiar with (and which tends to go unnoticed) is walking into a supermarket. The doors open automatically, which frames our experience of the store. We would think differently of our in-store experience if this feature were absent.

2. Haptic

A haptic interface relies on touch sensations and physical contact for the user to produce an input and receive an output. A basic example of a haptic interface is a rumble strip: drivers know to pay attention to where they are positioned on the road when they feel the car pass over one. Similarly, a smart device will vibrate to indicate a notification. Different types of vibration tell us what type of notification we are receiving, so we can determine whether a response is required immediately or whether it can wait.

3. Voice

Voice user interfaces (VUIs) make human interaction with computers possible through a voice/speech platform that initiates an automated response. An example of a VUI is the virtual assistant in a smartphone that is controlled by a user’s voice. Having these assistants on hand makes inquiries easier, with no need to type out a request. VUIs are becoming more prevalent in society with the likes of Amazon Alexa, as well as smart-home technologies. This means we have greater control over our environment without needing to physically interact with a device.

4. Gesture

Gestural interfaces let people interact with computers and smart devices via gestures from the human body, typically hand movements. Interfaces like these are generally facilitated by sensors and cameras. The Myo armband, for example, links to your smart device and lets you control it through gestures made with your arms and hands.

Barriers to adoption for emerging user interfaces

Over time we have become deeply invested in the screens on our devices. For convenience, and out of habit, we have come to shape ourselves around these devices and their UIs. This leaves three hurdles to overcome before we see wider adoption of non-graphical UIs.

1. Social deviance

People are conditioned to conform to societal expectations when in public. Speaking to a VUI can feel extremely awkward, so users often choose not to use their virtual assistants when they can be observed by strangers. In the context of the home, interfaces like Amazon Alexa do not encounter this problem, and increased home usage of VUIs could help break down this barrier. However, as we saw in our own experimentation with Amazon Alexa skill development, people’s willingness to engage VUIs in public spaces is still relatively low.

Gestural interfaces also require people to behave in unconventional ways, waving their arms around for no apparent reason. This fear of social deviance currently prevents gestural interface adoption. For gestural interfaces to be successful, they will need widely accepted, simple actions rather than exaggerated gestures.

2. Unfamiliarity

People are accustomed to interacting with a screen, so alternative interfaces can be a confusing, unfamiliar experience that hinders adoption.

While asking to check the weather or the time has become more commonplace with VUIs, more complicated searches run into the problem of speech biases. People are not yet familiar with the phraseology required to make complicated requests, and equally the brains behind the VUIs do not understand conversational speech at a complex enough level to be useful in certain contexts.

Although gestural interfaces have been around for some time, they still lack established guidelines on basic input functions. Products using gestural interfaces are still too clunky and intrusive, and their gestures are unique to each platform, which is blocking wider adoption. People are often not even aware that an action is required to engage a gestural interface.

Haptic interfaces are currently blocked by a lack of understanding of affordance. Affordance is how an interface communicates its function. Just as a doorknob indicates that a piece of wood is actually a door, a haptic cue such as a phone’s vibration indicates that you have received a text. There’s still a way to go before we develop instincts about whether we can interact with an object or environment, and how.

3. Privacy

Earlier this year we looked at how privacy and security concerns were a barrier for banks when designing VUIs. People are not currently comfortable with VUIs like Amazon Alexa recording everything they say so that responses can be tailored to each individual user. Alexa’s features are expanding, and while we can check our bank balance just by asking, most people don’t feel comfortable with their balance being read out loud. If the device could record and log what we say, and react with intelligence tailored to us based on those recordings and the context of the situation, interacting with it would feel more natural.

Privacy issues also arise with ambient interfaces. For example, if the interface is constantly ready to sense a human, how does someone opt out of being sensed? People could easily feel intruded upon as the interface operates in the background, working without the user realising.

Social deviance and unfamiliarity will probably become smaller barriers with the passage of time, as seen with the adoption and social usage of smartphones. Privacy, however, is harder to predict, as it depends on a larger shift in society: people could become more open to sharing certain private details, or they could become increasingly protective of such information, in which case the barriers will grow stronger.

The benefits of alternative UIs

Screen-based, graphical interfaces are currently the dominant UI; however, since screens are linear and two-dimensional, they limit our ability to integrate the digital and physical parts of our world. There are more ways of inputting information into a system than our fingers tapping on a screen. Alternative UIs take us away from the screen and encourage us to interact with the physical world around us.

Part of what makes many non-graphical UIs interesting is that they fit more seamlessly and unobtrusively into environments. As non-graphical UIs become more prevalent, they encourage designers to adapt their products to fit the needs of the user and the environment. How comfortable people are using these interfaces in public without scrutiny depends as much on the usability of the technology as on its acceptance as a social norm.


The future of alternative UIs

Alternative UIs aim to help us be more natural in our digitally enriched environments. Becoming more aware of their potential and changing our behaviour accordingly will require a societal shift, not unlike the adoption of smartphones into everyday life. We’ll need to become comfortable speaking to our devices, especially in public. These non-graphical interfaces are being developed now, and the challenge of creating devices with fluid input functions to handle these UIs will fall to the Apples and Amazons of this world.

Only when solutions that overcome privacy concerns are created will people begin to let non-graphical UIs more freely benefit their lives. These interfaces will need to create frictionless, fluid and natural interactions, which is, after all, the goal of non-graphical UIs.

Designers will need to begin designing these UIs with the knowledge that, while adoption will come slowly because of unfamiliarity, people will eventually become comfortable and things will move from different, to new, to commonplace.

We can look to children for insights into how to design for some of these interfaces. Everything is new to children, so they do not harbour the same feelings of unfamiliarity that plague adults. Additionally, children are less attuned to social deviance and are therefore more open to breaking societal rules. Parts of Asia have also taken unique steps in designing for the future, where some non-graphical interfaces are already commonplace and widely accepted.

We need to accept and prepare for change as it will shape our future. Alternative UIs are already here and the more accepting we are of them the more opportunity they have to enrich our world beyond the screen.

What do you think?