Multi-screen gaming experiences

By Philip Morton

At this year’s E3 trade show in Los Angeles, one of the clearest themes was multi-screen gaming.

Nintendo’s Wii U was present following its announcement in 2011, while Microsoft unveiled Xbox SmartGlass, allowing tablets and phones to interact with the Xbox 360. There were also signs of third-party developers taking a greater interest in using multiple screens.

Adding a second screen or device to a digital experience opens up a multitude of new gameplay options, but isn’t without its risks. If UX pitfalls aren’t managed well, businesses won’t be able to capitalise on the potential of multiple screens.

Divided attention

By definition, multi-screen experiences divide the user’s attention, and managing this can be tricky. At this year’s E3, a number of playable demos of the Wii U suffered from this problem. It wasn’t clear whether you were meant to be looking at the TV or the screen on the GamePad controller.

There was quite a difference between first- and third-party titles. Nintendo’s own games made more inventive use of the GamePad, instructing players to hold the controller up to the screen and move it around, switching frequently between one screen and the other. These showcased the technology well, but could be very confusing.

Games from other publishers were more conservative in their use of the second screen, with most placing the in-game map on the GamePad for reference when needed. This kept the TV as the primary focus for people’s attention, a less disorientating experience.

The key takeaway here is to ensure that users know which screen they should look at and when they should switch their attention to the other. There must be a clear reason for having information on the second screen and players shouldn’t have to jump back and forth between them too often. 

Different uses for different content

Xbox SmartGlass allows tablets and smartphones to interact with the Xbox 360 to enhance both video content and games. Once connected, a mobile device can be accessed by the console and acts as a second screen.

One example was the HBO Go app: hit series Game of Thrones played on the TV while a tablet displayed a map highlighting each location in the show. Another demo saw a game being played on the TV while a map was shown on a smartphone. Microsoft presented these two scenarios as equally valid uses of the technology, but in practice this may not be true.

Watching TV is a passive activity, leaving your hands free to hold and control another device. Gaming is different: you’re actively playing, using the controller in your hands and focusing on the TV screen. That doesn’t leave you a spare hand for the other device unless you pause the game, which suggests second screens may be an awkward fit for gameplay.

Communicating gameplay to users

It’s relatively easy to communicate traditional games to consumers: all you do is show images and videos of the gameplay on the TV. With multi-screen experiences, this is more challenging. Publishers need to help players understand what the game will be like, particularly whether they will be standing up or sitting down, and how much movement is required.

With the Wii, Xbox Kinect and PlayStation Move, all three console manufacturers have experience of marketing games with novel control mechanics, and this will be key for multi-screen gaming. As more games become multi-screen experiences, the way their use is illustrated will have to focus more on what people are doing, rather than what is on the TV.

The most promising example yet: Watch Dogs

The most impressive multi-screen experience at E3 was one that was barely publicised. Ubisoft’s Watch Dogs was the standout title of the show but its companion iPad app was only shown as an afterthought in the demo at their booth. Yet it was the app and not the game which everyone who saw the demo was talking about afterwards.

Its iPad app showed a 3D rendition of the game’s city in its signature black and white digital block style. Its streets could be navigated using simple touch gestures and locations examined, allowing you to plan missions in advance.

While the developers were non-committal over its exact features, they described being able to see your and your friends’ mission logs, follow characters in real time and help out friends in their missions while they were playing on a console. The app is intended both for use while playing the game and while away from it, something that other developers and publishers have talked about, but not shown.

Conclusions

This year’s E3 showed that the industry is experimenting with multi-screen gaming, but that no-one has yet cracked the problem of how best to deliver it. To succeed, publishers must focus on providing a compelling reason to use more than one screen and avoid compromising the user experience in the process.
 

Philip Morton

I help businesses create better products and services by putting customer insight at the heart of the design process. In the last six years, I've worked with the likes of Sony PlayStation, HSBC, Sega, Tesco and TSB. In that time, I've seen our research, design and strategy work improve both the experience for customers and commercial outcomes for clients.
