Tuesday, March 24, 2009

Keeping Users Oriented in Complex Multitouch Interfaces

Last week, I started a series of posts about the greatest interface challenges for multitouch applications.

People love multitouch because it feels natural and easy to use. There are a couple of reasons for this:
1. Users are able to directly manipulate content in a nonmodal interface

2. Users are able to use kinesthetic (touch) memory to remember where things are and how to accomplish tasks, just as they would with other physical tasks.

These requirements create almost insurmountable layout limitations for a complex system.

Let's start with the first limitation:

Fitts' Law

Fitts' law is a remarkably successful predictor of appropriate button size in multitouch systems.
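A rough sketch of how the law sizes targets, using the Shannon formulation MT = a + b * log2(D/W + 1). The coefficients a and b below are illustrative stand-ins; real values have to be fit experimentally for a given device and user population:

```python
import math

# Fitts' law, Shannon formulation: MT = a + b * log2(D / W + 1)
#   MT = predicted movement time (s), D = distance to target, W = target width.
# a and b are illustrative stand-ins, not measured coefficients.
a = 0.2   # seconds (assumed intercept)
b = 0.1   # seconds per bit of difficulty (assumed slope)

def movement_time(distance_mm, width_mm):
    """Predicted time in seconds to acquire a target."""
    return a + b * math.log2(distance_mm / width_mm + 1)

def min_width(distance_mm, time_budget_s):
    """Smallest target width that keeps predicted time within the budget."""
    # Invert the model: W = D / (2 ** ((MT - a) / b) - 1)
    return distance_mm / (2 ** ((time_budget_s - a) / b) - 1)

# Example: a target 400 mm away that must be hit in about 0.6 s
# needs to be roughly 27 mm wide -- big, by desktop-UI standards.
print(round(min_width(400, 0.6)))  # -> 27
```

Buttons that large eat screen real estate fast, which is exactly the problem below.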



If a common interface like Microsoft Word were modified for Surface using Fitts' Law, the screen would look something like this.

Ignoring the fact that data entry applications don't work well in multitouch, it's easy to see a much larger problem: There's no room for the actual content.

Modal Interfaces

Simply putting all of the controllers on screen (actually, fewer than 1/10 of Word's 1,800 features are directly accessible in this view) makes the interface unusable.

The only way to enable a feature-rich application is to HIDE A LOT OF CONTROLLERS.

This strategy is called a Modal Interface. If you Google the term, you will find nothing but vehement opposition to it. HCI people have an almost religious hatred for this stuff...because it's hard to learn, it provokes user error, and it's frustrating as hell when users don't know what mode they're in. Suddenly nothing works.

When interfaces are highly modal, they are, by nature, less intuitive. For these interfaces to work, users need to know that modal and "quasi-modal" solutions (content and controllers that appear on command and disappear when released) exist, how to call them up, and how to release them.

This can rapidly become overwhelming to new users...even in a relatively simple, superbly designed application like the tablet-enabled Alias Sketchbook Pro.
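To make the quasi-modal idea concrete, here is a minimal sketch of a spring-loaded tool palette that only exists while the user holds it open. The class, event names, and tools are hypothetical, not taken from any particular multitouch toolkit:

```python
class QuasiModalPalette:
    """Sketch of a spring-loaded (quasi-modal) tool palette.

    The palette only exists while a touch is held on its handle,
    so the "mode" cannot be left on by accident.
    """

    def __init__(self):
        self.visible = False
        self.active_tool = None

    def on_handle_press(self):
        # Press-and-hold summons the palette (enters the quasi-mode).
        self.visible = True

    def on_drag_over(self, tool_name):
        # While held open, dragging over a tool selects it.
        if self.visible:
            self.active_tool = tool_name

    def on_handle_release(self):
        # Lifting the finger commits the selection and dismisses the
        # palette, leaving no persistent mode behind.
        self.visible = False
        chosen, self.active_tool = self.active_tool, None
        return chosen

# Hypothetical event sequence:
palette = QuasiModalPalette()
palette.on_handle_press()
palette.on_drag_over("brush")
print(palette.on_handle_release())  # -> brush
```

Because the mode ends the moment the touch lifts, users can never be stranded in a state they don't realize they're in.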


State Orientation

For modal interfaces to work, the user must always know the current application state and which modifiers are available. This can be as simple as changing the visual treatment of content and controllers to show which application elements are currently active, like these buttons from the Vista Application Bar.
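One lightweight way to keep that state visible is to derive every controller's treatment from a single source of truth, so the display can never disagree with the actual mode. The mode names and rendering below are purely illustrative:

```python
# Illustrative only: derive every button's treatment from the current mode
# so the on-screen state can never drift out of sync with the real state.
MODES = ("select", "draw", "erase")

def render_toolbar(current_mode):
    """Render the toolbar as text, marking the active mode."""
    buttons = []
    for mode in MODES:
        if mode == current_mode:
            buttons.append("[{}]".format(mode.upper()))   # active treatment
        else:
            buttons.append(" {} ".format(mode))           # inactive treatment
    return " | ".join(buttons)

print(render_toolbar("draw"))  # ->  select  | [DRAW] |  erase
```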


Often, in multitouch applications, more is required. The application needs to show how multiple pieces of content relate. As seen in the Reactable demo below, this can be extremely "pixel inefficient".


When users approach new interfaces that have numerous possible states, it is remarkably difficult to avoid confusion and keep them oriented. Doing so requires the extensive use of affordances, and each of these affordances needs to be large enough for the user to actually notice state changes.

This leads us full circle to the maximum usable size of a multitouch surface: the comfortable breadth of an outstretched arm...or about 36". Bottom line: in multitouch, there will never ever ever be enough pixels.

What do you think? What ideas do you have for getting around these usability traps?
