Tuesday, March 17, 2009

The Problem with Zooming Interfaces (ZUIs)

Recently, I've been looking at using zooming interfaces in multitouch, like the much-vaunted Seadragon and Google Earth. The underlying problem with these sorts of interfaces is that, while direct manipulation of the content is extremely compelling, it's very difficult to index between pieces of content that have weak relationships to each other.

It's easy to get from this:


To this:


...but how do users efficiently get from one part of the image map or datacloud to another without a structured hierarchy (zooming is effectively a way to move up and down a tree)?



To get to highly unrelated data over here:



This still strikes me as one of the top three unsolved interface issues in multitouch (more to come on the other two). In GUIs, this function would typically be handled by a search box or a similar command-line interface.
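
To make the tree intuition concrete, here's a minimal sketch; the hierarchy and node names below are hypothetical, not taken from any real ZUI:

```python
# Sketch: zooming as tree traversal. Travelling between two weakly
# related leaves means zooming out to their lowest common ancestor
# and back in, so the cost grows with the depth of the hierarchy.

def path_to_root(node, parent):
    """Return the ancestors of `node`, from itself up to the root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def zoom_steps(a, b, parent):
    """Zoom-outs plus zoom-ins needed to travel from leaf `a` to `b`."""
    up, down = path_to_root(a, parent), path_to_root(b, parent)
    down_set = set(down)
    common = next(x for x in up if x in down_set)  # lowest common ancestor
    return up.index(common) + down.index(common)

# A toy three-level hierarchy of content tiles (entirely made up):
parent = {
    "yes": "cluster1", "alcest": "cluster9",
    "cluster1": "region_a", "cluster9": "region_d",
    "region_a": "root", "region_d": "root",
}

print(zoom_steps("yes", "alcest", parent))  # 6 zoom operations
```

A search box, by contrast, jumps straight to the target in a single action, no matter how far apart the two items sit in the tree.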

Google is a great example of an interface that uses this strategy to move efficiently from one link in a datacloud to another ... but multitouch interfaces are fantastically bad at character input.

How would you go about doing this in multitouch?

7 comments:

  1. We've overcome this with the use of the zoom ring. For those who haven't seen this before, it's demonstrated in this video:
    http://www.youtube.com/watch?v=JISuTo3ALm4
    and in a more practical application at 2:00 in this video:
    http://www.youtube.com/watch?v=prDBRCKF66o

  2. What about "fish eye" map distortion methods as a way to zoom without losing context? (There are lots of CHI papers on it if you Google it.) Or am I misunderstanding the question?
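
    For reference, here's a minimal 1-D sketch of the classic Sarkar-Brown fisheye transform behind many of those distortion methods; the focus position, distortion factor, and sample coordinates are invented for illustration:

    ```python
    # 1-D sketch of the Sarkar-Brown graphical fisheye transform: points
    # near the focus are magnified, distant points are compressed, and
    # the overall extent of the view is preserved.

    def fisheye(x, focus, d=4.0):
        """Map a normalized coordinate x in [0, 1] to its distorted
        position; `d` is the distortion factor (d = 0 leaves x unchanged)."""
        sign = 1 if x >= focus else -1
        span = (1.0 - focus) if sign > 0 else focus  # room toward the edge
        if span == 0:
            return x
        t = abs(x - focus) / span                    # normalized distance
        g = ((d + 1) * t) / (d * t + 1)              # magnifies near t = 0
        return focus + sign * g * span

    # Points near the focus (0.5) spread apart; distant points bunch up.
    for x in [0.50, 0.55, 0.70, 0.95]:
        print(f"{x:.2f} -> {fisheye(x, focus=0.5):.3f}")
    ```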

  3. Hi Kevin and JoshB,

    I love magic magnifiers and I'm well aware of the variety of magic magnifiers out there.

    The problem with these is that, if users are going to maintain their orientation, either the background or the foreground images must be limited in size. If the maximum usable area on a tabletop display is ~35" diagonal, this really means that you can only have one magic magnifier open at any one time.

    One solution to the maximum display size issue is to use overlap and Z-depth (as in the images in the illustration). In cases like this, changes in magnification can affect perceived z-depth, disorienting users.

  4. Another way to zoom and not lose context so much is to use head position to trigger extreme zooming. There was a "lean into the picture" paper at CHI last year that demonstrated this. The same thing has also been done in immersive VR applications. (There was UNC or UVA work on this, I believe.) Leaning beyond a certain point triggers zooming.

    This idea won't work on multi-user surfaces, of course...
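
    A rough sketch of how such a lean-to-zoom mapping might work, assuming a head tracker that reports the user's distance from the display; the rest distance, threshold, and gain are invented values, not figures from the papers mentioned above:

    ```python
    # Hypothetical lean-to-zoom mapping: once the user's head crosses a
    # lean threshold, further travel toward the display drives the zoom
    # factor. All constants are made up for illustration.

    import math

    REST_DISTANCE = 0.60   # metres from the display at a neutral posture
    LEAN_THRESHOLD = 0.10  # metres of lean before zooming engages
    ZOOM_GAIN = 8.0        # how aggressively lean maps to magnification

    def zoom_for_head_distance(distance_m):
        """Return a zoom factor given the tracked head-to-display distance."""
        lean = REST_DISTANCE - distance_m
        if lean <= LEAN_THRESHOLD:
            return 1.0  # inside the dead zone: no zoom, stable overview
        # An exponential response keeps small leans gentle but lets a
        # deep lean trigger the "extreme zooming" described above.
        return math.exp(ZOOM_GAIN * (lean - LEAN_THRESHOLD))

    for d in [0.60, 0.52, 0.45, 0.35]:
        print(f"head at {d:.2f} m -> zoom x{zoom_for_head_distance(d):.1f}")
    ```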

  5. It's an interesting problem, and a cross-section of multitouch and data visualization. Personally, I prefer the call-out (as in the third image down) to the magnifier. The magnifier works well, but it also obscures the macro-level view, making it more difficult to figure out where you are. If that makes sense.

  6. @ Richard,

    This is the essence of the problem... if the two data points of interest are highly unrelated, how does one navigate between them in a hierarchical or relational visualization (like a zooming interface)?

    If the user wanted to access two specific links, for instance, the link "yes" in illustration #2 and the link "alcest" in illustration #4, how might the user know where they are in the first place?

    A constantly available small map of the whole datacloud seems like a great feature, but it is nowhere near as powerful or flexible as the simplest search box.

    @ Kevin,

    There were some interesting demos of this at NYU a couple of years ago as well. I agree with you that gaze tracking has a lot of potential in vertically mounted interfaces.

    Beyond sensing scale, cameras that track the user's eyes also provide really rich contextual data. I really look forward to seeing this technology evolve.

  7. In my opinion there are two major issues here. The first is how to find something in such a dataset without a search box; the second is how to navigate it effectively using a multitouch interface.

    Since these datasets arrange the data according to the relations between the data points, some clustering is usually possible. Displaying the clusters can improve the searchability of the dataset for the user. Introducing hierarchical clustering, where smaller and smaller clusters are displayed as one zooms in, would help even more. We are still nowhere near a search box in terms of efficiency, though.
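
    A sketch of that hierarchical-clustering idea, assuming scipy is available: cut the cluster tree at a distance cutoff tied to the zoom level, so coarse clusters show when zoomed out and finer ones appear as the user zooms in. The data and the 4.0/zoom mapping are invented for illustration:

    ```python
    # Zoom-dependent hierarchical clustering: cut the cluster tree at a
    # distance cutoff tied to the zoom level, so fewer, coarser clusters
    # are labelled when zoomed out and finer ones emerge on zooming in.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Three loose blobs standing in for a datacloud of related items.
    points = np.vstack([
        rng.normal(loc, 0.3, size=(20, 2))
        for loc in [(0, 0), (4, 0), (2, 3)]
    ])

    tree = linkage(points, method="average")

    def clusters_at_zoom(zoom):
        """Higher zoom -> smaller distance cutoff -> finer clusters."""
        cutoff = 4.0 / zoom
        return fcluster(tree, t=cutoff, criterion="distance")

    for zoom in [1, 4, 16]:
        labels = clusters_at_zoom(zoom)
        print(f"zoom x{zoom}: {labels.max()} clusters visible")
    ```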

    For navigation, I would implement a method to navigate along the relations. The user could look around by scrolling with one finger, scroll with two fingers, and zoom along relations with the two-fingered zooming gesture. Zooming along a relation would also gradually align the camera to a viewpoint above the line representing the relation. Which relation to choose for zooming would be determined by the midpoint of the line between the two fingers as they are placed on the touch surface; the chosen relation would light up. To aid orientation, a thumbnail overview of the whole dataset should be displayed in one corner, with an arrow representing the position and orientation of the user's view.
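
    A sketch of the relation-picking step just described: the midpoint between the two touch points selects the nearest on-screen relation, which would then light up and steer the zoom. This covers the geometry only; the camera animation and corner thumbnail are omitted, and all coordinates are invented:

    ```python
    # Relation picking for two-fingered zoom: the midpoint of the two
    # touch points chooses the nearest edge (relation) to zoom along.

    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the line segment a-b (2-D tuples)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        length_sq = dx * dx + dy * dy
        if length_sq == 0:
            return math.hypot(px - ax, py - ay)
        # Project p onto the segment, clamping to the endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def pick_relation(touch1, touch2, edges):
        """Choose the edge nearest the midpoint between the two fingers."""
        mid = ((touch1[0] + touch2[0]) / 2, (touch1[1] + touch2[1]) / 2)
        return min(edges, key=lambda e: point_segment_distance(mid, *e))

    # Two hypothetical relations between data points, and one pinch.
    edges = [((0, 0), (10, 0)), ((0, 5), (10, 8))]
    print(pick_relation((4, 1), (6, 2), edges))  # nearest edge lights up
    ```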
