Tuesday, December 30, 2008

New Multitouch Interface Conventions, pp. 25-26

Dan Saffer's Designing Gestural Interfaces discusses emerging interface conventions, along with their various weaknesses and benefits:

Cursors
Hovers
Double-Click
Right Click
Drop Down Menus
Cut & Paste
Multiselect
Select Default Buttons
Undo


Comments
While this is a good start to a list of interface components, there needs to be a more strategic approach to this sort of thing.

Every day, I hear gripes and justifications about what doesn’t work on multitouch. The fact that many GUI/WIMP solutions, like Radio Buttons, suck doesn’t mean that an appropriate variation isn’t required.

Here are some interesting thoughts about how a new interaction framework might work:


Top Level Framework
We need some variation of WIMP. I took a stab at it a while back and called it PATA:


Special thanks to Tim Huckaby of Interknowlogy for the Angiographer demo.


Contextual Menus

For more complex applications, it's hard to imagine a world without marking menus of some sort. Here's an interesting video from the Media Interaction Lab at Upper Austria University of Applied Sciences:



Content Selection and Organization Tools

One of the key issues in direct manipulation is how to sort content/icons. Multitouch needs a consistent framework for doing this…
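To give a flavor of what such a framework could include (this is just a sketch of mine, not something from the book), one likely building block is a lasso gesture that selects every icon its path encloses. The data shapes and hit test below are illustrative assumptions:

```typescript
// Illustrative sketch: lasso selection as one piece of a multitouch
// selection/organization convention. Data shapes are assumptions.

interface Point { x: number; y: number; }
interface Icon { id: string; at: Point; }

// Standard ray-casting test: is a point inside the closed lasso path?
function inside(p: Point, path: Point[]): boolean {
  let hit = false;
  for (let i = 0, j = path.length - 1; i < path.length; j = i++) {
    const a = path[i], b = path[j];
    if ((a.y > p.y) !== (b.y > p.y) &&
        p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x) {
      hit = !hit;
    }
  }
  return hit;
}

// Select every icon whose center falls inside the traced lasso.
function lassoSelect(icons: Icon[], lassoPath: Point[]): Icon[] {
  return icons.filter(icon => inside(icon.at, lassoPath));
}

const icons: Icon[] = [
  { id: "photo-1", at: { x: 10, y: 10 } },
  { id: "photo-2", at: { x: 90, y: 90 } },
];
const lasso: Point[] = [{ x: 0, y: 0 }, { x: 40, y: 0 }, { x: 40, y: 40 }, { x: 0, y: 40 }];
console.log(lassoSelect(icons, lasso).map(i => i.id)); // ["photo-1"]
```

The interesting design question isn't the hit test; it's agreeing on one selection gesture so it behaves the same way everywhere.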



What's still missing?

7 comments:

  1. I don't like how Dan just throws out things like Undo, Cut/Paste, and Multiselect as things that don't work well in gestural interfaces. While reading, I wondered whether the same arguments were made during the transition from command-line to GUI applications, when WIMP interfaces were starting out.

    "How can you select more than one file to move with a pointer without typing in a list of files?"

    It's a naive question in both cases. We have developed many ways to do that. Same thing will happen with gestural and multitouch interfaces. It's up to the application designer. (Also the industry to standardize a bit.)

    Some of the things on that list belong on Layer 4: Intent (adding onto my post on a previous blog.) The application will take a gesture or set of gestures and figure out that the user intends to undo, multiselect, cut/paste, whatever. The second video you posted shows this perfectly.
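    To make that concrete, here is a rough sketch of what such an intent layer might look like. The gesture and intent names are made up for illustration; agreeing on the actual mapping is exactly the standardization problem I mean.

    ```typescript
    // Hypothetical "intent layer": recognized gestures map to application
    // intents instead of being hard-wired to widgets. Names are illustrative.

    type Gesture = "two-finger-swipe-left" | "lasso" | "pinch-in" | "double-tap";
    type Intent = "undo" | "multiselect" | "cut" | "open";

    // One possible convention; the mapping itself is what would get standardized.
    const intentFor: Record<Gesture, Intent> = {
      "two-finger-swipe-left": "undo",
      "lasso": "multiselect",
      "pinch-in": "cut",
      "double-tap": "open",
    };

    function dispatch(gesture: Gesture): Intent {
      const intent = intentFor[gesture];
      console.log(`gesture "${gesture}" -> intent "${intent}"`);
      return intent;
    }

    dispatch("lasso"); // gesture "lasso" -> intent "multiselect"
    ```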

  2. I'm not completely sold on PATA.

    Aren't Places and Things the same?
    And aren't Auras really just special Animations?

    That leaves Things and Animations. Your thoughts on those two are good. (I think Objects is a better name than Things, though.) Perhaps Gestures would be a good addition.

    I've seen other discussions on what comes after WIMP. One had this analogy that I liked:

    GUI : WIMP : WYSIWYG (What you see...)
    NUI : * : WYDIWYG (What you do is what you get)

    * = Objects, Animations, and Gestures?

  3. Auras. I never thought of that but it is interesting. My question is, are auras really a necessary requirement for multitouch the same way a pointer is in WIMP? I'm not convinced of that.

  4. The purpose is actually slightly different...Pointers are useful for 2 reasons:

    1. To help us track the relationship between mouse and screen.

    2. To confirm that user intent has been registered.


    Touch is innately good at the first objective...not so good at the second.

    Without Auras, how could users know that the Surface is "alive" when touching non-interactive areas of the screen?
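    As a rough sketch (the canvas id, colors, and timings are placeholders of mine), an aura could be as simple as a short-lived ring drawn under every new contact, whether or not it hits anything interactive:

    ```typescript
    // Sketch: draw a fading ring ("aura") under every touch so the user gets
    // confirmation that the contact registered, even on non-interactive areas.
    // The element id, lifetime, and styling are placeholder assumptions.

    const canvas = document.getElementById("surface") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;

    interface Aura { x: number; y: number; born: number; }
    const auras: Aura[] = [];
    const LIFETIME_MS = 300;

    canvas.addEventListener("touchstart", (e: TouchEvent) => {
      for (const t of Array.from(e.changedTouches)) {
        auras.push({ x: t.clientX, y: t.clientY, born: performance.now() });
      }
    });

    function render(now: number): void {
      // Drop auras that have finished fading.
      while (auras.length > 0 && now - auras[0].born > LIFETIME_MS) auras.shift();

      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (const a of auras) {
        const t = (now - a.born) / LIFETIME_MS;            // 0 -> 1 over lifetime
        ctx.beginPath();
        ctx.arc(a.x, a.y, 20 + 30 * t, 0, Math.PI * 2);    // ring grows...
        ctx.strokeStyle = `rgba(120, 200, 255, ${1 - t})`; // ...and fades out
        ctx.lineWidth = 4;
        ctx.stroke();
      }
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);
    ```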

  5. In the PATA model,

    Places refer to the virtual environment.

    Things refer to physical artifacts.

    WYDIWYG ~ I think Ali Vassai came up with that one...very clever.

    I appreciate the comments. This is a concept worth kicking around. Keep 'em coming.
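    For what it's worth, here is one speculative way to write the PATA vocabulary down as types, just to make the distinctions concrete. The field names are my own guesses from the definitions above, not part of any spec:

    ```typescript
    // Speculative encoding of PATA (Places, Auras, Things, Animations) as types.
    // Field names are guesses made for illustration only.

    interface Place {            // the virtual environment the user is working in
      name: string;
      contentIds: string[];      // items that live in this place
    }

    interface Thing {            // a physical artifact recognized on the surface
      tagId: string;             // however the hardware identifies the object
      position: { x: number; y: number };
    }

    interface Aura {             // feedback confirming that a touch registered
      x: number;
      y: number;
      ageMs: number;
    }

    interface Animation {        // a transition that communicates a state change
      targetId: string;          // the place or thing being animated
      kind: "enter" | "exit" | "move";
      durationMs: number;
    }

    // Together these would play the role Windows/Icons/Menus/Pointer play in WIMP.
    const photoTable: Place = { name: "photo-table", contentIds: ["photo-1", "photo-2"] };
    console.log(photoTable);
    ```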

  6. For multitouch to move beyond novelty, it needs to support complex applications and expert users.

    Many of the problems JoshB describes are solvable through a more complex gesture library.

    For example, there are three powerful pretenders to the right click: "touch and hold", "two-fingered touch", and "double tap".

    These gestures aren't used because they are hard to discover, making them inappropriate for the public spaces that Dan envisions...but fine for expert users.
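    Here is a rough sketch of how the three pretenders might be told apart from timing and finger count alone; the thresholds and the event shape are assumptions, not any platform's standard:

    ```typescript
    // Sketch: classify a completed touch as one of the "right-click pretenders".
    // Thresholds are illustrative guesses, not values from any toolkit.

    type Pretender = "touch-and-hold" | "two-finger-touch" | "double-tap" | "plain-tap";

    interface TouchSample {
      fingers: number;         // how many contacts were down
      durationMs: number;      // time from touch-down to touch-up
      sincePrevTapMs: number;  // gap since the previous tap ended
    }

    const HOLD_MS = 600;       // longer than this counts as a hold
    const DOUBLE_TAP_MS = 300; // two taps closer than this count as a double tap

    function classify(s: TouchSample): Pretender {
      if (s.fingers >= 2) return "two-finger-touch";
      if (s.durationMs >= HOLD_MS) return "touch-and-hold";
      if (s.sincePrevTapMs <= DOUBLE_TAP_MS) return "double-tap";
      return "plain-tap";
    }

    console.log(classify({ fingers: 1, durationMs: 800, sincePrevTapMs: 5000 })); // touch-and-hold
    console.log(classify({ fingers: 2, durationMs: 120, sincePrevTapMs: 5000 })); // two-finger-touch
    console.log(classify({ fingers: 1, durationMs: 100, sincePrevTapMs: 150 }));  // double-tap
    ```

    Recognition is the easy part; the discoverability problem above is what actually keeps these gestures out of public installations.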

  7. I wanted to share my thoughts from my perspective (sorry about the length!).

    (I'm a middle-aged female who didn't take computer courses until about 5 years ago, at first, to learn how to make web-based interactive games that could be used on interactive whiteboards as well as mobile devices.)

    WYDIWYG is a good way of looking at the interaction. I like the way 3D is used in the PATA video clip, and I think the aura concept can be broadened to the use of subtle music/sounds as well.

    Thoughts:
    I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATMs, and POS touch screens are examples of what I'm talking about.

    Touch and hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!).

    The mouse-interaction "pretenders" are fine for using legacy productivity applications, and OK in the short run.

    For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen!

    There is so much more that can be done!

    I know from the touch-screen prototype/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.

    What do people DO, really?

    First of all, we are social beings, most of us.

    Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction.

    Here are some of the things I've been DOING recently that involved some sort of technology + communication/collaboration with others:

    Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as planned my shore excursions.

    Picture sharing- I came back from the cruise with lots of pictures that I uploaded on Flickr.

    Related to this process:
    Picture annotating, tagging, choosing/comparing, and editing: it would be SO cool if I could use two sliders to enhance my pictures just so! Maybe a multi-touch widget could help me with this.

    Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!

    Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)

    Using the touch-screen to check-in at my eye-doctor's office:

    This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use, and time-consuming!

    Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way."

    Flat panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. And Wal-Mart TV rolled on-and-on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices & too much fine print to read.

    Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!

    Some suggestions:

    I think the artists/designers (even dancers) who are interested in multi-touch and gesture interaction have some interesting things to consider.

    PixelSumo's posts about multi-touch and surface interaction applications

    Hand Dance Actualization Machine Inspired by Rave Glowsticking
