Tuesday, December 30, 2008

Attributes of Gestures, pp. 22-25



According to Dan Saffer's Designing Gestural Interfaces, gestures can be made up of these components:

Presence

Position

Motion

Pressure

Size

Orientation

Including Objects

Number of Touchpoints / Combinations of Touchpoints

Sequence

Number of Participants
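Not from the book, but as a rough sketch of how these attributes might show up in software, each sensed touch could be modeled as a record like the one below (all field and type names are invented for illustration):

    // Hypothetical sketch only: one way Saffer's attributes might map onto
    // the data a gestural system records for each sensed touch.
    interface TouchRecord {
      present: boolean;                     // Presence: is anything being sensed at all?
      position: { x: number; y: number };   // Position: where the touch is
      motion: { dx: number; dy: number };   // Motion: change in position since the last sample
      pressure?: number;                    // Pressure: 0..1, if the hardware reports it
      size?: number;                        // Size: contact area, if reported
      orientation?: number;                 // Orientation: angle of the contact, in degrees
      objectTag?: string;                   // Including Objects: e.g. a tagged object set on the surface
    }

    interface GestureSample {
      touches: TouchRecord[];               // Number of Touchpoints / Combinations of Touchpoints
      sequenceIndex: number;                // Sequence: where this sample falls in a series
      participantId?: string;               // Number of Participants, if the system can tell users apart
      timestampMs: number;
    }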

Comments

In the interest of creating more memorable buckets, how about this:

Gestural devices measure presence, intent and identity by sensing:

Rate

How often an event occurs

Order

The sequence in which events occur

Position

Where the events occur
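To make those buckets concrete, here is a rough sketch (field names and event shape invented, not from the post) of how a stream of sensed events could be boiled down to rate, order, and position:

    // Hypothetical sketch: summarizing a stream of sensed events into the
    // three buckets above.
    interface SensedEvent {
      kind: string;                         // e.g. "touch-down", "touch-up"
      position: { x: number; y: number };
      timestampMs: number;
    }

    function summarize(events: SensedEvent[]) {
      if (events.length === 0) {
        return { rate: 0, order: [], positions: [] };
      }
      const spanSec =
        (events[events.length - 1].timestampMs - events[0].timestampMs) / 1000;
      return {
        rate: spanSec > 0 ? events.length / spanSec : events.length, // how often events occur (per second)
        order: events.map(e => e.kind),                              // the sequence in which events occur
        positions: events.map(e => e.position),                      // where the events occur
      };
    }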


WHAT DO YOU THINK?


1 comment:

  1. There are different layers of the gestural interface. I'm thinking of this as similar to the TCP/IP stack. Here is my Gestural Application Stack:

    Layer 0: Device
    Raw sensor data is measured

    Layer 1: Event
    Dan's list could be re-interpreted as the types of data within an event (which are determined by the type of sensor; see his list of sensor types). Each event can have one or more data types. For example, touch a Microsoft Surface and it can detect Presence, Position, Size, Orientation, and Objects.

    Layer 2: Gesture
    Here is where your Rate, Order, and Position go, but really it's up to the application or API to collect all the events within a timeslice, organize them, and recognize them as gestures.
    Building on the above example, touching the Surface by itself is not a gesture, but if you touch and release within a certain timeframe, it becomes a tap gesture. Alternatively, if you touch and hold for longer, or move your finger, then release, it could become a hold gesture or a move gesture. Each gesture consists of multiple events (see the sketch at the end of this comment).

    The same set of events could be interpreted as different gestures, depending upon what the application is expecting or cares about. That move gesture could instead be a hold gesture if the application doesn't care whether the user moves the finger a little or a lot.

    Your comment "Gestural devices measure presence, intent and identity by sensing:" would be more accurate as:
    "Gestural applications recognize the user's (or users') intent by interpreting:"

    (Presence and identity really belong on layer 0, depending upon whether a sensor supports those.)
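    As a rough illustration of Layers 1 and 2 above (a sketch only; the event shape, names, and thresholds are invented, not from any real API): the application gathers the raw events in a timeslice and decides whether they add up to a tap, a hold, or a move.

        // Hypothetical sketch: a Layer 2 recognizer built on Layer 1 events.
        // All names and thresholds are invented for illustration.
        interface RawTouchEvent {
          type: "down" | "move" | "up";
          x: number;
          y: number;
          timeMs: number;
        }

        const TAP_MAX_MS = 300;   // touch and release within this window -> tap
        const MOVE_MIN_PX = 10;   // travel at least this far -> move; otherwise hold

        function recognize(events: RawTouchEvent[]): "tap" | "hold" | "move" | "none" {
          const down = events.find(e => e.type === "down");
          const up = events.find(e => e.type === "up");
          if (!down || !up) return "none";              // incomplete timeslice: not a gesture yet

          const durationMs = up.timeMs - down.timeMs;
          const distancePx = Math.hypot(up.x - down.x, up.y - down.y);

          if (distancePx >= MOVE_MIN_PX) return "move"; // the finger travelled: move gesture
          if (durationMs <= TAP_MAX_MS) return "tap";   // quick touch-and-release: tap
          return "hold";                                // stayed put past the tap window: hold
        }

    Raising MOVE_MIN_PX is exactly the "doesn't care if the user moves the finger a little bit" case above: the same set of events would then come out as a hold instead of a move.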
