Tuesday, December 30, 2008

New Multitouch Interface Conventions, pp. 25-26

<-- Use table of contents at left to search Book Club comments

Dan Saffer's Designing Gestural Interfaces discusses emerging interface conventions and their various weaknesses and benefits:

Right Click
Drop Down Menus
Cut & Paste
Select Default Buttons

While this is a good start to a list of interface components, there needs to be a more strategic approach to this sort of thing.

Every day, I hear gripes and justifications about what doesn’t work on multitouch. The fact that many GUI/WIMP solutions, like Radio Buttons, suck doesn’t mean that an appropriate variation isn’t required.

Here are some interesting thoughts about how a new interaction framework might work:

Top Level Framework
We need some variation of WIMP. I took a go at it a while back and called it PATA:

Special thanks to Tim Huckaby of InterKnowlogy for the Angiographer demo.

Contextual Menus

For more complex applications, it's hard to think about a world without marking menus of some sort. Here's an interesting video from the Media Interaction Lab at Upper Austria University of Applied Sciences:

Content Selection and Organization Tools

One of the key issues in direct manipulation is how to sort content/icons. Multitouch needs a consistent framework for doing this…

What's still missing?

Multitouch Design Job: Museum Exhibits

Ideum, a New Mexico-based group that does interactive design and development for museums, is looking for a "devigner"...seems like an interesting position.

Here's the link:

Attributes of Gestures, pp. 22-25


According to Dan Saffer's Designing Gestural Interfaces, gestures can be made up of these components:

Presence

Position

Motion

Pressure

Size

Orientation

Including Objects

Number of Touchpoints / Combinations of Touchpoints

Sequence

Number of Participants.
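For those of us building recognizers, these components map fairly directly onto the fields of a touch/gesture data structure. Here's a rough sketch in Python (the field names, types and defaults are my own assumptions, not from the book or any particular SDK):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Touch:
    """One sensed contact, carrying the per-touch components listed above
    (presence is implied by the Touch object existing at all)."""
    x: float                      # position
    y: float
    dx: float = 0.0               # motion since the previous frame
    dy: float = 0.0
    pressure: float = 0.0         # 0.0-1.0, if the hardware reports it
    size: float = 0.0             # contact area
    orientation: float = 0.0      # blob angle, in radians
    tag_id: Optional[int] = None  # set when a tagged object, not a finger, touches

@dataclass
class Gesture:
    """A whole gesture: its touches plus the aggregate components."""
    touches: List[Touch] = field(default_factory=list)  # number/combination of touchpoints
    sequence: List[str] = field(default_factory=list)   # the order in which events occurred
    participants: int = 1                               # number of participants

g = Gesture(touches=[Touch(10.0, 20.0), Touch(40.0, 25.0)])
print(len(g.touches))  # prints 2
```

Whatever the actual field names, having all ten attributes in one place makes it easier to reason about which of them a given device can actually sense.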


In the interest of creating more memorable buckets, how about this:

Gestural devices measure presence, intent and identity by sensing:

- How often an event occurs
- The sequence in which events occur
- Where the events occur
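These three buckets (frequency, sequence, location) are enough to drive simple recognizers. As an illustrative sketch, with thresholds that are my own guesses rather than anything from the book, here's how a double-tap might be distinguished from two unrelated taps:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """A single sensed contact: when and where it occurred."""
    timestamp: float   # seconds
    x: float           # position on the surface
    y: float

def is_double_tap(events, max_interval=0.3, max_distance=20.0):
    """Classify a pair of touch events using the three buckets:
    how often (two events inside a short window), in what sequence
    (the second follows the first), and where (both land in roughly
    the same spot)."""
    if len(events) != 2:
        return False
    first, second = sorted(events, key=lambda e: e.timestamp)              # sequence
    close_in_time = (second.timestamp - first.timestamp) <= max_interval   # frequency
    close_in_space = (abs(second.x - first.x) <= max_distance and
                      abs(second.y - first.y) <= max_distance)             # location
    return close_in_time and close_in_space

taps = [TouchEvent(0.00, 100, 100), TouchEvent(0.20, 104, 98)]
print(is_double_tap(taps))  # prints True
```

The same three measurements, with different windows and distances, cover taps, holds, flicks and multi-finger chords.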


Characteristics of Good Gestural Interfaces, pp. 19-22


From Dan Saffer's Designing Gestural Interfaces, pp. 19-22:

Discoverable, Trustworthy, Responsive, Meaningful, Smart, Clever, Playful, Pleasurable, Good.


Here are my criteria for a good interface:


Interactions are easy to discover, get to and repeat
(unless they have the potential to be destructive or security risks).

The features actually make life easier by either:
- Adding functionality
- Combining sets of commands in a more intuitive way
- Increasing navigability

The features (bloat included) help users achieve tasks they care about
by decreasing the effort, frustration and knowledge required to accomplish their task.

Dan’s insightful and exhaustive list seems to fit into these categories.

Josh Blake of Infostrat, Lynn Marentette of interactivemultimediatechnology.com and I have already started a discussion about this at:


Please, chime in!

Chapter 1, Part 3 Outline: How to Design a Gesture


Here's the next bit of the outline

7. Gestural Interfaces

a. Characteristics of:

i. Discoverable

ii. Trustworthy

iii. Responsive

iv. Meaningful

v. Smart

vi. Clever

vii. Playful

viii. Pleasurable

ix. Good

b. Attributes of Gestures

i. Presence

ii. Position

iii. Motion

iv. Pressure

v. Size

vi. Orientation

vii. Including Objects

viii. Number/combinations of Touchpoints

ix. Sequence

x. Number of Participants.

c. Interface Conventions

i. Cursors

ii. Hovers

iii. Double-Click

iv. Right Click

v. Drop Down Menus

vi. Cut & Paste

vii. Multiselect

viii. Select Default Buttons

ix. Undo

d. States and Modes

i. Switching between states and modes should be avoided.

8. Determining the Appropriate Gesture

a. Elements of a Gesture

i. Sensors

ii. Steps in Task

iii. Physiology of Body

b. Using People to Determine the Right Gesture

i. If you watch users in situ, they will often act out the right gesture, naturally.

c. Matching the Gesture to the Behavior

i. There is generally an intuitive freeform gesture for any given command.

Tuesday, December 23, 2008

New App From Razorfish Atlanta

Microsoft Surface Financial Services Application - Razorfish Demo from razorfish - emerging experiences on Vimeo.

Deciding to build a new interaction is a complex decision. The choice should really boil down to three questions, though:
Is it Discoverable?
Is it Useful?
Is it Desirable?

Two key interactions in this app never passed these filters. As a result, they make the app very difficult to use.

Object Recognition

- How would a casual user know to put coins on the table? (A written message to put a coin on the table isn't the answer.)
- Why is this useful? (A touchscreen could do this much more easily.)
- Does this improve users' lives, or is it a gimmick?

Changing Modes and Expanding the "Ring"

- How would a user know to do this? (The app has a little animation that shows users what to do. That's how unintuitive the interaction is.)
- Isn't there an easier, more direct way to change modes (like simply tapping the little chits that are attached to the coins)?
- While I get that it's useful to know more about my bank's services, why would I want to do it in a Surface ScatterView?

Your Process Creates Your Product

I am certain that there was a lack of paper prototyping on this app. It's tempting to drop right into code on a project like this: the application is small, it looks simple, and there are always painful budget constraints. Post-mortems on over a dozen apps have confirmed that these are fuzzy-headed arguments, at best.

They lead to cost overruns and usability nightmares every time.

Designing for Surface isn't like designing for GUI. There aren't any standards, so most new interactions don't work. Creating great, intuitive interfaces requires figuring them out AND USER TESTING them BEFORE investing in code.

Wednesday, December 17, 2008

Multitouch Design Jobs

Hi All,

I just ran into these:

Interaction Designer, Plano, TX

Interactive Rich Media Designer Silverlight & Microsoft Surface, San Francisco or San Jose

Happy Holidays.

The Secret to Selling Surface

Need drives adoption of B2B products. Not want.

While the first chapter of Designing Gestural Interfaces implicitly asks,
"Where might multitouch be wanted?"

As a product strategist, I think a more important question has to be answered first:

Whose big hairy problem does multitouch solve?

The first step in answering this question for a new product category, like multitouch vision systems, is identifying:
1. Features - What unique things the product does
2. Benefits - What those features might be good for

Here are the F+B's I've seen touted for multitouch vision systems.

Let’s lay out the unique features that can be designed into a vision-based system, then figure out what sort of benefits they can create.

Multitouch vision systems, like Surface, have a handful of distinct features that may be more useful in some environments than others:

Blob Orientation
Optical Tagging/Object Outline
Bitmap Analysis
Ease of Cleaning the Tabletop
Intuitive (I don't completely buy this one)

What features would you add?

To date, the pitches for Surface that we have heard fit into a few big buckets:
Cheap, Readily Available Dumb Objects
Durable & Easy to Clean
No Need for Direct Contact
Order Accuracy
Group Decision Making


Cheap, Readily Available Dumb Objects
One of the core problems with physical computing is that it is so easy for the artifacts that control it to walk off.

Domino tags make vision-based multitouch systems compelling. You can always print more. (If you think that big black blobs are ugly, it's easy enough to print them with an IR-absorbent or reflective ink. Ask your printer about varnishes.)

Here’s an interesting example from Infostrat:

Durable & Easy to Clean
Independent of what any manufacturer’s rep tells you, no matter how durable a touchscreen is, it’s gonna break and it will always be at an inconvenient time.

While projection systems require maintenance, lamp replacement schedules are highly predictable.

Touchscreen bezels are germ magnets. Vision-based systems, like Surface, can be designed with a flat, replaceable top.

Sanitary issues are huge in healthcare and public facilities. The device will either have a million hands a day on it…or be in an environment filled with biohazards.

The best medical electronics are sealed and don’t require ventilation. It is definitely possible to design a vision system with this in mind. I don’t think Surface was designed this way.

Users Don’t Need to Make Direct Contact With the Surface
Users can wear white gloves (think surgical gloves; dark colors don't work very well).

It may be possible to cover the tabletop with a thin piece of material for use in clean rooms (I haven't actually tried this and have no idea how it would impact camera calibration/false positives).

Order Accuracy
Multitouch solutions are great for kiosks that require high order accuracy and order confirmation. This is because they combine the accuracy of a barcode reader with the flexibility of a touchscreen… If only Surface could read UPC codes.

Group Decision Making (Social Computing)
The ability to work together on one screen or over telepresence.

What benefits would you add?

What sorts of clients have these big hairy NEEDS?

Surface Eye Candy from Cynergy

Cynergy plays with WPF Code Effects and Pixelshaders | Smart Particles.

Finally, a photoviewing demo without ScatterView...thank goodness!

Multitouch Demos from Electronic Visualization Lab

It seems like these transparent physical artifacts are the beginning of a rich idea. How could they be used more effectively?

Financial Planning Demo From Accenture / Avanade

As mentioned last week, banking apps seem to be the first big fad on Surface. Here's Accenture's version.

While I like the concept, I'm not sure why it is being developed for Surface instead of a lower-end capacitive product.
- It doesn't seem to leverage Surface's two differentiating factors: tagged artifacts and the ability to measure blob orientation.
- UI development could run a lot faster in Flash.
- The physical dimensions of Surface seem inappropriate for this scenario. I'm not sure where the banker's legs are supposed to go. From what I can tell, they'd have to be inside the Surface table.

A Thought
It would be nice for customers to take away a printed copy of the conversation.

1. Is a signature on Surface legally binding?
2. Usability testing I have seen suggests that, on Surface, 3D objects should be in one point perspective, with the POV at the center of the table. This way, all viewers can read the object. Do people find the 3D charts in this video difficult to read?

More notes later.

Tuesday, December 16, 2008

Please Comment on Posts About Designing Gestural Interfaces

Posts are organized on the left hand side of the page. Please leave your thoughts.

I've put up comments to date from our team for the first 19 pages. I'll be cranking on a summary of insights from this section tonight.

I'll make sure to post them in the morning, crediting you, of course!

pp. 16-19, Deciding When to Use a Gesture-Based System, Post 2

b. Gestural Interfaces are Appropriate for:
i. More Natural Interactions
ii. Less Cumbersome or Visible Hardware
iii. More Flexibility
iv. More Nuance
v. More Fun

More Natural Interactions

There’s a difference between perception and quality.

Last night, I was having dinner with a master mechanic from Boeing. He said something interesting that seems relevant:

“Today, fabrication is done to unimaginable tolerances,
but that doesn’t mean it is of higher quality.

Prior to CNC machining,
I could actually make higher quality parts
because mechanical feedback from my lathe
let me really understand the material
and how the parts would fit together.”

This speaks to what I call “The Fallacy of Computation”.

GUI's high-precision interface has removed the subtlety from our interactions. Quality comes from those minute, unmeasurable, subtle decisions we make.

How can your interface bring back that feeling of quality?

Less Cumbersome or Visible Hardware

This is a great point…and it’s why kiosks have ended up being driven by touch screens, but it ignores a whole range of new productivity applications that will use artifacts.

The reality is that many of these artifacts may need to be “throw-aways”.

I think Surface's main play is likely to be in Quick Serve Retail, for just this reason. Until DI systems can easily read UPC tags, physical computing will be limited to smart systems that can print custom optical tags on all items in the system…

It would be easy to imagine printing tags on trays, hamburger wrappers and the like.

More Flexibility

This is an oft-missed insight in business.

The key to killer interactions has nothing to do with the capability of the hardware or the device. They are just enabling factors.

Runaway tech products radically decrease the complexity of their customers' lives.

Flexibility is a key piece of this puzzle.
The iPhone SDK/OS uses multitouch to put 90% of the features/apps users want less than 3 clicks away from any other feature. When Apple started its iPhone business, it approached Verizon Wireless first, not AT&T.

Verizon looked at the iPhone's feature set/cost relative to HTC and RIM…and told Steve Jobs to buzz off.

People may buy on features, but they stay for convenience.

Nuance and Fun

Ah, nuance and fun. I believe this is what actually differentiates products, but it's oh, so hard to sell up the chain of command.

pp. 16-19, Deciding When to Use a Gesture-Based System, Post 1

a. Inappropriate for
i. Heavy Data Input
ii. Reliance on The Visual
iii. Reliance on The Physical


A random thought on innovation: It is often less profitable to consider how new technology enhances existing applications than to find new niches.

What applications does GUI do so poorly that either no one has tried a solution or operators are paid over $100/hour for it?

Heavy Data Input/Reliance on the Visual

It's an inconvenient truth that keyboards aren't going anywhere. How long did it take you to learn QWERTY? Would you put that effort into a new interface? Probably not.

This viewpoint misses the great opportunity for multitouch and what Apple calls “Data Fusion”. Keyboards were originally laid out to be clumsy and to slow down data entry…because machines couldn’t keep up with our fingers.

Might there be a touchscreen data entry solution that's better than keyboards? For what applications?

Here's a nice variation on touchscreen keyboards:

If a better data entry mechanism isn't possible, how about a contextual keyboard, similar to Art Lebedev's?

It would be easy to lay out an inexpensive, translucent keypad that could act as a projection screen, go on top of a touchscreen and provide tactile feedback on top of a device like this...

On a side note, notice the minimal angle on this table. This will make it much easier to use on an extended basis.

Reliance on the Physical

“The broader and more physical the gesture, the harder it is to do.”

This is the core problem with “drag and drop” solutions…is there a way for users to efficiently combine commands before entering them?

Inappropriate for the Context

This is the #1 issue that I see Surface Developers ignoring.

The physical environment that surrounds a product like Surface is the key to adoption.
When I was working on banking kiosks, we discovered that issues that had nothing to do with the device or software had a significant impact on adoption:
- People walking behind users
- Flat overhead lighting
- Music/auditory feedback

Paco Underhill’s Why We Buy is a great book to start learning about these issues.

pp. 12-16, Components of NUI Systems


a. Sensors – Comparators – Actuators
i. Current Sensor Types in Mass Usage

The book describes a range of sensors that are in mass usage. This is interesting, but the big question is why those sensors (and, for that matter, comparators and actuators) were adopted. When you look at the arc of adoption of all of the sensors listed above, it is inversely related to cost.

Check out this link on the Foster Curve for more info

Dan Klitsner of KID Group, one of the most successful inventors of interactive toys, uses the reliability of this metric to identify opportunities and to project which toy concepts will be most successful.

- Which sensors, comparators and actuators are most likely to freefall in price in the next two years?

- How can you leverage them?

- What is the Foster Curve for multi-touch likely to look like?

- How will that impact adoption in your space?

b. Overview of Multi-touch Systems
i. Resistive
ii. Surface Wave
iii. Capacitive/Dielectric
iv. Infrared
v. Vision Systems
1. FTIR - Frustrated Total Internal Reflection
2. DI – Diffused Illumination
a. Front
b. Rear

www.nuigroup.com is a great resource for more info about these things.

p. 12, Public Restrooms and Freeform Gestures


a. Adoption driven by:

i. Low cost sensors
ii. Cost issues
iii. Hygiene

NUI is cool...but what is going to drive sales? I like this post because it points to two key issues related to multitouch:
1. Multitouch, esp. vision systems, will have difficulty retrofitting into existing environments.
- Ambient infrared (during the elections, MSNBC had to filter all the lights in their studio in order to use Surface).
- Hygiene (esp. in healthcare environments).
- Form factor (it's hard to sit down and use Surface...there's nowhere to put your knees).

2. What's the price/performance ratio? Application development for Surface is running about $100-300k right now. I could design, prototype and get well into manufacturing an IR-controlled paper towel dispenser for that price.

Chapter 1, Part 2: How Gestural Systems are Built


Here's the next bit of the outline:
4. Public Restrooms have been an early adoption site for freeform gestures

a. Adoption driven by:
i. Low cost sensors
ii. Cost issues
iii. Hygiene

5. Components of NUI Systems
a. Sensors – Comparators – Actuators
i. Current Sensor Types in Mass Usage
1. Pressure
2. Light
3. Proximity
4. Acoustic
5. Tilt
6. Motion
7. Orientation

b. Overview of Multi-touch Systems
i. Resistive
ii. Surface Wave
iii. Capacitive/Dielectric
iv. Infrared
v. Vision Systems
1. FTIR - Frustrated Total Internal Reflection
2. DI – Diffused Illumination
a. Front
b. Rear

6. Deciding When to Use a Gesture-Based System
a. Inappropriate for:
i. Heavy Data Input
ii. Reliance on The Visual
iii. Reliance on The Physical

b. Appropriate for:
i. More Natural Interactions
ii. Less Cumbersome or Visible Hardware
iii. More Flexibility
iv. More Nuance
v. More Fun

The Designing Gestural Interfaces Book Club Is Up and Running

Hi All,

OK, I've put up the first 5+ posts and added a list of links on the left hand side.

How is the interface working? Is this navigable?

I'm off to lunch and meetings. I'll be back with more blog madness by late afternoon.

Chapter 1, Part One Takeaways

The Point & Do Perspective

Suddenly, after 25 years in the shadows, multitouch is being touted as the next big thing. Don't get me wrong, I think that gestural computing is super cool, but there's another reason the computer industry has turned its PR spotlight our way.

Computer hardware manufacturers and software publishers are having some serious problems that a multi-touch revolution would get them out of:

Hardware Problems
1. Competition squeezed the margin out of all but the highest end desktops a number of years ago.
2. Notebooks are outselling desktops.
3. Notebooks are rapidly becoming commodity products, if they aren’t already.
4. The old sources of pricing power (networking, display size and processor speed) have become, or are becoming, table stakes.

Software Problems
1. GUI has pretty much matured.
2. Few of the leading software apps will drive 10-20% growth through even more feature bloat.
3. Low-cost shareware and cloud apps are nipping at the heels of market leaders.

To get each new generation of value, customers need to buy both the new hardware and software solution. This is a nifty gambit...if it works.

A Multitouch Product Road Map
Apple, Microsoft and HP like multitouch because it provides a reliable product roadmap for the industry that is independent of expensive components. It is likely to look something like this:
Rev. 1 - Single Touch interaction
Rev. 2 - 2-point Touch Interaction.
Rev. 3 - “True” Multi-touch
Rev. 4 - Interaction with Physical objects.

Why Dan Saffer's Designing Gestural Interfaces is Important
Like any high growth market, there will be those who create real value and the hangers on. The people and organizations that succeed in any endeavor are the ones that know their niche and excel at it.

Designing Gestural Interfaces is the first meaningful attempt to put some clear definition around this field, its history and its niches.

Knowing this will help us and our clients:
- Identify our unique value
- Identify historical precedents
- Clarify market opportunities within the field

pp. 7-11, A Brief History of Gestures


Big Events in the History of Gestures

a. Started in 1982 at U of Toronto
b. First touchscreens in restaurant order systems in the late 1980s
c. First PC HP 150

d. There are a spate of “office of the future” research projects in the early 1990’s
e. Kiosks move into public facilities and retail in the late 90’s and early 00’s

f. VR gloves play a niche role in the gaming industry
g. 2006, Nintendo introduces Wii
h. 2007, Apple introduces iPhone

i. 2008:
- Other handset manufacturers introduce multitouch
- Jeff Han's hardware is featured on CNN
- Microsoft launches Surface

Edwin Schlossberg of ESIDesign once turned to me and said, "You know, interactive technology isn't new. A light switch is interactive."

Yup, gestural interaction has been bopping around in different incarnations longer than either Apple or Microsoft wants us to believe.

pp. 3-7, Types of Gestures, Post #2


Main Interface Paradigms
i. Direct Manipulation
ii. Indirect Manipulation

As multitouch applications grow more complex, these two ideas will continue to blend together. The reality is that Direct Manipulation, though an incredibly cool idea, is only appropriate in very limited circumstances. For complex multi-touch UIs to work, they will have to get past this bit of ideology and find new ways to keep content and controllers as tightly coupled as possible.

For instance, direct contact with control points in a character animation program like this might be designed so that the overall motion of the character could be understood, even if some pixels were occluded by hands:

In a scenario like the video scrubber below, the user might want to touch or circle the pixels to be tracked and then scrub off-screen. This way, the viewer can see any changes to the image.

While direct manipulation is innately more intuitive than indirect manipulation, carefully designing the way content and controllers animate into existence can do a lot to prime users about how to use them.
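The off-screen scrubbing idea above is easy to prototype. As a hypothetical sketch (the function name, parameters and pixels-per-frame ratio are all my own inventions, not from the demo), an indirect scrub control might map a horizontal drag performed away from the tracked pixels to a frame index, so the user's hand never occludes the content:

```python
def scrub_to_frame(drag_start_x, drag_x, total_frames,
                   pixels_per_frame=4.0, start_frame=0):
    """Indirect manipulation: convert a horizontal drag done off to the
    side of the selected pixels into a video frame index."""
    delta = drag_x - drag_start_x                  # drag distance in pixels
    frame = start_frame + int(delta / pixels_per_frame)
    return max(0, min(total_frames - 1, frame))    # clamp to the clip

# Dragging 40 px to the right from the drag's origin advances 10 frames.
print(scrub_to_frame(drag_start_x=500, drag_x=540, total_frames=120))  # prints 10
```

The controller stays tightly coupled to the content (the selection still highlights the tracked pixels) without ever covering it.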

pp. 6-7, Embodied Interactions


I was recently in Japan with my girlfriend.

While in the bathroom, I pushed a button on the wall that I thought was meant to run the faucet. My little experiment resulted in my girlfriend's head bouncing off the ceiling when the washlet (a sort of robotic bidet) decided to let loose with a volley of cold water.

The point is that embodied interactions only work if users understand the cues that tell them what the product does and how to use it. This is a perfect example of when the content (the sprayer) and the controller (the button) should have been more tightly coupled.

pp. 3-7, Types of Gestures


While the focus of the book is on gestures, it seems important to discuss them in terms of the different classes of device that enable the gestures, instead of the other way around.

Two Main Categories of Gesture Sensing Hardware:
i. Touchscreen

It seems that there is a need to differentiate between:
- 2-point devices
- “true” multitouch devices
- Devices that recognize objects/orientation

ii. Freeform

In the process of defining this space, it seems that every researcher and talking head has come up with a new set of terms to define their slightly different philosophy: tangible media, context aware computing, gestural interfaces, ubicomp, physical computing, etc.

Freeform gestures seem like a worthy attempt to bucket these "different fields" into a larger idea, but it feels like too big a catch-all, so I would propose three categories of gestural interfaces:
- Smart Objects – everything from the towel dispenser to your cellphone
- Smart Systems – everything from RFID and kiosks that store your info to the WWW
- Smart Environments – everything from military command centers to CAVEs to John Underkoffler's new projects

I know that the edges of these definitions are slippery, but it’s important to draw lines somewhere.

pp. 1-2, Multitouch Adoption is Happening at an Incredible Speed


Multitouch has moved from Hollywood fantasy
to geometric product growth in a 6 year period.


John Underkoffler's work at the MIT Media Lab actually started in the mid-90's. Fingerworks, which sold itself to Apple, was actually producing multitouch products at the time Minority Report came out, in 2002.

That said, the level of public interest and the rate of adoption have been incredible. What characteristics do gestural interfaces need in order to maintain their magic?
- Jonathan Brill

Hi all. Welcome to the Multitouch Book Club.


Thanks for coming out to comment.

A special thanks to my friends, colleagues and fellow bloggers who have helped in getting the word out and to Dan Saffer who put together this amazing tome.

It’s so rich with content that I’m going to have to divide up the first 45 pages into several posts.

Each series of postings will cover ~10 pages of the book.
To make things easy to track, I’ll post an outline, like the one below before each new series of posts.

Outline of Pages 1-11

1. Multitouch Adoption is Happening at an Incredible Speed, pp. 1-2
----a. Multitouch has moved from Hollywood fantasy to geometric
-------product growth in a 6 year period

2. Types of Gestures, pp. 3-7
----a. Two Main Categories of Gesture Sensing Hardware:
----b. Two Main Interface paradigms
-------i. Direct Manipulation
------ii. Indirect Manipulation
----c. Embodied Interactions

3. History of Gestures, pp. 7-11
----a. Started in 1982 at U of Toronto
----b. First touchscreens in restaurant order systems
-------in the late 1980s
----c. First PC HP 150
----d. There are a spate of “office of the future”
-------research projects in the early 1990’s
----e. Kiosks move into public facilities and retail
-------in the late 90’s and early 00’s
----f. VR gloves play a niche role in the gaming industry
----g. 2006, Nintendo introduces Wii
----h. 2007, Apple introduces iPhone
----i. 2008
------i. Other handset manufacturers introduce multitouch
-----ii. Jeff Han’s hardware is featured on CNN
----iii. Microsoft launches Surface

Saturday, December 13, 2008

Putting Virtual Earth to Good Use on Surface

Infusion has been doing great Surface work for a mutual client, so I sat down with their founder, Greg Brill (no relation), last week to talk multitouch. I look forward to showing those projects to you as soon as they become public.

Here's something I can show now. If there's anything that drives me more nuts than photoviewing demos on Surface, it's the ubiquitous rip-offs of Jeff Han's Worldwind demo at TED.

Infusion put themselves at risk and did one, but theirs is cool...and VERY Useful.

Here's their answer.

How might one use artifacts to magnify or modify datasets on the table?

Friday, December 12, 2008

Join Our Multitouch Book Club


Dan Saffer's Designing Gestural Interfaces: Touchscreens and Interactive Devices was recently published by O'Reilly (a PDF of the first 45 pages is at this link). I picked up my copy today and will be commenting as I read the book.

It would be great for you to Follow Along and Comment Back

You can pick up the book in atoms here or bits here

Several of the people here have signed up to comment. We look forward to hearing your thoughts as well. We'll start posting on the first 45 pages by Tuesday.

Here's the link to this post. Please, get your colleagues involved, too!

Surface Meets A Wireless Controller

Has anybody else noticed that many of the compelling tabletop computing demos use the touch surface to control a vertically mounted screen?

Here's Another:

Raptor: Building Games with a Tabletop Computer from David Smith on Vimeo.

Interesting PhD work from David Smith of Queens College.

I like:
1. That the game controller's physical presence keeps users oriented to the control clusters. I've seen a lot of behind-closed-doors stuff, but this is the first public demonstration of a smart object that works for me.

2. The 3D manipulation of the terrain. I can't really see, but the depth of the holes seems to be time-based. How could that idea be used to convey ambient information?

3. That the "Car Stamp" artifact is actually a model of a car. This may seem a bit cheesy, but it sure makes it easy to remember which tool to use.

Clarity Consulting Delivers 3 Surface Demos

Clarity Consulting of Chicago recently dropped these three demos. This is an impressive first go-around with multitouch on WPF.

I'm a bit unclear about why these apps are implemented on Surface though. They could be modified to work on any two point multitouch device.

Here are my first impressions:

3D Media Viewer

Prototype 3D Media Viewer for Surface from Clarity Consulting on Vimeo.
This is filled with interesting ideas: 3D manipulation, contextual sorting, media organization.

Here are a couple thoughts on improving it:
1. Use atmospheric distortion (warmer light in the front, cooler light in the back) to separate the foreground and background.
2. A tagging solution that makes it clear what each cluster of files is...esp. the small "file clusters" in the background.
3. A contextual markup menu that makes it simple to change search criteria.

T-Shirt Designer

Prototype T-Shirt Designer for Surface from Clarity Consulting on Vimeo.

This seems to be a rehash of IdentityMine's Snowboard Designer application. See below:

Surface Promo from IdentityMine on Vimeo.
Aside from $100k of flashy polish, three things make the snowboard app more compelling:
1. Progressive disclosure. New task options and content are introduced as they are needed.
2. Interactive elements (like the color picker) are tightly coupled to the content that they affect.
3. The way that content animates into place suggests what to do and what order to do it in.

Whiteboard


Prototype Whiteboard for Surface from Clarity Consulting on Vimeo.

The demo looks cool, and there are a number of interesting elements. I like how an untagged artifact is being used to erase and that the whiteboard is scalable.

It's unclear to me how a user would discover key features like scale and save.

It would be great to have:
1. A tool that enables users to make marks that are both more (like a pencil) and less (like a sponge) precise than their hands.

2. A tool that lets users select, cut, paste and save parts of their paintings.

Hi Clarity Consulting. Welcome to the Surface Community. We look forward to seeing more great work!

~ Your Friends at the Peanut Gallery