Thursday, January 29, 2009

Interacting With Web 3.0 Via Mobile Devices

So, I've been looking around at the world the last few months and seeing a future we discussed at the Media Lab start to emerge. Mobile video projectors are getting attached to the cloud. At the same time, video and GIS tools are creating a rich 3D dataset of the physical world.

Samsung announced a video projector/cellphone at CES. (Here's an updated video... the previous video was removed.)

MERL has been exploring how devices like this could become interactive.

Tools like Photosynth could, theoretically, create a correlated 3D map of virtual spaces.

What does it mean when you can interact with any physical surface and possibly see how the surface has been "touched" before?

Here's an inspirational video:

The Unfinished Swan - Tech Demo 9/2008 from Ian Dallas on Vimeo.

There has been a lot of talk about the semantic web/web 3.0. I think what may be missing from this conversation is that the dominant interface for web 3.0 will be mobile devices... and that it will be used to create context-aware environments.

How might this work? What might this mean?


  1. OK, that MERL project is new to me and extremely impressive!
    Although, I tend to think this would scale more easily using wearable displays (like glasses with transparent screens). Then you don't have to deal with actually projecting the image; you just look around and use head tracking along with Wii-like pointers.
    Anyway, I've been waiting for this kind of stuff since reading about the MIT wearable computing group back in the '90s. I bet the idea that we all walked around with no HUDs and only experienced digital content on small, confined screens, instead of seeing the data integrated with the real world, will seem quaint to the next generation.

  2. @ Seraph,

    There are problems with extended use of HUDs. They tend to cause a lot of eye strain. Data transfer is slow. The battery form factor is extremely limited. And users have to put them on their heads to use them, which makes a device hard to share.

    Handheld projectors embedded in cell phones intrigue me because they allow people to share information in a new way ... and keep more private interactivity on the phone.

  3. Well, I think the limitations of HUDs that you listed are just problems with the implementations to date. I could argue that handheld projectors are impractical because they are too dim, too low-res, and hard on batteries, but that will get better with time. I think HUDs are just as likely to overcome their obstacles.
    But however it's done, I think the continued blending of the digital and real worlds is going to be very cool.

  4. I think HUDs are incredibly cool as well. I want one myself, but here's why I'm not holding my breath:

    The problems with HUDs are the result of materials and biomechanics. Unless the device is going to scan directly onto your retina (a difficult and dangerous task), strong prisms are necessary to bring the content into focus. These prisms, by their nature, cause eyestrain.

    The mass that one can realistically add to a pair of eyeglasses without them becoming uncomfortable is maybe a couple of ounces.

    Typically, battery density increases by ~5-7%/year.

    This means that an untethered solution is many years out. Assuming you want to run your HUD in daylight, along with a video processor and a high-bandwidth wireless connection, for an 8-hour work day, battery size is going to be a challenge for years to come.

    As for the handheld projectors: they aren't perfect, but the brightness and power consumption are pretty good. Yes, a bigger battery would be nice... but I am more likely to put that in my pocket than on the bridge of my nose.
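
The "many years out" claim in comment 4 can be made concrete with some back-of-the-envelope arithmetic. This is my own sketch, not from the post: assuming the ~5-7%/year compounding improvement rate cited above, here's roughly how long it takes battery energy density to double.

```python
import math

# Back-of-the-envelope sketch (an assumption-laden illustration, not a
# forecast): given the ~5-7%/year battery density improvement rate
# mentioned in the comment, how many years of compounding does it take
# to reach a given multiple of today's energy density?

def years_to_multiply(target_ratio, annual_growth):
    """Years of compound growth at `annual_growth` needed to reach `target_ratio`."""
    return math.log(target_ratio) / math.log(1.0 + annual_growth)

for rate in (0.05, 0.07):
    years = years_to_multiply(2.0, rate)
    print(f"At {rate:.0%}/year, energy density doubles in ~{years:.0f} years")
```

At 5%/year a doubling takes roughly 14 years, and even at the optimistic 7%/year it still takes about 10 — which is the arithmetic behind the commenter's skepticism about an all-day, untethered HUD.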