So, I've been looking around over the last few months and watching a future we discussed at the Media Lab start to emerge. Mobile video projectors are getting connected to the cloud. At the same time, video and GIS tools are creating a rich 3D dataset of the physical world.
Samsung announced a video projector/cellphone at CES. (Here's an updated video; the previous video was removed.)
MERL has been exploring how devices like this could become interactive.
Tools like Photosynth could, theoretically, create a correlated 3D map of virtual spaces.
What does it mean when you can interact with any physical surface and possibly see how the surface has been "touched" before?
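One way to picture "seeing how a surface has been touched before" is a shared store that keys annotations by (quantized) 3D world coordinates, so anyone pointing a projector phone at the same spot sees what was left there. This is a purely hypothetical sketch; the `SurfaceMemory` class, the `GRID` cell size, and all method names are my own illustration, not any real API.

```python
from collections import defaultdict

# Hypothetical sketch: "surface memory" keyed by quantized 3D world
# coordinates. Nearby touches snap to the same grid cell, so a later
# visitor querying roughly the same spot sees the earlier annotations.

GRID = 0.05  # assumed quantization cell size, in meters


class SurfaceMemory:
    def __init__(self):
        # maps a grid cell -> ordered list of (user, note) annotations
        self._cells = defaultdict(list)

    def _key(self, x, y, z):
        # snap a world coordinate to its grid cell
        return (round(x / GRID), round(y / GRID), round(z / GRID))

    def touch(self, x, y, z, user, note):
        # record that `user` left `note` at this spot
        self._cells[self._key(x, y, z)].append((user, note))

    def history(self, x, y, z):
        # everything previously left at (roughly) this spot
        return list(self._cells[self._key(x, y, z)])


mem = SurfaceMemory()
mem.touch(1.02, 0.50, 2.31, "alice", "great coffee here")
mem.touch(1.02, 0.51, 2.30, "bob", "+1")
# a query a few centimeters off still lands in the same cell
print(mem.history(1.01, 0.49, 2.32))
```

The hard part in practice would be getting those coordinates at all, which is exactly where the correlated 3D maps from tools like Photosynth would come in.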
Here's an inspirational video:
The Unfinished Swan - Tech Demo 9/2008 from Ian Dallas on Vimeo.
There has been a lot of talk about the semantic web and web 3.0. I think what may be missing from this conversation is that the dominant interface for web 3.0 will be mobile devices, and that they will be used to create context-aware environments.
How might this work? What might this mean?