New user interface paradigm with Microsoft Surface
I was lucky enough to be able to play with Microsoft Surface and its Software Development Kit (SDK) recently and to present it at the Student Technology Day in London on the 1st of October. It is an impressive device and an even better platform, extremely simple and fun to use and just as good to develop for. If you have never seen it in action, I definitely recommend you take a look at some of these videos to get a good idea of what the device actually is.
A few words of explanation on how the hardware works (from the Surface team): “Surface uses cameras to sense objects, hand gestures and touch. This user input is then processed and displayed on the surface using rear projection. Specifically, Surface uses a rear projection system which displays an image onto the underside of a thin diffuser which is the table's surface. Infrared light is also projected onto the underside of the diffuser. Objects such as fingers are visible through the diffuser by a series of infrared-sensitive cameras positioned underneath the surface of the tabletop. An image processing system processes the camera images to detect fingers, custom tags and other objects such as paint brushes when touching the display. The objects recognized with this system are reported to applications running in the computer so that they can react to object shapes, 2D bar codes (tags), movement and touch.”
It sounds very complex, but the result is something extremely intuitive and easy to use, where the most notable thing about the interface is that... there is no interface (at least as we define it traditionally). Because the Surface has no “up” or “down” (people can sit all around it) and because it recognises touch (so there is no competing to get hold of the keyboard or mouse), the whole paradigm of how to write software for it, and interfaces in particular, is completely new.
From a software point of view, the Surface SDK builds on top of .NET and developers can use either Windows Presentation Foundation or XNA to develop software that will take advantage of the capabilities of the platform. Because of that, developers will find familiar constructs that allow them to leverage their existing knowledge to create software.
If you are familiar with WPF, for example, you will know that it provides a number of controls that developers can use in their code and bind to data sources, populating the interface with elements that users can interact with (for example a grid of customer data, a series of photos, etc.). The Surface SDK adds controls that behave in entirely new ways but that are programmed in the same way as the “traditional” WPF ones. One control I believe is going to be extremely popular is the ScatterView control and, if you are interested, I would definitely recommend you read this blog post and look at the video there.
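To give you a flavour of what that looks like, here is a minimal sketch of a ScatterView bound to a collection of photos. I am assuming the Surface SDK's XAML namespace here, and the `Photos` collection (with its `ImagePath` property) is a hypothetical view-model member, not something shipped with the SDK:

```xml
<!-- Sketch only: a SurfaceWindow hosting a ScatterView bound to data.
     The s: namespace is the Surface SDK's; Photos/ImagePath are assumed
     to come from the window's DataContext (a hypothetical view model). -->
<s:SurfaceWindow x:Class="SurfaceDemo.PhotoWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:s="http://schemas.microsoft.com/surface/2008"
    Title="Photo Scatter Demo">

    <!-- Each bound item becomes a ScatterViewItem that users can drag,
         rotate and resize with their fingers - no extra code required. -->
    <s:ScatterView ItemsSource="{Binding Photos}">
        <s:ScatterView.ItemTemplate>
            <DataTemplate>
                <Image Source="{Binding ImagePath}" Width="300" />
            </DataTemplate>
        </s:ScatterView.ItemTemplate>
    </s:ScatterView>

</s:SurfaceWindow>
```

Notice that the pattern is exactly the familiar WPF one (an `ItemsControl` with an `ItemsSource` and a `DataTemplate`); the multi-touch behaviour comes for free from the control itself.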
More details about the SDK will be released at the Microsoft Professional Developers Conference (PDC) in Los Angeles at the end of October (and no, unfortunately I will not be there), where Brad Carpenter and Robert Levy from the Surface team will talk about all things Surface and (possibly even more interesting) how the Surface SDK aligns with the multi-touch developer roadmap for Windows 7 and WPF.