Last month at TED, Gary Flake showcased “Pivot”, an incredible visualization approach to navigating large sets of data on the web. If you’ve seen the Seadragon demonstrations before, you’ll recognize this as a novel application of Seadragon. There are a number of reasons this demo grabs me – it weaves together so many things that are current for me right now: innovation through user-experience design, thinking about Cloud technology as a platform for innovation, and the Semantic Web as a trend built upon Cloud technology that is going to fundamentally alter our ability to transact on the web.

First of all, I’d simply recommend that you invest 6 minutes of your life in watching the video. What really stands out for me here is the experience of “the content is the data”, and the opportunity to directly manipulate the data as you drill into the data set. As we start looking at gestural interfaces, it’s clear that the ability to quickly move through a data set in an intuitive, engaging and revealing fashion is quite literally at our fingertips.

I think when we reach behind the glass of the user-experience, we recognize that this is a great example of one of the vectors of innovation that Kevin Lynch often talks about: “Client and Cloud”. That this rich client is able to sit atop platforms of information that exist in the cloud (such as Wikipedia) presents incredible value to such an application. However, offering this kind of navigation in the cloud requires that there be some structure and meaning to the data presented there, and some ability to infer relationships between data that exist in different clouds.

Inferring meaning about data (and particularly data in the Cloud) is a key research topic in the Semantic Web, while being able to connect and move data between clouds is an emerging challenge that presents a huge opportunity. I’d like to talk about disparate clouds, moving data across clouds, sharing data between clouds, and “Interclouds” in another entry…but for now, let’s drill a little deeper into the Semantic Web.

Inferring data about data

If you follow through some of the comments from viewers, one shortcoming that is pointed out (and this is still just a research project) is the need for all of the data to be tagged effectively, accurately, and with cross-referential tags. For the demo at TED, the data was presumably pre-organized and tagged; for this to reach the mass market, we’d need to solve that problem in a more scalable way.

If you want to read more about the Semantic Web, I’d highly recommend “Pull” by David Siegel as a clear and detailed guide to the Semantic Web and the opportunities it presents. The Semantic Web is about being able to infer meaning from content, and from that meaning enabling a fundamental transition from pushing information to pulling it.

David talks about the difference between “Content” and “Data”, and observes that the only thing that isn’t metadata is content. When we think about the content that we produce, whether that be documents, videos, photographs or manufacturing drawings, there is “Format” metadata that can be automatically attached at content creation time.

For a photograph, for instance, we see a tremendous amount of metadata attached by digital cameras through an interchangeable and agreed-upon data format, EXIF. By tagging photos at content creation time, EXIF allows us to record date and time information, camera settings such as aperture, shutter speed, focal length and ISO speed, as well as thumbnail images. Over and above EXIF, camera phones can geotag a photograph, recording its geographic position, so you can unambiguously capture where a photograph was taken.
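To make that concrete, here’s a minimal sketch of reading EXIF format metadata in Python with the Pillow library; the file name is hypothetical, and exactly which tags are present will vary by camera.

```python
# A minimal sketch: reading EXIF "format" metadata from a photo with Pillow.
# "shoot.jpg" is a hypothetical file; the tags present vary by camera.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("shoot.jpg")
exif = img.getexif()

# The top-level IFD carries tags like Make, Model and DateTime.
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)

# Camera-setting tags (FNumber, ExposureTime, FocalLength, ISOSpeedRatings)
# live in the Exif sub-IFD, reached through pointer tag 0x8769.
for tag_id, value in exif.get_ifd(0x8769).items():
    print(TAGS.get(tag_id, tag_id), value)
```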

The bigger challenge facing the semantic web is what David calls “Content Metadata”, which by definition is about the meaning of the photograph. You may have taken the photograph in Paris on a Nikon camera on a Thursday afternoon; but how do you record who is in the photograph, that it’s a photograph of Notre Dame, that it’s a fashion shoot for a particular magazine, the names of the models in the shoot and the particular clothes they are wearing? That’s a much richer tapestry of data about the content, and it enables the more powerful searching and exploration that Gary’s Pivot demo shows.
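As an illustration, here’s a hedged sketch of what that content metadata might look like as machine-readable triples, using Python’s rdflib with the schema.org and FOAF vocabularies; every URI, name and value here is invented for the example.

```python
# An illustrative sketch of "content metadata" as RDF triples with rdflib.
# All URIs, names and values are hypothetical; the point is that meaning
# ("who", "where", "for whom") becomes data rather than pixels.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

SCHEMA = Namespace("https://schema.org/")
g = Graph()
g.bind("schema", SCHEMA)
g.bind("foaf", FOAF)

photo = URIRef("http://example.org/photos/notre-dame-shoot")
model = URIRef("http://example.org/people/jane-doe")

g.add((photo, RDF.type, SCHEMA.Photograph))
g.add((photo, SCHEMA.contentLocation, Literal("Notre Dame, Paris")))
g.add((photo, SCHEMA.sponsor, Literal("Example Fashion Magazine")))
g.add((photo, FOAF.depicts, model))
g.add((model, RDF.type, FOAF.Person))
g.add((model, FOAF.name, Literal("Jane Doe")))

print(g.serialize(format="turtle"))
```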

We see some attempts at post-processing content to extract content metadata; for instance, the face recognition work that startups like Face.com are undertaking.

However, I think if we look at how the problem of format metadata was solved, we have a clue to how we can also address the problem of content metadata. At content production time, the metadata likely already exists in some other format, in some other tool. The fashion photographer has likely booked their models, booked their wardrobe, and described where and when the shoot is taking place and who it is for. They may have sourced the clothing from a particular retailer, and the retail information for that clothing also exists, including colors, materials and perhaps even inventory. If we think about how we can capture that metadata in the tools used at the earliest phases of content creation, and about the ways in which that metadata can be preserved throughout the creation and production process, then the final content can collect metadata at each stage of the chain.
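A toy sketch of that idea in Python: each stage of a hypothetical production chain adds what it knows to metadata that travels with the asset, instead of discarding it. The stage names and fields are invented for illustration.

```python
# A toy sketch of metadata traveling with an asset through a hypothetical
# production chain. Stage names and fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Asset:
    content: bytes
    metadata: dict = field(default_factory=dict)

def booking_stage(asset: Asset) -> Asset:
    # Captured when the shoot is planned: models and client.
    asset.metadata.update({"models": ["Jane Doe"], "client": "Example Magazine"})
    return asset

def shoot_stage(asset: Asset) -> Asset:
    # Captured at the camera: location and equipment.
    asset.metadata.update({"location": "Notre Dame, Paris", "camera": "Nikon"})
    return asset

def retouch_stage(asset: Asset) -> Asset:
    # The key discipline: transforming the content must not strip metadata.
    asset.content = asset.content  # stand-in for real image processing
    return asset

asset = Asset(content=b"raw image bytes")
for stage in (booking_stage, shoot_stage, retouch_stage):
    asset = stage(asset)

print(asset.metadata)  # metadata accumulated across the whole chain
```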

When that data appears in the cloud, it can appear with meaning.

I’ve talked about photographs, but this applies to all other opportunities around content creation. Videos start with storyboards, describing actors, locations, camera angles, lighting and shots. Manufacturing begins in CAD tools where materials, colors, weights, etc., are all recorded … but that information is often discarded or lost as a 3D model of a shoe becomes a flat comp in Photoshop, which becomes a JPG on the web.

I believe there’s an enabling technique for the semantic web here, which is working our way from back to front, all the way to the beginning of the content creation process.

Summary

Gary’s demonstration of Pivot is a fantastic user-experience: direct manipulation of the content presented as the data, and the ability to move seamlessly from trends and representations all the way to the very content itself.

However, a powerful experience on the glass depends upon power behind the glass; the Cloud and the Semantic Web are innovations as fundamental as those we are seeing in user-experience design delivered on the Web.

To bring this back to Kevin’s Vectors of Innovation, it’s the relationship between Client and Cloud. What’s exciting to me is what emerges when we innovate on both sides of the glass.