Corn Flakes, Cuisine, and Why You Need to Care About Web 3.0
Sometimes online culture really makes me wonder. This past week the Twittering world was up in arms about a minor change to how the tool handles replies. I’m not going to go into details; the info is all here, and here, and here. Feel free to wear yourself out with it.
What’s most amazing to me is that while everyone was busy complaining about Twitter, two incredible things happened that almost nobody mentioned.
First, the long-anticipated Wolfram|Alpha semantic search tool was released. Wolfram|Alpha is the vision of scientist Stephen Wolfram. His idea was to “make all systematic knowledge immediately computable and accessible to everyone.” In plain English, what he’s trying to do is link all the data in the world together, creating a system you can ask nearly any question. The system is designed to work more the way humans think – by creating relationships between seemingly disparate data. For example, let’s say you want to know the nutritional value of your breakfast. Currently, there are tools out there that will help you calculate that. Google will do it. But the problem is, you can’t just ask Google, “How many calories are in my bowl of corn flakes and glass of orange juice?” If you don’t believe me, go try it. You’ll get results for orange juice, mostly. You may get some results for corn flakes too. But you won’t get a definitive answer without a lot of clicking and calculating.
Wolfram|Alpha hopes to change all that. If you go there and type in “calories in 1 bowl of corn flakes + a glass of OJ”, not only will you get the total number of calories, but you’ll get a nutritional breakdown of protein, carbs, fat, and vitamins of the entire meal.
This changes everything – and if you don’t agree with me, read on.
The second amazing thing that happened this week, while everyone was otherwise distracted with their twittering, was that Google also announced its first step towards the Semantic Web. Yes, Google. Called Rich Snippets, the concept is to enhance search results with linked data. Currently, when you look up a new restaurant in town, you might get a web site link, or a Google map and phone number. If you do a second search for reviews, you may get some links to reviews. You’ll have to seek out their menu online to find out if the prices are affordable. You can go to a specific site such as Restaurantica and find even more information. So now you’re 4 or 5 searches in just to figure out if you might want to eat there.
The plan with Rich Snippets is to allow for all of this information to appear in one search result. You simply search for the restaurant’s name and the results will include reviews, price points, locations and perhaps even a direct link to make a reservation. Linked data will make it all possible.
But…but I went and tried it and it doesn’t work very well. This technology sucks!
You know what? You’re right. It’s far, far, FAR from perfect. Half the time you go on Wolfram|Alpha, you’ll get an error message saying the system is over capacity (though the W|A scientists do have a sense of humour – the error message is reminiscent of HAL from “2001: A Space Odyssey”). Google’s Rich Snippets technology relies on web designers embedding special markup within their content so that Google’s linking algorithms can grab the data. That means gaining buy-in and changing the behaviour of the development community – no easy feat.
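To make that concrete, here is a minimal sketch of what such markup can look like, using the hReview microformat (one of the formats Rich Snippets recognizes). The restaurant, reviewer, and rating below are all invented for illustration:

```html
<!-- hReview microformat: plain HTML class names carry the meaning.
     All names and values here are invented for illustration. -->
<div class="hreview">
  <span class="item"><span class="fn">Luigi's Trattoria</span></span>
  Rating: <span class="rating">4.5</span> out of 5,
  reviewed by <span class="reviewer vcard"><span class="fn">Jane Doe</span></span>.
  <span class="description">Great pasta at reasonable prices.</span>
</div>
```

To a browser this renders as ordinary text; the extra class names are what let a crawler pull out the item name, rating, and reviewer as structured data.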
But…but I don’t need to worry about this stuff yet. Besides….Twitter!
Is the technology a long way out? It’s not as far away as you think. Remember way back when online video was a challenge to deal with? Nobody wanted to use it because it sucked bandwidth, was choppy and low quality at best, and crashed more browsers than it displayed in. That was waaaay back in 2003. YouTube was launched in 2005. That was only 4 years ago, for anyone who may have lost count. Now, how many online videos have you watched so far today? My point exactly.
The point is, Web 3.0 is here. Yes, yes…social media is glorious, fabulous, and has fundamentally changed the way we socialize, communicate, do business and interact.
But, the Semantic Web is going to be an even bigger shift. If you are even remotely involved or interested in technology, and especially if your business relies on it, do your homework and learn about this stuff NOW. Because in 2 years or less, online communication will have reached critical mass and will be as ubiquitous as the telephone. The world will have moved on from Twitter and Facebook.
The next big thing is not social media. Want to be the next thought leader? Then spend your time thinking about this:
The next big thing is information and how we use it.
That is all.
I’ve heard these arguments for years, Susan, and as valid as they seem, nothing’s going to happen–from the perspective of a critical mass shift–for YEARS. There’s too much to do.
You talk about versions. I’m thinking in singularities: there is one web, version it all you want, but one web. Your blog post is information, this comment is information; how will John the Plumber who’s never touched a computer or cellphone in his life use what we’re creating?
Oh, so I just ran a mathematical search in that Wolfram engine. I don’t know what I typed in, other than a random series of letters and symbols. See the result: http://www38.wolframalpha.com/input/?i=pi%2F(sin(x^3)%2Fy)%2B16%25
Ari Herzog’s last blog post: 2 Years, 1 Month, and 10 Days Later
Great post, and I definitely agree anyone looking for the future of the web should be looking at the semantic web.
You can tell just by the major complaints about the web right now: “there’s too much noise,” “I don’t have time to sort through it all…” etc.
And thanks for sharing the Wolfram Alpha tool… very fascinating!
Kelly Rusk’s last blog post: Next Social Media Breakfast May 6 with Remarkk’s Mark Kuznicki
Sue, you’re not the only one pointing people to try new tools to drive innovation in the direction of the semantic. Ari Herzog mentions Joe the plumber. How about Jonas the Chef?
In his latest blog post (http://d8c.org/2009/05/18/nerdy-chef/), which was pointed out to me by the Exec. Chef/Owner of Epicuria, Jonas points to Wolfram Alpha as one of several new tools that people in his industry need to start paying attention to. Wolfram Alpha provides more “value” than traditional tools like Google. It readily gives information that chefs can “really use.”
This more than likely stems from Wolfram Alpha’s strengths: 1) careful input of selective information from reputable sources, and 2) a natural language interface. The tool thus carries automatic legitimacy and authority. Not only can it reason, which is already novel, but it reasons with information that was already selected to be legitimate.
And yes, this is also Wolfram Alpha’s weakness, because the information base upon which the tool reasons needs to be maintained. That will be labour intensive. Semantic technologies aim to add metadata to existing user-generated information so semantic-aware applications can reason with it. Wolfram Alpha is essentially a proof of concept, employing a closed information base.
I feel Jonas is encouraging chefs to think outside the box – perhaps think at the molecular level and try avant-garde techniques using such things as cryo and immersion circulators.
In the same vein, everyone should try new things, especially with respect to the world wide web. There is much to do to make the semantic web a reality, but the impetus must be put to the “nerds” to move in that direction.
I want more leverage-able information with my first query. I want to be able to converse with the web, be it on a smart phone or computer, with conversational prose. I want to be able to make use of crowd sourcing beyond summarizing dozens of blog entries, tweets, or conversations in an online forum. I want all information on the World Wide Web to be more accessible.
I think Joe the plumber would appreciate that too. This is especially true once he becomes aware of services like AskAroundOttawa (http://www.askaround.ca) that recommend him, by name, to potential clients because he does good work.
This is an incredible post, Sue. Semantic web has been this little nagging topic in my mind because I’ve started hearing about it so much lately. It’s been building in presence, but social media is still getting almost all the attention right now.
The power and potential of the semantic web should be making everyone feel a little uncomfortable, especially if they’re not paying attention to it. Even early adopters need to start thinking a hundred miles ahead to be ready to leverage it.
You might enjoy a post I wrote about information aggregation. It’s in line with your last comment about the next big thing being how we use information. I agree!!
In lieu of a trackback:
“How Can Big Media Get Back in the Game? The Big Bang Business Model”
Michelle Tripp’s last blog post: Old Media Falling Into The “Digeration Gap”
Tim Berners-Lee has been trying to get the semantic web going for years, but there’s a problem. You mentioned it, though I don’t think you stressed how big it is. There’s a real chicken-and-egg problem here:
– why mark up your site using RDF (I assume Google’s tech uses RDF – the XML format developed by TBL’s W3C Semantic Web group) when there are no apps to use it;
– why develop apps when there’s no data?
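For the curious, marking up a page this way might look something like the following RDFa sketch, using the data-vocabulary.org terms Google announced alongside Rich Snippets (the restaurant and values are invented):

```html
<!-- RDFa: attributes map the visible text onto RDF properties.
     Vocabulary is Google's data-vocabulary.org; values are invented. -->
<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Review">
  <span property="v:itemreviewed">Luigi's Trattoria</span>,
  rated <span property="v:rating">4.5</span>
  by <span property="v:reviewer">Jane Doe</span>
</div>
```

Until crawlers actually consume these attributes, the extra markup does nothing for the publisher, which is exactly the deadlock.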
Another problem was best illustrated when I saw TBL trying to convince people to prime the pump and break this deadlock a few years back: while online publishing was easy to grok (“it’s like paper publishing, but online”), the SW is hard to explain. The previous metaphors just don’t serve.
The potential goes way beyond Wolfram, particularly when coupled to ambient intelligence and natural language processing. Can you imagine being able to get any answer to any question about anything, anywhere, at any time?
PS Some useful links on SW: http://delicious.com/mathew/semantic
Great post, and I agree with you that a shift is under way. But it’s interesting that you picked out two technologies that have a similar output but a very different technical foundation. Wolfram|Alpha uses all sorts of cool proprietary techniques to crunch and calculate data. Rich snippets require that publishers mark up their data. So, when the former doesn’t work well, it’s an algorithmic problem. When the latter doesn’t work well, it’s a data problem. Hmmm. *thinking*
Great post Sue, and you are spot on.
We at the OpenCalais Initiative are seeing a steady uptick in the pace of adoption of our free metatagging service — and not just by the bleeding edge.
More and more mainstream publishers are preparing their content for interoperability and tapping the Linked Data cloud — and not just for some future-state vision of a “web to come.”
Rather, it’s because they seek immediate-term productivity gains from automating metatagging, grasp the value of enhancing their content with linked data assets for free, and want to be able to automatically create topic hubs and microsites for improved SEO and reader engagement.