A few items I found this week on the noisy Internet started me thinking a bit about our industry, and why content management is actually interesting. No, really, stay with me on this one…
When I was a kid, I always figured that one day there would be a super awesome computer that I could ask to do anything, and it would instantly comply. Factual things, like “How far is it from here to the moon?”; tasks, like “Design me my own awesome personal car based on things I like” or “Do my homework”; and other, more esoteric things, like “Does that girl in Math class who keeps poking me with her compass really like me?”
Now that I’m something of a grownup, we’re a lot closer to that kind of a solution actually existing. In fact, the Internet could potentially do pretty well at all of these questions. And yet, things aren’t exactly as I imagined them.
For starters, when I imagined my computer, I figured it would, you know, compute things. Like, it would somehow measure the distance between here and the moon. And it would build me blueprints for my new car by crunching complex algorithms and doing lots of, you know, computery figuring out of stuff. The magic of the Internet as we know it isn’t really powered by computation at all. It’s people-powered.
At its heart, the most useful things on the Internet are all just content management. They’re reordering, re-indexing and re-presenting existing information. For all its almost limitless usefulness, Google itself doesn’t “know” much of anything. The only thing Google actually computes is how to relate existing content in the form of webpages, and how to present it at the appropriate time. (That’s no easy task either, by the sounds of it. Just look at the amount of power they consume to bring us those results.)
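To make the “re-indexing, not computing” point concrete, here’s a toy sketch of an inverted index — the basic structure behind text search. This is purely illustrative (the pages and queries are made up, and it bears no resemblance to how Google actually works at scale); it just shows that “finding” an answer is retrieval of existing content, not computation of new knowledge.

```python
from collections import defaultdict

# A tiny, made-up corpus of "webpages".
pages = {
    "moon-facts": "the moon is about 384,400 km from earth",
    "car-design": "design your own car with parts you like",
}

# Build the inverted index: each word maps to the set of pages containing it.
index = defaultdict(set)
for name, text in pages.items():
    for word in text.split():
        index[word].add(name)

def search(query):
    """Return the pages that contain every word in the query."""
    hits = [index[word] for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("moon"))        # pages mentioning "moon"
print(search("design car"))  # pages mentioning both words
```

All the “knowledge” lives in the content; the computer’s job is only to relate it and serve it up on cue.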
Which is why I was particularly intrigued to hear of Stephen Wolfram’s new project, Wolfram Alpha. It looks like this is an effort to produce a computation-based system, more in line with the computer of my juvenile fantasy. Unlike Google, Wolfram Alpha plans to actually compute answers, not just find them. It’s taking a seriously hardcore computer science approach to the problem of knowledge. I would dearly love to see this thing succeed, but I suspect that it won’t live up to pre-launch expectations.
At the other end of town, the other thing that got me thinking was Amy’s new project for South by Southwest (SXSW), the Pepsicozeitgeist. It’s a near-realtime, Twitter-powered look at interactions between people at the conference. (It’s largely inspired by her original Twistori site, which does much the same thing for Twitter at large.) As well as being a fascinating time-waster, it’s a classic case of remixing people and their content, slicing and dicing information from the Twitter data cloud. The computation going on in these solutions is all about looking for patterns in the content: finding the best ways to relate things, divining commonality and relationships, and counting results.
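The “counting results” part really is that simple. Here’s a hand-rolled sketch — not the code behind Twistori or the Pepsicozeitgeist, and the tweets are invented — showing how a pattern can “emerge” from a stream just by counting which words show up together:

```python
from collections import Counter
from itertools import combinations

# A made-up stream of conference tweets.
tweets = [
    "loving the sxsw panels today",
    "sxsw coffee lines are epic",
    "great panels and great coffee at sxsw",
]

# Count every pair of words that co-occurs within a tweet.
pair_counts = Counter()
for tweet in tweets:
    words = sorted(set(tweet.split()))
    pair_counts.update(combinations(words, 2))

# The most frequent pairings bubble up as the "zeitgeist".
print(pair_counts.most_common(3))
```

No deep computation, no understanding — just tallying relationships across a pile of people-generated content.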
In fact, when I look at all the amazing products and innovations that have arrived since web 2.0’clock, they all share a common element: reasonably lightweight, simple computations quickly processed against a collectively valuable, up-to-date content repository. That’s what lets you find things, learn new things, and make new associations. And underneath it all is Content Management. Regardless of whatever it used to be about, it is now about preserving and maintaining these data repositories, and keeping them current, available, and interoperable so that value can be derived from them. Which is actually pretty interesting and important.
These new libraries of data we are building for enterprises need to be constructed more like Twitter, and less like virtual paper archives, if they are going to be useful to us in these ways.
At least, until some giant HAL 9000-type computer can do all our thinking for us. Meanwhile, I guess we should all just keep typing.