Since my presentation with Kingsley Martin at TECHSHOW 2010, I have been giving more and more thought to the evolution and future of KM. We have previously posted on 3 Geeks about the impact of AFAs on KM, but I think a bigger undercurrent will drive the next generation of KM. That undercurrent comes in the shape of massive amounts of data.
In May of this year, predictions held that the world would soon pass the zettabyte threshold for the amount of data in existence (think 1 billion terabytes). As the e-discovery world has already learned, humans are unable to deal with large quantities of data in any meaningful way. The obvious answer to this problem has been technology. But so far that technology has focused on search and collaboration (Web 2.0 included). Most KM systems are focused on better-faster-cheaper search and retrieval. This approach will suffice to a point, but crossing the zettabyte threshold may be the metaphorical line in the sand where search falls apart.
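For the numerically inclined, the zettabyte-to-terabyte conversion is easy to sanity-check with a couple of lines of Python (decimal SI prefixes assumed: 1 TB = 10^12 bytes, 1 ZB = 10^21 bytes):

```python
# Back-of-the-envelope check using decimal SI prefixes.
terabyte = 10**12   # bytes in one terabyte
zettabyte = 10**21  # bytes in one zettabyte

print(zettabyte // terabyte)  # → 1000000000, i.e. one billion terabytes
```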
What is needed now is technology that analyzes our data/information/knowledge for us. Our reservoir of knowledge has become like the oceans: we cannot grasp their vastness and nature while standing on the shore. The resulting inability to ask the right questions limits the advance of our knowledge.
So now we need our technology to take on this new role. A conversation I had with a data miner a few years back illustrates the point. He was analyzing 911 call data in three dimensions (whatever that is) and was able to determine where and when police should be deployed based on that analysis. 911 call centers were never designed with this question in mind; the data revealed the question only as a result of the analysis.
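The kernel of that anecdote can be sketched in a few lines. The records below are entirely made up, and real deployment analysis uses far richer spatiotemporal methods, but even naive aggregation over (hour, district) buckets can surface a pattern nobody thought to ask about:

```python
from collections import Counter

# Hypothetical, invented 911 call records: (hour_of_day, district).
calls = [
    (23, "5th Ward"), (23, "5th Ward"), (2, "5th Ward"),
    (23, "5th Ward"), (9, "Downtown"), (14, "Downtown"),
    (23, "5th Ward"), (2, "5th Ward"), (23, "Riverside"),
]

# Count calls per (hour, district) bucket and rank the hotspots.
hotspots = Counter(calls).most_common(3)
for (hour, district), n in hotspots:
    print(f"{district} at {hour:02d}:00 -> {n} calls")
```

On this toy data the top bucket is the 5th Ward at 23:00 — a "deploy more officers there, then" answer that the call-center data was never collected to provide.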
KM 3.0, as a set of analysis systems, will be the kind of tool best suited to address this challenge of oceanic volumes of knowledge. I have seen a few emerging tools heading in this direction (e.g. Lexis for MS Office), but I will be keeping my eye on the horizon for more examples to appear.