Image [cc] travelin’ librarian
I’m going to paraphrase an Archimedes quote: “Give me a big enough database, and I’ll forecast the world.” In a way, that is the ultimate goal for those of us in the Knowledge, Information, and Intelligence profession. We have a deep craving and expectation that if we just had all available information in a database, combined with acquired knowledge of how to interpret that information, plus an advanced algorithm to pull it all together, we could predict what is going to happen next. Those of us in the legal sphere think of it in smaller chunks: “Who do we need to hire?” or “What can we forecast as legal exposure for our clients?” However, we are not the only ones who believe that if we could just compile all of the data, we could actually predict the future.
According to an article on the BBC News website, that idea may not be as unattainable as you might think. The US government, through its Open Source Centre, combined with BBC Monitoring and numerous online news resources, may be blazing a trail toward predicting the future through data analysis. Although it is much more complicated than this, here is the basic formula they are feeding into a supercomputer:
Supercomputer
x 100 million articles
x automated sentiment
x geocoding
= Predicting Trouble
The supercomputer (Nautilus, at the University of Tennessee) processed all of these articles (legally obtained through agreements, or available on the open web), added in a factor of “sentiment” (positive or negative tone), put a geographic location on each one, and voilà… trouble can be predicted.
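To make that “formula” a little more concrete, here is a minimal sketch in Python of how the pieces might fit together: score each article for tone, attach a location, and flag places where coverage turns sharply negative. Everything in it is an assumption for illustration only — the tiny sentiment lexicon, the hard-coded geocoding table, and the sample headlines are made up, and this is nothing like the actual Nautilus pipeline.

```python
from collections import defaultdict

# Toy sentiment lexicon (illustrative only) -- a real system would use a
# trained model or a far larger word list.
SENTIMENT_LEXICON = {
    "unrest": -2, "protest": -1, "violence": -3, "crisis": -2,
    "agreement": 2, "growth": 1, "peace": 3, "celebration": 2,
}

# Hypothetical geocoding table: place name -> (latitude, longitude).
# A production pipeline would call a geocoding service instead.
GEOCODES = {
    "cairo": (30.04, 31.24),
    "tripoli": (32.89, 13.19),
    "london": (51.51, -0.13),
}


def sentiment_score(text):
    """Average lexicon score of the words in an article (0 if none match)."""
    words = text.lower().split()
    hits = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0


def geocode(text):
    """Return the first known place name mentioned in the text, if any."""
    for place, coords in GEOCODES.items():
        if place in text.lower():
            return place, coords
    return None, None


def flag_trouble(articles, threshold=-1.0):
    """Aggregate sentiment by place and flag places at or below the threshold."""
    scores_by_place = defaultdict(list)
    for text in articles:
        place, _coords = geocode(text)
        if place is None:
            continue
        scores_by_place[place].append(sentiment_score(text))
    return {
        place: sum(scores) / len(scores)
        for place, scores in scores_by_place.items()
        if sum(scores) / len(scores) <= threshold
    }


if __name__ == "__main__":
    sample = [
        "Protest and unrest reported in Cairo amid deepening crisis",
        "Violence flares again near Tripoli",
        "London celebrates growth after trade agreement",
    ]
    print(flag_trouble(sample))
    # Flags Cairo and Tripoli as trending negative; London stays off the list.
```

Obviously this is orders of magnitude simpler than what Nautilus does, but the shape is the same: score the tone, pin it to a map, and watch for locations whose news coverage goes dark.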
Now, at this point, I’ve got two competing ideas jumping around in my head.
- Is this simply a “Monday Morning Quarterback” issue?
- Could this be a huge deal and produce the next “WestlawNext 2.0”?