This article in Slate shows how one author’s prediction of information overload, driven by the internet’s pervasiveness, has come to fruition a decade later.
On the surface, that might not strike you as a blockbuster idea, but consider the problems he predicted: many have not only come true, they remain central problems in corporate IT today.
The writer, David Shenk, reflects on “Data Smog,” the book he wrote back in 1997. In it, he contended that “Attention gets diverted… conversations and trains-of-thought interrupted; skepticism short-circuited; stillness and silence all but eliminated. Probably the greatest overall threat is that so many potentially meaningful experiences can easily be supplanted by merely thrilling experiences.”
Out in the consumer space (and now, in the enterprise space), one company in Mountain View, Calif., has been doing pretty well. As Shenk writes in his article, “The smartest, grandest filter of them all, of course, is Google. Like many people a decade ago, I was utterly blind to the possibilities of a search engine that could get smarter every hour by tracking not only our questions but also where we went to find answers. Google cuts through the dreck in astonishing ways.”
The reason Google has fared well (aside from its obvious innovations in search) is that it empowers users to find what they want, whereas other workplace tools, including old and clunky BI and KM systems, seem to empower the business first and the user second.
A few months ago, at the Enterprise 2.0 conference, I sat in on a panel session hosted by Web 2.0 expert Stowe Boyd, who talked about how the edge dissolves the center. (His blog, by the way, is really worth checking out).
Are you approaching your massive amounts of data by starting with your users?
Here are some numbers to chew on as you go about dealing with this explosion in your business. As this IDC/EMC survey showed, “In 2006, the amount of digital information created, captured, and replicated was 1,288 × 10^18 bits. In computer parlance, that’s 161 exabytes or 161 billion gigabytes. This is about 3 million times the information in all the books ever written.”
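The survey’s arithmetic checks out; here’s a quick back-of-the-envelope conversion (using decimal units, i.e. 10^18 bytes per exabyte, which is the convention the survey figures imply):

```python
# Back-of-the-envelope check of the IDC/EMC figures quoted above.
bits = 1.288e21            # 1,288 x 10^18 bits of digital data in 2006
total_bytes = bits / 8     # 8 bits per byte

exabytes = total_bytes / 1e18   # 1 exabyte = 10^18 bytes (decimal)
gigabytes = total_bytes / 1e9   # 1 gigabyte = 10^9 bytes (decimal)

print(f"{exabytes:.0f} exabytes")        # 161 exabytes
print(f"{gigabytes:.2e} gigabytes")      # 1.61e+11, i.e. 161 billion GB
```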