What keeps IT guys up at night? All that bad data their bosses are using to run the business.
In case you missed it the other day, a hacker group briefly took control of the AP Newswire service's Twitter account and sent out a message that the White House had been attacked. In the two minutes it took to expose the message as a fake, Wall Street took a 145-point plunge. While the market quickly recovered, the Twitter hack highlighted an important issue: if a mere 140 characters of momentary misinformation could lead people to make really expensive decisions, imagine what all that undiscovered bad data is costing.
Apparently the folks in IT departments are doing just that.
A newly published survey from the Aberdeen Group reports that IT professionals' number-one concern for 2013 is that "too many business decisions are based on inaccurate / incomplete data."
It turns out that in this age of social media, big data, and the Internet of Everything, IT guys are being forced to store ever-bigger volumes of stuff with no practical way to check it for accuracy in any reasonable timeframe. Meanwhile, people throughout the company are increasingly performing real-time analysis, building data visualizations, and running self-serve intelligence tools against that data to make what could be very misinformed (or worse yet, disastrous) business decisions.
That’s an interesting and often overlooked point for those of us who don’t dwell among the racks in the server room.
In a world of increasingly easy-to-use and powerful business intelligence (BI) and big data analysis tools, more and more of us are attempting to glean fresh insight via our own data visualizations and analysis without giving much consideration to the accuracy of the underlying information. Maybe if we, like the IT pros, were aware of how many of our spreadsheets had errors, how often data entry clerks misspell a city's name, or that not everything on Twitter is real, we might have a little less trust in all these software tools that magically "turn data into insight."
One other notable result from the Aberdeen survey was the percentage of IT people who said that data isolation was becoming a real problem. 31% of the respondents reported “data is too fragmented / siloed to develop a clear picture of the business.” By contrast, a mere 6% said that was an issue for them last year.
It seems all those data silos businesses spent so much time and money knocking down over the past decade are being replaced by a new generation of super-sized silos (under the guise of highly specialized "big data" and analytics tools), which are, in turn, exacerbating the data quality headaches.
You can't envy the situation IT pros are facing. Data volumes are accelerating, and it's doubtful people will abandon the immediacy of self-serve tools, so it's highly unlikely anyone will ever get the upper hand on the totality of their data quality and siloing issues. The best they can hope for is to break these problems into manageable chunks and tackle them individually. As it applies to cross-functional / departmental business intelligence and analytics, the best approach is undoubtedly three-fold (sketched in the example after this list):
- Identify the data subsets actually required for analysis and pull them from their individual silos into a central repository
- Perform data quality analysis (standardization, error correction, data enrichment, etc.) on that sub-setted data
- Use the processed data for analysis, reporting and decision-making
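To make those three steps concrete, here is a minimal sketch in Python using pandas. The file names, columns, key fields, and city-name corrections are purely illustrative assumptions, not a prescription; a real pipeline would pull from the actual silos (databases, APIs, exports) and apply far more thorough quality rules.

```python
# Minimal sketch of the three steps, assuming two hypothetical silo exports
# (crm_export.csv and billing_export.csv) that share a customer_id key.
import pandas as pd

# Step 1: pull only the columns the analysis needs from each silo into one place
crm = pd.read_csv("crm_export.csv", usecols=["customer_id", "city", "segment"])
billing = pd.read_csv("billing_export.csv", usecols=["customer_id", "annual_spend"])
central = crm.merge(billing, on="customer_id", how="inner")

# Step 2: basic data quality work on the consolidated subset
central["city"] = central["city"].str.strip().str.title()        # standardize case and whitespace
central["city"] = central["city"].replace(
    {"Pittsburg": "Pittsburgh", "Cincinatti": "Cincinnati"}       # correct known misspellings
)
central = central.drop_duplicates(subset="customer_id")          # remove duplicate records
central["annual_spend"] = pd.to_numeric(central["annual_spend"], errors="coerce")
central = central.dropna(subset=["annual_spend"])                 # drop rows with unusable values

# Step 3: hand the cleaned subset to reporting and analysis
spend_by_city = central.groupby("city")["annual_spend"].sum().sort_values(ascending=False)
print(spend_by_city.head(10))
```

The point isn't the specific tool; it's that the cleanup happens once, on a defined subset, before anyone downstream starts charting and deciding.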
By establishing these processes, IT departments can cut the business data deluge down to a manageable flow of good information, ensuring that the decision-makers downstream aren't using two wrongs to make a bad insight.