In my last post, I discussed some of the key points in the 5th annual Digital Universe study from IDC, released by EMC in June. Here, I consider a few more: some of the implications of changes in sourcing for security and privacy, the importance of considering transient data, where volumes are several orders of magnitude higher, and a gentle reminder that bigger is not necessarily the nub of the problem.
Let’s start with transient data. IDC notes that “a gigabyte of stored content can generate a petabyte or more of transient data that we typically don’t store (e.g., digital TV signals we watch but don’t record, voice calls that are made digital in the network backbone for the duration of a call)”. Now, as an old data warehousing geek, that type of statement rings alarm bells: what if we miss some business value in the data that we never stored? How can we ever recheck, at a future date, the results of an analysis we made in real time? We regularly encountered this problem with DW implementations that focused on aggregated data, often because of the cost of storing the detailed data. Over the years, decreasing storage costs meant that more warehouses moved to storing the detailed data. But now, it seems, we are facing the problem again. However, from a gigabyte to a petabyte is a factor of a million! And, as the study points out, the “growth of the [permanent] digital universe continues to outpace the growth of storage capacity”. So, this is probably a bridge too far for hardware evolution.
The implication (for me) is that our old paradigm about the need to keep raw, detailed data needs to be reconsidered, at least for certain types of data. This leads to the point about “big data” and whether the issue is really about size at all. The focus on size, which is the sound bite for this study and most of the talk about big data, distracts us from the reality that this expanding universe contains some very different types of data from traditional business data and comes from a very different class of sources. Simplistically, we can see two very different types of big data: (1) human-generated content, such as voice and video, and (2) machine metric data, such as website server logs and RFID sensor event data. Both types are clearly big in volume, but in terms of structure, information value per gigabyte, retention needs and more, they are very different beasts. It is also interesting to note that some vendors are beginning to specialize. Infobright, for example, is focusing on what they call “machine-generated data”, a class of big data that is particularly suited to their technical strengths.
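To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. The record layouts, field names and sample values are my own invention for this post (nothing here is drawn from the study or from any particular product); the point is simply the difference in shape: machine metric data arrives as a torrent of tiny, rigidly structured records, while human-generated content comes as a smaller number of large, loosely structured items.

```python
# Illustrative sketch only: invented field names and sample values,
# contrasting the two broad classes of "big data" discussed above.
from dataclasses import dataclass

@dataclass
class MachineEvent:
    """Machine-generated metric data: small, rigidly structured, huge volumes.
    Little value per record; most of the value is in aggregate patterns."""
    timestamp: str
    source: str    # e.g. a web server or RFID reader identifier
    metric: str    # e.g. "http_status", "tag_read"
    value: str

@dataclass
class HumanContent:
    """Human-generated content: large, loosely structured blobs (voice, video, text).
    Much of the value sits inside each individual item."""
    author: str
    media_type: str     # e.g. "video", "audio", "post"
    payload_bytes: int
    description: str

# A day of server logs might mean millions of tiny MachineEvent records,
# while the same few gigabytes of HumanContent might be a handful of video files.
events = [MachineEvent("2011-07-01T12:00:00Z", "web-01", "http_status", "200")]
content = [HumanContent("user42", "video", 250_000_000, "customer product review clip")]
```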
Finally, a quick comment on security and privacy. The study identifies the issues: “Less than a third of the information in the digital universe can be said to have at least minimal security or protection; only about half the information that should be protected is protected.” Given how much information consumers are willing to post on social networking sites or share with businesses in order to get a 1% discount, this is a significant issue that proponents of big data and data warehousing projects must address. As we bring data from social networking sources into our internal information-based decision-making systems, we will increasingly expose our business to possible charges of misusing information, exposing personal information, and so on.
There are many more thought-provoking observations in the Digital Universe study. It is well worth a read for anybody considering integrating data warehousing and big data.