As an old proponent of the Enterprise Data Warehouse or EDW (well, let me stick my neck out and claim to be its first proponent, although I labeled it the BDW – Business Data Warehouse, or Barry Devlin’s Warehouse!), I’ve had many debates over the years about the relative merits of consolidating and reconciling data in an EDW for subsequent querying vs. sending the query to a disparate set of data sources. Unlike some traditionalists, I concluded as far back as 2002 that there existed good use cases for both approaches. I still stick with that belief. So, the current excitement and name-space explosion about the topic leaves me a touch bemused.
But I found myself more confused than bemused when I read Stephen Swoyer’s article Why Data Virtualization Trumps Data Federation Alone in the Dec. 1 TDWI “BI This Week” newsletter. Swoyer quotes Philip Russom, research manager with TDWI Research and author of a new TDWI Checklist Report, Data Integration for Real-Time Data Warehousing and Data Virtualization: “[D]ata virtualization must abstract the underlying complexity and provide a business-friendly view of trusted data on demand. To avoid confusion, it’s best to think of data federation as a subset or component of data virtualization. In that context, you can see that a traditional approach to federation is somewhat basic or simple compared to the greater functionality of data virtualization”.
OK, maybe I’m getting old, but that didn’t help me a lot to understand why data virtualization trumps data federation alone. So, I went to the Checklist Report, where I found a definition: “For the purposes of this Checklist Report, let’s define data virtualization as the pooling of data integration resources”, whereas traditional data federation “only federates data from many different data sources in real time”, the latter from a table sourced by Informatica, the sponsor of the report. When I read the rest of the table, it finally dawned on me that I was in marketing territory. Try this for size: “[Data virtualization] proactively identifies and fixes data quality issues on the fly in the same tool”! How would that work?
Let me try to clarify the conundrum of virtualization, federation, enterprise information integration and even mash-ups, at least from my (perhaps over-simplified) viewpoint. They’re all roughly equivalent – there may be highly nuanced differences, but the nuances depend on which vendor you’re talking to. They all provide a mechanism that decomposes a request for information into sub-requests, sends them, unbeknownst to the user, to disparate and distributed data sources, receives the answers, and combines them into a single response. To do that, they all have some amount of metadata that locates and describes the information sources, a set of adapters (often called by different names) that know how to talk to the different data sources, and, for want of a better description, a layer that insulates the user from all of the complexity underneath.
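To make that description concrete, here is a deliberately simplified sketch in Python of the three ingredients just listed: a metadata catalog, source adapters, and an insulating layer that decomposes a query and merges the answers. The class names, the in-memory “sources” and the join-on-a-key merge are my own illustrative assumptions, not any vendor’s product; a real data virtualization tool adds query optimization, pushdown, security, caching and much more.

# A deliberately simplified sketch of the pattern; all names are hypothetical.
from dataclasses import dataclass

# "Adapters": each one knows how to talk to one kind of source.
class InMemoryAdapter:
    """Stands in for a relational, web-service, or flat-file source."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, attributes):
        # Return only the requested attributes from this source's rows.
        return [{a: r[a] for a in attributes if a in r} for r in self.rows]

# Metadata: locates a source and describes which attributes it can supply.
@dataclass
class SourceMetadata:
    name: str
    attributes: set
    adapter: InMemoryAdapter

# The insulating layer: decomposes the request, gathers and combines the answers.
class VirtualizationLayer:
    def __init__(self, catalog, join_key):
        self.catalog = catalog      # list of SourceMetadata
        self.join_key = join_key    # attribute used to line rows up across sources

    def query(self, attributes, criteria=None):
        criteria = criteria or {}
        needed = set(attributes) | set(criteria) | {self.join_key}
        merged = {}
        for source in self.catalog:
            wanted = needed & source.attributes
            # Only ask sources that hold the join key plus something we need.
            if self.join_key in source.attributes and wanted - {self.join_key}:
                for row in source.adapter.query(wanted):
                    merged.setdefault(row[self.join_key], {}).update(row)
        # Filter only after rows are combined, so a predicate can span sources.
        return [row for row in merged.values()
                if all(row.get(k) == v for k, v in criteria.items())]

# One question from the user; two "sources" answer it without the user knowing.
crm = SourceMetadata("crm", {"cust_id", "name"},
                     InMemoryAdapter([{"cust_id": 1, "name": "Acme"},
                                      {"cust_id": 2, "name": "Globex"}]))
billing = SourceMetadata("billing", {"cust_id", "balance"},
                         InMemoryAdapter([{"cust_id": 1, "balance": 120.0},
                                          {"cust_id": 2, "balance": 75.5}]))

layer = VirtualizationLayer([crm, billing], join_key="cust_id")
print(layer.query(["name", "balance"], {"name": "Acme"}))
# -> one combined row for Acme, with name from "crm" and balance from "billing"

Even in this toy form, the hard part is visible: the layer can only combine what the sources return, so inconsistent keys or conflicting values across sources surface directly in the merged answer – which is exactly where the data quality question below comes in.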
But, whatever you call it (and let’s call it data virtualization for now – the term with allegedly the greatest cachet), is it a good idea? Should you do it? I believe the answer today is a resounding yes – there is far too much information of too many varieties to ever succeed in getting it into a single EDW. There is an ever-growing business demand for access to near real-time information that ETL, however trickle-fed, struggles to satisfy. And, yes, there are dangers and drawbacks to data virtualization, just as there are to ETL. The biggest drawback, despite Informatica’s claim to the contrary, is that you have to be really, really careful about data quality.
By the way, I am open to being proven wrong on this last point; it’s only by our mistakes that we learn! Personally, I could use a tool that “proactively identifies and fixes data quality issues on the fly”.