It is often thought that Apache® Hadoop-based data lakes are a potential panacea for the thorny data management issues that have long plagued relational databases. After all, the (mistaken) belief goes, you can simply dump all your data into Hadoop's file system and, through the magic of schema-on-read, your desired result sets will appear with very little effort. The reality is that data management, even for Hadoop, isn't going away, and in fact probably never will.
If you’ve never read Tyler Brûlé’s columns in the Financial Times, you’re really missing something. Mr. Brûlé’s column is a Sunday morning staple in which he comments on design, style, business, travel and more. Even better, Mr. Brûlé was recently paired with the FT’s Lucy Kellaway in an article in which the two discussed his obsession with cleanliness, order and aesthetics.
Reading along, I found some interesting parallels between Mr. Brûlé’s observations on office clutter and relational database design practices.
For example, Mr. Brûlé despises anarchy. In the article he pointed to a staffer’s empty Evian water bottle on a desk, complaining that such items undermine his carefully considered office décor. Most certainly, Mr. Brûlé does not like jackets on the backs of chairs, or anything else that detracts from the intended design and decoration of the office. Why? “There needs to be a rule of law, or else where does it end?” he says. Otherwise “people will come in with wheelie suitcases, or with plastic hangers and dry cleaning.”
While some may find all this attention to detail slightly amusing, Mr. Brûlé does not. And neither does your company’s database administrator (DBA). That’s because delivering accurate reports, BI visualizations and powerful analytics requires significant effort to identify, model, transform, curate and secure the data in a relational database. In essence, all the work to make your data look as clean, ordered and useful as Mr. Brûlé’s office is an ongoing process handled by your data stewards and DBAs.
Now let’s get back to Hadoop. Almost no one who works with Hadoop on a daily basis would suggest that data can simply be dumped into Hadoop’s file system and be of high value to rank-and-file business users.
Want to store sensitive data in your data lake? You’ll most certainly be doubling down on your efforts to lock up key data, especially since Hadoop’s native security controls are still maturing. In addition to data security, you’ll still have to contend with metadata management, architecture and design, and governance in Hadoop. Indeed, none of these data management issues are going away if you’re planning on allowing Hadoop to serve as a true lake or “hub” for all your organization’s data.
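As a minimal sketch of what “locking up key data” can look like at the file-system level, the snippet below uses Hadoop’s Java FileSystem API to tighten permissions and ownership on a directory holding sensitive records. The path, user and group names here are hypothetical, and the example assumes a cluster with HDFS permission checking enabled; real deployments typically layer on Kerberos authentication and a policy tool such as Apache Ranger or Apache Sentry as well.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class LockDownSensitiveData {
    public static void main(String[] args) throws IOException {
        // Load cluster settings (core-site.xml / hdfs-site.xml on the classpath).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical directory holding sensitive customer records.
        Path sensitive = new Path("/data/lake/customers/pii");

        // Restrict access: owner rwx, group r-x, no world access (mode 0750).
        fs.setPermission(sensitive, new FsPermission((short) 0750));

        // Assign ownership to a dedicated service account and steward group
        // (hypothetical names; changing ownership requires superuser privileges).
        fs.setOwner(sensitive, "etl_svc", "data_stewards");

        fs.close();
    }
}
```

File-level permissions like these are only the floor, of course; they are one small, concrete piece of the broader metadata, governance and security work described above.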
Are you just starting out with your Hadoop data lake and not quite there yet in terms of clear and accepted data management processes? It probably seems like a gargantuan task at first, but as the old saying goes, you eat the elephant one bite at a time. Or in the words of the esteemed Tyler Brûlé: “It’s a daily effort to adhere to set standards. (But) you need to aspire to something.” Even if that “something” is a well-governed, managed and secured Hadoop-based data lake.