Big Data is a vague term, so business users beware. You have to understand what big data can actually do and what its limitations are. As you map your strategy, it’s critical to ask the right questions to ensure you ultimately net useful information.
Businesses are right to be concerned about being left behind as competitors and colleagues leverage big data to achieve a variety of business goals. But before being swept up in the wave, take a step back and consider these five questions to ensure you set out on the right path:
1. What’s your problem?
This seems like an obvious question, but companies feeling pressured to become “data-driven” may race ahead without first properly defining the problems (or opportunities) at hand. Are you a business analyst who can’t fit the data you require into Excel? Are you unable to access your company’s big data in the first place? Are you a chief information officer charged with reducing the wait time for query returns? Are you a non-technical user tired of waiting days or weeks for query results? Is your data structured or unstructured? All of the above?
Of course, one of the problems you might face is budget, particularly at startups and small-to-medium-sized businesses. The price of data warehousing and proprietary hardware can be prohibitive. If affordability is an issue, map out a strategy based on software that runs on commodity hardware and does not require data warehousing.
2. What’s the price you pay for free (open source) software?
There’s been a lot of hoopla over Hadoop, and while it serves as a fantastic open source solution for some business needs, free doesn’t mean there’s no price to pay. Hadoop runs on commodity hardware, and that hardware requires an investment, as do the power and connectivity to run it.
The core Hadoop distribution is free and open source software when obtained from a few key Hadoop vendors. But some vendors sell proprietary Hadoop distributions, and even the open source distributions come with proprietary add-on management tools. Unless you’re downloading your Hadoop components directly from the Apache Software Foundation, you’re on the road to the same software license and lock-in concerns you have with commercial software.
And let’s not forget the salaries of the data scientists required for deployment and management. If you’ve got a big wallet for IT and the hardware to boot, Hadoop might be right for you. But not everything is “Hadoopable.”
This leads me to the next question.
3. Does size matter? (Your business’s size and the size of your data.)
The conversation around Big Data has centered largely on petabytes. However, most businesses work with terabytes of data. When working in the terabyte range, the overhead of a big cluster of machines may not pay off. You might find that legacy solutions are unnecessarily super-sized for the needs of your business.
If you fall within the TB scale, you are within single-server range. You can keep cost down and simplicity up by aiming for a single-server solution. Just ten years ago, a single computer could only handle gigabytes of data, but today’s commodity hardware can handle terabytes, opening up a range of options that were previously unavailable.
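As a rough sanity check, here’s a back-of-envelope calculation in Python. The hardware figures (disk size, scan rate) are illustrative assumptions, not benchmarks; plug in your own dataset size and server specs.

```python
# Back-of-envelope check: does a terabyte-scale dataset fit on one box?
# All figures below are illustrative assumptions -- replace with your own.

DATA_TB = 5                  # assumed working dataset size, in terabytes
SERVER_DISK_TB = 20          # a commodity server with ~20 TB of local disk
SCAN_RATE_GB_PER_SEC = 2.0   # rough sequential read rate for local SSDs

data_gb = DATA_TB * 1024
fits_on_disk = DATA_TB <= SERVER_DISK_TB
full_scan_minutes = data_gb / SCAN_RATE_GB_PER_SEC / 60

print(f"Fits on a single server's disk: {fits_on_disk}")
print(f"Worst-case full table scan: ~{full_scan_minutes:.0f} minutes")
```

If numbers like these come out comfortable, a single-server solution deserves a serious look before you commit to a cluster.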
4. Where is your data?
If most of your data is on-premise, your strategy should be different than in situations where the majority is in the cloud. For example, if your data is sitting on the Amazon or Rackspace cloud, then running a big data solution within that framework makes sense because the data is easy to move within that environment. However, if most of your data resides on-premise and you’re considering running your big data queries in the cloud, think again. Big data is difficult to move around and keeping it synced when uploading to the cloud poses many challenges. Better to remain within the on-premise environment in such cases.
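To see why, it’s worth doing the math on the initial upload alone. Here’s a rough estimate in Python; the dataset size and bandwidth are placeholder assumptions, so substitute your own measured figures.

```python
# Rough estimate of how long it takes to push on-premise data to a cloud
# provider over a dedicated uplink. Both numbers are assumptions.

DATA_TB = 10          # data to upload, in terabytes
UPLINK_MBPS = 500     # sustained upload bandwidth, in megabits per second

data_bits = DATA_TB * 1024 ** 4 * 8           # terabytes -> bits
seconds = data_bits / (UPLINK_MBPS * 10 ** 6)
days = seconds / 86400

print(f"Initial upload of {DATA_TB} TB at {UPLINK_MBPS} Mbps: ~{days:.1f} days")
```

And that’s just the first load; keeping the cloud copy synced as on-premise data changes is a recurring cost on top of it.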
5. What is the distinction between the various technologies?
There are three types of technologies currently utilized for big data analytics: software database appliances, hardware database appliances, and distributed databases.
Software database appliances are deployed on commodity hardware, generally on a single computer, so they tend to be affordable and simply architected. Examples include relational databases such as SQL Server or MySQL, as well as SiSense’s ElastiCube technology.
Hardware database appliances consist of proprietary software bundled with proprietary (i.e., expensive) hardware. Proprietary hardware has more powerful specs than commodity hardware but can cost 50 times more.
Distributed databases are software deployed on a cluster of computers, allowing resource-intensive processing operations to be “parallelized” across the cluster. This involves complex architecture.
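To make the “parallelize” idea concrete, here’s a toy Python sketch: the data is split into chunks, each chunk is aggregated independently, and the partial results are merged. A real distributed database spreads this work across machines; here the cluster is simulated with local worker processes.

```python
# Toy illustration of split -> aggregate in parallel -> merge,
# the basic pattern behind distributed query processing.

from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker aggregates its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))   # stand-in for a large table column
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)   # parallel "map" phase

    total = sum(partials)                          # "merge" phase
    print(total)
```

The payoff is throughput on huge datasets; the price is the coordination machinery that real clusters need around this simple pattern.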
Other technologies you may encounter, such as in-memory or OLAP cubes, are smaller scale technologies that do not directly tackle big data. The data loaded into these data mart technologies has been significantly trimmed down prior to being loaded, typically by one of the big data technologies mentioned above.
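As a hypothetical illustration of that trimming step, the pandas sketch below collapses a detailed fact table into the kind of compact summary a data mart or cube would actually hold. The column names and tooling are assumptions for the example, not a prescribed pipeline.

```python
# Sketch of "trim before loading": a large, detailed fact table is
# pre-aggregated into a small summary suitable for a data mart or cube.

import pandas as pd

# Pretend this came from the big data layer (billions of rows in practice).
detail = pd.DataFrame({
    "region":  ["east", "east", "west", "west", "west"],
    "product": ["a", "b", "a", "a", "b"],
    "revenue": [100.0, 250.0, 80.0, 120.0, 300.0],
})

# Collapse raw events to one row per (region, product); this summary,
# not the raw detail, is what gets loaded into the smaller-scale tool.
summary = detail.groupby(["region", "product"], as_index=False)["revenue"].sum()
print(summary)
```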