Ok, so you are setting out to build the next Google and are considering a Map/Reduce-based data access strategy over traditional SQL. Just as you need a database server to process SQL queries, you also require the underlying infrastructure to manage your data and execute your Map/Reduce routines. Hadoop is one such system that is gaining acceptance, being co-developed and used for data analytics at Yahoo and Facebook, amongst others.
Hadoop is a system that distributes unstructured data across hundreds or thousands of machines forming shared-nothing clusters, and executes Map/Reduce routines against the data in those clusters. Hadoop has its own filesystem, which replicates each piece of data to multiple nodes, so that if one node holding the data goes down there are at least two other nodes from which to retrieve it. This protects data availability from node failure, something that is critical when there are many nodes in a cluster (think of it as RAID at the server level).
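To make the Map/Reduce idea concrete, here is a minimal sketch of the kind of routine Hadoop executes, written in the Hadoop Streaming style (a mapper and a reducer that read from stdin and write to stdout). The whitespace-delimited log layout and the field position of the request path are assumptions for illustration, not anything specified by Hadoop itself.

```python
#!/usr/bin/env python
# mapper.py -- emits one (path, 1) pair per input line.
# Sketch only; assumes a whitespace-delimited access log where the
# request path is the seventh field.
import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) > 6:                 # skip malformed lines
        path = fields[6]
        print("%s\t%d" % (path, 1))
```

```python
#!/usr/bin/env python
# reducer.py -- sums the counts for each path.
# Hadoop sorts mapper output by key, so identical paths arrive consecutively.
import sys

current_path, count = None, 0
for line in sys.stdin:
    path, value = line.rstrip("\n").split("\t")
    if path != current_path:
        if current_path is not None:
            print("%s\t%d" % (current_path, count))
        current_path, count = path, 0
    count += int(value)
if current_path is not None:
    print("%s\t%d" % (current_path, count))
```

Hadoop takes care of splitting the input files across the cluster, running the mapper on each split, shuffling and sorting the intermediate keys, and feeding them to the reducers; the routines themselves stay this simple.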
So will Hadoop outperform an RDBMS? Unless you are dealing with very large volumes of unstructured data (hundreds of GBs, TBs or PBs) and have large numbers of machines available, you will likely find a Hadoop Map/Reduce query running much slower than a comparable SQL query on a relational database. Hadoop uses a brute-force access method, whereas RDBMSs have optimizations for accessing data such as indexes and read-ahead. The benefits only come into play when massive parallelism is achieved, or when the data is unstructured to the point where no RDBMS optimizations can be applied to help query performance. Indeed, benchmarks from the Hadoop site show straight-line query performance significantly slower than a relational database on small-scale tests.
| | MySQL 5.0.27 | Hadoop-0.15.2 |
| --- | --- | --- |
| Data | B-tree disk table (MyISAM) | Text files (access_log) |
| Machines | 1 | 2 |
| Rows | 5,914,669 | 5,914,669 |
| Results | 100 | 100 |
| Time | 4.43 sec | 172.30 sec |
But as with all benchmarks, everything has to be taken into consideration. For example, if the data starts life as a text file in the file system (e.g. a log file), the cost of extracting that data from the text file, structuring it into a standard schema and loading it into the RDBMS has to be considered. If you have to do that for 1,000 or 10,000 log files, it may take minutes, hours or days (with Hadoop you still have to copy the files into its file system). In some environments it may be practically impossible to load such data into an RDBMS at all, because data is generated in such volume that a load process into an RDBMS cannot keep up. So while your Hadoop query time may be slower (speed improves with more nodes in the cluster), your access time to the data may potentially be improved.
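To illustrate that extraction cost, here is a rough sketch of the per-line work an RDBMS load implies: each raw access_log entry has to be parsed into typed columns before it can be inserted. The combined-log regular expression and the column choices below are assumptions for illustration only.

```python
# Sketch of the per-record structuring an RDBMS load requires;
# the combined-log format and target columns are assumptions.
import re
from datetime import datetime

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_line(line):
    """Turn one raw log line into a row ready for an INSERT, or None."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None                      # malformed lines need handling too
    return (
        m.group("host"),
        datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z"),
        m.group("method"),
        m.group("path"),
        int(m.group("status")),
        0 if m.group("size") == "-" else int(m.group("size")),
    )
```

With Hadoop the raw file is simply copied into the cluster's file system and this parsing is deferred to the map phase, which is where the access-time advantage described above comes from.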
Also, as there aren't any mainstream RDBMSs that scale to thousands of nodes, at some point the sheer mass of brute-force processing power will outperform the optimized, but scale-restricted, relational access methods.
So while Hadoop and Map/Reduce are gaining popularity, they shouldn't be considered a like-for-like alternative to an RDBMS for most applications. Hadoop is a specialized tool, and a specialized set of criteria needs to be met before it delivers a benefit over more traditional approaches.