Hadoop is the leading open-source software framework for scalable, reliable, distributed computing. With the world producing data in the zettabyte range, there is a growing need for cheap, scalable, reliable and fast computing to process and make sense of all of this data. The underlying technology for the Hadoop framework was created by Google because no software on the market fit Google's needs: indexing the web and analysing search patterns required deep, computationally intensive analytics to improve its user-behaviour algorithms. Hadoop was built for exactly that: it runs on a large number of machines that share the workload to optimise performance. Moreover, Hadoop replicates data across the machines, ensuring that processing will not be disrupted if one or more machines stop working. Hadoop has been extensively developed over the years, adding new technologies and features to the existing software and creating the ecosystem we have today.
HDFS, or Hadoop Distributed File System, is the primary storage system used by Hadoop. It is the key tool for managing Big Data and supporting analytic applications in a scalable, cheap and rapid way. Hadoop usually runs on low-cost commodity machines, where server failures are fairly common. To cope with this high-failure environment, the file system distributes copies of the data across different servers in different server racks, keeping the data highly available. Moreover, when HDFS ingests data it breaks it down into smaller blocks that are assigned to different nodes in the cluster, which allows for parallel processing and increases the speed at which the data can be managed.
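As a rough sketch of how an application talks to HDFS, the Java snippet below writes a small file and reads it back through the standard FileSystem API; the NameNode address, path and replication factor are placeholder assumptions, not values from this post.

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; a real cluster sets this in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        // Ask for three replicas so the blocks survive individual machine failures.
        conf.set("dfs.replication", "3");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/tmp/hdfs-demo.txt");
            // HDFS splits the written bytes into blocks and spreads them across DataNodes.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
            }
            try (FSDataInputStream in = fs.open(path)) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }
}
```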
Hadoop YARN (Yet Another Resource Negotiator) is the resource-management and job-scheduling layer of Hadoop, and the successor to the original Hadoop MapReduce runtime. The original MapReduce is no longer viable in today's environment. It was created over ten years ago, and as the volume of data being produced grew dramatically, so did the time MapReduce needed to process it, stretching from minutes to hours. Secondly, programming MapReduce jobs is a time-consuming and complex task that requires extensive training. And lastly, MapReduce did not fit all business scenarios, as it was created for the single purpose of indexing the web. YARN provides many benefits over its predecessor. It scales better thanks to distributed life-cycle management and support for multiple MapReduce APIs in a single cluster. It allows for faster processing and, coupled with the in-memory capabilities of other software such as Apache Spark, it comes close to real-time processing. YARN also supports many frameworks besides MapReduce, making it more flexible for different use cases.
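To give a concrete sense of the programming effort involved, here is the canonical word-count job written against the org.apache.hadoop.mapreduce API; on a YARN cluster the framework schedules its map and reduce tasks across the nodes. The input and output paths are supplied on the command line.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map phase: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A job like this is typically packaged into a jar and launched with `hadoop jar wordcount.jar WordCount /input /output`, after which YARN allocates containers for the tasks.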
Apache Hive is a data-warehouse management and analytics system built for Hadoop. Hive was initially developed by Facebook, but soon became an open-source project and has been used by many other companies ever since. Apache Hive uses a SQL-like scripting language called HiveQL that can convert queries into MapReduce, Apache Tez or Spark jobs.
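As a minimal sketch, assuming a reachable HiveServer2 endpoint and a made-up page_views table, the snippet below submits HiveQL through the standard Hive JDBC driver; Hive then compiles the query into MapReduce, Tez or Spark work behind the scenes.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQlDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint; real deployments will differ.
        String url = "jdbc:hive2://hiveserver.example.com:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // HiveQL looks like SQL, but Hive turns it into cluster jobs.
            stmt.execute("CREATE TABLE IF NOT EXISTS page_views (user_id STRING, url STRING) "
                    + "STORED AS ORC");
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT url, COUNT(*) AS hits FROM page_views GROUP BY url")) {
                while (rs.next()) {
                    System.out.println(rs.getString("url") + "\t" + rs.getLong("hits"));
                }
            }
        }
    }
}
```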
Apache Pig is a platform for analysing large sets of data. It includes a high-level scripting language called Pig Latin that automates much of the manual coding required when writing MapReduce jobs in Java. Apache Pig is somewhat similar to Apache Hive, though some users say the transition is easier to Hive than to Pig if you come from an RDBMS SQL background. However, both platforms have a place in the market: Hive is more optimised for running standard queries and is easier to pick up, whereas Pig is better for tasks that require more customisation.
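For a flavour of Pig Latin, here is a small sketch that embeds a script through Pig's Java PigServer API and runs it in local mode; the input file and its field layout are assumptions for illustration.

```java
import java.util.Iterator;
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.data.Tuple;

public class PigLatinDemo {
    public static void main(String[] args) throws Exception {
        // Local mode runs on this machine; ExecType.MAPREDUCE would target a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);
        try {
            // Hypothetical input: a tab-separated file of (user, url) page views.
            pig.registerQuery("views = LOAD 'page_views.tsv' AS (user:chararray, url:chararray);");
            pig.registerQuery("by_url = GROUP views BY url;");
            pig.registerQuery("hits = FOREACH by_url GENERATE group AS url, COUNT(views) AS n;");
            // Iterate over the result of the final alias.
            Iterator<Tuple> it = pig.openIterator("hits");
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        } finally {
            pig.shutdown();
        }
    }
}
```

Three short lines of Pig Latin replace the several dozen lines of Java that the equivalent hand-written MapReduce job would need.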
Apache HBase is a non-relational database that runs on top of HDFS. This schema-less database supports in-memory caching via the block cache and Bloom filters, providing near real-time access to large datasets and making it especially useful for the sparse data common in many Big Data use cases. However, it is not a replacement for a relational database, as it does not speak SQL and does not support cross-record transactions or joins.
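The sketch below shows the key-value style of access HBase offers through its standard Java client: one Put and one Get by row key rather than a SQL query. The table name, column family and row key are illustrative assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("page_views"))) {
            // Write one cell: row key "user42", column family "v", qualifier "url".
            Put put = new Put(Bytes.toBytes("user42"));
            put.addColumn(Bytes.toBytes("v"), Bytes.toBytes("url"), Bytes.toBytes("/home"));
            table.put(put);

            // Read it back by row key; lookups are key-based, not SQL queries.
            Result result = table.get(new Get(Bytes.toBytes("user42")));
            byte[] url = result.getValue(Bytes.toBytes("v"), Bytes.toBytes("url"));
            System.out.println(Bytes.toString(url));
        }
    }
}
```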
Hadoop has become the low-cost, industry-standard ecosystem for securely analysing high-volume data from a variety of enterprise sources. We specialise in helping organisations leverage this advanced stack to rapidly understand their data landscape and deliver faster, more insightful reporting and analysis. Contact us for more details.