An Introduction to HDFS

We are going to start a series of blogs on HDFS, ZooKeeper, HBase & OpenTSDB, and we will see how to set up an OpenTSDB cluster using these services. In this blog, we will study HDFS.

HDFS:

The Hadoop Distributed File System (HDFS) is a Java-based distributed file system that is fault-tolerant, scalable, and extremely easy to expand. It is designed to run on low-cost commodity hardware. HDFS is the primary distributed storage for Hadoop applications, and it provides interfaces that let applications move computation closer to the data.

Architecture:

HDFS has a master/slave architecture. It consists of a NameNode, DataNodes & a Secondary NameNode.

[Figure: HDFS architecture]

NameNode – An HDFS cluster consists of a single NameNode (the master server), which manages the file system namespace and regulates access to files by clients. It maintains and manages the file system metadata, e.g. which blocks make up a file, and on which DataNodes those blocks are stored.

DataNode – There are a number of DataNodes, usually one per node in the cluster, which manage the storage attached to the nodes they run on. DataNodes store the actual data in HDFS. We can add more DataNodes to increase the available space.

Secondary NameNode – Despite its name, the Secondary NameNode service is not a standby NameNode. In particular, it does not provide High Availability (HA) for the NameNode.

Why Secondary NameNode?

  • The NameNode stores modifications to the file system as a log appended to a native file system file.
  • When the NameNode starts up, it reads the HDFS state from an image file, fsimage, and then applies the edits from the edits log file.
  • It then writes the new HDFS state to fsimage and starts normal operation with an empty edits file.
  • Since the NameNode merges fsimage and edits only during startup, the edits log can grow very large over time on a busy cluster.
  • A side effect of a large edits file is that the next restart of the NameNode takes longer.
  • The Secondary NameNode merges the fsimage and edits log files periodically and keeps the edits log size within a limit (see the configuration sketch after this list).
  • It usually runs on a different machine than the primary NameNode, since its memory requirements are of the same order as the primary NameNode's.
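How often this merge (a checkpoint) happens is configurable. A minimal sketch, assuming the standard Hadoop 2.x property names, written in the HDFS_CONF_* environment-variable form that the uhopper Docker images used later in this post translate into hdfs-site.xml entries:

    # Checkpoint every hour (3600 s) or after 1,000,000 uncheckpointed
    # transactions, whichever comes first (the Hadoop 2.x defaults)
    HDFS_CONF_dfs_namenode_checkpoint_period=3600
    HDFS_CONF_dfs_namenode_checkpoint_txns=1000000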

Key Features:

Failure tolerant – data is replicated across multiple DataNodes to protect against machine failures. The default replication factor is 3 (every block is stored on three machines, provided at least three DataNodes are available).
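The replication factor can also be changed per file from the HDFS shell; a quick sketch (the path below is hypothetical):

    # Set the replication factor of an existing file to 2 and wait (-w)
    # until re-replication completes
    hdfs dfs -setrep -w 2 /user/test/file.txt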

Scalability – data transfers happen directly with the DataNodes, so read/write capacity scales fairly well with the number of DataNodes.

Space – need more disk space? Just add more DataNodes and re-balance.

Industry standard – other distributed applications are built on top of HDFS (e.g. HBase, MapReduce).

HDFS is designed to process large data sets with write-once-read-many semantics; it is not designed for low-latency access.

Data Organization:

  • Each file written into HDFS is split into data blocks, by default 64 MB or 128 MB depending on the Hadoop version (configurable; see the sketch after this list).
  • Each block is stored on one or more nodes.
  • Each copy of a block is called a replica.
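The block size can be overridden per upload. A sketch, assuming the HDFS shell accepts the generic -D option (file and directory names are hypothetical):

    # Upload a file with a 128 MB block size (134217728 bytes),
    # regardless of the cluster default
    hdfs dfs -D dfs.blocksize=134217728 -put bigfile.dat /user/test/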

Block placement policy:

  • The first replica is placed on the local node (the node where the writer runs).
  • The second replica is placed on a node in a different rack.
  • The third replica is placed on a different node in the same rack as the second replica (this can be verified with fsck, as shown below).
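You can inspect where the blocks and replicas of a file actually landed with the fsck tool (the path below is hypothetical):

    # Show files, blocks, replica locations and racks for a path
    hdfs fsck /user/test/file.txt -files -blocks -locations -racks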

Setup HDFS Cluster:

To create the HDFS cluster, we are going to use Docker. For Docker image details, see: https://hub.docker.com/u/uhopper/

Steps:

  • Create a Docker swarm overlay network, as sketched below.
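A minimal sketch of this step; the network name hadoop-net is an assumption:

    # On VM1 (the swarm manager)
    docker swarm init

    # On VM2 and VM3, join the swarm with the token printed by the
    # command above:
    #   docker swarm join --token <token> <VM1-ip>:2377

    # Back on VM1: create an attachable overlay network that all the
    # HDFS containers will share
    docker network create -d overlay --attachable hadoop-net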

NameNode

  • Create an environment variable file (namenode_env) for the NameNode on VM1.
  • Create the NameNode container on VM1, as sketched below:
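A sketch of both steps, assuming the uhopper/hadoop-namenode image and its documented CLUSTER_NAME variable; the cluster name, network name, and port mapping are assumptions:

    # namenode_env
    CLUSTER_NAME=hdfs-cluster

    # Start the NameNode container, publishing the web UI port 50070
    docker run -d --name namenode \
      --hostname namenode \
      --network hadoop-net \
      --env-file namenode_env \
      -p 50070:50070 \
      uhopper/hadoop-namenode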

DataNode:

  • Create an environment variable file (datanode_env) for the DataNodes on all 3 VMs.
  • Create DataNode1 on VM1.
  • Create DataNode2 on VM2.
  • Create DataNode3 on VM3 (see the sketch below).
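A sketch of these steps, assuming the uhopper/hadoop-datanode image and its CORE_CONF_* environment-variable convention; the fs.defaultFS URI (port 8020) pointing at the NameNode container started above is an assumption:

    # datanode_env (identical on all 3 VMs)
    CORE_CONF_fs_defaultFS=hdfs://namenode:8020

    # On VM1
    docker run -d --name datanode1 \
      --network hadoop-net \
      --env-file datanode_env \
      uhopper/hadoop-datanode

    # On VM2 and VM3, run the same command with --name datanode2
    # and --name datanode3 respectively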

On all VMs, check that every container is up and running by executing docker ps.

Once all containers are up and running, go to VM1, open a browser, and visit http://localhost:50070/dfshealth.html#tab-datanode

You will see output like the following:

[Screenshot: NameNode web UI, DataNodes tab]

HDFS CLI:

[Screenshot: HDFS CLI session]
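You can exercise the cluster with the HDFS shell from inside the NameNode container. A short sketch (the directory and file names are hypothetical):

    # Create a directory, upload a file from stdin, list it, read it back
    docker exec -it namenode hdfs dfs -mkdir -p /user/test
    docker exec -i namenode bash -c "echo hello | hdfs dfs -put - /user/test/hello.txt"
    docker exec -it namenode hdfs dfs -ls /user/test
    docker exec -it namenode hdfs dfs -cat /user/test/hello.txt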

In this blog, we studied HDFS and how to create a 3-node HDFS cluster. In the next blog, we will study ZooKeeper and create a ZooKeeper cluster.

References:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

https://docs.docker.com/network/network-tutorial-overlay/#walkthrough

https://hub.docker.com/u/uhopper/
