An Introduction to ZOOKEEPER

In our last blog, we talked about the HDFS cluster, which is needed for deploying OpenTSDB in clustered mode. Continuing the series, in this blog we are going to talk about ZooKeeper, which will be used by HBase and OpenTSDB in the cluster.

Before talking about ZooKeeper, let's understand what a distributed system is and why ZooKeeper is needed in one.

Distributed System:

A distributed system is a software system composed of independent computing entities, linked together by a computer network, whose components communicate and coordinate with each other to achieve a common goal. For example: multiplayer online games like Clash of Clans.

Advantages of a Distributed System:

Scalability: We can easily expand the distributed system by adding more machines to the cluster.

Redundancy: All the machines in the cluster provide the same services, so even if one of them becomes unavailable, the work does not stop (the show must go on 😊).

The processes within these systems need some form of agreement to run correctly and efficiently. This kind of agreement is also known as distributed coordination.

We could build our own coordination system; however, that would take a lot of work and is not a trivial task. The hard part is implementing a correct, fault-tolerant solution.

So, Is there any alternative we can use?🤔

We can use a robust coordination service like ZooKeeper.

What is ZooKeeper?

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. ZooKeeper is simple, distributed, reliable and fast.

maintaining configuration information: ZooKeeper maintains cluster configuration information, which is shared across all the nodes in the cluster.

naming: ZooKeeper can be used as a naming service, with which one node can find another node in a large cluster, e.g. a 1000-node cluster.

providing distributed synchronization: We can also use ZooKeeper to solve distributed synchronization problems in the cluster, using primitives such as locks and queues.

providing group services: ZooKeeper also helps with group services, such as selecting a master in the cluster (the leader election process).
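As a rough sketch of the commonly used leader-election recipe (an illustration only, not the exact mechanism HBase uses): each candidate creates an ephemeral, sequential znode under a shared parent, and the candidate whose znode has the lowest sequence number acts as the leader. Using the zkCli.sh shell that ships with ZooKeeper, with a hypothetical /election parent:

# Each candidate creates an ephemeral, sequential znode under /election;
# the candidate whose znode has the lowest sequence number is the leader.
root@host:~# zkCli.sh -server localhost:2181 create /election ""
root@host:~# zkCli.sh -server localhost:2181 create -e -s /election/candidate- ""
# List the candidates and compare their sequence numbers:
root@host:~# zkCli.sh -server localhost:2181 ls /election

Because the candidate znodes are ephemeral, they disappear when the owning session ends, and the remaining candidates can elect a new leader. (A one-shot zkCli.sh command closes its session on exit, so a real client would keep its session open.)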

ZooKeeper can work in replicated mode as well as standalone mode.

Replicated Mode:

In replicated mode, multiple servers are involved. One of the servers is elected as the leader and the others are followers. If the leader fails, one of the followers is elected as the new leader.

[Image: ZooKeeper service in replicated mode]

The servers in the cluster must know about each other. They maintain an in-memory image of the state, along with transaction logs and snapshots in a persistent store. As long as a majority of the servers are available, the ZooKeeper service will be available; for example, a 3-server ensemble stays available as long as any 2 servers are up (tolerating 1 failure), while a 5-server ensemble tolerates 2 failures.

Clients connect to a single ZooKeeper server. However, when a client starts, it is given a list of servers, so if the connection to its current server fails, the client can connect to any other server in the cluster.
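For example, the zkCli.sh shell that ships with ZooKeeper accepts a comma-separated server list, so it can fail over to another server if the one it is connected to goes down (the host names below are the placeholders used later in this post):

root@host:~# zkCli.sh -server server1IP:2181,server2IP:2181,server3IP:2181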

Read operations can be served by any of the servers in the cluster, but write operations must go through the leader.

Standalone Mode:

ZooKeeper can also run in standalone mode. In this mode, all clients connect to a single ZooKeeper server.

[Image: ZooKeeper in standalone mode]

In this mode, we lose the benefits of replication and high availability.
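If you just want to try standalone mode quickly, the same official Docker image used later in this post runs a single ZooKeeper server by default (the container name here is arbitrary):

root@host:~# docker run -d --name zk-standalone -p 2181:2181 zookeeper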

ZooKeeper Data Model:

ZooKeeper has a hierarchical namespace. Each node in the namespace can have data associated with it as well as children. Paths to nodes are always expressed as canonical, absolute, slash-separated paths; there are no relative references. The namespace is organized much like a Linux file system.

[Image: ZooKeeper's hierarchical namespace of znodes]

Znode:

Every node in a ZooKeeper tree is referred to as a znode. Data is stored in znodes, and each znode maintains a stat structure that includes version numbers for data changes and ACL (Access Control List) changes.
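A minimal sketch of working with znodes from zkCli.sh (the path /app1 and its data are made up for illustration); the stat structure printed at the end contains counters such as dataVersion and aclVersion that change as the znode is modified:

# Create a znode with some data, read it back, update it, and inspect its stat structure.
root@host:~# zkCli.sh -server localhost:2181 create /app1 "hello"
root@host:~# zkCli.sh -server localhost:2181 get /app1
root@host:~# zkCli.sh -server localhost:2181 set /app1 "hello-v2"
root@host:~# zkCli.sh -server localhost:2181 stat /app1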

Deploy ZooKeeper:

To deploy ZooKeeper, we will use the official ZooKeeper Docker image.
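The actual zoo1.yml, zoo2.yml, and zoo3.yml files are not reproduced here. Purely as a sketch, assuming the official image's ZOO_MY_ID and ZOO_SERVERS environment variables (the exact ZOO_SERVERS format can vary between image versions, e.g. newer versions append the client port as ;2181), zoo1.yml on server1 could look something like this; zoo2.yml and zoo3.yml would change ZOO_MY_ID and which entry uses 0.0.0.0:

root@host:~# cat > zoo1.yml <<'EOF'
version: '3.1'
services:
  zoo1:
    image: zookeeper
    restart: always
    ports:
      - "2181:2181"   # client connections
      - "2888:2888"   # follower connections to the leader
      - "3888:3888"   # leader election
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=server2IP:2888:3888 server.3=server3IP:2888:3888
EOF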

On Server1:

Run the given yml on server1:
root@host:~# docker-compose -f zoo1.yml up -d

On Server2:

Run the given yml on server2:
root@host:~# docker-compose -f zoo2.yml up -d

On Server3:

Run the given yml on server3:
root@host:~# docker-compose -f zoo3.yml up -d

To check the status of ZooKeeper:

Run:
root@host:~# echo stat | nc localhost 2181
You can run the given command on all the servers and check the Mode field in the output to see which server is the leader and which are followers.
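To check all three servers in one go (assuming port 2181 is reachable from where you run this, and that the stat four-letter command is enabled; newer ZooKeeper versions may require whitelisting it):

root@host:~# for h in server1IP server2IP server3IP; do echo "== $h =="; echo stat | nc $h 2181 | grep Mode; done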
*Note: Replace server1IP, server2IP, and server3IP in all the yml files with their respective values.