How Can We Contribute Limited Storage as a Slave Node in a Distributed Cluster (Hadoop Cluster)?

Raj Kumar Vishwakarma
3 min read · Nov 18, 2020


Sometimes in a Hadoop cluster we may not want to contribute the whole storage of a slave (DataNode). To solve this challenge, I am sharing one way to do it here, using the partitioning concept.

So Let’s Start:

I have already created a Hadoop cluster on top of the AWS cloud with 1 master (NameNode) and 1 slave node (DataNode).

For storage, I have created an EBS volume of 8 GB and attached it to the DataNode (slave).

Previously, I contributed the whole storage (10 GB) of the DataNode, as we can see below:
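One quick way to check how much storage a DataNode is contributing is the HDFS admin report, run on the NameNode (on Hadoop 1.x the command is hadoop dfsadmin -report; on newer releases it is hdfs dfsadmin -report):

# hadoop dfsadmin -report    (look at the Configured Capacity reported for the DataNode; here it was about 10 GB)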

Suppose now I have a requirement to contribute only 5 GB of the EBS volume I created (which is 8 GB in size).

For this, I have to create a 5 GB partition. To create the partition, first we have to go inside the volume with the help of the following command:

# fdisk <drive>   (see the image)

Now, with the help of the drive's internal commands (see below), I have created the primary partition of 5 GB.
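For reference, the internal fdisk commands that carve out a 5 GB primary partition look roughly like this (assuming the EBS volume appears as /dev/xvdf, which matches the /dev/xvdf1 partition used below):

# fdisk /dev/xvdf
Command (m for help): n                      (create a new partition)
Select (default p): p                        (primary partition)
Partition number (1-4, default 1): 1
First sector: press Enter to accept the default
Last sector: +5G                             (size of the partition)
Command (m for help): w                      (write the partition table and exit)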

After creating the partition, I created a new directory /dnode, and then I formatted the partition /dev/xvdf1 with the command:

# mkfs.ext4 /dev/xvdf1

At last, I mounted /dev/xvdf1 on /dnode.
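Put together, the format-and-mount steps look roughly like this (the directory name /dnode is the one used in this article; any empty directory would work):

# mkdir /dnode                  (directory the DataNode will use)
# mkfs.ext4 /dev/xvdf1          (format the new 5 GB partition)
# mount /dev/xvdf1 /dnode       (mount the partition on /dnode)
# df -h /dnode                  (confirm roughly 5 GB is now available)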

As we can see in the image:

I have successfully created the partition of 5 GB.

Now let's move to the next part, i.e., we have to make some changes in the HDFS configuration file.

Now go to /etc/hadoop and edit hdfs-site.xml.

Previous:

After Edit:
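The screenshots show hdfs-site.xml before and after the edit. As a rough sketch, after the edit the DataNode directory property points at the mounted 5 GB partition; on Hadoop 1.x the property is called dfs.data.dir, while on Hadoop 2.x and later it is dfs.datanode.data.dir:

<configuration>
  <property>
    <name>dfs.data.dir</name>        <!-- dfs.datanode.data.dir on Hadoop 2.x+ -->
    <value>/dnode</value>            <!-- directory where the 5 GB partition is mounted -->
  </property>
</configuration>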

That's it; we have completed all the steps.

After starting Hadoop:
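If the daemons are not already running, they can be started with the standard scripts that ship with Hadoop 1.x/2.x:

On the master:  # hadoop-daemon.sh start namenode
On the slave:   # hadoop-daemon.sh start datanode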

Now let's see what happened.
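Re-running the same admin report on the NameNode should now show the DataNode contributing roughly 5 GB instead of the earlier 10 GB:

# hadoop dfsadmin -report    (the Configured Capacity for the DataNode should now be about 5 GB)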

We can clearly see that I have successfully contributed 5 GB of my EBS volume as storage.

For any query, ping me on LinkedIn.

