How to Contribute a Limited Amount of Storage as a Slave to a Hadoop Cluster

To contribute a limited amount of storage to a Hadoop cluster, we first need to know a little about Hadoop.
What is a Hadoop cluster?
A Hadoop cluster is a hardware cluster that facilitates the use of open-source Hadoop technology for data handling. The cluster consists of a group of nodes, which are processes running on physical or virtual machines. The nodes of a Hadoop cluster work in coordination to deal with unstructured data and produce results.
Prerequisites
- AWS account
- Hadoop cluster set up with at least 2 slaves.
Let's start.
First, we need a Hadoop cluster on AWS with at least 2 DataNodes and, of course, 1 master node.
Here is my cluster with 2 slaves and 1 master.

Here you can see that the slaves are contributing exactly as much storage to the cluster (20 GB each) as the size of their root volumes, since the DataNode data directory lives on the root filesystem.
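
You can verify this from the terminal as well: the Hadoop admin report prints each DataNode's configured capacity. A minimal check, run from the master node of an already running cluster:

    # Print per-DataNode configured capacity and DFS usage
    hdfs dfsadmin -report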

Let's create one more volume and attach it to one of the slaves, so that the slave contributes exactly as much (limited) storage to the cluster as we want.

As you can see, we are creating a volume of 1 GB (you can choose the size according to your needs). We should also select the same Availability Zone as our slaves, so that the volume can be attached to one of them.
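
The same steps can also be scripted with the AWS CLI instead of the console. A rough sketch, where the availability zone, volume ID, instance ID, and device name are placeholders for your own values:

    # Create a 1 GiB EBS volume in the same AZ as the slave instance
    aws ec2 create-volume --size 1 --availability-zone ap-south-1a --volume-type gp2

    # Attach it to the slave (use the volume ID returned above)
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf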


Now the volume has been attached to one of our slaves.
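
On the slave itself, the new disk should now show up as an extra block device. On many EC2 Linux instances a volume attached as /dev/sdf appears as /dev/xvdf, though the name can differ:

    # List block devices; the new 1 GB disk appears with no mount point yet
    lsblk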

Next, we have to create a partition on that volume.
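
A sketch of creating a single primary partition with fdisk, assuming the volume appeared as /dev/xvdf:

    # Open the new disk in fdisk (run as root)
    fdisk /dev/xvdf
    # Inside fdisk, the typical inputs are:
    #   n  -> new partition
    #   p  -> primary
    #   1  -> partition number
    #   (accept the default first and last sectors to use the whole disk)
    #   w  -> write the partition table and exit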

After creating the partition, we have to format it.
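
For example, putting an ext4 filesystem on the new partition (the device name again assumes /dev/xvdf):

    # Format the first partition of the new volume
    mkfs.ext4 /dev/xvdf1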

The formatting completed successfully. Now we have to create a directory and then mount the partition on that directory.
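
A minimal sketch, using /dn1 as a hypothetical directory name:

    # Create a mount point and mount the new partition on it
    mkdir /dn1
    mount /dev/xvdf1 /dn1

    # Confirm the mount and its size
    df -h /dn1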


After mounting the partition, we register that directory in the hdfs-site.xml file of the DataNode.
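
Concretely, the DataNode's data directory is pointed at the mounted folder. A sketch of the relevant hdfs-site.xml entry, assuming the /dn1 mount point from above; the entry goes inside the <configuration> block, and on Hadoop 1.x the property is named dfs.data.dir instead:

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/dn1</value>
    </property>
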
Next, we have to stop the DataNode service and then start it again.
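
Restarting only the DataNode daemon is enough for it to pick up the new directory. The exact command depends on your Hadoop version; the classic script looks like this, while Hadoop 3 uses hdfs --daemon stop/start datanode:

    # On the slave: stop and start the DataNode daemon
    hadoop-daemon.sh stop datanode
    hadoop-daemon.sh start datanode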

Finally, you can check the storage capacity that the slave with the attached volume is now contributing to the cluster.
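
Running the admin report again from the master should show the slave's configured capacity increased by roughly the size of the new partition:

    # Re-check per-node capacity after the restart
    hdfs dfsadmin -report | grep -E 'Name|Configured Capacity'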

It is exactly the same amount of storage that we provisioned on the volume.
This is how we can contribute a limited amount of storage to the cluster.