How to contribute a limited amount of storage from a Slave (DataNode) to a Hadoop cluster.

Aim : The DataNode should share/contribute only 2GB of storage out of a 3GB externally attached EBS volume.
✔ Hadoop cluster
For this task I have set up a Hadoop cluster with one master node (NameNode) and one DataNode. Both are launched on the AWS cloud.

I have created a 3GB EBS volume in the AWS cloud.

Now we have to attach the 3GB EBS volume to the DataNode instance.
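As a rough sketch, the same volume creation and attachment can also be done from the AWS CLI; the availability zone, volume ID, and instance ID below are placeholders and must match your own instance:

aws ec2 create-volume --size 3 --availability-zone ap-south-1a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf

On Xen-based instances a volume attached as /dev/sdf typically shows up inside the instance as /dev/xvdf, which is the device name used in the following steps.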

Check whether the external EBS volume is attached or not.

We can check this using the command ‘# fdisk -l’. This command lists all the hard disks/volumes attached to the system.

To see all volumes mounted on the system, use the command ‘# df -h’.

In the output of ‘# df -h’ we can see that the storage we attached is not listed. That means we can’t use this external storage directly. To store data on that external storage we have to follow these steps:
- Make a partition in the storage.
- Format that partition.
- Mount the partition.
Make Partition :
Use the command ‘# fdisk /dev/xvdf’
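A minimal sketch of the interactive fdisk session, assuming we create a single 2GB primary partition (matching the 2GB limit from the aim) and leave the rest of the 3GB disk unused:

n      (create a new partition)
p      (make it a primary partition)
1      (partition number 1)
[Enter]  (accept the default first sector)
+2G    (last sector: size the partition to 2GB)
w      (write the partition table and exit)

After this, ‘# fdisk -l’ should show a new 2GB partition named /dev/xvdf1.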

Format Partition :
Use the command ‘# mkfs.ext4 /dev/xvdf1’

Mount Partition :
A partition looks like a directory, but we can’t access it directly. To use the partition we have to mount it onto a directory. We can create a directory manually with ‘# mkdir /extdrive’ and mount the partition onto it with ‘# mount /dev/xvdf1 /extdrive’.
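Note that a mount made this way does not survive a reboot. As a sketch, an entry like the following could be added to ‘/etc/fstab’ so the partition is remounted automatically; the ‘nofail’ option is an assumption added so the instance still boots if the volume is detached:

/dev/xvdf1   /extdrive   ext4   defaults,nofail   0   2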

After mounting the partition, check whether it was mounted successfully. For that, use the command ‘# df -h’.

Now we can use this volume to store data.
First, install the Hadoop and JDK software on both the NameNode and the DataNode.
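As a sketch, assuming the JDK and Hadoop RPM packages have already been downloaded to both instances (the file names below are placeholders for whichever versions you actually use):

rpm -ivh jdk-8u171-linux-x64.rpm
rpm -ivh hadoop-1.2.1-1.x86_64.rpm --force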

Namenode :
1. After installation, go to the directory ‘/etc/hadoop/’ and configure the ‘hdfs-site.xml’ file as shown below.
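A minimal sketch of the NameNode’s ‘hdfs-site.xml’, assuming Hadoop 1.x property names and a NameNode metadata directory of /nn (the directory name is an assumption; create it first with ‘# mkdir /nn’):

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/nn</value>
  </property>
</configuration>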

2. After this, configure the ‘core-site.xml’ file as shown below.
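A minimal sketch of the NameNode’s ‘core-site.xml’, assuming the service listens on all interfaces on port 9001 (both the bind address and the port are assumptions):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9001</value>
  </property>
</configuration>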

3. Format the NameNode using the command ‘# hadoop namenode -format’.

4. Start the NameNode service using the command ‘# hadoop-daemon.sh start namenode’. After this, we can check whether the NameNode has started using the command ‘# jps’.

Datanode :
1. After installation, go to the directory ‘/etc/hadoop/’ and configure the ‘hdfs-site.xml’ file as shown below.
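A minimal sketch of the DataNode’s ‘hdfs-site.xml’, again assuming Hadoop 1.x property names. Pointing the data directory at the ‘/extdrive’ mount point from the earlier step is what limits the DataNode’s contribution to the 2GB partition instead of the whole disk:

<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/extdrive</value>
  </property>
</configuration>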

2. After this, configure the ‘core-site.xml’ file so that it points to the NameNode, as shown below.
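A minimal sketch of the DataNode’s ‘core-site.xml’; replace NAMENODE_IP with the public or private IP of the NameNode, and the port is assumed to match whatever the NameNode’s own ‘core-site.xml’ uses:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://NAMENODE_IP:9001</value>
  </property>
</configuration>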

3. Start the DataNode service using the command ‘# hadoop-daemon.sh start datanode’. After this, we can check whether the DataNode has started using the command ‘# jps’.

✔ Now inside the NameNode
- Check the cluster report of the NameNode using the following command:
‘# hadoop dfsadmin -report’
The report lists the configured capacity contributed by each DataNode; here it is limited to the roughly 2GB partition mounted at /extdrive rather than the full 3GB EBS volume.

✔ Conclusion :
A DataNode can contribute a limited amount of storage to the cluster by creating a partition of the desired size, mounting it, and configuring the DataNode to store HDFS data on that partition.