Do you face the problem of HDFS over-using the disks on a DataNode? A volume frequently hits 100% usage, which results in an imbalanced cluster. To solve this, set the parameter `dfs.datanode.du.reserved` in `hdfs-site.xml`. It reserves a fixed amount of disk space (in bytes, per volume) for non-HDFS use, so the DataNode stops writing blocks before the disk fills up completely, leaving room for the OS, logs, and intermediate data, and preventing HDFS from over-using the disk.
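A minimal `hdfs-site.xml` sketch of this setting (the 10 GB value here is just an example; choose a reserve that suits your disks, and restart the DataNode after changing it):

```xml
<!-- hdfs-site.xml (on each DataNode) -->
<configuration>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- Reserved space in bytes per volume for non-HDFS use: 10 GB -->
    <value>10737418240</value>
  </property>
</configuration>
```

After restarting the DataNode, you can confirm the reduced capacity reported per node with `hdfs dfsadmin -report`.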
All the questions that scared me, now I am trying to scare them .. so that they can't scare others :)
Monday, January 20, 2014
Hadoop: Over-utilization of HDFS