Monday, January 20, 2014

Hadoop: Over-utilization of HDFS

Do you face the problem of HDFS over-use on a datanode, where the disk frequently hits 100% and leaves you with an imbalanced cluster? One way to address this is to set the parameter "dfs.datanode.du.reserved". It reserves a portion of each datanode volume for non-HDFS use, so HDFS stops writing before the disk fills up completely and some space is always left for the operating system and other non-HDFS data.
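As a minimal sketch, assuming you edit hdfs-site.xml on each datanode, reserving 10 GB per volume might look like this (the 10 GB figure is only an illustration; choose a value that suits your disk sizes):

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- reserved space in bytes per volume for non-HDFS use; 10 GB shown as an example -->
  <value>10737418240</value>
</property>

The value is in bytes and applies per volume. The datanode reads this setting at startup, so a datanode restart is needed for the change to take effect.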
