Monday, January 20, 2014

Hadoop over-utilization of HDFS

Do you face the problem of HDFS over-using a DataNode's disks, where usage frequently hits 100% and leaves the cluster imbalanced? One way to solve this is to set the parameter "dfs.datanode.du.reserved" in hdfs-site.xml. It reserves a fixed amount of disk space on each volume for non-HDFS use, so HDFS block storage can never fill the disk completely and other processes on the node still have room to work.
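For example, to reserve 10 GB per volume for non-HDFS use, you could add an entry like the one below to hdfs-site.xml. The value is given in bytes and applies per volume on each DataNode; the 10 GB figure is only an illustration, so pick a number that suits your disks.

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Illustrative value: reserve 10 GB (in bytes) per volume for non-HDFS use -->
  <value>10737418240</value>
</property>

Restart the DataNode after making this change so the new reservation takes effect.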