Thursday, September 19, 2013

Hadoop MapReduce job fails with "Too many fetch-failures"

This error may occur for the following reasons:

1. Wrong DNS or hosts-file entries

Description:

The nodes are unable to communicate with each other. On the node where the error appears, check that you can ping and nslookup the other nodes.
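The check above can be scripted. A minimal sketch, using `localhost` as a stand-in for a real worker hostname (substitute the hosts from your own cluster, e.g. the entries in your slaves file):

```shell
# Verify that a cluster node resolves correctly and is reachable.
# NODE is a placeholder -- replace it with each worker's hostname.
NODE=localhost

# 1. Name resolution via DNS / /etc/hosts: should print the node's IP
getent hosts "$NODE"

# 2. Basic reachability: send a single ping packet (warn instead of
#    aborting, since ping may be restricted in some environments)
ping -c 1 "$NODE" || echo "WARN: $NODE is unreachable"
```

If resolution prints a wrong or stale IP, fix the corresponding entry in `/etc/hosts` (or your DNS zone) on every node in the cluster.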

2. Too few TaskTracker HTTP threads

Description:

Check the value of the tasktracker.http.threads property in mapred-site.xml. If it is low (the default is 40), raise it to somewhere around 100 so reducers can fetch map output without being starved of server threads.
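In mapred-site.xml, that setting looks like the following fragment (100 is a starting point, not a tuned value; adjust for your cluster size):

```xml
<!-- mapred-site.xml: number of HTTP server threads the TaskTracker
     uses to serve map output to reducers (default is 40) -->
<property>
  <name>tasktracker.http.threads</name>
  <value>100</value>
</property>
```

Restart the TaskTracker daemons after changing this value for it to take effect.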

