Sunday, November 27, 2011

Hadoop Cluster Setup


This document describes how to install, configure and manage non-trivial Hadoop clusters ranging from a few nodes to extremely large clusters with thousands of nodes.
To play with Hadoop, you may first want to install Hadoop on a single machine (see Hadoop Quick Start).

Pre-requisites

  1. Make sure all requisite software is installed on all nodes in your cluster.
  2. Get the Hadoop software.

Installation

Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster.
Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. These are the masters. The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves.
The root of the distribution is referred to as HADOOP_HOME. All machines in the cluster usually have the same HADOOP_HOME path.

Configuration

The following sections describe how to configure a Hadoop cluster.

Configuration Files

Hadoop configuration is driven by two types of important configuration files:
  1. Read-only default configuration - src/core/core-default.xml, src/hdfs/hdfs-default.xml and src/mapred/mapred-default.xml.
  2. Site-specific configuration - conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml.
To learn more about how the Hadoop framework is controlled by these configuration files, see the Hadoop configuration documentation.
Additionally, you can control the Hadoop scripts found in the bin/ directory of the distribution by setting site-specific values via conf/hadoop-env.sh.

Site Configuration

To configure the Hadoop cluster you will need to configure the environment in which the Hadoop daemons execute as well as the configuration parameters for the Hadoop daemons.
The Hadoop daemons are NameNode/DataNode and JobTracker/TaskTracker.

Configuring the Environment of the Hadoop Daemons

Administrators should use the conf/hadoop-env.sh script to do site-specific customization of the Hadoop daemons' process environment.
At the very least you should specify the JAVA_HOME so that it is correctly defined on each remote node.
Administrators can configure individual daemons using the configuration options HADOOP_*_OPTS. The available options are shown in the table below.
Daemon               Configure Options
NameNode             HADOOP_NAMENODE_OPTS
DataNode             HADOOP_DATANODE_OPTS
SecondaryNameNode    HADOOP_SECONDARYNAMENODE_OPTS
JobTracker           HADOOP_JOBTRACKER_OPTS
TaskTracker          HADOOP_TASKTRACKER_OPTS
For example, to configure the NameNode to use parallel GC, the following statement should be added in hadoop-env.sh:
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC ${HADOOP_NAMENODE_OPTS}" 
Other useful configuration parameters that you can customize include:
  • HADOOP_LOG_DIR - The directory where the daemons' log files are stored. They are automatically created if they don't exist.
  • HADOOP_HEAPSIZE - The maximum heap size to use, in MB, e.g. 1000. This is used to configure the heap size for the Hadoop daemons. By default, the value is 1000 MB.
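For example, a minimal set of site-specific additions to conf/hadoop-env.sh might look like the following sketch; the JVM and log paths are placeholders and should match what is actually installed on your nodes:

# Placeholder JDK location; point this at the Java installation present on each node.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
# Keep daemon logs outside the distribution directory (hypothetical path).
export HADOOP_LOG_DIR=/var/log/hadoop
# Heap size for the Hadoop daemons, in MB (the default is 1000).
export HADOOP_HEAPSIZE=2000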

Configuring the Hadoop Daemons

This section deals with important parameters to be specified in the following:
conf/core-site.xml:
  • fs.default.name
    Value: URI of the NameNode.
    Notes: hdfs://hostname/
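As an illustration, a minimal conf/core-site.xml might look like the following sketch; the hostname namenode.example.com and port 9000 are placeholders for your own NameNode address:

<?xml version="1.0"?>
<configuration>
  <!-- URI of the NameNode (placeholder host and port). -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000/</value>
  </property>
</configuration>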

conf/hdfs-site.xml:
  • dfs.name.dir
    Value: Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.
    Notes: If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.
  • dfs.data.dir
    Value: Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.
    Notes: If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.
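For example, a conf/hdfs-site.xml along these lines could be used; the directory paths below are hypothetical and should point at local disks on each node:

<?xml version="1.0"?>
<configuration>
  <!-- Two NameNode directories, so the name table is replicated for redundancy (placeholder paths). -->
  <property>
    <name>dfs.name.dir</name>
    <value>/srv/hadoop/name1,/srv/hadoop/name2</value>
  </property>
  <!-- DataNode block storage spread across two local devices (placeholder paths). -->
  <property>
    <name>dfs.data.dir</name>
    <value>/srv/hadoop/data1,/srv/hadoop/data2</value>
  </property>
</configuration>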

conf/mapred-site.xml:
  • mapred.job.tracker
    Value: Host or IP and port of the JobTracker.
    Notes: host:port pair.
  • mapred.system.dir
    Value: Path on HDFS where the Map/Reduce framework stores system files, e.g. /hadoop/mapred/system/.
    Notes: This is in the default filesystem (HDFS) and must be accessible from both the server and client machines.
  • mapred.local.dir
    Value: Comma-separated list of paths on the local filesystem where temporary Map/Reduce data is written.
    Notes: Multiple paths help spread disk i/o.
  • mapred.tasktracker.{map|reduce}.tasks.maximum
    Value: The maximum number of Map/Reduce tasks run simultaneously on a given TaskTracker, individually.
    Notes: Defaults to 2 (2 maps and 2 reduces), but vary it depending on your hardware.
  • dfs.hosts/dfs.hosts.exclude
    Value: List of permitted/excluded DataNodes.
    Notes: If necessary, use these files to control the list of allowable DataNodes.
  • mapred.hosts/mapred.hosts.exclude
    Value: List of permitted/excluded TaskTrackers.
    Notes: If necessary, use these files to control the list of allowable TaskTrackers.
  • mapred.queue.names
    Value: Comma-separated list of queues to which jobs can be submitted.
    Notes: The Map/Reduce system always supports at least one queue with the name default. Hence, this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, like the Capacity Scheduler, support multiple queues. If such a scheduler is being used, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue using the property mapred.job.queue.name in the job configuration. There could be a separate configuration file for configuring properties of these queues that is managed by the scheduler. Refer to the documentation of the scheduler for more information.
  • mapred.acls.enabled
    Value: Specifies whether ACLs are supported for controlling job submission and administration.
    Notes: If true, ACLs are checked while submitting and administering jobs. ACLs can be specified using configuration parameters of the form mapred.queue.queue-name.acl-name, described below.
  • mapred.queue.queue-name.acl-submit-job
    Value: List of users and groups that can submit jobs to the specified queue-name.
    Notes: The lists of users and groups are both comma-separated lists of names, and the two lists are separated by a blank. Example: user1,user2 group1,group2. If you wish to define only a list of groups, provide a blank at the beginning of the value.
  • mapred.queue.queue-name.acl-administer-job
    Value: List of users and groups that can change the priority of, or kill, jobs that have been submitted to the specified queue-name.
    Notes: The lists of users and groups are both comma-separated lists of names, and the two lists are separated by a blank. Example: user1,user2 group1,group2. If you wish to define only a list of groups, provide a blank at the beginning of the value. Note that the owner of a job can always change the priority of, or kill, his/her own job, irrespective of the ACLs.
Typically all the above parameters are marked as final to ensure that they cannot be overridden by user applications.
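As a sketch, the corresponding conf/mapred-site.xml entries might look like this, with <final>true</final> marking values that user applications cannot override; the hostname jobtracker.example.com and the local paths are placeholders:

<?xml version="1.0"?>
<configuration>
  <!-- host:port of the JobTracker (placeholder hostname). -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:9001</value>
    <final>true</final>
  </property>
  <!-- Local scratch space for temporary Map/Reduce data (placeholder paths). -->
  <property>
    <name>mapred.local.dir</name>
    <value>/srv/hadoop/mapred/local1,/srv/hadoop/mapred/local2</value>
    <final>true</final>
  </property>
  <!-- Per-TaskTracker task slots; vary these for your hardware. -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
</configuration>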
Real-World Cluster Configurations
This section lists some non-default configuration parameters which have been used to run the sort benchmark on very large clusters.
  • Some non-default configuration values used to run sort900, that is 9TB of data sorted on a cluster with 900 nodes:
    conf/hdfs-site.xml:
      dfs.block.size = 134217728 (HDFS blocksize of 128MB for large file-systems.)
      dfs.namenode.handler.count = 40 (More NameNode server threads to handle RPCs from a large number of DataNodes.)
    conf/mapred-site.xml:
      mapred.reduce.parallel.copies = 20 (Higher number of parallel copies run by reduces to fetch outputs from a very large number of maps.)
      mapred.child.java.opts = -Xmx512M (Larger heap-size for child jvms of maps/reduces.)
    conf/core-site.xml:
      fs.inmemory.size.mb = 200 (Larger amount of memory allocated for the in-memory file-system used to merge map-outputs at the reduces.)
      io.sort.factor = 100 (More streams merged at once while sorting files.)
      io.sort.mb = 200 (Higher memory-limit while sorting data.)
      io.file.buffer.size = 131072 (Size of read/write buffer used in SequenceFiles.)
  • Updates to some configuration values to run sort1400 and sort2000, that is 14TB of data sorted on 1400 nodes and 20TB of data sorted on 2000 nodes:
    conf/mapred-site.xml:
      mapred.job.tracker.handler.count = 60 (More JobTracker server threads to handle RPCs from a large number of TaskTrackers.)
      mapred.reduce.parallel.copies = 50
      tasktracker.http.threads = 50 (More worker threads for the TaskTracker's http server. The http server is used by reduces to fetch intermediate map-outputs.)
      mapred.child.java.opts = -Xmx1024M (Larger heap-size for child jvms of maps/reduces.)
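To show where such tuning values live, here is a sketch of how two of the sort900 settings above would appear in conf/hdfs-site.xml:

<?xml version="1.0"?>
<configuration>
  <!-- HDFS blocksize of 128MB for large file-systems. -->
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <!-- More NameNode server threads to handle RPCs from many DataNodes. -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>40</value>
  </property>
</configuration>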

Slaves

Typically you choose one machine in the cluster to act as the NameNode and one machine to act as the JobTracker, exclusively. The rest of the machines act as both a DataNode and TaskTracker and are referred to as slaves.
List all slave hostnames or IP addresses in your conf/slaves file, one per line.
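For example, a conf/slaves file for a three-slave cluster would contain nothing more than the hostnames, one per line (the names below are hypothetical):

slave01.example.com
slave02.example.com
slave03.example.com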

Logging

Hadoop uses Apache log4j via the Apache Commons Logging framework for logging. Edit the conf/log4j.properties file to customize the Hadoop daemons' logging configuration (log-formats and so on).
History Logging
The job history files are stored in the central location hadoop.job.history.location, which can also be on DFS; its default value is ${HADOOP_LOG_DIR}/history. The history web UI is accessible from the JobTracker web UI.
The history files are also logged to the user-specified directory hadoop.job.history.user.location, which defaults to the job output directory. The files are stored in "_logs/history/" in the specified directory. Hence, by default they will be in "mapred.output.dir/_logs/history/". Users can stop this logging by setting hadoop.job.history.user.location to the value none.
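As a sketch, these history locations can be set in conf/mapred-site.xml; the directory below is a placeholder (per the note above, the central location may also be a DFS path):

<?xml version="1.0"?>
<configuration>
  <!-- Central location for job history files (placeholder path). -->
  <property>
    <name>hadoop.job.history.location</name>
    <value>/var/log/hadoop/history</value>
  </property>
  <!-- Set to "none" to disable per-job history logging in the job output directory. -->
  <property>
    <name>hadoop.job.history.user.location</name>
    <value>none</value>
  </property>
</configuration>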
Users can view a summary of the history logs in a specified directory using the following command:
$ bin/hadoop job -history output-dir
This command will print job details, plus failed and killed tip (task-in-progress) details.
More details about the job, such as successful tasks and the task attempts made for each task, can be viewed using the following command:
$ bin/hadoop job -history all output-dir 
Once all the necessary configuration is complete, distribute the files to the HADOOP_CONF_DIR directory on all the machines, typically ${HADOOP_HOME}/conf.
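One common way to distribute the configuration (a sketch, assuming passwordless ssh and the hypothetical slave hostname slave01.example.com) is to rsync the conf directory from the master to each slave:

$ rsync -av ${HADOOP_HOME}/conf/ slave01.example.com:${HADOOP_HOME}/conf/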

Cluster Restartability

Map/Reduce

The JobTracker can recover running jobs on restart if mapred.jobtracker.restart.recover is set to true and JobHistory logging is enabled. Also, mapred.jobtracker.job.history.block.size should be set to an optimal value to dump job history to disk as soon as possible; the typical value is 3145728 (3MB).
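A sketch of the corresponding conf/mapred-site.xml entries:

<?xml version="1.0"?>
<configuration>
  <!-- Recover running jobs when the JobTracker restarts. -->
  <property>
    <name>mapred.jobtracker.restart.recover</name>
    <value>true</value>
  </property>
  <!-- Dump job history to disk roughly every 3MB. -->
  <property>
    <name>mapred.jobtracker.job.history.block.size</name>
    <value>3145728</value>
  </property>
</configuration>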

Hadoop Rack Awareness

The HDFS and the Map/Reduce components are rack-aware.
The NameNode and the JobTracker obtain the rack id of the slaves in the cluster by invoking an API resolve in an administrator-configured module. The API resolves the slave's DNS name (or IP address) to a rack id. Which module to use can be configured using the configuration item topology.node.switch.mapping.impl. The default implementation runs a script/command configured using topology.script.file.name. If topology.script.file.name is not set, the rack id /default-rack is returned for any passed IP address. An additional configuration in the Map/Reduce part is mapred.cache.task.levels, which determines the number of levels (in the network topology) of caches. So, for example, if it has the default value of 2, two levels of caches will be constructed: one for hosts (host -> task mapping) and another for racks (rack -> task mapping).
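For example, rack awareness could be enabled with a sketch like the following in conf/core-site.xml; the path /etc/hadoop/topology.sh is a placeholder for an administrator-supplied script that maps a slave's DNS name or IP address to a rack id:

<?xml version="1.0"?>
<configuration>
  <!-- Script used by the default topology.node.switch.mapping.impl to resolve rack ids (placeholder path). -->
  <property>
    <name>topology.script.file.name</name>
    <value>/etc/hadoop/topology.sh</value>
  </property>
</configuration>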

Hadoop Startup

To start a Hadoop cluster you will need to start both the HDFS and Map/Reduce cluster.
Format a new distributed filesystem:
$ bin/hadoop namenode -format
Start the HDFS with the following command, run on the designated NameNode:
$ bin/start-dfs.sh
The bin/start-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all the listed slaves.
Start Map-Reduce with the following command, run on the designated JobTracker:
$ bin/start-mapred.sh
The bin/start-mapred.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on all the listed slaves.

Hadoop Shutdown

Stop HDFS with the following command, run on the designated NameNode:
$ bin/stop-dfs.sh
The bin/stop-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and stops the DataNode daemon on all the listed slaves.
Stop Map/Reduce with the following command, run on the designated JobTracker:
$ bin/stop-mapred.sh 
The bin/stop-mapred.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and stops the TaskTracker daemon on all the listed slaves.
