Sunday, November 27, 2011

Quick install HBase in “pseudo distributed” mode and connect from Java


On first reading of the HBase documentation, setting up pseudo distributed mode sounds very simple. The problem is that there are a lot of gotchas, which can make life very difficult indeed. Consequently, many people follow a twisted journey to their final destination, and when they finally get there, aren’t sure which of the measures they took were needed, and which were not. This is reflected by a degree of misinformation on the Web, so I will try to present here a reasonably minimal way of getting up and running (that is not to say that every step I take is absolutely necessary, but I’ll mention where I’m not sure).
Step 1: Check your IP setup
I believe this is one of the main causes of the weirdness that can happen. So, if you’re on Ubuntu, check your /etc/hosts file. If you see something like:
127.0.0.1 localhost
127.0.1.1 <server fqn> <server name, as in /etc/hostname>

get rid of the second line, and change to
127.0.0.1 localhost
<server ip> <server fqn> <server name, as in /etc/hostname>

e.g.
127.0.0.1 localhost
23.201.99.100 hbase.mycompany.com hbase

If you don’t do this, the region servers will resolve their addresses to 127.0.1.1. This information will be stored inside the ZooKeeper instance that HBase runs (the directory and lock manager used by the system to configure and synchronize a running HBase cluster). When manipulating remote HBase data, client code libraries actually connect to ZooKeeper to find the address of the region server maintaining the data. In this case, they will be given 127.0.1.1, which resolves to the client’s own machine. Duh!
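Incidentally, if you want to check programmatically what the machine’s hostname resolves to, a minimal Java sketch (the class name is mine) is:

import java.net.InetAddress;

// Prints the address the local hostname resolves to. If you see
// 127.0.1.1 here, the hosts file still needs fixing.
public class CheckHostResolution {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getLocalHost();
        System.out.println(addr.getHostName() + " -> " + addr.getHostAddress());
    }
}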
Step 2: Install Hadoop Packages
Hadoop is quite a big subject – hell, the book has over 500 pages. That’s why it is great that there is a company called Cloudera making pre-packaged distributions. So my recommendation here is to go with those packages. Perform the following steps, but check the important notes before proceeding:
a/ If you are on Debian, you need to modify your Apt repository so you can pick up the packages. In the instructions that follow, if you are running a recent Ubuntu distro like Karmic, then configure your cloudera.list to pick up the packages for “jaunty-testing”. Make sure you choose hadoop-0.20 or better.
Instructions http://archive.cloudera.com/docs/_apt.html
b/ Install the packages setting up hadoop in standalone mode
Instructions http://archive.cloudera.com/docs/_installing_hadoop_standalone_mode.html
c/ Install the package that sets up the pseudo distributed configuration
Instructions http://archive.cloudera.com/docs/cdh2-pseudo-distributed.html
IMPORTANT NOTES
You should begin the above process with your system in a completely hadoop-free state to be sure the steps will work correctly. For example, if you have an entry for a hadoop user in your /etc/passwd file that is different from the one the config package wants to install, installation of the config package can fail. Furthermore, old items may have the wrong permissions, which may cause later steps to fail. To find everything on your system you need to remove, do:

cd /
find -name "*hadoop*"

and

cd /etc
grep -R hadoop
Step 3: Prepare user “hadoop”
We are going to make it possible to log in as user hadoop (or rather, do a sudo -i -u hadoop). This will make it easy to edit, for example, configuration files while keeping hadoop as their owner. We are also going to run HBase as user hadoop.
Change the following entry in /etc/passwd

hadoop:x:104:112:Hadoop User,,,:/var/run/hadoop-0.20:/bin/false

to

hadoop:x:104:112:Hadoop User,,,:/var/run/hadoop-0.20:/bin/bash

There is a lot of talk on the Web about setting up ssh for the hadoop user, so that hadoop can ssh to different nodes without specifying a password. I’m not sure that this is still necessary, but the weight of recommendation (including here http://hadoop.apache.org/common/docs/current/quickstart.html) persuades me to do it anyway. So next:

# sudo -i -u hadoop
hadoop$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
hadoop$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
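To confirm passwordless login works, ssh to localhost as the hadoop user; it should drop you into a shell without asking for a password (you may have to accept the host key the first time):

hadoop$ ssh localhost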
Step 4: Configure hadoop
Open /etc/hadoop/conf/hadoop-env.sh and make sure JAVA_HOME is correctly set, e.g.

export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.15
Step 5: Test hadoop
Check out the Web-based admin interfaces, e.g. the NameNode UI at
http://hbase.mycompany.com:50070/
and the JobTracker UI at
http://hbase.mycompany.com:50030/
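If the Web UIs are up, you can also sanity-check HDFS itself from the command line, e.g.:

# sudo -i -u hadoop
hadoop$ hadoop fs -ls /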
Step 6: Install the HBase package
You need to download and install the latest version from http://www.apache.org/dyn/closer.cgi/hadoop/hbase/. Proceed as root as follows:

# cd ~
# wget http://apache.mirror.anlx.net/hadoop/hbase/hbase-0.20.3/hbase-0.20.3.tar.gz
# tar -xzf hbase-0.20.3.tar.gz
# mv hbase-0.20.3 /usr/lib
# cd /etc/alternatives
# ln -s /usr/lib/hbase-0.20.3 hbase-lib
# cd /usr/lib
# ln -s /etc/alternatives/hbase-lib hbase
# chown -R hadoop:hadoop hbase-0.20.3
Step 7: Configure HBase
Now, log in as hadoop and go to the HBase conf directory

# sudo -i -u hadoop
hadoop$ cd /usr/lib/hbase/conf

a/ Then set JAVA_HOME in hbase-env.sh, just like you did for hadoop-env.sh
b/ Inside hbase-site.xml, configure hbase.rootdir and hbase.master. The result should look something like below, notes following:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>23.201.99.100:60000</value>
    <description>The host and port that the HBase master runs at.
    </description>
  </property>
</configuration>

NOTES
1/ hbase.rootdir must specify exactly the same host and port as fs.default.name inside /etc/hadoop/conf/core-site.xml (see the snippet after these notes). Basically, it tells HBase where to find the distributed file system.
2/ hbase.master specifies the interface and port that the HBase master will listen on (the address is also registered in the ZooKeeper instance HBase starts, more later). It must be externally addressable for clients to connect. This is a good point to double-check the IP setup step at the beginning of this post.
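For reference (note 1 above), the matching entry in /etc/hadoop/conf/core-site.xml will look something like the following (your port may differ; the point is only that the two values agree):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>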
Step 8: Start up HBase
If you have not already started hadoop, then start it e.g. as described by cloudera:

for service in /etc/init.d/hadoop-0.20-*
do
sudo $service start
done

Next, start HBase as the hadoop user:

hadoop$ /usr/lib/hbase/bin/start-hbase.sh
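If everything came up, jps run as root should list both the Hadoop and HBase daemons. With HBase managing its own ZooKeeper (the default in pseudo distributed mode), expect something like this (your PIDs will differ):

# jps
18023 NameNode
18125 DataNode
18229 SecondaryNameNode
18321 JobTracker
18419 TaskTracker
19042 HQuorumPeer
19113 HMaster
19201 HRegionServer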
Step 9: Check HBase is up and running
Open up the HBase Web UI e.g. http://hbase.mycompany.com:60010
Step 10: Open HBase shell, create a table and column family
You need to log in to the HBase shell and create the table and column family that will be used by the Java client example:

me$ /usr/lib/hbase/bin/hbase shell
hbase(main):001:0> create "myLittleHBaseTable", "myLittleFamily"
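While you are in the shell, you can sanity-check the new table with a put and a scan; the row key, qualifier and value below are arbitrary:

hbase(main):002:0> put "myLittleHBaseTable", "myLittleRow", "myLittleFamily:someQualifier", "Some Value"
hbase(main):003:0> scan "myLittleHBaseTable"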
Step 11: Create Java client project
Create your sample Java client project with code, as described for example at http://hadoop.apache.org/hbase/docs/r0.20.3/api/index.html (a minimal sketch follows the configuration below).
Next, you need to add a special hbase-site.xml file to its classpath. It should specify the ZooKeeper quorum (the set of running ZooKeeper instances, in this case just your hbase server). The client contacts ZooKeeper to find out where the master, region servers etc. are (ZooKeeper acts as a directory for clients, but it also acts as a definitive description of the HBase cluster and a synchronization system for its own nodes). Based upon the foregoing, the contents will look something like:

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase.mycompany.com</value>
    <description>Comma separated list of servers in the
    ZooKeeper quorum that clients should connect to.
    </description>
  </property>
</configuration>
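For the client code itself, here is a minimal sketch along the lines of the example in the API docs linked above, using the table and column family created in step 10 (the class name, row key, qualifier and value are mine):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class MyLittleHBaseClient {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath, which points the
        // client at the ZooKeeper quorum configured above.
        HBaseConfiguration config = new HBaseConfiguration();

        // The table and column family created in step 10.
        HTable table = new HTable(config, "myLittleHBaseTable");

        // Write a value...
        Put p = new Put(Bytes.toBytes("myLittleRow"));
        p.add(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"),
              Bytes.toBytes("Some Value"));
        table.put(p);

        // ...and read it back.
        Get g = new Get(Bytes.toBytes("myLittleRow"));
        Result r = table.get(g);
        byte[] value = r.getValue(Bytes.toBytes("myLittleFamily"),
                                  Bytes.toBytes("someQualifier"));
        System.out.println("GET: " + Bytes.toString(value));
    }
}

If the put and get succeed against the remote server, then the hosts file, ZooKeeper quorum and master configuration are working end to end.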
Step 12: Yeeeha. Build your Java client, and debug
Watch your Output window (e.g. in NetBeans) for a trace of what is hopefully happening… welcome to the world of HBase.
Source: http://ria101.wordpress.com/2010/01/28/setup-hbase-in-pseudo-distributed-mode-and-connect-java-client/
