Hadoop Quickstart: Use Whirr to automate standup of your distributed cluster on Rackspace

We have previously provided a Quickstart guide to standing up Rackspace cloud servers (and have one for Amazon servers as well). These are very low-cost ways of building reliable, production-ready capabilities for enterprise use (commercial and government). And Bryan Halfpap has provided a Quickstart guide which shows you how to build a Hadoop Cluster (leveraging Cloudera’s CDH3). Using Bryan’s guide you can have a Hadoop Cluster up and running in under 20 minutes.

With this post we would like to provide some additional tips that build on those earlier posts. We will show you how to build clusters even faster using another widely used community tool, Apache Whirr.

What is Whirr? Apache Whirr is a set of libraries for running cloud services. Here is more from http://whirr.apache.org/ 

Whirr provides:

  • A cloud-neutral way to run services. You don’t have to worry about the idiosyncrasies of each provider.
  • A common service API. The details of provisioning are particular to the service.
  • Smart defaults for services. You can get a properly configured system running quickly, while still being able to override settings as needed.

And the great news is you can use Whirr as a command line tool for deploying clusters.
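
To give you a feel for where we are headed, the whole command-line workflow boils down to pointing Whirr at a properties file. Here is a quick sketch (the properties file itself is built a little further down; the list-cluster command is an optional extra for checking on running instances):

whirr launch-cluster --config hadoop.properties     # stand the cluster up
whirr list-cluster --config hadoop.properties       # check which instances are running
whirr destroy-cluster --config hadoop.properties    # tear it down when you are done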

If you follow the tips below you can use Whirr to quickly stand up distributed clusters. Our assumption in this guide is that you have stood up RedHat servers using our Rackspace tutorial. But if this is not the case, you should be able to easily modify the tips below to suit your situation.

SSH into your Rackspace server from a terminal window:

sudo ssh root@50.56.237.236

After logging in, it is always a good idea to make sure you have the latest packages. In Red Hat, type:

sudo yum upgrade

Now it is time to install Whirr. This is easy since you are running RedHat, which uses YUM, a package management application that makes software installation simple. Type:

yum install whirr

Your installation will be complete in under a minute.
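
One caveat: a couple of readers have reported that yum install whirr fails because the package is not in their configured repositories. In that case you will likely need to add a repository that carries it first. The sketch below assumes you want Cloudera’s CDH3 packages; the exact repository RPM URL is an assumption on our part, so verify it against Cloudera’s CDH3 installation documentation for your RedHat version:

# Assumed CDH3 repository RPM for RHEL/CentOS 6 -- verify the URL before running
rpm -ivh http://archive.cloudera.com/redhat/6/x86_64/cdh/cdh3-repository-1.0-1.noarch.rpm
yum install whirr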

You will now need to generate a keypair for use with Whirr. This enables secure, password-less communication with the Whirr cluster. To do that, enter the following command:

ssh-keygen -t rsa -P ''

You will see:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):

Just hit “enter”.

You will see something like:

Created directory ‘/root/.ssh’.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c6:31:f7:f5:97:e4:8c:b3:2a:f4:0d:a0:93:e4:c1:06 root@RedHat6-1-Hadoop-Test
The key’s randomart image is:
[ASCII randomart image for the key; yours will differ]

Now you must define your Whirr cluster. You do that by creating a properties file. For simplicity, you will name it hadoop.properties. You will need your Rackspace username and API key to fill out the Whirr properties file. Your API key is found on your account page under “API Access”.

You can create the properties file many ways. Here is how to do it in nano:

nano hadoop.properties

Now enter the following info in that file, substituting your own login and API key for what you see below:

whirr.cluster-name=myhadoopcluster
whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
whirr.provider=cloudservers-us
whirr.identity=enteryourloginidfromrackspace
whirr.credential=[youuseyourownapi]
whirr.private-key-file=/root/.ssh/id_rsa
whirr.public-key-file=/root/.ssh/id_rsa.pub
whirr.cluster-user=newusers
whirr.hadoop-install-function=install_cdh_hadoop
whirr.hadoop-configure-function=configure_cdh_hadoop
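
The instance-templates line is also where you control the size of your cluster. For example (a sketch, not part of the file above), a four-node cluster with one master and three worker nodes would use:

whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,3 hadoop-datanode+hadoop-tasktracker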

Now to launch a cluster, type:

$ whirr launch-cluster --config hadoop.properties

This will take a few moments to run. As it runs you should see messages like:

Bootstrapping cluster
Configuring template
Starting 1 node(s) with roles [hadoop-datanode]
Configuring template
Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]

As things start up, servers are being built automatically. Keep an eye on your e-mail; you will receive notices as each server is stood up. Remember, this is costing you money. When you finish using your clusters you will want to terminate them. You can do that through Whirr or by simply deleting the servers from your Rackspace account and control panel.

Note the information being printed in the terminal window about the instances being stood up. As you skim it you will notice a couple of URLs that give you a web UI into the namenode and jobtracker. For example, mine are:

Namenode web UI available at http://50.56.211.206:50070

Jobtracker web UI available at http://50.56.211.206:50030

You will also see that a site file was created for you at:

/root/.whirr/myhadoopcluster/hadoop-site.xml
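
It is worth a quick peek inside that file to confirm the addresses Whirr wired up for you (optional):

cat ~/.whirr/myhadoopcluster/hadoop-site.xml    # shows the namenode and jobtracker addresses for the new cluster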

You need to update your local Hadoop configuration to use this file. Type the following commands:

cp -r /etc/hadoop-0.20/conf.empty /etc/hadoop-0.20/conf.whirr
rm -f /etc/hadoop-0.20/conf.whirr/*-site.xml
cp ~/.whirr/myhadoopcluster/hadoop-site.xml /etc/hadoop-0.20/conf.whirr
alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.whirr 50
alternatives --display hadoop-0.20-conf
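
The last command should list conf.whirr as the current choice. If it does not, you can point the alternative at it explicitly; this is standard alternatives syntax, offered here as an optional extra step rather than part of the original recipe:

alternatives --set hadoop-0.20-conf /etc/hadoop-0.20/conf.whirr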

A proxy script was created for you at:

/root/.whirr/myhadoopcluster/hadoop-proxy.sh

You should now start that proxy. It is there for security reasons: all traffic from the network where your client is running is proxied through the master node of the cluster using an SSH tunnel. Run the following command to launch the script:

~/.whirr/myhadoopcluster/hadoop-proxy.sh

If that doesn’t run, make sure you have the right permissions on the file:

chmod +rwx ~/.whirr/myhadoopcluster/hadoop-proxy.sh

Then try again.
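
Keep in mind that the proxy has to stay running for as long as you are talking to the cluster. One convenient option (a sketch, not a required step) is to run it in the background and capture its output to a log:

nohup ~/.whirr/myhadoopcluster/hadoop-proxy.sh > /tmp/hadoop-proxy.log 2>&1 &    # check the log if HDFS commands start failing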

With the above you are now able to use your Hadoop Cluster.

Prove that by browsing HDFS:

hadoop fs -ls /
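
If you want a little more assurance that your datanode actually registered with the namenode, a quick optional check using a standard HDFS admin command is:

hadoop dfsadmin -report    # reports configured capacity and the number of live datanodes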

Now it is time to run a MapReduce job!  We are going to use one of the example programs provided in the Hadoop installation. The program is in the file hadoop-examples-*.jar. First, let’s review the list of options the program offers. Make sure HADOOP_HOME is set (export HADOOP_HOME=/usr/lib/hadoop, the same export used in the example further down), then see the options by entering:

hadoop jar $HADOOP_HOME/hadoop-examples-*.jar

You will see:

An example program must be given as the first argument.
Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
dbcount: An example job that count the pageview counts from a database.
grep: A map/reduce program that counts the matches of a regex in the input.
join: A job that effects a join over sorted, equally partitioned datasets
multifilewc: A job that counts words from several files.
pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
pi: A map/reduce program that estimates Pi using monte-carlo method.
randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
randomwriter: A map/reduce program that writes 10GB of random data per node.
secondarysort: An example defining a secondary sort to the reduce.
sleep: A job that sleeps at each map and reduce task.
sort: A map/reduce program that sorts the data written by the random writer.
sudoku: A sudoku solver.
teragen: Generate data for the terasort
terasort: Run the terasort
teravalidate: Checking results of terasort
wordcount: A map/reduce program that counts the words in the input files.

So let’s put this info to use. We will make a directory, put some data in it, and run the wordcount program:

$ export HADOOP_HOME=/usr/lib/hadoop
$ hadoop fs -mkdir input
$ hadoop fs -put $HADOOP_HOME/CHANGES.txt input
$ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount input output
$ hadoop fs -cat output/part-* | head
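
One small gotcha if you want to run the job a second time: Hadoop will refuse to overwrite an existing output directory. Remove it first (the -rmr flag is the recursive delete in this 0.20-era release), for example:

hadoop fs -rmr output    # clear the previous run’s output so wordcount can write it again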

Now you are off and running.

You now have a platform capable of scaling to very large jobs. And it runs CDH3, the most reliable, capable distribution of Hadoop and related technologies. Let the fun begin!

One final note: think about the lifecycle of your system. At some point you will need to spin it down and turn it off. To destroy the cluster gracefully using Whirr, enter this command:

whirr destroy-cluster --config hadoop.properties

Using the information above you can easily create and manage Hadoop Clusters on Rackspace. This is how we create our CDH Clusters. In future posts we will show you how to get data prepared for analysis and how to run some queries. We will also provide tips on how to use Cloudera’s free management tools and how to upgrade to Cloudera Enterprise when you are ready.


  • Abi Sach

    i tried that on rackspace cloud and yum install whirr dont work… what i am i missing ?

  • Molly

    Bob – I followed your instructions and they worked *almost* :)     I am no Linux Admin, so forgive me if these corrections are wrong…    (I am on CentOS 6) 
     
    In the Hadoop.Properties file, you need to set the “whirr.instance-templates” as follows:  
     
    whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
     
    NOT: whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+$
     
    For the SSH key pair, the command ssh-keygen -t rsa -P ”  didn’t work for me because it required a passphrase.  I’m no linux sysadmin, so there is probably a simple way to work around this, but I used 
    simply the command ssh-keygen and that worked for me.  
     
    Love reading your stuff – very helpful! 

  • dave_walker

    While Whirr’s free,open source and pretty nifty, it does have its shortcomings – specifically, when I had a close look at it, it only works with AWS and Rackspace (although, presumably it would work with other providers who have attempted to clone the AWS API).
     
    I had a chat at Cloud World Forum last week with an informative chap from http://www.dynamicops.com/ , who told me quite a lot about their Cloud Suite; they are putting a lot of effort into enabling it to manage deployments across a whole bunch of cloud service providers.
     
    Obviously, DynamicOps aren’t the only people working on this. The marketplace for multi-provider cloud management tools must be really kicking off…
