How-to: Create a Simple Hadoop Cluster with VirtualBox

Set up a CDH-based Hadoop cluster in less than an hour using VirtualBox and Cloudera Manager.

Thanks to Christian Javet for his permission to republish his blog post below!

I wanted to get familiar with the big data world, and decided to test Hadoop. Initially, I used Cloudera’s pre-built virtual machine with its full Apache Hadoop suite pre-configured (called Cloudera QuickStart VM), and gave it a try. It was a really interesting and informative experience. The QuickStart VM is fully functional and you can test many Hadoop services, even though it is running as a single-node cluster.

I wondered what it would take to install a small four-node cluster…

I did some research and found an excellent video on YouTube presenting a step-by-step explanation of how to set up a cluster with VMware and Cloudera. I adapted this tutorial to use VirtualBox instead, and this article describes the steps used.

Overview

High-level diagram of the VirtualBox VM cluster running Hadoop nodes

The overall approach is simple: we create a virtual machine and configure it with the parameters and settings required to act as a cluster node (especially the network settings). This reference virtual machine is then cloned as many times as there will be nodes in the Hadoop cluster. Only a limited set of changes is then needed to make each node operational (only the hostname and IP address need to be defined).

In this article, I created a four-node cluster. The first node, which will run most of the cluster services, requires more memory (8GB) than the other three nodes (2GB each). Overall we will allocate 14GB of memory, so ensure that the host machine has sufficient memory; otherwise your experience will suffer.

Preparation

The prerequisites for this tutorial are the latest version of VirtualBox (you can download it for free) and the CentOS 6.5 Linux distribution (you can download the CentOS x86_64 DVD iso image).

Base VM Image creation

VM creation

Create the reference virtual machine, with the following parameters:

  • Bridged network
  • Enough disk space (more than 40GB)
  • 2GB of RAM
  • A DVD drive pointing to the CentOS iso image

When you install CentOS, you can specify the ‘expert text’ option for a faster OS installation with a minimum set of packages.

Network Configuration

Perform changes in the following files to set up the network configuration that will allow all cluster nodes to interact.

/etc/resolv.conf
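The original snippet did not survive republication; a minimal sketch consistent with the article’s 10.0.1.x network (using the 10.0.1.1 gateway mentioned in the comments below as the DNS resolver, an assumption) would be:

```
# Example /etc/resolv.conf — adjust the nameserver to your own router/DNS
search example.com
nameserver 10.0.1.1
```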

 

/etc/sysconfig/network
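The original contents were lost; a plausible reconstruction for the base image (the hostname here is a placeholder that will be overwritten on each clone) is:

```
NETWORKING=yes
HOSTNAME=base.example.com
GATEWAY=10.0.1.1
```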

 

/etc/sysconfig/network-scripts/ifcfg-eth0
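The snippet was lost in republication; a typical static-IP configuration for this setup would look like the following (the base-image IPADDR is an assumption, since each clone redefines it later):

```
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
```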

 

/etc/selinux/config
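The snippet was lost; Cloudera’s installation guides of this era recommended disabling SELinux, so the relevant change was most likely:

```
SELINUX=disabled
```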

 

/etc/yum/pluginconf.d/fastestmirror.conf
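The snippet was lost; the usual change to this file is to disable the fastestmirror yum plugin:

```
enabled=0
```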

 

Initialize the network by restarting the network services:
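The command was lost in republication; on CentOS 6 this would be:

```shell
# Restart networking so the configuration files above take effect
service network restart
```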

 

Installation of VM Additions

You should now update all the packages and reboot the virtual machine:
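The commands were lost in republication; on CentOS 6 they would be:

```shell
# Update all installed packages, then reboot the VM
yum -y update
reboot
```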

 

In the VirtualBox menu, select Devices, and then Insert Guest Additions CD image…. This inserts a DVD with the iso image of the guest additions into the VM’s DVD drive; mount it with the following commands to access its contents:
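The commands were lost in republication, but a reader comment below preserves them:

```shell
# Create a mount point and mount the guest-additions DVD read-only
mkdir /media/VBGuest
mount -r /dev/cdrom /media/VBGuest
```

Note that, per another comment below, Perl must be installed before running the guest-additions installer, or it fails.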

 

Follow instructions from this web page.

Setup Cluster Hosts

Define all the hosts in the /etc/hosts file to simplify access, in case you do not have a DNS server where they can be defined. Add more entries if you want more nodes in your cluster.

/etc/hosts
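The entries were lost in republication; hadoop1’s address (10.0.1.201) appears later in the article, and the remaining addresses are assumed to follow sequentially:

```
10.0.1.201   hadoop1.example.com   hadoop1
10.0.1.202   hadoop2.example.com   hadoop2
10.0.1.203   hadoop3.example.com   hadoop3
10.0.1.204   hadoop4.example.com   hadoop4
```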

 

Setup SSH

To further simplify access between hosts, install SSH, generate SSH keys, and add them to the list of authorized keys:
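The commands were lost in republication; a sketch of the usual sequence (the openssh-clients install is confirmed by a reader comment below) would be:

```shell
# Install the SSH client tools
yum -y install openssh-clients
# Generate an RSA key pair (accept the default path, empty passphrase)
ssh-keygen -t rsa
# Authorize the key for password-less login; the clones will inherit it
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
```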

 

Modify the SSH configuration file: uncomment the following line and change its value to no. This suppresses the host-key confirmation prompt when connecting to a host over SSH.

/etc/ssh/ssh_config
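The line itself was lost in republication; the standard OpenSSH client option that controls this prompt is:

```
StrictHostKeyChecking no
```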

 

Shutdown and Clone

At this stage, shutdown the system with the following command:
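The command was lost in republication; on CentOS 6 it would be:

```shell
# Halt the VM so it can be cloned
shutdown -h now
```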

 

We will now create the server nodes that will be members of the cluster.

In VirtualBox, clone the base server using the ‘Linked Clone’ option, and name the nodes hadoop1, hadoop2, hadoop3, and hadoop4.

For the first node (hadoop1), change the memory setting to 8GB. Most of the roles will be installed on this node, so it is important that it has sufficient memory available.

Clones Customization

For every node, proceed with the following operations:

Modify the hostname of the server by changing the following line in the file:

/etc/sysconfig/network
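The line was lost in republication; following the article’s [n] notation, it would be:

```
HOSTNAME=hadoop[n].example.com
```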

 

Where [n] = 1..4 (up to the number of nodes)

Modify the fixed IP address of the server by changing the following line in the file:

/etc/sysconfig/network-scripts/ifcfg-eth0
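The line was lost in republication; the pattern below is inferred from hadoop1’s address (10.0.1.201) given later in the article:

```
IPADDR=10.0.1.20[n]
```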

 

Where [n] = 1..4 (up to the number of nodes)

Let’s restart the networking services and reboot the server so that the above changes take effect:
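The commands were lost in republication; on CentOS 6 they would be:

```shell
# Apply the new hostname and IP address
service network restart
reboot
```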

 

At this stage, we have four running virtual machines with CentOS correctly configured.

Four Virtual Machines running on VirtualBox, ready to be setup in the Cloudera cluster.

Install Cloudera Manager on hadoop1

Download and run the Cloudera Manager Installer, which will greatly simplify the rest of the installation and setup process.
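The commands were lost in republication; at the time this article was written (the CDH 4 / Cloudera Manager 4 era, judging by the docs cited in the comments below), they were likely similar to the following. The exact download URL is an assumption and may have changed:

```shell
# Fetch the Cloudera Manager installer, make it executable, and run it
wget http://archive.cloudera.com/cm4/installer/latest/cloudera-manager-installer.bin
chmod u+x cloudera-manager-installer.bin
./cloudera-manager-installer.bin
```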

 

Use a web browser and connect to http://hadoop1.example.com:7180 (or http://10.0.1.201:7180 if you have not added the hostnames into a DNS or hosts file).

To continue the installation, you will have to select the Cloudera free license version. You will then define which nodes will be used in the cluster: enter all the nodes you defined in the previous steps (e.g. hadoop1.example.com) separated by spaces, and click the “Search” button. You can then use the root password (or the SSH keys you generated) to automate connectivity to the different nodes. Install all packages and services onto the first node.

Once this is done, you will select additional service components; just accept the defaults for everything. The installation will then continue to completion.

Using the Hadoop Cluster

Now that we have an operational Hadoop cluster, there are two main interfaces that you will use to operate the cluster: Cloudera Manager and Hue.

Cloudera Manager

Use a web browser and connect to http://hadoop1.example.com:7180 (or http://10.0.1.201:7180 if you have not added the hostnames into a DNS or hosts file).

Cloudera Manager homepage, presenting cluster health dashboards

Hue

Similarly to Cloudera Manager, you can access the Hue administration site at http://hadoop1.example.com:8888, where you can reach the different services you have installed on the cluster.

Hue interface, and here more specifically, an Impala saved queries window.

Conclusions

I have been able to create a small Hadoop cluster in probably less than an hour, largely thanks to the Cloudera Manager Installer, which reduces the installation to the simplest of operations. It is now possible to run the various examples installed on the cluster, as well as observe the interactions between the nodes. Comments and remarks are welcome!


17 Responses
  • Ofir Manor / January 29, 2014 / 6:02 AM

    Great post!
    Alternatively, users can investigate using lxc in their VirtualBox to create lightweight VMs. This reduces the memory requirements significantly – to the sum of all Java processes, with no operating-system overhead.
    I documented my experience here with a slightly different setup
    http://ofirm.wordpress.com/2014/01/05/creating-a-virtualized-fully-distributed-hadoop-cluster-using-linux-containers/

  • Christian Javet / February 02, 2014 / 12:14 PM

    Hi, thanks for the comment. I checked your post and I found it great and I will definitively look into it!

  • Pavel / March 03, 2014 / 8:45 AM

    Hi, Christian!

    I am trying to repeat your steps but I am failing at the beginning.
    Please tell us a bit more about the host gateway configuration.

    Your host gateway is configured to be 10.0.1.1.
    How do I get my host gateway IP address?
    If this host gateway is my physical computer, how can I configure it to be used as the gateway in the virtual cluster?
    If this host gateway is a special virtual machine – can you give me a link explaining how I must configure it?

  • Simon / March 26, 2014 / 9:39 AM

    Pavel – please read up about setting up bridged mode on VirtualBox. The IP addresses you will be using should be in the range of the actual network your VirtualBox host is on, and you should specify the same gateway as your router that goes to the internet, so you can get all those packages.

    Great tutorial. A very minor point would be that you need to pull off the ssh private key onto the main host so that the web browser can pick it up.

  • Chris / May 27, 2014 / 9:17 PM

    I got a clean copy of rpmforge from: http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/

  • Chris / May 28, 2014 / 10:53 AM

    Thank you for an excellent tutorial. I was able to complete this and get the Cloud Manager and Hue running.

    I have a question. I followed and completed this when I was at home, on my home network. I shut all of the VMs off and went to bed. When I got to work, I fired them all up again and found that the Cloudera Manager and Hue URLs are not working. Why might this be? Does it have something to do with the host machine being on a different network?

    Thank you for any insight!
    Chris

  • Anthony Bisong / May 30, 2014 / 7:25 AM

    Chris, the reason the Cloudera Manager and Hue URLs are no longer working is that when you move from one network to another, the IPs change. You should go back to /etc/hosts and update it with your new network IPs.

    Anthony Bisong

  • John / June 03, 2014 / 11:56 AM

    Quick question. I am able to complete all the steps up to where I begin installing; in particular, it freezes after I install ZooKeeper and start on HBase. I’m guessing I allocated the memory incorrectly? I’m using a 2 GHz Core i7/8GB MacBook Pro (v 10.9.3). Any ideas?

  • Steve Morin / June 16, 2014 / 11:02 PM

    You can do the same thing on CentOS in a single step:

    http://stevemorin.blogspot.com/2014/06/setup-single-node-hadoop-2-cluster-with.html?view=classic

  • StephenC / June 17, 2014 / 4:00 PM

    Hey Christian,

    Followed all the steps but when I get to this section:
    Use a web browser and connect to http://hadoop1.example.com:7180 (or http://10.0.1.201:7180 if you have not added the hostnames into a DNS or hosts file).

    Nothing happens for me, I get this error on firefox:

    Firefox can’t establish a connection to the server at hadoop1.example.com:8888.

    Any idea why? Should this be done on the base node or Hadoop1 ?

  • Chris / June 29, 2014 / 7:31 PM

    Thank you, I was able to get this up and running.

    I am just starting to learn about these technologies and have recently been running Mappers and Reducers written in Java on Hadoop 0.20.2 in stand-alone mode on my Mac.

    I would like to graduate to this environment, but I’m not sure what to do… at all. Where do I go to learn how to use this awesome set up I now have : )

    With the stand alone, I worked on the command line to compile my java and then run hadoop with an input file and a jar.

    Thanks for any pointers, tips, resources!

  • LeeDog / July 06, 2014 / 10:11 PM

    Thanks, Christian / Cloudera. Great post.

    Just a couple points:

    Perl must be installed before the guest additions, or it fails. You have to dig into the mentioned log to determine this.

    VirtualBox 4.3 (on Mac OS X with 8GB) wouldn’t let me adjust the first node VM up to 8GB after constructing it initially from 2GB. Will construct base node with 8GB and scale the others back to 2GB.

    Thanks,
    Lee

  • Andrew Zhang / July 28, 2014 / 2:14 PM

    I failed to install at the end with the error: “cannot receive heartbeat from the agent”. I checked the Cloudera doc and it requires this command to work: “host -v -t A hostname”:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_11_1.html

    However, my image on VMware shows this:

    /root
    (root@cchadoop1)> host -v -t A hostname
    Trying “cchadoop1.hsd1.ca.comcast.net”
    Trying “cchadoop1”
    Host cchadoop1 not found: 3(NXDOMAIN)
    Received 102 bytes from 75.75.75.75#53 in 14 ms

    Did you actually check if this works?

    host -v -t A hostname

  • Cheriat / September 17, 2014 / 9:06 AM

    Thank you so much for the post. It’s very interesting.
    This is my first test with multiple nodes. I have problems when installing perl and openssh-clients. What worries me is the message that appears when I run the command yum -y install perl. I got the following error message: “Could not retrieve mirrorlist http://mirrorlist.centos.or … 14: PyCURL ERROR 6 – “Couldn’t resolve host ‘mirrorlist.centos.org’””. I checked all the files and they are all set as you indicated. Did I miss something? Thank you for your help.

  • Francisco / September 29, 2014 / 9:07 PM

    Hi.
    I followed all steps and I was able to install CDH 5. Now, I’m trying to run a job from Pentaho using MapReduce, but I’m having trouble with the JobTracker port. I have tried to configure it, adding some lines to the file mapred-site.xml:

    <property>
      <name>mapred.job.tracker</name>
      <value>myhost.com:8021</value>
    </property>

    Could you help me please.
    Thanks.

  • Francisco / September 29, 2014 / 9:17 PM

    Hi Cheriat.
    Your problem might be the IP address you are using for every node. For example, what I did was:
    - my IP address is 10.10.1.192 (the IP of the host)
    - then, my node 1 IP is 10.10.1.100 and so on with the number of nodes that you want.
    - node 2 10.10.1.101
    - node 3 10.10.1.102
    - etc.

    After restarting, you can check if your config is OK by pinging an external host: ping www.sun.com

    Hope this helps you.

  • prassan / October 10, 2014 / 10:44 PM

    Hi sir,

    I am trying to set up the multi-node cluster using the steps you have provided here, but I am confused by this step:
    mkdir /media/VBGuest
    mount -r /dev/cdrom /media/VBGuest
    While mounting the CD to the ‘/media/VBGuest’ location, it shows a “You must specify the filesystem type” error message. Can you please help me with this?

    One more doubt: what is the link in ‘Follow instructions from this web page’ after that? Why should I do that?

    Thank you,
    Prassanna
