Hadoop Graphing with Cacti

An important part of making sure Apache Hadoop works well for all users is developing and maintaining strong relationships with the folks who run Hadoop day in and day out. Edward Capriolo keeps About.com’s Hadoop cluster happy, and we frequently chew the fat with Ed on issues ranging from administrative best practices to monitoring. Ed’s been an invaluable resource as we beta test our distribution and chase down bugs before our official releases. Today’s article looks at some of Ed’s tricks for monitoring Hadoop with Cacti through JMX. -Christophe

You may have already read Philip’s Hadoop Metrics post, which provides a general overview of the Hadoop Metrics system. Here, we’ll examine Hadoop monitoring with Cacti through JMX.

What is Cacti?

Cacti is a front end for RRDtool. You can learn more about it on the Cacti website.

Cacti differs from Ganglia in that Cacti polls data using SNMP or shell scripts, while applications push data to Ganglia. The two tools overlap in features, but for shops with a large existing Cacti deployment, installing a second statistics system just for Hadoop may not be an option.

I have had great success over the years graphing everything from user CPU and NetApp disk reads to environmental sensors with Cacti. When I saw the information exposed in Hadoop's JMX, I started working on a set of Hadoop templates, hadoop-cacti-jtg. My goal was to provide a visual representation of all pertinent Hadoop JMX information.
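
To give a feel for what such a template polls under the hood, here is a minimal sketch of a script-style data source reading the NameNode's capacity figures over JMX. This is an illustration rather than the actual hadoop-cacti-jtg code: the host, port, and bean/attribute names are assumptions that vary with your Hadoop version and JMX setup, and Cacti expects scripts to print name:value pairs on standard output.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class NameNodeCapacityPoll {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint: the NameNode JVM must expose remote JMX
            // (com.sun.management.jmxremote.* options) on this host and port.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://namenode.example.com:8004/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
            try {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                // Bean and attribute names follow the 0.20-era NameNode and
                // may differ in your release.
                ObjectName fsState = new ObjectName(
                        "hadoop:service=NameNode,name=FSNamesystemState");
                for (String attr : new String[] {
                        "CapacityTotal", "CapacityUsed", "CapacityRemaining"}) {
                    // Cacti parses "name:value" pairs from stdout.
                    System.out.println(attr + ":" + mbsc.getAttribute(fsState, attr));
                }
            } finally {
                jmxc.close();
            }
        }
    }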

Administrators and developers can use these templates to better manage Hadoop and understand how it is working behind the scenes. Currently, the package has several predefined graphs covering the Hadoop NameNode and DataNode. Let’s walk through some of them.

Hadoop Capacity

Hadoop Capacity provides the same type of information you get from monitoring a standard disk. The top black line represents the maximum capacity. This is all the possible storage on all currently active DataNodes.

You also have the used and free capacity information stacked on top of each other. You can use these variables to trend your file system growth. In most cases your file system should be growing steadily, assuming you have batch processes running on a schedule. You may want to set a Cacti Threshold alarm at 80%. If the alarm goes off, it’s good practice to clean up unused files, or you can take the lazy way and order more DataNodes :)

[Figure: hadoop_name_cap.png, the NameNode capacity graph]

If you are wondering why the sum of used plus free does not equal capacity, remember that Hadoop reserves space on each DataNode (the dfs.datanode.du.reserved setting). Your disks' local file system may also have a reserve of its own. If a disk is devoted solely to serving HDFS, you can tune that file system reserve down with the following command:

tune2fs -m <percent> <device>
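
To make the arithmetic concrete, here is a small sketch (with purely illustrative numbers) of where the missing capacity goes; the three inputs correspond to the total, used, and remaining capacity figures reported over JMX.

    public class CapacityGap {
        // Whatever is neither DFS-used nor DFS-free is reserve: the per-DataNode
        // dfs.datanode.du.reserved setting plus the local file system's own
        // reserved blocks.
        static long reserve(long capacityTotal, long capacityUsed, long capacityRemaining) {
            return capacityTotal - capacityUsed - capacityRemaining;
        }

        public static void main(String[] args) {
            // Illustrative numbers only, in bytes (400, 250, and 110 GB).
            long total = 400L << 30, used = 250L << 30, remaining = 110L << 30;
            System.out.println("Reserve: "
                    + reserve(total, used, remaining) / (1L << 30) + " GB");
            // Prints "Reserve: 40 GB" -- the gap between capacity and used + free.
        }
    }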

Live vs. Dead Nodes

The Hadoop live and dead node information is available on the NameNode’s web interface. This stack-style graph shows both values together: blue represents the number of live DataNodes, while the red area shows the number of dead DataNodes. If you are using the Cacti Threshold system, you can have it set off a warning if the number of dead DataNodes exceeds 20%.
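
A minimal sketch of that threshold check, assuming you have obtained live and dead counts from the NameNode (over JMX or scraped from the web interface):

    public class DeadNodeAlarm {
        // Counts would come from the NameNode; on recent releases the
        // FSNamesystemState bean exposes NumLiveDataNodes/NumDeadDataNodes
        // (an assumption -- check your version), or scrape the web UI.
        static boolean tooManyDead(int live, int dead) {
            int total = live + dead;
            return total > 0 && 100.0 * dead / total > 20.0;
        }

        public static void main(String[] args) {
            System.out.println(tooManyDead(16, 5)); // 5 of 21 dead, ~23.8% -> true
        }
    }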

NameNode Stats

Hadoop JMX gives us a breakdown of file operations by type. This graph details the requests the NameNode is responding to. I ran several teragens and terasorts from the examples.jar. Below, we can see the process both creating and reading files from the system as the MapReduce jobs run.
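
If you want to see exactly which operation counters your NameNode exposes before building a graph, you can enumerate the activity bean's attributes. A sketch, assuming the 0.20-era bean name, which may differ in your release:

    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ListNameNodeOps {
        public static void main(String[] args) throws Exception {
            // Same hypothetical NameNode endpoint as above.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://namenode.example.com:8004/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
            try {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                ObjectName activity = new ObjectName(
                        "hadoop:service=NameNode,name=NameNodeActivity");
                // Print every attribute so you can pick the file-operation
                // counters worth graphing.
                for (MBeanAttributeInfo ai : mbsc.getMBeanInfo(activity).getAttributes()) {
                    System.out.println(ai.getName() + ":"
                            + mbsc.getAttribute(activity, ai.getName()));
                }
            } finally {
                jmxc.close();
            }
        }
    }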

DataNode Blocks

The DataNode statistics are similar to the NameNode statistics. This graph template can be applied to each DataNode, allowing you to track BlocksRead, BlocksWritten, BlocksRemoved, and BlocksReplicated. You can use this to find “hot spots” in your data: a hot spot is a piece of data that is frequently accessed. Increasing the replication factor of those files would help by spreading the reads across other DataNodes.
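
A sketch of pulling those block counters from a single DataNode. The endpoint is hypothetical, and both the bean name (which commonly embeds the host and port) and the attribute spellings vary by Hadoop version, hence the wildcard query and prefix filter:

    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class DataNodeBlockStats {
        public static void main(String[] args) throws Exception {
            // Hypothetical DataNode JMX endpoint.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://datanode1.example.com:8006/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
            try {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                // Wildcard match, since the activity bean's exact name
                // varies by version.
                for (ObjectName on : mbsc.queryNames(new ObjectName(
                        "hadoop:service=DataNode,name=DataNodeActivity*"), null)) {
                    for (MBeanAttributeInfo ai : mbsc.getMBeanInfo(on).getAttributes()) {
                        // Keep only the block counters (spelled blocks_read,
                        // BlocksRead, etc., depending on the release).
                        if (ai.getName().toLowerCase().startsWith("blocks")) {
                            System.out.println(ai.getName() + ":"
                                    + mbsc.getAttribute(on, ai.getName()));
                        }
                    }
                }
            } finally {
                jmxc.close();
            }
        }
    }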

Cacti Extras

Cacti offers many excellent out-of-the-box features. The following add-on features are helpful for monitoring Hadoop deployments. You can find these on the Cacti site:

  • Linux Full CPU Graph – Adds IOWait and other kernel states. The default CPU graph only shows nice, user, and system.
  • Linux Full Memory Graph – The standard memory graph does not show swap usage.
  • Disk Utilization Graph – You can graph bytes written to physical devices from SNMP. This is helpful for underlying disk utilization and maximum possible disk performance.
  • RealTime Plugin – Used to graph data at 5-second intervals. By default, Cacti polls at 1-minute or 5-minute intervals; faster polling is not very helpful for Hadoop, since the JMX counters are probably updating at 5-minute intervals, but it is generally useful for real-time reporting of other SNMP information.
  • THold Plugin – The Threshold plugin creates some overlap between Nagios and Cacti, and sends alarms when data exceeds high or low values.
  • Aggregate Plugin – The aggregate plugin is ideal for graphing clusters into a single graph. You may want to graph the “Open File Count” across several nodes – this plugin makes the graphing process fast and easy.

Where to go from Here

If you want to see the Hadoop Cacti templates in action, check out the Live Sample (user: hadoop, password: hadoop). To get started, simply follow the Installation Instructions. The project is licensed under the Apache License, Version 2.0. You can view the Source Repository. A Hudson system provides the latest build if you want to dig into the project source code.

4 Responses
  • Alexey Kovyrin / December 17, 2009 / 12:53 PM

    In your repo, the DataNode Cacti template for 0.20 contains the HBase RegionServer template, but not the DataNode one. Could you please replace it with the actual DataNode template?

    Thanks.

  • Jon Stevens / December 24, 2010 / 11:16 PM

    I’ve created a project called jmxtrans. This is effectively the missing connector between JMX and whatever logging / graphing package that you can dream up.

    jmxtrans is a very powerful tool that reads JSON configuration files describing servers/ports and JMX domains/attributes/types, then outputs the data in whatever format you want via special ‘Writer’ objects you can code up yourself. It does this with a very efficient engine design that will scale to querying thousands of machines.

    The core engine is pretty solid and there are writers for cacti/rrdtool, graphite and stdout.

    This is a far more complete solution for creating a visual representation of what your Hadoop cluster is doing.

  • Jilles / January 14, 2011 / 6:45 AM

    The live demo doesn’t work anymore:

    FATAL: Cannot connect to MySQL server on ‘localhost’. Please make sure you have specified a valid MySQL database name in ‘include/config.php’

  • David / January 23, 2013 / 8:03 PM

    Does anyone have an updated version for installing Cacti with the latest version of CDH4?
