Common Questions and Requests From Our Users

A few months ago we announced the Cloudera Distribution for Hadoop.  We’re happy to report that lots of people have started using our distribution, and our GetSatisfaction page (essentially a message board for our products) has seen plenty of good Hadoop questions and answers.  We thought it would be worthwhile to share some of the more interesting questions and requests we’ve seen from our users.

Question: How do I back up my name node metadata?
(Editor’s note: The information in this section pertains to CDH3 only. For CDH4, refer to the CDH4 High Availability Guide.) The name node (NN) stores all of the HDFS metadata, which includes file names, directory structures, and block locations.  This metadata is kept in memory for fast lookup, but the NN also maintains two on-disk data structures to ensure that it is persisted.  The first is a snapshot of the in-memory metadata (the fsimage file), and the second is an edit log (the edits file) of changes made since that snapshot was last taken.  The secondary name node (2NN) is in charge of fetching the snapshot and edit log from the NN and merging the two into a new snapshot, which is then sent back to the NN.  Once the NN receives the new snapshot, it clears its edit log, and the process repeats.  Take a look at our other blog post about multi-host secondary name nodes for more information about configuring the 2NN.
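
For orientation, here is roughly what that on-disk layout looks like on a 0.18/0.20-era NN (the exact file set can vary by version); the directory shown is one of the example dfs.name.dir paths used later in this post:

    $ ls /hdd1/hadoop/dfs/name/current
    VERSION  edits  fsimage  fstime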

There are two types of metadata backups you should implement, and each solves a different problem.  The first backup strategy ensures that no metadata is lost in the event of a NN failure, whether that failure be disks dying, power supplies catching fire, or some other unforeseen loss of the NN or its local data.  The way to avoid losing NN metadata in a crash is to configure dfs.name.dir to write to several local disks and at least one NFS mount.  dfs.name.dir takes a comma-separated list of filesystem paths, so an example configuration might look like “/hdd1/hadoop/dfs/name,/hdd2/hadoop/dfs/name,/mnt/nfs/hadoop/dfs/name”.  Storing the metadata on several local hard drives protects against a single drive failing; storing it on an NFS mount protects against the NN machine going down entirely.  With at least two local drives and one NFS mount holding the same NN metadata, you should be well protected against losing data in a crash.  To be fair, NFS isn’t the only way to mount a remote file system, but it’s the de facto standard in Hadoop deployments.
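
As a concrete sketch, that configuration would look something like the following in your Hadoop configuration file (hdfs-site.xml on 0.20-based releases, hadoop-site.xml on older ones); the paths are just the examples above:

    <property>
      <name>dfs.name.dir</name>
      <!-- two local disks plus one NFS mount, comma-separated with no spaces -->
      <value>/hdd1/hadoop/dfs/name,/hdd2/hadoop/dfs/name,/mnt/nfs/hadoop/dfs/name</value>
    </property>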

The second backup strategy allows recovery from accidental data loss due to user error (such as a careless hadoop fs -rmr /*).  As mentioned earlier, the 2NN fetches the NN’s metadata snapshot and edit log over HTTP.  Because those files are exposed over HTTP, you can perform hourly or nightly backups of the NN metadata yourself by fetching the following URLs:

  • Snapshot: http://nn.domain.com:50070/getimage?getimage=1
  • Edits log: http://nn.domain.com:50070/getimage?getedit=1
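
For example, a minimal backup script along these lines could be run from cron (nn.domain.com and the backup directory are placeholders, and curl is just one convenient way to fetch the URLs):

    #!/bin/sh
    # Grab the current NN snapshot and edit log over HTTP and keep a
    # timestamped copy of each.
    BACKUP_DIR=/backups/namenode/$(date +%Y%m%d-%H%M)
    mkdir -p "$BACKUP_DIR"
    curl -s -o "$BACKUP_DIR/fsimage" 'http://nn.domain.com:50070/getimage?getimage=1'
    curl -s -o "$BACKUP_DIR/edits"   'http://nn.domain.com:50070/getimage?getedit=1'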

Note that taking LVM snapshots of the volume holding dfs.name.dir is also a good way to back up the snapshot and edit log; a volume-level snapshot gives you a consistent, point-in-time copy of both files, which makes the backup more reliable.

To recover from a NN failure, or to restore from a backup, take the snapshot and edit log (either from the NFS mount or from your backup archive) and place them in the following locations:

  • Snapshot: dfs.name.dir/current/fsimage
  • Edits log: dfs.name.dir/current/edits

Note that the NN daemon should not be running when you change its snapshot and edits log.
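
For example, a restore might look roughly like this, assuming the example dfs.name.dir paths from above and a backup directory (/backups/namenode/latest is a placeholder):

    # With the NN daemon stopped, restore both files into each configured
    # dfs.name.dir directory.
    for d in /hdd1/hadoop/dfs/name /hdd2/hadoop/dfs/name /mnt/nfs/hadoop/dfs/name; do
      cp /backups/namenode/latest/fsimage "$d/current/fsimage"
      cp /backups/namenode/latest/edits   "$d/current/edits"
    done
    # Then start the NN again, e.g. with bin/hadoop-daemon.sh start namenode
    # or your distribution's init script.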

Bonus link: learn more about protecting data node metadata.

Request: I want Cloudera’s Distribution for Hadoop on Mac OS X.
We distribute the Cloudera Distribution for Hadoop as RPMs, DEBs, and AMIs.  RPMs are installable on Red Hat-based Linux distributions such as CentOS, RHEL, and Fedora.  DEBs are installable on Debian-based Linux distributions such as Debian and Ubuntu.  AMIs are machine images used for running our distribution in EC2.  We’ve had several people request packaging for Mac OS X.  Our near-term solution for getting our distribution onto Macs is to provide tarballs similar to the ones you download for vanilla Hadoop.  Perhaps at some later time we’ll provide self-installing DMGs, but they aren’t on the road map yet.

Question: My MapReduce jobs throw exceptions saying that I’ve run out of disk space, but I have plenty of disk space.  What’s up?
Most users who run into this have misconfigured hadoop.tmp.dir.  In vanilla Hadoop, hadoop.tmp.dir defaults to a directory under /tmp.  This is problematic because /tmp is often a small partition (or has a quota), which makes Hadoop think it’s out of disk space when it tries to write temporary data.  Be sure to point hadoop.tmp.dir at a directory with plenty of space; it’s fine for hadoop.tmp.dir to live on the same partitions (though not the same directories) as dfs.data.dir, dfs.name.dir, and so on.  As long as each of these parameters points to a different directory, Hadoop will manage disk space in a reasonable way.

Also worth noting: mapred.local.dir defaults to a directory under hadoop.tmp.dir, so you should configure it explicitly to write to multiple disks rather than relying on hadoop.tmp.dir; that spreads intermediate map output across spindles.
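
Putting those two suggestions together, the relevant properties might look something like this (the directory names are illustrative; on 0.20-based releases hadoop.tmp.dir goes in core-site.xml and mapred.local.dir in mapred-site.xml, while older releases put everything in hadoop-site.xml):

    <property>
      <name>hadoop.tmp.dir</name>
      <!-- somewhere roomy, not the often small or quota-limited /tmp -->
      <value>/hdd1/hadoop/tmp</value>
    </property>

    <property>
      <name>mapred.local.dir</name>
      <!-- spread intermediate map output across several disks -->
      <value>/hdd1/hadoop/mapred/local,/hdd2/hadoop/mapred/local</value>
    </property>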

Request: We want Pig 0.2.0!
Yes, we know :).  We’ve had several requests for Pig 0.2.0 to be included in our distribution.  It’s coming soon!  Stay tuned for an announcement.

Question: I have a big NFS server at my disposal; how can I use it for my Hadoop cluster?
One of Hadoop’s design goals was to avoid storing data in a single place such as an NFS server.  Hadoop can analyze and compute over lots of data because that data is distributed across many nodes, which allows computation to be scheduled close to the data (data locality) and allows files to be read from several data nodes in parallel.  If an NFS mount were used to store HDFS data, the NFS server would certainly become a bottleneck and slow down your entire cluster, because many task trackers would request data from it at once, probably lighting the NFS server on fire and bringing your job to a crawl.  Though an NFS server can’t really help you compute more data faster, it can make your operational tasks easier.  An ops engineer can use NFS to ensure that each node has the same Hadoop code, the same dependent files (e.g., if you’re using Hadoop Streaming), and consistent home directories.  As long as Hadoop is not reading or writing HDFS data to or from an NFS mount, the NFS server should not be a bottleneck.  An NFS server can also be used to collect Hadoop logs from all machines in a small cluster, though using Scribe for log collection is a much more scalable solution.
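
As a small, made-up illustration of that operational use (for Hadoop code and configuration only, never HDFS data), each node could mount a read-only export from the NFS server and point its environment at it; the export path and mount point below are placeholders:

    # /etc/fstab entry on each worker node
    nfs.domain.com:/exports/hadoop  /opt/hadoop-shared  nfs  ro,hard,intr  0 0

    # Then point each node at the shared install and configuration, e.g.:
    #   export HADOOP_HOME=/opt/hadoop-shared/hadoop
    #   export HADOOP_CONF_DIR=/opt/hadoop-shared/conf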

I have more questions!
Do you have more questions about Hadoop, Pig, Hive, or our Distribution for Hadoop?  There are several ways in which you can get your questions answered.  If you have a general Hadoop, Hive, or Pig question, then you are most likely to get the best response on the user lists: pig-user, hive-user, hadoop-user.  Cloudera engineers participate in these lists a lot as well.  If you have questions about Cloudera’s Distribution for Hadoop, then post a message to our GetSatisfaction message board.  We’re always happy to help out.

3 Responses
  • Tim / May 31, 2009 / 4:07 PM

    Hello,
    I was wondering where I can get the tarball from. I have been looking around and so far have only seen debian/rpm distributions.

    Thanks!
    Tim

  • alex / June 01, 2009 / 10:21 AM

    Hi Tim,

    Unfortunately we don’t provide a tarball of our distribution yet. We’re working on this, though, so stay tuned!

    Thanks for your patience.

    Alex
