Online Apache HBase Backups with CopyTable

CopyTable is a simple Apache HBase utility that, unsurprisingly, can be used for copying individual tables within an HBase cluster or from one HBase cluster to another. In this blog post, we’ll talk about what this tool is, why you would want to use it, how to use it, and some common configuration caveats.

Use cases:

CopyTable is at its core an Apache Hadoop MapReduce job that uses the standard HBase Scan read-path interface to read records from an individual table and write them to another table (possibly on a separate cluster) using the standard HBase Put write-path interface. It can be used for many purposes:

  • Internal copy of a table (Poor man’s snapshot)
  • Remote HBase instance backup
  • Incremental HBase table copies
  • Partial HBase table copies and HBase table schema changes

Assumptions and limitations:

The CopyTable tool has some basic assumptions and limitations. First, when used between two clusters, both must be online, and the target instance needs to have the target table present with the same column families defined as the source table.

Since the tool uses standard scans and puts, the target cluster doesn’t have to have the same number of nodes or regions. In fact, it can have different numbers of tables, different numbers of region servers, and could have completely different region split boundaries. Since we are copying entire tables, you can use performance optimization settings like larger scanner caching values for more efficiency. Using the put interface also means that copies can be made between clusters of different minor versions (0.90.4 -> 0.90.6, CDH3u3 -> CDH3u4) or versions that are wire compatible (0.92.1 -> 0.94.0).

Finally, HBase only provides row-level ACID guarantees; this means that while a CopyTable is running, rows may be newly inserted or updated, and these concurrent edits will either be completely included or completely excluded. While individual rows will be consistent, there are no guarantees about the consistency, causality, or order of puts across rows.

Internal copy of a table (Poor man’s snapshot)

HBase versions up to and including the most recent 0.94.x releases do not support table snapshotting. Despite HBase’s ACID limitations, CopyTable can be used as a naive snapshotting mechanism that makes a physical copy of a particular table.

Let’s say that we have a table, tableOrig, with column families cf1 and cf2. We want to copy all its data to tableCopy. We need to first create tableCopy with the same column families:
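For example, using the HBase shell:

```shell
srcCluster$ echo "create 'tableCopy', 'cf1', 'cf2'" | hbase shell
```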

We can then copy the data to the newly named table on the same HBase instance:
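CopyTable’s MapReduce driver class is org.apache.hadoop.hbase.mapreduce.CopyTable, so the copy looks like:

```shell
srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=tableCopy tableOrig
```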

This starts an MR job that will copy the data.

Remote HBase instance backup

Let’s say we want to copy data to another cluster. This could be a one-off backup, a periodic job, or a bootstrap for cross-cluster replication. In this example, we’ll have two separate clusters: srcCluster and dstCluster.

In this multi-cluster case, CopyTable is a push process: your source will be the HBase instance your current hbase-site.xml refers to, and the added arguments point to the destination cluster and table. This also assumes that all of the MR TaskTrackers can access all the HBase and ZK nodes in the destination cluster. This mechanism for configuration also means that you could run this as a job on a remote cluster by overriding the hbase/mr configs to use settings from any accessible remote cluster and specifying the ZK nodes in the destination cluster. This could be useful if you wanted to copy data from an HBase cluster with lower SLAs and didn’t want to run MR jobs on it directly.

You will use the --peer.adr setting to specify the destination cluster’s ZK ensemble (i.e., the cluster you are copying to). For this we need the ZK quorum’s IP and port as well as the HBase root ZK node for that HBase instance. Let’s say one of these machines is dstClusterZK (listed in hbase.zookeeper.quorum) and that we are using the default ZK client port 2181 (hbase.zookeeper.property.clientPort) and the default ZK znode parent /hbase (zookeeper.znode.parent). (Note: if you had two HBase instances using the same ZK ensemble, you’d need a different zookeeper.znode.parent for each cluster.)

Note that you can use the --new.name argument with --peer.adr to copy to a differently named table on the dstCluster.
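Putting this together, assuming the destination ensemble host is dstClusterZK on the default port and znode parent:

```shell
srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dstClusterZK:2181:/hbase --new.name=tableCopy tableOrig
```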

This will copy data from tableOrig on the srcCluster to the dstCluster’s tableCopy table.

Incremental HBase table copies

Once you have a copy of a table on a destination cluster, how do you copy new data that is later written to the source cluster? Naively, you could run the CopyTable job again and copy over the entire table. However, CopyTable provides a more efficient incremental copy mechanism that copies just the rows updated within a specified window of time from the srcCluster to the backup dstCluster. Thus, after the initial copy, you could have a periodic cron job that copies only the previous hour’s data from srcCluster to the dstCluster.

This is done by specifying the --starttime and --endtime arguments. Times are specified as milliseconds since the Unix epoch.
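A sketch of an hourly cron job (the window arithmetic is illustrative, and the dstClusterZK ensemble address and tableCopy name are the assumptions from the remote-backup example):

```shell
#!/bin/sh
# Copy only the rows written during the last hour.
END=$(($(date +%s) * 1000))   # now, in milliseconds since the Unix epoch
START=$((END - 3600000))      # one hour (3,600,000 ms) earlier
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --starttime="$START" --endtime="$END" \
  --peer.adr=dstClusterZK:2181:/hbase --new.name=tableCopy tableOrig
```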

Partial HBase table copies and HBase table schema changes

By default, CopyTable will copy all column families from matching rows. CopyTable provides options for copying data only from specific column families. This could be useful for copying original source data while excluding derived-data column families that are added by follow-on processing.

By adding one of these arguments, we copy data only from the specified column families:

  • --families=srcCf1
  • --families=srcCf1,srcCf2

Starting with 0.92.0, you can copy while changing the column family name:

  • --families=srcCf1:dstCf1
    • copy from srcCf1 to dstCf1
  • --families=srcCf1:dstCf1,dstCf2,srcCf3:dstCf3
    • copy from srcCf1 to dstCf1, copy dstCf2 as-is (no rename), and srcCf3 to dstCf3

Please note that dstCf* must be present in the dstCluster table!
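For example, copying a single column family and renaming it on the destination (the ensemble address is illustrative):

```shell
srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --families=srcCf1:dstCf1 --peer.adr=dstClusterZK:2181:/hbase tableOrig
```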

Starting with 0.94.0, new options are offered to copy delete markers and to include a limited number of overwritten versions. Previously, if a row was deleted in the source cluster, the delete would not be copied; instead, a stale version of that row would remain in the destination cluster. These options take advantage of some of the 0.94.0 release’s advanced features.

  • --versions=vers
    • where vers is the number of cell versions to copy (the default is 1, i.e., the latest version only)
  • --all.cells
    • also copy delete markers and deleted cells
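For example, to copy up to five versions of each cell along with delete markers and deleted cells (the ensemble address is illustrative):

```shell
srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --versions=5 --all.cells --peer.adr=dstClusterZK:2181:/hbase tableOrig
```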

Common Pitfalls

The HBase client in the 0.90.x, 0.92.x, and 0.94.x versions always uses zoo.cfg if it is on the classpath, even if an hbase-site.xml file specifies other ZooKeeper quorum configuration settings. This “feature” causes a problem common in CDH3 HBase because its packages default to including a directory where zoo.cfg lives in HBase’s classpath. This can and has led to frustration when trying to use CopyTable (HBASE-4614). The workaround is to exclude the zoo.cfg file from your HBase’s classpath and to specify ZooKeeper configuration properties in your hbase-site.xml file. http://hbase.apache.org/book.html#zookeeper
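A minimal hbase-site.xml sketch of the workaround (hostnames are illustrative; these are the standard property names referenced above):

```xml
<!-- Specify ZK settings here instead of relying on a zoo.cfg on the classpath -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
```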

Conclusion

CopyTable provides simple but effective disaster recovery insurance for HBase 0.90.x (CDH3) deployments. In conjunction with the replication feature supported in CDH4’s 0.92.x-based HBase, CopyTable’s incremental features become less valuable, but its core functionality remains important for bootstrapping a replicated table. While more advanced features such as HBase snapshots (HBASE-50) may aid with disaster recovery once implemented, CopyTable will still be a useful tool for the HBase administrator.

9 Responses
  • Tom Goren / June 06, 2012 / 7:45 AM

    Excellent guide to CopyTable!

    Just wanted to add my 2 cents, as I used CopyTable in a script for the purpose of migrating HBase tables from one cluster to another.

    If nothing else, a nice take away from my script is automatic creation of all the column families on the destination cluster, so as to avoid doing it manually.

    Here it is:
    http://tech.tomgoren.com/archives/284

    It uses Python and the Thrift API.

    Thanks!

    • Jonathan Hsieh / June 06, 2012 / 8:34 AM

      Hey Tom,

      Sounds like the auto cf creation feature might be useful to add as an option to CopyTable upstream. Have you considered contributing code or at least filing an issue/feature request upstream here so we can add functionality? (https://issues.apache.org/jira/browse/HBASE, click create issue).

      The other good news is that in a soon-to-be-published blog post, we’ll talk about wire compatibility and compatibility between major versions — hopefully after an upgrade to that version, you won’t have to do that upgrade process again!

      Jon.

  • Joe Travaglini / July 03, 2012 / 11:03 AM

    Jon,
    Nice write up, but I am confused by something.

    In the section about a poor man’s snapshot, shouldn’t the first command read:

    srcCluster$ echo "create 'tableCopy', 'cf1', 'cf2'" | hbase shell

    If I’m interpreting how this works correctly, the shell should be labelled ‘srcCluster’ as it’s internal (i.e. only one cluster involved), and the pre-seed command should contain the ultimate name of the copied table, in this case, ‘tableCopy’.

    • Jonathan Hsieh / July 03, 2012 / 3:07 PM

      Hi Joe,

      You are correct. I’ve updated the post with the change.

      Thanks!
      Jon.

  • PsyberS / December 07, 2012 / 2:12 PM

    There are some typos. The commands in the incremental section state:

    hbase org.apache.hadoop.HBase.mapreduce.CopyTable

    but the case is wrong on ‘HBase’. It should be:

    hbase org.apache.hadoop.hbase.mapreduce.CopyTable

    • Jonathan Hsieh / January 23, 2013 / 4:11 PM

      Thanks! I’ve updated the post to fix the typos.

      -Jon.

  • shailesh / February 06, 2013 / 10:44 PM

    Hi Jon,

    This is indeed helpful.

    I have one query -
    In the section “Partial HBase table copies..” you have mentioned that starting with version 0.94.0 there are options to copy delete markers as well to the destination cluster, so a deleted row on the source cluster does not remain forever on the destination. But will this work right even after a major compaction has happened on the source cluster? Since major compaction will clean up the delete markers (and the actual rows), how can CopyTable fetch the delete markers?

    Thanks!

  • Maziyar Mirabedini / March 21, 2013 / 4:53 PM

    Hi there,
    Thanks for the great write up!
    We’re using CopyTable to copy a table from one cluster to another. But it’s very slow. We noticed that CopyTable is copying data to one region server at a time. Is there any way we can speed things up or copy to multiple region servers at the same time?
    Thanks!
