How HBase in CDP Can Leverage Amazon’s S3

Cloudera Data Platform (CDP) Data Hub provides an out-of-the-box solution that allows Apache HBase deployments to use Amazon Simple Storage Service (S3) as their main persistence layer for saving table data. Amazon S3 is an object store that offers a high degree of durability with a pay-per-use cost structure. There is no server-side component to run or manage for S3; all that is needed is the S3 client library and AWS credentials. However, HBase requires a consistent and atomic filesystem, which means it cannot use S3 directly, because S3 is an eventually consistent object store. Both CDH and HDP have provided HBase solely on HDFS because long-standing impediments prevented HBase from natively using S3. To address these issues, we’ve built an out-of-the-box solution which we are delivering for the first time via CDP. When you launch an Operational Database (HBase) cluster on CDP, HBase StoreFiles (the backing files for HBase tables) are stored in S3, and HBase write-ahead logs (WALs) are stored in an HDFS instance run alongside HBase as usual.
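
To make the split concrete, here is a minimal sketch of what this storage layout looks like in configuration terms, using HBase’s standard hbase.rootdir and hbase.wal.dir properties; the bucket and NameNode names are placeholders, and on CDP this wiring is done for you automatically:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StorageLayout {
    public static Configuration storageLayout() {
        Configuration conf = HBaseConfiguration.create();
        // StoreFiles (table data) live in S3...
        conf.set("hbase.rootdir", "s3a://my_bucket/hbase");
        // ...while write-ahead logs stay on the co-located HDFS instance
        conf.set("hbase.wal.dir", "hdfs://namenode:8020/hbase-wal");
        return conf;
    }
}
```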

We’ll briefly describe each component that goes into this architecture and the role that each fulfills.

The S3A filesystem adapter is provided by Hadoop to access data in S3 via the standard Hadoop FileSystem APIs. The S3A adapter allows applications written against Hadoop APIs to access data in S3 using a URI of the form “s3a://my_bucket/” instead of “hdfs://namenode:8020/” as with HDFS. The ability to specify a default “filesystem” to use makes “lift-and-shift” style migrations from on-prem clusters with HDFS to cloud-based clusters with S3 extremely simple. HBase can be configured with a base storage location (e.g. a directory in HDFS or an S3 bucket) for all application data, which allows HBase to function identically regardless of whether that data is in HDFS or S3.
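
As an illustration, the same Hadoop FileSystem call works against either scheme; only the URI changes (the bucket and path below are placeholders):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStoreFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Swap this URI for "hdfs://namenode:8020/" and the rest is unchanged
        FileSystem fs = FileSystem.get(new URI("s3a://my_bucket/"), conf);
        for (FileStatus stat : fs.listStatus(new Path("/hbase"))) {
            System.out.println(stat.getPath());
        }
    }
}
```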

S3Guard is a part of the Apache Hadoop project which provides consistent directory listings and object status for the S3A adapter, transparent to the application. To accomplish this, S3Guard uses a consistent, distributed database to track changes made to S3 and guarantees that a client always sees the correct state of S3. Without S3Guard, HBase might not see a new StoreFile that was added to an HBase table. If HBase failed to observe a file even temporarily, this could cause data loss in HBase. However, S3Guard doesn’t provide everything that HBase requires to use S3.
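
Enabling S3Guard is a matter of S3A client configuration. As a minimal sketch, assuming a DynamoDB table as the consistent metadata store (the table name and region below are placeholders):

```java
import org.apache.hadoop.conf.Configuration;

public class S3GuardSettings {
    public static Configuration withS3Guard() {
        Configuration conf = new Configuration();
        // Track S3 changes in DynamoDB so listings and file status are consistent
        conf.set("fs.s3a.metadatastore.impl",
            "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
        conf.set("fs.s3a.s3guard.ddb.table", "my-s3guard-table"); // placeholder name
        conf.set("fs.s3a.s3guard.ddb.region", "us-east-1");       // placeholder region
        return conf;
    }
}
```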

HBase Object Store Semantics (or just “HBOSS”) is a new software project under the Apache HBase umbrella, built specifically to bridge the gap between S3Guard and HBase. HBOSS is a facade on top of the S3A adapter and S3Guard that uses a distributed lock to ensure that HBase operations can atomically manipulate their files on S3. One example where HBase requires atomicity is a directory rename. With the S3 client, a rename is implemented as a copy of the source data to the destination followed by a deletion of the source data. Without the locking that HBOSS provides, HBase might observe the rename operation in progress, which could cause data loss. To accomplish this distributed locking, HBOSS uses Apache ZooKeeper. Reusing ZooKeeper is by design, since HBase already requires a ZooKeeper instance to ensure that all HBase services operate together. Thus, incorporating HBOSS adds no extra service-management burden and closes the remaining gap between what S3 with S3Guard offers and what HBase requires.
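
To illustrate the idea (this is a conceptual sketch, not HBOSS’s actual implementation), here is what wrapping the non-atomic copy-then-delete rename in a ZooKeeper-backed lock could look like, using Apache Curator; the lock path scheme is invented for this example:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LockedRename {
    // Hold a ZooKeeper lock for the duration of the copy-then-delete rename,
    // so other clients honoring the same lock never see the half-done state.
    public static boolean rename(FileSystem fs, Path src, Path dst,
                                 CuratorFramework zk) throws Exception {
        InterProcessMutex lock =
            new InterProcessMutex(zk, "/locks" + src.toUri().getPath());
        lock.acquire();
        try {
            return fs.rename(src, dst); // on S3A: copy objects, then delete source
        } finally {
            lock.release();
        }
    }
}
```

In practice, the CuratorFramework client would point at the same ZooKeeper quorum that HBase already runs, which is exactly why reusing ZooKeeper adds no new service to manage.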

Configuring HBase to use S3 for its StoreFiles has many benefits for our users. One such benefit is that users can decouple their storage and compute. When no access to HBase is needed, HBase can be cleanly shut down and all compute resources reclaimed to eliminate any cost of operation. When HBase access is needed again, the HBase cluster can be recreated, pointing to the same data in S3. Upon startup, HBase can re-initialize itself solely from the data in S3.

Using S3 to store HBase StoreFiles does have some challenges. One such problem is the increased latency of a random lookup to a file in S3 as compared to HDFS. Increased latency in S3 access would result in HBase Gets and Scans taking longer than they normally would with HDFS. S3 latencies range from tens to hundreds of milliseconds, compared to the 0.1 to 9 millisecond range with HDFS. CDP reduces the impact of this S3 latency by automatically configuring HBase to use the BucketCache. With the BucketCache enabled, S3 latencies are only experienced the first time a StoreFile is read out of S3. After HBase reads a file, it tries to cache the raw data so that slow S3 reads are replaced with fast local memory reads. When an HBase cluster is launched via CDP, it is automatically configured to cache recently read data from S3 in memory to deliver faster reads of “hot” data.
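
CDP applies this tuning automatically, but for reference, a hand-configured setup might look like the following sketch; the ioengine and size values are illustrative, not CDP’s exact defaults:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheSettings {
    public static Configuration withBucketCache() {
        Configuration conf = HBaseConfiguration.create();
        // Keep StoreFile blocks in an off-heap cache after the first S3 read
        conf.set("hbase.bucketcache.ioengine", "offheap");
        // Cache size in megabytes; illustrative value, tune for your workload
        conf.set("hbase.bucketcache.size", "8192");
        return conf;
    }
}
```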

We’re extremely pleased to provide these new capabilities to our users.

Note: The Operational Database (OpDB) template in CDP Data Hub described in this article is outdated, as the template was removed in CDP 7.2.14. Please try Cloudera Operational Database (COD) instead; COD also supports cloud storage on HBase clusters.

Krishna Maheshwari, Director of Product Management
Josh Elser
Wellington Chevreuil
