Cloudera Engineering Blog · HBase Posts

How-to: Use the HBase Thrift Interface, Part 3 – Using Scans

The conclusion to this series covers how to use scans, and considerations for choosing the Thrift or REST APIs.

In this series of how-tos, you have learned how to use Apache HBase’s Thrift interface. Part 1 covered the basics of the API, working with Thrift, and some boilerplate code for connecting to Thrift. Part 2 showed how to insert and get multiple rows at a time. In this third and final post, you will learn how to use scans, along with some considerations for choosing between REST and Thrift.

Scanning with Thrift
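
As a taste of what the full post covers, here is a minimal sketch of a Thrift scan in Python. It assumes the Thrift-generated hbase module and the connection boilerplate from Part 1; the table name "blog", the column family "meta:", and the start/stop rows are hypothetical values used only for illustration.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import TScan

# Connection boilerplate from Part 1 (host and port are assumptions; adjust for your cluster)
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# Open a scanner over the hypothetical 'blog' table, reading only the 'meta:' family
scan = TScan(startRow="start", stopRow="stop", columns=["meta:"])
scanner_id = client.scannerOpenWithScan("blog", scan, None)

try:
    # Pull rows in batches until the scanner is exhausted
    while True:
        rows = client.scannerGetList(scanner_id, 100)
        if not rows:
            break
        for row in rows:
            for column, cell in row.columns.items():
                print("%s %s=%s" % (row.row, column, cell.value))
finally:
    # Always release the server-side scanner
    client.scannerClose(scanner_id)
    transport.close()

Fetching rows with scannerGetList rather than one scannerGet call per row keeps the number of round trips to the Thrift gateway down, which matters when the gateway sits on a separate host.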

Sneak Preview: HBaseCon 2014 "Operations" Track

HBaseCon 2014 “Operations” track reveals best practices used by some of the world’s largest production-cluster operators.

The HBaseCon 2014 (May 5, 2014 in San Francisco) agenda is particularly strong in the area of operations. Thanks again, Program Committee!

Sneak Preview: HBaseCon 2014 General Session

The HBaseCon 2014 General Session – with keynotes by Facebook, Google, and Salesforce.com engineers – is arguably the best ever.

HBaseCon 2014 (May 5, 2014 in San Francisco) is coming very, very soon. Over the next few weeks, as I did for last year’s conference, I’ll be bringing you sneak previews of session content (across Operations, Features & Internals, Ecosystem, and Case Studies tracks) accepted by the Program Committee.

HBaseCon 2014: Speakers, Keynotes, and Sessions Announced

Users of diverse, real-world HBase deployments around the world present at this year’s event.

This year’s agenda for HBaseCon, the conference for the Apache HBase community (developers, operators, contributors), looks “Stack-ed” with can’t-miss keynotes and breakouts. Program committee, you really came through (again).

Secrets of Cloudera Support: Inside Our Own Enterprise Data Hub

Cloudera’s own enterprise data hub is yielding great results for providing world-class customer support.

Here at Cloudera, we are constantly pushing the envelope to give our customers world-class support. One of the cornerstones of this effort is the Cloudera Support Interface (CSI), which we’ve described in prior blog posts (here and here). Through CSI, our support team is able to quickly reason about a customer’s environment, search for information related to a case currently being worked, and much more.

Pro Tips for Pitching an HBaseCon Talk

These suggestions from the Program Committee offer an inside track to getting your talk accepted!

With HBaseCon 2014 (in San Francisco on May 5) Call for Papers closing in just over three weeks (on Feb. 14 — sooner than you think), there’s no better time than “now” to start thinking about your proposal.

It’s a Three-peat! HBaseCon 2014 Call for Papers and Early Bird Registration Now Open

The third-annual HBaseCon is now open for business. Submit your paper or register today for early bird savings!

Seems like only yesterday that droves of Apache HBase developers, committers/contributors, operators, and other enthusiasts converged in San Francisco for HBaseCon 2013 — nearly 800 of them, in fact. 

This Month (and Year) in the Ecosystem (December 2013)

Welcome to our sixth edition of “This Month in the Ecosystem,” a digest of highlights from December 2013 (never intended to be comprehensive; for completeness, see the excellent Hadoop Weekly).

With the close of 2013, we also thought it appropriate to include some high points from across the year (not listed in any particular order):

What are HBase Compactions?

The compactions model is changing drastically with CDH 5/HBase 0.96. Here’s what you need to know.

Apache HBase is a distributed data store based upon a log-structured merge tree, so optimal read performance would come from having only one file per store (Column Family). However, that ideal isn’t possible during periods of heavy incoming writes. Instead, HBase will try to combine HFiles to reduce the maximum number of disk seeks needed for a read. This process is called compaction.
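
Compactions run automatically, but for illustration, the Thrift interface covered in the how-to series above also exposes a call to request a major compaction by hand. A minimal sketch, assuming the Part 1 connection boilerplate and a hypothetical table name:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase

# Connection boilerplate from Part 1 (host and port are assumptions)
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# Ask the RegionServers to rewrite the HFiles in each store of the
# hypothetical 'blog' table down toward one file per store
client.majorCompact("blog")

transport.close()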

How-to: Use the HBase Thrift Interface, Part 2: Inserting/Getting Rows

The second how-to in a series about using the Apache HBase Thrift API

Last time, we covered the fundamentals of connecting to Thrift via Python. This time, you’ll learn how to insert and get multiple rows at a time.
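
As a preview, here is a minimal sketch of batched puts and gets over Thrift. It assumes the Thrift-generated hbase module and the connection boilerplate from Part 1; the "blog" table and "meta:title" column are hypothetical names.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import Mutation, BatchMutation

# Connection boilerplate from Part 1 (host and port are assumptions)
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# Insert several rows in one round trip with mutateRows
batches = [
    BatchMutation(row="row-%d" % i,
                  mutations=[Mutation(column="meta:title", value="post %d" % i)])
    for i in range(3)
]
client.mutateRows("blog", batches, None)

# Fetch several rows in one round trip with getRows
for result in client.getRows("blog", ["row-0", "row-1", "row-2"], None):
    print("%s: %s" % (result.row, result.columns["meta:title"].value))

transport.close()

Batching mutations and gets this way trades a little client-side bookkeeping for far fewer round trips through the Thrift gateway.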

Working with Tables
