Cloudera Developer Blog · How-to Posts

How-to: Manage Heterogeneous Hardware for Apache Hadoop using Cloudera Manager

In a prior blog post, Omar explained two important concepts introduced in Cloudera Manager 4.5: Role Groups and Host Templates. In this post, I’ll demonstrate how to use role groups and host templates to easily expand an existing CDH cluster onto heterogeneous hardware. If you haven’t already looked at Omar’s post, I’d recommend doing so before reading this one, as I’ll assume you are familiar with role groups and host templates.

Although the instructions and screenshots here are based on Cloudera Manager 4.5, they are valid for subsequent releases as well.

Initial State and Goal

How-to: Use the Apache HBase REST Interface, Part 3

This how-to is the third in a series that explores the use of the Apache HBase REST interface. Part 1 covered HBase REST fundamentals, some Python caveats, and table administration. Part 2 showed you how to insert multiple rows simultaneously using XML and JSON. Part 3 below will show how to get multiple rows using XML and JSON.

Getting Rows with XML

Using the GET verb, you can retrieve a single row or a group of rows based on their row keys. (You can read more about the multiple-value URL format here.) Here we are going to use the wildcard character, the asterisk (*), to get all rows whose keys start with a specific string. In this example, we can load every line of Shakespeare’s comedies with “shakespeare-comedies-*”. This requires that our row keys be laid out as “AUTHOR-WORK-LINENUMBER”.
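As a quick taste of what this looks like in practice, here is a minimal Python sketch using the requests library. The host/port (localhost:8080) and the table name “shakespeare” are assumptions for illustration:

```python
import base64
import requests

# Minimal sketch: the REST server address and the "shakespeare" table
# name are assumptions. The trailing * matches every row whose key
# starts with "shakespeare-comedies-".
url = "http://localhost:8080/shakespeare/shakespeare-comedies-*"
response = requests.get(url, headers={"Accept": "application/json"})
response.raise_for_status()

# HBase REST base64-encodes row keys, column names, and cell values.
for row in response.json()["Row"]:
    rowkey = base64.b64decode(row["key"]).decode("utf-8")
    for cell in row["Cell"]:
        column = base64.b64decode(cell["column"]).decode("utf-8")
        value = base64.b64decode(cell["$"]).decode("utf-8")
        print(rowkey, column, value)
```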

How-to: Use the Apache Oozie REST API

Apache Oozie has a Java client and a Java API for submitting and monitoring jobs, but what if you want to use Oozie from another language or a non-Java system? Oozie provides a Web Services API, which is an HTTP REST API. That is, you can do anything with Oozie simply by making requests to the Oozie server over HTTP. In fact, this is how the Oozie client and Oozie Java API themselves talk to the Oozie server. 

In this how-to, I’ll explain how the REST API works.
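As a quick taste, here is a minimal Python sketch (using the requests library) that hits two of those HTTP endpoints; localhost:11000 is the Oozie server’s default address, and the rest is illustration:

```python
import requests

# The Oozie server's base URL; localhost:11000 is the default.
oozie = "http://localhost:11000/oozie"

# Ask the server for its status.
status = requests.get(oozie + "/v1/admin/status").json()
print(status["systemMode"])  # e.g. "NORMAL"

# List the five most recent workflow jobs.
jobs = requests.get(oozie + "/v1/jobs",
                    params={"jobtype": "wf", "len": 5}).json()
for job in jobs["workflows"]:
    print(job["id"], job["status"])
```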

What is REST?

How-to: Easily Configure and Manage Clusters in Cloudera Manager 4.5

Helping users manage hundreds of configurations for the growing family of Apache Hadoop services has always been one of Cloudera Manager’s main goals. Prior to version 4.5, it was possible to set configurations at the service (e.g. hdfs), role type (e.g. all datanodes), or individual role level (e.g. the datanode on machine17). An individual role would inherit the configurations set at the service and role-type levels; configurations made at the role level would override those from the role-type level. While this approach offered flexibility when configuring clusters, it was tedious to configure subsets of roles in the same way.

In Cloudera Manager 4.5, this issue is addressed with the introduction of role groups. For each role type, you can create role groups and assign configurations to them. The members of those groups then inherit those configurations. For example, in a cluster with heterogeneous hardware, a datanode role group can be created for each host type and the datanodes running on those hosts can be assigned to their corresponding role group. That makes it possible to tweak the configurations for all the datanodes running on the same hardware by modifying the configurations of one role group.
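For those who prefer to script such changes, role groups are also exposed through the Cloudera Manager API. Below is a minimal sketch using the cm-api Python client; the host, credentials, cluster/service names, group name, and heap value are all assumptions for illustration:

```python
from cm_api.api_client import ApiResource

# Sketch only: the host, credentials, and names below are assumptions.
api = ApiResource("cm-host", username="admin", password="admin", version=3)
hdfs = api.get_cluster("Cluster 1").get_service("hdfs1")

# Create a role group for datanodes on the high-memory hosts and give
# it its own configuration; member datanodes inherit these values.
group = hdfs.create_role_config_group(
    "hdfs1-DATANODE-highmem", "DataNode (High-Memory Hosts)", "DATANODE")
group.update_config({"datanode_java_heapsize": "2147483648"})
```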

How-to: Configure Eclipse for Hadoop Contributions

Contributing to Apache Hadoop or writing custom pluggable modules requires modifying Hadoop’s source code. While it is perfectly fine to use a text editor to modify Java source, modern IDEs significantly simplify the navigation and debugging of large Java projects like Hadoop. Eclipse is a popular choice thanks to its broad user base and multitude of available plugins.

This post covers configuring Eclipse to modify Hadoop’s source. (Developing applications against CDH using Eclipse is covered in a different post.) Hadoop has changed a great deal since our previous post on configuring Eclipse for Hadoop development; here we’ll revisit configuring Eclipse for the latest “flavors” of Hadoop. Note that trunk and the other release branches differ in their directory structures, feature sets, and build tools. (The EclipseEnvironment Hadoop wiki page is a good starting point for development on trunk.)

How-to: Automate Your Hadoop Cluster from Java

One of the complexities of Apache Hadoop is the need to deploy clusters of servers, potentially on a regular basis. At Cloudera, which at any time maintains hundreds of test and development clusters in different configurations, this process presents a lot of operational headaches if not done in an automated fashion. In this post, I’ll describe an approach to cluster automation that works for us, as well as many of our customers and partners.

Taming Complexity

At Cloudera engineering, we have a big support matrix: We work on many versions of CDH (multiple release trains, plus things like rolling upgrade testing), and CDH works across a wide variety of OS distros (RHEL 5 and 6, Ubuntu Precise and Lucid, Debian Squeeze, and SLES 11) and in complex configuration combinations: highly available HDFS or simple HDFS, Kerberized or non-secure, YARN or MR1 as the execution framework, and so on. Clearly, we need an easy way to spin up a new cluster with the desired setup, which we can subsequently use for integration, testing, customer support, demos, and more.
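As one concrete flavor of such automation (an illustration, not necessarily the exact approach described in the post, which works in Java), here is a minimal sketch using the cm-api Python client that inventories every managed cluster and the state of its services; the Cloudera Manager host and credentials are assumptions:

```python
from cm_api.api_client import ApiResource

# Sketch only: the Cloudera Manager host and credentials are assumptions.
api = ApiResource("cm-host", username="admin", password="admin")

# Walk every managed cluster and report each service's state and health.
for cluster in api.get_all_clusters():
    print(cluster.name, cluster.version)
    for service in cluster.get_all_services():
        print("  ", service.name, service.serviceState, service.healthSummary)
```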

Algorithms Every Data Scientist Should Know: Reservoir Sampling

Data scientists, that peculiar mix of software engineer and statistician, are notoriously difficult to interview. One approach that I’ve used over the years is to pose a problem that requires some mixture of algorithm design and probability theory in order to come up with an answer. Here’s an example of this type of question that has been popular in Silicon Valley for a number of years: 

Say you have a stream of items of large and unknown length that you can only iterate over once. Create an algorithm that randomly chooses an item from this stream such that each item is equally likely to be selected.
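As the post title gives away, the answer is reservoir sampling. Here is a minimal one-item version in Python:

```python
import random

def reservoir_sample(stream):
    """Return one item chosen uniformly at random from a stream of
    unknown length, in a single pass and O(1) memory."""
    chosen = None
    for count, item in enumerate(stream, start=1):
        # Keep the count-th item with probability 1/count; by induction,
        # every item seen so far is selected with probability 1/count.
        if random.randrange(count) == 0:
            chosen = item
    return chosen

# Each of the 1,000 items is selected with probability 1/1000.
print(reservoir_sample(range(1000)))
```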

How-to: Use the Apache HBase REST Interface, Part 2

This how-to is the second in a series that explores the use of the Apache HBase REST interface. Part 1 covered HBase REST fundamentals, some Python caveats, and table administration. Part 2 below will show you how to insert multiple rows at once using XML and JSON. The full code samples can be found on GitHub.

Adding Rows With XML

The REST interface would be useless without the ability to add and update row values, and it gives us that ability with the POST verb. By POSTing rows, we can add new rows or update existing rows that share the same row key.
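As a preview, here is a minimal sketch of one such insert in Python using JSON. The host, the table name “shakespeare”, and the column family “lines” are assumptions; the “fakerow” key in the URL is a placeholder, since the row keys in the request body determine what actually gets written:

```python
import base64
import requests

def b64(text):
    # HBase REST expects row keys, columns, and values base64-encoded.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

# One row with a single cell in the "lines" column family.
row = {"Row": [{
    "key": b64("shakespeare-comedies-000001"),
    "Cell": [{"column": b64("lines:line"),
              "$": b64("A MIDSUMMER NIGHT'S DREAM")}],
}]}

# The row key in the URL is ignored for body-driven writes, hence "fakerow".
response = requests.post(
    "http://localhost:8080/shakespeare/fakerow",
    json=row,
    headers={"Accept": "application/json",
             "Content-Type": "application/json"})
print(response.status_code)  # 200 on success
```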

How-to: Create a CDH Cluster on Amazon EC2 via Cloudera Manager

Editor’s Note (added Feb. 28, 2014): The instructions below are deprecated for Cloudera Manager releases beyond 4.5. Please refer to this doc for instructions pertaining to releases 4.6 and later.

Cloudera Manager includes a new express installation wizard for Amazon Web Services (AWS) EC2. Its goal is to enable Cloudera Manager users to provision CDH clusters and Cloudera Impala (the open source distributed query engine for Apache Hadoop) on EC2 as easily as possible, for testing and development purposes only (production workloads are not supported). It is currently the fastest way to provision a Cloudera Manager-managed cluster on EC2.

How-to: Analyze Twitter Data with Hue

Hue 2.2, the open source web-based interface that makes Apache Hadoop easier to use, lets you interact with Hadoop services from within your browser without having to go to a command-line interface. It features applications such as an Apache Hive editor and an Apache Oozie dashboard and workflow builder.

This post is based on our “Analyzing Twitter Data with Hadoop” sample app and details how the same results can be achieved through Hue in a simpler way. Moreover, all the code and examples from the previous series have been updated for the recent CDH 4.2 release.

Collecting Data
