How-to: Automate Your Cluster with Cloudera Manager API

API access was introduced in Cloudera Manager 4.0 (you can download the free edition here). Although not visible in the UI, the API is very powerful, providing programmatic access to cluster operations (such as configuration and restart) and to monitoring information (such as health and metrics). This article walks through an example of setting up a 4-node HDFS and MapReduce cluster via the Cloudera Manager (CM) API.

Cloudera Manager API Basics

The CM API is an HTTP REST API that uses JSON serialization. It is served on the same host and port as the CM web UI and requires no extra process or configuration. The API supports HTTP basic authentication, accepting the same users and credentials as the web UI, and an API user has the same privileges there as in the web UI.

You can read the full API documentation here.

Interacting with the API

The most basic way to use the API is to make HTTP calls directly with a tool like curl. For example, to obtain the status of the service hdfs2 in the cluster dev01:
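A sketch of the call, assuming CM runs on cm-host with the default port 7180 and admin/admin credentials (the JSON response is abbreviated):

    $ curl -u admin:admin 'http://cm-host:7180/api/v1/clusters/dev01/services/hdfs2'
    {
      "name" : "hdfs2",
      "type" : "HDFS",
      "clusterRef" : { "clusterName" : "dev01" },
      "serviceState" : "STARTED",
      "healthSummary" : "GOOD"
    }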

 

The API also comes with a Python client for your convenience. To do the same in Python:
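A minimal sketch using the cm_api client (the hostname and credentials are placeholders):

    from cm_api.api_client import ApiResource

    api = ApiResource('cm-host', username='admin', password='admin')
    service = api.get_cluster('dev01').get_service('hdfs2')
    print(service.serviceState)   # e.g. STARTED
    print(service.healthSummary)  # e.g. GOOD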

 

You can expect to see client bindings in more languages. A Java client is in the works right now.

Setting up a Cluster

Next I will demonstrate a Python script that defines, configures, and starts a cluster through the API. You are about to see some of the low-level details of Cloudera Manager. Compared with the UI wizard, the API route is more tedious, but it provides flexibility and programmatic control. You will also notice that this setup process does not require the cluster to be online (until the very last step, where I start the services), which has proven useful to people who are stamping out pre-configured clusters.

Step 1. Define the Cluster
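A sketch of the first step, assuming the CM server runs on cm-host:

    from cm_api.api_client import ApiResource

    # Connect to the CM server; this is the "handle on the API".
    api = ApiResource('cm-host', username='admin', password='admin')

    # Define a new cluster named prod01 running CDH4.
    cluster = api.create_cluster('prod01', 'CDH4')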

 

This creates a handle on the API. The ApiResource object also accepts other optional arguments, such as port, TLS, and API version. With that, I created a cluster called prod01 running CDH4. The call returns a handle to the cluster.

Step 2. Create HDFS Service and Roles

Now we can create the services. HDFS comes first:
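Something like:

    # Create an HDFS service named hdfs01 in the cluster.
    hdfs = cluster.create_service('hdfs01', 'HDFS')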

 

At this point, if I query the different role types supported by hdfs01, I will get:
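The query is a one-liner; the exact list varies with the CM and CDH versions:

    print(hdfs.get_role_types())
    # e.g. [u'DATANODE', u'NAMENODE', u'SECONDARYNAMENODE',
    #       u'BALANCER', u'GATEWAY', u'HTTPFS', u'FAILOVERCONTROLLER']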

 

Now I am going to create 1 NameNode, 1 Secondary NameNode and 4 DataNodes.
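A sketch, with hypothetical hostnames and IP addresses; the CM agent on each machine must report with the same host ID used here:

    # Register the four hosts with Cloudera Manager.
    hosts = []
    for i in range(1, 5):
        hostname = 'node%d.example.com' % i
        hosts.append(api.create_host(hostname, hostname, '10.0.0.%d' % i))

    # The first host gets the NameNode and the Secondary NameNode.
    hdfs.create_role('hdfs01-nn', 'NAMENODE', hosts[0].hostId)
    hdfs.create_role('hdfs01-snn', 'SECONDARYNAMENODE', hosts[0].hostId)

    # Every host runs a DataNode.
    for i, host in enumerate(hosts):
        hdfs.create_role('hdfs01-dn%d' % (i + 1), 'DATANODE', host.hostId)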

 

Most of the code performs host creation, which must happen before role creation because each role is assigned to a host. In the end, the first host carries the NameNode, the Secondary NameNode, and a DataNode; the rest carry DataNodes only.

At this point, if I query the first host, I can see the correct roles assigned to it:
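For example (output abbreviated; the MapReduce roles created later will show up here as well):

    host = api.get_host(hosts[0].hostId)
    for ref in host.roleRefs:
        print(ref.roleName)
    # hdfs01-nn
    # hdfs01-snn
    # hdfs01-dn1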

 

Step 3. Configure HDFS

Service configuration is separated into service-wide configuration and role type configuration. Service-wide configuration is typically settings that affect multiple role types, such as HDFS replication factor. Role type configuration is a template that gets inherited by specific role instances. For example, at the role type template level, I can set all DataNodes to use 3 data directories. And I can override that for specific DataNodes by setting the role-level configuration.
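As a sketch, here is what setting both levels can look like. The directory paths are placeholders, and the CM4-era cm_api client accepts role type overrides as keyword arguments to update_config; the key names can be verified with the query shown next:

    # Service-wide configuration.
    svc_config = {'dfs_replication': 2}
    # Role type (template) configuration; paths are placeholders.
    nn_config = {'dfs_name_dir_list': '/data/1/dfs/nn'}
    snn_config = {'fs_checkpoint_dir_list': '/data/1/dfs/snn'}
    dn_config = {'dfs_data_dir_list':
                 '/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn'}

    hdfs.update_config(svc_config=svc_config,
                       NAMENODE=nn_config,
                       SECONDARYNAMENODE=snn_config,
                       DATANODE=dn_config)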

How do I find out the configuration keys used by CM? For example, how do I know that dfs_replication is the key for setting replication factor? I query the service:
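In the full view, each entry carries its default value and a description:

    svc_config, rt_configs = hdfs.get_config(view='full')
    for name, config in svc_config.items():
        print('%s: %s' % (name, config.description))
    # dfs_replication: Default block replication. ...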

 

Note the view="full" argument. Without it, the API returns only the configs that are set to non-default values:
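In the summary view, only the replication factor we just set would show up, along the lines of:

    svc_config, rt_configs = hdfs.get_config()
    print(svc_config)
    # e.g. {u'dfs_replication': u'2'}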

 

Step 4. Create MapReduce Service and Roles

This step is similar to the HDFS one. I assign a TaskTracker to each node, and the JobTracker to the first one.
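A sketch mirroring the HDFS setup (role names are my own convention):

    mr = cluster.create_service('mr01', 'MAPREDUCE')

    # JobTracker on the first host; a TaskTracker everywhere.
    mr.create_role('mr01-jt', 'JOBTRACKER', hosts[0].hostId)
    for i, host in enumerate(hosts):
        mr.create_role('mr01-tt%d' % (i + 1), 'TASKTRACKER', host.hostId)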

 

Step 5. Configure MapReduce

Here is the code to configure the “mr01” service:
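A sketch of the configuration. The hdfs_service key is discussed below; the role type keys and paths are assumptions that should be verified with get_config(view='full'):

    # A gateway (client) role on the first host; see below.
    mr.create_role('mr01-gw1', 'GATEWAY', hosts[0].hostId)

    # Service-wide: point MapReduce at the HDFS service it depends on.
    svc_config = {'hdfs_service': hdfs.name}
    # Role type configs; key names and paths are assumptions.
    jt_config = {'jobtracker_mapred_local_dir_list': '/data/1/mapred/jt'}
    tt_config = {'tasktracker_mapred_local_dir_list': '/data/1/mapred/local'}

    mr.update_config(svc_config=svc_config,
                     JOBTRACKER=jt_config,
                     TASKTRACKER=tt_config)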

 

Two items deserve elaboration. The first is hdfs_service. Rather than asking the user for the equivalent of “fs.defaultFS”, CM has the MapReduce service depend on an HDFS service and derive its HDFS access parameters from how that HDFS service is configured.

Second, the “gateway” role type is unique to CM. It represents a client. A gateway role does not run any daemons. It simply receives client configuration, as part of the “deploy client configuration” process, which we will perform later.

Step 6. Start HDFS

HDFS is ready to start. This is the step that requires the cluster nodes to be up, CDH installed, and the Cloudera Manager Agents running. (The API does not perform software installation.) As part of the preparation, I installed those pieces and pointed the CM agents at the CM server by setting server_host in /etc/cloudera-scm-agent/config.ini.

Now I can format HDFS and start it.
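A sketch; format_hdfs() targets the NameNode role and returns one command per NameNode:

    # Format HDFS, waiting up to 5 minutes for the command to finish.
    cmd = hdfs.format_hdfs('hdfs01-nn')[0]
    if not cmd.wait(300).success:
        raise Exception('Failed to format HDFS')

    # Start the service.
    cmd = hdfs.start()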

 

Each cmd object represents an asynchronous command. I wait for each to complete and assert that it succeeded. Then I deploy the HDFS client configuration to the host running hdfs01-nn.
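Continuing the sketch from above:

    if not cmd.wait(300).success:
        raise Exception('Failed to start HDFS')

    # Push client configuration to the host running the NameNode.
    cmd = hdfs.deploy_client_config('hdfs01-nn')
    if not cmd.wait(60).success:
        raise Exception('Failed to deploy HDFS client configuration')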

 

Step 7. Start MapReduce

The JobTracker will not start unless /tmp exists in HDFS:
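One way to create it, run as the HDFS superuser on a node that has the client configuration (newer cm_api versions also expose a create_hdfs_tmp() helper for this):

    $ sudo -u hdfs hadoop fs -mkdir /tmp
    $ sudo -u hdfs hadoop fs -chmod 1777 /tmp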

 

Now we can finish the rest:
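That is, start the service and push client configuration to the gateway host (role names follow the earlier sketches):

    cmd = mr.start()
    if not cmd.wait(300).success:
        raise Exception('Failed to start MapReduce')

    cmd = mr.deploy_client_config('mr01-gw1')
    if not cmd.wait(60).success:
        raise Exception('Failed to deploy MapReduce client configuration')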

 

Note that users have not been set up and their home directories do not exist, but we can run a job as “hdfs”:
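For example, the pi estimator from the bundled examples jar (the jar path below is typical for a CDH4 MR1 package install, but may vary):

    $ sudo -u hdfs hadoop jar \
        /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 100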

Advanced Usage

The Cloudera Manager API provides a lot more than configuration and service life-cycle management. You can also obtain service health information and metrics (with the Enterprise Edition), and configure Cloudera Manager itself. The full API documentation linked above is a good starting point for further exploration.

bc Wong is a Software Engineering Manager at Cloudera, currently working on Cloudera Manager.
