How-to: Use Cascading Pattern with R and CDH

Our thanks to Concurrent Inc. for the how-to below about using Cascading Pattern with CDH. Cloudera recently tested CDH 4.4 against the Cascading Compatibility Test Suite, verifying compatibility with Cascading 2.2.

Cascading Pattern is a machine-learning project within the Cascading development framework used to build enterprise data workflows. Cascading provides an abstraction layer on top of Apache Hadoop and other computing topologies that allows enterprises to leverage existing skills and resources to build data processing applications on Hadoop, without the need for specialized Hadoop skills.

Pattern, in particular, leverages an industry standard called Predictive Model Markup Language (PMML), which allows data scientists to use their favorite statistical and analytics tools (such as R, SAS, or Oracle) to export predictive models and quickly run them on data sets stored in Hadoop. Pattern’s benefits include reduced development costs, time savings, and fewer licensing issues at scale, all while leveraging Hadoop clusters, the core competencies of analytics staff, and existing intellectual property in the predictive models.

With Cascading Pattern, predictive models can be exported as PMML from a variety of analytics frameworks and then run on Hadoop at scale. This approach saves licensing costs, allows applications to scale out, and directly integrates predictive models, expressed as Cascading apps, with other business logic.

In this how-to, you will learn how to create a simple example model using Cascading, R, and CDH.

Step 1: Set Up Your Environment

In this section, we will go through the steps needed to set up your environment.

  • To set up Java for your environment, download Java and follow the installation instructions. Version 1.6.x was used to create the examples in this how-to.

    • Get the JDK, not the JRE.

    • Install according to vendor instructions.

    • Be sure to set the JAVA_HOME environment variable correctly (see the environment sketch after this list).

  • To set up Gradle for your environment, download Gradle and follow the installation instructions. Version 1.4 or later is required for some examples in this tutorial.

    • Install according to vendor instructions.

    • Be sure to set the GRADLE_HOME environment variable correctly.

  • Install CDH 4.4 in standalone mode.

  • To set up R and RStudio for your environment, download R and RStudio and follow the installation instructions.
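
The paths below are placeholders, not actual install locations; on a typical Linux machine the two environment variables from the list above are set in Bash along these lines:

    # adjust both paths to wherever you installed the JDK and Gradle
    export JAVA_HOME=/usr/java/jdk1.6.0_45      # hypothetical JDK location
    export GRADLE_HOME=/opt/gradle-1.4          # hypothetical Gradle location
    export PATH=$JAVA_HOME/bin:$GRADLE_HOME/bin:$PATH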

Step 2: Get the Source Code

Navigate to the Pattern GitHub project and, in the bottom right corner of the screen, click Download ZIP to download a compressed archive of the source code. When the download completes, unzip the archive and move the resulting “pattern” directory to a location on your filesystem where you have space available to work.
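
From a Bash prompt, the unpack step looks roughly like this; the archive and directory names GitHub assigns depend on the branch, so treat the ones below as assumptions:

    unzip pattern-master.zip     # archive name may differ
    mv pattern-master pattern    # keep the directory name used in this how-to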

Step 3: Create the Model

Navigate to the pattern directory, and then into its pattern-examples subdirectory. There is an example R script in examples/r/rf_pmml.R that creates a Random Forest model. This is representative of a predictive model for an anti-fraud classifier used in e-commerce apps.
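
In outline, such a script follows the usual randomForest-plus-pmml workflow in R; the input file and column names below are hypothetical placeholders, so check rf_pmml.R itself for the real ones:

    library(randomForest)   # model training
    library(pmml)           # PMML export
    library(XML)            # saveXML()

    # hypothetical input file and columns -- see rf_pmml.R for the actual ones
    orders <- read.table("data/orders.tsv", header=TRUE, sep="\t")

    # fit a random forest classifier on a few predictor variables
    model <- randomForest(as.factor(label) ~ var0 + var1 + var2,
                          data=orders, ntree=50)

    # export the fitted model as PMML
    saveXML(pmml(model), file="sample.rf.xml")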


Load the “rf_pmml.R” script into RStudio using the File menu’s Open File... option.

Click the Source button in the upper middle section of the screen. That will execute the R script and create the predictive model.

The last line of the script saves the predictive model as PMML into a file called sample.rf.xml. PMML is XML-based and thus not optimal for humans to read, but it is efficient for machines to parse.
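
As a rough sketch of that file’s shape (the element names follow the PMML spec for a random forest exported from R, but the exact attributes depend on your data and pmml package version):

    <?xml version="1.0"?>
    <PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1">
      <Header description="Random Forest Tree Model"/>
      <DataDictionary>
        <DataField name="label" optype="categorical" dataType="string"/>
        <!-- one DataField per input variable -->
      </DataDictionary>
      <MiningModel functionName="classification">
        <Segmentation multipleModelMethod="majorityVote">
          <Segment id="1">
            <TreeModel/>  <!-- one Segment, each holding a TreeModel, per tree in the forest -->
          </Segment>
        </Segmentation>
      </MiningModel>
    </PMML>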


Beyond the Random Forest example, Cascading Pattern supports the following model types, as well as ensembles of them:

  • General Regression
  • Regression
  • Clustering
  • Tree
  • Mining

Step 4: Build Cascading

Now that we have a model created and exported as PMML, let’s work on running it at scale atop CDH.

In the pattern-examples directory, use Gradle from the Bash shell to build the example app.
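
Assuming the gradle launcher from Step 1 is on your PATH, the build comes down to a single command (the task names assume the standard Gradle Java plugin conventions used by build.gradle):

    # clean out earlier build artifacts and compile the example app into a JAR
    gradle clean jar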


That command invokes Gradle to run the build script build.gradle and compile the Cascading Pattern example app. Once the build completes, look for the built app as a JAR file in the build/libs subdirectory.
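
A quick listing confirms it is there; the exact JAR name, version suffix included, depends on your checkout, so treat the name below as a placeholder:

    ls build/libs/
    # pattern-examples-*.jar   <- note the actual name for the next step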


Now we’re ready to run this Cascading Pattern example app on CDH. First, we delete any previous output results (Hadoop requires that the output location not already exist). Then we run Hadoop, specifying the JAR file for the app, the PMML file via the --pmml command-line option, the sample input data data/sample.tsv, and the location of the output results.
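
A representative pair of commands follows; the JAR name and argument order are assumptions to double-check against Main.java and your own build output:

    # remove earlier results; Hadoop refuses to write into an existing output location
    rm -rf out

    # run the app: input data, output location, and the PMML model to execute
    hadoop jar build/libs/pattern-examples-*.jar \
        data/sample.tsv out/classify --pmml sample.rf.xml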


After the job runs, check the out/classify subdirectory and look at the results of running the PMML model, which will be in the part-* partition files.
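
In standalone mode the results land on the local filesystem, so something like the following shows the scored records (the exact fields depend on the model):

    head out/classify/part-*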


Let’s take a look at what we just built and ran. The source code for this example is in the src/main/java/cascading/pattern/Main.java file.


Most of the code is the basic plumbing used for Cascading apps. The portions that are specific to Cascading Pattern and PMML are the few lines involving the pmmlPlanner object.
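
For orientation, here is a condensed sketch of a classify app wired up that way; the class and method names follow the Cascading 2.2 and Cascading Pattern public APIs rather than the exact contents of Main.java, so treat the file in the repository as authoritative:

    // Condensed sketch of a Cascading Pattern classify app (not the full Main.java)
    import java.io.File;
    import java.util.Properties;

    import cascading.flow.Flow;
    import cascading.flow.FlowDef;
    import cascading.flow.hadoop.HadoopFlowConnector;
    import cascading.pattern.pmml.PMMLPlanner;
    import cascading.property.AppProps;
    import cascading.scheme.hadoop.TextDelimited;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;
    import cascading.tuple.Fields;

    public class ClassifySketch
      {
      public static void main( String[] args )
        {
        String inputPath = args[ 0 ];     // e.g. data/sample.tsv
        String classifyPath = args[ 1 ];  // e.g. out/classify
        String pmmlPath = args[ 2 ];      // the real app uses a --pmml option; positional here to keep the sketch short

        Properties properties = new Properties();
        AppProps.setApplicationJarClass( properties, ClassifySketch.class );

        // source and sink taps over tab-delimited text with header rows
        Tap inputTap = new Hfs( new TextDelimited( true, "\t" ), inputPath );
        Tap classifyTap = new Hfs( new TextDelimited( true, "\t" ), classifyPath );

        FlowDef flowDef = FlowDef.flowDef()
          .setName( "classify" )
          .addSource( "input", inputTap )
          .addSink( "classify", classifyTap );

        // the Pattern-specific lines: hand the PMML document to a planner,
        // which generates the scoring assembly for the flow
        PMMLPlanner pmmlPlanner = new PMMLPlanner()
          .setPMMLInput( new File( pmmlPath ) )
          .retainOnlyActiveIncomingFields()
          .setDefaultPredictedField( new Fields( "predict", Double.class ) );

        flowDef.addAssemblyPlanner( pmmlPlanner );

        Flow classifyFlow = new HadoopFlowConnector( properties ).connect( flowDef );
        classifyFlow.complete();
        }
      }

Everything before and after the pmmlPlanner block is ordinary Cascading plumbing; swapping in a different PMML file changes the model being scored without touching the Java code.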

