Category Archives: Spark

Putting Machine Learning Models into Production

Categories: AI and Machine Learning Cloudera Data Science Workbench Spark

Once the data science is done (and you know where your data comes from, what it looks like, and what it can predict), the next big step follows: you now have to put your model into production and make it useful for the rest of the business. This is the start of the model operations life cycle. The key focus areas, detailed in a diagram in the full post, are usually managed by machine learning engineers after the data scientists have done their work.
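
As a concrete illustration (our own sketch, not code from the post), here is a minimal PySpark example of one early model-ops step: persisting a fitted pipeline so that a separate scoring job, typically owned by a machine learning engineer, can reload it without retraining. The path and feature columns are assumptions for the example.

```python
# A minimal sketch (illustrative, not from the post) of a model handoff:
# persist a fitted pipeline so a serving job can reload it without retraining.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("model-ops-sketch").getOrCreate()

train = spark.createDataFrame(
    [(1.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)],
    ["f1", "f2", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(train)

# The data scientist hands off the artifact; the scoring job reloads it.
# The path below is a hypothetical example.
model.write().overwrite().save("/models/example/v1")
reloaded = PipelineModel.load("/models/example/v1")
reloaded.transform(train).select("f1", "f2", "prediction").show()
```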

Read more

Visual Model Interpretability for Telco Churn in Cloudera Data Science Workbench

Categories: CDH Cloudera Data Science Workbench Fast Forward Labs Spark

Disclaimer: the scenario below is hypothetical. Any similarity to any specific telecommunications company is purely coincidental.

Although we use the example of a telecommunications company, the following applies to any organization with customers or voluntary stakeholders.

Introduction

Imagine that you are the Chief Data Officer at a major telecommunications provider, and the CEO has asked you to overhaul the existing customer churn analytics. The current process relies on manual exports of data from dozens of data sources, including ERP,

Read more

Fine-Grained Authorization with Apache Kudu and Impala

Categories: Impala Kudu Sentry Spark

Apache Impala supports fine-grained authorization via Apache Sentry on all of the tables it manages, including Apache Kudu tables. Because Impala is a very common way to access data stored in Kudu, this capability allows users deploying Impala and Kudu to fully secure Kudu data in multi-tenant clusters, even though Kudu does not yet have native fine-grained authorization of its own. This solution works because Kudu natively supports coarse-grained (all-or-nothing) authorization, which makes it possible to block all direct access to Kudu except for the impala user and an optional whitelist of other trusted users.
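
To make that concrete, here is a hedged sketch of what the fine-grained side might look like from a Python client, assuming a Sentry-enabled Impala deployment reached through the impyla library; the host, role, group, database, table, and column names are hypothetical, not taken from the post. (On the coarse-grained side, Kudu's own access list, for example its --user_acl flag, would be restricted to the impala service user plus any trusted whitelist.)

```python
# Hypothetical sketch: granting fine-grained access to Kudu-backed tables
# through Impala + Sentry, using the impyla client. All names below are
# illustrative assumptions, not taken from the original post.
from impala.dbapi import connect

conn = connect(host="impala-host.example.com", port=21050)
cur = conn.cursor()

# A role mapped to a group of analysts...
cur.execute("CREATE ROLE analyst_role")
cur.execute("GRANT ROLE analyst_role TO GROUP analysts")

# ...gets SELECT on one Kudu table; Sentry also supports column-level
# grants for finer control.
cur.execute("GRANT SELECT ON TABLE sales.kudu_transactions TO ROLE analyst_role")
cur.execute(
    "GRANT SELECT(customer_id, amount) ON TABLE sales.kudu_orders TO ROLE analyst_role"
)
```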

Read more

Demystifying Spark Jobs to Optimize for Cost and Performance

Categories: Performance Spark

Apache Spark is one of the most popular engines for distributed data processing on Big Data clusters. Spark jobs come in all shapes, sizes, and cluster form factors: from tens to thousands of nodes and executors, from seconds to hours or even days of job duration, from megabytes to petabytes of data, and from simple scans to complicated analytical workloads. Throw in a growing number of streaming workloads on top of a huge body of batch and machine learning jobs —
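
As a taste of the knobs involved, here is a minimal sketch (our own, with purely illustrative values) of giving a job an explicit shape rather than inheriting cluster defaults; executor count, cores, memory, and shuffle parallelism are among the settings such cost and performance tuning turns on.

```python
# A minimal sketch with illustrative values: sizing a Spark job explicitly
# instead of relying on defaults. Tune these against the Spark UI's view
# of stages, tasks, and shuffle volumes for your own workload.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cost-performance-sketch")
    .config("spark.executor.instances", "8")      # number of executors
    .config("spark.executor.cores", "4")          # cores per executor
    .config("spark.executor.memory", "8g")        # heap per executor
    .config("spark.sql.shuffle.partitions", "64") # post-shuffle parallelism
    .getOrCreate()
)

df = spark.range(0, 10_000_000)
df.selectExpr("sum(id) AS total").show()  # inspect this job in the Spark UI
```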

Read more

Using Native Math Libraries to Accelerate Spark Machine Learning Applications

Categories: AI and Machine Learning CDH Performance Spark

[Editor’s note: The original version of this article was published as part of our Guru How-To series for Data Science. Be sure to also check out the series for Cloudera Data Warehouse.]

 

Spark ML is one of the dominant frameworks for many major machine learning algorithms, such as Alternating Least Squares (ALS) for recommender systems, Principal Component Analysis (PCA), and Random Forest.
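
For a quick, hedged illustration (assuming a Spark 2.x / CDH-era classpath where MLlib binds BLAS through netlib-java), the snippet below first reports which BLAS implementation the JVM actually loaded (the pure-Java fallback appears as F2jBLAS when no native library is found) and then runs ALS, one of the linear-algebra-heavy algorithms named above; the tiny ratings dataset is invented for the example.

```python
# A hedged sketch: report the BLAS implementation MLlib's JVM side loaded
# (assumes netlib-java on the classpath, as in Spark 2.x / CDH), then run
# ALS on a tiny invented dataset. With native OpenBLAS or Intel MKL
# installed, the printed class is NativeSystemBLAS rather than F2jBLAS.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("native-blas-sketch").getOrCreate()

print(
    spark._jvm.com.github.fommil.netlib.BLAS.getInstance().getClass().getName()
)

ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 0, 1.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=5, maxIter=5, seed=42)
model = als.fit(ratings)
model.recommendForAllUsers(2).show(truncate=False)
```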

Read more