Tag Archives: data analysis

Deep learning with Apache MXNet on Cloudera Data Science Workbench

Categories: CDH, Cloudera Data Science Workbench, Data Science

With the abundance of deep learning frameworks available today, it can be difficult to know which one to choose for any particular application. Given the contrasting strengths and weaknesses of these frameworks, the ability to work with, and switch between, more than one is particularly important. Recent Cloudera blogs have shown examples of applying deep learning on the Cloudera ecosystem using the popular frameworks Deeplearning4j, BigDL, and Keras+TensorFlow.
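For a taste of MXNet itself, here is a minimal sketch using its R package on synthetic data; the network shape, hyperparameters, and data are illustrative assumptions, not code from the post.

```r
library(mxnet)

# Synthetic two-class data: 200 examples with 10 features each
set.seed(42)
train_x <- matrix(rnorm(2000), nrow = 200, ncol = 10)
train_y <- sample(0:1, 200, replace = TRUE)

# Declare a small multilayer perceptron symbolically, MXNet-style
data <- mx.symbol.Variable("data")
fc1  <- mx.symbol.FullyConnected(data, num_hidden = 64)
act1 <- mx.symbol.Activation(fc1, act_type = "relu")
fc2  <- mx.symbol.FullyConnected(act1, num_hidden = 2)
net  <- mx.symbol.SoftmaxOutput(fc2)

# Train on CPU; on a GPU-equipped node, mx.gpu() could be used instead
model <- mx.model.FeedForward.create(
  net, X = train_x, y = train_y,
  ctx = mx.cpu(), num.round = 10, array.batch.size = 50,
  learning.rate = 0.05, array.layout = "rowmajor"
)
```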

Read more

Understanding how Deep Learning learns to play SET®

Categories: CDH, Cloudera Data Science Workbench, Data Science

In the past few years, deep learning has seen incredible success in image recognition applications. In this post, I examine how to train a convolutional neural network to recognize card images from the game SET®, explore the structure of the model to get some insight into what it is “seeing”, and present a webcam application that uses the deployed model in a near-real-time setting.
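For a sense of what such a model can look like, here is a minimal sketch of a small convolutional classifier using the keras R interface; the post's actual framework, architecture, and input size may differ. The 81-unit output layer assumes one class per distinct card (the SET deck has 81 cards).

```r
library(keras)

# A small CNN: two convolution/pooling stages, then a dense classifier head.
# The 64x64 RGB input shape is an illustrative assumption.
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(64, 64, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 81, activation = "softmax")  # one output per card

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)
```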

SET is a card game where the objective is to find triples of cards…

Read more

Big Data Architecture Workshop

Categories: Training

Since the birth of big data, Cloudera University has been teaching developers, administrators, analysts, and data scientists how to use big data technologies. We have taught over 50,000 folks the details of using Apache technologies such as HDFS, MapReduce, Hive, Impala, Sqoop, Flume, Kafka, core Spark, Spark SQL, Spark Streaming, and Spark MLlib.

We’ve taught administrators how to plan, install, monitor, and troubleshoot clusters, and we’ve shown analysts the power of SQL over large, diverse data sets.

Read more

How to Distribute your R code with sparklyr and Cloudera Data Science Workbench

Categories: CDH, How-to, Spark

sparklyr gives R users a great opportunity to leverage the distributed computation power of Apache Spark without a lot of additional learning. sparklyr acts as a dplyr backend, so R users can write almost the same code for both local computation and distributed computation over Spark SQL.
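As a rough illustration (the connection settings and data are assumptions), the same dplyr verbs that run on a local data frame can be translated to Spark SQL and executed on the cluster:

```r
library(sparklyr)
library(dplyr)

# Connect to Spark; master and other settings depend on your cluster
sc <- spark_connect(master = "yarn-client")

# Ship a small local data frame to Spark for demonstration
mtcars_tbl <- copy_to(sc, mtcars, "mtcars")

# dplyr translates this pipeline to Spark SQL and runs it remotely
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg, na.rm = TRUE)) %>%
  arrange(cyl)
```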


Since sparklyr v0.6, we can run R code across our Spark cluster with spark_apply().
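Here is a minimal sketch of spark_apply(), assuming a local-mode connection purely for illustration:

```r
library(sparklyr)

sc <- spark_connect(master = "local")

# Spread a local data frame over four partitions
mtcars_tbl <- sdf_copy_to(sc, mtcars, "mtcars_parts", repartition = 4)

# The function below runs in an R process on each worker, once per partition
result <- spark_apply(mtcars_tbl, function(df) {
  df$kpl <- df$mpg * 0.425144  # miles per gallon -> kilometres per litre
  df
})

result
spark_disconnect(sc)
```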

Read more