Data analytics is increasingly being brought to bear to treat human disease, but as more and more health data is stored in computer databases, one significant challenge is how to perform analyses across these disparate databases. In this post I take a look at the Observational Health Data Sciences and Informatics (or OHDSI, pronounced “Odyssey”) program, which was formed to address this challenge and today encompasses 1.26 billion patient records stored across 64 databases in 17 countries.
With the abundance of deep learning frameworks available today, it can be difficult to know which to choose for any particular application. Given the contrasting strengths and weaknesses of these frameworks, the ability to work with and switch between more than one is particularly important. Recent Cloudera blogs have shown examples of applying deep learning on the Cloudera ecosystem using the popular frameworks Deeplearning4j, BigDL, and Keras+TensorFlow.
In the past few years, deep learning has seen incredible success in image recognition applications. In this post I examine how to train a convolutional neural network to recognize playing card images from a game called SET®, explore the structure of the model to get some insight into what it is “seeing”, and present a webcam application that uses the deployed model in a near-real-time setting.
SET is a card game where the objective is to find triples of cards, called SETs, in which each of four attributes (number, shape, color, and shading) is either the same on all three cards or different on all three.
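The post's actual architecture isn't reproduced here, but as a rough illustration, a minimal convolutional classifier for single-card images might look like the following sketch using the keras package in R. The 96x96 RGB input size is an assumption; the 81 output classes correspond to the 81 distinct cards in a SET deck.

```r
# A minimal sketch, not the post's exact model.
# Assumption: inputs are 96x96 RGB crops containing a single card.
library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(96, 96, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 81, activation = "softmax")  # one class per card in the deck

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)
```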
Since the birth of big data, Cloudera University has been teaching developers, administrators, analysts, and data scientists how to use big data technologies. We have taught over 50,000 folks the details of using Apache technologies such as HDFS, MapReduce, Hive, Impala, Sqoop, Flume, Kafka, Core Spark, Spark SQL, Spark Streaming, and Spark MLlib.
We’ve taught administrators how to plan, install, monitor, and troubleshoot clusters.
sparklyr gives R users a great opportunity to leverage the distributed computing power of Apache Spark without much additional learning. sparklyr acts as a backend for dplyr, so R users can write almost the same code for local computation and for distributed computation over Spark SQL, as the sketch below shows.
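As a minimal sketch, assuming a local Spark connection and the built-in mtcars data set standing in for real data, the same dplyr verbs that work on a local data frame also work on a Spark table:

```r
library(sparklyr)
library(dplyr)

# Connect to Spark (local mode here; point master at your cluster in practice)
sc <- spark_connect(master = "local")

# Copy a local data frame into Spark
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# The same dplyr pipeline works on the Spark table as on a local data frame;
# sparklyr translates it to Spark SQL behind the scenes
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg, na.rm = TRUE)) %>%
  arrange(cyl) %>%
  collect()
```

Until `collect()` is called, the pipeline is evaluated lazily in Spark rather than in R, so only the aggregated result is pulled back to the R session.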
Since sparklyr v0.6, we can run R code across our Spark cluster with spark_apply().
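A minimal sketch of `spark_apply()`, again assuming a local connection and mtcars as placeholder data; the supplied function runs once per partition on the worker nodes, which therefore need R installed:

```r
library(sparklyr)

sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# spark_apply() ships the R closure to each partition of the Spark DataFrame;
# the function receives a data.frame and must return a data.frame
result <- spark_apply(mtcars_tbl, function(df) {
  df$hp_per_cyl <- df$hp / df$cyl  # arbitrary R code executed on the workers
  df
})

result
```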