Category Archives: Hadoop

Cloudera Enterprise 5.10 is Now Available

Categories: CDH, Cloud, Cloudera Manager, Cloudera Navigator, Hadoop, Hue, Kudu

Cloudera is proud to announce that Cloudera Enterprise 5.10 is now generally available (GA). The highlights of this release include the GA of Apache Kudu, the new columnar storage engine; improved cloud performance and cost optimizations; and cloud-native data governance for Amazon S3.

As usual, there are also a number of quality enhancements and bug fixes (learn more about our multi-dimensional hardening/QA process) and other improvements across the stack. Here is a partial list of what’s included (see the Release Notes for a full list):

  • GA of Apache Kudu

Read More

How to secure ‘Internet exposed’ Apache Hadoop

Categories: Hadoop, How-to, Platform Security & Cybersecurity

You may have heard of the recent (and ongoing) hacks targeting open source data platforms such as MongoDB and Apache Hadoop. From what we know, an unknown number of hackers scanned for internet-accessible installations that had been set up with the default, non-secure configuration. Upon finding an exposed system, the hackers then accessed it and in some cases deleted the files or held them for ransom.

These attacks were not technologically sophisticated,

Read More

How-to: Deploy a Secure Enterprise Data Hub on Microsoft Azure – Part 1

Categories: CDH, Cloud, Hadoop, How-to, Ops and DevOps, Platform Security & Cybersecurity

Learn how to use Cloudera Director, Microsoft Active Directory (AD DS, AD CS, AD DNS), Samba, and SSSD to deploy a secure EDH cluster for workloads in the public cloud.

Authenticating users is the first line of defense we recommend for Apache Hadoop. As in most, if not all, relational databases, a user must present credentials to validate their identity, and that validated identity is required to access any data managed by the system.
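In practice, the first step of that authentication story is turning on Kerberos for the cluster. As a rough sketch of what that looks like at the Hadoop level (Cloudera Manager sets these properties for you when you enable Kerberos through its wizard; the Kerberos realm itself would come from the Active Directory setup described above):

    <!-- core-site.xml: switch Hadoop from "simple" auth (trusted usernames) to Kerberos -->
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <!-- also enforce service-level authorization checks -->
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>

With this in place, users obtain a Kerberos ticket (for example, via kinit against AD) before HDFS or YARN will serve their requests.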

Read More

HDFS DataNode Scanners and Disk Checker Explained

Categories: CDH, Hadoop, HDFS

As many of us know, data in HDFS is stored on DataNodes, and HDFS tolerates DataNode failures by replicating the same data across multiple DataNodes. But what exactly happens if some of a DataNode's disks are failing? This blog post explains the background work the DataNodes do to help HDFS manage its data across multiple DataNodes for fault tolerance. In particular, we will explain the block scanner, the volume scanner,
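For readers who want to peek ahead, the scanners and the disk checker are tuned through a handful of hdfs-site.xml properties. A minimal sketch with their commonly cited defaults (values are illustrative; check the documentation for your CDH release):

    <!-- how often the block scanner re-verifies each block replica, in hours (3 weeks) -->
    <property>
      <name>dfs.datanode.scan.period.hours</name>
      <value>504</value>
    </property>
    <!-- I/O throttle for the volume scanner, in bytes per second -->
    <property>
      <name>dfs.block.scanner.volume.bytes.per.second</name>
      <value>1048576</value>
    </property>
    <!-- number of failed volumes a DataNode tolerates before it shuts itself down -->
    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>0</value>
    </property>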

Read More

How-to: Automate Your sparklyr Environment with Cloudera Director

Categories: Cloudera Manager, Data Science, Hadoop, How-to, Ops and DevOps, Spark

Since the launch of sparklyr, working with Apache Spark on Apache Hadoop has become much easier for R users. sparklyr provides a dplyr interface to Spark and lets users leverage key machine learning algorithms from Spark MLlib and H2O Sparkling Water. This greatly lowers the barrier to entry for R users adopting Spark as a tool for big data and should go a long way toward enabling R workloads to migrate to Hadoop.
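To give a feel for that interface, here is a minimal sketch in R (assuming sparklyr and dplyr are installed on a gateway node that can reach the cluster via YARN; the table, filter, and model are purely illustrative):

    library(sparklyr)
    library(dplyr)

    # Connect to Spark on the cluster; use master = "local" for a laptop test
    sc <- spark_connect(master = "yarn-client")

    # Copy an R data frame into Spark and query it with ordinary dplyr verbs
    cars_tbl <- copy_to(sc, mtcars, "mtcars_spark")
    cars_tbl %>%
      filter(cyl == 8) %>%
      summarise(avg_hp = mean(hp))

    # Fit a Spark MLlib model through sparklyr's ml_* functions
    fit <- ml_linear_regression(cars_tbl, mpg ~ wt + cyl)

    spark_disconnect(sc)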

Read More