It has been almost a month since Hadoop World: NYC, and things are just starting to get back to normal here at Cloudera HQ. We were thrilled to see over 500 Apache Hadoop enthusiasts descend upon New York City for the first major Hadoop event on the East Coast. The variety of applications and the number of companies involved were mind-boggling. For those of you who weren't able to join us, we hope to see you at another event soon!
Around the world, individuals contribute to Hadoop and build community around the technology. This kind of collaboration is at the heart of open source software, and here at Cloudera, we feel privileged to be a part of the Apache Hadoop community.
Getting together in person is a great way to build community. On global projects, though, sharing information from those gatherings with people who are far away is a big challenge.
At Hadoop World NYC, Cloudera announced a new product: Cloudera Desktop. Over the past several months, this product has been my principal concern here at Cloudera, where I'm the UI lead (actually, until about a week ago, I was the only UI developer).
If you aren’t familiar with Cloudera Desktop, you should check out this brief screencast:
Every day, we hear about people doing amazing things with Apache Hadoop. The variety of applications across industries is clear evidence that Hadoop is radically changing the way data is processed at scale. To drive that point home, we’re excited to host a guest blog post from the University of Maryland’s Michael Schatz. Michael and his team have built a system using Hadoop that drives the cost of analyzing a human genome below $100 —
Today at Hadoop World NYC, we’re announcing the availability of Cloudera Desktop, a unified and extensible graphical user interface for Hadoop. The product is free to download and can be used with either internal clusters or clusters running on public clouds.
At Cloudera, we’re focused on making Hadoop easy to install, configure, manage, and use for all organizations. While there exist many utilities for developers who work with Hadoop,