As Apache Hadoop continues to turn heads at startups and big enterprises alike, Cloudera has received several requests to offer certification in addition to our popular training programs.
Certification is a critical component of any software ecosystem, especially for open source projects with quickly expanding user bases. Certification allows developers to ensure their skills are up to date, and allows employers and customers to confidently identify individuals who are up for the challenge of solving problems with Hadoop.
Lately, we’ve been spending a lot of time on the East Coast, and one thing is clear: Hadoop is everywhere.
Hadoop usage on the East Coast tends to be slightly different. There are still web companies with armies of tech gurus, but there are also many “regular” industries and enterprises using and exploring Hadoop. It’s time to get together and learn a thing or two from one another.
Hadoop World: NYC 2009 will take place on October 2nd,
Administrators of HDFS clusters understand that the NameNode’s metadata is among the most precious bits they manage. While you might have hundreds of terabytes of information stored in HDFS, that metadata is the key that allows this information, spread across several million “blocks,” to be reassembled into coherent, ordered files.
The techniques to preserve HDFS NameNode metadata are well established. You should store several copies across many separate local hard drives,
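To make this concrete: redundancy is typically configured through the dfs.name.dir property in hdfs-site.xml, which accepts a comma-separated list of directories; the NameNode writes its image and edit log to every one of them. A minimal sketch, with illustrative paths (the NFS mount point is an assumption, not a requirement):

    <!-- hdfs-site.xml: store NameNode metadata in several places at once.
         Paths are illustrative; /mnt/nfs/... assumes a remote NFS mount. -->
    <property>
      <name>dfs.name.dir</name>
      <value>/disk1/dfs/name,/disk2/dfs/name,/mnt/nfs/dfs/name</value>
    </property>

If any one copy is lost or corrupted, the NameNode can be restarted against a surviving directory.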
This piece is based on the talk “Practical MapReduce” that I gave at Hadoop User Group UK on April 14.
1. Use an appropriate MapReduce language
There are many languages and frameworks that sit on top of MapReduce, so it’s worth thinking up front which one to use for a particular problem. There is no one-size-fits-all language; each has different strengths and weaknesses.
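To make the trade-off concrete, here is a sketch of just the map half of the canonical word count, written against Hadoop’s Java API (the class name is mine, not from the talk). In Pig or Hive the same logic is a line or two of script; the Java version is far more verbose, but gives you full control over types, counters, and performance:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits (word, 1) for every token; a reducer would sum the 1s per word.
    public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
          if (!token.isEmpty()) {
            word.set(token);
            context.write(word, ONE);
          }
        }
      }
    }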
There’s been a lot of buzz about Apache Hadoop lately. Just the other day, some of our friends at Yahoo! reclaimed the terasort record from Google using Hadoop, and the folks at Facebook let on that they ingest 15 terabytes a day into their 2.5 petabyte Hadoop-powered data warehouse.
But many people still find themselves wondering just how all this works, and what it means to them. We get a lot of common questions while working with customers,