Job Scheduling in Apache Hadoop

(guest blog post by Matei Zaharia)

When Apache Hadoop started out, it was designed mainly for running large batch jobs such as web indexing and log mining. Users submitted jobs to a queue, and the cluster ran them in order. However, as organizations placed more data in their Hadoop clusters and developed more computations they wanted to run, another use case became attractive: sharing a MapReduce cluster between multiple users.