Prefer IntelliJ IDEA over Eclipse? We’ve got you covered: learn how to get ready to contribute to Apache Hadoop via an IntelliJ project.
It’s generally useful to have an IDE at your disposal when you’re developing and debugging code. When I first started working on HDFS, I used Eclipse, but I’ve recently switched to JetBrains’ IntelliJ IDEA (specifically, version 13.1 Community Edition).
My main motivation was the ease of project setup in the face of Maven and Google Protocol Buffers (used in HDFS). The latter is an issue because the code generated by
protoc ends up in one of the target subdirectories, which can be a configuration headache. The problem is not that Eclipse can’t handle getting these files into the classpath — it’s that in my personal experience, configuration is cumbersome and it takes more time to set up a new project whenever I make a new clone. Conversely, IntelliJ’s
Import Maven Project functionality seems to “just work”.
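To see why the generated code is a classpath headache: after a build, the protoc output is scattered across per-module target subdirectories. As a rough illustration (the exact paths vary by module and Hadoop version), you can list them from the top of a built clone:

```shell
# List the directories holding protobuf-generated Java sources. Each module
# keeps its own copy under target/generated-sources, so an IDE has to add
# every one of these directories to the build path.
find . -type d -path '*/target/generated-sources*'
```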
In this how-to, you’ll learn the simple steps I use to create a new IntelliJ project for writing and debugging Hadoop code. (For Eclipse users, there is a similar post available here.) It assumes you already know how to clone, build, and run in a Hadoop repository using
git, and so on. These instructions have been tested with the Hadoop upstream trunk (github.com/apache/hadoop-common.git).
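The prerequisite clone-and-build steps can be sketched on the command line as follows (the repository URL is the one given above; skipping tests on the initial build is just one common choice, not a requirement):

```shell
# Clone the Hadoop source tree; a hadoop-common directory will appear
# at the top level of the clone.
git clone https://github.com/apache/hadoop-common.git
cd hadoop-common

# Full clean build. -DskipTests shortens the initial build; drop it
# if you want the unit tests to run as well.
mvn clean install -DskipTests
```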
- Make a clone of a Hadoop repository (or use an existing sandbox). There should be a hadoop-common directory at the top level when you’re finished.
- Do a clean and a full build using the mvn command in the CLI. I use mvn clean install, but you should do whatever suits you.
- Start IntelliJ.
- Select File > Import Project…
- A “Select File or Directory to Import” wizard screen will pop up. Select the hadoop-common directory from your repository clone. Click OK.
- An “Import Project” wizard screen will appear. Check “Import project from external model” and select “Maven”. Click Next.
- On the next screen, you don’t need to select any options. Click Next.
- On the next wizard screen, you do not need to select any profiles. Click Next.
- On the next wizard screen, org.apache.hadoop.hadoop-main:N.M.P-SNAPSHOT will already be selected. Click Next.
- On the next wizard screen, ensure that IntelliJ is pointing at a JDK 7 SDK (download JDK 7 here) for the JDK home path. Click Next.
- On the next wizard screen, give your project a name. I typically set the “Project file location” to a sibling of the hadoop-common directory, but that’s optional. Click Finish.
- IntelliJ will then ask: “New projects can either be opened in a new window or replace the project in the existing window. How would you like to open the project?” I typically select “New Window” because it lets me keep different projects in different windows. Select one or the other.
- IntelliJ then imports the project.
- You can check that the project builds OK using Build > Rebuild Project. In the “Messages” panel I typically use the little icon on the left side of that window to Hide Warnings.
You’re ready for action. It’s easy to set up multiple projects that refer to different clones: just repeat the same steps above. Click on Window and you can switch between them.
A typical workflow for me is to edit either in my favorite editor or in IntelliJ. If the former, IntelliJ is very good about updating the source that it shows you. After editing, I’ll do Build > Rebuild Project, work out any compilation errors, and then either use the Debug or Run buttons. I like the fact that IntelliJ doesn’t make me mess around with Run/Debug configurations.
Some of my favorite commands and keystrokes (generally configurable) are:
- c-sh-N (Navigate > File), which lets me easily search for a file and open it
- c-F2 and c-F5 (stop the currently running process, and debug failed unit tests)
- In the Run menu, F7 (step into), F8 (step over), Shift-F8 (step out), and F9 (resume program) while in the debugger
- In the View > Tool Windows menu: Alt-1 (Project) and Alt-7 (Structure)
If you ever run into an error that says you need to add
webapps/hdfs to your classpath (which has happened to me when running some HDFS unit tests), taking the following steps should fix it (credit goes to this post from Stack Overflow):
- Select File > Project Structure…
- Click on Modules under “Project Settings.”
- Select the hadoop-hdfs module.
- Select the Dependencies tab.
- Click the + sign on right side and select “Jars or directories.”
- From your clone hierarchy, select the ../hadoop-hdfs/hadoop-hdfs-project/target directory. Click OK.
- Check all of the directory categories (classes, jar directory, source archive directory). Click OK.
- Click OK (again).
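Independently of the IDE setup, you can also run a single HDFS unit test from the CLI using Maven Surefire’s test filter; the test class name below is only an illustration, so substitute whichever test was failing for you:

```shell
# Run one HDFS unit test via the Surefire -Dtest filter.
cd hadoop-hdfs-project/hadoop-hdfs
mvn test -Dtest=TestDFSShell
```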
Congratulations, you are now ready to contribute to Hadoop via an IntelliJ project!
Charles Lamb is a Software Engineer at Cloudera, currently working on HDFS.