Cloudera Impala: Real-Time Queries in Apache Hadoop, For Real

After a long period of intense engineering effort and user feedback, we are very pleased and proud to announce the Cloudera Impala project. This technology is revolutionary for Hadoop users, and we do not take that claim lightly.

When Google published its Dremel paper in 2010, we were as inspired as the rest of the community by the technical vision to bring real-time, ad hoc query capability to Apache Hadoop, complementing traditional MapReduce batch processing. Today, we are announcing a fully functional, open-sourced codebase that delivers on that vision – and, we believe, a bit more – which we call Cloudera Impala. An Impala binary is now available in public beta form, but if you would prefer to test-drive Impala via a pre-baked VM, we have one of those for you, too. (Links to all downloads and documentation are here.) You can also review the source code and testing harness on GitHub right now.

Impala raises the bar for query performance while retaining a familiar user experience. With Impala, you can query data stored in HDFS or Apache HBase – using SELECT, JOIN, and aggregate functions – in real time. Furthermore, it uses the same metadata, SQL syntax (Hive SQL), ODBC driver, and user interface (Hue Beeswax) as Apache Hive, providing a familiar and unified platform for batch-oriented and real-time queries. (For that reason, Hive users can adopt Impala with little setup overhead.) The first beta drop includes support for text files and SequenceFiles; SequenceFiles can be compressed with Snappy, GZIP, or BZIP (Snappy is recommended for maximum performance). Support for additional formats – including Avro, RCFile, LZO text files, and the Parquet columnar format – is planned for the production drop.
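
To make the shared SQL surface concrete, here is the kind of SELECT/JOIN/aggregate statement involved. The snippet runs the query against an in-memory SQLite database purely so it is self-contained; the table and column names are invented, and against Impala you would submit the same statement through impala-shell, ODBC, or Hue Beeswax rather than sqlite3:

```python
import sqlite3

# Toy stand-ins for two tables that would normally live in HDFS or HBase.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (customer_id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 10, 25.0), (2, 10, 75.0), (3, 20, 40.0);
    INSERT INTO customers VALUES (10, 'EMEA'), (20, 'APAC');
""")

# The same SELECT / JOIN / aggregate shape that Impala accepts in Hive SQL.
rows = conn.execute("""
    SELECT c.region, COUNT(*) AS n_orders, SUM(o.amount) AS revenue
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('EMEA', 2, 100.0), ('APAC', 1, 40.0)]
```

Because the syntax is Hive SQL, a statement like this needs no rewriting when moving between Hive batch execution and Impala real-time execution.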

To avoid latency, Impala circumvents MapReduce and accesses the data directly through a specialized distributed query engine very similar to those found in commercial parallel RDBMSs. The result is order-of-magnitude faster performance than Hive, depending on the type of query and configuration. (See the FAQ below for more details.) This performance improvement has been confirmed by several large companies that have tested Impala on real-world workloads for several months now.

A high-level architectural view is below:

There are many advantages to this approach over alternatives for querying Hadoop data, including:

  • Thanks to local processing on data nodes, network bottlenecks are avoided.
  • A single, open, and unified metadata store can be utilized.
  • Costly data format conversion is unnecessary and thus no overhead is incurred.
  • All data is immediately queryable, with no delays for ETL.
  • All hardware is utilized for Impala queries as well as for MapReduce.
  • Only a single machine pool is needed to scale.

We encourage you to read the documentation for further technical details.

Finally, we’d like to answer some questions that we anticipate will be popular:

Is Impala open source?
Yes, Impala is 100% open source (Apache License). You can review the code for yourself on GitHub today.

How is Impala different from Dremel?
The first and principal difference is that Impala is open source and available for everyone to use, whereas Dremel is proprietary to Google.

Technically, Dremel achieves interactive response times over very large data sets through the use of two techniques:

  • A novel columnar storage format for nested relational data (that is, data with nested structures)
  • Distributed scalable aggregation algorithms, which allow the results of a query to be computed on thousands of machines in parallel.

The latter is borrowed from techniques developed for parallel DBMSs, which also inspired the creation of Impala. Unlike Dremel as described in the 2010 paper, which could handle only single-table queries, Impala already supports the full set of join operators – one of the features that make SQL so popular.

In order to realize the full performance benefits demonstrated by Dremel, Hadoop will shortly have an efficient columnar binary storage format called Parquet. But unlike Dremel, Impala supports a range of popular file formats. This lets users run Impala on their existing data without having to “load” or transform it, and lets them decide whether to optimize for flexibility or for pure performance.

To sum up: Impala plus Parquet will achieve the query performance described in the Dremel paper while surpassing it in SQL functionality.

How much faster are Impala queries than Hive ones, really?
The precise amount of performance improvement is highly dependent on a number of factors:

  • Hardware configuration: Impala is generally able to take full advantage of hardware resources and specifically generates less CPU load than Hive, which often translates into higher observed aggregate I/O bandwidth than with Hive. Impala of course cannot go faster than the hardware permits, so any hardware bottlenecks will limit the observed speedup. For purely I/O-bound queries, we typically see performance gains in the range of 3x-4x.
  • Complexity of the query: Queries that require multiple MapReduce phases in Hive or require reduce-side joins will see a higher speedup than, say, simple single-table aggregation queries. For queries with at least one join, we have seen performance gains of 7x-45x.
  • Availability of main memory as a cache for table data: If the data accessed through the query comes out of the cache, the speedup will be more dramatic thanks to Impala’s superior efficiency. In those scenarios, we have seen speedups of 20x-90x over Hive even on simple aggregation queries.

Is Impala a replacement for MapReduce or Hive – or for traditional data warehouse infrastructure, for that matter?
No. There will continue to be many viable use cases for MapReduce and Hive (for example, for long-running data transformation workloads) as well as traditional data warehouse frameworks (for example, for complex analytics on limited, structured data sets). Impala is a complement to those approaches, supporting use cases where users need to interact with very large data sets, across all data silos, to get focused result sets quickly.

Does the Impala Beta Release have any technical limitations?
As mentioned previously, supported file formats in the first beta drop include text files and SequenceFiles, with many other formats to be supported in the upcoming production release. Furthermore, currently all joins are done in a memory space no larger than that of the smallest node in the cluster; in production, joins will be done in aggregate memory. Lastly, no UDFs are possible at this time.

What are the technical requirements for the Impala Beta Release?
You will need to have CDH4.1 installed on RHEL/CentOS 6.2. We highly recommend the use of Cloudera Manager (Free or Enterprise Edition) to deploy and manage Impala because it takes care of distributed deployment and monitoring details automatically.

What is the support policy for the Impala Beta Release?
If you are an existing Cloudera customer with a bug, you may raise a Customer Support ticket and we will attempt to resolve it on a best-effort basis. If you are not an existing Cloudera customer, you may use our public JIRA instance or the impala-user mailing list, which will be monitored by Cloudera employees.

When will Impala be generally available for production use?
A production drop is planned for the first quarter of 2013. Customers may obtain commercial support in the form of a Cloudera Enterprise RTQ subscription at that time.

We hope that you take the opportunity to review the Impala source code, explore the beta release, download and install the VM, or any combination of the above. Your feedback in all cases is appreciated; we need your help to make Impala even better.

We will bring you further updates about Impala as we get closer to production availability. (Update: Read about Impala 1.0.)

Impala resources:
Impala source code
Impala downloads (Beta Release and VM)
Impala documentation
Public JIRA
Impala mailing list
Free Impala training (Screencast)

(Added 10/30/2012) Third-party articles about Impala:
- GigaOm: Real-time query for Hadoop democratizes access to big data analytics (Oct. 22, 2012)
- Wired: Man Busts Out of Google, Rebuilds Top-Secret Query Machine (Oct. 24, 2012)
- InformationWeek: Cloudera Debuts Real-Time Hadoop Query (Oct. 24, 2012)
- GigaOm: Cloudera Makes SQL a First-Class Citizen on Hadoop (Oct. 24, 2012)
- ZDNet: Cloudera’s Impala Brings Hadoop to SQL and BI (Oct. 25, 2012)
- Wired: Marcel Kornacker Profile (Oct. 29, 2012)
- Dr. Dobbs: Cloudera Impala – Processing Petabytes at The Speed Of Thought (Oct. 29, 2012)

Marcel Kornacker is the architect of Impala. Prior to joining Cloudera, he was the lead developer for the query engine of Google’s F1 project.

Justin Erickson is the product manager for Impala.

17 Responses
  • Michael Hausenblas / October 24, 2012 / 11:58 AM

    Great stuff. Can you please clarify how Impala compares to the emerging work in the Apache Drill Incubator [1]?

    Cheers,
    Michael

    [1] http://incubator.apache.org/drill/

  • Sonal / October 25, 2012 / 11:44 AM

    Very excited to see Impala. The Dremel paper outlines efficient columnar storage for nested data. How does Impala achieve its speeds if data is not to be loaded in to the system?

    Thanks
    Sonal

  • Marcel Kornacker / October 31, 2012 / 8:38 PM

    To address Michael’s question:

    Drill has not been released yet, so it is premature to attempt a comparison. We will of course provide one once the beta is available.

    Marcel

  • Marcel Kornacker / October 31, 2012 / 9:06 PM

    To address Sonal’s question:

The performance advantage you will see with Impala will always depend on the storage format of the data, among other things. Impala tries hard to be fast on ASCII-encoded data (text files and SequenceFiles), but of course the parsing overhead will always show up as a performance penalty compared to something like ColumnIO or Trevni. Impala will also support Trevni in the GA release, as mentioned in the blog post.

    Regarding data loading: we are working on background conversion into Trevni, in a way that enables a logical table to be backed by a mix of data formats. New data would show up in, say, sequencefile format and eventually get converted into the more efficient Trevni columnar format, but all of the data would be queryable at all times, regardless of format.
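
One way to picture that mixed-format idea is per-file reader dispatch: the table metadata records each file's format, a format-specific reader decodes it, and every file stays queryable while background conversion proceeds. A hypothetical sketch (all names and data invented; this is not Impala code):

```python
def read_text(payload):
    # Text files: one comma-separated record per line.
    return [line.split(",") for line in payload.splitlines()]

def read_columnar(payload):
    # Columnar stand-in: one list per column; re-zip columns into rows.
    return [list(row) for row in zip(*payload)]

# Format tag -> reader; the logical table can mix formats file by file.
READERS = {"text": read_text, "columnar": read_columnar}

table_files = [
    ("text", "a,1\nb,2"),                    # new data lands as text
    ("columnar", [["c", "d"], ["3", "4"]]),  # older data already converted
]

# A scan dispatches on each file's recorded format; all rows are visible
# regardless of how far the background conversion has progressed.
rows = [r for fmt, payload in table_files for r in READERS[fmt](payload)]
print(rows)  # [['a', '1'], ['b', '2'], ['c', '3'], ['d', '4']]
```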

    Marcel

  • RickW / November 07, 2012 / 2:13 AM

How do I download and install Impala without Cloudera Manager? I have an existing small Hadoop cluster running Hive, and I would like to test Impala to see the performance difference.

  • Justin Kestelyn (@kestelyn) / November 12, 2012 / 5:40 PM

    Rick,

    Instructions for manual install are here:

    https://ccp.cloudera.com/display/IMPALA10BETADOC/Installing+Impala

  • Alex B / November 22, 2012 / 8:25 AM

Can you please comment on how Impala compares to Hadapt in terms of architecture? As far as I understand, in the case of Hadapt (and I could be wrong, of course) some transformation of the data into PostgreSQL is needed. That does not seem to be the case with Impala (at least in the current implementation)?

    Thanks,
    Alex

  • Kang Xiao / December 03, 2012 / 6:59 AM

Great stuff! We have tried it, and Impala shows about a 2x speedup vs. Hive on our simple query on a test dataset.

Could Marcel explain more about the main reasons that make Impala faster?
1. Columnar storage: it seems that Hive could also benefit from columnar storage compared with text files.
2. Distributed scalable aggregation algorithms: are there details and examples of the algorithms?
3. Joins: if the dataset cannot fit into memory, how does Impala stay faster once it uses disk?
4. Main memory as a cache for table data: is there a cache in Impala for recently accessed data?

    Thanks!
    Kang

  • Marcel Kornacker / December 20, 2012 / 6:02 PM

    Regarding Alex’s question:

    That’s correct, Impala does read data directly from HDFS and HBase. Impala also relies on Apache Hive’s metastore for the mapping of files into tables, which means you can re-use your schema definitions if you’re already querying Hadoop through Hive.

Hadapt runs a PostgreSQL instance on each data node, and appears to require some form of data movement (and duplication of data storage) between Postgres and HDFS, but for the specifics of that architecture I would recommend consulting the Hadapt website.

    Marcel

  • Marcel Kornacker / December 20, 2012 / 6:32 PM

    Regarding Kang’s questions:

    1. Yes, the Trevni columnar storage format will be an open and general purpose storage format that will be available for any of the Hadoop processing frameworks, including Hive, MapReduce, and Pig.

However, we expect to see greater performance gains from Trevni in Impala compared to what you’d see in Hive. The reason is that in a disk-based system, Impala is often I/O-bound, and a columnar format will reduce the total I/O volume, often by a substantial amount. Hive is often CPU-bound and will therefore benefit much less from a reduction in I/O volume.

    2. At the moment, Impala does a simple 2-stage aggregation: pre-aggregation is done by all executing backends, followed by a single, central merge aggregation step in the coordinator. In an upcoming release Impala will also support repartitioning aggregation, where the result of the pre-aggregation step is hash-partitioned across all executing backends, so that the total merge aggregation work is also distributed.
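
A toy sketch of that two-stage scheme (plain Python with invented data; Impala's real execution happens in a distributed C++ engine):

```python
# Stage 1: each executing backend pre-aggregates its local rows.
def pre_aggregate(rows):
    """Partial (sum, count) per group key on one backend's local data."""
    partial = {}
    for key, value in rows:
        s, c = partial.get(key, (0, 0))
        partial[key] = (s + value, c + 1)
    return partial

# Stage 2: the coordinator merges the partial aggregates into the result.
def merge_aggregate(partials):
    merged = {}
    for partial in partials:
        for key, (s, c) in partial.items():
            ms, mc = merged.get(key, (0, 0))
            merged[key] = (ms + s, mc + c)
    return merged

# Rows scanned locally by two backends: (group_key, value)
backend_a = [("x", 1), ("y", 2), ("x", 3)]
backend_b = [("y", 4), ("x", 5)]

result = merge_aggregate([pre_aggregate(backend_a), pre_aggregate(backend_b)])
print(result)  # {'x': (9, 3), 'y': (6, 2)}
```

The repartitioning variant mentioned above would hash-partition the partial results by group key across backends instead of funneling them all through one coordinator.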

    3. Impala currently has the limitation that the right-hand side table of a join needs to fit into the memory of every executing backend. In the GA release, this will be relaxed, so that the right-hand side table will only have to fit into the *aggregate* memory of all executing backends. Disk-based join algorithms won’t be available until after the GA release.
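
That strategy amounts to a broadcast hash join: every backend receives a full copy of the right-hand table, builds an in-memory hash table from it, and probes it with its local share of the left-hand rows. A single-process toy sketch (invented data; not Impala code):

```python
def broadcast_hash_join(left_rows, right_table, left_key, right_key):
    """Build a hash table on the (smaller) right-hand table, then probe
    it with each left-hand row. On a cluster, the right table is sent to
    every backend, which is why it must fit in each backend's memory."""
    build = {}
    for row in right_table:
        build.setdefault(row[right_key], []).append(row)
    joined = []
    for row in left_rows:
        for match in build.get(row[left_key], []):
            joined.append({**row, **match})  # inner join: keep matches only
    return joined

left = [{"id": 1, "cust": 10}, {"id": 2, "cust": 20}, {"id": 3, "cust": 99}]
right = [{"cust": 10, "region": "EMEA"}, {"cust": 20, "region": "APAC"}]
joined_rows = broadcast_hash_join(left, right, "cust", "cust")
print(joined_rows)  # id 3 has no matching customer and is dropped
```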

    4. Impala does not maintain its own cache; instead, it relies on the OS buffer cache in order to keep frequently-accessed data in memory.

    Marcel

  • pollie / December 24, 2012 / 6:51 AM

Will we finally be able to issue an UPDATE or DELETE statement, like in an RDBMS?

  • chetan conikee / December 29, 2012 / 10:58 AM

    Does Impala support JSON tuple UDF?

    https://cwiki.apache.org/Hive/languagemanual-udf.html#LanguageManualUDF-jsontuple

    We are currently dealing with very large JSON files that are being ETL’d (split into KV pairs) /queried upon thereafter.

    It would be great to store the JSON as a BLOB and use json tuple UDF to query upon.

  • Bill Blaney / February 03, 2013 / 11:19 AM

    This is exciting stuff. I’m currently researching Big Data products for a large government agency, and am looking forward to test driving Impala.

I would like to know if there is any intention of providing Impala support for Accumulo. If so, is there a timeframe in which to expect it? Our agency deals with large amounts of personal information that needs to be secured, and the combination of Impala and Accumulo could be just what’s needed to support Big Data in our environment.

  • Yan Zhou / April 29, 2013 / 11:38 AM

I learned from the Impala session at the 2013 Strata conference that Impala is working with Berkeley AMPLab on caching. I’m wondering when, and in which release, that feature is planned to ship. Thanks!

  • Dharmesh Purohit / May 07, 2013 / 8:31 AM

    Hi ,

I read this article and found it very nice. I have been using Hadoop and its subprojects for 2 years. I have one question, described below.

I have lots of data in my Netezza database. Can I load or connect my Netezza data with Impala for real-time querying, and then store the results back into Netezza / Hive / HBase?

    Thanks in advance.

    Dharmesh

  • serge / September 04, 2013 / 3:49 PM

Is it possible to run Impala with different Hadoop distributions?
