Working with UDFs in Apache Spark

Categories: Hadoop Spark

User-defined functions (UDFs) are a key feature of most SQL environments, extending the system’s built-in functionality.  UDFs allow developers to expose new functions to higher-level languages such as SQL while abstracting away their lower-level language implementations.  Apache Spark is no exception, and offers a wide range of options for integrating UDFs with Spark SQL workflows.

In this blog post, we’ll review simple examples of Apache Spark UDF and UDAF (user-defined aggregate function) implementations in Python, Java, and Scala.  We’ll also discuss the important UDF API features and integration points, including their availability across releases.  To wrap up, we’ll touch on some of the important performance considerations that you should be aware of when choosing to leverage UDFs in your application.

Spark SQL UDFs

UDFs transform values from a single row within a table to produce a single corresponding output value per row.  For example, most SQL environments provide an UPPER function that returns an uppercase version of the string provided as input.

Custom functions can be defined and registered as UDFs in Spark SQL with an associated alias that is made available to SQL queries.  As a simple example, we’ll define a UDF to convert temperatures from degrees Celsius to degrees Fahrenheit in a small JSON dataset of cities and their average low and high temperatures:

The sample code below registers our conversion UDF using the SQL alias CTOF, then makes use of it from a SQL query to convert the temperatures for each city.  For brevity, creation of the SQLContext object and other boilerplate code is omitted, and links are provided below each code snippet to the full listing.




Note that Spark SQL defines UDF1 through UDF22 classes, supporting UDFs with up to 22 input parameters.  Our example above made use of UDF1 to handle our single temperature value as input.  For applications requiring more than 22 inputs, passing arrays or structs as parameters avoids having to modify the Apache Spark source code, and from a style perspective this may be preferable even if you simply find yourself using UDF6 or higher.

Spark SQL UDAF functions

User-defined aggregate functions (UDAFs) act on multiple rows at once, return a single value as a result, and typically work together with the GROUP BY statement (for example, COUNT or SUM).  To keep this example straightforward, we will implement a UDAF with the alias SUMPRODUCT to calculate the retail value of all vehicles in stock grouped by make, given each vehicle’s price and an integer quantity in stock:

Apache Spark UDAF definitions are currently supported in Scala and Java by extending the UserDefinedAggregateFunction class.  Once defined, we can instantiate and register our SumProductAggregateFunction UDAF object under the alias SUMPRODUCT and make use of it from a SQL query, much in the same way that we did for our CTOF UDF in the previous example.


Additional UDF Support in Apache Spark

Spark SQL supports integration of existing Hive (Java or Scala) implementations of UDFs, UDAFs, and UDTFs.  As a side note, UDTFs (user-defined table functions) can return multiple columns and rows; they are out of scope for this blog, although we may cover them in a future post.  Integrating existing Hive UDFs is a valuable alternative to re-implementing and registering the same logic using the approaches highlighted in our earlier examples, and is also helpful from a performance standpoint in PySpark, as will be discussed in the next section.  Hive functions can be accessed from a HiveContext by including the JAR file containing the Hive UDF implementation with spark-submit’s --jars option, and by then declaring the function using CREATE TEMPORARY FUNCTION (as would be done in Hive[1] to include a UDF), for example:

Hive UDF definition in Java

Hive UDF access from Python

Note that Hive UDFs can only be invoked using Apache Spark’s SQL query language; in other words, they cannot be used with the DataFrame API’s domain-specific language (DSL), as was possible for the UDF and UDAF functions we implemented in the examples above.

Alternatively, UDFs implemented in Scala and Java can be accessed from PySpark by including the implementation JAR file (using the --jars option with spark-submit) and then accessing the UDF definition through the SparkContext object’s private reference to the executor JVM, where the underlying Scala or Java UDF implementations are loaded from the JAR file.  An excellent talk[2] by Holden Karau includes a discussion of this method.  Note that some of the Apache Spark private variables used in this technique are not officially intended for end users.  This approach also provides the added benefit of allowing UDAFs (which currently must be defined in Java or Scala) to be used from PySpark, as the example below demonstrates using the SUMPRODUCT UDAF that we defined in Scala earlier:

Scala UDAF definition

Scala UDAF from PySpark

UDF-related features are continuously being added to Apache Spark with each release.  Version 2.0, for example, adds support for UDFs in R.  As a point of reference, the table below summarizes the versions in which the key features discussed in this blog were introduced:

[Table: Spark versions in which the key UDF and UDAF features discussed in this post were introduced]

Performance Considerations

It’s important to understand the performance implications of Apache Spark’s UDF features.  Python UDFs, for example (such as our CTOF function), result in data being serialized between the executor JVM and the Python interpreter running the UDF logic; this significantly reduces performance compared to UDF implementations in Java or Scala.  Potential solutions to alleviate this serialization bottleneck include:

  1. Accessing a Hive UDF from PySpark, as discussed in the previous section.  The Java UDF implementation is accessible directly by the executor JVM.  Note again that this approach only provides access to the UDF from Apache Spark’s SQL query language.
  2. Making use of the approach shown above for accessing UDFs implemented in Java or Scala from PySpark, as demonstrated with the previously defined Scala UDAF example.

In general, UDF logic should be as lean as possible, given that it will be called for each row.  As an example, a step in the UDF logic taking 100 milliseconds to complete will quickly lead to major performance issues when scaling to 1 billion rows.

Another important component of Spark SQL to be aware of is the Catalyst query optimizer.  Its capabilities are expanding with every release and can often provide dramatic performance improvements to Spark SQL queries; however, arbitrary UDF implementation code may not be well understood by Catalyst (although future features[3] which analyze bytecode are being considered to address this).  As such, using Apache Spark’s built-in SQL query functions will often lead to the best performance and should be the first approach considered whenever introducing a UDF can be avoided.  Advanced users looking to couple their code more tightly with Catalyst can refer to the following talk[4] by Chris Fregly on using …Expression.genCode to optimize UDF code, as well as the new Apache Spark 2.0 experimental feature[5], which provides a pluggable API for custom Catalyst optimizer rules.


UDFs can be a helpful tool when Spark SQL’s built-in functionality needs to be extended.  This blog post provided a walk-through of UDF and UDAF implementation and discussed integration steps for making use of existing Java Hive UDFs inside Spark SQL.  UDFs can be implemented in Python, Scala, Java, and (in Spark 2.0) R, and UDAFs in Scala and Java.  When using UDFs with PySpark, data serialization costs must be factored in, and the two strategies discussed above should be considered to address them.  Finally, we touched on Spark SQL’s Catalyst optimizer and the performance reasons for preferring the built-in SQL functions before introducing UDFs in your solutions.



CDH Version:  5.8.0  (Apache Spark 1.6.0)



7 responses on “Working with UDFs in Apache Spark”

  1. preyasee

    I am using Spark 2.1 and Java 8.
    I tried running the Java UDF Fahrenheit temperatures example, without change. However, I get the following error:
    org.apache.spark.SparkException: Task not serializable

    Please help me resolve this error.
    Many Thanks

    This is where I get this error:
    results = sqlContext.sql("SELECT city, CTOF(avgLow) AS avgLowF, CTOF(avgHigh) AS avgHighF FROM citytemps");

    where CTOF is defined as

    sqlContext.udf().register("CTOF", new UDF1<Double, Double>() {
        public Double call(Double degreesCelcius) {
            return ((degreesCelcius * 9.0 / 5.0) + 32.0);
        }
    }, DataTypes.DoubleType);

    1. Lovish Chaudhary

      The serializable class does not declare a static final serialVersionUID field of type long.

      You have to declare this inside the class:

      private static final long serialVersionUID = 1025828189266787661L;

  2. Christan

    I am running a Spark 2.1.0 Thrift server. I am trying to create a Spark SQL UDF using Scala in spark-shell. However, when I connect via Beeline, I am not able to see my UDF. How do I access the Spark SQL UDF in Beeline?
    I tried to set spark.sql.hive.thriftServer.singleSession=true when I started the spark-shell and the Thrift server, but it is not helping.

  3. Julio

    Hi man,

    I wrote a lambda function and ran it in Spark 1.6.1. I tried to run the same script using Spark 2.1 and it doesn't work.
    I can register the function, but when I execute it in a SQL query it gives me an "evaluation error".

    For a while now I can't figure out what is happening. If I type tab the function is loaded, but inside the SQL code from the dataframe it returns an error. Did you have any issue like that?

  4. Julio

    Caused by: java.lang.UnsupportedOperationException: Cannot evaluate expression: udfformatDay(lpad(input[0, string, true], 2, 0))
    at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.eval(Expression.scala:221)
    at org.apache.spark.sql.execution.python.PythonUDF.eval(PythonUDF.scala:27)

    The code is:

    udfformatDay = pyspark.sql.functions.udf(lambda x: time.strftime("%a", time.strptime(str(x)+month+year,"%d%m%Y")), StringType())

    or for 1.6.3:

    sqlContext.registerFunction("udfformatDay", lambda x: time.strftime("%a", time.strptime(str(x)+month+year,"%d%m%Y")), StringType())

  5. K.O.

    Can we access a DataFrame inside a UDF? If so, can you please explain a bit, and say when it can cause an error like an NPE?