Using Flume to Collect Apache 2 Web Server Logs

Flume is a flexible, scalable, and reliable system for collecting streaming data. The Flume User Guide describes how to configure Flume, and the new Flume Cookbook contains instructions (called recipes) for common Flume use cases. In this post, we present a recipe that describes the common use case of using a Flume node to collect Apache 2 web server logs and deliver them to HDFS.

Using Flume Agents for Apache 2.x Web Server Logging

To connect Flume to Apache 2.x servers, you will need to:

  • Configure web log file permissions
  • Tail the web logs or use piped logs to enable Flume to get data from the web server

This section steps through the basic setup on default Ubuntu Lucid and default CentOS 5.5 installations, and then describes several ways of integrating Flume.

If You Are Using CentOS / Red Hat Apache Servers

By default, CentOS’s Apache writes web logs to files owned by root and in group adm in 0644 (-rw-r--r--) mode. Flume is run as the flume user, so the Flume node is able to read the logs. Apache on CentOS/Red Hat servers defaults to writing logs to two files:

  • /var/log/httpd/access_log
  • /var/log/httpd/error_log
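Since these files are world-readable, no permission changes should be needed. A quick sanity check (this assumes Flume runs as the flume user):

sudo -u flume head -n 1 /var/log/httpd/access_log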

The simplest way to gather data from these files is to tail the files by configuring Flume nodes to use Flume’s tail source:

  • tail("/var/log/httpd/access_log")
  • tail("/var/log/httpd/error_log")
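For example, a minimal end-to-end configuration in the Flume master might pair an agent on the web server with a collector that writes into HDFS. This is a sketch: the logical node names webserver and collector, the namenode URI, the output path (with its date escapes), and port 35853 are illustrative assumptions.

webserver : tail("/var/log/httpd/access_log") | agentE2ESink("collector", 35853);
collector : collectorSource(35853) | collectorSink("hdfs://namenode/flume/weblogs/%Y-%m-%d/", "access-");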

If You Are Using Ubuntu Apache Servers

By default, Ubuntu servers write web logs to files owned by root and in group adm in 0640 (-rw-r-----) mode. Flume is run as the flume user and by default will not be able to read the files. One approach to allow the flume user to read the files is to add it to the adm group (a sample command follows the file list below). Apache on Ubuntu defaults to writing logs to three files:

  • /var/log/apache2/access.log
  • /var/log/apache2/error.log
  • /var/log/apache2/other_vhosts_access.log
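For example, one way to grant that access (a sketch; this assumes Flume runs as the flume user and is managed by the flume-node init script):

# add the flume user to the adm group so it can read Apache's logs
sudo usermod -a -G adm flume
# restart the node so the new group membership takes effect
sudo /etc/init.d/flume-node restart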

The simplest way to gather data from these files is by configuring Flume nodes to use Flume’s tail source:

  • tail("/var/log/apache2/access.log")
  • tail("/var/log/apache2/error.log")
  • tail("/var/log/apache2/other_vhosts_access.log")

Getting Log Entries from Piped Log Files

The Apache 2.x documentation describes using piped logging with the CustomLog directive. The example uses the rotatelogs program to periodically write data to new files with a given prefix. Here are some example directives that could appear in the httpd.conf/apache2.conf file:

LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/foo_access_log 3600" common

TIP: In Ubuntu Lucid, these directives are in /etc/apache2/sites-available/default.  In CentOS 5.5, these directives are in /etc/httpd/conf/httpd.conf.

These directives configure Apache to write log files named /var/log/apache2/foo_access_log.xxxxx, rolling to a new file every hour (3600 seconds) using the “common” log format; rotatelogs fills in the xxxxx suffix with the rotation time in seconds since the epoch. You can configure a Flume node to use Flume’s tailDir source to read all of these files without modifying the Apache settings:

  • tailDir("/var/log/apache2/", "foo_access_log.*")

The first argument is the directory, and the second is a regular expression that is matched against file names. tailDir watches the directory and tails all files whose names match.
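You can sanity-check a source definition like this from the command line with flume dump, which prints the events it receives to standard output:

flume dump 'tailDir("/var/log/apache2/", "foo_access_log.*")'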

Using Piped Logs

Instead of writing data to disk and then having Flume read it, you can have Flume ingest data directly from Apache. To do so, use the web server’s piped log feature by adding one of the following directives to the Apache server’s configuration:

CustomLog "|flume node_nowatch -1 -n apache -c \'apache:console|agentBESink(\"collector\");\'" common
CustomLog "|flume node_nowatch -1 -n apache -c \'apache:console|agentDFOSink(\"collector\");\'" common

WARNING: By default, CentOS does not have the Java executable required by the Flume node in user root’s path. You can use alternatives to create a managed symlink in /usr/bin/ for the Java executable.
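For example (a sketch; the JDK path is an assumption and depends on where Java is installed on your system):

sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_21/bin/java 100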

Using piped logs can be more efficient, but it is riskier: Flume delivers messages without first saving them to disk, which increases the probability of event loss. From a security point of view, this Flume node instance runs as Apache’s user, which is often root according to the Apache manual.

NOTE: The prior examples use Flume nodes in one-shot mode, which runs without contacting a master. A one-shot node can deliver data directly to a collector, but only at the best-effort or disk-failover level. Unfortunately, this means that one-shot mode cannot directly use the automatic chains or the end-to-end (E2E) reliability mode, because the automatic chains are generated by the master and because E2E mode delivers acknowledgements through the master.

However, you can have a one-shot Flume node deliver data to a local Flume node daemon, where the reliable E2E mode can be used. In this setup, we would have the following Apache directive:

CustomLog "|flume node_nowatch -1 -n apache -c \'apache:console|agentBESink(\"localhost\", 12345);\'" common

Then you can set up a Flume node to listen with the following configuration:

node : rpcSource(12345) | agentE2ESink("collector");

Since this daemon node is connected to the master, it can use the auto*Chains:

node : rpcSource(12345) | autoE2EChain;
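Either mapping can be submitted through the master’s web interface or scripted with the Flume shell, for example (a sketch; this assumes the master is running on localhost):

flume shell -c localhost -e "exec config node 'rpcSource(12345)' 'autoE2EChain'"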

NOTE: End-to-end mode attempts to ensure delivery of data that enters the E2E sink. In this one-shot-node to reliable-node scenario, data is not safe until it reaches the E2E sink. However, since this is a local connection, it should only fail when the machine or process fails. The one-shot node can be set to disk-failover (DFO) mode in order to reduce the chance of message loss if the daemon node’s configuration changes. Recently, we have committed a lightweight Flume logger called flogger, implemented in C++ by Cloudera intern Dani Rayan. This utility can be used in place of the one-shot Flume node to reduce the required resource footprint.

This recipe is one of many from the growing Flume Cookbook. Currently we have written recipes for collecting data from syslog services and from scribe nodes, as well as techniques for testing Flume’s sources and sinks using the command line. If you have a Flume recipe you would like to share, or would like to improve some of our existing recipes, please contact us. We can add it to the Cookbook and help other users in the community! You can find us on IRC channel #flume at irc.freenode.net, on the flume-user mailing list, or meet us in person in New York at Hadoop World 2010!

8 Responses
  • raszi / September 22, 2010 / 5:07 AM

    a semicolon is missing after the initial config parameters. the correct is:

    CustomLog "|flume node_nowatch -1 -n apache -c \'apache:console|agentBESink(\"collector\");\'" common

  • Jonathan Hsieh / September 22, 2010 / 11:03 AM

    Raszi,

    Nice catch! I’ve updated the post with the corrections.

    Thanks,
    Jon.

  • adam / October 07, 2010 / 4:15 PM

    I like to use syslogTcp as a source and agentSink as a sink. The only problem with this is that I am not able to format the syslog message. If I do, then Flume is not able to output it. By the way, I am using syslog-ng, and to test I am using "flume dump 'syslogTcp(port)'".

    Any help would be appreciated.

    Adam

  • Jonathan Hsieh / October 07, 2010 / 4:55 PM

    Adam,

    I’m not quite sure what you mean when you say you are unable to “format the syslog message”. Could you send an email to flume-user mailing list with more details so we can better help you?

    https://groups.google.com/a/cloudera.org/group/flume-user/topics

    Thanks,
    Jon.

  • vwchong / August 12, 2012 / 10:45 PM

    I am new to flume and want to ask:
    It seems that using Flume does not involve Java programming, am I right? All the examples I have read use the tail command to read log files.

  • Jonathan Hsieh / August 16, 2012 / 3:17 PM

    vwchong,

    This is correct. These days I’d suggest looking into Flume NG (v1.0+). It’s got a simplified design that is scaling up quite nicely.

    Jon.
