Monitor Nginx or Apache web server log files with Netdata

Log files have been a critical resource for developers and system administrators who want to understand the health and performance of their web servers, and Netdata is taking important steps to make them even more valuable.

By parsing web server log files with Netdata, and seeing the volume of redirects, requests, or server errors over time, you can better understand what's happening on your infrastructure. Too many bad requests? Maybe a recent deploy missed a few small SVG icons. Too many requests? Time to batten down the hatches—it's a DDoS.

The collector supports the LTSV log format, tracks TLS and cipher usage, and parses logs faster than ever. In one test on a system with SSD storage, it consistently parsed the logs for 200,000 requests in 200ms, using ~30% of a single core.

The web_log collector is currently compatible with Nginx and Apache.

This guide will walk you through using the new Go-based web log collector to turn the logs these web servers constantly write into real-time insights into your infrastructure.

Set up your web servers

As with all data sources, Netdata can auto-detect Nginx or Apache servers if you installed them using their standard installation procedures.

Almost all web server installations will need no configuration to start collecting metrics. As long as your web server has a readable access log file, you can configure the web log plugin to access and parse it.
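
To confirm the collector will be able to read the log, you can check permissions first. This is a quick sanity check rather than part of Netdata's setup; it assumes Netdata runs as the netdata user and that your access log sits at the Debian-style Nginx path, so adjust both to match your system:

# Verify the netdata user can read your web server's access log
sudo -u netdata head -n 1 /var/log/nginx/access.log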

Custom configuration of the web log collector

The web log collector's default configuration comes with a few example jobs that should cover most Linux distributions and their default locations for log files:

# [ JOBS ]
jobs:
  # NGINX
  # debian, arch
  - name: nginx
    path: /var/log/nginx/access.log

  # gentoo
  - name: nginx
    path: /var/log/nginx/localhost.access_log

  # APACHE
  # debian
  - name: apache
    path: /var/log/apache2/access.log

  # gentoo
  - name: apache
    path: /var/log/apache2/access_log

  # arch
  - name: apache
    path: /var/log/httpd/access_log

  # debian
  - name: apache_vhosts
    path: /var/log/apache2/other_vhosts_access.log

  # GUNICORN
  - name: gunicorn
    path: /var/log/gunicorn/access.log

  - name: gunicorn
    path: /var/log/gunicorn/gunicorn-access.log

However, if your log files were not auto-detected, it might be because they are in a different location. In that case, navigate to your Netdata config directory (typically /etc/netdata) and edit the default web_log.conf file:

./edit-config go.d/web_log.conf

To create a new custom configuration, you need to set the path parameter to point to your web server's access log file. You can give it a name as well, and set the log_type to auto.

jobs:
  - name: example
    path: /path/to/file.log
    log_type: auto

Restart Netdata with sudo systemctl restart netdata, or the appropriate method for your system. Netdata should pick up your web server's access log and begin showing real-time charts!
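
If charts don't appear, you can run the collector once in debug mode to see how it parses your log. The plugin path below is typical for most installs, but it may differ on yours (for example, /usr/lib/netdata/plugins.d on some distributions):

# Run the web_log module once in debug mode as the netdata user
cd /usr/libexec/netdata/plugins.d/
sudo -u netdata ./go.d.plugin -d -m web_log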

Custom log formats and fields

The web log collector is capable of parsing custom Nginx and Apache log formats and presenting them as charts, but we'll leave that topic for a separate guide.

We do have extensive documentation on how to build custom parsing for Nginx and Apache logs.
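
To give a flavor of what that involves, the snippet below switches a job to an explicit CSV format. The log_type and csv_config keys come from the collector's reference configuration, but treat the format string itself as an illustrative assumption and confirm the details against the documentation linked above:

jobs:
  - name: nginx_custom
    path: /var/log/nginx/access.log
    log_type: csv
    csv_config:
      format: '$remote_addr - - [$time_local] "$request" $status $body_bytes_sent'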

Tweak web log collector alarms

Over time, we've created some default alarms for web log monitoring. These alarms are designed to work only when your web server is receiving more than 120 requests per minute. Otherwise, there's simply not enough data to draw conclusions about what is "too few" or "too many."

You can edit these alarms directly with edit-config:

./edit-config health.d/weblog.conf

For more information about editing the defaults or writing new alarm entities, see our health monitoring documentation.
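
As an illustration of the entity syntax, the sketch below warns when successful requests over the last minute drop below a threshold. The field names (template, on, lookup, every, warn, info) are standard health syntax, but the chart and dimension names here are assumptions; copy the real ones from health.d/weblog.conf on your node before using anything like this:

# Hypothetical alarm entity -- verify chart and dimension names first
 template: web_log_1m_successful_example
       on: web_log.type_requests
   lookup: sum -1m unaligned of success
    every: 10s
     warn: $this < 100
     info: successful HTTP requests in the last minute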

What's next?

Now that you have web log collection up and running, we recommend you take a look at the collector's documentation for some ideas of how you can turn these rather "boring" logs into powerful real-time tools for keeping your servers happy.

Don't forget to give GitHub user Wing924 a big 👍 for his hard work in starting up the Go refactoring effort.
