Buffer your events with Redis

If you have to handle bigger bursts of events, you might want to use a buffer or message broker to hold your messages.

This is because Logstash can handle a lot of messages in quite a short time, but it can’t deal with message storms, e.g. an application going berserk or a DDoS attack logged on a firewall. This is even more true when you are dealing with inputs that can’t handle backpressure, like UDP inputs or SNMP traps.

In the past, achieving this kind of buffering required several Logstash instances which were combined with Redis instances.

Newer versions of Logstash offer multiple pipelines, which basically means you can have several “instances”, each with its own configuration, in one single Logstash installation.
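To give you an idea, a minimal pipelines.yml (usually found in /etc/logstash/ on package installations) could look like this. The pipeline IDs and file paths are made up for illustration:

```
- pipeline.id: shipper
  path.config: "/etc/logstash/conf.d/shipper.conf"
- pipeline.id: indexer
  path.config: "/etc/logstash/conf.d/indexer.conf"
```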

What you want to do:

  • Install Redis
  • Make sure it only listens on localhost (the default in most distributions)
  • Set maxmemory and maxmemory-policy (the comments in redis.conf explain what they mean; see the snippet after this list)
  • Add at least two pipelines to your Logstash configuration
  • One pipeline has your inputs, no filters, and a single output of type redis that writes into a specific key (you can make up its name; sketched after this list)
  • The other pipeline has a single input of type redis that reads from the same key you used above. This pipeline has your filters and outputs (also sketched below).
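The Redis part of this setup is tiny. A sketch of the relevant redis.conf settings, where the memory limit is an assumption you should size to your own RAM:

```
# listen on localhost only
bind 127.0.0.1
# cap memory usage; size this to your machine
maxmemory 1gb
# reject writes when the limit is reached instead of silently evicting buffered events
maxmemory-policy noeviction
```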
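Here is a sketch of the first pipeline (shipper.conf from the pipelines.yml above). The UDP input on port 5514 and the key name logstash_buffer are made-up examples:

```
input {
  # an input that can't handle backpressure, e.g. syslog over UDP
  udp {
    port => 5514
  }
}

output {
  # no filters here, just dump everything into Redis as fast as possible
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash_buffer"
  }
}
```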
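And a sketch of the second pipeline (indexer.conf), which reads from the very same key, applies your filters and writes to your real outputs; the Elasticsearch output is only an example:

```
input {
  # read from the same key the shipper writes to
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash_buffer"
  }
}

filter {
  # your usual grok/date/mutate filters go here
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```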

Of course you can use this pattern for many specialised pipelines, each dealing only with certain types of messages. This can rid you of many of the if clauses you had to use in older Logstash configurations to route your events to the correct filters.
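For example, you could give every message type its own key and its own pair of pipelines. A hypothetical pipelines.yml layout (all names invented):

```
- pipeline.id: firewall-shipper
  path.config: "/etc/logstash/conf.d/firewall-shipper.conf"   # writes to key "firewall"
- pipeline.id: firewall-indexer
  path.config: "/etc/logstash/conf.d/firewall-indexer.conf"   # reads from key "firewall"
- pipeline.id: app-shipper
  path.config: "/etc/logstash/conf.d/app-shipper.conf"        # writes to key "application"
- pipeline.id: app-indexer
  path.config: "/etc/logstash/conf.d/app-indexer.conf"        # reads from key "application"
```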

Take special care to write to and read from the correct keys in Logstash. Above all, make sure that no pipeline writes to the key it reads from. You’d create a loop and kill your installation in no time.

With this pattern Logstash can handle tons of events in a very short time because they all end up in Redis. Redis works entirely in memory and only dumps events to disk from time to time. In fact you can collect events almost as fast as your RAM can write. Once they are safe in Redis, the other Logstash pipeline can take its time and read events whenever it has resources available. This way you can handle a lot more events with the same resources in a Logstash installation.