
Posted on Tue 04 November 2014 under Python

Django exception archaeology

Almost every customer I've worked with has had a different way of capturing and analysing activity in their production systems. Researching and comparing offerings feels like an endless task and a constant source of debate. Whichever tooling you choose, being able to centralise and analyse application server exceptions is an important step towards improving the reliability of code in production.

I wanted to see how Logstash using Elasticsearch as a data store and Kibana as an analysis tool could be used to monitor a Django application.

To start, I created a small Django application that served random data, occasionally raised exceptions via a catch-all view and had a celery task that picked a random log level to record to each time a new task was scheduled. Together these combinations should make for a good set of example data to dig through and analyse.

One Django project, many response types

I installed Django, django-celery, SQLAlchemy (needed to record celery's results to an SQLite file), gunicorn and python-logstash. I then created a project called faulty and an app within it called example.

$ pip install Django==1.7.1 \
              django-celery==3.1.16 \
              gunicorn==19.1.1 \
              SQLAlchemy==0.9.8 \
              python-logstash
$ django-admin startproject faulty
$ cd faulty
$ django-admin startapp example

I made the following changes to faulty/settings.py:

import djcelery

djcelery.setup_loader()

INSTALLED_APPS += (
    'djcelery',
    'kombu.transport.django',
    'example',
)

BROKER_URL = 'django://'
CELERY_RESULT_BACKEND = 'db+sqlite:///results.db'
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'WARNING',
            'class': 'logstash.LogstashHandler',
            'host': 'localhost',
            'port': 5000,
            'version': 1,
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['logstash'],
            'level': 'WARNING',
            'propagate': True,
        },
        'example.tasks': {
            'handlers': ['logstash'],
            'level': 'WARNING',
            'propagate': True,
        },
    },
}

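As a sanity check (not part of the project itself), the same dict shape can be fed to logging.config.dictConfig with a stdlib handler standing in for logstash.LogstashHandler, so the routing can be exercised without python-logstash or a running Logstash:

```python
import logging
import logging.config

# Same shape as the LOGGING dict above, but with logging.StreamHandler
# swapped in for logstash.LogstashHandler so this runs standalone.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'WARNING',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'example.tasks': {
            'handlers': ['logstash'],
            'level': 'WARNING',
            'propagate': True,
        },
    },
}

logging.config.dictConfig(LOGGING)

# This message routes through the 'logstash' handler to stderr
logging.getLogger('example.tasks').warning('config loads and routes')
```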
I then created a task that would accept a randomly-generated content string, its reported length and content type. It would create a sha1 hash of the content and output that hash, the reported content length and the content type to a random log level.

The content length would be the length before the content was wrapped. I could have derived the length in the task itself or hashed the content in the view but the purpose of this exercise is to create noise.


import hashlib
from random import choice

from celery import task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)


@task()
def log_response(content, content_length, content_type):
    # Pick one of four severities at random so the log data is varied
    log_method = choice((logger.info,
                         logger.warning,
                         logger.error,
                         logger.critical))

    hasher = hashlib.sha1()
    hasher.update(content)
    log_method('{0} {1} {2}'.format(hasher.hexdigest(),
                                    content_length,
                                    content_type))
    return True

I configured the project to direct all traffic to a single endpoint.


from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^$', 'example.views.endpoint'),
)

I then created a view that has a 25% chance of raising an exception and if it doesn't, generates random content, sends that content to the celery task and serves the content.


import mimetypes
from random import choice, randint
from textwrap import wrap

from django.http import HttpResponse

from .tasks import log_response

def endpoint(request):
    if randint(0, 3) == 3: # 25% chance of raising an error
        raise choice((AssertionError, IndexError, KeyError))

    # Pick a random content type:
    content_type = choice(mimetypes.types_map.values())
    content_length = randint(512, 2048)

    # Create a random string between 512 and 2048 characters long
    # and line wrap it every 50 characters
    choices = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
    content = ''.join([choice(choices)
                       for _ in range(content_length)])
    content = '\n'.join(wrap(content, width=50))

    # Send the content, its length and type to this task
    log_response.delay(content, content_length, content_type)

    return HttpResponse(content, content_type=content_type)
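As an aside on the noise being generated: the length reported to the task is the pre-wrap length, while wrap() inserts a newline every 50 characters, so the body actually served is a little longer. A standalone sketch of the view's content generation, outside Django:

```python
from random import choice, randint
from textwrap import wrap

# Same generation logic as the view above
choices = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
content_length = randint(512, 2048)
content = ''.join(choice(choices) for _ in range(content_length))
wrapped = '\n'.join(wrap(content, width=50))

# wrap() adds one '\n' per line break, so the wrapped body is longer
# than the content_length the task will log, but no characters are lost
assert len(wrapped) > content_length
assert wrapped.replace('\n', '') == content
```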

Running the app server and task handler

I then set up the database (celery needed it to keep track of its tasks) and launched celeryd and gunicorn.

$ python manage.py syncdb
$ python manage.py migrate
$ python manage.py celeryd &
$ gunicorn faulty.wsgi:application -b localhost:8050 &

I tested the endpoint a few times and no two responses looked alike. Here is one response I got back:

$ curl -v localhost:8050
< HTTP/1.1 200 OK
< Server: gunicorn/19.1.1
< Date: Mon, 03 Nov 2014 09:53:34 GMT
< Connection: close
< Transfer-Encoding: chunked
< Vary: Cookie
< X-Frame-Options: SAMEORIGIN
< Content-Type: image/jpeg

And another response, this time returning HTTP 404:

$ curl -v localhost:8050
< HTTP/1.1 404 NOT FOUND
< Server: gunicorn/19.1.1
< Date: Mon, 03 Nov 2014 09:59:28 GMT
< Connection: close
< Transfer-Encoding: chunked
< Vary: Cookie
< X-Frame-Options: SAMEORIGIN
< Content-Type: text/html

<!DOCTYPE html>
<html lang="en">
  <meta http-equiv="content-type" content="text/html; charset=utf-8">
  <title>Page not found at /</title>

Exception archaeology

In this setup, Logstash transports exceptions originating from Django into Elasticsearch, and Kibana is used to view and analyse them.

These are some condensed installation commands I ran on my Ubuntu 14 system:

Install the Java runtime:

$ sudo apt install openjdk-7-jre-headless

Install and start Elasticsearch:

$ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.deb
$ sudo dpkg -i elasticsearch-1.3.4.deb
$ sudo /etc/init.d/elasticsearch start

Install Logstash:

$ wget https://download.elasticsearch.org/logstash/logstash/packages/debian/logstash_1.4.2-1-2c0f5a1_all.deb
$ sudo dpkg -i logstash_1.4.2-1-2c0f5a1_all.deb

Install Kibana and launch a web server serving its static content on port 8000:

$ wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
$ tar xzf kibana-3.1.1.tar.gz
$ cd kibana-3.1.1/
$ python -m SimpleHTTPServer &

Capturing exceptions

With all the required packages installed and the web application and task handler already running, I created the following configuration for Logstash. It collects messages from UDP port 5000 (tagged as syslog) and TCP port 5000 (JSON events from Django's logstash handler), sends them to Elasticsearch and prints them to stdout.

$ cat logstash.conf
input {
  tcp { port => 5000 codec => json }
  udp { port => 5000 type => syslog }
}

output {
  elasticsearch_http {
    host => "localhost"
    port => 9200
  }
  stdout { codec => rubydebug }
}
I used this command to launch Logstash:

$ /opt/logstash/bin/logstash agent -f logstash.conf

Generating traffic

I used ab to generate traffic for Django:

$ ab -n 200 -c 10 http://localhost:8050/

With a decent number of exceptions captured and sent to Elasticsearch, I looked to see which index name Logstash had picked; it turned out to be logstash-2014.11.04:

$ curl --silent localhost:9200/_cat/indices?v
health index               pri rep docs.count docs.deleted store.size pri.store.size
yellow logstash-2014.11.04   5   1        848            0    453.8kb        453.8kb

I then opened Kibana and loaded the default Logstash dashboard via http://localhost:8000/#/dashboard/file/logstash.json.

Further efforts

I'm still looking at ways to tokenise the messages so their properties are easier to isolate, search and aggregate. At the moment messages are stored as a JSON blob rather than as separate keys in a dictionary:

{
       "message" => "{\"host\": \"ubuntu\", \"logger\": \"example.tasks\", \"type\": \"logstash\", \"tags\": [], \"path\": \"/home/mark/faulty/example/tasks.py\", \"@timestamp\": \"2014-11-04T12:37:17.477207Z\", \"@version\": 1, \"message\": \"371679e24301dc140fbb7d0fd7957adb9ad32e41 1625 audio/mpeg\", \"levelname\": \"ERROR\"}",
      "@version" => "1",
    "@timestamp" => "2014-11-04T12:37:17.477Z",
          "type" => "syslog",
          "host" => ""
}
{
       "message" => "{\"host\": \"ubuntu\", \"logger\": \"example.tasks\", \"type\": \"logstash\", \"tags\": [], \"path\": \"/home/mark/faulty/example/tasks.py\", \"@timestamp\": \"2014-11-04T12:37:17.567138Z\", \"@version\": 1, \"message\": \"9885e4c43c07e399a9a94d934e7f642a724e819b 1667 image/x-cmu-raster\", \"levelname\": \"CRITICAL\"}",
      "@version" => "1",
    "@timestamp" => "2014-11-04T12:37:17.567Z",
          "type" => "syslog",
          "host" => ""
}
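One candidate I'm considering (Logstash ships a json filter; I haven't settled on this or tested it against this setup) is to parse the inner JSON out of the message field before it is stored, by adding a filter block to logstash.conf:

```
filter {
  json {
    source => "message"
  }
}
```

With that in place, each key in the blob (logger, levelname, path and so on) would become a separate field that Kibana can search and facet on directly.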

The other task I want to look at is generating random data with distinctive distribution curves. The way I use random.choice at the moment, each choice tends to get picked about as often as any other, just in a random order. When summarising statistics, almost every exception happens an equal number of times, so the pie chart slices are even and the time series lines overlap one another quite often.

In [1]: from collections import Counter

In [2]: from random import randint

In [3]: Counter([randint(0, 3) for _ in range(0, 200)])
Out[3]: Counter({1: 51, 3: 51, 0: 50, 2: 48})
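One way to get such skewed curves (a sketch; the project doesn't use this yet, and the 6:3:1 weights are arbitrary) is a weighted pick built on a cumulative weight table:

```python
import bisect
import random
from collections import Counter

def weighted_choice(pairs):
    """Pick an item from [(item, weight), ...] with probability
    proportional to its weight."""
    items = [item for item, _ in pairs]
    cumulative, total = [], 0
    for _, weight in pairs:
        total += weight
        cumulative.append(total)
    # Land a uniform point on [0, total) and find which bucket it hits
    return items[bisect.bisect_right(cumulative, random.random() * total)]

weights = [(AssertionError, 6), (IndexError, 3), (KeyError, 1)]
counts = Counter(weighted_choice(weights) for _ in range(10000))
# counts now skews roughly 6:3:1 towards AssertionError
```

Swapping something like this in for random.choice in the view would make the pie chart slices uneven and keep the time series lines apart.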
Thank you for taking the time to read this post. I offer both consulting and hands-on development services to clients in North America and Europe. If you'd like to discuss how my offerings can help your business please contact me via LinkedIn.

Copyright © 2014 - 2022 Mark Litwintschik. This site's template is based off a template by Giulio Fidente.