Maintaining Logging and/or stdout/stderr in Python Daemon
Written by: J Dawg
Every recipe that I’ve found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example).
This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting silently – since all open file descriptors were closed, there is no error output to see. I am having a tricky time debugging the issue and am wondering what the proper way to catch and log these errors is.
What is the right way to set up logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
I use the python-daemon library for my daemonization behavior. Interface described here:
It allows specifying a files_preserve argument, to indicate any file descriptors that should not be closed when daemonizing.
If you need logging via the same Handler instances before and after daemonizing, you can:

1. First set up your logging Handlers using logging.basicConfig()
2. Log stuff
3. Determine what file descriptors your Handlers depend on. Unfortunately this is dependent on the Handler subclass. If your first-installed Handler is a StreamHandler, it's the value of logging.root.handlers[0].stream.fileno(); if your second-installed Handler is a SyslogHandler, you want the value of logging.root.handlers[1].socket.fileno(); etc. This is messy :-(
4. Daemonize your process by creating a DaemonContext with files_preserve equal to a list of the file descriptors you determined in step 3.
5. Continue logging; your log files should not have been closed during the double-fork.
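Step 3 above can be sketched as a small helper. Note that the attribute names it probes (.stream, .socket) are Handler-subclass implementation details, so this is a best-effort illustration for the common handler types, not a general-purpose API:

```python
import logging

def handler_fds(logger):
    """Collect the file descriptors the installed Handlers depend on.

    Only covers stream-based handlers (StreamHandler, FileHandler) and
    socket-based ones (SysLogHandler and friends); other Handler
    subclasses may hold their descriptors elsewhere.
    """
    fds = []
    for handler in logger.handlers:
        stream = getattr(handler, "stream", None)  # StreamHandler, FileHandler
        if stream is not None:
            fds.append(stream.fileno())
        sock = getattr(handler, "socket", None)    # SysLogHandler
        if sock is not None:
            fds.append(sock.fileno())
    return fds
```

The resulting list is what you would pass as files_preserve when constructing the DaemonContext.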
An alternative might be, as @Exelian suggested, to actually use different Handler instances before and after the daemonization. Immediately after daemonizing, destroy the existing handlers (by deling them from logger.root.handlers?) and create identical new ones; you can't just re-call basicConfig because of the issue that @dave-mankoff pointed out.
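A minimal sketch of that alternative, using removeHandler/close rather than deling entries directly (the log path and formatter here are illustrative, not part of any library API):

```python
import logging

def reopen_logging(logfile):
    """Replace the root logger's handlers with fresh ones.

    Intended to be called immediately after daemonizing, when the old
    handlers' file descriptors may already have been closed.
    """
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
        try:
            handler.close()  # may fail if the daemon library already closed the fd
        except OSError:
            pass
    fresh = logging.FileHandler(logfile)
    fresh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    root.addHandler(fresh)
```

This sidesteps the files_preserve bookkeeping entirely, at the cost of reopening the log targets yourself after the fork.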
You can simplify the code for this if you set up your logging handler objects separately from your root logger object, and then add the handler objects as an independent step rather than doing it all at one time. The following should work for you.
```python
import daemon
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

fh = logging.FileHandler("./foo.log")
logger.addHandler(fh)

context = daemon.DaemonContext(
    files_preserve=[fh.stream],
)

logger.debug("Before daemonizing.")
context.open()
logger.debug("After daemonizing.")
```