Reindex Log Files

MailCleaner Support

Work in Progress

Script is not published yet.


MailCleaner has a script at /usr/mailcleaner/tools/ which can go through the existing /var/mailcleaner/log directory and re-index the logs, ensuring that the logs for a given date are stored in the appropriate file according to the expectations of MailCleaner (e.g. Management->Tracing/Logs) and the syslog rotation scheme. That expected scheme is:

  • Current day's logs have no suffix
  • Previous day's logs have the suffix '.0'
  • Each day prior is compressed and the suffix is incremented, e.g. '.1.gz' for 2 days prior, '.2.gz' for 3 days prior, and so on.
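The naming scheme above can be sketched as a small shell helper. This is purely illustrative and not part of the MailCleaner tool; the base name "mainlog" is just an example:

```shell
# Map a log's age in days to its expected file name.
# 0 = today (no suffix), 1 = yesterday ('.0'), N >= 2 = compressed ('.N-1.gz').
expected_log_name() {
  local base=$1 days_ago=$2
  if [ "$days_ago" -eq 0 ]; then
    echo "$base"
  elif [ "$days_ago" -eq 1 ]; then
    echo "$base.0"
  else
    echo "$base.$((days_ago - 1)).gz"
  fi
}
```

For example, `expected_log_name mainlog 3` yields `mainlog.2.gz`, the compressed log from 3 days prior.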

There can be occasions where the nightly log rotation task fails, or runs multiple times, so that the contents of the files no longer match the date implied by the file suffix. To remedy this, the new tool can search through all of the existing logs and write each line to the correct file.


Because not all rotated files follow a simple format with the date included in each log line, there are a handful of files that are currently ignored during the re-indexing process:

  • clamav/freshclam.log
  • mailcleaner/mc_counts-cleaner.log
  • mailcleaner/summaries.log
  • updater4mc.log

These files are of less significance: none of them are queried for Tracing, and the very end of the current file is usually all that is needed when investigating the corresponding logged process.

While some optimization has been done, especially for 'fast' mode, the script is still not incredibly fast, and the time to complete can also be impacted by slow disk reads/writes. The process is not threaded, so multiple cores don't help. A test with 250 MB of (compressed) logs on a 2 GHz core with 7200 RPM drives took about 45 minutes in normal mode and just under 15 minutes in "fast" mode. Normal mode's runtime grows much faster than linearly with the volume of logs (each output file requires another pass over the input files), while "fast" mode should grow linearly. You can expect it to take multiple hours on a machine with any reasonable amount of logs. For this reason, it is probably best to run one service at a time, during a period when you can accept a few minutes of down-time for that service.

Also note that it is recommended to run this script early in the day. If you run it shortly before midnight, the logs are likely to be rotated while the script is working. The script rotates the logs using the time at which it began, so if it finishes after midnight, an immediate rotation or another re-indexing run will be required.
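A simple guard along these lines can help enforce that advice. The cut-off hour here is an assumption for illustration, not something the script itself checks:

```shell
# Succeed only if the current hour is early enough that a long run is
# unlikely to cross midnight's rotation. The 20:00 cut-off is an assumption;
# pick a margin that matches your expected runtime.
safe_to_reindex() {
  local hour=${1:-$(date +%H)}
  # 10# forces base-10 so zero-padded hours like "08" parse correctly
  [ $((10#$hour)) -lt 20 ]
}
```

You could then gate the run with something like `safe_to_reindex && <re-index command>`.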


If you would like to re-index the logs for select services, you can run the script with the name of each service as an argument:

/usr/mailcleaner/tools/ exim_stage1 exim_stage2 exim_stage4

The above would re-index only the three MTA stages, and no other services. Without any services listed, the script runs on all available services. Available services are: apache, clamav, exim_stage1, exim_stage2, exim_stage4, fail2ban, mailcleaner, mailscanner, mysql_master, and mysql_slave.
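As noted above, it is often best to re-index one service at a time. A minimal wrapper for that might look like the following; the re-index command itself is passed in as an argument, since the script's final name is not yet published:

```shell
# Run the given re-index command once per service, stopping on the first
# failure, so each service's downtime window stays short and isolated.
reindex_each() {
  local cmd=$1; shift
  local svc
  for svc in "$@"; do
    "$cmd" --stop "$svc" || return 1
  done
}
```

Called as `reindex_each <re-index command> exim_stage1 exim_stage2 exim_stage4`, this processes one MTA stage at a time rather than all at once.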

The script has the following additional options:
* --fast uses a faster algorithm at the cost of reading large portions of the logs into memory at once (potentially causing a crash on machines with many GBs of logs of a single type) and of being somewhat less comprehensive: it assumes that all logs are chronological within and between files and only corrects entries at the file boundaries. By contrast, normal mode uses very little memory, since it processes one output file at a time and immediately writes anything it finds in the input files, and it should be comprehensive.
* --backup moves the existing logs to /var/mailcleaner/log.bk. If that directory already contains files for a service being processed, they are deleted at the start of the run to make room for the current run's backups, so you will need to move that directory after each run, or create a copy before the first run, to guarantee that you keep an original backup.
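Since --backup overwrites per-service files in log.bk on each run, a hypothetical helper like this (not part of the script) could snapshot the backup directory to a timestamped copy before the next run:

```shell
# Copy the backup directory to a timestamped sibling (e.g. log.bk.20240101-120000)
# so a subsequent --backup run cannot destroy the previous backup.
preserve_backup() {
  local bk=${1:-/var/mailcleaner/log.bk}
  [ -d "$bk" ] || return 1
  cp -a "$bk" "$bk.$(date +%Y%m%d-%H%M%S)"
}
```

Run it between re-indexing runs, e.g. `preserve_backup` with no arguments to snapshot the default location.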
* --stop stops the service before processing the current day's log and starts it again when that log is done. This prevents a race condition where additional log lines are written to the file between the time the script writes the new file and the time it supplants the existing one, which would cause a brief loss of data. The service will be restarted once for each file type for that service (e.g. once for each of mainlog, rejectlog and paniclog for each MTA).
* --help prints a similar description of these options.