Integrate fail2ban with WordPress: Spam Log Plugin

Spam Log is a WordPress plugin that writes a log entry for every comment marked as spam. The log file is suitable for processing by fail2ban.

Recently, I’ve encountered some very aggressive WordPress spam bots. These bots post a new spam comment almost every minute, for hours on end. Needless to say, my spam queue is a mess. I wrote the following plugin to solve this problem.

What is Spam Log?

Spam Log is a simple WordPress plugin that logs a message every time a comment is marked as spam. Each log message includes the IP address of the poster and the comment’s ID. The log can easily be processed by fail2ban. fail2ban is a daemon that scans log files for misbehaving clients and bans them by IP address. Here is sample output generated by Spam Log:

2009-04-20 04:15:03 comment id=527 from host=83.233.30.32 marked as spam
2009-04-20 04:18:15 comment id=528 from host=83.233.30.32 marked as spam
2009-04-20 04:20:36 comment id=529 from host=83.233.30.32 marked as spam
2009-04-20 04:21:46 comment id=530 from host=83.233.30.32 marked as spam
2009-04-20 04:22:49 comment id=531 from host=83.233.30.32 marked as spam

Why use Spam Log and fail2ban if Akismet/wp-recaptcha/etc. is already catching all the spam?

  • Many spammers post 50+ comments a day from a single IP address. Even if every comment is correctly marked as spam, the volume alone means that you can’t easily monitor the spam queue for false positives. Spam Log and fail2ban should considerably reduce the total amount of spam.
  • Even if spam comments never appear on your blog, they still waste valuable resources on your server. Low-memory virtual servers need all available resources for serving legitimate users. Banning spammers at the firewall before they ever connect to your web server is very efficient.

Installation

Spam Log

  1. Upload the spam-log folder to the wp-content/plugins directory.
  2. Activate the plugin through the WordPress Admin menu.
  3. Set the location of the spam log through Spam Log’s Options page in the WordPress Admin menu. By default, the location is set to wp-content/spam.log. The file or containing directory needs to be writable by the user that the web server runs as. On Debian or Ubuntu systems, you can do the following:

$ sudo touch /path/to/spam.log
$ sudo chown www-data:www-data /path/to/spam.log

fail2ban Configuration

Create /etc/fail2ban/filter.d/spam-log.conf with the following contents:

[Definition]
failregex = ^\s*comment id=\d+ from host=<HOST> marked as spam$
ignoreregex =
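
Before enabling the jail, you can test the filter against your actual log with fail2ban-regex, using the same /path/to/spam.log placeholder as above:

$ fail2ban-regex /path/to/spam.log /etc/fail2ban/filter.d/spam-log.conf

It should report one matched line for every entry in the log.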

Add the following lines to /etc/fail2ban/jail.local:

[spam-log]
enabled  = true
port     = http,https
filter   = spam-log
logpath  = /path/to/spam.log
maxretry = 5
findtime = 3600
bantime  = 86400

Change logpath to the path you set on Spam Log’s Options page. This configuration will ban an IP address for a day if it’s used to post 5 comments within an hour that are marked as spam. Warning: Some captcha plugins mark comments as spam when a user fails a captcha. Be careful decreasing maxretry if you’re using such a plugin, as there’s a risk that you will ban legitimate users.
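
After editing jail.local, restart fail2ban and confirm that the new jail is active. A quick check, assuming a Debian-style init script (adjust for your distribution):

$ sudo /etc/init.d/fail2ban restart
$ sudo fail2ban-client status spam-log

The second command prints the number of currently failed and banned IP addresses for the jail.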

Download

spam-log-0.1.tar.gz
spam-log-0.1.zip

Log iptables Messages to a Separate File with rsyslog

Learn how to filter iptables log messages to a separate file. Two methods are presented: one using traditional syslog and one using rsyslog.

Firewall logging is very important, both to detect break-in attempts and to ensure that firewall rules are working properly. Unfortunately, it’s often difficult to predict in advance which rules and what information should be logged. Consequently, it’s common practice to err on the side of verbosity. Given the amount of traffic that any machine connected to the Internet is exposed to, it’s critical that firewall logs be separated from normal logs in order to ease monitoring. What follows are two methods to accomplish this using iptables on Linux. The first method uses traditional syslog facility/priority filtering. The second, more robust method filters based on message content with rsyslog.

The Old Way: Use a Fixed Priority for iptables

The traditional UNIX syslog service only has two ways to categorize, and consequently route, messages: facility and priority. Facilities include kernel, mail, daemon, etc. Priorities include emergency, alert, warning, debug, etc. The Linux iptables firewall runs in the kernel and therefore always has the facility set to kern. Using traditional syslog software, the only way you can separate iptables messages from other kernel messages is to set the priority on all iptables messages to something specific that hopefully isn’t used for other kernel logging.

For example, you could add something like the following to /etc/syslog.conf:

kern.=debug -/var/log/iptables.log

and specifically remove the kernel debugging messages from all other logs like so:

kern.*;kern.!=debug -/var/log/kern.log

and in each iptables logging rule use the command line option --log-level debug.
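
For example, a logging rule pinned to that priority might look like the following. This is a hypothetical rule for illustration; the match criteria are whatever your ruleset calls for:

$ sudo iptables -A INPUT -p tcp --dport 23 -j LOG --log-level debug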

There are two distinct disadvantages to this approach. First, there’s no guarantee that other kernel components won’t use the priority you’ve set iptables to log at. There’s a real possibility that useful messages will be lost in the deluge of firewall logging. Second, this approach prevents you from actually setting meaningful priorities in your firewall logs. You might not care about random machines hammering Windows networking ports, but you definitely want to know about malformed packets reaching your server.

The New Way: Filter Based on Message Content with rsyslog

rsyslog is mostly a drop-in replacement for a traditional syslog daemon (on Linux, klogd and sysklogd). In fact, on Debian and Ubuntu, you can simply:

$ sudo apt-get install rsyslog

and if you haven’t customized /etc/syslog.conf, logging should continue to work in precisely the same way. rsyslog has been the default syslog on Red Hat/Fedora based systems for a number of versions now, but if it’s not installed:

$ sudo yum install rsyslog

Configure iptables to Use a Unique Prefix

We’ll set up rsyslog to filter based on the beginning of each message from iptables. So, for each logging rule in your firewall script, add --log-prefix "iptables: ". Most firewall builder applications can easily be configured to add a prefix to every logging rule (a raw iptables example follows the firehol snippet below). For example, if you’re using firehol as I am, you could add:

FIREHOL_LOG_PREFIX="firehol: "

to /etc/firehol/firehol.conf.
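
If you maintain raw iptables rules instead of using a firewall builder, the prefix goes directly on each LOG rule. A minimal sketch with placeholder match criteria:

$ sudo iptables -A INPUT -p tcp --dport 445 -j LOG --log-prefix "iptables: "

Note the trailing space in the prefix; it keeps the prefix from running into the rest of the message.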

Configure rsyslog to Filter Based on Prefix

Create /etc/rsyslog.d/iptables.conf with the following contents:

:msg, startswith, "iptables: " -/var/log/iptables.log
& ~

The first line means send all messages that start with “iptables: ” to /var/log/iptables.log. The second line means discard the messages that were matched in the previous line. The second line is of course optional, but it saves the trouble of explicitly filtering out firewall logs from subsequent syslog rules.
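
To apply the change, restart rsyslog and watch the new file while generating some traffic that matches a logging rule. This assumes a Debian-style init script and that your rsyslog.conf includes /etc/rsyslog.d (the Debian default):

$ sudo /etc/init.d/rsyslog restart
$ tail -f /var/log/iptables.log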

When I configured this on my own machines, I did notice one issue that may be a peculiarity of firehol, but it’s probably worth mentioning anyway. It seems that firehol adds an extra single quote at the beginning of log messages that needs to be matched in the rsyslog rule. For example, here’s a log message from firehol:

Apr 17 12:41:07 tick kernel: 'firehol: 'IN-internet':'IN=eth0 OUT= MAC=fe:fd:cf:c0:47:b5:00:0e:39:6f:48:00:08:00 SRC=189.137.225.191 DST=207.192.75.74 LEN=64 TOS=0x00 PREC=0x00 TTL=32 ID=5671 DF PROTO=TCP SPT=3549 DPT=5555 WINDOW=65535 RES=0x00 SYN URGP=0

Notice the extra quote after “kernel: ” and before “firehol: ”. So, on my machine I configured the rsyslog filter like so:

:msg, startswith, "'firehol: " -/var/log/iptables.log
& ~

Configure iptables Log Rotation

Finally, since we’re logging to a new file, it’s useful to create a log rotation rule. Create a file /etc/logrotate.d/iptables with the following contents:

/var/log/iptables.log
{
	rotate 7
	daily
	missingok
	notifempty
	delaycompress
	compress
	postrotate
		invoke-rc.d rsyslog rotate > /dev/null
	endscript
}

The preceding configuration tells logrotate to rotate the firewall log daily and keep logs from the past seven days.
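
You can check the new rule without waiting for the nightly cron run by invoking logrotate in debug mode, which prints what it would do without actually touching the logs:

$ sudo logrotate -d /etc/logrotate.d/iptables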

Using YSlow to Optimize Web Site Performance Continued

The second part of an article/tutorial on using the YSlow firebug extension to optimize web site performance.

This article is a continuation of a previous article. If you haven’t yet, read the previous article: Using YSlow to Optimize Web Site Performance.

In this post, I’ll cover rules 5-13 and summarize the results of the optimizations that I made to my site.

YSlow’s 13 Rules to Improve Web Site Performance (rules 5-13)

A number of these rules are fairly simple, so I’ll cover some together.

5. Put CSS at the top

6. Put JS at the bottom

These rules are fairly easy to follow, and most sites shouldn’t have to change anything. Putting CSS in the document <head> makes the page appear to load faster because of progressive rendering. Javascript, on the other hand, blocks parallel downloads, so ideally you want to load it after everything else has already loaded. Also, if the javascript is hosted externally and the external server is slow, the rest of the page will still load without problems.

7. Avoid CSS expressions

I’ll be honest here. I didn’t even know CSS had expressions until I saw this rule. YSlow recommends avoiding them because they are evaluated an absurd number of times.

8. Make JS and CSS external

External files can be cached independently. If your users usually browse to more than one page on your site, or you have frequent returning visitors, this should improve load times.

9. Reduce DNS lookups

DNS lookups can still take a fair amount of time, even on a broadband connection. Basically, try to keep the number of unique host names fairly low. There is one competing performance benefit to serving page components from multiple hosts: most browsers will only make two simultaneous connections to a given host, so serving content from multiple hosts can speed up page load times, especially if there are a lot of components.

10. Minify JS and CSS

Actually, in YSlow this rule says Minify JS, but on the explanation page, they added CSS. Minifying these files simply makes them smaller even if you’re already compressing them. The best minifier I found is the YUI Compressor, which handles both CSS and javascript. Here are the results of minifying my site’s stylesheet:

          Uncompressed  gzipped
Original  9814 bytes    2806 bytes
Minified  6164 bytes    1645 bytes
Savings   37.2%         41.4%

The YUI Compressor is pretty easy to use, but using it is going to get annoying unless you automate. To solve this problem, I once again modified my publish CSS script to do the minifying automatically. If you read the first part of this article and you’re wondering how many times I’m going to modify my publish script, this is the final version:

#!/bin/sh

SERIAL_FILE=serial.txt
OLD_SERIAL=`cat ${SERIAL_FILE}`
SERIAL=$((${OLD_SERIAL} + 1))
YUICOMPRESSOR_PATH=/home/avery/yuicompressor-2.4.2/build/yuicompressor-2.4.2.jar

# Concatenate the stylesheets and minify the result with the YUI Compressor.
cat	style.css.original \
	../../plugins/wp-recaptcha/recaptcha.css \
	../../plugins/deko-boko-a-recaptcha-contact-form-plugin/display/dekoboko.css \
	| java -jar ${YUICOMPRESSOR_PATH} --type css > style-${SERIAL}.css

# Point the theme header at the new serialized stylesheet.
sed "s/REPLACE_WITH_SERIAL/${SERIAL}/g" < header.php.original > header.php

# Remove the previous stylesheet (-f so the first run doesn't fail).
rm -f style-${OLD_SERIAL}.css

echo ${SERIAL} > ${SERIAL_FILE}

Using publish scripts for stylesheets and javascript might seem like overkill at first, but that simple script has allowed me to make significant improvements that otherwise would be too time consuming to implement.

11. Avoid redirects

12. Remove duplicate scripts

These two rules are fairly easy to follow. Sometimes you can’t avoid redirects, such as when you move to a new domain or restructure your site. Those sorts of redirects are good: they maintain the reputation your site has built with search engines. That said, a lot of redirects can be eliminated simply by adding a trailing slash to a URL. Be especially mindful if you’re working on hand-coded sites rather than CMS-based sites. Remove duplicate scripts is pretty obvious. Unfortunately, Google Adsense serves the same scripts for each ad on a page, and there’s no way to fix it.
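
An easy way to check whether a given URL incurs a redirect is to inspect the response headers. A missing trailing slash typically shows up as a 301 with a Location header (example.com is a placeholder):

$ curl -sI http://example.com/blog | grep -iE '^(HTTP|Location)'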

13. Configure ETags

I won’t say much about ETags. This YSlow rule actually means configure ETags or remove them entirely. For a site served from a single web server, ETags offer no benefits and some drawbacks, including the fact that Apache generates invalid ETags for gzipped content. I recommend turning them off unless you’re willing to configure them per Yahoo’s ETags best practices. To disable ETags in Apache, add the following to the virtual host configuration or the main .htaccess:

FileETag None
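
You can confirm the change by checking the response headers for a static file; once FileETag None is in effect, this should print nothing (example.com is a placeholder):

$ curl -sI http://example.com/style.css | grep -i '^etag'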

Results

Finally, here are the results that I obtained. First, my YSlow score:

[Screenshot: Final YSlow Performance Tab]

I’m somewhat disappointed that after all of that, my score barely improved, moving from an F (59) to a D (63). Unfortunately, most of the remaining areas of optimization are either unrealistic as in Use a CDN or are out of my control.

What about real performance? The following tables include the minimum, maximum, and average times out of 10 page loads. First, with a cold cache:

Page Load Times Before and After Optimization (cold cache)
                     Minimum  Maximum  Average
Before Optimization  1.243s   1.890s   1.422s
After Optimization   1.024s   1.410s   1.215s
Speed Increase       17.6%    25.4%    14.6%

I’m actually fairly satisfied. Most of the optimizations I did would benefit repeat visitors rather than new visitors. A 15% performance improvement is not bad at all. Next, with a warm cache:

Page Load Times Before and After Optimization (warm cache)
                     Minimum  Maximum  Average
Before Optimization  0.817s   1.176s   0.968s
After Optimization   0.732s   0.920s   0.815s
Speed Increase       10.4%    21.8%    15.8%

These results baffled me. The improvements are only slightly better than the cold cache numbers, and I really thought page loads would speed up more for repeat visitors. After some investigation, it turns out that I was loading pages the wrong way. Specifically, I was hitting the F5 key to reload the page in my warm cache tests. Reloading with F5 has a special meaning in at least Firefox: it tells the browser to check that every item in the cache is the correct version, even if the item is set to expire in 10 years. Here are the numbers without using F5:

Non-F5 Page Load Times After Optimization (warm cache)
Minimum  Maximum  Average
0.397s   0.525s   0.448s

Now, those are some fast page loads. I’m not interested enough to revert all my changes and re-test, but some statistics from YSlow suggest that I probably sped up warm cache page loads by a fair amount. Here’s the initial Stats tab, prior to any optimization:

[Screenshot: Initial YSlow Stats Tab]

And here’s the final Stats tab after all the optimizations:

[Screenshot: Final YSlow Stats Tab]

Of particular note: in the case of a primed cache, I reduced the number of HTTP requests from 25 to 15.

Overall, I’m satisfied with the performance improvements. Even if the improvements are not amazing, working with YSlow is not particularly hard or time consuming. In fact, writing this article took much longer than the actual changes I made to my web site. A 15% reduction in load times is worth an hour or two of writing scripts and changing server configurations.

Using YSlow to Optimize Web Site Performance

An article/tutorial on using the YSlow firebug extension to optimize web site performance.

I’m a big fan of Google Tech Talks. Recently, I caught a particularly interesting one from Steve Souders called Life’s Too Short – Write Fast Code. Steve Souders used to work for Yahoo! as their chief web performance guru; he now does much the same at Google. The talk was a continuation of a previous lecture on a firebug extension that he wrote called YSlow. Suffice it to say, he knows what he’s talking about when it comes to high performance web sites. Anyway, it piqued my interest, so I decided to try to improve the performance of my own site. This article/tutorial and the next chronicle the changes I made and the results that I obtained.

YSlow’s 13 Rules to Improve Web Site Performance (rules 1-4)

YSlow grades web sites based on 13 different rules. Browse to your site, click on the YSlow icon in the lower right corner of Firefox, and click on the Performance tab. Here’s how one post on my site ranked before making any changes:

[Screenshot: Initial YSlow Performance Tab]

Clicking on any rule takes you to a page explaining it in more detail.

A quick note: the screenshot above was taken before I disabled Woopra, which is web analytics software similar to Google Analytics. I had tried it out a couple of months ago and had simply forgotten that it was still enabled. Disabling Woopra improved my grades very slightly on a number of rules, but I don’t include that optimization in the main article because it’s technically a functional change to my site. As explained below, YSlow doesn’t make functional recommendations.

What follows are the changes that I made to my site, rule-by-rule.

1. Make fewer HTTP requests

This is one of the most important rules and it’s also one of the most difficult to implement. It’s worth mentioning at this point that YSlow’s general strategy isn’t to recommend changes that alter the actual content and functionality of a site, but instead suggest changes that will make the same content load faster. In other words, if you have a gallery with 20 images, the recommendation isn’t to cut the gallery down to 5 images. Consequently, Make fewer HTTP requests focuses on three components of web pages: javascript, stylesheets, and background images.

Expanding the rule shows counts of each component type if they exceed YSlow’s thresholds:

[Screenshot: Make fewer requests expanded]

In the case of my blog, the problem comes down to external javascript and CSS. To get a list of page components, click on the Components tab. They’re sorted by type, so it’s very easy to identify the problem files.

[Screenshot: Javascript and CSS components]

I can group these into 4 basic categories: Google Adsense, Google Analytics, reCAPTCHA, and WordPress plugins. Adsense has a particularly egregious number of external javascript files. Unfortunately, there’s nothing I can do about it, as modifying the code is against Adsense policies. Analytics has just one javascript file, and keeping that external is probably a good idea: almost anyone who browses more than a handful of sites will have ga.js cached, as Analytics use is pervasive. Similarly, there’s a good possibility that visitors will have the main reCAPTCHA javascript cached, and the rest of its code is dynamic, so you don’t want to mess with it. Finally, there are the stylesheets from WordPress plugins.

Combining CSS into one file–especially when it’s all hosted on the same server–is a really good idea. In this case, I’m serving my main style.css file and two other stylesheets from the wp-recaptcha plugin and the Deko Boko plugin. I wrote a simple script to combine all of the files into one, like so:

#!/bin/sh

cat style.css.original \
    ../../plugins/wp-recaptcha/recaptcha.css \
    ../../plugins/deko-boko-a-recaptcha-contact-form-plugin/display/dekoboko.css \
    > style.css

I also modified both plugins to no longer include a link to their original stylesheets in the <head> section of the page. Search the PHP files for references to the original stylesheets and comment those lines out; a couple of grep commands for locating them follow.
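
A quick way to locate those references is to grep the plugin directories (the paths are the same ones used in the script above):

$ grep -rn "recaptcha.css" ../../plugins/wp-recaptcha
$ grep -rn "dekoboko.css" ../../plugins/deko-boko-a-recaptcha-contact-form-plugin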

Before moving on, it’s worth mentioning that you should use CSS sprites if at all possible. Many sites use tens of icons per page. Combining those tiny icons into one sprite is a huge performance boost.

2. Use a CDN

One persistent criticism of YSlow is that it focuses on performance improvements to major sites. I don’t really agree with that; in fact, nearly every rule applies equally to both small and large sites. Use a CDN is the exception. A CDN allows you to serve content to users from geographically close servers. It’s a decent performance improvement, but it’s also incredibly expensive.

3. Add an Expires header

What’s the fastest HTTP request? One that never takes place. The idea behind far-future Expires headers is that you should allow permanent caching of any page component that won’t change, or that is unlikely to change very often. If a component does need to be modified, you simply rename it. To accomplish this using Apache, enable mod_expires and add the following to the virtual host configuration or the main .htaccess:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif "access plus 10 years"
    ExpiresByType image/jpeg "access plus 10 years"
    ExpiresByType image/png "access plus 10 years"
    ExpiresByType text/css "access plus 10 years"
</IfModule>

Warning: The preceding configuration means that any time you change an image or stylesheet served by Apache, you must rename it. If a browser or proxy server has the component cached, it will never check whether its version is the same as the version on your server. For images, this is probably not an annoyance; changing an image usually means uploading a new version, in which case it will have a different file name anyway. For stylesheets, this can be annoying if you don’t automate the process.
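
You can confirm that the header is actually being sent by inspecting the response for any file of a matching type (example.com is a placeholder):

$ curl -sI http://example.com/style.css | grep -i '^expires'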

I solved the stylesheet problem with a script. In the same directory as style.css, I created a text file like so:

$ echo 0 > serial.txt

Next, I modified the script that combined the 3 original stylesheets I was using like so:

#!/bin/sh

SERIAL_FILE=serial.txt
OLD_SERIAL=`cat ${SERIAL_FILE}`
SERIAL=$((${OLD_SERIAL} + 1))

cat style.css.original \
    ../../plugins/wp-recaptcha/recaptcha.css \
    ../../plugins/deko-boko-a-recaptcha-contact-form-plugin/display/dekoboko.css \
    > style-${SERIAL}.css

sed "s/REPLACE_WITH_SERIAL/${SERIAL}/g" < header.php.original > header.php

rm -f style-${OLD_SERIAL}.css

echo ${SERIAL} > ${SERIAL_FILE}

The script reads a number in from serial.txt, increments it, creates a new stylesheet with the number appended, and in this case modifies WordPress’ header.php to link to the new stylesheet. Every time I want a different style on my site, I modify style.css.original and run this script.

4. Gzip components

Everyone in every circumstance should enable mod_deflate. Way back in the early days of the web, there were browsers that didn’t play well with gzip; they basically don’t exist anymore. In Apache, enable mod_deflate (it’s probably already enabled) and add the following to the virtual host configuration or the main .htaccess:

<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
</IfModule>
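
To verify that compression is working, request a page with an Accept-Encoding header and look for Content-Encoding in the response (example.com is a placeholder):

$ curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://example.com/ | grep -i '^content-encoding'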

Continued Soon

When I started writing this post, I didn’t realize how long it was going to be. Part two, which will include the other 9 easier-to-implement and less important rules, should be up within a few days. I’ll also include some before-and-after performance statistics.

Part two is now finished: Using YSlow to Optimize Web Site Performance Continued.

Safely Removing External Drives in Linux

Simply unmounting a filesystem is not the ideal way to remove an external USB/firewire/SATA drive in Linux. This tutorial explains why and gives a solution.

Backstory

About a year ago I bought an external SATA drive for backups. My normal usage consisted of:

  1. Power on and connect the drive
  2. mount /media/backup
  3. Run my backup script
  4. umount /media/backup
  5. Power off and unplug the drive

This seemed to work pretty well (at the very least, I wasn’t losing data) except that the drive made a strange sound when I powered it off. It wasn’t a normal drive spin-down sound; it was louder and shorter. So, I googled for authoritative instructions on using external drives with Linux. While most sources suggest doing exactly what I did, it’s not ideal.

It turns out that most cheap external USB/SATA/firewire enclosures don’t properly issue a stop command to the drive when you flick the power switch. Instead, the switch simply cuts power to the drive, which forces the drive to do an emergency head retract. If you think that sounds bad, you’re right. Emergency retracts won’t brick your drive immediately, but if they occur regularly, they put a lot of unnecessary wear and tear on the drive. In fact, some drives track how often this happens with S.M.A.R.T. attribute 192. (Check Wikipedia’s S.M.A.R.T. page for a comprehensive list of attributes.)
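
If your drive exposes that attribute, smartctl from the smartmontools package (covered in more detail in a later post) can show how many emergency retracts the drive has already suffered. /dev/sdb is a placeholder for your external drive:

$ sudo smartctl -A /dev/sdb | grep -i retract

On most drives, attribute 192 is reported as Power-off_Retract_Count.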

Solution

The solution is to spin down the drive via software before turning it off and unplugging it. The best way to do this is with a utility called scsiadd. This program can add and remove drives to Linux’s SCSI subsystem. Additionally, with fairly modern kernels, removing a device will issue a stop command, which is exactly what we’re looking for. Run:

$ sudo scsiadd -p

which should print something like:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: SAMSUNG HD300LJ  Rev: ZT10
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: LITE-ON  Model: DVDRW LH-20A1L   Rev: BL05
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD10EACS-00Z Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05

Identify the drive you want to remove and then issue:

$ sudo scsiadd -r host channel id lun

substituting the corresponding values from the scsiadd -p output. For example, if I wanted to remove “WDC WD10EACS-00Z”, I would run:

$ sudo scsiadd -r 5 0 0 0

If everything works, scsiadd should print:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: SAMSUNG HD300LJ  Rev: ZT10
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: LITE-ON  Model: DVDRW LH-20A1L   Rev: BL05
  Type:   CD-ROM                           ANSI  SCSI revision: 05

You can double-check by looking at the end of dmesg; you should see something like:

[608188.235216] sd 5:0:0:0: [sdb] Synchronizing SCSI cache
[608188.235362] sd 5:0:0:0: [sdb] Stopping disk
[608188.794296] ata6.00: disabled

At this point, the drive is removed from Linux’s SCSI subsystem and it should not be spinning. It’s safe to unplug and turn off.

Using scsiadd directly can be inconvenient because it requires looking up the host, channel, id, and lun of the drive. I wrote a short script that will take a normal Linux device file like /dev/sdb, figure out the correct arguments to scsiadd, and run scsiadd -r. I use this script in my larger backup script.

#!/bin/sh

if [ $# -ne 1 ]; then
    echo "Usage: $0 <device>"
    exit 1
fi

if ! which lsscsi >/dev/null 2>&1; then
    echo "Error: lsscsi not installed"
    exit 1
fi

if ! which scsiadd >/dev/null 2>&1; then
    echo "Error: scsiadd not installed"
    exit 1
fi

# Find the lsscsi line describing the given device file, e.g.
# [5:0:0:0]    disk    ATA    WDC WD10EACS-00Z    01.0    /dev/sdb
device=`lsscsi | grep "$1"`
if [ -z "$device" ]; then
    echo "Error: could not find device: $1"
    exit 1
fi

# Pull "host channel id lun" out of the [h:c:i:l] field.
# (The substr assumes single-digit values, which is typical.)
hcil=`echo "$device" | awk \
    '{split(substr($0, 2, 7),a,":"); print a[1], a[2], a[3], a[4]}'`

scsiadd -r $hcil

It does require the lsscsi command to be present on the system.
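
Assuming you saved the script as remove-drive.sh (the name is arbitrary), usage looks like this:

$ sudo ./remove-drive.sh /dev/sdb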

Monitoring Hard Drive Health on Linux with smartmontools

S.M.A.R.T. is a system in modern hard drives designed to report conditions that may indicate impending failure. smartmontools is a free software package that can monitor S.M.A.R.T. attributes and run hard drive self-tests. Although smartmontools runs on a number of platforms, I will only cover installing and configuring it on Linux.

Continue reading “Monitoring Hard Drive Health on Linux with smartmontools”