
Blog Housekeeping & More Of The Same

Since I’ve been working on the backend servers a lot over the past few days, I decided it was time to get some broken things on the blog fixed.

Firstly, the radiation monitor graphs. Originally I was using a Raspberry Pi to grab the data from the local monitor, which connected via FTP to the server over in the datacentre to push its graph images. Since the server is now on the same local network as the monitor, there’s no need to faff about with FTP servers, so I’ve rejigged things with some perl scripts from cristianst85 over on GitHub, running on the web server itself.
I deviated from the suggested location for the scripts on the server & opted to store everything within the Experimental Engineering hosting space, so it gets backed up along with everything else on a nightly basis.

The graphs are also accessible from the menu at top left; the script pulls data from the monitor & updates the images every 60 seconds via a cron job.
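
Since cron’s granularity is one minute anyway, the entry itself is about as simple as they come – the script name & path below are illustrative stand-ins, not the actual ones from cristianst85’s repository:

# Regenerate the radiation monitor graphs every minute
* * * * * /home/expeng/uradmonitor/update-graphs.pl >/dev/null 2>&1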

I’ve removed a couple of dead pages from the blog system, along with some backend tidying of the filesystem. Over the years things have gotten quite messy behind the scenes. This blog is actually getting quite large on disk; I’ve hit the 15GB mark, not including the database!

Caching is now enabled for all posts on the blog, which should help speed things up for repeat visitors, but as most of my content is (large) image based, this might be of limited help. I’m currently tuning the MySQL server for the load conditions, but this takes time, as every time I change some configuration settings I have to watch how things go for a few days before tweaking some more.
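
The tuning itself is mostly a case of nudging a few values in my.cnf & watching the result over a few days. These numbers are illustrative starting points rather than my actual settings:

[mysqld]
# Biggest single win for InnoDB-heavy loads - size to fit the working set in RAM
innodb_buffer_pool_size = 512M
# The query cache can help a read-heavy blog on MySQL 5.x (it was removed in MySQL 8)
query_cache_type = 1
query_cache_size = 64M
# Keep the connection count sane for a single modest server
max_connections = 100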

Server Control Panels – More Of The Same

Sorry Sentora. I tried, and failed, to convert over to using it as my new server control panel. Unfortunately it just doesn’t give me the same level of control over my systems, so I’ll be sticking with Virtualmin for the foreseeable future. Sentora stores everything in (to me at least) very odd places under /var/, and gave me some odd results with “www.” versions of websites – some www. hosts would work fine, others wouldn’t work at all & just redirected to the Sentora login interface instead. This wasn’t consistent between hosting accounts either, and since I didn’t have much time to get the migration underway, this problem was the main nail in the coffin.

Just storing everything under the sun in /var/ makes life a bit more awkward with the base CentOS install, as it allocates very little space to / by default (there’s no separate /var partition in default CentOS), giving most of the disk space to /home. Virtualmin, on the other hand, stores website public files & Maildirs under /home, saving /var for MySQL databases & misc stuff.

The backup system provided is also utterly useless: there’s no restore function at all, and it just piles everything in the account into a single archive. By comparison, Virtualmin has a very comprehensive backup system built in that supports total automation of the process, along with full automatic restore functionality for when it’s needed.

Sentora did have some good points though:
It handles E-Mail logins & mail filters much more gracefully than Virtualmin does, and comes with Roundcube already built into the interface, ready to use. With Virtualmin the options are to use the Usermin side of the system for E-Mail, which I find utterly awful to use, or to install a webmail client under one of the hosted domains (my personal choice).
Mail filtering is taken care of with Sieve under Sentora, while Procmail does the job under Virtualmin.

Sentora does have a nicer, simpler, more friendly interface, but it hides most of the low-level system stuff away, while under Virtualmin *everything* on the system is accessible, and it provides control interfaces for all the common server daemons.


Securing The New Server & Security In General

This was originally going to be part of another post, but it ended up getting more complex than I first intended, so it’s been given its own. I go into many of my personal security practices, on both my public facing servers & personal machines. Since the intertubes are so central to life these days, and most people use the ’net for very sensitive operations such as banking, strong security is a must.

Since bringing the new server online & exposing it to the world, it’s been discovered in record time by the scum of the internet: SSH was under constant attack within 24 hours, and within that time there were over 20,000 failed login attempts in the logs.
This isn’t much of an issue, as I’ve got a strong Fail2Ban configuration running, which at the moment is keeping track of some 30 IP addresses that are constantly trying to hammer their way in. No doubt these will be replaced with another string of attacks once they realise those IPs are being dropped. I also prevent SSH login with passwords – RSA keys only here.
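
For reference, the relevant configuration looks something like the following – the Fail2Ban thresholds shown here are illustrative rather than my production values:

# /etc/ssh/sshd_config - keys only, no passwords, no direct root login
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no

# /etc/fail2ban/jail.local - ban an address for a day after 3 failures in 10 minutes
[sshd]
enabled  = yes
maxretry = 3
findtime = 600
bantime  = 86400
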
MySQL is the other main target to be concerned about – this is taken care of by disabling root login remotely, and dropping all MySQL traffic at the firewall that hasn’t come from 127.0.0.1.
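
In practice that boils down to making sure the daemon only listens on the loopback interface, with a firewall rule as belt & braces. An iptables rule is one way of expressing the latter – the exact syntax will depend on the firewall in use:

# /etc/my.cnf - never listen on a public interface
[mysqld]
bind-address = 127.0.0.1

# Drop any MySQL traffic at the firewall that did not come from localhost
iptables -A INPUT -p tcp --dport 3306 ! -s 127.0.0.1 -j DROP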

Keeping the SSH keys on an external device & still keeping things simple just requires some tweaking to the .bashrc file in Linux:

alias ssh='ssh -i <Path To Keys>'

This little snippet makes the ssh client look somewhere else for the keys themselves, while keeping typing to a minimum in the Terminal. This assumes the external storage with the keys always mounts to the same location.
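
An equivalent trick without touching .bashrc is a per-user ssh client config – the mount point below is a stand-in for wherever the external storage actually appears:

# ~/.ssh/config - point the client at keys kept on external storage
Host *
    IdentityFile /mnt/keystore/id_rsa
    IdentitiesOnly yes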

Everything else that can’t be totally blocked from outside access (IMAP, SMTP, FTP, etc.) gets Fail2Ban protection along with very strong passwords, unique to each account (password reuse in any situation is a big no-no), and where possible TOTP-based two factor authentication is used for front end stuff. All the SSH keys, master passwords & backup codes are themselves kept offline, on encrypted storage, except for when they’re needed. General password management is taken care of by LastPass; while they’ve been subject to a couple of rather serious vulnerabilities recently, these have been patched & it’s still probably one of the best options out there for a password vault.
There’s more information about those vulnerabilities on the LastPass blog here & here.


This level of security paranoia ensures that unauthorized access is made extremely difficult – an attacker would have to gain physical access to one of my mobile devices with the TOTP application, and have physical access to the storage where all the master keys are kept (along with its encryption key, which is safely stored in Meatware), to gain access to anything.
No security can ever be 100% perfect; there’s always going to be an attack surface somewhere, but I’ll certainly go as far as is reasonable, while not making my access a total pain, to keep that attack surface as small as possible, and therefore keep the internet scum out of my systems.
The last layer of security is a personal VPN server, which keeps all traffic totally encrypted while it’s in transit across my ISP’s network, until it hits the end point server somewhere else in the world. Again, this isn’t perfect, as the data has to be decrypted *somewhere* along the chain.


µRadMonitor RRDTool Graphing

I’ve been meaning to sort out some local graphs for the radiation monitor for a while, and I found a set of scripts put together by a couple of people over at the uRadMonitor forums for doing exactly this with RRDTool.

µRadLogger

Using another Raspberry Pi I had lying around, I’ve implemented these scripts on a minimal Raspbian install, and with a couple of small modifications, the scripts upload the resulting graphs to the blog’s webserver via FTP every minute.

[snippet id="1759"]

This script just grabs the current readings from the monitor, requiring access to its IP address to do so.
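
The fetch itself is nothing fancy, as the monitor has a built-in HTTP server that hands back its current readings. Something along these lines – the address is a stand-in for the monitor’s actual IP, and the exact URL depends on the unit’s firmware:

# Grab the current readings from the monitor over HTTP
curl -s http://192.168.1.50/j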

[snippet id="1760"]

This script sets up the RRDTool data files & directories.
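
For the curious, creating an RRD for this sort of job looks roughly like this – the data source names, limits & retention periods are illustrative, not copied from the forum scripts:

# One-minute step; keep a day of per-minute samples & a month of hourly averages
rrdtool create radiation.rrd --step 60 \
    DS:cpm:GAUGE:120:0:U \
    RRA:AVERAGE:0.5:1:1440 \
    RRA:AVERAGE:0.5:60:720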

[snippet id="1761"]

The final script here does all the data collection from the monitor, updates the RRDTool data & runs the graph update. This runs from cron every minute.
I’ve added a command to automate the FTP upload once the graph generation finishes.
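
The upload addition itself can be a curl one-liner at the end of the script – the credentials, filename & server below are obviously stand-ins for the real details:

# Push the freshly rendered graph up to the blog webserver
curl -T radiation-day.png ftp://ftpuser:password@example.com/uradmonitor/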

This is going to be mounted next to the monitor itself, running from the same supply.

The graphs are available over at this page.