Outliers in State Healthcare

I was privileged to be able to attend the SciBraai/Code4SA/ICFJ DataQuest event this past Saturday. A simple but effective formula – put a bunch of techies, data scientists and journalists in a room, ply them with coffee and a great braai, and give them free rein over all the data you have – then see what they come up with.

The entire initiative is very inspiring. Code for South Africa has made it their mission to drive data-driven journalism, and the event made it clear that there are stories in our data that the rest of us need to see.

My personal favorite was a project about our dams and water supply in the Western Cape, vividly illustrating where our water comes from – and it’s not what you think. It was also an eye-opener to learn how far our water supplies have dwindled since last year: the equivalent of 51 times the volume of the Cape Town Stadium.

My project took on the healthcare angle. I teamed up with a data scientist who prefers to fly under the radar, and Bibi-Aisha, an eNCA reporter. We made use of the South African Hospitals Survey 2011-2012 data (link) – it was produced by a survey initiative several years ago, and is pretty comprehensive, covering public healthcare facilities right across South Africa.

Comprehensive, and sobering.

Map of SA public healthcare facilities c. 2012

The survey data tracked performance on several points – how often the hospitals were open, how many people were on staff, the leadership and governance, infrastructure, and so on. The facilities were largely self-rated, and a summary “Overall Performance” percentage was calculated for each one.

Survey summary

As far as we could see (and based on the report’s own standards), any hospital scoring 80% or above on the Overall Performance score was well-run. We split the remainder into 0-40% for red and 41-79% for yellow. The first thing that hit us was that only 16% of the facilities in this survey actually passed Government standards.
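As a quick sketch of that banding (the CSV layout here – facility name in column 1, Overall Performance percentage in column 2 – is a hypothetical, not the survey’s actual format):

```shell
# Bucket Overall Performance scores into the red/yellow/green bands
# described above. Input format is an assumed "name,score" CSV.
bucket_scores() {
  awk -F, '{
    if ($2 >= 80)      band = "green"   # well-run per the report standard
    else if ($2 >= 41) band = "yellow"
    else               band = "red"
    print $1 "," band
  }'
}

# Two hypothetical facilities:
printf 'Facility A,83\nFacility B,35\n' | bucket_scores
```

Running that prints one band per facility, which is essentially all our map colouring did.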

Of course, the obvious caveat: this data is 3 years old, and things have changed in healthcare since then. Personally I don’t think they could have changed that much, though, given the myriad other issues the Government has had to deal with over the last 3 years.

But there was something else surprising – we started finding outliers. Clinics and hospitals in remote areas, suffering the same issues as their neighbours, and yet were able to report higher scores despite that:

An outlier in the data

An outlier in the data

An outlier in the data

I think there’s a story there. For whatever reason, these facilities are performing better, and they might have lessons to share with the rest of us about how they’re doing it.

Related links:

If you build/clone/enhance/report on any of this, I’d love to hear about it!

Use Amazon S3 for quick-and-dirty MySQL backups

I manage a few basic web applications, and recently had to make a potentially breaking database change. I thought about just dumping out the database to a .sql file so I could restore it later, but then figured I might as well kill two birds with one stone, and solve the problem for good.

The server is a Debian box. I set up a shell script that dumps out a MySQL database, compresses it, and pushes it to Amazon S3. It’s a pretty straightforward setup.

On S3’s side I created a bucket (bucket-name) in the US Standard region, then created an IAM user, saved the Access Key and Secret Key, and attached an Inline Policy granting the user list access on the bucket and get/put/delete access on its objects:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::bucket-name"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::bucket-name/*"
                ]
            }
        ]
    }

That sets up the user to work specifically with one bucket in your S3 environment. I then installed the s3cmd commandline utility:

sudo apt-get install s3cmd

You need to configure it, which is done interactively:

s3cmd --configure

Once it’s configured, it just takes a basic shell script:


#!/bin/bash

now=$(date +"%Y-%m-%d")
_file="backup-$now.tar.gz"

# Dump the database (note: no space between -p and the password)
mysqldump -u username -ppassword database > ~/backup.sql

# Compress
tar -czvf "$_file" -C ~ backup.sql

# Push to Amazon
s3cmd put "$_file" s3://bucket-name/"$_file"

# Cleanup
rm "$_file"
rm ~/backup.sql

Make the script executable with chmod +x and you’re good to go! Set it up to run via cron once a day, and maybe configure S3 to only retain the last 28 days of daily backups, and there you have it – instant backup solution, just add water :)
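For the cron side, a daily entry would look something like this (the script path is hypothetical, and the s3cmd expiry command is available in more recent s3cmd versions – otherwise set the lifecycle rule in the AWS console):

```shell
# m h dom mon dow  command
# Run the backup script every day at 02:00 (path is an assumption)
0 2 * * * /root/mysql-s3-backup.sh

# One-off: ask S3 to expire objects in the bucket after 28 days
# s3cmd expire s3://bucket-name --expiry-days=28
```

Edit your crontab with crontab -e and paste the schedule line in.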

Preparing a new Ubuntu 14.04.2 Server for Laravel 5.1

You’re gonna want to put on your admin hat! This guide is written for a new Ubuntu 14.04.2 LTS droplet built on digitalocean.com. We’ll go for repository versions of all these components. First, this guide is 99% accurate for 14.04.2 and will get Nginx, MySQL and PHP taken care of.

Bonus: Jenkins. I love Jenkins, mainly because it’s a one-install server that lets me schedule automated builds and other deployment tasks via my browser. It’s easy to set up, easy to secure, and once it’s set up you may never need to SSH into the server again. Just follow the regular instructions at http://pkg.jenkins-ci.org/debian/ – that will put Jenkins on port 8080 – then follow this quick guide to enable some basic security.

We’ll need some more software though. This is taken from the list of components that Homestead uses, and all commands are being run as the root user.

apt-get install git redis-server beanstalkd memcached php5-cli php5-mcrypt php5-curl nodejs nodejs-legacy npm

An oddity here – you need to manually enable mcrypt before PHP can use it:

php5enmod mcrypt

With that done, install the tools Laravel uses:

npm install -g bower grunt gulp

And last but not least, Composer:

curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer

That will add “composer” as a CLI command in one go. So we’ve got a Laravel-capable server now.

Let’s get a basic CI pipeline going using Jenkins. We’ll set up something simple that will pull changes from our develop branch into a develop directory on the server, and run any migrations or updates that are required. This is not the best way to handle production releases (you’ll want to use versioned-everything on the server), but it’ll do for a quick setup.

Before getting started – database access. Default Laravel 5.1 applications are configured to connect as ‘forge’ to localhost with no password, and use the ‘forge’ database. You might have different details in your project, in which case you’ll need to configure MySQL with a matching username, password and database. We’ll just set it up to handle Laravel’s system defaults here:

mysql -uroot -p
Enter password:
create database forge;
grant all on forge.* to 'forge'@'localhost' identified by '';
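For reference, those defaults correspond to this database block in a Laravel 5.1 .env file (these are the stock values, not anything project-specific):

```shell
DB_HOST=localhost
DB_DATABASE=forge
DB_USERNAME=forge
DB_PASSWORD=
```

If your project uses different credentials, mirror them here and in the MySQL grant above.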

There’s a little more server-side prep to do – folders and SSH keys. Prepare the folders by ensuring the jenkins user has write access. I prefer having my projects live in a root folder, with symlinks out to the html root:

mkdir -p /projects/project-1/develop
cd /projects
chown -R jenkins:jenkins *

Now set up Jenkins’ SSH key. On the server, run:

su jenkins
ssh-keygen -t rsa

Accept all the defaults. That should create a key in /var/lib/jenkins/.ssh/. Run:

cat /var/lib/jenkins/.ssh/id_rsa.pub

Copy the public key, and save it somewhere if you’re going to use this same server to pull and compile multiple repositories. Right now you’ll just want to add it as a Deployment Key under the repository Settings in Bitbucket.

Finally, we’ll set up the symlink. It’ll be broken until the project itself is checked out for the first time. Start by dropping the default html folder:

cd /var/www
rm -rf html

Now create a symlink that points to where the public subfolder will be:

ln -s /projects/project-1/develop/public/ html

Finally, to Jenkins. We’re going to use the most basic job possible – a straightforward series of commandline instructions. Create a new Freestyle project and give it a name. Under the Build heading, click Add Build Step -> Execute Shell. In there, we’ll just put the commands we would have run ourselves:

# Move to working directory
cd /projects/project-1/develop;

# If artisan exists and composer has run, bring the app down for maintenance
if [ -e artisan ] && [ -d vendor ]; then
    php artisan down
fi

# If we've already cloned, update - otherwise clone from scratch
if [ -e artisan ]; then
    git pull
else
    git clone -b develop git@bitbucket.org:woganmay/project-1.git .
fi

# Ensure storage folders are writeable
chmod -R 0777 storage/;

# Run updates and migrations
composer update;
php artisan migrate --force;

# Bring us back out of maintenance mode
php artisan up;

Save the job config and hit Build Now. This will schedule a new job and run it immediately, showing the status under the Build History widget. You can view the output of the job as it runs by hovering over the little down arrow by the build light, and clicking Console Output.

This will show the raw output from all your commands. It’ll take a while the first time around, as it has to download all the composer packages for the first time. Subsequent runs will be a lot faster as it installs everything from cache.

When it’s done you should see ‘Finished: SUCCESS’ towards the bottom of the Console Output. That means the project deployed correctly, and you should now be able to browse to it via HTTP. Future deployments can be kicked off by logging into Jenkins and clicking Build Now.

Securing a default Jenkins install

Jenkins has some insanely granular permission controls, but when you install it for the first time, the default is to allow 100% public access to everything. Obviously you’ll want to fix that.

First, click Manage Jenkins, then Configure Global Security. Tick Enable security, then select Jenkins’ own user database, and ensure Allow users to sign up is ticked, and save. We’re leaving Authorization on Anyone can do anything for now.

This will expose the sign up link on the top right.

Sign up to create a login for yourself. You’ll be authenticated immediately. Go back to Manage Jenkins -> Configure Global Security, and flip the Authorization switch to Logged-in users can do anything. Save.

So that gives us a sane default – we have basic login control. You might want to disable Allow users to sign up if this instance of Jenkins is internet-facing.

We can go one additional step further – flip Authorization to Matrix-based security. Add your own username to the matrix, select Administer, and save. What this will do is remove ALL the rights Anonymous users have, so where the public might have seen the People list, or the Jobs list, now all an Anonymous user will get is a login prompt.

Youth Day 2016

Quoting the official Government of South Africa website:

In 1975 protests started in African schools after a directive from the then Bantu Education Department that Afrikaans had to be used on an equal basis with English as a language of instruction in secondary schools. The issue, however, was not so much the Afrikaans as the whole system of Bantu education which was characterised by separate schools and universities, poor facilities, overcrowded classrooms and inadequately trained teachers.

So really, not much has changed in the last 40 years, then!

Just something to remember on this, the 40th anniversary of the June 16, 1976 Soweto uprising.

Recover Windows 8 Key from BIOS using Linux

Machines that ship with Windows 8 will have the product key written to the ACPI table on your hardware – so even if you reformat, the product key remains embedded in your machine. There are quite a few ways to recover this key if you’re running Windows, but in my case, I had reformatted and installed Ubuntu.

Turns out it’s not a problem. Install the free acpidump utility:

sudo apt-get install acpidump

Then run it with sudo:

sudo acpidump

Then check the output for a block starting with MSDM – the string embedded in that block looks very much like a product key!
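If you’d rather not eyeball the full dump, the same table is exposed under /sys on most modern kernels. The offset here – a 36-byte standard ACPI header plus 20 bytes of licensing metadata before the key – is my reading of the MSDM layout, so treat it as an assumption:

```shell
# The product key is everything after byte 56 of the MSDM table,
# so skip the header and print the rest (trailing echo adds a newline).
sudo tail -c +57 /sys/firmware/acpi/tables/MSDM; echo
```

If the output looks garbled, your table layout may differ – fall back to the acpidump method above.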

Change DNS Servers on the Telkom 921VNX PACE Router

The VDSL router that Telkom ships comes with a web-based admin panel, which for some reason will not let you configure the DNS servers that your LAN devices use. PuTTY to the rescue!

Use PuTTY (or whatever else) to SSH into your router’s LAN IP address. The default credentials are:

Username: admin
Password: nology*/

You’ll get a very minimal CLI. To update the config related to your LAN, do:

cd LANDevice
cd 1
cd HostConfig

This will bring up a bunch of settings related to your LAN. One thing you’re looking for is the Min and Max address config items – this should define the IP range your devices are on:

LANDevice_1_HostConfig_MinAddress = []
LANDevice_1_HostConfig_MaxAddress = []

If your computer is not within that range, you may want to check LANDevice subdirectories 2 and on. If it matches though, you’ll see an entry you can’t edit via the web interface:

LANDevice_1_HostConfig_DNSServers = []

To set, for instance, Google Public DNS (8.8.8.8 and 8.8.4.4), you’d do:

set DNSServers,

That will result in your router cycling something, and your connection will go down for a bit. When it comes back up, the router will use those DNS servers to handle queries coming from the LAN. If that doesn’t work, try rebooting the router (just ‘reboot’ from the CLI).