Poor man's offsite backup

Before I got some proper offsite backup space* I used to make tarballs of files and dump them in my Gmail. Not the most elegant solution, but it worked and it was free (I know about gmailfs, but I could never get it to work reliably).

This is the script I used to send the files:

#!/usr/bin/env python
import getpass
import optparse
import smtplib
import sys

from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEMultipart import MIMEMultipart

opts = optparse.OptionParser()
opts.add_option('-u', '--username')
opts.add_option('-p', '--password')
(options, args) = opts.parse_args()
if len(args) < 1:
    print "Usage: send_file_to_gmail.py [-u user] [-p pass] filename"
    sys.exit(0)

if options.username == None:
    options.username = raw_input("Username: ")
if options.password == None:
    options.password = getpass.getpass()
if not options.username.endswith("@gmail.com"):
    options.username += "@gmail.com"

msg = MIMEMultipart()
msg['Subject'] = "Backup of %s" % args[0]
msg['From'] = options.username
msg['To'] = options.username

msg.preamble = "Backup of %s" % args[0]
msg.epilogue = " "

fp = open(args[0], "rb")
atfile = MIMEBase("application", "octet-stream")
atfile.set_payload(fp.read())
fp.close()
Encoders.encode_base64(atfile)
atfile.add_header("Content-Disposition", "attachment", filename=args[0])
msg.attach(atfile)

s = smtplib.SMTP()
#s.connect("localhost", "8025")
s.connect("smtp.gmail.com")
s.ehlo()
s.starttls()
s.ehlo()
s.login(options.username, options.password)
print "Sending %s" % args[0]
s.sendmail(options.username, options.username, msg.as_string())
s.close()
 

The above code is made available under the MIT license.
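
For reference, a typical run looks something like this (the archive name and username are just placeholders; the script prompts for the password when -p is omitted):

tar czf documents.tar.gz ~/documents
python send_file_to_gmail.py -u example.user documents.tar.gz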

* While I am writing this the Dreamhost backup space I use has been down for about two weeks (it came back briefly a couple of days ago but is back down again now). I can’t complain too much as it is a free extra they throw in with the hosting package but I may have to restart my gmail sending if this keeps up.

Virgin Media are petty scumbags

For a few days now I’ve been working (a few minutes at a time) on writing a post about how to monitor your bandwidth usage using munin by directly querying your cable-modem. Today however, when I checked my nice munin graph I found it had stopped working at about 8AM. Some further diagnostic procedures revealed that the cable modem was no longer responding to SNMP requests.
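
The queries behind that graph were nothing exotic, just the modem's standard interface counters over SNMP; something like the following (the address and community string shown are common defaults rather than necessarily mine) is the sort of request that stopped getting answers:

snmpget -v1 -c public 192.168.100.1 IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2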

After a bit of googling I discovered the reason for this is that my ISP, Virgin Media, have deliberately disabled SNMP access. Reports vary on why, some claiming “performance”, others “security”, but both reasons are utterly bogus. All this does is deny customers basic information about their connection, like bandwidth usage, and I can only conclude that Virgin Media want to make it more difficult for people to dispute their figures.

This isn’t enough to make me ditch VM, but it has pissed me off, and the next time I move house VM are going to be waaay down the list of providers I’ll look at.

PS. My router doesn’t support SNMP either so I can’t use that.

Who-T: Synaptics 1.1 and what your touchpad can do now

The synaptics driver. So, here’s a list of things that have changed recently with version 1.0 and 1.1.

Perhaps the most important changes have to do with auto-scaling. Synaptics obtains the touchpad dimensions from the kernel and adjusts speed, acceleration, the edges and more depending on these dimensions.

via Who-T: Synaptics 1.1 and what your touchpad can do now.

Let my kernel be Free

Playing a movie using Free driver

There has been some interesting progress on the open-source drivers for ATI graphics cards recently. It has long been a goal of mine to have a completely Free kernel, it’s why I bought this laptop rather than an NVidia-based one. I want better 3D performance than Intel can offer and was originally going to go NVidia because their Linux support was better. Then ATI released the first load of specs for their cards and I switched my preference. In the end I bought a Toshiba laptop with a Radeon Mobility 2600 graphics chip (also known as an M76 chip).

I’m not a zealot when it comes to Free software, I am a pragmatist, I use whatever works best for me. But experience has taught me that when it comes to Linux kernel code the non-free stuff is always problematic. Practically every kernel crash I’ve ever had can be traced back to some non-free driver I had in there. Who is to blame for this I’m not going to go into, there are arguments on both sides, but the fact is that if you want a completely stable Linux system you are better off keeping the kernel Free.

Of course it takes time to write drivers for something as complex as a graphics card, so up till now I’ve been using the proprietary fglrx drivers from ATI. A necessary compromise, since when I first installed Debian they were the only thing that would get X to even run. It works well enough once you learn to avoid doing the things that cause it to crash (like logging out of KDE), but it adds a hassle to upgrades (having to recompile a kernel module, which often fails and needs custom patches) and is rather finicky about its settings. I’ve periodically tested the state of the Free drivers, both radeon and radeonhd, and seen steady progress.

Today I found that they have finally reached the good-enough point where I can switch. The must-have feature for me is tear-free video playback that doesn’t cause my CPU usage to skyrocket (and hence kick the fan up to unacceptable noise levels). This means the driver needs accelerated 2D and xv support, which has now been achieved.

One quick point for anyone else wanting to switch to the Free driver from fglrx: it won’t work unless you uninstall fglrx. If you just install them side by side and tell X to use the Free driver, it will show a corrupted image or a black screen and then hang. I thought there had been a major regression until I tried removing fglrx, after which both free drivers worked perfectly.

I’m using the xserver-xorg-video-radeon driver from Debian sid, along with a custom kernel to get the required DRM support. The latest (2.6.29) kernel in sid doesn’t have the versions of radeon.ko and drm.ko needed for the 2D acceleration to work. There are two ways to get these: you can compile a new kernel from the drm-rawhide branch of Dave Airlie’s tree (http://git.kernel.org/?p=linux/kernel/git/airlied/drm-2.6.git;a=summary), or you can follow the instructions on the X wiki to compile new versions of just those two modules.
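
Roughly, the first option looks something like this (the clone URL follows the standard kernel.org layout and the build steps are the usual generic ones, so treat it as a sketch rather than exact instructions):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git
cd drm-2.6
git checkout -b drm-rawhide origin/drm-rawhide
cp /boot/config-$(uname -r) .config   # start from the running kernel's config
make oldconfig                        # answer the new DRM/KMS questions
make -j2 && sudo make modules_install && sudo make install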

I’m using a whole new kernel because I also want to experiment with kernel mode setting. Since radeonfb has never worked on this hardware and the best resolution vesafb can give me is 1024×768 (native res of the LCD is 1280×800), KMS is the only hope I have of getting a VT at the proper resolution.

There is no 3D support in the version of the driver I’m using, but support for that is being written as I type so I’m hopeful I won’t have to do without for long. It isn’t a huge loss for me since I only use it for playing KotOR under wine and if I get desperate I can reinstall it in Windows (I dual-boot the laptop). 3D would be good for when I switch to KDE4 but again, I can live without it for a while. I’m happy with what I have for now, and it won’t be that long before full 3D support comes along (and I won’t have to fiddle with code to get it, just aptitude upgrade).

Maintaining a WordPress install with git

A while ago I started tracking WordPress updates using Subversion. Instead of downloading the official release from the WordPress website I checked out the code from the stable branch of their Subversion repo. This made upgrading as simple as running “svn up” and I didn’t lose any modifications I had made to the code.
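
For anyone unfamiliar with that setup, it amounted to something like this (the URL is WordPress's public read-only mirror and the branch number is just an example):

# one-off checkout of the stable branch into the web root
svn co http://core.svn.wordpress.org/branches/2.7/ .
# later, to upgrade in place
svn up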

While this worked very well, it had one big drawback: I couldn’t put the themes and plugins I use into version control, because they sit inside the directories checked out from the official svn repo.

Recently I’ve been switching to git and thought that switching the WordPress install to git would solve this problem. My first attempt was using the git-svn tool to convert the upstream Subversion repo to a local git one. This would have enabled something very similar to what I was doing before with the “svn up” command: one git-svn command would have updated the code to the latest version.

The problem with that idea is that the WordPress repo contains over 10,000 revisions. Duplicating that via git-svn would have taken days and all I really need is the current version. I looked for a while to see if there was a way to limit the number of revisions copied but apparently there isn’t.

So I fell back to searching the web, and I came across Tracking WordPress using Git, which is pretty close to what I wanted. It is a little bit more work to do the updates that way, but not that much.

What I need though is an internal dev version of the site on one branch, the live site on another and the upstream code on a third. Then I need to push the live branch to my web host, and I also need to be able to pull any emergency fixes I make direct on the live site back into my local git repos. Pushing changes with git is a little tricky since a push doesn’t update the checked out files, but I found a helpful blog entry A web focused git work-flow, which gives me exactly what I want.

The one remaining problem I have is that I don’t completely understand how git merges work. The internal dev branch and the live branch have some differences (paths etc.) that I don’t want merged. Whenever I try to merge the branches, though, it invariably changes one branch to exactly match the other and auto-commits. I’m sure git can merge some changes but not others without manual intervention each time, but I haven’t worked out how yet.

Okay, after a bit more time working with this setup it seems the problem is bi-directional merges. Merging from dev to live works great right up until I do a merge from live to dev, at which point git seems to forget that there is some stuff I don’t want merged the next time I go from dev to live. Preventing the auto-commit makes this manageable though; to do this you (confusingly) need to specify "--no-commit --no-ff". You’d think --no-commit would be enough on its own, but apparently not.
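
In practice that merge now looks something like this, using the branch names from the list below (wp-config.php is just an example of a file whose differences I want to keep):

git checkout external                   # the live branch
git merge --no-commit --no-ff internal  # bring in the dev changes, but stop before committing
git checkout HEAD -- wp-config.php      # restore the live version of anything I don't want merged
git commit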

So in my local copy I have 3 branches:

  • master, the extracted WordPress tarball
  • internal, my internal development/test site
  • external, copy of the live site

I have the following settings on my local git repo (private details redacted) to make pushing changes easier; with these in place I don’t have to remember any parameters for the git push command.

branch.internal.remote=origin
branch.internal.merge=refs/heads/internal
remote.external.url=ssh://$USER@$HOST/~/$GIT_PATH
remote.external.fetch=+refs/heads/*:refs/remotes/external/*
branch.external.merge=external
branch.external.remote=external
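
For the record, those entries can be created with plain git commands rather than by editing .git/config by hand; the remote add line fills in the url and fetch settings automatically:

git remote add external ssh://$USER@$HOST/~/$GIT_PATH
git config branch.internal.remote origin
git config branch.internal.merge refs/heads/internal
git config branch.external.remote external
git config branch.external.merge external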

Then on my web host I have the bare git repo at $GIT_PATH that gets pushed to, and the actual live site. Both of those only have the external branch.

To update the live site all I need to do is make the changes on my local copy of the external branch, then run “git push”. This sends the changes up to the bare repo, which then automatically updates the live site.
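
The automatic update is the hook trick from the work-flow article mentioned above: a post-receive hook in the bare repo that goes over to the live checkout and pulls. Something along these lines does the job (the live path and the remote name are placeholders for whatever the host actually uses):

#!/bin/sh
# $GIT_PATH/hooks/post-receive on the web host (must be executable)
cd $HOME/public_html || exit   # wherever the live checkout actually lives
unset GIT_DIR                  # the hook runs with GIT_DIR pointing at the bare repo
git pull origin external       # assumes the live checkout was cloned from the bare repo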

Then when a new version of WordPress gets released, what I will do is:

  1. Switch to the master branch.
  2. Delete everything except the git dir.
  3. Unpack the new files to replace the deleted ones.
  4. Commit those to git and it will work out what has changed for that release.
  5. Change to the other branches in turn and git merge from the master branch.

This should update me to the latest release without losing any of my local themes, plugins or mods.
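
As a rough sketch (the tarball name is just an example), those steps boil down to:

git checkout master
find . -maxdepth 1 ! -name '.git' ! -name '.' -exec rm -rf {} +   # delete everything except the git dir
tar xzf ~/wordpress-2.8.tar.gz --strip-components=1                # unpack the new release over the top
git add -A && git commit -m "WordPress 2.8"                        # git works out what changed
git checkout internal && git merge master
git checkout external && git merge master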

I could go one step further and create “upstream” branches for each plugin I use in a similar manner to the main upstream WordPress branch. That would allow me to mod the plugins, and still upgrade them easily. I’m not going to do that though (yet) as I think it would be more trouble than it is worth.