SQL annoyances

So here’s a nice little pickle I got myself into: migrating a SQL Server 2008 database to another server this morning, I’m confronted with a nice little F-U message:

The database “x” cannot be opened because it is version 661. This server supports version 655 and earlier. A downgrade path is not supported.

Nice, eh? I thought it was sweet.

This cute little error is due to the fact that SQL Server 2008 and 2008 R2 databases have different internal version numbers (655 and 661, respectively; not to be confused with the “compatibility level” setting). What this means, in essence, is that databases can’t be migrated from one flavor of SQL Server 2008 to the other using the traditional detach-attach method. Oh, and in case you’re wondering: no, you can’t use a simple backup-restore operation either; nice try, though.

So — am I screwed?

Note that this is only a problem if you’re going from a newer version of SQL Server to an older one (in my case, this was from SQL Server 2008 R2 to ‘plain old’ SQL Server 2008). If this is happening to you, don’t worry: there are ways to coax your database into its new environment. The simplest, of course, is to use the same version of SQL Server 2008 as your old machine. But perhaps this isn’t what you would like to hear. That’s certainly not what I wanted to hear: upgrades are free if and only if you have Software Assurance.

So here are a few possibilities; each a wee bit suck-y, if you ask me:

  • Script all your database objects to a giant, mahoosive SQL file. Not great, but feasible if you have a small database.
  • Have both SQL Server 2008 and R2 running on the same machine; link instances, and run Export Data. Database Services and Replication features must be installed. Unfortunately, what this does is upgrade your shared components. Suck.

If you have any other means of doing this, be a pal and let me know, won’t you? 😀

”My VMware log partition is full!” – problem, cause, mitigation

Hello folks 🙂  Been a while since I’ve last posted. I keep making vows that I will post regularly, and do so for about a month — and then, things get hectic again and I forget this site’s very existence. My solution is for me to quit whinging about how irregularly I post and continue to post relevant shite. No use posting for the purpose of posting, methinks. Fair enough?

Anyway, I finally got something off my plate today. It’s something that I’ve been meaning to write about, mainly because the reason for its occurrence is unintuitive, it’s a silly problem to encounter in a production environment, and it’s relatively easy to resolve:

The problem

I first encountered this issue a few months ago; we’d been knee-deep in virtualizing a dozen servers for a client when, suddenly, the ESX machines stopped being able to start VMs. We thought “OK, that’s weird”, and poked around the vSphere client logs. Cue a puzzling message: “No space left on device”. That couldn’t be right: the SAN we were using was brand new and practically empty. Since nothing else was working, we restarted the servers.

You can probably guess what happened next: physical servers come back up, and now none of the VMs will start. Luv’ly.

Fortunately, we did finally decide to open up an SSH session to check the logs there for additional clues… and discovered that the /var/log directory (which has its own partition) was chock-full of logs.
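If you ever find yourself in the same spot, two bog-standard commands will tell you quickly whether (and why) a log partition is full. The paths below are the usual Linux defaults rather than anything ESX-specific, so treat this as a sketch:

```shell
# How full is the log partition? (-h gives human-readable sizes)
df -h /var/log

# Which directories inside it are the biggest offenders?
# (du in kilobytes so that sort -n orders them correctly; permission errors suppressed)
du -sk /var/log/* 2>/dev/null | sort -n | tail -5
```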

The cause

VMware’s KB article explains this problem in detail, and actually provides a decent resolution… But here’s why I think this is unintuitive: although these ESX (and ESXi) boxes are *nix servers, absolutely everything is administered via the vSphere client, so it’s easy to forget there’s an ordinary filesystem, with finite partitions, humming along underneath.

The offensive security perks

Want to mess with the sysadmin? Flood his/her ESX box’s syslog file! That’s right, folks — by virtue of flooding the syslog file, the admin won’t be able to start a VM, use vMotion, etc etc…

A solution

One possible way to prevent this kind of issue is to rotate your logs; there’s a good explanation of how this is done here. Setup is rather simple; as a matter of fact, you’ll find that many distros have log rotation implemented out-of-the-box… So why hasn’t VMware? I’m speculating, but I would imagine that since the only purpose of ESX is to run other machines, VMware decided that 1) the volume of logs was low enough that they could do away with rotation, 2) they actually wanted to keep logs from being overwritten for debugging purposes and 3) they figured that in the worst-case scenario, a full log partition would tip administrators off that something was wrong in the first place. Since this is pure speculation, I won’t go into how bad an idea this was or how a more elegant solution could have been found.
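For the curious, here’s roughly what a logrotate stanza looks like on a garden-variety Linux box. This is a hypothetical illustration (the file path and log name are placeholders; ESX’s log layout varies by version), meant only to show the mechanism:

```
# /etc/logrotate.d/vmkernel -- illustrative only; adjust the path to your setup
/var/log/vmkernel {
    weekly        # rotate once a week
    rotate 4      # keep four old copies
    compress      # gzip rotated logs
    missingok     # don't complain if the log is absent
    notifempty    # skip rotation when the log is empty
}
```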

Nevertheless, if you are not ecstatic about losing valuable log information due to rotation, you could possibly set up your ESX boxes to log to a centralized rsyslog server over TLS. This is something that you should consider doing anyway – log consolidation’s a pretty hot topic nowadays.
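As a sketch, plain remote forwarding from the service console is a one-liner in /etc/syslog.conf (“loghost” is a placeholder for your log server). Note that classic syslogd forwards over plain UDP; getting actual TLS transport means layering rsyslog or syslog-ng on top of this:

```
# /etc/syslog.conf on the ESX host -- forward everything to a central server
*.*    @loghost
```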

On my side, I’ve written a very simple bash script which you can set to run as a cron job. It checks how much disk space is used on the log partition and sends a message to syslog if usage hits 97% or more – you can then configure syslog to log to another server, or set up swatch to e-mail you if the message ever shows up in your syslog:

export diskcheck=`df -h | grep /var/log | grep '9[789]%'`
test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"

Silly, innit? But it works. Note, however, that if your log fills up really really fast, you might not get the message before it’s too late.
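One small caveat with the grep approach: ‘9[789]%’ matches 97–99% but not 100%, so a completely full disk slips through silently. A slightly more robust sketch (my own variation, not anything official from VMware) parses df’s Use% column numerically instead:

```shell
#!/bin/sh
# Compare /var/log usage against a numeric threshold instead of pattern-matching.
THRESHOLD=97
# df -P guarantees one data line per filesystem; field 5 is "Use%", e.g. "98%"
usage=`df -P /var/log | awk 'NR==2 { sub(/%/, "", $5); print $5 }'`
if [ "$usage" -ge "$THRESHOLD" ]; then
  logger "Log disk is getting low on space: ${usage}% used"
fi
```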

Well, that’s me for now. Back to work!

ADDENDUM: I’ve modded my script so that it can run as a service. The script below should be saved as /bin/vmwareDiskCheck.sh …


doservice () {
  while true; do
    export diskcheck=`df -h | grep /var/log | grep '9[789]%'`
    test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"
    sleep 10
  done
}

doservice &

… and this script should be saved as /etc/init.d/diskCheck:

# Init file for VMWare Log partition check
# chkconfig: 2345 55 25
# description: Routinely checks that /var/log isn’t too full.
# processname: diskcheck

# source function library
. /etc/rc.d/init.d/functions



# path to the monitoring script
path=/bin/vmwareDiskCheck.sh
RETVAL=0

start() {
  $path &
}

stop() {
  # use pgrep to determine the forked process
  # kill that process
  proc=`pgrep vmwareDiskCheck`
  kill $proc
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart}"
    RETVAL=1
    ;;
esac
exit $RETVAL

Comments or improvements welcome!

ADDENDUM 2: If you prefer a cron job, you can drop a script in your /etc/cron.hourly/ directory with the following code (don’t forget to make your script executable!)

  #!/bin/sh
  export diskcheck=`df -h | grep /var/log | grep '9[789]%'`
  test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"