Automatically retrieve list of offline shares on all PCs of a domain

I wrote this short VBS script today to help out a client; basically, you can run this on an Active Directory domain as a login script to see if your users’ offline shares are correctly configured. In this case, each user is supposed to have a ‘U:’ drive that syncs with a file server whenever they’re on campus, and is available whenever they’re on the road. Sometimes, though, the configuration isn’t set for one reason or another… Hence the script.

Enjoy 🙂

' u_sync_check: VBScript that grabs the information on offline files and saves it for later auditing.
' Written by Rick El-Darwish, 14.05.2013. Inspired by these sites: http://bit.ly/13YTyur and http://bit.ly/17q8Dcx

' Grab the name of the computer executing this script:
Set wshShell = WScript.CreateObject( "WScript.Shell" )
strComputerName = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" )

' Compose the path of the file under which this information will be stored. TODO: replace the UNC below with a valid path!
outFile = "\\myserver\myshare\" & strComputerName & ".txt"

' Create the actual file.
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.CreateTextFile(outFile, True)

' Create the WMI object and grab the list of all synched directories.
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\CIMV2")

Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_OfflineFilesItem WHERE ItemType=2",,48)

' For each synched directory, write an entry (one per line) in the file
For Each objItem In colItems
objFile.WriteLine "ItemPath : " & objItem.ItemPath
Next

' Close the file
objFile.Close

”Inventory fun” follow-up… AKA Kace’s built-in service tag and warranty report

Back in September, I wrote a script to grab Dell machines’ warranty information based on their service tags, which I had retrieved from Kaseya or LANSweeper. A pretty nifty trick, or so I thought…

Since then, I’ve been introduced to Kace — a suite of Dell tools for inventory management, scripting, software and patch deployment, and ghosting. Think of it as a solution that offers the functionality of your FOG server, LANSweeper and Kaseya.

The Kace solution is actually divided into two parts: one component handles inventory, application and update deployment, and scripting, while the other handles ghosting and driver management. A “component”, in this context, can be a piece of hardware (a physical server that you connect to your LAN – the O/S is a custom Unix distro) or simply a virtual machine (a VMWare appliance which can run on your existing ESX or ESXi box). The config is rather light — the only thing these devices need is storage space. For those of you who are looking for a free / SOHO-level solution for all your sysadmin needs: stick to OCS Inventory, Zabbix, and FOG… These things have a price tag. Though not free, however, they’re well worth it in my opinion.

I digress. I’ve been porting a lot of my existing stuff to Kace recently; this week, I’m working on the inventory report from September. It turns out that my work is pretty much done: perhaps unsurprisingly, Kace already has a report for extracting machine names, service tags and expiry dates. They have two boiler-plate reports, dubbed “Dell Warranty Expired” and “Dell Warranty Expires in the next 60 days”, which dish out all the information you may need in HTML, CSV or TXT format.

In point of fact, I needed something a bit more customized; I didn’t need the full warranty information, just the date each warranty expires. This is because with our clients, the machines are amortized from the moment we get them to the moment they’re no longer covered under warranty.

The nice thing about Kace is that you can take a boiler-plate report, duplicate it and edit the SQL query directly, like with LANSweeper. This makes reporting a cinch. Here’s my final report for all machines on my campus:

SELECT DISTINCT M.NAME AS MACHINE_NAME,M.CS_MODEL AS MODEL, DA.SERVICE_TAG, DA.SHIP_DATE, M.USER_LOGGED AS LAST_LOGGED_IN_USER,
DW.END_DATE AS EXPIRATION_DATE
FROM KBSYS.DELL_WARRANTY DW
LEFT JOIN KBSYS.DELL_ASSET DA ON (DW.SERVICE_TAG = DA.SERVICE_TAG)
LEFT JOIN MACHINE M ON (M.BIOS_SERIAL_NUMBER = DA.SERVICE_TAG OR M.BIOS_SERIAL_NUMBER = DA.PARENT_SERVICE_TAG)
WHERE M.CS_MANUFACTURER LIKE '%dell%'
AND M.BIOS_SERIAL_NUMBER != ''
AND DA.DISABLED != 1
AND DW.END_DATE = (SELECT MAX(END_DATE) FROM KBSYS.DELL_WARRANTY DW2 WHERE DW2.SERVICE_TAG=DW.SERVICE_TAG AND DW2.SERVICE_LEVEL_CODE=DW.SERVICE_LEVEL_CODE);

It gives you a nice simple list with the machine name, model, service tag, shipment date, warranty expiry date, and the user that last logged on to the system.  Cool eh? Or maybe I’m just easily impressed. Regardless, it saves me time… Yay!

SQL annoyances

So here’s a nice little pickle I got myself into: while migrating an SQL Server 2008 database to another server this morning, I was confronted with this F-U message:

The database "x" cannot be opened because it is version 661. This server supports version 655 and earlier. A downgrade path is not supported.

Nice, eh? I thought it was sweet.

This cute little error is due to the fact that SQL Server 2008 and 2008 R2 use different internal database version numbers (655 and 661, respectively). What this means, in essence, is that a database can’t be moved from R2 back to plain SQL Server 2008 using the traditional detach-attach method. Oh, and in case you’re wondering — no, you can’t use a simple backup-restore operation either; nice try, though.

So — am I screwed?

Note that this is only a problem if you’re going from a newer version of SQL Server to an older one (in my case, this was from SQL Server 2008 R2 to ‘plain old’ SQL Server 2008). If this is happening to you, don’t worry: there are ways to coax your database into its new environment. The simplest, of course, is to use the same version of SQL Server 2008 as your old machine. But perhaps this isn’t what you would like to hear. It certainly wasn’t what I wanted to hear: upgrades are free if and only if you have Software Assurance.

So here are a few possibilities; each a wee bit suck-y, if you ask me:

  • Script all your database objects to a giant, mahoosive SQL file. Not great, but feasible if you have a small database.
  • Have both SQL Server 2008 and R2 running on the same machine, link the instances, and run Export Data (the Database Services and Replication features must be installed). Unfortunately, this upgrades your shared components. Suck.

If you have any other means of doing this, be a pal and let me know, won’t you? 😀

”My VMWare log partition is full!” – problem, cause, mitigation

Hello folks 🙂 Been a while since I last posted. I keep making vows that I will post regularly, and do so for about a month — and then things get hectic again and I forget this site’s very existence. My solution is to quit whinging about how irregularly I post and just keep posting relevant shite. No use posting for the purpose of posting, methinks. Fair enough?

Anyway, I finally got something off my plate today. It’s something that I’ve been meaning to write about, mainly because the reason for its occurrence is unintuitive, it’s a silly problem to encounter in a production environment, and it’s relatively easy to resolve:

The problem

I first encountered this issue a few months ago; we’d been knee-deep in virtualizing a dozen servers for a client when, suddenly, the ESX machines stopped being able to start VM’s. We thought “OK, that’s weird” and poked around the vCenter logs. Cue a puzzling message: “No space left on device”. That couldn’t be right: the SAN we were using was brand new and practically empty. Since nothing else was working, we restarted the servers.

You can probably guess what happened next: physical servers come back up, and now none of the VM’s will start. Luv’ly.

Fortunately, we finally decided to open up an SSH session and check the logs for any additional clues… and discovered that the /var/log directory (which has its own partition) was chock-full of logs.

The cause

VMWare’s KB article explains this problem in detail, and actually provides a decent resolution… But here’s why I think this is unintuitive: although these ESX (and ESXi) boxes are *nix servers, absolutely everything is administered via the vSphere client, so you rarely (if ever) look at the underlying filesystem, and a full log partition is about the last thing you’d think to check.

The offensive security perks

Want to mess with the sysadmin? Flood his/her ESX box’s syslog file! That’s right, folks — by virtue of flooding the syslog file, the admin won’t be able to start a VM, use vMotion, etc etc…

A solution

One possible way to prevent this kind of issue is to rotate your logs; there’s a good explanation of how this is done here. Setup is rather simple; as a matter of fact, you’ll find that many distros have log rotation implemented out-of-the-box… So why hasn’t VMWare? I’m speculating, but I would imagine that since the only purpose of ESX is to run other machines, VMWare decided that 1) the volume of logs was low enough that they could do away with it, 2) they actually wanted to keep logs from being overwritten for debugging purposes and 3) they figured that in the worst case scenario it would be a way for administrators to be tipped off that something was wrong in the first place. Since this is pure speculation, I won’t go into how bad an idea this was or how a more elegant solution could have been found.
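If you’d rather roll your own rotation, the gist is a logrotate stanza for the chattier console logs. Here’s a minimal sketch; the file names and rotation counts are assumptions on my part, so check which files are actually eating your /var/log before copying it:

# Hypothetical logrotate stanza for the noisier ESX console logs.
# The log file names below are assumptions -- adjust to whatever is filling the partition.
cat > /etc/logrotate.d/esx-extra <<'EOF'
/var/log/vmkernel /var/log/vmksummary {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF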

Nevertheless, if you are not ecstatic about losing valuable log information due to rotation, you could possibly set up your ESX boxes to log to a centralized rsyslog server over TLS. This is something that you should consider doing anyway – log consolidation’s a pretty hot topic nowadays.
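To give you an idea, pointing the service console’s syslog at a central collector is a one-liner; the host name below is made up, and this is plain UDP forwarding (for TLS you’d need rsyslog with the gtls driver on both ends, which is beyond a quick sketch):

# Forward everything to a central log host (assumed name), then restart syslog.
echo "*.*    @loghost.example.com" >> /etc/syslog.conf
service syslog restart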

On my side, I’ve written a very simple bash script which you can set to run as a cron job. It checks how much disk space is used on the log partition and sends a message to syslog if usage is in the 97–99% range – you can then configure syslog to log to another server, or set up swatch to e-mail you if the message ever shows up in your syslog:

#!/bin/bash
export diskcheck=`df -h | grep /var/log | grep "9[789]%"`
test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"

Silly, innit? But it works. Note, however, that if your log fills up really really fast, you might not get the message before it’s too late.

Well, that’s me for now. Back to work!

ADDENDUM: I’ve modded my script so that it can run as a service. The script below should be saved as /bin/vmwareDiskCheck.sh …

#!/bin/bash


doservice () {
  while true; do
   export diskcheck=`df -h | grep /var/log | grep "9[789]%"`
   test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"
   sleep 10
  done
}


doservice &

… and this script should be saved as /etc/init.d/diskCheck:

#!/bin/bash
#
# Init file for VMWare Log partition check
#
# chkconfig: 2345 55 25
# description: Routinely checks that /var/log isn't too full.
#
# processname: diskcheck

# source function library
. /etc/rc.d/init.d/functions

path=/bin/vmwareDiskCheck.sh

RETVAL=0

start() {
  $path &
}

stop() {
  # use pgrep to determine the forked process
  # kill that process
  proc=`pgrep vmwareDiskCheck`
  kill $proc
  RETVAL=0
}

case "$1" in
        start)
                start
                ;;
        stop)
                stop
                ;;
        restart)
                stop
                start
                ;;
        *)
                echo $"Usage: $0 {start|stop|restart}"
                RETVAL=1
esac
exit $RETVAL

Comments or improvements welcome!

ADDENDUM 2: If you prefer a cron job, you can drop a script in your /etc/cron.hourly/ directory with the following code (don’t forget to make your script executable!)

#!/bin/bash
  export diskcheck=`df -h | grep /var/log | grep "9[789]%"`
  test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"

Making your Regional Settings consistent across your domain

I’m back in Geneva safe, sound, and all in one piece! SANS Barcelona was a great experience. Got to meet some fantastic folks and learn a whole lot more about forensics 🙂

But this post isn’t about last week – I’m going to need a whole lot more time to blog about that. Today, I’m going to share a fairly simple but useful script that will allow you to apply regional settings consistently across your domain.

This week’s challenge

A colleague’s been trying to set up a script that would allow us to apply custom regional settings to a client’s machines; although the client is located in Switzerland, the environment is anglophone. If you simply set the regional settings to Switzerland, you get all the dates and times in French (or German). This is annoying – really annoying. But not impossible to circumvent; we generally set the regional settings to English and then customize the date, time, metric system, and currencies. One is tempted to just put up with the annoyance – but I figure that we’d also have to put up with the extra annoyance of users (legitimately) asking us why the dates are in a different language… Plus, I’m anal retentive.

Shouldn’t you be able to do this via GPO?

It would seem natural that something like Regional Settings could be applied to an OU, a set of computers, or something like that using GPO. Oddly enough, this does not seem to be the case. In the past, we’ve dealt with this by customizing the default user’s regional settings when we set up the master image for our workstations. Theoretically, this works just fine; in practice, we’ve often found that for some reason or another (patching, PEBKAC, what have you) the settings don’t perpetuate to the user.

Let me pause to explain the way Windows handles regional settings:

  1. Windows stores them in the registry – no big surprise there.
  2. However, it stores them in the user’s part of the registry (a.k.a. the NTUSER.DAT file) – this makes a bit less sense at first, but it does mean that two users on the same box can have different regional settings – super important if, for instance, you’re using RDP in a large organization and users from different countries are connecting to the same application server.
  3. This also means that if your user’s profile gets corrupted, or you start her/him off on a brand new machine with a brand new profile, there’s a chance the regional settings would not be perpetuated.

I’ve been tempted to say that stuff like regional settings isn’t critical. I mean, who cares if it’s dd/MM/yyyy or MM/dd/yyyy, right? No biggie. I mean, except if you’re in Finance and timely payments (not to mention currencies) are crucial. Or if you’re in admin and you’re organizing a lot of meetings and that inversion can create a lot of confusion with your attendees. Or if you’ve got a complicated trip planned and you don’t want to confuse 6 PM with 6 AM. Or if you’re an administrator or forensics analyst and you’re looking at a whole lot of timestamped entries at a time. Or if you…

So here was my suggested methodology. Nothing new, really – I picked up the idea and elements of the script by browsing the forums. The idea is that you run the batch script below from a domain login script. Since you can assign different login scripts to different users, you can easily create a script for each region in which you have users.

The script below makes use of the ver command, which prints the full Windows version string. For a full listing of Windows version numbers, check this out: http://www.computerhope.com/whow.htm .

Any comments or thoughts are appreciated 🙂

The Script

::Checks whether the machine windows XP or 7 and applies settings accordingly.
::WARNING – this would also apply to servers if you let it.

@echo off

:: Check if your machine is Windows XP
ver | find "5.1.2600"
if not ERRORLEVEL 1 goto winXP

:: Checking if your machine is Windows 7
ver | find "6.1.7600"
if not ERRORLEVEL 1 goto win7

::Vital – prevents this script from executing on servers…
goto :eof

:win7
REG add "HKCU\Control Panel\International" /v sShortTime /t REG_SZ /d "HH:mm tt" /f
REG add "HKCU\Control Panel\International" /v sYearMonth /t REG_SZ /d "MMMM, yyyy" /f

:winXP
REG add "HKCU\Control Panel\International" /v iMeasure /t REG_SZ /d "0" /f
REG add "HKCU\Control Panel\International" /v sCurrency /t REG_SZ /d "CHF" /f
REG add "HKCU\Control Panel\International" /v sLongDate /t REG_SZ /d "dddd, dd MMMM, yyyy" /f
REG add "HKCU\Control Panel\International" /v sShortDate /t REG_SZ /d "dd/MM/yy" /f
REG add "HKCU\Control Panel\International" /v sTimeFormat /t REG_SZ /d "HH:mm:ss" /f

Network Access Control with a Juniper switch and RADIUS

For the past two to three weeks, I’ve been testing out an EX4200 switch on loan to us from Juniper. It’s been pretty damn sweet – that thing has a lot of really great monitoring and prevention features.

One thing that’s been kicking my ass, though, is setting up the switch to do NAC using 802.1X. It’s been a combination of things, really: the learning curve for the EX4200 (minimal as it is), the RADIUS implementation on Windows 2008 (IAS doesn’t exist anymore; it’s now NPS), incompatible packet formats… All in all, I’ve had configuration woes.

So here’s an attempt at turning this frown upside-down. I’m writing myself a guideline from scratch, in hopes that it’ll help me get it right! Writing’s good for that; to me, there’s no better cure for confusion than having to explain it to someone else.

The environment

Here’s a suggestion of the equipment you should use to get this right:

  • A good amount of room and light – you’re going to be hosting a lot of crap for what could be many days
  • 1 x power outlet, 1 x multi-plug. The multi-plug should have 5 plugs minimum, 7 plugs to be comfortable, and would preferably have an on-off switch. Make sure they’re grounded! Blowing up expensive switches is ill-advised, and might get you in trouble with your Finance department.
  • 1 x Juniper EX4200 switch (or equivalent). Should come with a serial cable.
  • 3 CAT5 cables or equivalent (ethernet, not crossover; if you don’t know the difference, you might want to consider finding that out first… :-D)
  • 1 x  linux workstation with a serial port, gtkterm, wireshark and a hypervisor (VirtualBox is free, easy, and yields good performance)
    • 1 x linux VM. radiusd will be installed on it
    • 1 x windows 2008 VM. Certificate services will be installed on it (optional)
  • 1 x linux workstation, with the FreeRADIUS client utilities (freeradius-utils package in ubuntu) installed.
  • 1 x Windows XP workstation (or Windows 7, if you prefer. For Pete’s sake, don’t talk to me about Vista)
  • 1 x laptop (or tablet) with wireless access to the ‘net – for note-taking and frequent research
Initial setup
Contrary to the Juniper setup guide, I don’t tend to start by mounting my equipment on the rack. The average server room is loud, cold, dark, and has entirely too little room to be faffing about in it trying to configure your new equipment. Seriously, you’re more likely to damage yourself and your existing equipment than anything.

Do pick somewhere roomy, dry, well-lit and safe from danger. A good example of this is your office. If you have one. Perhaps an infrequently used conference room. A corner table in the cafeteria is not an option, regardless of how temptingly close it is to the coffee machine. And, once again, remember: it might take you a few days to get this right, so maximize your chances.
Step One: configuring your admin workstation
This sounds silly. However, if you don’t have the right tools for diagnosis, you’re going to waste a whole lot of effort for no reason; at the very least, you should peruse this section to make sure you’ve got the ideal setup.
My admin workstation is a dual-core 32-bit Ubuntu box with 4 GB of RAM. Not particularly fast, but it has enough space and memory to host a VM without dying a horrible, rattling death. The idea is to run your RADIUS server on a VM so that you can sniff traffic on it without having to set up port mirroring; additionally, you want to be able to snapshot your RADIUS server in order to try different permutations of your configs, or revert to an earlier state if you’ve screwed things up.
Another important thing is that your admin workstation must have a serial port and terminal. You can configure the switch without it, in all probability; be that as it may, you’ll want to at least be able to connect to the serial port so that you can get the switch’s status when you shut down for the day.
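Incidentally, gtkterm isn’t mandatory; assuming the usual Juniper console settings (9600 baud, 8N1), plain old screen does the trick:

# Attach to the switch's console on the first serial port at 9600 baud.
screen /dev/ttyS0 9600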
Step Two: the physical environment, VM, and switch
Once your admin workstation is ready, spend a little time prepping the environment. Get the multi-plug connected to the outlet and physically accessible at all times (you’ll find that you have to power-cycle the switch a lot). Do not plug the switch in yet! Place your admin workstation close to the switch, and plug in the serial cable and launch your terminal, so that when then switch turns on you can monitor its status as it boots up.
Before you get started on the switch, I recommend you install the base O/S of your RADIUS VM. Set up a barebones distro (I use ubuntu server LTS) with SSH on it, to start. When configuring the virtual hardware, make sure you have enough resources dedicated to it and, most importantly, set it to bridge the ethernet connection – don’t use NAT. The benefit of setting up the VM first is that while the O/S is installing you can focus on configuring the switch.
While system files are being copied to the VM, plug in your switch – it’ll start booting the minute it’s got power. Out of the box, it’s in initial setup mode; if you’ve bought it second-hand or inherited it, you’ll probably have to reset the factory settings; here’s how you do that:
  1. There are two buttons on the front panel of the switch – one to switch options (the menu button), and one to enter options (the enter button). Hit the menu button until you get to the Maintenance Menu, then press the enter button
  2. Within the maintenance menu, hit the menu button until you get to Reset Factory Settings, then press the enter button
  3. Press the enter button to confirm; the switch will display a message indicating that it’s resetting everything to factory settings.
These switches are configurable via two interfaces: the CLI interface (serial port), or the J-Web interface (http). I’ve found that there is really no benefit to using the CLI for the initial setup, so I’m going to explain how it works the J-Web way:
  1. Press the menu button until you get to Maintenance Menu, then press enter
  2. Press the menu button until you get to EZSetup, then press enter
  3. Plug your admin workstation into the first port of the switch in the front panel (port 0) – note that this differs on other models
  4. The manual suggests that the switch will automatically attribute an IP addy to your workstation via DHCP – that’s a bunch of hooey. You’ll need to configure a static IP address. By default, the network setup for the switch is 192.168.1.0/24, so you’ll probably want to assign 192.168.1.2 to your admin box (see the one-liner just after this list).
  5. Open a browser window to 192.168.1.1 and follow the initial setup instructions. If you follow the default options, you generally set up a default management VLAN of 0, and opt for in-band management. This works fine for simple configs – adapt your approach to your infrastructure if this isn’t the case, obviously.
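The static IP from item 4, in a nutshell; the interface name is an assumption, so substitute whatever your admin box actually uses:

# Give the admin workstation a static address on the switch's default subnet.
sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up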
This is pretty much where I’ve found the manual ceases to be useful. Yippee. So here are a few notes I’ve collected over the past few days:
A couple of helpful things to know about your EX4200
The first thing to know is that the web interface is pure gold. We use a lot of Cisco Catalyst switches – which are very powerful, of course, but the web interface really feels like an afterthought. The J-Web interface is badass. Here are just a few of the things I really like about it:
  • Via the web interface, you can assign MAC addresses to different ports via the Port Security section. You can also ‘trust DHCP’; didn’t get to play with that but presumably, you’d be able to initialize your switch by ticking this option, and then un-ticking it afterwards. (Note, of course, that MAC address filtering is not the safest option for NAC, since it is possible to spoof a MAC address. However, it will prevent your users from plugging in their personal computers and potentially infecting the entire campus with a worm…)
  • You can set up port mirroring on multiple ports, allowing you to perform specific analyses or even to ‘load balance’ your traffic sniffing.
  • Ever gotten an ARPwatch alert and not known where the hell a rogue computer’s connecting from? The J-Web interface allows you to see the switching table information; you can therefore see what port a rogue MAC address is on.
But I digress – let’s focus on a few things you need to know about the switch in order to understand how to configure it for 802.1X:
  • You apply 802.1X at the port level. This is to say that some of the ports of your switch can have 802.1X enabled, and some not. At first I thought, “crap – if I have to apply it to each port one by one, it’s going to take hours.” Not so: you can select multiple ports at the same time and enable 802.1X to your selection.
  • You apply 802.1X by “Setting the 802.1X profile”. You disable it by selecting one or many ports and clicking on “Delete”. Search me why they didn’t use a consistent vocabulary – but hey, it works.
  • Before you can set the 802.1X profile, you have to set the port’s role. This is analogous to Cisco’s smart ports; you essentially set up a profile for the device to which the port is connected. You can define your own, but there are a few pre-defined roles such as ‘default’, ‘desktop’, ‘switch’ or ‘phone’. If you don’t apply a port role, you simply can’t set the profile. Addendum: you can leave the RADIUS server’s role to none or default.
  • How does 802.1X work? God, I’ve read a gazillion definitions, some very academic, others very high-level; and quite frankly, I’ve found that all of them lack either clarity or detail. I finally found a decent overview in an Avaya support article (click here for the full document), and the wikipedia section on the authentication progression helped as well. The short version: the supplicant (your workstation) talks EAPOL to the authenticator (the switch), which relays the EAP exchange to the authentication server (the RADIUS box) and only opens the port to normal traffic once authentication succeeds. If anyone out there has a better explanation than that, please let me know!

At this point, you’re pretty much ready to attack the ‘hard part’ of the problem, which is getting the authenticator (switch), supplicant (workstation) and the authentication server (RADIUS box) talking to one another.

Step Three: getting 802.1X up and running
The first thing you’ll need to do is configure the RADIUS server. Your best bet is to go with Linux; I know that Windows NPS is tempting! It’s easy to perform the install, it integrates with all the other components in a windows network, it allows you to bind it to AD effortlessly, it provides more useful features than a swiss army knife. Yackity yackity. Linux might not be the easiest to install, but it’ll work once it’s installed; and it’ll continue to work for a long, long time. The last bloody thing you need is for your entire campus to go tits up every second wednesday of the month, if you know what I mean.
At this point, your VM is probably ready to be configured. Let’s get started with a basic configuration – one that will allow users to authenticate using user creds present in the radius users config file:
  1. Install RADIUS: sudo apt-get install freeradius
  2. Navigate to the /etc/freeradius directory (you’ll need to be logged in as root to do this)
  3. Edit the users file; add a user with only the Cleartext-Password attribute set (there should be examples of this in the file)
  4. Edit the clients file; set the RADIUS secret of the localhost (eventually, we’ll have to specify one for the juniper switch)
  5. Restart freeradius: /etc/init.d/freeradius restart
  6. Test freeradius: radtest <user> <password> localhost 10 <secret>
  7. If you have trouble connecting, I suggest you stop freeradius, then start it with -X. Consult this link for more information on debugging: http://wiki.freeradius.org/Basic_configuration_HOWTO

Note: stupidly, I had added a user to the users file with the same name as a user in my passwd file. By default, freeradius does have PAP enabled, and it does have priority over the users file.
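To make steps 3 to 6 concrete, here’s a minimal sketch of what I mean; the user name and password are placeholders, and ‘testing123’ is simply the secret that ships with the stock localhost entry in the clients file, so use whatever yours says:

# Add a hypothetical test user to the users file (placeholder credentials):
echo 'testuser Cleartext-Password := "testpass"' | sudo tee -a /etc/freeradius/users

# Restart freeradius and authenticate against it locally:
sudo /etc/init.d/freeradius restart
radtest testuser testpass localhost 10 testing123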

Then, you should make the switch aware of the RADIUS server – in other words, configure the Authenticator to interface with the Authentication Server. To do this, follow these steps:
  1. Log onto the J-Web interface
  2. Go to the Configuration tab
  3. Go to the 802.1X section
  4. Click on the RADIUS Servers button toward the top right of the page, and add a new server. The terminology is a bit confusing here, so make sure that you put the RADIUS server IP in the destination address field, and the switch IP in the source address field. Also, make sure you specify the correct port (1812). You’ll need to fill all the fields – the switch will happily take blanks but I’m pretty sure it mucks up the transmission.

Remember that the switch’s role as Authenticator will be to take the EAPOL frames coming from the XP workstation, repackage the EAP exchange into RADIUS packets and pass it on to the RADIUS box.

Also remember that in this case, the only machine that will be directly communicating with the RADIUS server is the juniper switch. A common mistake is to think that all hosts will need an entry in the clients file of freeradius – in fact, the only device that does is the juniper switch.
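Concretely, the only client entry you should need to add looks something like this; the name, IP and secret are placeholders, so match them to what you typed into J-Web:

# Declare the EX4200 as a RADIUS client (placeholder IP and secret), then restart.
sudo tee -a /etc/freeradius/clients.conf <<'EOF'
client ex4200 {
        ipaddr = 192.168.1.1
        secret = SuperSecretSharedKey
}
EOF
sudo /etc/init.d/freeradius restart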

Step four: connecting an XP machine to your switch

You want to start by making sure that your XP box can connect without the additional mumbo jumbo – without preemptively setting any 802.1X profiles for the port, plug your XP machine in and ping the switch (presumably, you’re going to have to set up a static IP address before you’re able to connect). You should proceed only if you’re able to hit the switch successfully at this point; at the risk of stating the obvious, your workstation isn’t going to automagically fix itself once you’ve got the whole thing configured.

Next, you should make sure that your XP box is running the necessary service to talk 802.1X to the switch. Once upon a time (in pre-SP3 versions of XP), this meant starting the Wireless Zero config service – which, you know, makes oh-so-much sense when you’re trying to set up a wired connection. As of SP3, the service you’re looking for is Wired AutoConfig – make sure it’s set to Automatic startup and is, in fact, started now. Navigate to Network Connections (Start > Settings > Network Connections), right-click on the wired NIC and hit Properties. You should see at least three tabs, one of which is ‘Authentication’; if you don’t see the Authentication tab, re-read the beginning of this paragraph and slap yourself on the wrist – you’re skipping steps.

In the Authentication tab, enable 802.1X authentication, set it to use Protected EAP, and hit the Settings button. Disable validation of the server cert (though you’re obviously going to want to revisit that one later to be safer), and set your authentication method to be MD5. Click on Configure and make sure that your machine isn’t using your windows creds to authenticate, though this may be the case later. Finally, make sure that Fast Reconnect is enabled and the two other options are disabled (once again – you’re going to have to revisit that later), and hit OK. Note that you should still be able to ping the switch; what we’ve done is configure an additional layer which is only used if it is needed. This is practical if a lot of your users have laptops.

This should be a decent config for now – very simple, starting easy. I find that it’s easier to start simple and work toward the more complicated configs incrementally; it reduces chances of getting a composite problem that isn’t easily fixed with the flick of a switch.

Returning where we left off in step 3, we’re now going to assign an 802.1X profile to the port on which your XP box is connected. Return to the J-Web interface, and navigate to Configure > Security > 802.1X. Click on your XP box port, then click Edit > Apply 802.1X profile. That will enable 802.1X security on the port. Note that you can select as many ports as you like and apply 802.1X to them in bulk.

Look at your XP box, now: you should get a bubble appearing over your network connection, indicating that you have to specify additional information to connect. If you click on it, you’re prompted for a user name and password. Great success! You’ve just set up RADIUS authentication.

Step five: expanding your horizons

Now that you’ve configured 802.1X on your wired network, you’re probably thinking “great, so now that’s done. Whatever happens, I don’t have to worry about network access control ever again.” Right?

WRONG!

You’re just getting started. This is pretty much the tip of the iceberg – iteration one of dozens, if not hundreds, of iterations to get your systems secure.  Let me show you what I’m talking about:

Your RADIUS server


Right now, your RADIUS server is sitting on your network happily answering auth requests. You need to think about locking that baby down; for one thing, it should only accept requests from the switch – this can and should be done using ACL’s on the juniper and on the RADIUS box.
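Here’s a bare-bones sketch of that lockdown with iptables on the RADIUS box itself; the switch IP is an assumption, and you’ll want to fold this into whatever ruleset you already have:

# Accept RADIUS authentication traffic from the switch only; drop everything else on that port.
sudo iptables -A INPUT -p udp --dport 1812 -s 192.168.1.1 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 1812 -j DROP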

Monitoring


You should at least be logging failed requests: there aren’t a gazillion reasons why those should be occurring. Someone on your team might be (re)configuring a workstation; or maybe an end-user is trying something he/she shouldn’t be doing, like hooking up a personal device to the network or messing with the network config. Maybe someone’s trying to brute-force the connection… Use your imagination.
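Even a crude check on the RADIUS box gets you started; the path and message below are what a stock Debian/Ubuntu freeradius install writes (assuming auth logging is turned on in radiusd.conf), so double-check yours:

# Quick and dirty: list failed authentication attempts from the freeradius log.
grep "Login incorrect" /var/log/freeradius/radius.log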

Monitoring’s nice, alerting is better. What system are you going to be using? How are your alerts delivered? Be sure to use the appropriate monitoring tool, make sure you’re not leaking information or generating unnecessary traffic, and spend time testing and pruning the data your system is generating. Data is not information if there’s too much shit to read! You want as few false positives as possible (don’t we all…) – so think about what scenarios you should truly be following up on and cut down on the noise.

Encryption and validation

What we’ve configured up above is insecure:

  1. Packets are not encrypted 
  2. Authentication is done via MD5 challenge – MD5 has some vulnerabilities (c.f. this paper).
  3. If you don’t choose a good RADIUS shared secret, it can be crackable. In fact, RADIUS itself does possess a number of vulnerabilities (c.f. Joshua Hill’s article on untruth.org) which, if you’re not careful, could lead to DOS conditions or password compromise (and if your RADIUS box is relying on Active Directory for authentication, the extent of the damage could be consequential).
Quite a few of the vulnerabilities mentioned above can be circumvented by restricting access to the RADIUS server from the network (as mentioned above), getting relevant alerts when a new device is plugged in, and using a different authentication mechanism (MS-CHAP-V2 might be an option, but it’s definitely not 100% secure either as you can see).

LDAP

Speaking of MS-CHAP-V2… Have you considered using LDAP? [Edit – was going to add some derisive comment pointed at a company we all know and love. But that’d be below me… Heheh, right.]

If you’re using Active Directory in your infrastructure, you can of course hook your RADIUS server up to that. An added benefit of this is that you’ll be disabling both the user’s access to services and the network from a centralized location. The disadvantage is that, once again, your network access is relying on a centralized AAA system that has been known to fail. Plus, there’s a good chance that if you’re able to compromise credentials via the RADIUS server, you’d pretty much have the keys to the kingdom. If you’re going to use LDAP with RADIUS, you should at least be thinking about using TLS for encryption (which requires setting up a Cert Authority).

Quarantine Checks

You probably noticed the ‘Enable quarantine checks’ checkbox when configuring your client in step four. Sounds enticing, doesn’t it? Essentially, what it means is that your client would accept participating in Statement of Health checks; of course, it does mean that you have to set up a Windows Network Policy Server. If you’re doing that, then you might as well set up RADIUS on the same server. c.f. step three to see why I think this is a bad idea…

Multiple connections on a single port, and special ports


One of 802.1X’s potential pitfalls is that of multiple machines connected to a single port. What happens if somebody comes in with a switch of their own? They could plug it into their ethernet connection and put a corporate machine on it as well as an external machine. If the switch isn’t configured correctly, the corporate machine could answer the RADIUS auth requests and the external machine would have all the access to the network it needs.

The EX4200 has provisioned for this possibility, and can be locked down in the following two ways:

  1. You can prohibit a port from having multiple devices connected to it
  2. You can configure the switch to require that all devices connected to the port need to authenticate
  3. You can use MAC address filtering
Nope, it’s not a mistake – I indicated that there were two ways to lock down the switch and listed three. That’s because the third isn’t a viable option: I’m only stating it so I can shoot it down right away, because it’s so ridiculously easy to circumvent.
I think that further research into the implementation of option two is necessary to be able to sign off on it fully… But that, like many other elements of this article, is for another time.
Conclusions
I hope that you found this little intro to 802.1X and Juniper useful. It’s meant as an introduction and is  by no means comprehensive, but will hopefully get you thinking about the various infrastructural and security aspects that you need to consider when implementing such a mechanism.
Frightfully sorry about some of the gaps in explanations – if you write to me or comment on my post, I will do what I can to answer any questions!

Customizing your list’s look & feel in Sharepoint 2010

I’ve been working on a Sharepoint 2010 intranet for a client; a real pain in the arse was setting up a feed with an image in the web part, customized ‘Add an item’ text and the ability to customize the look and feel via CSS. When you add the list as a web part to a sharepoint page, you’re unable to perform these customizations using a WYSIWYG editor or web part properties; however, you can specify an XSL template. Here’s mine:

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0" xmlns:ddwrt2="urn:frontpage:internal">
  <xsl:include href="/_layouts/xsl/main.xsl"/>
  <xsl:include href="/_layouts/xsl/internal.xsl"/>

  <xsl:template match="/" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime">
    <div class="webfeed">
      <img src="/pages/SiteAssets/myimage.png" style="float: right;" class="boximage"></img>

      <xsl:for-each select="/dsQueryResponse/Rows/Row">
        <xsl:if test="string-length(@Title) &gt; 0">
          <a href="/Lists/MyList/DispForm.aspx?ID={@ID}">
            <xsl:value-of select="@Title" /></a><br/><br/>
        </xsl:if>
      </xsl:for-each>

      <xsl:call-template name="Freeform">
        <xsl:with-param name="AddNewText">Add an item</xsl:with-param>
        <xsl:with-param name="ID">
          <xsl:choose>
            <xsl:when test="List/@TemplateType='104'">idHomePageNewAnnouncement</xsl:when>
            <xsl:when test="List/@TemplateType='101'">idHomePageNewDocument</xsl:when>
            <xsl:when test="List/@TemplateType='103'">idHomePageNewLink</xsl:when>
            <xsl:when test="List/@TemplateType='106'">idHomePageNewEvent</xsl:when>
            <xsl:when test="List/@TemplateType='119'">idHomePageNewWikiPage</xsl:when>
            <xsl:otherwise>idHomePageNewItem</xsl:otherwise>
          </xsl:choose>
        </xsl:with-param>
      </xsl:call-template>

    </div>
  </xsl:template>
</xsl:stylesheet>


Normally, I *hate* color-coding anything, so I’ll just call out the relevant parts of the template by name; there are only a few of them, so it should be pretty easy to read.


I’m not going to go through the entire thing — I think the code is pretty self-explanatory (not that you should have been able to cook this shit up in your head or anything… But once you see it in front of you, it’s pretty easy to understand what each part is). However, I’ll walk through the highlights. If there’s anything you see that you don’t understand, or isn’t clear, feel free to comment!  

A customized ‘Add’ button:

Notice that one of the first things I do is include some of sharepoint’s own XSL templates (the two xsl:include lines at the top). Then, I add the ‘call-template’ section at the end of my XSL. This renders the line, the + icon, and the Add text. The customizable bit is the AddNewText parameter (‘Add an item’) — what the code does is call a template from sharepoint’s own XSL with two parameters. The first parameter is the text you want to customize; the second is a value that is computed based on the list’s type — if it’s an announcement, it’s set to idHomePageNewAnnouncement, if it’s a document it uses idHomePageNewDocument, and so forth. I generally save my images and resources in SiteAssets. Does anybody have any counter-indications there? I found it useful to make sure versioning is enabled for the repo; this way, if any changes to my resources mess up the layout or anything, I can revert to a previous version.

An embedded image:
The image itself was pretty easy; notice that in order to embed it nicely in the text, I just used float: right in the style attribute.
Additional styling:
Let’s face it: faffing about with XSL is fun, but not very practical for a web designer. I’m particularly sucky at aesthetics, so I want to make sure that I can pass off as much of that kind of work to the right hands as possible and make it as easy as I can.
Note the presence of the DIV at the beginning of the template: it has a class attached to it (webfeed) so that it can be customized via CSS. Same goes for the image described above (boximage). This means that you can have a CSS file in your SiteAssets repository in which you can put any formatting specifications for both the links of the feed and the image of the feed; note that in order to apply this CSS, you’ll have to edit the portal’s master page(s) to reference it.
The benefit of having a separate CSS file from the site theme CSS is that it can be made accessible to your team’s web designer without granting the designer any particular permissions to the sharepoint server. The CSS file will be versioned, so you can quickly revert if something gets screwed up in the layout or colors, and if the file is accidentally deleted it can be restored from the Recycle Bin (rather than permanently deleted from a NetBIOS share…)
CFN!

Addendum: I’ve also been asked to modify the behavior of the link; instead of opening a new page, it’s supposed to open up Sharepoint 2010’s new modal pop-up dialog box. What at first seemed really annoying and complicated turned out to be quite easy. Substitute this line:

<a href="/Lists/MyList/DispForm.aspx?ID={@ID}">
            <xsl:value-of select="@Title" /></a><br/><br/>

With these lines:

<xsl:variable name="formLink">/Lists/MyList/DispForm.aspx?ID=</xsl:variable>

<a>
  <xsl:attribute name="href">
    <xsl:value-of select="$formLink" />
    <xsl:value-of select="@ID" />
  </xsl:attribute>

  <xsl:attribute name="onClick">
    <xsl:text>javascript:NewItem2(event, &quot;</xsl:text>
    <xsl:value-of select="$formLink" />
    <xsl:value-of select="@ID" />
    <xsl:text>&amp;RootFolder=&quot;);javascript:return false;</xsl:text>
  </xsl:attribute>

  <xsl:value-of select="@Title" /></a><br/><br/>

What does this do? First, it assigns the URL to a variable, $formLink. Then, it renders that variable in the ‘href’ attribute of the link. Finally, it calls a JavaScript function of Sharepoint’s (NewItem2) that opens up a modal dialog box and renders the content of the URL specified in $formLink. Presto! Instant Sharepoint 2010 panache.

Run your linux applications remotely over SSH

This is a *very* short article on using x-win with SSH — mainly because there’s a ton of articles out there on the subject already. I found that this worked with both cygwin and ubuntu… If you’re using ubuntu as both the client and the server, you won’t need to export the DISPLAY variable…

Server: the machine whose programs you want to run; could be a server on a rack
Client: the machine on which you want to see the programs; could be your workstation

from your client:
1) start X-Win (if cygwin)
2) use xhost to grant access to the x-win server: "xhost +[name]", where name can be a host or a user.
3) use ssh to connect to the server: "ssh -X [user]@[servername]"

from the server via ssh:
1) set the display (this assumes you're using bash): "export DISPLAY=[client ip address]:0.0"
2) test using xclock: "xclock &"

Once you're done, I would recommend that you do an "xhost -[name]" from your client again.
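Putting it all together, a typical session looks something like this; the host and user names are placeholders, obviously:

# On the client: allow the server to draw on your display, then connect with X forwarding.
xhost +myserver.example.com
ssh -X alice@myserver.example.com

# On the server, inside the SSH session (only needed if DISPLAY isn't already set for you):
export DISPLAY=192.168.1.50:0.0
xclock &

# Back on the client, once you're done:
xhost -myserver.example.com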

Setting up a bridge for your headless VirtualBox machine


Last week, I wrote an article on how to set up a bridge for QEMU, which is quite practical for when you want to set up servers quickly and easily. QEMU has its drawbacks, however, when it comes to using graphic interfaces, so I tend to prefer using VirtualBox for my day-to-day virtualization needs.
I’ve been working on a virtual machine for teaching purposes, lately, and have determined that a headless VirtualBox VM is the way to go. Here are a few notes that I’ve taken on setting up a headless VM on a bridged network – this allows the person running the VM to start the machine without starting up a console, and to hit the VM’s services from the host machine.
Before I go on, though, here are the URL’s I use as a reference:
Good howtoforge by Falko Timme:
http://www.howtoforge.com/vboxheadless-running-virtual-machines-with-virtualbox-2.0-on-a-headless-ubuntu-8.04-server
Setting up a bridge, according to the VirtualBox wiki:
http://www.virtualbox.org/wiki/Advanced_Networking_Linux
These cover (with a fair amount of detail, I might add) the topics of setting up VirtualBox, creating a machine, and creating a fully functional bridge with DHCP etc etc. That’s something I’m not going to cover here – namely because it would be a pale copy of someone else’s work.  I’m writing about setting yourself up with something that you can run in a classroom or as a sandbox for short-term activities.  Hope this helps.

Setting up VirtualBox 2.1 (or later)
At the time this article is written, Ubuntu Hardy Heron is the current LTS and VirtualBox 2.1 is the latest version. I will therefore be writing under the assumption that you are using these versions — please remember to change the commands according to your distro / version of VirtualBox!
First, you have to add VirtualBox’s repository and public key to your APT sources. Add the following line to your /etc/apt/sources.list file — you can tack it on to the end:
deb http://download.virtualbox.org/virtualbox/debian hardy non-free
You’ll also have to download and set up the key. You can do this using wget and apt-key:
wget http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc
sudo apt-key add sun_vbox.asc
You can then refresh your package lists and retrieve virtualbox straight from apt-get:
sudo apt-get update
sudo apt-get install virtualbox-2.1
As opposed to VirtualBox open source edition (which can be run using the command ‘virtualbox’), VirtualBox 2.1 is run using ‘VirtualBox’ (case-sensitive, of course). I simply set up my VM using the GUI.

Setting up the bridge
As with qemu, you have to create a virtual network interface (tap0, for instance), give it an IP address, and enable IP forwarding on your host machine. I used the first of the scripts below to set myself up. I then needed to run VirtualBox once again to modify the settings: I added a Host Interface NIC to my machine’s configuration, which pointed to tap0. I then ran my machine, and tested my config by having my guest ping my website, then my host and vice-versa.
I shutdown my guest, and tore down my virtual network using the second script below.

#!/bin/bash

# Script to set up bridging for your virtualbox machines. When setting up your VM, add an extra network interface of type "Host Network", called tap0. You can use this script as the network "startup script".

# Create a TAP interface, tap0, to be used for bridging; set the owner of that interface to the current user (hence the whoami command):
sudo tunctl -t tap0 -u `whoami`

# Create a bridge, br0, and add the tap interface to it. DO NOT ADD THE PHYSICAL INTERFACE: you will kill your network connection if you do that 🙂
sudo brctl addbr br0
sudo brctl addif br0 tap0

# Bring up the bridge and tap interfaces:
sudo ifconfig br0 10.1.1.1/24 up
sudo ifconfig tap0 10.1.1.2/24 up

# Turn on IP forwarding:
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

# Add a rule to forward traffic over to eth0:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE


#!/bin/bash
# Script to tear down bridging for your virtualbox machines. You can use this script as the network "shutdown script".

# Flush the traffic forwarding rules:
sudo iptables -t nat -F

# Disable IP forwarding
sudo sh -c 'echo 0 > /proc/sys/net/ipv4/ip_forward'

# bring down the bridge and tap interfaces
sudo ifconfig br0 down
sudo ifconfig tap0 down

# kill the bridge
sudo brctl delbr br0

# kill the tap interface
sudo tunctl -d tap0


Spreadin’ the love

Once the guest machine was configured and connected, I powered it down.  At this point, the guest is ready to transfer to a DVD or to a tarball.  I simply copied the machine’s config folder (~/.VirtualBox/Machines/<machine name>) and Virtual Disk (~/.VirtualBox/VDI/<machine name>.vdi) to a DVD. Before using them, of course, one needs to copy them to the correct locations on one’s disk. The VDI file will need to be registered using the Virtual Disk Manager of VirtualBox (or the equivalent VBoxManage command) and the machine will need to be registered using the following command:
VBoxManage registervm Machines/<machine name>/<machine name>.xml
I also copied the scripts to the DVD; I tacked on the following line at the end of the startup script:
VBoxManage startvm <machine name> -type vrdp
And this line at the beginning of the shutdown script:
VBoxManage controlvm <machine name> poweroff

Adding a new virtual hard drive to a Ubuntu guest in VirtualBox

The following is a simple procedure allowing you to create a new virtual hard drive for your ubuntu guest OS and assign it to a mount point.

As you might already know, VDI’s are a pain in the ass to resize — not impossible, but certainly a pain.  With an ubuntu guest OS, the simplest thing to do is to create a new VDI and mount it!

  1. Create the disk and attach it to your Virtual Machine
    1. Fire up Virtual Box and open the Virtual Media Manager (File > Virtual Media Manager)
    2. Follow the instructions to create a new VDI file
    3. Open the settings of the virtual machine to which you’ll add your new drive.  Under the Hard Disks tab, add the drive that you’ve just created.
  2. Fire up your virtual machine
  3. Format and mount your new hard drive
    1. Once logged into the operating system, make sure that you have gparted installed (sudo apt-get install gparted if you don’t)
      1. Open up gparted ('sudo gparted' from a command line)
      2. The system should detect that you have a new drive (probably /dev/sdb).  Select that drive and give it an msdos partition table (via the Device menu; when prompted, select msdos)
      3. Create a new partition of type ext3 (New…, then select primary partition and, when prompted, select ext3)
      4. Hit Apply — gparted will take care of partitioning and formatting your drive
    2. You should now be able to use your hard drive simply by mounting it (run 'mkdir /media/my_new_drive; mount /dev/sdb1 /media/my_new_drive' as root).  However, for a more permanent setup, you'll need to modify your /etc/fstab file
      1. Open /etc/fstab with your favorite editor (be sure to run the editor as root)
      2. Create a new line (I usually copy the line for my root partition and modify as necessary).  Make sure to specify your new hard drive as the device (probably /dev/sdb1) and some empty directory as the mount point (such as /opt).
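For instance, the whole thing could be done from a shell like this; the device name and mount point are assumptions, so match them to your setup:

# Create the mount point, append an fstab entry for the new partition, and mount it.
sudo mkdir -p /opt
echo '/dev/sdb1  /opt  ext3  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount -a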