Better netflow visualization with code_swarm coolness!

Howdy all,

In my last post, I may have mentioned codeswarm, a nifty tool for visualizing how frequently a software project gets updated over time. Since it’s an open-source project, I figured it was worth having a look at the code and seeing if there are other uses for it.

If you check out the Google Code page, you’ll notice that the project isn’t terribly active – the last upload dates back to May 2009. But hey, it does what it’s supposed to do and it’s pretty straightforward.

Reading through the source files, in fact, confirms that using the tool is super simple: you set up an XML file that contains the data to be visualized, you run Ant, and you let the program do the rest. The format of the sample data is equally simple: each event has a file name, a date, and an author.
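For reference, a minimal input file modeled on the project’s sample data would look something like this (file names, dates and authors here are made up; the dates are Unix timestamps in milliseconds):

```xml
<?xml version="1.0"?>
<file_events>
  <event filename="src/main.c" date="1275598278000" author="alice" />
  <event filename="src/util.c" date="1275601278000" author="bob" />
</file_events>
```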

So let’s see what other uses we could come up with. Here are a few ideas I thought might be cool:

  • What about adapting it to track your social media messages? First, if you’re following a lot of people, it would look wicked cool. Second, if you’re trying to prune your Follow list, that could be really practical for figuring out who’s the noisiest out there.
  • Sometimes when you’re trying to figure out bottlenecks in your traffic, it’s useful to have a decent visualization tool. Maybe this could be helpful!
  • Finally, you sometimes need a good way to track employee activities. Would this not be a kickass way to see who’s active on your network?

I decided to work on the second idea. I’m not looking to rework the code at this point, just to reuse it with a different purpose.


To pull this off, you’re going to need the following:

  • The codeswarm source code and Java, so that you can run the code on your system
  • Some netflow log files to test out
  • flow-tools, so that you can process said netflow log files
  • A scripting language so that you can process and parse the netflow traffic into XML. My language of choice was ruby, but it could be as simple as bash.

The netflow filter

Before we can parse the netflow statistics into the appropriate format, we need to know what we’ll be using and how to extract it. Here’s what I used: each IP endpoint should have its own line; the IP address maps to the “author” field (because that’s what is visible). The protocol and port will map to the filename field, and the octets in the flow will map to the weight field.

The following is the netflow report config file. You should save this in the codeswarm directory as netflow_report.config:

stat-report t1
 type ip-source/destination-address/ip-protocol/ip-tos/ip-source/destination-port
 scale 100
 output
  format ascii
  fields +first
stat-definition swarm
 report t1
 time-series 600
If you save some netflow data in data/input, you can test out your report by running this line:

flow-merge data/input/* | flow-report -s netflow_report.config -S swarm
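If the report runs, you should get one comma-separated record per flow. I won’t reproduce real data here, but the shape is roughly as follows (the values are made up, and the exact column order depends on the report type, so check the `# recn:` header line that flow-report prints at the top of its output):

```
# recn: first,ip-source-address,ip-destination-address,...
968697356,,,2021,80,6,1,1,40
```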

Parsing the netflow

If the report worked out correctly for you, the next logical step is to write the code that will create the XML file to be parsed by codeswarm. You’ll want to set your input directory (which we’d said would be data/input) and your output file (for instance, data/output.xml).

Here’s the source code for my grabData.rb file:

# Prepare netflow data for codeswarm.
$outputFilePath = "data/output.xml"
$outputFile = File.open($outputFilePath, "w")
$outputFile << "<?xml version=\"1.0\"?>\n"
$outputFile << "<file_events>\n"
# Grab the netflow information using flow-tools
$inputDirectory = "data/input"
$input = `flow-merge #{$inputDirectory}/* | flow-report -s netflow_report.config -S swarm`
# This is the part that gets a bit dicey. I believe that in order to properly visualize
# the traffic, we should add an entry for each party of the flow. That's exactly what we're
# going to do. The "author" in this case is going to be the IP address. The "filename" will
# be the protocol and port. The weight will be the octets.
$input_array = $input.split("\n")
# Remove the header record line from the report output
$input_array.grep(/recn/).each do |deleteme|
  $input_array.delete(deleteme)
end
$input_array.each do |line|
  next if line =~ /^#/ # skip any remaining comment lines
  fields = line.split(",")
  last = fields[0]
  source = fields[1]
  dest = fields[2]
  srcport = fields[3]
  dstport = fields[4]
  proto = fields[5]
  octets = fields[8].to_i / 1000
  $outputFile << " <event filename=\"#{proto}_#{srcport}\" date=\"#{last}\" author=\"#{source}\" weight=\"#{octets}\"/>\n"
  $outputFile << " <event filename=\"#{proto}_#{dstport}\" date=\"#{last}\" author=\"#{dest}\" weight=\"#{octets}\"/>\n"
end
$outputFile << "</file_events>"
$outputFile.close

And we’re done! This should generate a file called data/output.xml, which you can then use in your code swarm. You can either edit your data/sample.config file or copy it to a new file, then run ./

Reality Check

I was really excited when running my first doctored code swarm; unfortunately, though the code did work as expected, the performance was terrible. This was because the sample file that I used was rather large (over 10K entries) – probably considerably more than what the authors had expected for code repository check-ins. Also, I suspect that my somewhat flimsy graphics card is unable to handle realtime rendering of the animation, so I set up the config file to save each frame to a PNG so I could reconstitute the animation later. The ffmpeg syntax for reassembling the frames is:

ffmpeg -r 10 -b 1800 -i %03d.png test1800.mp4

Moreover, I believe my scale was off; I changed the number of milliseconds per frame to 1000 (1 frame, 1 second).
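For what it’s worth, the snapshot and timing settings I’m referring to live in the codeswarm config file; in my copy of the sample config, the relevant properties look something like this (double-check the property names against your own data/sample.config, since I’m quoting from memory):

```
MillisecondsPerFrame=1000
TakeSnapshots=true
SnapshotLocation=frames/code_swarm-#####.png
```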

The second rendering was much more interesting, but it did yield a heck of a lot of noise; let’s not forget that we’re working with hundreds, if not thousands, of IP addresses. However, if we do a little filtering we can probably make the animation significantly more readable.

All in all, this was a rather fun experience but a bit of a letdown. Codeswarm wasn’t meant to handle this high a volume of data, which makes things tricky, and less readable than I expected; if you play with your filters, you will definitely be able to see some interesting things, but if you’re looking for a means to visually suss out what’s happening on your entire network, you are bound to be disappointed. Next time, I hope to talk a bit about more appropriate real-time visualization tools for netflow and pcap files, maybe even cut some code of my own.

Quick analysis of a trojan targeting Swiss users


We’ve seen a couple of cases of this trojan hitting client computers lately; unfortunately, the security bulletin by the CYCO doesn’t have much yet in terms of information on IP addresses, domain names, or what else the trojan might be doing in the background, so I dusted off the old forensics toolkit and did a bit of digging.
Look at this bad boy! Innit unreal? Brilliant πŸ™‚ I knew this kind of stuff was around, but I must admit it’s the first time I’ve encountered ransomware this targeted…

My colleague confirmed that this was only happening on the user’s account – not the local admin account present on the computer. So the first thing we did was run Sysinternals’ Process Monitor to identify what was causing the screen to appear. Note that we use Deep Freeze on users’ computers and the machine was frozen at the time of the infection, so it was likely that what was running was persisted on the user’s drive. I really wish that we could freeze everything but the user’s Desktop, My Docs, and Favorites – however, that seems to royally piss off our users. Would have prevented this from happening though.  Anyway, moving on. If you know that the only location where this executable could possibly exist is the user’s drive, it’s easy to identify the culprit:

No big surprise there — it’s running in the user’s Temp folder. Unsurprisingly as well, the user’s Software\Microsoft\Windows NT\CurrentVersion\Winlogon registry key has been modified to point the shell to that upd executable – that’s easily sussed out by using regripper or regdump. With regripper, we even get a timestamp of when this was done, which will be useful for cross-referencing information later.
OK great, so now we know where this thing is – how did it get there?
It was a bit harder to figure out how the hell the trojan got on the user’s computer, I’ll admit. I used Web Historian at first to identify any suspicious sites. I don’t know about the rest of you out there, but my experience is that when malware shows up on users’ computers, it’s typically because they’ve been downloading something illegal or, er, carnal. However, when looking at the user’s web history no alarm bells were going off. All good, clean, unremarkable sites. I went as far as to investigate the user’s mail store to see if the machine could have gotten infected by email – nothing suspicious there either. USB keys would have left a trace in the registry but since the machine was frozen, I wouldn’t be able to figure out if a key was inserted at the time of the infection. I therefore switched tactics and ran a timeline analysis of the user drive using sleuthkit. That’s when I found this:

The same minute the executable was written, something was written to the Java cache. Coincidence? Yeah right. I took a look at the index file, guess what I found?

If you decompile the JAR using jad, you get something like this:

If you check out the domain and IP address written in the index file, you’ll see that the domain is registered to a Russian registrant; the IP address traces back to the domain, but is hosted in the Netherlands.
That’s all the JAR file seems to do. I haven’t messed around with the upd.exe file yet, will probably do so sometime soon. In the meantime, I hope that you found this entertaining πŸ˜€ Should I be looking at anything else? Let me know.

Network Access Control with a Juniper switch and RADIUS

For the past two to three weeks, I’ve been testing out an EX4200 switch on loan to us from Juniper. It’s been pretty damn sweet – that thing has a lot of really great monitoring and prevention features.

One thing that’s been kicking my ass, though, is setting up the switch to do NAC using 802.1X. It’s been a combination of things, really: the learning curve for the EX4200 (minimal as it is), the RADIUS implementation on Windows 2008 (IAS doesn’t exist anymore; it’s now NPS), incompatible packet formats… All in all, I’ve had configuration woes.

So here’s an attempt at turning this frown upside-down. I’m writing myself a guideline from scratch, in hopes that it’ll help me get it right! Writing’s good for that; to me, there’s no better cure for confusion than having to explain it to someone else.

The environment

Here’s a suggestion of the equipment you should use to get this right:

  • A good amount of room and light – you’re going to be hosting a lot of crap for what could be many days
  • 1 x power outlet, 1 x multi-plug. The multi-plug should have 5 plugs minimum, 7 plugs to be comfortable, and would preferably have an on-off switch. Make sure they’re grounded! Blowing up expensive switches is ill-advised, and might get you in trouble with your Finance department.
  • 1 x Juniper EX4200 switch (or equivalent). Should come with a serial cable.
  • 3 CAT5 cables or equivalent (ethernet, not crossover; if you don’t know the difference, you might want to consider finding that out first… :-D)
  • 1 x linux workstation with a serial port, gtkterm, wireshark and a hypervisor (VirtualBox is free, easy, and yields good performance)
    • 1 x linux VM. radiusd will be installed on it
    • 1 x windows 2008 VM. Certificate services will be installed on it (optional)
  • 1 x linux workstation, with the FreeRADIUS client utilities (freeradius-utils package in ubuntu) installed.
  • 1 x Windows XP workstation (or Windows 7, if you prefer. For Pete’s sake, don’t talk to me about Vista)
  • 1 x laptop (or tablet) with wireless access to the ‘net – for note-taking and frequent research
Initial setup
Contrary to the Juniper setup guide, I don’t tend to start by mounting my equipment on the rack. The average server room is loud, cold, dark, and has entirely too little room to be faffing about in it trying to configure your new equipment. Seriously, you’re more likely to damage yourself and your existing equipment than anything.

Do pick somewhere roomy, dry, well-lit and safe from danger. A good example of this is your office. If you have one. Perhaps an infrequently used conference room. A corner table in the cafeteria is not an option, regardless of how temptingly close it is to the coffee machine. And, once again, remember: it might take you a few days to get this right, so maximize your chances.
Step One: configuring your admin workstation
This sounds silly. However, if you don’t have the right tools for diagnosis, you’re going to waste a whole lot of effort for no reason; at the very least, you should peruse this section to make sure you’ve got the ideal setup.
My admin workstation is a dual-core 32-bit Ubuntu box with 4 GB of RAM. Not particularly fast, but it has enough space and memory to host a VM without dying a horrible, rattling death. The idea is to run your RADIUS server on a VM so that you can sniff traffic on it without having to set up port mirroring; additionally, you want to be able to snapshot your RADIUS server in order to try different permutations of your configs, or revert to an earlier state if you’ve screwed things up.
Another important thing is that your admin workstation must have a serial port and terminal. You can configure the switch without it, in all probability; be that as it may, you’ll want to at least be able to connect to the serial port so that you can get the switch’s status when you shut down for the day.
Step Two: the physical environment, VM, and switch
Once your admin workstation is ready, spend a little time prepping the environment. Get the multi-plug connected to the outlet and physically accessible at all times (you’ll find that you have to power-cycle the switch a lot). Do not plug the switch in yet! Place your admin workstation close to the switch, plug in the serial cable, and launch your terminal, so that when the switch turns on you can monitor its status as it boots up.
Before you get started on the switch, I recommend you install the base O/S of your RADIUS VM. Set up a barebones distro (I use ubuntu server LTS) with SSH on it, to start. When configuring the virtual hardware, make sure you have enough resources dedicated to it and, most importantly, set it to bridge the ethernet connection – don’t use NAT. The benefit of setting up the VM first is that while the O/S is installing you can focus on configuring the switch.
While system files are being copied to the VM, plug in your switch – it’ll start booting the minute it’s got power. Out of the box, it’s in initial setup mode; if you’ve bought it second-hand or inherited it, you’ll probably have to reset the factory settings; here’s how you do that:
  1. There are two buttons on the front panel of the switch – one to switch options (the menu button), and one to enter options (the enter button). Hit the menu button until you get to the Maintenance Menu, then press the enter button
  2. Within the maintenance menu, hit the menu button until you get to Reset Factory Settings, then press the enter button
  3. Press the enter button to confirm; the switch will display a message indicating that it’s resetting everything to factory settings.
These switches are configurable via two interfaces: the CLI interface (serial port), or the J-Web interface (http). I’ve found that there is really no benefit to using the CLI for the initial setup, so I’m going to explain how it works the J-Web way:
  1. Press the menu button until you get to Maintenance Menu, then press enter
  2. Press the menu button until you get to EZSetup, then press enter
  3. Plug your admin workstation into the first port of the switch in the front panel (port 0) – note that this differs on other models
  4. The manual suggests that the switch will automatically attribute an IP addy to your workstation via DHCP – that’s a bunch of hooey. You’ll need to configure a static IP address. The switch’s default network setup is fixed, so you’ll want to attribute a static address on the same subnet to your admin box.
  5. Open a browser window to the switch’s IP address and follow the initial setup instructions. If you follow the default options, you generally set up a default management VLAN of 0 and opt for in-band management. This works fine for simple configs – adapt your approach to your infrastructure if this isn’t the case, obviously.
This is pretty much where I’ve found the manual ceases to be useful. Yippee. So here are a few notes I’ve collected over the past few days:
A couple of helpful things to know about your EX4200
The first thing to know is that the web interface is pure gold. We use a lot of Cisco Catalyst switches – which are very powerful, of course, but the web interface really feels like an afterthought. The J-Web interface is badass. Here are just a few of the things I really like about it:
  • Via the web interface, you can assign MAC addresses to different ports via the Port Security section. You can also ‘trust DHCP’; didn’t get to play with that but presumably, you’d be able to initialize your switch by ticking this option, and then un-ticking it afterwards. (Note, of course, that MAC address filtering is not the safest option for NAC, since it is possible to spoof a MAC address. However, it will prevent your users from plugging in their personal computers and potentially infecting the entire campus with a worm…)
  • You can set up port mirroring on multiple ports, allowing you to perform specific analyses or even to ‘load balance’ your traffic sniffing.
  • Ever gotten an ARPwatch alert and not known where the hell a rogue computer’s connecting from? The J-Web interface allows you to see the switching table information; you can therefore see what port a rogue MAC address is on.
But I digress – let’s focus on a few things you need to know about the switch in order to understand how to configure it for 802.1X:
  • You apply 802.1X at the port level. This is to say that some of the ports of your switch can have 802.1X enabled, and some not. At first I thought, “crap – if I have to apply it to each port one by one, it’s going to take hours.” Not so: you can select multiple ports at the same time and enable 802.1X to your selection.
  • You apply 802.1X by “Setting the 802.1X profile”. You disable it by selecting one or many ports and clicking on “Delete”. Search me why they didn’t use a consistent vocabulary – but hey, it works.
  • Before you can set the 802.1X profile, you have to set the port’s role. This is analogous to Cisco’s smart ports; you essentially set up a profile for the device to which the port is connected. You can define your own, but there are a few pre-defined roles such as ‘default’, ‘desktop’, ‘switch’ or ‘phone’. If you don’t apply a port role, you simply can’t set the profile. Addendum: you can leave the RADIUS server’s role to none or default.
  • How does 802.1X work? God, I’ve read a gazillion definitions, some very academic, others very high-level; and quite frankly, I’ve found that all of them lack either clarity or detail. I finally found a decent overview in the following excerpt, which I’ve taken from an Avaya support article (click here for the full document). I’ll admit that the wikipedia section on the authentication progression helped, as well. If anyone out there has a better definition than the one below, please let me know:

At this point, you’re pretty much ready to attack the ‘hard part’ of the problem: getting the authenticator (switch), supplicant (workstation) and authentication server (RADIUS box) talking to one another.

Step Three: getting 802.1X up and running
The first thing you’ll need to do is configure the RADIUS server. Your best bet is to go with Linux; I know that Windows NPS is tempting! It’s easy to perform the install, it integrates with all the other components in a windows network, it allows you to bind it to AD effortlessly, it provides more useful features than a swiss army knife. Yackity yackity. Linux might not be the easiest to install, but it’ll work once it’s installed; and it’ll continue to work for a long, long time. The last bloody thing you need is for your entire campus to go tits up every second wednesday of the month, if you know what I mean.
At this point, your VM is probably ready to be configured. Let’s get started with a basic configuration – one that will allow users to authenticate using user creds present in the radius users config file:
  1. Install RADIUS: sudo apt-get install freeradius
  2. Navigate to the /etc/freeradius directory (you’ll need to be logged in as root to do this)
  3. Edit the users file; add a user with only the Cleartext-Password attribute set (there should be examples of this in the file)
  4. Edit the clients file; set the RADIUS secret of the localhost (eventually, we’ll have to specify one for the juniper switch)
  5. Restart freeradius: /etc/init.d/freeradius restart
  6. Test freeradius: radtest <user> <password> localhost 10 <secret>
  7. If you have trouble connecting, I suggest you stop freeradius, then start it with -X to get verbose debugging output
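To make steps 3 and 4 a bit more concrete, here’s roughly what the two entries look like (the user name, password and secret below are placeholders; a default install already ships a localhost client entry you can reuse):

```
# In /etc/freeradius/users -- a test user with only a cleartext password
testuser    Cleartext-Password := "testpassword"

# In the clients file -- the localhost entry used by radtest
client localhost {
        ipaddr =
        secret = testing123
}
```

With those in place, radtest testuser testpassword localhost 10 testing123 should come back with an Access-Accept.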

Note: stupidly, I had added a user to the users file with the same name as a user in my passwd file. By default, freeradius does have PAP enabled, and it does have priority over the users file.

Then, you should make the switch aware of the RADIUS server – in other words, configure the Authenticator to interface with the Authentication Server. To do this, follow these steps:
  1. Log onto the J-Web interface
  2. Go to the Configuration tab
  3. Go to the 802.1X section
  4. Click on the RADIUS Servers button toward the top right of the page, and add a new server. The terminology is a bit confusing here, so make sure that you put the RADIUS server IP in the destination address field, and the switch IP in the source address field. Also, make sure you specify the correct port (1812). You’ll need to fill all the fields – the switch will happily take blanks but I’m pretty sure it mucks up the transmission.

Remember that the switch’s role as Authenticator will be to take the EAPOL authentication frames coming from the XP workstation, re-encapsulate the EAP messages in RADIUS packets and pass them on to the RADIUS box.

Also remember that in this case, the only machine that will be directly communicating with the RADIUS server is the juniper switch. A common mistake is to think that all hosts will need an entry in the clients file of freeradius – in fact, the only device that does is the juniper switch.
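So the only addition to the freeradius clients file is an entry for the switch itself – something along these lines (the IP, secret and shortname are placeholders; the secret must match what you typed into J-Web):

```
client {
        secret    = sharedsecret
        shortname = ex4200
}
```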

Step four: connecting an XP machine to your switch

You want to start by making sure that your XP box can connect without the additional mumbo jumbo – without preemptively setting any 802.1X profiles for the port, plug your XP machine in and ping the switch (presumably, you’re going to have to set up a static IP address before you’re able to connect). You should proceed only if you’re able to hit the switch successfully at this point; at the risk of stating the obvious, your workstation isn’t going to automagically fix itself once you’ve got the whole thing configured.

Next, you should make sure that your XP box is running the necessary service to talk 802.1X to the switch. Once upon a time (in pre-SP3 versions of XP), this meant starting the Wireless Zero Config service – which, you know, makes oh-so-much sense when you’re trying to set up a wired connection. As of SP3, the service you’re looking for is Wired AutoConfig – make sure it’s set to Automatic startup and is, in fact, started now. Navigate to Network Connections (Start > Settings > Network Connections), right-click on the wired NIC and hit Properties. You should see at least three tabs, one of which is ‘Authentication’; if you don’t see the Authentication tab, re-read the beginning of this paragraph and slap yourself on the wrist – you’re skipping steps.

In the Authentication tab, enable 802.1X authentication, set it to use Protected EAP, and hit the Settings button. Disable validation of the server cert (though you’re obviously going to want to revisit that one later to be safer), and set your authentication method to be MD5. Click on Configure and make sure that your machine isn’t using your windows creds to authenticate, though this may be the case later. Finally, make sure that Fast Reconnect is enabled and the two other options are disabled (once again – you’re going to have to revisit that later), and hit OK. Note that you should still be able to ping the switch; what we’ve done is configure an additional layer which is only used if it is needed. This is practical if a lot of your users have laptops.

This should be a decent config for now – very simple, starting easy. I find that it’s easier to start simple and work toward the more complicated configs incrementally; it reduces chances of getting a composite problem that isn’t easily fixed with the flick of a switch.

Returning where we left off in step 3, we’re now going to assign an 802.1X profile to the port on which your XP box is connected. Return to the J-Web interface, and navigate to Configure > Security > 802.1X. Click on your XP box port, then click Edit > Apply 802.1X profile. That will enable 802.1X security on the port. Note that you can select as many ports as you like and apply 802.1X to them in bulk.

Look at your XP box, now: you should get a bubble appearing over your network connection, indicating that you have to specify additional information to connect. If you click on it, you’re prompted for a user name and password. Great success! You’ve just set up RADIUS authentication.

Step five: expanding your horizons

Now that you’ve configured 802.1X on your wired network, you’re probably thinking “great, so now that’s done. Whatever happens, I don’t have to worry about network access control ever again.” Right?


You’re just getting started. This is pretty much the tip of the iceberg – iteration one of dozens, if not hundreds, of iterations to get your systems secure.  Let me show you what I’m talking about:

Your RADIUS server

Right now, your RADIUS server is sitting on your network happily answering auth requests. You need to think about locking that baby down; for one thing, it should only accept requests from the switch – this can and should be done using ACLs on the juniper and on the RADIUS box.
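On the RADIUS box itself, that lockdown can be as simple as a couple of firewall rules – a sketch with iptables, with standing in for the switch’s IP:

```
# Accept RADIUS auth/accounting traffic from the switch only
iptables -A INPUT -p udp -s --dport 1812:1813 -j ACCEPT
iptables -A INPUT -p udp --dport 1812:1813 -j DROP
```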


You should at least be logging failed requests: there aren’t a gazillion reasons why those should be occurring. Someone on your team might be (re)configuring a workstation; or maybe an end-user is trying something he/she shouldn’t be doing, like hooking up a personal device to the network or messing with the network config. Maybe someone’s trying to brute-force the connection… Use your imagination.

Monitoring’s nice, alerting is better. What system are you going to be using? How are your alerts delivered? Be sure to use the appropriate monitoring tool, make sure you’re not leaking information or generating unnecessary traffic, and spend time testing and pruning the data your system is generating. Data is not information if there’s too much shit to read! You want as few false positives as possible (don’t we all…) – so think about what scenarios you should truly be following up on and cut down on the noise.

Encryption and validation

What we’ve configured up above is insecure:

  1. Packets are not encrypted 
  2. Authentication is done via MD5 challenge – MD5 has some known vulnerabilities (cf. this paper).
  3. If you don’t choose a good RADIUS shared secret, it can be cracked. In fact, RADIUS itself possesses a number of vulnerabilities (cf. Joshua Hill’s article) which, if you’re not careful, could lead to DoS conditions or password compromise (and if your RADIUS box is relying on Active Directory for authentication, the extent of the damage could be considerable).
Quite a few of the vulnerabilities mentioned above can be mitigated by restricting access to the RADIUS server from the network (as mentioned above), getting relevant alerts when a new device is plugged in, and using a different authentication mechanism (MS-CHAP-V2 might be an option, but it’s definitely not 100% secure either, as you can see).


Speaking of MS-CHAP-V2… Have you considered using LDAP? [Edit – was going to add some derisive comment pointed at a company we all know and love. But that’d be below me… Heheh, right.]

If you’re using Active Directory in your infrastructure, you can of course hook your RADIUS server up to that. An added benefit is that you can disable both a user’s access to services and to the network from a centralized location. The disadvantage is that, once again, your network access is relying on a centralized AAA system that has been known to fail. Plus, there’s a good chance that if someone is able to compromise credentials via the RADIUS server, they’d pretty much have the keys to the kingdom. If you’re going to use LDAP with RADIUS, you should at least be thinking about using TLS for encryption (which requires setting up a Cert Authority).

Quarantine Checks

You probably noticed the ‘Enable quarantine checks’ checkbox when configuring your client in step four. Sounds enticing, doesn’t it? Essentially, what it means is that your client would accept participating in Statement of Health checks; of course, it does mean that you have to set up a Windows Network Policy Server. If you’re doing that, then you might as well set up RADIUS on the same server. c.f. step three to see why I think this is a bad idea…

Multiple connections on a single port, and special ports

One of 802.1X’s potential pitfalls is that of multiple machines connected to a single port. What happens if somebody comes in with a switch of their own? They could plug it into their ethernet connection and put a corporate machine on it as well as an external machine. If the switch isn’t configured correctly, the corporate machine could answer the RADIUS auth requests and the external machine would have all the access to the network it needs.

The EX4200 has provisions for this possibility, and can be locked down in the two following ways:

  1. You can prohibit a port from having multiple devices connected to it
  2. You can configure the switch to require that all devices connected to the port need to authenticate
  3. You can use MAC address filtering
Nope, it’s not a mistake – I indicated that there were two ways to lock down the switch and listed three. That’s because the third’s not a viable option: I’m stating it so I can shoot it down right away, because it’s ridiculously easy to circumvent.
I think that further research into the implementation of option two is necessary to be able to sign off on it fully… But that, like many other elements of this article, is for another time.
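For the record, on the Junos CLI these port modes surface as the dot1x ‘supplicant’ setting – something like the lines below (the interface name is a placeholder, and you should verify the exact syntax against your Junos release before relying on it):

```
# Only one device may authenticate on the port (option one)
set protocols dot1x authenticator interface ge-0/0/10.0 supplicant single-secure
# Every device on the port must authenticate for itself (option two)
set protocols dot1x authenticator interface ge-0/0/10.0 supplicant multiple
```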
I hope that you found this little intro to 802.1X and Juniper useful. It’s meant as an introduction and is by no means comprehensive, but will hopefully get you thinking about the various infrastructural and security aspects that you need to consider when implementing such a mechanism.
Frightfully sorry about some of the gaps in explanations – if you write to me or comment on my post, I will do what I can to answer any questions!

Soundminer — a trojan that steals your credit card info by voice.

Found this very impressive:

Essentially, Soundminer is a trojan that steals credit card data by listening for touch tones or audio. It can be combined with other malware to transmit the stolen information back to an attacker. The mobile app is a proof of concept — it has to be installed and approved by the user to work (link to the paper here). However, I think that it very effectively proves that one should exercise caution with any electronic device, not just computers.

Smart devices – be they cell phones, multi-function printers, or projectors – should be considered as mini-computers; be careful what you install, and be careful what you access. Include it in your security planning, know how to react if a smart phone gets stolen or its security compromised, know what access these things have to your organization.

Security weaknesses due to smart devices are nothing new: I attended a brilliant Defcon 16 talk, Bringing Sexy Back: Breaking in with Style, during which Errata Security explained how they broke into a client network by mailing an iPhone with wifi enabled to the office. At Brucon 2010, Joe McCray mentioned ‘borrowing’ the network connections of MFPs during pen-testing exercises more than once. Our electronic devices are getting cooler and cooler – and security is the price to pay for the extra snazziness. Users be warned 🙂

>What does the frontend of an online hacker store look like? Courtesy of Boing Boing.

>I thought this post was both frightening and strangely entertaining. It has such a ‘hollywood’ feel to it — perhaps this is why it’s so dangerous.

You’d think that this is an unsustainable business; I mean, don’t admins change their passwords at least from time to time? Don’t vulnerabilities get fixed, making it impossible to find the password in the long run?

Yeah, right. Site admins are probably as conscientious as they can be given their time and budget constraints. Also, it’s increasingly common for organizations to have ‘site admins’ that have more of an editing / web design background than a sysadmin / web dev / infosec background — an unfortunate consequence of increased outsourcing of web development and increased usability of CMS systems.

What did you expect to see on a webmaster’s CV 5-10 years ago? Fluency in HTML, CSS, and javascript, intermediate to advanced knowledge in a scripting language such as PHP perhaps, maybe some working knowledge of Flash, and definitely some experience with some web design package (like Dreamweaver) or IDE (such as Visual Studio .Net, Eclipse — or hell, even WebMatrix). The site admin was expected to liaise with the Comms team or something in order to put the content on the web, and had little to no experience in the field of editing or journalism.

Nowadays, the effect is reversed: with easy-to-use tools such as Drupal, Joomla, DotNetNuke, or Sharepoint, you don’t need nearly as many hard skills in order to administer and maintain a website. I’d go as far as to say that recruiting an admin with a strong technical background would only lead to the person’s frustration and eventual resignation. However, it does mean that this new generation of site administrators is less likely to exercise proper caution — reading access logs, using secure passwords, performing routine security tests and code reviews, and following security feeds in order to reduce the chances of the site getting pwned.

Okay wise-ass, I can hear you say, thanks for stating the problem — now what’s the solution?

Sadly, there is no easy solution for this. Ideally, in a small to medium organization, you want the web team to have at least one person managing the content, layout and editing of your website — let’s face it, we techies are generally allergic to such things (anyone that’s worked with me knows not to mention colors in my presence – I get hives). That person is the main ‘business’ liaison and project champion — let’s call him/her the ‘web editor’. Then, on the technical side, you’d have one web development liaison, and one sysadmin liaison. You don’t want the person that’s writing the code to review the code, or checking the logs — each person has a set of responsibilities that complements the others. Nobody’s stuck with a laundry list of responsibilities, routine checks are more likely to be performed and, provided that there’s adequate communication between parties, one generally avoids getting listed on such sites as mentioned above.

>Warrior worms!

>When I first saw the title below, I had a chuckle:

As a security puke, “warrior worms” has an entirely different connotation to me. The article does in fact talk about a species of flatworm that was discovered to have organized division of labor (like bees), hence the name.

I skimmed it during breakfast, briefly pausing to wonder whether such an article constituted appropriate mealtime reading. About halfway through, something caught my attention:

The scientists think the worms started out as generalists. But as onslaught from invaders increased, traits evolved in some worms that benefited defense, while the reproducers became more specialized at what they do best.

Hang on. Wait just a minute there. Worms that started out as generalists, but specialized as attacks on them increased? How could this apply to the virtual world?

I see a potential application — and as a preamble, let me state that I haven’t checked to see if people out there are already doing some research on this: first and most obvious application would be to write an entirely new class of polymorphic worms that specialize and work in relation to each other.

To wit: malware that has a “larval stage”, penetrating a host with an 0-day, grabbing local admin credentials, sniffing traffic, and replicating itself to other vulnerable hosts.

Once it can no longer replicate in this form, it begins a specialization process — each ‘larva’ determines what operating system, services and apps it’s running on. Larvae then report their information to some sort of C n’ C center, which shall in return provide information that will help shape the larvae into specialized drones so that they can effectively compromise the rest of the network, erase their trail and obfuscate themselves from future analysis (perhaps reconfigure IDS and netflow rules? Who knows).

Because the C n’ C has received the host’s information, it can re-use the info to redeploy new variants of exploits as they come out or attack specific services — consider this scenario: “I want to overload this e-mail server with SMTP traffic”. Traditionally, you’d get all your compromised hosts to attack the e-mail server (telnet, netcat, python script… Whatever). That’s going to be very bloody noisy, so you’ll probably only be able to do it once before sysadmins realize what’s going on and re-ghost all hosts that have been talking (or trying to talk — depends on the network’s egress filters, yeah?) to port 25.

But what if your C n’ C knows which compromised hosts out there are mail servers? It can tell just those hosts to attack the mail server. The attack is practically untraceable at the network level; as long as you’re not trying to send too many e-mails out and your content isn’t stupidly conspicuous, your mail admin won’t be able to tell the difference between regular traffic and your traffic. Better still, this attack would allow you to continue using your compromised hosts, because it’s much more discreet.

How on earth do you protect against this sort of attack? I’d say that your best bet is to have agents installed on your workstations and servers to monitor any changes made and report them on a regular basis (once weekly, for instance) — there are tools already out there for that, thankfully. You’re not *guaranteed* to catch it — you need someone to look at the logs really carefully — but you’re more likely to catch it than if you’re not doing anything.

Another countermeasure would be to reghost machines on a regular basis — and once again, there are a lot of tools out there for that.  You can do this fairly easily for workstations, but let’s face it — it’s nigh impossible to do that for servers.

Are you into biomimicry, too? Or do you think this article is bollocks? Great — leave me a comment! Would love to hear what you have to say 🙂

>An ubuntu install script


Wrote a simple little script this morning to install all the software packages I might need for ruby development (plus a few security tools). Hopefully it will serve someone other than me 🙂
I know, I know…  You can’t generalize and install some set of packages without knowing what they are.  That’s not the linux way.  On a production server, I’ll always perform a manual setup and, when I can, I compile from source rather than use packages.  This particular script is suited for a dev machine.
Note that, in the very beginning, I set up a few version variables.  You should be able to just set these and then fire up the script.
Caution: I’m providing this script as I use it, on a non-production, fresh install of a linux desktop environment. You can do whatever you want with it; but if you’re dumb enough to run this on a production server without checking it out in detail first, and it breaks your prod environment, don’t come complaining to me — I’ll hurt you, man! 😉
And now for the code:

#This script assumes that you’re running ubuntu 10.4 32-bit. For the metasploit, ruby enterprise and flash packages, you’ll definitely need to change the packages downloaded!

if [ "$(whoami)" != 'root' ]; then
        echo "You don't have permission to run $0 as a non-root user."
        exit 1
fi

#Set a few variables here (fill in the versions you want before running):
metasploit_version=""
ruby_version=""
ruby_enterprise_version=""
gem_version=""
passenger_version=""
flash_version=""

echo ************************** Installing basic packages: **************************
apt-get install -y build-essential subversion vpnc network-manager-vpnc libreadline5-dev

echo ************************** Installing forensics packages: **************************
apt-get install -y ewf-tools sleuthkit registry-tools hfsutils squashfs-tools
echo ************************** Installing security packages: **************************
apt-get install -y snort flow-tools aircrack-ng ettercap-gtk python-scapy wireshark tcpreplay ghex openvas-server openvas-client nmap zenmap

echo ************************** Setting up metasploit **************************
wget`echo $metasploit_version`.run
chmod +x framework-`echo $metasploit_version`.run
./framework-`echo $metasploit_version`.run

echo ************************** Installing software development packages: **************************
apt-get install -y ruby`echo $ruby_version` ruby`echo $ruby_version`-dev libopenssl-ruby rubygems mysql-server meld

echo ************************** Installing web server packages: **************************
apt-get install -y apache2 apache2-prefork-dev libapr1-dev libaprutil1-dev

echo ************************** Removing mysql-server autostart **************************
update-rc.d -f mysql remove

echo ************************** Removing apache autostart **************************
update-rc.d -f apache2 remove

echo ************************** Setting up ruby enterprise **************************
wget`echo $ruby_enterprise_version`.deb
dpkg -i ruby-enterprise_`echo $ruby_enterprise_version`.deb

echo ************************** Setting up passenger **************************
/usr/local/lib/ruby/gems/`echo $gem_version`/gems/passenger-`echo $passenger_version`/bin/passenger-install-apache2-module

echo "LoadModule passenger_module /usr/local/lib/ruby/gems/`echo $gem_version`/gems/passenger-`echo $passenger_version`/ext/apache2/" > /etc/apache2/mods-available/passenger.load
echo "<IfModule mod_mime_magic.c>" > /etc/apache2/mods-available/passenger.conf
echo "PassengerRoot /usr/local/lib/ruby/gems/`echo $gem_version`/gems/passenger-`echo $passenger_version`" >> /etc/apache2/mods-available/passenger.conf
echo "PassengerRuby `which ruby`" >> /etc/apache2/mods-available/passenger.conf
echo "</IfModule>" >> /etc/apache2/mods-available/passenger.conf

echo ************************** Getting Flash Player **************************
wget`echo $flash_version`.deb
dpkg -i install_flash_player_`echo $flash_version`.deb

echo ************************** cleanup **************************
rm examples.desktop install_flash_player_`echo $flash_version`.deb framework-`echo $metasploit_version`.run ruby-enterprise_`echo $ruby_enterprise_version`.deb

Here’s a sample apache config (taken straight from phusion’s installer…):
   <VirtualHost *:80>
      DocumentRoot /somewhere/public
      <Directory /somewhere/public>
         AllowOverride all
         Options -MultiViews
      </Directory>
   </VirtualHost>

>Run your linux applications remotely over SSH

This is a *very* short article on using x-win with SSH — namely because there’s a ton of articles out there on the subject already. I found that this worked with cygwin and ubuntu… If you’re using ubuntu as both the client and the server, you won’t need to export the DISPLAY variable…

Server: the machine whose programs you want to run; could be a server on a rack
Client: the machine on which you want to see the programs; could be your workstation

from your client:
1) start X-Win (if cygwin)
2) use xhost to grant access to the x-win server: “xhost +[name]”, where name can be a host or a user.
3) use ssh to connect to the server: “ssh -X [user]@[servername]”

from the server via ssh:
1) set the display (this assumes you’re using bash): “export DISPLAY=[client ip address]:0.0”
2) test using xclock: “xclock &”

Once you’re done, I would recommend that you do an “xhost -[name]” from your client again.
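To recap the steps above in a single transcript (the hostnames and the IP address are placeholders):

```
# On the client (your workstation):
xhost +myserver               # allow the server host to use your display
ssh -X admin@myserver         # -X enables X11 forwarding

# On the server, inside the ssh session (bash assumed):
export DISPLAY=   # your client's IP; skip if DISPLAY is already set
xclock &                      # the clock should pop up on the client

# Back on the client, once you're done:
xhost -myserver
```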

>Reverse RDP tunneling using SSH

Nowadays, the market is *flooded* with really, really good remote control applications. I’d say that in 99% of the cases out there, you’re set by using a client such as LogMeIn or even something a bit more beefy like Kaseya. However, there *are* times when you need a little something extra. Maybe it’s because you just prefer RDP. Maybe your needs are special (you’ve got a legacy app that works over TCP but isn’t encrypted, and you need remote access). Maybe your client’s workstation just got patched and it broke your remote control (oh yeah — still happens). Whatever the case may be, you need an extra remote control channel that is secure, password-protected, and traverses NAT.

For those of you that already use SSH, this is really nothing new. SSH has been around forever, port forwarding included, so if you’re already experienced in SSH, chances are you know all of this already. In my case, it was necessary to do some investigation of a lightweight SSH client on windows. Installation had to be silent (read: quiet, command-line installation) and it had to traverse NAT. I selected Bitvise’s Tunnelier application for this. It’s free for personal use.

In today’s article, we’re going to look at tunneling applications through a reverse SSH tunnel. I use RDP (port 3389) as an example, but it should apply to pretty much anything (VNC, SQL, etc…)

Assumptions:
  • Local machine is directly accessible to the administrator (console, RDP, LMI or otherwise). If the machine is a linux box, consider setting yourself up with X-Win (read my very short how-to on x-win at
  • RDP is running on remote machine at port 3389 (otherwise, change the destination port in profile configuration below) 
  • You have an SSH server accessible from the internet

Terminology:
  • LOCAL – the administrator’s local machine. SSH server. Must be accessible from the net. 
  • REMOTE – the workstation to which the administrator wishes to connect. One must be able to transfer the Tunnelier setup files and run the command for installation.

Tunnelier silent install (from a shell session to the remote machine):

  • Tunnelier-Inst.exe -installDir=c:\Tunnelier -acceptEULA -force
  • You can get the latest version of the install binary here – be sure to pay bitvise a thorough visit when you can, they’ve got some really great tools:

Create a tunnelier profile (can be done from a local machine with tunnelier):

  1. For ease of use, this profile should be copied to the same directory as the tunnelier binary
  2. Use the password method, save it to the profile.
  3. IMPORTANT: Create an ***S2C*** forwarding entry.  Listen interface:; port: 3389 (or other if already in use – note that this is the port on the LOCAL machine); Destination host:; Dest. Port: 3389
  4. Also under S2C forwarding, check on “Accept server-side port forwardings”
  5. Under Options, check the “Always reconnect automatically” option.

To tunnel RDP connections:

  1. From the remote command line (i.e. ssh or netcat), switch to the tunnelier directory and run “tunnelier -profile=your_tunnel_profile.tlp -loginOnStartup” (replace your_tunnel_profile with the appropriate profile filename).
  2. Tunnelier should automatically connect to SSH
  3. From the local machine, open up an RDP connection to localhost at the port specified in the profile.
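Incidentally, if the remote end has an OpenSSH client instead of Tunnelier, the same reverse tunnel is a one-liner — the hostnames here are placeholders:

```
# Run on REMOTE: dial out to LOCAL's SSH server and ask it to listen on
# port 3389, forwarding every connection back to REMOTE's own RDP port.
ssh -N -R 3389:localhost:3389 admin@local-machine

# Then, on LOCAL, point your RDP client at localhost:3389.
```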