>Private browsing and forensics

>Ever wondered whether the “private browsing” feature in your browser actually works?

This article may shed some light on this topic for you. On a sour note, I’m completely shocked that Microsoft’s implementation of private browsing leaves something to be desired.

From a privacy advocate and defensive security perspective, I’m all for private browsing, in both the private and corporate worlds, and here’s the main reason: cookies, cached files and the like represent a significant security issue and a potential data leak. If your company uses webmail or an intranet and you’re consulting confidential files on the fly, that data gets stored locally on the machine. At the enterprise level, that risk trumps the need for a forensically viable audit trail.

Private browsing isn’t a panacea, though: since the data is stored in memory, malware that is already installed on the PC could scrape memory in search of interesting data (credit card numbers, credentials, etc.) — and not just malware, either. If you were at the European SANS forensics summit this year, you might have heard this guy talk about retrieving the contents of a machine’s memory using forensics tools. Nor does private browsing protect the user against a traditional network sniffer / MITM attack. Finally, it assumes that you actually bother to close your browser to clear that memory of sensitive data.

A lot of this is abstract for the layperson, so let’s provide a real-world scenario:

Let’s say you work for a pharma company and you’re waiting for a flight.  You’re bored, so you go to an internet café and open up your webmail. Your teammate’s sent you the latest draft of that report you’ve been working on, internally disclosing the findings of your latest research. You review the document, and fire her back an e-mail with your comments; you then leave the café and proceed to your gate. 

Risk #1: the PC you use isn’t an enterprise PC: to quote a memorable Mike Myers film, it’s the village bicycle of IT — everyone’s had a ride. What’s the café’s policy on updating its A/V? Is there regular maintenance? Does the machine get re-ghosted after every use? Is there a slot for a USB drive (and therefore a vector of infection)? Is the network traffic being sniffed (i.e. monitored)? It all depends on the owner of the café — there aren’t any laws or standards that oblige internet café owners to comply with basic security measures. For this risk, no amount of “private browsing” can help you – you may as well have broadcast your enterprise password and files on Facebook.

Risk #2: that report you just looked at has pretty much become public property the minute you opened it up on that public machine. Not only can subsequent users of that PC retrieve your report, but the law will not be on your side (“you should have known better” will be the de facto response). Private browsing can help you here, provided that you close the browser, because the data is stored in memory and not on disk.

Risk #3: how often do people forget to log off? Very often. As a matter of fact, I don’t think there’s a single person on this planet that’s used a computer and has never, ever forgotten to log off. And yet, if you forget to log off when you walk away from that public PC, all of your company’s past, present and future secrets could be compromised. Ever heard of the switchblade USB key? It retrieves cached passwords very nicely, and almost instantaneously. Very difficult to use: you insert the key in the computer, wait thirty seconds, pull it out — voilà, passwords du jour. In this scenario as well, private browsing can be extremely useful, because it doesn’t allow cached passwords to be written to the disk.

So there you have it, straight from the horse’s mouth: private browsing may well make forensics more difficult, but it doesn’t make it impossible. That’s an acceptable trade-off to me, given that it mitigates both enterprise and personal risk of a security breach.

>Well isn’t this just peachy…

>The dust is only just settling after Patch Tuesday, and we’ve already got good news…

http://www.zdnet.com/blog/security/microsoft-confirms-unpatched-aspnet-data-leakage-security-flaw/7363

In a nutshell, Microsoft has confirmed a vulnerability in ASP.Net’s encryption implementation — the upshot is that an attacker could read any ASP.Net-encrypted data, such as the ViewState. The vulnerability is due to an information leak in ASP.Net’s error responses. The workaround is to enable custom errors and point all errors to a single page.  Good news is, if you’re doing that already, you don’t have to do anything until MS issues a fix…
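
For what it’s worth, here’s a rough sketch of what that workaround can look like in an application’s web.config. The error page name is just a placeholder, and you should check Microsoft’s advisory for the exact settings it recommends for your framework version:

<configuration>
  <system.web>
    <!-- Route every error to one generic page so that error responses stop
         leaking useful information. "GenericError.htm" is a placeholder. -->
    <customErrors mode="On" defaultRedirect="GenericError.htm" />
  </system.web>
</configuration>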

>Network monitoring using Netflow analysis

>Over the last two months, I’ve been oh-so-slowly working on a network monitoring system using rails and netflow analysis.  Sure, there are a few netflow apps out there such as ntop — and don’t get me wrong, they’re absolutely great — but I felt like I would benefit from creating my own system. I wanted to use rails because I’m familiar with it, and wanted something that I could quickly tweak.

My system has a few simple working requirements: a linux box with mysql, flow-tools, and ruby enterprise installed — most of which you can get automatically installed from this script.
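
If you’d rather install the pieces by hand on a Debian-based box, the gist is something like the following. Take it as a sketch: package names can vary by distro, and Ruby Enterprise Edition comes as its own installer from Phusion rather than from the standard repositories.

# Hedged sketch of the prerequisites (package names may differ on your distro):
sudo apt-get update
sudo apt-get install mysql-server flow-tools
# Ruby Enterprise Edition isn't in the standard repos; download the installer
# from Phusion's site, run it, then install the rails gem with its copy of gem.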

Once the basic components are installed, you need to get your routers and/or switches to report netflow data to your box — assuming that your hardware is compatible with netflow.  Another option is for you to use a separate machine, working inline or on a span port, to report netflow traffic using fprobe or equivalent.
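
As an illustration, the software-probe route might look something like this; the interface, collector address and port below are made-up example values, not anything prescribed by the tools:

# On the box that sees the traffic (inline or on a span port): export NetFlow
# from eth0 to the collector at 192.168.1.50 on UDP port 9995.
fprobe -i eth0 192.168.1.50:9995

# On the collector: flow-tools' flow-capture writes the incoming flows to disk,
# listening on port 9995 and rotating files every five minutes (-n 287 per day).
flow-capture -w /var/netflow -n 287 0/0/9995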

I created a mysql database and used rails migrations to create the netflow table, as well as some views for general statistics. I then set up a rake task to pull the netflow traffic into my database and clean up old traffic (for performance purposes), and used a cron job to execute the rake task on a regular basis.
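
The cron side of that is nothing fancy: something along these lines, where the application path and the rake task names are hypothetical placeholders for whatever you call yours.

# Hedged example crontab entry: every five minutes, import new flow records and
# prune old ones. "netflow:import", "netflow:prune" and /opt/flowapp are made up.
*/5 * * * * cd /opt/flowapp && RAILS_ENV=production rake netflow:import netflow:prune >> log/netflow_cron.log 2>&1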

There are a few really nifty rails gems out there that will help you right from the start: the geoip gem that uses maxmind’s GeoLite geolocation database is super useful, of course; but there are also some really nice graphing tools out there (openflashchart is brilliant, and it seems to handle relatively large volumes of data beautifully).

None of this took much time to actually code. When I say that I’ve been coding this thing “over the last two months”, I’m talking about a half hour here or there. The most time-consuming activity in all this is figuring out which libraries you want to use, and designing your queries.

None of this is rocket science obviously… Just a few tips to get going with some basic netflow monitoring. If you’re expecting to be able to replace a Sourcefire system with this… Sorry, it ain’t gonna happen.  But if you need some basic tools to improve your ability to perform netflow analysis and you’re not afraid to code, these tips might be of help 🙂

I’m thinking of setting this tool up on sourceforge — but hell, there are so many packages out there already, it feels silly.

>Warrior worms!

>When I first saw the title below, I had a chuckle:
http://news.discovery.com/animals/warrior-worms-caste-colony.html

To a security puke like me, “warrior worms” has an entirely different connotation. The article does in fact talk about a species of flatworm that was discovered to have an organized division of labor (like bees), hence the name.

I skimmed it during breakfast, briefly pausing to wonder whether such an article constituted appropriate mealtime reading. About halfway through, something caught my attention:

The scientists think the worms started out as generalists. But as onslaught from invaders increased, traits evolved in some worms that benefited defense, while the reproducers became more specialized at what they do best.

Hang on. Wait just a minute there. Worms that started out as generalists, but specialized as attacks on them increased? How could this apply to the virtual world?

I see a potential application — and as a preamble, let me state that I haven’t checked whether people out there are already doing research on this. The first and most obvious application would be to write an entirely new class of polymorphic worms that specialize and work in relation to each other.

To wit: malware that has a “larval stage”, penetrating a host with a 0-day, grabbing local admin credentials, sniffing traffic, and replicating itself to other vulnerable hosts.

Once it can no longer replicate in this form, it begins a specialization process — each ‘larva’ determines what operating system, services and apps it’s running on. Larvae then report their information to some sort of C n’ C center, which in turn provides information that will help shape the larvae into specialized drones so that they can effectively compromise the rest of the network, erase their trail and obfuscate themselves from future analysis (perhaps by reconfiguring IDS and netflow rules? Who knows).

Because the C n’ C has received the host’s information, it can re-use the info to redeploy new variants of exploits as they come out or attack specific services — consider this scenario: “I want to overload this e-mail server with SMTP traffic”. Traditionally, you’d get all your compromised hosts to attack the e-mail server (telnet, netcat, python script… Whatever). That’s going to be very bloody noisy, so you’ll probably only be able to do it once before sysadmins realize what’s going on and re-ghost all hosts that have been talking (or trying to talk — depends on the network’s egress filters, yeah?) to port 25.

But what if your C n’ C knows which compromised hosts out there are mail servers? It can tell just those hosts to attack the mail server. The attack is practically untraceable at the network level; as long as you’re not trying to send too many e-mails out and your content isn’t stupidly conspicuous, your mail admin won’t be able to tell the difference between regular traffic and your traffic. Better still, this attack would allow you to continue using your compromised hosts, because it’s much more discreet.

How on earth do you protect against this sort of attack? I’d say that your best bet is to have agents installed on your workstations and servers to monitor any changes made and report them on a regular basis (once weekly, for instance) — there are tools already out there for that, thankfully. You’re not *guaranteed* to catch it — you need someone to look at the logs really carefully — but you’re more likely to catch it than if you’re not doing anything.
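
To make the idea concrete, here’s a bare-bones sketch of the principle behind such a check (real tools like Tripwire or OSSEC do this properly; the directories and file locations below are arbitrary examples):

# Hedged illustration of file-integrity monitoring: take a baseline of hashes
# once, then compare against it periodically and report anything that changed.

# One-time baseline:
find /usr/bin /usr/sbin /etc -type f -exec sha256sum {} + > /var/lib/baseline.sha256

# Periodic check (run from cron, e.g. weekly): --quiet suppresses the OK lines,
# so only files that no longer match the baseline get reported.
sha256sum --quiet -c /var/lib/baseline.sha256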

Another countermeasure would be to reghost machines on a regular basis — and once again, there are a lot of tools out there for that.  You can do this fairly easily for workstations, but let’s face it — it’s nigh impossible to do that for servers.

Are you into biomimicry, too? Or do you think this article is bollocks? Great — leave me a comment! Would love to hear what you have to say 🙂

>Because manually downloading a bunch of papers is really really boring!

>Just received an e-mail indicating that the SANS forensics summit presentations can now be downloaded off their portal. Apparently, the summit was successful enough that dates are already being blocked for next year – WIN!

Getting to the page, though, I quickly got bored trying to download all the papers; you see, I want to put them on my ebook reader rather than read them all on my computer screen. So, rather than sit there killing off my brain cells with boredom, I used a nice little wget command to grab all the PDFs off the site. You can pretty much read up about all the flags in the man page, but here’s a nice little article courtesy of techpatterns that quickly explains what each flag does.

wget https://computer-forensics.sans.org/community/summits/ -A.pdf --no-check-certificate -r -np -l1 -nd -erobots=off


In case you don’t like clicking on embedded links, here’s a quickie explanation:
-A.pdf ==> filter by the pdf extension
--no-check-certificate ==> don’t check the validity of the cert
-r ==> recurse
-np ==> don’t go up to the parent site
-l1 ==> only recurse one level
-erobots=off ==> ignore the robots.txt file
-nd ==> consolidate all the files in one directory

Schweet.  And now for some dinner.

>Sneaky SAMBA SID vulnerability…

>Here’s a nice little tweet that made my hair stand on end:

hdmoore 

If you are running Samba, turn it off NOW until you can upgrade: http://bit.ly/9bOFH3 (via @jduck1337)

Ugh…  Really? Well this sucks.  I guess the question now is, are you confident that you’re not going to forget a damn server when you go a-patching?

I’m thinking of scripting something to take the pain away… If your linux boxes are Debian-based, this shouldn’t be too hard, right?  Assuming that your boxes all have SSH on ’em, that you can authenticate to them using a cert, and that you’ve updated your repo, you could write a script to automate this…

The snippet to identify whether samba is installed would be something like:
aptitude show samba | grep "State: installed"

The snippet to update and install the new package would be
apt-get update; apt-get install -y samba

Putting this together, you could imagine something like this:
if aptitude show samba | grep "State: installed"; then apt-get update; apt-get install -y samba; fi

Finally, let’s wrap this up by using a for loop and nmap, shall we?
for i in `nmap -p22 -oG - 192.168.1.0/24 | awk '/open/{print $2}'`; do ssh root@$i 'if aptitude show samba | grep "State: installed"; then apt-get update; apt-get install -y samba; fi'; done


This should work… But honestly, I haven’t tested it yet!