Warrior worms!

When I first saw the title below, I had a chuckle:
http://news.discovery.com/animals/warrior-worms-caste-colony.html

To a security puke like me, “warrior worms” has an entirely different connotation. The article does in fact talk about a species of flatworm that was discovered to have an organized division of labor (like bees), hence the name.

I skimmed it during breakfast, briefly pausing to wonder whether such an article constituted appropriate mealtime reading. About halfway through, something caught my attention:

> The scientists think the worms started out as generalists. But as onslaught from invaders increased, traits evolved in some worms that benefited defense, while the reproducers became more specialized at what they do best.

Hang on. Wait just a minute there. Worms that started out as generalists, but specialized as attacks on them increased? How could this apply to the virtual world?

I see a potential application. As a preamble, let me state that I haven’t checked whether people out there are already doing research on this. The first and most obvious application would be to write an entirely new class of polymorphic worms that specialize and work in relation to each other.

To wit: malware that has a “larval stage”, penetrating a host with an 0-day, grabbing local admin credentials, sniffing traffic, and replicating itself to other vulnerable hosts.

Once it can no longer replicate in this form, it begins a specialization process: each ‘larva’ determines what operating system, services and apps its host is running. The larvae then report this information to some sort of C n’ C center, which will in turn provide information that shapes each larva into a specialized drone, so that together they can effectively compromise the rest of the network, erase their trail and obfuscate themselves from future analysis (perhaps reconfigure IDS and netflow rules? Who knows).
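Purely for illustration, here’s roughly what that fingerprinting step boils down to. The field names, the port list and the idea of dumping the report as JSON are all assumptions I’m making for the sketch; it’s the same sort of inventory code any asset-management agent runs, nothing exotic:

```python
import json
import platform
import socket

def fingerprint_host():
    """Collect the basics a 'larva' would report: OS details, hostname,
    and which well-known service ports are listening locally."""
    interesting_ports = {25: "smtp", 80: "http", 443: "https", 3306: "mysql"}
    listening = {}
    for port, name in interesting_ports.items():
        try:
            # Only probing localhost here; a real agent would enumerate properly.
            with socket.create_connection(("127.0.0.1", port), timeout=0.5):
                listening[port] = name
        except OSError:
            pass
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "services": listening,
    }

# The 'report' step would ship this off to the C n' C; here we just print it.
print(json.dumps(fingerprint_host(), indent=2))
```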

Because the C n’ C has received each host’s information, it can re-use that info to deploy new variants of exploits as they come out, or to attack specific services. Consider this scenario: “I want to overload this e-mail server with SMTP traffic”. Traditionally, you’d get all your compromised hosts to attack the e-mail server (telnet, netcat, python script… whatever). That’s going to be very bloody noisy, so you’ll probably only be able to do it once before the sysadmins realize what’s going on and re-ghost every host that has been talking (or trying to talk, depending on the network’s egress filters, yeah?) to port 25.
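To make the “bloody noisy” point concrete: that flavour of attack lights up in flow data, which is exactly how it gets caught. Here’s a rough sketch of the kind of check a sysadmin might run over exported flow records; the record format and the list of known mail servers are made-up assumptions for the example:

```python
# Toy flow records: (source IP, destination IP, destination port).
flows = [
    ("10.0.1.15", "203.0.113.9", 25),
    ("10.0.1.22", "198.51.100.4", 443),
    ("10.0.2.77", "203.0.113.9", 25),
]

# Hosts that are *supposed* to talk SMTP outbound (assumption for the example).
known_mail_servers = {"10.0.5.10"}

# Any other internal host hitting port 25 is a candidate for re-ghosting.
suspects = {src for src, dst, port in flows
            if port == 25 and src not in known_mail_servers}

for host in sorted(suspects):
    print(f"unexpected SMTP talker: {host}")
```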

But what if your C n’ C knows which of your compromised hosts are mail servers? It can tell just those hosts to attack the target mail server. The attack is practically untraceable at the network level; as long as you’re not trying to send too many e-mails out and your content isn’t stupidly conspicuous, the mail admin won’t be able to tell the difference between regular traffic and yours. Better still, this approach lets you keep using your compromised hosts, because it’s much more discreet.
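On the C n’ C side, the “pick only the mail servers” step is nothing fancy; it’s just filtering the inventory built from the larvae’s reports. A toy sketch with entirely made-up data:

```python
# Toy inventory assembled from the larvae's fingerprint reports (made-up data).
inventory = {
    "10.0.1.15": {"os": "Linux", "services": {25: "smtp"}},
    "10.0.1.22": {"os": "Windows", "services": {443: "https"}},
    "10.0.5.10": {"os": "Linux", "services": {25: "smtp", 143: "imap"}},
}

# Hosts that already speak SMTP; traffic from them blends into the baseline.
mail_hosts = [ip for ip, info in inventory.items()
              if "smtp" in info["services"].values()]
print(mail_hosts)  # ['10.0.1.15', '10.0.5.10']
```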

How on earth do you protect against this sort of attack? I’d say your best bet is to have agents installed on your workstations and servers to monitor any changes made and report them on a regular basis (once weekly, for instance); thankfully, there are already tools out there for that. You’re not *guaranteed* to catch it (you need someone to look at the logs really carefully), but you’re more likely to catch it than if you’re not doing anything.
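For what it’s worth, the change-monitoring idea boils down to something like this: hash everything once as a baseline, re-hash on a schedule, and report the differences for a human to eyeball. A bare-bones sketch follows; real tools such as Tripwire or OSSEC do this far more robustly, and the paths and file names here are just placeholders:

```python
import hashlib
import json
from pathlib import Path

def hash_tree(root):
    """Map every readable file under 'root' to its SHA-256 digest."""
    digests = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
            except OSError:
                continue  # skip files we can't read
    return digests

def diff_baseline(baseline, current):
    """Report files added, removed, or modified since the baseline."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    modified = sorted(p for p in set(baseline) & set(current)
                      if baseline[p] != current[p])
    return {"added": added, "removed": removed, "modified": modified}

# First run: save a baseline. Later runs (say, weekly): compare and report.
baseline_file = Path("baseline.json")
current = hash_tree("/etc")
if baseline_file.exists():
    baseline = json.loads(baseline_file.read_text())
    print(json.dumps(diff_baseline(baseline, current), indent=2))
else:
    baseline_file.write_text(json.dumps(current))
```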

Another countermeasure would be to re-ghost machines on a regular basis, and once again, there are a lot of tools out there for that. You can do this fairly easily for workstations, but let’s face it: it’s nigh impossible to do for servers.

Are you into biomimicry, too? Or do you think this article is bollocks? Great, leave me a comment! Would love to hear what you have to say 🙂