For the first three months after I started working on a government network security team, I had nightmares. Then I decided that the grunts on the other side were probably in the same boat as we were--undermanned, underfunded, and misunderstood. Whether it's true or not, it let me sleep at night.
But the reasons I was so unsettled didn't go away. I tell people that most security analysts (and I include system and network admins who take responsibility for security in this group) are very good at catching the easy attacks, probably stopping close to 100% of them quickly. They are pretty good at the moderately sophisticated ones--I would guess that well over 50% are eventually caught, although a fair amount of luck is involved in many of those.
But what has always bothered me is that we have no idea how much we don't know. Honeynets are easily avoided by the most sophisticated attackers, especially since those attackers are focused on specific targets in the first place. What makes a target specific? Look at brick-and-mortar crime to get a sense of the possibilities. Based on that, and given the skills required, I am guessing the problem is not rampant but is still significant.
So one of my goals in developing Realeyes was to provide a tool that lets security analysts dig into the goings-on in their networks. Those who are familiar with a network can tell when conditions just don't feel right, but what can they do with limited time and resources? After more than six months of running Realeyes in a pilot program, I am seeing it begin to deliver on its potential to be that tool.
The site's network admin had periodically seen large bursts of outbound traffic from several EMail servers in the early morning and suspected that some hosts were being used to send spam. I defined rules to report all of the EMail server traffic between midnight and 6:00 am. Unfortunately, the school year ended shortly after the monitoring was set up, and there has been none of the traffic the rules were defined to capture. However, there have been some interesting results.
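The post doesn't reproduce the actual Realeyes rule definitions, but the selection logic amounts to a time-window filter over the mail servers' traffic. Here is a minimal sketch of that logic in Python; the session fields (src_ip, dst_ip, start_time) and the MAIL_SERVERS addresses are invented for the example, not Realeyes syntax.

```python
# Illustrative sketch only -- not Realeyes rule syntax.
from datetime import datetime

MAIL_SERVERS = {"192.0.2.10", "192.0.2.11"}  # placeholder addresses

def in_overnight_window(ts: datetime) -> bool:
    """True if the session started between midnight and 6:00 am."""
    return 0 <= ts.hour < 6

def should_report(session: dict) -> bool:
    """Report any session touching a mail server in the overnight window."""
    involves_mail_server = (session["src_ip"] in MAIL_SERVERS
                            or session["dst_ip"] in MAIL_SERVERS)
    return involves_mail_server and in_overnight_window(session["start_time"])

# Example:
session = {"src_ip": "192.0.2.10", "dst_ip": "198.51.100.7",
           "start_time": datetime(2008, 7, 9, 2, 30)}
print(should_report(session))  # True
```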
Initially, there were a lot of reports of automatic software updates, so I used the exclude rule to prevent certain host or network addresses from being reported. It turned out that a couple of these servers were also listening on port 80, so there were also a lot of reports of search engine web crawlers. These were eliminated by defining rules on keywords found in those sessions, with the NOT flag set so that matching sessions are ignored.
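As a rough illustration of these two exclusions--by address, and by keyword with the NOT flag--here is a sketch in the same spirit. The networks, keywords, and session fields are all made up; the actual Realeyes rules are configured in the tool itself.

```python
# Illustrative sketch only -- not the Realeyes exclude/NOT rule syntax.
import ipaddress

EXCLUDED_NETS = [ipaddress.ip_network("203.0.113.0/24"),   # e.g. update servers
                 ipaddress.ip_network("198.51.100.0/24")]  # e.g. crawler ranges
NOT_KEYWORDS = [b"Windows-Update-Agent", b"Googlebot"]     # ignore if present

def excluded_by_address(session: dict) -> bool:
    """Drop sessions whose remote address falls in an excluded network."""
    remote = ipaddress.ip_address(session["remote_ip"])
    return any(remote in net for net in EXCLUDED_NETS)

def excluded_by_keyword(session: dict) -> bool:
    """Drop sessions whose payload contains a known-benign keyword (NOT rule)."""
    return any(kw in session["payload"] for kw in NOT_KEYWORDS)

def keep_for_review(session: dict) -> bool:
    return not (excluded_by_address(session) or excluded_by_keyword(session))

print(keep_for_review({"remote_ip": "203.0.113.5",
                       "payload": b"GET / HTTP/1.1"}))  # False, excluded by address
```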
There were several 'Invalid Relay' errors issued by the EMail servers, and some of the emails causing them were sent from and to the same address. At first I created a rule to monitor for the invalid relay message from the server. This captured a lot of simple mistakes, so I have started defining rules to capture email addresses that are used more than once. What I am trying to do is refine the definition of 'probes', which can then be used for further monitoring.
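The 'used more than once' idea can be sketched as a simple counter over the captured invalid relay events. The field names below (mail_from, rcpt_to) are assumptions for the example, not Realeyes output.

```python
# Illustrative sketch: flag addresses seen in more than one rejected relay attempt.
from collections import Counter

def repeated_addresses(events):
    """Return addresses that appear in more than one captured event."""
    counts = Counter()
    for ev in events:
        # Count each address once per event, even when sender and recipient match.
        for addr in {ev["mail_from"], ev["rcpt_to"]}:
            counts[addr] += 1
    return {addr for addr, n in counts.items() if n > 1}

events = [
    {"mail_from": "test@example.com", "rcpt_to": "test@example.com"},
    {"mail_from": "user@example.org", "rcpt_to": "test@example.com"},
]
print(repeated_addresses(events))  # {'test@example.com'}
```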
The further monitoring is accomplished with the 'Hot IP' rule. When a Hot IP is defined for an event, all sessions involving the specified IP address (source, destination, or both) are captured for a defined time period after the event is detected--24 hours, for example. Using this technique, I recently saw one of the probing hosts send an HTTP GET request to port 25, as well as some other apparent exploits.
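Conceptually, a Hot IP is a watchlist entry with an expiry: when a rule fires, the offending address goes on the list, and every later session involving it is captured until the watch lapses. This is a minimal sketch of that behavior, not of the Realeyes implementation.

```python
# Illustrative sketch of the Hot IP idea only.
from datetime import datetime, timedelta

class HotIPList:
    def __init__(self, hold=timedelta(hours=24)):
        self.hold = hold
        self.expiry = {}          # ip -> time the watch lapses

    def add(self, ip, when):
        """Called when an event is detected for this address."""
        self.expiry[ip] = when + self.hold

    def should_capture(self, session, now):
        """Capture any session whose source or destination is still hot."""
        for ip in (session["src_ip"], session["dst_ip"]):
            if ip in self.expiry and now < self.expiry[ip]:
                return True
        return False

hot = HotIPList()
hot.add("198.51.100.7", datetime(2008, 7, 1, 1, 0))
print(hot.should_capture({"src_ip": "198.51.100.7", "dst_ip": "192.0.2.10"},
                         datetime(2008, 7, 1, 20, 0)))   # True, still within 24 hours
print(hot.should_capture({"src_ip": "198.51.100.7", "dst_ip": "192.0.2.10"},
                         datetime(2008, 7, 2, 2, 0)))    # False, watch expired
```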
This process is more interactive than the method used by most security tools. But by giving those who know the environment best more control over what is monitored, I am trying to help build a better understanding of how much we don't know. And I hope that lets more good grunts sleep better at night.
Later . . . Jim
Wednesday, July 9, 2008