Saturday, March 18, 2006

Musings on the ebb and flow of packets....

An interesting perspective from XXXX... notwithstanding the fact that he hasn't played with these types of devices, it's a good high-level theoretical approach, with some definite weaknesses pointed out.

I guess what interests me, and has done for some time on large national and international networks, is that there is not very much contextualised information around what *is* normal and what is anomalous (sometimes no idea at all!). Being able to tie many streams of information together in real time, such as netflow, signature-based IDS, anomaly detection and CIDR awareness, helps a great deal. The complexity of today's networks (hosts, apps and user behaviour too!) is such that the same old issues of tuning, false positives and meaningful data are tough; however, depending upon the degree of network segmentation, how homogeneous the platforms are, and the type of business you are in, these can be greatly reduced and the tools made more effective. I recently spent some time with a friend from Cisco over a beer or two, watching the 80 Mb DDoS flows to cisco.com and how their Peakflow deployment reported upon them... some quite interestingly defined thresholds for some other attacks also. They also have a 'clean-pipes' solution... anything to help you stay afloat when millions go through your website each day. Admittedly you always need bigger pipes than an attacker, but they don't always have to be yours. What really adds value, though, is in the core...
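Tying flows back to a baseline is the crux of the "what *is* normal?" question. As a minimal sketch (not how Peakflow actually works, and with entirely invented records, baseline and threshold values), flagging volume anomalies per destination from flow data might look like:

```python
from collections import defaultdict

# Toy flow records: (src_ip, dst_ip, bytes) -- hypothetical sample data.
flows = [
    ("10.0.0.1", "192.0.2.10", 1_500_000),
    ("10.0.0.2", "192.0.2.10", 90_000_000),
    ("10.0.0.3", "192.0.2.11", 40_000),
]

BASELINE_BYTES = 5_000_000   # assumed per-interval baseline for a host
THRESHOLD_MULT = 3           # assumed: alert beyond 3x baseline

def volume_anomalies(flows, baseline=BASELINE_BYTES, mult=THRESHOLD_MULT):
    """Flag destinations whose aggregate inbound bytes exceed mult * baseline."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[dst] += nbytes
    return {dst: total for dst, total in totals.items() if total > mult * baseline}

print(volume_anomalies(flows))  # -> {'192.0.2.10': 91500000}
```

The real work, of course, is in deriving the baseline per host and per time-of-day rather than hard-coding it; this only shows the shape of the correlation.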

The Peakflow SP solution is also interesting and can really help, more so to characterise traffic and billing (peering ratios etc.), and it is somewhat BGP-aware. It becomes very interesting for Tier 1 and Tier 2 transit carriers in actually knowing what is going on as they try to traffic-engineer around some very interesting issues and bill accordingly. Peakflow allows for contextualised information: it can see both the flows and certain defined BGP attributes, amongst other things, and can help to address the issues of time and error in traceback, facilitating targeted sinkholes at the edge... understanding the data and control planes of networks is crucial.
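The value of being "BGP-aware" is that a flow towards a victim can be tagged with its covering routed prefix and next-hop, which is exactly what you need for a targeted sinkhole. A hedged sketch of that longest-prefix lookup, with a toy table and made-up next-hops and AS numbers:

```python
import ipaddress

# Toy BGP-style table: prefix -> (next_hop, origin_as). All values invented.
bgp_table = {
    ipaddress.ip_network("192.0.2.0/24"): ("203.0.113.1", 64500),
    ipaddress.ip_network("192.0.2.128/25"): ("203.0.113.2", 64501),
}

def longest_match(ip, table):
    """Return the most specific covering prefix and its attributes, or None."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in table if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda n: n.prefixlen)
    return best, table[best]

# Tag a flow destination with its routed prefix and next-hop: the /25
# wins over the /24 because it is more specific.
print(longest_match("192.0.2.200", bgp_table))
```

A production RIB lookup would use a radix trie rather than a linear scan, but the contextualisation step, flow plus routing attributes, is the same.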

One of the biggest problems we face right now on enterprise networks is the transparency, visibility and auditability challenge. Unfortunately, the network *is* more aware and is being expected to influence traffic in many new ways, not just to route packets: QoS, triggered changes in routing, complex traffic engineering, and more 'application' awareness such as Cisco's NBAR to influence flows...

Today, one has to be across every transiting technology and node, including the topology, security and criticality of each conceptual and physical business area. Tools to help alleviate this are good(tm)... using extrusion detection and flow monitoring helps to identify infected hosts, worm sign, and internal reconnaissance and probing, and contributes to the quality and usefulness of empirical data when dealing with incidents. I am not discounting inherent host and application issues, but at the end of the day the hosts, applications and network all play a role and impose themselves upon each other with unique characteristics and behavioural signatures... Perhaps some day IPv6 and IPsec will allow for close to 100% encryption, but right now we have limited edge use of IPsec, and payload encryption is not so much an issue... data has to be mobile to be useful or destructive, and once it moves, it leaves traces... and hopefully one still owns the control channel of one's network. (Hopefully!)
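One classic piece of worm sign that flow monitoring surfaces is fan-out: an internal host suddenly talking to far more distinct peers on one port than any legitimate client would. A minimal, hypothetical sketch (records, addresses and the fan-out limit are all invented):

```python
from collections import defaultdict

# Toy flow records: (src_ip, dst_ip, dst_port) -- hypothetical sample data.
# One host sweeping 59 neighbours on TCP/445 amid normal web traffic.
flows = [("10.1.1.5", f"10.2.0.{i}", 445) for i in range(1, 60)]
flows += [("10.1.1.9", "10.2.0.1", 80), ("10.1.1.9", "10.2.0.2", 80)]

FANOUT_LIMIT = 50  # assumed: >50 distinct peers on one port per interval

def worm_sign(flows, limit=FANOUT_LIMIT):
    """Flag (src, port) pairs contacting an abnormal number of distinct hosts."""
    peers = defaultdict(set)
    for src, dst, port in flows:
        peers[(src, port)].add(dst)
    return [(src, port, len(dsts)) for (src, port), dsts in peers.items()
            if len(dsts) > limit]

print(worm_sign(flows))  # -> [('10.1.1.5', 445, 59)]
```

The same fan-out counting catches internal reconnaissance and probing, which is why it feeds so usefully into incident data.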

Boxes that can do good capacity management, flows, basic/light flow-based NIDS and are somewhat 'network-aware' (routing protocol attributes), such that they can somewhat contextualise the data and control planes of a network, are in my opinion still immature, but better than anything else out there right now! I am waiting to combine such a tool with an onboard routing daemon that can interrogate the enterprise routing table (sloppy summarisation notwithstanding!) and see what address space is currently 'dark' within private ranges, to provide an 'aware' darknet that ebbs and flows as address space usage does.
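The 'aware darknet' idea reduces to set subtraction over prefixes: take a private range, remove whatever the routing table currently carries, and what is left is dark. A sketch under stated assumptions (the routed prefixes are invented, and real tables would need the sloppy summaries cleaned up first):

```python
import ipaddress

# Assumed: prefixes currently present in the enterprise routing table.
routed = [
    ipaddress.ip_network("10.0.0.0/16"),
    ipaddress.ip_network("10.5.0.0/16"),
]

def dark_space(container, routed):
    """Subtract routed prefixes from a container range, leaving 'dark' space."""
    dark = [container]
    for r in routed:
        nxt = []
        for block in dark:
            if r.subnet_of(block):
                # Carve the routed prefix out of this block.
                nxt.extend(block.address_exclude(r))
            elif block.subnet_of(r):
                pass  # block entirely routed: drops out
            else:
                nxt.append(block)  # disjoint: keep as-is
        dark = nxt
    return sorted(ipaddress.collapse_addresses(dark))

# Everything in 10/8 except the two routed /16s is currently dark.
for net in dark_space(ipaddress.ip_network("10.0.0.0/8"), routed):
    print(net)
```

Re-running this as the routing daemon sees withdrawals and announcements gives exactly the darknet that ebbs and flows with address usage; any flow towards a dark prefix is then suspect by definition.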

I guess my point is that the convergence of tools and methodologies to gain insight and awareness into your network is better than not having such tools, or having them distributed amongst groups that are unaware or unable to 'share' or contextualise data. In responding to some major worm outbreaks, unintentional internal DoS, traffic engineering / billing issues etc. on some pretty large networks, I would have given the left part of some of my anatomy for such tools and visibility... There is no silver bullet, I'll give you that, but in this arms race the 'nuclear' holocaust will do no one any favours; it only provides leverage... which points back to more subtle cooperative, political and legal ways to address such global threats... terrorism is alive and well, and packets make good suicide bombers!

Donal

2 comments:

George said...

Hope you come up with something to police the packets, kill the roots and contextualise the contenders.

Keep the gray cells churning.

Anonymous said...

Gawd! I read it three times, but I think I'll come back in the morning and try again.
And I thought I understood a wee bit. Sheesh ...