Saturday, July 30, 2005

I wish.....

The network _is_ the computer....

I have really good spatial comprehension and am mostly a visual person. This is how I think. Career-wise, I am currently an Information Security practitioner and Network Engineer. I like building, fixing and securing / protecting things. ( Substitute paternal instinct as I have no kids? )

Anyway, I digress.... as you move from job to job you tend to build up your stash of happy tools, resources, methods etc etc... _however_ when arriving in a large organisation it is very hard to get a handle on what's going on and build a map in your head of the network and the nodes that contain the information you are supposed to be securing / defending / protecting..... ( especially if that company's documentation is bad, non-existent, or they have never used any visualisation tools or mapped / diagrammed anything! ) Also, sometimes the company can be in a high-growth phase, where things change daily or weekly - and we all know that devices are not always built, deployed, alarmed or documented properly..

For quite some time I have been formulating an idea on how to get a handle on this.. it also applies to the actual NOC / SOC [ Security Operations Center ] guys too and how they view their operating world.... these days we need to know what's going on second by second, not day by day or week by week.. internet time is just too fast, and so are the releases of worms following proof-of-concept code, 0-day exploits, or reverse-engineered vendor patches.

Complexity is also the enemy - however that beast is getting larger, not smaller ( as node numbers, services and the depth of code / processes on hosts increase.. ), which I believe leads to the true gap right now: the ability [or inability] of us mere mortals to ingest, comprehend, correlate and appreciate changes / incidents and outages _properly_, including the ability to take decisive actions to mitigate, fix or even just improve the situation. Inherent in this model is the ultimate accountability or responsibility for the decisions made in mitigating or remediating said issues. This is where supposed 'silver bullets' like intelligent IPSs, intelligent networks and sandboxing policies will invariably fail. Too many overheads. Configuration needs to be done before the fact, and this administration can be forgotten / overlooked or just ignored. We still need to create the rules, tune the IDS and define the actions for them to take, and even then no one I know in the industry will let a system issue, of its own accord, an ACL [ Access Control List ] change, TCP reset or blackhole / sinkhole routing to /dev/null, Null0 or a 'scrubber' of sorts. They are too worried about customers and mission-critical platforms, and rightly so? A.I. is still rule-based / heuristic and often incomplete, as humans still need to re-write or tweak the frameworks and sample spaces to achieve the desired results. Neural networks still rely on 'us' humans for their playing fields.

I don't believe machines will ever be able to do real-time business risk modelling by drawing the correct inferences at the right times; this is still a skill humans are better at. When associating patterns, schedules and dependencies from the information we are presented with, what's fundamental is the type, quality, amount and correctness of the data presented to the human operator. Most humans are visual creatures, even the blind, who build connections and patterns in their minds....

Aside: ( one of the best Cisco Routing and Switching CCIEs in the TAC [ Technical Assistance Centre ] they had in Brussels, Belgium was actually blind, and supported large, complex enterprises remotely on the phone! )

For now though, let's think about having the right information, easily represented and at the right time. Take a peek at the OODA loop ( in the previous post below ) and the concept of a CERT or CSIRT, if you are not familiar with them. ( I am bundling the NOC / SOC and the concept of a CERT into the same teams / functions here... )

The pitch: A near-realtime 3D network map, separating out a rough OSI / ISO 7-layer model into 2D connected visualisation planes that can be manipulated in real-time, possibly with a touch screen. ( Alternatively, and probably more pragmatic, would be the 5-layer TCP/IP Sun/DoD model. ) Other features would include nodes giving off visual alarms when there are issues and when thresholds are reached. Screens could be split to render multiple parts of the network simultaneously. Employees / clients could access standard templates / defined sub-maps remotely. These clients may be run on normal users' or operators' desktops, with the realtime rendering done on the client. Clients may have different roles as they relate to the network and get separate streams overlaid on their maps. ( Traps, Anti-Virus, IDS, Flows with filters, syslog alerts.... )
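To make the plane idea a bit more concrete, here is a minimal sketch ( in Python, purely illustrative - the node names, planes, metrics and thresholds are all my own assumptions ) of how nodes could be assigned to TCP/IP planes and carry the thresholds the renderer would turn into visual alarms:

TCPIP_PLANES = ["physical", "link", "network", "transport", "application"]

class Node:
    def __init__(self, name, plane, cpu_limit=80.0, loss_limit=1.0):
        assert plane in TCPIP_PLANES
        self.name = name
        self.plane = plane            # which 2D plane the node is drawn on
        self.cpu_limit = cpu_limit    # % CPU before the node "lights up"
        self.loss_limit = loss_limit  # % packet loss before alarm
        self.metrics = {"cpu": 0.0, "loss": 0.0}

    def alarms(self):
        """Return the thresholds currently breached ( hypothetical metrics )."""
        breached = []
        if self.metrics["cpu"] > self.cpu_limit:
            breached.append("cpu")
        if self.metrics["loss"] > self.loss_limit:
            breached.append("loss")
        return breached

# a renderer would stack the planes in 3D and flash any node whose
# alarms() list is non-empty; edges can cross planes ( e.g. router -> app )
nodes = {n.name: n for n in (Node("core-rtr-1", "network"),
                             Node("web-01", "application"))}
edges = [("core-rtr-1", "web-01")]

nodes["web-01"].metrics["cpu"] = 93.0
for node in nodes.values():
    if node.alarms():
        print("ALARM", node.name, "on plane", node.plane, ":", node.alarms())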

DBAs see overlaid maps of JDBC, ODBC, SQLNet etc
Network Operators see ICMP, SNMP, SSH, SCP, TFTP, RCP, RSH, Telnet, Syslog etc..
Security can see everything but pick known 'bad' ports or recent outbreaks that use certain ports?
Content guys can see their product moving around...
Web guys can see their piece of the pie etc etc etc ( a rough sketch of such role filters follows below )
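A rough sketch of how those role overlays could be filtered - the port sets and role names here are just assumptions for illustration, nothing authoritative:

# Illustrative port sets per role - nothing authoritative here
ROLE_FILTERS = {
    "dba":    {1433, 1521, 1526, 3306, 5432},   # SQL Server, SQL*Net, MySQL, Postgres
    "netops": {22, 23, 69, 161, 162, 514},      # SSH, Telnet, TFTP, SNMP, syslog
    "web":    {80, 443, 8080},
}
SECURITY_WATCHLIST = {135, 139, 445, 1434, 4444}  # "known bad" / outbreak ports

def visible_to(role, flow):
    """Should this flow record be drawn on the given role's overlay?"""
    if role == "security":
        return True                               # security sees everything
    return flow["dst_port"] in ROLE_FILTERS.get(role, set())

def highlighted(flow):
    """Flows the security view would light up as suspicious."""
    return flow["dst_port"] in SECURITY_WATCHLIST

flow = {"src": "10.0.0.5", "dst": "10.0.1.9", "dst_port": 1521}
print(visible_to("dba", flow))      # True  - shows on the DBA overlay
print(visible_to("netops", flow))   # False - filtered from the NetOps view
print(highlighted(flow))            # False - not on the watch-list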

Note: Suddenly at any point in time, all your observers become your distributed operations and network monitors!!! An Open Source model to keep the _network_ smooth and efficient...

Client / server architecture, similar in a sense to an MMORPG's methods of passing state and object information in a highly compressed format, whereby the rendering engine primarily uses client-side resources. Included may be the concept of Multicast or Peer-to-Peer to distribute information, reducing bandwidth consumption. As with the gaming model, administrators may change information in realtime or influence the network, also in realtime. Operators could push; ordinary clients could only pull. As this mapping would be graph-based, holding state information and inter-node relationship information ( think link-state / hybrid routing protocols ), each client would have a world view but _build_ his or her own "routing table" or view of the world as a normal router would ( including endpoints too though..! ) and then receive _state_ changes, which, in the message-passing syntax, would be anything from a threshold alert to a node state change, to a change in the graphical representation of a node in relation to some pre-defined event etc...
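A toy example of that state-delta idea - the message fields and JSON format are my own assumption, not any particular protocol; the point is only that the server ships small typed deltas and each client folds them into its own locally built world view:

import json, time

def make_delta(node, field, value, kind="state_change"):
    """Server side: build a compact, typed state-delta message."""
    return json.dumps({"ts": time.time(), "kind": kind,
                       "node": node, "field": field, "value": value})

class ClientView:
    """Client side: a locally built world view, updated only by deltas."""
    def __init__(self):
        self.world = {}                 # node name -> last known attributes

    def apply(self, raw):
        msg = json.loads(raw)
        self.world.setdefault(msg["node"], {})[msg["field"]] = msg["value"]
        if msg["kind"] == "threshold_alert":
            print("ALERT", msg["node"], msg["field"], "=", msg["value"])

view = ClientView()
view.apply(make_delta("core-rtr-1", "status", "up"))
view.apply(make_delta("web-01", "cpu", 95, kind="threshold_alert"))
print(view.world)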

So to 're-cap', the 'network game server', as we'll call it, handles most of the topology information, message scrubbing and overall admin rights. ( Think of it as a shiny front-end MOM / NMS / Event Correlation engine that understands flows... ) Clients, be they desktop users, network administrators, remote NOC teleworkers or customers who wish to see how their relevant part of the network or hosts are performing from a network perspective, all get to see what's going on, when and where, and _hopefully_ in a distributed environment _we_ can get to the ever more elusive why in a reduced amount of time?

Transparency drives growth, change and improvement.

As information and events are all realtime and streamed in somewhat of a pipeline ( including flows ), it should be possible ( with accurate network-wide NTP ) to perform limited tracebacks of incidents, albeit the event must be recognised or pre-defined in some form. This is where baselining and normalisation are extremely important. SourceFire seem to be doing pretty well in this regard with RNA...
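A toy baselining example - the rolling window and 3-sigma rule here are assumptions, just to show the shape of it; the NTP-synced timestamps are what would make the later traceback possible:

import statistics, time
from collections import deque

class Baseline:
    """Rolling window of samples; flags values that stray from the norm."""
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)

    def check(self, value, ts=None):
        ts = ts or time.time()          # accurate, NTP-synced time matters here
        anomalous = False
        if len(self.samples) >= 10:     # need some history before judging
            mean = statistics.mean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(value - mean) > 3 * stdev
        self.samples.append(value)
        return ts, anomalous

flows_per_sec = Baseline()
for v in [100, 102, 98, 101, 99, 100, 103, 97, 101, 100, 950]:
    ts, bad = flows_per_sec.check(v)
    if bad:
        print("%.3f: flow rate %d deviates from baseline" % (ts, v))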

Sounds futuristic? Maybe it's out there already?

Perhaps, but most of my previous posts, in theory, contain close to the correct tools to do this ( well, nearly anyway ).... the closest I have seen in operation thus far is a good independent 2D map built by QualysGuard's Vulnerability Assessment tool, and OpNet's SPGuru ( perhaps their new 3DNV product? ) that feeds itself from existing NMSs and MOMs like CiscoWorks Information Centre, HP Openview etc.. Roughly, the ingredients would be:

a) get all related SNMP read ( community ) strings for routers, switches and firewalls ( if you do so... )
b) ensure your platform has full ACL rights for the above
c) ensure your platform has full port connectivity through firewalls etc to achieve connectivity... ICMP/TCP/UDP
d) allow your platform to fingerprint hosts and nodes and make it an iterative behaviour...
e) allow your object-oriented mapping engine to attribute status to graph leaves in real-time as it's rendered
f) have a concept of trending / difference
g) allow your platform to parse routing tables and understand topology ( Hmmmm, stateful or stateless mapping.. guess it needs to build a consistent view rather than rebuild each time, to reduce overheads... as with gaming, build the world.. then interpret changes? - see the discovery sketch after this list )
h) perhaps overlay NetFlow (tm) information for close to real-time ( 5min +- ) traffic overlays.. top talkers etc. ( NetFlow (tm) is not realtime but exported in time intervals to collectors where it can be aggregated.. )
i) perhaps use this engine to allow you to do a form of touchscreen IPS ( Intrusion Prevention System ) on your whole network, thus the final realtime responsibility lies with the Network Operators?
j) X3D http://www.web3d.org/ as a framework instead of the supposedly outdated VRML ?
k) you would possibly need a fast rendering game engine to achieve basic visualisation depending on network size and complexity if not using X3D / VRML.
l) could feed and help with Capacity Management ? RMON + real-time fault-tracking ( ICMP sweeps / SNMP traps )?
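Here is a very rough sketch of the iterative discovery loop behind (d) to (g) - probe_node() is a made-up stand-in for the real SNMP gets, fingerprinting and routing-table parsing, and the addresses are canned for illustration:

def probe_node(address):
    """Hypothetical probe: in reality this would do the SNMP gets,
    fingerprinting and routing-table parsing from (a), (d) and (g).
    Here it just returns canned neighbour lists for illustration."""
    canned = {"10.0.0.1": ["10.0.0.2", "10.0.1.1"],
              "10.0.0.2": ["10.0.0.1"],
              "10.0.1.1": ["10.0.0.1", "10.0.1.2"]}
    return canned.get(address, [])

def discover(seeds, graph=None):
    """Iterative, stateful discovery: pass the previous graph back in and
    only newly seen nodes get probed - build the world once, then interpret
    changes, rather than rebuilding every run."""
    graph = graph if graph is not None else {}
    queue = [s for s in seeds if s not in graph]
    while queue:
        node = queue.pop()
        if node in graph:               # already probed this run
            continue
        neighbours = probe_node(node)
        graph[node] = set(neighbours)
        queue.extend(n for n in neighbours if n not in graph)
    return graph

topology = discover(["10.0.0.1"])
print(topology)   # adjacency map the mapping engine could attribute status to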

Just a thought, but it's kinda where I see the defensive perimeter paradigm being turned inside out as it relates to Information Security, with the keywords _realtime_ _complexity_ _perimeter_ _defense_ _ips_... imagine also if the host OS or NOS could tag confidential enterprise information and insert this boolean tag somewhere in the IP header ( DSCP / TOS -> QoS -> Public || Confidential ) and then NetFlow also had a field that could see and report on this... you could then see when the information was walking out the network door? This is hugely simplified from the host, file and application context, I know.. but it's a thought, as it would need to be a standard and built into document formats. Users could then turn it off perhaps... maybe it could be enforced at a policy level, but most host-based agents don't run on all platforms or wouldn't be supported etc.. alas, engineers will always want to run their own OS.... or have root privileges anyway.
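As a tiny proof-of-concept of the tagging idea ( platform-dependent, and the DSCP value chosen here is an arbitrary stand-in for "confidential", not any standard meaning ):

import socket

# AF41 ( decimal 34 ) used here purely as an example codepoint for
# "confidential"; it has no such standard meaning. IP_TOS is also
# platform-dependent ( works on Linux / most Unix ).
CONFIDENTIAL_DSCP = 34

def tagged_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # DSCP lives in the upper six bits of the old ToS byte, hence the shift
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CONFIDENTIAL_DSCP << 2)
    return s

# any flow collector that exports the ToS byte could then report whenever
# this codepoint is seen crossing the network edge
sock = tagged_socket()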

This of course does not take into account making copies onto removable media.. that's another issue... but it would be a start... probably impinging on DRM [ Digital Rights Management ], but not *really*, as it's targeted for a corporate environment only.. and it would be a label / watermark.. not an endpoint restriction.. ( though it could be, I am mainly referring to the network gateways / edge though... ! ) but it lends itself to being auditable and to the concept of the "Shrinking Perimeter" being popularised by Dan Geer http://www.verdasys.com/site/content/whitepapers.html

Most of the time companies drop messages based on keyword searches for the term "company confidential", or take a copy of encrypted emails for future use. This does not address DNS, HTTP, FTP etc... FTP access is not always granted, but HTTP(S) is, either through proxies or direct. Maybe we should give up and not try to control the data leaving the network.. just audit it and focus on employee visibility and compliance? At what point does the complexity, entropy and technology allowing access to information really become manageable, controllable and auditable by humans anyway?
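If we did "just audit it", the simplest form might look something like this ( the keyword list and record format are assumptions for illustration only ):

import re, time

KEYWORDS = re.compile(r"company\s+confidential|internal\s+use\s+only", re.I)

def audit_outbound(user, destination, body):
    """Return an audit record if the body matches, otherwise None -
    log and move on rather than blocking."""
    match = KEYWORDS.search(body)
    if match:
        return {"ts": time.time(), "user": user, "dest": destination,
                "matched": match.group(0)}
    return None

print(audit_outbound("jsmith", "mail.example.com",
                     "Please find the Company Confidential roadmap attached."))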

1 comment:

Anonymous said...

I take your general point - ignore security at your peril; act well in advance; the cost of retrieval is horrendous compared with the cost of prevention.

When will the ba****ds learn? When their company has gone down the tubes? And, yes! Responsibility must be defined or the system won't work.