Great aggregated blog @ Planet Security http://www.dayioglu.net/planet/ for all things Information Security.
And another @ InfosecDaily http://infosecdaily.net/securitynews/ .... and.. TaoSecurity http://taosecurity.blogspot.com/
Here's a fun WormBlog @ http://www.wormblog.com/ and here's a similar one from F-Secure http://www.f-secure.com/weblog/
Microsoft Response Center Blog http://blogs.technet.com/msrc/
Microsoft Security Wiki http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.HomePage
All I need to do now is get my new team membership in FIRST.... I miss my FIRST list with my coffee in the mornings!
Sunday, July 31, 2005
Saturday, July 30, 2005
I wish.....
The network _is_ the computer....
I have really good spatial comprehension, and am mostly a visual person. This is how I think. Career-wise I am currently an Information Security practitioner and Network Engineer. I like building, fixing and securing / protecting things. ( Substitute paternal instinct as I have no kids? )
Anyway, I digress.... as you move from job to job you build up a stash of happy tools, resources and methods. _However_, arriving in a large organisation it is very hard to get a handle on what's going on and build a mental map of the network and the nodes holding the information you are supposed to be securing / defending / protecting ( especially if the company's documentation is bad or non-existent, or they have never used any visualisation tools or mapped / diagrammed anything! ). Sometimes the company is also in a high-growth phase where things change daily or weekly - and we all know that devices are not always built, deployed, alarmed or documented properly..
For quite some time I have been formulating an idea on how to get a handle on this.. it also applies to the NOC / SOC [ Network / Security Operations Centre ] guys and how they view their operating world.... these days we need to know what's going on second by second, not day by day or week by week.. internet time is just too fast, and so are the worms that follow proof-of-concept code, 0-day exploits or reverse-engineered vendor patches.
Complexity is also the enemy - and that beast is getting larger, not smaller ( as node counts, services and the depth of code / processes on hosts increase.. ). I believe this is the true gap right now: the ability [or inability] of us mere mortals to ingest, comprehend, correlate and appreciate changes / incidents and outages _properly_, and then take decisive action to mitigate, fix or even just improve the situation. Inherent in this model is the ultimate accountability and responsibility for the decisions made in mitigating or remediating those issues. This is where supposed 'silver bullets' like intelligent IPSs, intelligent networks and sandboxing policies will invariably fail. Too much overhead. Configuration has to be done before the fact, and that administration gets forgotten, overlooked or just ignored. We still need to create the rules, tune the IDS and define the actions to take - and even then, no one I know in the industry will let a system, of its own accord, issue an ACL [ Access Control List ] change, a TCP reset or blackhole / sinkhole routing to /dev/null, Null0 or a 'scrubber' of sorts. They are too worried about customers and mission-critical platforms, and rightly so? A.I. is still rule-based / heuristic and often incomplete, as humans still need to re-write or tweak the frameworks and sample spaces to get the desired results. Neural networks still rely on 'us' humans for their playing fields.
I don't believe machines will ever do real-time business risk modelling by drawing the correct inferences at the right times; that is still a skill humans are better at. When associating patterns, schedules and dependencies from the information we are presented with, what's fundamental is the type, quality, amount and correctness of the data presented to the human operator. Most humans are visual creatures - even the blind, who build connections and patterns in their minds....
Aside: ( one of the best Cisco Routing and Switching CCIEs in the TAC [ Technical Assistance Centre ] in Brussels, Belgium was actually blind, and supported large complex enterprises remotely on the phone! )
For now though, let's think about having the right information, easily represented, at the right time. Take a peek at the OODA loop ( in the previous post below ) and the concept of a CERT or CSIRT if you are not familiar with them. ( I am bundling the NOC / SOC and the concept of a CERT into the same teams / functions here... )
The pitch: a near-realtime 3D network map, separating a rough OSI / ISO 7-layer model into connected 2D visualisation planes that can be manipulated in real time, possibly with a touch screen. ( Alternatively, and probably more pragmatic, would be the 5-layer TCP/IP Sun/DoD model. ) Nodes would give off visual alarms when there are issues and when thresholds are reached. Screens could be split to render multiple parts of the network simultaneously. Employees / clients could access standard templates / defined sub-maps remotely. These clients may run on normal users' or operators' desktops, with the realtime rendering done on the client. Clients may have different roles in relation to the network and get separate streams overlaid on their maps ( traps, anti-virus, IDS, flows with filters, syslog alerts.... ) - a rough sketch of this role-based filtering follows below the list:
DBAs see overlaid maps of JDBC, ODBC, SQLNet etc
Network Operators see ICMP, SNMP, SSH, SCP, TFTP, RCP, RSH, Telnet, Syslog etc..
Security can see everything, but pick out known 'bad' ports or recent outbreaks that use certain ports?
Content guys can see their product moving around...
Web guys can see their piece of the pie etc etc etc
Note: Suddenly at any point in time, all your observers become your distributed operations and network monitors!!! An Open Source model to keep the _network_ smooth and efficient...
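To make the role-based overlays a little more concrete, here is a minimal Python sketch of the fan-out idea: every observer role subscribes to a filter, and one stream of flow / alert events is split into per-role overlays for the map. All of the names here ( Role, Event, the port lists ) are invented for illustration - this is not any real product's API.

from dataclasses import dataclass, field

@dataclass
class Event:
    src: str            # source node, e.g. "10.1.2.3"
    dst: str            # destination node
    dport: int          # destination port
    kind: str = "flow"  # "flow", "trap", "ids-alert", "syslog", ...

@dataclass
class Role:
    name: str
    ports: set = field(default_factory=set)   # ports this role cares about
    kinds: set = field(default_factory=set)   # event types this role cares about

    def wants(self, e: Event) -> bool:
        return ((not self.ports or e.dport in self.ports)
                and (not self.kinds or e.kind in self.kinds))

ROLES = [
    Role("dba",      ports={1433, 1521, 3306, 5432}),         # SQL traffic
    Role("netops",   ports={22, 23, 69, 161, 162, 514}),      # mgmt protocols
    Role("security", kinds={"ids-alert", "trap", "syslog"}),  # everything alarming
]

def fan_out(events):
    """Return {role name: [events]} - each observer's overlay for the map."""
    overlays = {r.name: [] for r in ROLES}
    for e in events:
        for r in ROLES:
            if r.wants(e):
                overlays[r.name].append(e)
    return overlays

if __name__ == "__main__":
    sample = [Event("10.1.1.5", "10.2.0.9", 1521),
              Event("10.1.1.7", "10.9.9.9", 161, "trap")]
    for role, evts in fan_out(sample).items():
        print(role, [(e.src, e.dst, e.dport, e.kind) for e in evts])

In a real deployment the filters would come from policy rather than hard-coded port lists, and each overlay would be pushed to the rendering client instead of printed.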
Client / server architecture, similar in a sense to an MMORPG's method of passing state and object information in a highly compressed format, where the rendering engine primarily uses client-side resources. Multicast or peer-to-peer could be included to distribute information and reduce bandwidth consumption. As with the gaming model, administrators may change information in realtime or influence the network in realtime; operators could push, while ordinary clients could only pull. Because the mapping would be graph-based, holding state and inter-node relationship information ( think link-state / hybrid routing protocols ), each client would have a world view but _build_ his or her own "routing table" or view of the world, as a normal router would ( including endpoints too..! ), and then receive _state_ changes - which, in the message-passing syntax, could be anything from a threshold alert to a node state change to a change in the graphical representation of a node in relation to some pre-defined event. A rough sketch of such a delta message follows:
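This is only a sketch in the spirit of the MMORPG analogy, with an invented JSON-ish message format: the client holds the world graph locally and applies small deltas, rather than re-downloading the topology.

import json

world = {
    "core-rtr-1": {"state": "up", "links": ["dist-sw-1"], "alerts": []},
    "dist-sw-1":  {"state": "up", "links": ["core-rtr-1"], "alerts": []},
}

def apply_delta(world, message: str):
    """Apply one compressed state-change message to the local world view."""
    delta = json.loads(message)
    node = world.setdefault(delta["node"],
                            {"state": "unknown", "links": [], "alerts": []})
    if delta["type"] == "state":         # e.g. up -> degraded
        node["state"] = delta["value"]
    elif delta["type"] == "threshold":   # e.g. link utilisation alarm
        node["alerts"].append(delta["value"])
    elif delta["type"] == "link":        # topology change
        node["links"].append(delta["value"])

# The server pushes small deltas; the client only ever renders its own view:
apply_delta(world, '{"node": "dist-sw-1", "type": "threshold", "value": "util>90%"}')
apply_delta(world, '{"node": "dist-sw-1", "type": "state", "value": "degraded"}')
print(world["dist-sw-1"])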
So to recap: the 'network game server', as we'll call it, handles most of the topology information, message scrubbing and overall admin rights. ( Think of it as a shiny front-end MOM / NMS / event-correlation engine that understands flows... ) Clients - be they desktop users, network administrators, remote NOC teleworkers or customers who wish to see how their relevant part of the network or hosts are performing - all get to see what's going on, when and where, and _hopefully_ in a distributed environment _we_ can get to the ever more elusive why in less time?
Transparency drives growth, change and improvement.
As information and events are all realtime and streamed in something of a pipeline ( including flows ), it should be possible - with accurate network-wide NTP - to perform limited tracebacks of incidents, albeit the event must be recognised or pre-defined in some form. This is where baselining and normalisation are extremely important. Sourcefire seem to be doing pretty well in this regard with RNA...
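A toy illustration of the traceback idea, assuming all feeds are already normalised to NTP-synced timestamps ( the events and feed names are made up ): merge everything on time and pull out the window leading up to a recognised incident.

from datetime import datetime, timedelta

events = [
    ("2005-07-30 10:14:55", "netflow", "10.1.1.5 -> 10.2.0.9:445 traffic spike"),
    ("2005-07-30 10:15:02", "ids",     "SMB overflow signature against 10.2.0.9"),
    ("2005-07-30 10:15:10", "syslog",  "10.2.0.9 service restart"),
]

def traceback(events, incident_time: str, window_minutes: int = 5):
    """Return the events in the window leading up to the incident, oldest first."""
    fmt = "%Y-%m-%d %H:%M:%S"
    t_incident = datetime.strptime(incident_time, fmt)
    window = timedelta(minutes=window_minutes)
    hits = []
    for t, source, msg in events:
        ts = datetime.strptime(t, fmt)
        if timedelta(0) <= t_incident - ts <= window:
            hits.append((ts, source, msg))
    return sorted(hits)

for line in traceback(events, "2005-07-30 10:15:10"):
    print(line)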
Sounds futuristic? Maybe it's out there already?
Perhaps, but most of my previous posts, in theory, contain close to the right tools to do this ( well, nearly ).... the closest I have seen in operation thus far is a good independent 2D map built by QualysGuard's Vulnerability Assessment tool, and OPNET's SP Guru ( perhaps their new 3DNV product? ) which feeds itself from existing NMSs and MOMs like CiscoWorks Information Centre, HP OpenView etc.. Roughly, you would:
a) get all the relevant SNMP read strings for routers, switches and firewalls ( if you use them... )
b) ensure your platform has full ACL rights for the above
c) ensure your platform has full port connectivity through firewalls etc. to achieve connectivity... ICMP/TCP/UDP
d) allow your platform to fingerprint hosts and nodes, and make it an iterative behaviour...
e) allow your object-oriented mapping engine to attribute status to graph leaves in real time as it renders
f) have a concept of trending / difference ( see the collector sketch after this list )
g) allow your platform to parse routing tables and understand topology ( Hmmmm, stateful or stateless mapping.. I guess it needs to maintain a consistent view rather than rebuild each time, to reduce overheads... as with gaming: build the world, then interpret changes? )
h) perhaps overlay NetFlow (tm) information for close-to-realtime ( ±5 min ) traffic overlays.. top talkers etc. ( NetFlow (tm) is not realtime, but is exported at intervals to collectors where it can be aggregated )
i) perhaps use this engine to allow a form of touchscreen IPS ( Intrusion Prevention System ) across your whole network, so the final realtime responsibility lies with the Network Operators?
j) X3D http://www.web3d.org/ as a framework instead of the supposedly outdated VRML?
k) you would possibly need a fast-rendering game engine to achieve basic visualisation, depending on network size and complexity, if not using X3D / VRML
l) it could feed and help with Capacity Management? RMON + real-time fault-tracking ( ICMP sweeps / SNMP traps )?
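Here is a bare-bones sketch of the collector side of items c) to f), using plain TCP reachability probes as a stand-in for the real SNMP / fingerprinting work - the node names and addresses are placeholders:

import socket

NODES = {                      # node -> (mgmt address, port to probe)
    "core-rtr-1": ("192.0.2.1", 22),
    "dist-sw-1":  ("192.0.2.2", 22),
}

def probe(addr: str, port: int, timeout: float = 1.0) -> str:
    """Return 'up' if a TCP connect succeeds, else 'down'."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return "up"
    except OSError:
        return "down"

def sweep(nodes):
    """One pass over all nodes, attributing a status to each graph leaf."""
    return {name: probe(addr, port) for name, (addr, port) in nodes.items()}

def diff(previous, current):
    """Trending / difference: only the state changes since the last sweep."""
    return {n: (previous.get(n), s) for n, s in current.items() if previous.get(n) != s}

previous = {}                  # would be loaded from the last run
current = sweep(NODES)
print("changes since last sweep:", diff(previous, current))

The diff is the important bit: as with the gaming model, you build the world once and then only interpret changes.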
Just a thought, but this is kind of where I see the defensive perimeter paradigm being turned inside out as it relates to Information Security, with the keywords _realtime_ _complexity_ _perimeter_ _defense_ _ips_... Imagine also if the host OS or NOS could tag confidential enterprise information and insert that boolean tag somewhere in the IP header ( DSCP / TOS -> QoS -> Public || Confidential ), and NetFlow had a field that could see and report on it... you could then see when information was walking out the network door? This is hugely simplified from the host, file and application context, I know.. and it would need to be a standard, built into document formats. Users could perhaps turn it off... maybe it could be enforced at a policy level, but host-based agents don't run on ( or aren't supported on ) all platforms, and alas engineers will always want to run their own OS.... or have root privileges anyway.
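To show how little machinery the tagging idea would need on the host side, here is a hedged, minimal illustration: set the DSCP / ToS byte on an outbound socket so a flow collector could, in principle, count 'confidential' traffic leaving the edge. The code point chosen is completely made up, socket.IP_TOS is platform-dependent ( it works on Linux ), and networks routinely rewrite these bits - so treat it as a thought experiment, not a control.

import socket

CONFIDENTIAL_DSCP = 0b001010          # hypothetical code point for "confidential"

def tagged_socket(confidential: bool) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if confidential:
        # DSCP occupies the top six bits of the old ToS byte.
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CONFIDENTIAL_DSCP << 2)
    return s

# Anything sent over this socket carries the marking in every packet's IP
# header, where a NetFlow-style collector could count it leaving the edge.
s = tagged_socket(confidential=True)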
This of course does not take into account making copies onto removable media.. that's another issue... but it would be a start... probably impinging on DRM [ Digital Rights Management ], but not *really*, as it's targeted at a corporate environment only.. and it would be a label / watermark, not an endpoint restriction.. ( though it could be - I am mainly referring to the network gateways / edge here! ) It lends itself to being auditable, and to the concept of the "Shrinking Perimeter" being popularised by Dan Geer http://www.verdasys.com/site/content/whitepapers.html
Most of the time companies just drop mail matching keyword searches for the term "company confidential", or keep a copy of encrypted emails for future use. That does not address DNS, HTTP, FTP etc... FTP access is not always granted, but HTTP(S) is, either through proxies or direct. Maybe we should give up trying to control the data leaving the network.. just audit it and focus on employee visibility and compliance? At what point do the complexity, entropy and technology allowing access to information really remain manageable, controllable and auditable by humans anyway?
Work and Personal
So I'd like to address two topics and what's going on with me right now, both with a technology angle ( and then of course some interesting links etc.. ):
Personal:
I am having great fun right now with a mixture of podcasting and the content @ Zencast.org - free Buddhist classes for the masses; who said podcasting wouldn't catch on? Today I sat in the sun on Manly beach for two hours learning and meditating :)
Work:
With no IT Security Strategy, no comprehensive policy, no budget, no resources and incorrect internal reporting chains.... and an outsourcer trying to drive the client's Information Security Policy and Information Security Management System, the emphasis has to be on first enumerating information assets and classifying them as part of the company's risk profile / attack surface before engaging in anything else. Unfortunately this means that, in the absence of any current snapshot of information / physical assets or full knowledge of business processes, an independent audit is needed to establish a baseline.. with subsequent scans / audits building upon it... and with special attention paid to the outsourcing interface and the contractual obligations on all parties. ( ...including the other outsourced services / interfaces from other companies / organisations.. )
It also points to the need for a base-level strategy and methodology. The most effective framing in information security right now is a subset of the Parkerian Hexad http://www.answers.com/Parkerian%20Hexad ( C.I.A. - Confidentiality, Integrity and Availability ) ratings, plus the OODA loop http://www.answers.com/ooda%20loop developed by John Boyd, for gathering intelligence and then executing on it, information-warfare style.
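A back-of-the-envelope illustration of that baseline exercise - enumerate assets, rate each against C.I.A., and sort so the audit effort goes to the worst-rated first. The assets, scale and scoring here are invented:

ASSETS = [
    # name,                 confidentiality, integrity, availability  (1 = low, 3 = high)
    ("customer-db",         3,               3,         3),
    ("public-web-frontend", 1,               2,         3),
    ("build-server",        2,               3,         1),
]

def risk_score(c: int, i: int, a: int) -> int:
    """Crude overall rating: just the sum of the three requirements."""
    return c + i + a

for name, c, i, a in sorted(ASSETS, key=lambda r: risk_score(*r[1:]), reverse=True):
    print(f"{name:22} C={c} I={i} A={a}  score={risk_score(c, i, a)}")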
General News:
I'd also like to mention the recent Black Hat talk by Michael Lynn, an ex-ISS security researcher, because many see it as a huge threat.... basically, apart from the DNS root servers, everyone seems to forget about the routers(tm); with Cisco's near-monopoly running IOS on most backbone infrastructure, why own thousands of hosts.. when you can own the network? Ask yourself what else is ubiquitous... remember the SNMP issues, and what about BGP, or goofin' with the common implementations of the TCP/IP stack out there?
Some really cool people I admire in the Industry ( you gotta be known when you're in Wikipedia/Answers.com? ):
Rob Thomas http://www.cymru.com/
Dan Kaminsky http://www.doxpara.com/
Dan Geer http://www.answers.com/topic/dan-geer
Bruce Schneier http://www.answers.com/bruce%20schneier
Paul Graham http://www.answers.com/Paul%20graham
Some cool Penetration Testing / Information Security Consulting companies:
Security-Assessment http://www.security-assessment.com/
Corsaire http://www.corsaire.com/
NGS http://www.ngssoftware.com/
Information Security Testing Methodologies:
OSSTMM http://www.isecom.org/osstmm/
OWASP http://www.owasp.org/index.jsp
Back to the concept of network visualisation and graphing - I have updated the links below:
Sunday, July 10, 2005
Building blocks and giants' shoulders....
Newish stuff.... and references that are always good:
- IBE ( Identity Based Encryption ) http://crypto.stanford.edu/ibe/
- SPAM ( The Penny Black Project ) http://research.microsoft.com/research/sv/PennyBlack/
- UDDI ( Universal Description, Discovery and Integration ) http://www.uddi.org/
- PyXML http://pyxml.sourceforge.net/
- XML Services http://www.xml.org/
- Unicode http://www.unicode.org/
- W3C http://www.w3.org/
3G Related:
- @Stake Research http://www.atstake.com/research/reports/
- IEE Secure Mobile Communications Forum http://www.iee.org/events/securemobile.cfm
- GSM Security http://www.gsm-security.net/
- UMTS Forum http://www.umts-forum.org/
- UMTS TDD http://www.umtstdd.org/
- UMTS World Information http://www.umtsworld.com/
- UMTS TD-CDMA http://www.ipwireless.com/
- Cell Phone Hacks http://www.cellphonehacks.com/
Risk related definitions:
Risk Management http://www.answers.com/risk%20management
Risk Assessment http://www.answers.com/topic/risk-assessment
Law as it relates to IT:
http://www.groklaw.net/
Internet Modelling / Risk Modelling:
NetworkViz http://networkviz.sourceforge.net/
CAIDA http://www.caida.org/
OpenQVIS http://openqvis.sourceforge.net/
Opte Project http://www.opte.org/
LGL http://bioinformatics.icmb.utexas.edu/lgl/
( java'ish )
JASPVI http://lab.verat.net/Jaspvi/ ( very cool ASN mapping )
Tom Sawyer http://www.tomsawyer.com/home/index.php
yFiles http://www.yworks.com/
( http://www.yworks.com/en/products_yed_about.htm )
GINY http://csbi.sourceforge.net/
JUNG http://jung.sourceforge.net/
Piccolo (2d) http://www.cs.umd.edu/hcil/piccolo/
OSS Routing Daemons:
ZEBRA http://www.zebra.org/
OpenBGPd http://www.openbgpd.org/
OpenOSPFd ( coming ) http://www.openbgpd.org/
BGP/RADB/Whois type stuff:
http://bgp.potaroo.net/
http://www.dnsstuff.com/
http://www.traceroute.org/
http://www.bgp4.as/tools
More Netflow tools / info:
Netflow info http://www.cisco.com/warp/public/cc/pd/iosw/ioft/neflct/tech/napps_wp.htm
Extreme Happy Netflow Tool http://ehnt.sourceforge.net/
Quick NOS ( Network Operating System ) Emulation:
http://www.dcs.napier.ac.uk/~bill/emulators.html
DNS:
RDNS Project http://www.ripe.net/rs/reverse/rdns-project/
Cisco OSS Related:
COSI http://cosi-nms.sourceforge.net/
Local Internet Registries AU:
http://www.ripe.net/membership/indices/AU.html