Organisations with 'open networks' want IPS to police their highways.
Organisations with 'closed/segmented networks' use internal firewalling to restrict passage to flows that are deemed 'good', but most of the time they're Swiss cheese!
Organisations are starting to see the benefit of 'extrusion detection', with non-production routed darknets.
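The darknet idea above can be sketched in a few lines: route some unused, non-production address space internally, then treat *any* flow touching it as an alert, since nothing legitimate should ever go there. A minimal sketch (the darknet prefix and flows are illustrative assumptions, not real policy):

```python
import ipaddress

# Hypothetical internal darknet: routed but unused address space.
# Any traffic destined for it is, by definition, suspicious.
DARKNET = ipaddress.ip_network("10.66.0.0/16")

def extrusion_alerts(flows):
    """Flag (src, dst) flows where an internal host touches the darknet --
    typical signs of scanning worms or misconfiguration."""
    return [(src, dst) for src, dst in flows
            if ipaddress.ip_address(dst) in DARKNET]

flows = [
    ("10.1.2.3", "10.66.4.5"),   # internal host probing darknet space
    ("10.1.2.3", "10.2.0.1"),    # normal internal flow
]
print(extrusion_alerts(flows))   # [('10.1.2.3', '10.66.4.5')]
```

Because the darknet carries no production traffic, there is no 'good stuff' to sift through: every hit is worth a look, which makes this far cheaper than general anomaly hunting.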
We need to permit only the good stuff and then enumerate the bad stuff inside the good stuff. But how do you define the good stuff when organisations sometimes don't even know, don't want to know, or don't care what's on their network? Asset and flow classification is a big, never-ending job! Spotting bad stuff inside good stuff is very hard and very resource-intensive.
Netflow helps. Baselining helps. Anomaly detection helps. Having management that understands, cares and appreciates the intangible and unquantifiable (metrics?) helps, and experience goes a long way.
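The baselining-plus-anomaly-detection idea can be shown with a toy example: learn a per-host baseline from historical flow volumes, then flag values far above it. A minimal sketch, assuming hypothetical hourly byte counts; real deployments baseline many more dimensions (ports, peers, time of day):

```python
import statistics

def baseline(samples):
    """Learn a simple baseline (mean, stdev) from historical flow volumes."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=3.0):
    """Flag values more than k standard deviations above the baseline."""
    return value > mean + k * stdev

# Hypothetical hourly byte counts for one host
history = [1200, 1100, 1300, 1250, 1150, 1220]
m, s = baseline(history)
print(is_anomalous(50000, m, s))  # True: a sudden burst stands out
print(is_anomalous(1280, m, s))   # False: within normal variation
```

The point is not the statistics (a 3-sigma threshold is crude) but the workflow: you cannot spot 'bad inside good' without first measuring what 'good' normally looks like.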
Logs help. Note: http://www.loganalysis.org/
Assumption: all traffic is good = the early internet. Facilitating ease of communication.
Currently: Lots of internet traffic is bad :(
Current issues: Net Neutrality?
Can we allow a form of QoS and simple economics to dictate the traffic on the internet and the service level it gets?
Can we afford not to?
We cannot trust all the endpoints. Can we trust subsets thereof? A multi-tiered, multi-class internet?
We cannot trust all companies, countries and organisations, yet that's the way BGP and backbone security of the internet work today. DNS is slightly different but equally susceptible. Funnily enough, it ain't _too_ bad!
Notes: Read Barry Greene ( BGP ) and Dan Kaminsky's ( DNS ) work for more info. We love Team Cymru too!
No useful global security metrics exist, due to lack of standards, lack of information/incident sharing, lack of cooperation, distributed responsibility, no accountability, and the speed of technology transitions. Yet the internet is global; the laws of its constituent countries are not. Mind you, http://www.first.org/ is leading the way.
Constantly enumerating bad stuff is self-defeating. Marcus Ranum puts this eloquently in his essay 'The Six Dumbest Ideas in Computer Security'.
Enumerating good stuff and blocking *everything* else, or demoting it to a lower class of service, works. It may only be feasible on enterprise networks, though, and with managed endpoints. In the future, QoS will increasingly be done per binary/app/data-object.
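'Enumerating goodness' boils down to a default-deny policy: permit only explicitly listed services and drop (or demote) everything else. A minimal sketch; the allow-list entries are illustrative assumptions, not a recommended ruleset:

```python
# Explicitly enumerated 'good' services: (protocol, destination port).
# Everything not listed here is denied by default.
ALLOWED = {("tcp", 443), ("tcp", 22), ("udp", 53)}

def policy(proto, dport):
    """Default deny: anything not on the allow-list is rejected
    (or, in a softer variant, demoted to a lower class of service)."""
    return "permit" if (proto, dport) in ALLOWED else "deny"

print(policy("tcp", 443))   # permit
print(policy("tcp", 6667))  # deny -- never enumerated as good
```

Contrast this with a blocklist, where every new bad thing must be enumerated before it is stopped; here, new bad things fail closed by default, at the cost of having to know (and maintain) what 'good' is.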
The internet is full of unmanaged endpoints and 'unmanaged' users. The internet is full of managed and 'unmanaged' coders.
The internet still works and is resilient due to its 'loose coupling' and civic duty of its technorati.
Note: However, BGP reachability was severely affected by SQL Slammer; some backbone routers lost 3-4+% of their internet table via route withdrawals.
Answers: I'm working on it. For the moment enjoy your privileged packet freedom!