Saturday, December 30, 2006

Reading list - [placeholder]

This post tries to echo the previous post somewhat...

Mortality:

Scarcity:

Religion:



Education:


Diet:

And then doing something about it all?

Some cud....

"Western Civilization, it seems to me, stands by two great heritages. One is the scientific spirit of adventure - the adventure into the unknown, an unknown which must be recognized as being unknown in order to be explored; the demand that the unanswerable mysteries of the universe remain unanswered; the attitude that all is uncertain; to summarize it - the humility of the intellect. The other great heritage is Christian ethics - the basis of action on love, the brotherhood of all men, the value of the individual - the humility of the spirit. These two heritages are logically, thoroughly consistent. But logic is not all; one needs one's heart to follow an idea. If people are going back to religion, what are they going back to? Is the modern church a place to give comfort to a man who doubts God - more, one who disbelieves in God? Is the modern church a place to give comfort and encouragement to the value of such doubts? So far, have we not drawn strength and comfort to maintain the one or the other of these consistent heritages in a way which attacks the values of the other? Is this unavoidable? How can we draw inspiration to support these two pillars of Western civilization so that they may stand together in full vigor, mutually unafraid? Is this not the central problem of our time?"

'The Relation of Science and Religion', Richard P. Feynman

Originally published by the California Institute of Technology in Engineering and Science magazine. Personally, I would replace 'Christianity' and the 'church' with 'religion', as this applies to many different forms and types of modern belief system.

If one could really change the world in a long-lasting, meaningful way, it seems to me that a few barriers will always stand unless they are either a) stripped away ( unfortunately incredibly difficult given existing inertia and legacy frameworks and systems ) or b) slowly replaced or superseded by something better than what is already there or appears to be working. Sometimes there is a slow and barely noticeable evolution, one you don't notice until you actually stand back and see where you have come from; other times there is a massive paradigm shift that produces huge change by virtue of an idea, an invention or some catastrophic event ( which generally persists only a short time in corporate or societal memory ).

I posit that five basic concepts or issues are of utmost importance to how we think, live, learn and treat both our ecosystem and fellow sentient beings.

( However, sometimes you find people have beaten you to it ;) http://www.copenhagenconsensus.com/ )

a) mortality ( http://en.wikipedia.org/wiki/Mortality )
b) scarcity ( http://en.wikipedia.org/wiki/Scarcity )
c) religion
d) education including information transparency and unfettered access to said information / knowledge
e) diet

So perhaps improvements could be made, or these concerns addressed by:

a) immortality - by constantly replacing or reprogramming our cells, and by fully understanding the effects of atomic / quantum interactions on molecular biology etc.


b) nano-technological fabrication of goods and facilitation of services ( energy issues? ), plus population control http://en.wikipedia.org/wiki/Population_control - either voluntary, involuntary via governmental eugenics http://en.wikipedia.org/wiki/Eugenics , or Nature re-exerting balance via some form of mass infertility - hopefully preventing some form of Malthusian catastrophe
c) a sweeping new unified religion / belief system similar to, if not expanding upon, Buddhism in its synergy with modern science, universal compassion and tolerance... or even a spontaneous new mass movement or group awakening / Enlightenment, which seems to be brewing amid accelerating global disillusionment
d) free ( as in beer ) universal education and wider topics of learning for younger generations, including digitization thereof and free access to the sum of all human knowledge including all global libraries, universities and educational publications

Current and historical:


e) going vegan or vegetarian - for everyone and everything's sake, including your own sanity and health! Why?

A gross simplification, I know, ignoring current governments, regimes, political beliefs, markets, famine, drought, poverty, pollution etc. etc... where to begin? So many issues, so many problems. One problem I have spoken about before is that of archival, storage and standardisation of information, and of the protocols to access said information. One must always assume rebuilding from an apocalypse, e.g. instructions included - including the instructions for the instructions.

Saturday, December 16, 2006

Quality and existence?

Impermanence and suffering, hand in hand?

Today's thought of the day comes from trying to cram too much in to one day, and then deciding to do only one thing! One tries to get the most, the best or the biggest from activities in a distinct period of time. So, who and what defines the quality of our experiences? Is it external, is it society or can it really be ourselves.. unfettered by external influences?

If we are perceiving and defining as we go ( a bit like new metaphysical theories that potentially become self-fulfilling prophecies..? ), then this world has already been defined into existence, and continues to be so in the micro and the macro the further we wish to plunge. It will take some radical thinking to help undefine parts of it, and a common moral / ethical system to guide 'science' as it does so. How deep can we go? Is Planck's constant really the limiting factor - until, of course, we go deeper?

Recently, after reading 'The Elementary Particles' (quoted in my last post): the problem of mortality does NOT allow us to absolve ourselves, upon death, of our responsibilities to both our earth and our fellow man ( we sometimes speak of our children inheriting the earth, but not everyone has children ). I believe people are still inherently selfish - even though they spend a lot of time thinking about the future, those are all selfish thought cycles. Being immortal, we would not be impermanent, and as such would both remember others' actions and be compelled to act in the common good, knowing we were all intrinsically part of the 'Wheel of Life'? Funnily enough, an image of the vampires in the Blade movies springs to mind: they went down the other path in a megalomaniacal sense ( only because there were the inferior humans still to be ruled! ), yet there was somewhat of a respect for other 'immortal' vampires. ( Unfortunately, vampires could still be killed in certain ways, as could potential future biologically immortal humans... albeit free from disease and sickness, they could still be beheaded / incinerated etc. ) This is also apparent with the fictional Titans in 'Dune: The Machine Crusade' http://en.wikipedia.org/wiki/Titan_(Dune) , part of a continued set of Dune books written by Frank Herbert's son Brian.

Would be nice to address group consciousness without immortality though.... ;)

Thursday, December 14, 2006

Elementarily T.I.R.E.D.?

The latest sociological acronym is "Tired" - the Thirty-something Independent Radical Educated Dropout.


"Tell me, what is it you plan to do - with your one wild and precious life?"

The Summer Day, Mary Oliver


"Children suffer the world that adults create for them and try their best to adapt to it; in time, usually, they will replicate it."

The Elementary Particles, Michel Houellebecq, 1998



Have I exhausted all modern society has to offer - a year early, at 29? Is it now time to change the world as previously envisaged? Enlightenment may be a pre-requisite. And these North American zen groups are either too scared or too cliquey... or there are too many stoners and loonies around this continent to warrant any form of trust in a stranger. I'll need to 'explore/meditate' back home in the EU, where the vibe is akin to the openness and friendliness of my second home, Australia. ( I hope ;)

Listening to inside, still confused...

Thought: All these coffee shops and coffee addicted people feed the 'chattering' monkey mind of the ego and keep the 'pain-body' emotionally hurting both itself and others. Keep those dumb 'battery' humans in their boxes consuming, keep their music/ipods playing when outside so they don't hear the real world.

Q. What's left to suppress? A visual overlay on the world to create a 'pseudo' reality?

Dream: Brother arrived at same conclusions as I. He felt even more hopeless. I ended up beating and bullying him not to give up. He went limp in my arms. He had passively accepted whatever fate was going to throw at him. He went to bed suicidal and woke up happy. Then another one of him appeared, and another and another - all taking multiple choices, directions etc. His essence and energy was becoming severely diluted the more 'hims' appeared, we (family) wanted him to stop multiplying but once he had started he couldn't and didn't want to. It was freeing and very powerful to him.

There was a building with lots of windows, some were filled with versions of him, some had large sloth/ant-eater bipeds that were also his essence too. They were smiling and looking up chuckling.

I tried to get him to come to Australia, he wouldn't. I realised I didn't live there anymore. By accident he ended up in a green hippie van traveling to New Zealand. I was happy about that.

Thursday, November 23, 2006

Hint - Who or what am 'I' ?

"..idleness is lonely and demoralizing."

While many would agree with this statement, taken from an essay by Paul Graham http://www.paulgraham.com/gap.html , I would challenge all facets of it... why? Or why shouldn't it be? This may be a form of mental gymnastics if you wish to contemplate it... but if you go deeper, it is in fact meaningless - tied to concepts of self, worth, value, dependence and desire.

In this modern day and age we need more idleness and reflection, and less busyness / escapism. Idleness is not 'the Devil's playground'.

Monday, November 20, 2006

Game Over - Insert more credits....

Back in 2004 I texted some friends, family and colleagues after a day of banality, while waiting to board the Jetcat to Manly:

" Today I got no closer to understanding myself, the world or
existence - why am I wasting my time such?"


Since then I have been skirting the edges of that _thought_, looking deeper at other topics and perhaps shrouding it in the career I had chosen for myself - trying to understand and fix some of the macro challenges within the construct of IT-enabled organisations. I have delved into the inner workings of the industry, looked at and compared other industries, examined the similarities and the differences, and tried to fully grasp the complexities from 'end to end'. Some of the issues are unfortunately ubiquitous across many other industries... but not necessarily at the same level of complexity, naivety, lack of appraisement (data), ignorance, barriers to entry, or nefariousness ( and the related cost of resources for nefarious purposes... ) - the list goes on. Don't get me wrong, it is definitely a fun, playful, yet dangerous and ever-evolving 'sandbox?'. It's a shame most don't grasp the underlying fact of its quite real and tangible intersection with the physical / natural world; this 'virtual' world we have created is not virtual at all, but an intrinsic part of our economy, infrastructure and daily life in more ways than the masses can comprehend. The string, glue and sticky tape that holds together certain critical parts of said infrastructure and services constantly amazes me, but that is a rant for another day!

Part of me thought the answer would lie in 'front of house', or in a 'one to many' vendor-based relationship... both allowing and facilitating me to grasp much more - further and faster, if you will... when in essence this only distanced me from the things I believed to be in my control, or which I could influence, extricating me from my first-hand experiences on the battlefield / battlefront. When we introduce the concept of internet time and physics, a fun question may be asked: what use is a veteran of the Battle of Waterloo in a modern war fought with drones, digital information and weaponry, and the tactics or strategies thus employed?

Note: Here one may counter with quotes from Sun Tzu, but I think you get my point :)
Note II: I am a n00b compared to RFP, but he sums up the industry well here.

I have ruminated on going back to academia to study military tactics, history, economics, statistics, computer science ( again )... as we battle to understand and control the entity that is the internet - and the *new* challenges that go with it, such as new appreciations and understandings of the traditional concepts of physics, and of time and space trade-offs... things like 'crowdsourcing' and massively distributed computing. However, is this again distancing myself from the coal face? Or just walking the same path over again?

For me, working towards the assurance and overall security of these internet- or IP-enabled organisations has seemed a noble goal, and I believe it still is - the eternal struggle of "good vs evil" ( justifiable to oneself through the continued integral enablement and benefits of IT in industry and the global economy etc. ). I have, however, been slipping away from myself, my real self, over-indulging in everything modern society offers that gently allows us to 'escape' from the reality in which we live. The reality we ( or they? ) continue to create and mould for ourselves on a daily basis.

I may come full circle. I may not. But right now "life is short" and the answers and questions I seek are not to be generated or answered from within the construct that surrounds me. I am about to embark on a new journey to relinquish the 'I' and to perhaps find the 'We'.. who knows?

All I know is the time is now, the path is unclear.... but I do not fear it anymore!

First things, first though... I need to be with my family.

Friday, November 17, 2006

Machine and service integrity..

What if instead of worrying about compromised services and data in the short term with fingerprints/hashes of binaries and files, we applied the concept of re-use and cycling to the actual services and machines? Think TKIP or perhaps PFS for IPSEC on a macro service and machine scale?

Think load-balanced web servers constantly rebooting from verified images - either sequentially or in some form of complex, pre-computed pseudo-random pattern - thus reducing the potential time an attacker has on a box, service or version? I will think more about this, but VMs, load balancing and operational management would require a lot of planning, thought and overhead. Re-use of TCP connections, e.g. TCP multiplexing, is now common in many optimisation products / load-balancing offerings.
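As a rough sketch of what such a pre-computed pseudo-random reboot pattern might look like: the idea below derives a per-epoch reboot order from a fleet-wide secret, so operators can pre-compute the schedule while an attacker without the secret cannot predict which box cycles next. All names (`reboot_order`, the fleet hostnames, the secret) are illustrative, not any existing tool.

```python
import hashlib
import hmac

def reboot_order(secret: bytes, epoch: int, servers: list[str]) -> list[str]:
    """Return a pseudo-random reboot order for one epoch.

    Each server is ranked by an HMAC of (epoch, name) keyed with a
    fleet-wide secret: deterministic for operators holding the secret,
    unpredictable to anyone who does not.
    """
    def rank(name: str) -> bytes:
        msg = epoch.to_bytes(8, "big") + name.encode()
        return hmac.new(secret, msg, hashlib.sha256).digest()
    return sorted(servers, key=rank)

# Hypothetical fleet: reboot one box per slot, reshuffling every epoch.
fleet = ["web1", "web2", "web3", "web4"]
order = reboot_order(b"fleet-secret", epoch=42, servers=fleet)
print(order)  # the same four names, in a secret-dependent order
```

A real deployment would of course need to stagger reboots against load-balancer draining so capacity never dips below serving requirements.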

If some in the industry have thrown in the towel, per se, and are more worried about compromise, detection and time to restore a machine to an integral state - then why not take that to its logical conclusion? Almost like a macro-level StackGuard or ProPolice in OpenBSD, randomising an offset to the next addressable chunk of memory to make attacks harder to predict / calculate and reproduce with standard results.

Let's limit the conceptual static state of a live machine ( harder for databases and synchronisation, though... ) - an interesting thought nonetheless.

Maybe you'd need a farm of diskless head-end servers the monkeys would constantly upgrade the OS/App from a bootable set of flash drives etc?

No one has adequately addressed the issue of micro-time in Information Security; rather, intractability and macro-time are used as defenses! Please correct me if I am wrong here...

Thursday, November 16, 2006

Safety nets?

"There's always going to be a job out there if you're coherent and can put a sentence together."

Thursday, November 09, 2006

Hug the world

Today I was walking down Pitt St. in the centre of Sydney and a hippy'ish looking girl had a big piece of cardboard up above her head with 'Free Hugs' written on it in big letters.

I thought twice about it and then gave her one ;)

Most people were just staring and moving on confused, bemused and shocked. The world needs more hugs. City people don't connect enough. This was the highlight of my day.

The lowlight was that I had to think twice about it.

Saturday, September 02, 2006

Welcome to the interweb my friend!

Organisations with 'open networks' want IPS to police their highways.
Organisations with 'closed/segmented networks' use internal firewalling to restrict passage to flows that are deemed 'good', but most of the time they're Swiss cheese!

Organisations are starting to see the benefit of 'extrusion detection', with non-production routed darknets.

We need to only permit the good stuff and then enumerate the bad stuff inside the good stuff. How do you define the good stuff when sometimes organisations don't even know themselves, don't want to know, or don't care what's on their network? Asset and flow classification is a big, never ending job! It's very hard to spot bad stuff inside good stuff and very resource intensive.

Netflow helps. Baselining helps. Anomaly detection helps. Having management that understands, cares and realises the intangible, unquantifiable(metrics?) helps + experience goes a long way.

Logs help. Note: http://www.loganalysis.org/

http://www.sans.org/resources/top5_logreports.pdf


Assumption: All traffic is good = the early internet. Facilitating ease of communication.
Currently: Lots of internet traffic is bad :(

Current issues: Net Neutrality?

Can we allow a form of QOS and simple economics to dictate the traffic on the internet and the service level it gets?

Can we afford not to?

We cannot trust all the endpoints. Can we trust subsets thereof? A multi-tiered, multi-class internet?

We cannot trust all companies, countries and organisations. That's the way BGP and backbone security of the internet work today. DNS is slightly different but equally susceptible. Funnily enough, it ain't _too_ bad!

Notes: Read Barry Greene ( BGP ) and Dan Kaminsky's ( DNS ) work for more info. We love Team Cymru too!

No useful global security metrics exist, due to lack of standards, lack of information / incident sharing, lack of cooperation, distributed responsibility, no accountability, and the speed and transition of technologies. Yet the internet is global; the laws of its constituent countries are not. Mind you, http://www.first.org/ is leading the way.

Constantly enumerating bad stuff is self-defeating. Marcus Ranum puts this eloquently in his essay 'The Six Dumbest Ideas in Computer Security'.

Enumerating good stuff and blocking *everything* else, or submitting it to a lower class of service, works. It may only be feasible on Enterprise networks, though, and with managed endpoints. More so in the future, QOS will be done per binary / app / data-object.
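The enumerate-the-good approach boils down to a default-deny lookup: known-good flows get a real class of service, everything else is demoted rather than guessed at. A minimal sketch, with an entirely illustrative allowlist and class names:

```python
# Default-deny flow classification: enumerate the good, demote the rest
# to a scavenger class. The (protocol, port) -> class table below is a
# toy example, not a recommended policy.

ALLOWED = {
    ("tcp", 443): "business",   # HTTPS
    ("tcp", 25):  "business",   # SMTP
    ("udp", 53):  "infra",      # DNS
    ("tcp", 179): "infra",      # BGP
}

def classify(proto: str, dport: int) -> str:
    """Known-good flows get a real class; all else is scavenger."""
    return ALLOWED.get((proto, dport), "scavenger")

print(classify("tcp", 443))    # business
print(classify("udp", 31337))  # scavenger
```

The point of the scavenger class, as opposed to a hard drop, is that unknown traffic still moves - it just can't starve the enumerated good stuff.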

The internet is full of unmanaged endpoints and 'unmanaged' users. The internet is full of managed and 'unmanaged' coders.

The internet still works and is resilient due to its 'loose coupling' and civic duty of its technorati.

Note: However, BGP reachability was severely affected by SQL Slammer; some backbone routers lost 3-4+% of their internet table via route withdrawals.

Answers: I'm working on it. For the moment enjoy your privileged packet freedom!

Wednesday, July 26, 2006

Trust and Enforcement QOS style....

I will expand on this at a later date, however the possibility of _not_ just using QOS as a marking and dropping mechanism for malicious packets, but using it as a fingerprint and trust mechanism for hosts is appealing.

By using a specific subset of DSCP values, marked by a 'trusted' host's binary, a QOS fingerprint of perhaps a few DSCP values may be used by a network to permit packets to traverse it. Each organisation's network would have a different DSCP fingerprint, making it even harder for malicious binaries to spread arbitrarily using standard packets.

The concept of trust may need to be addressed via some form of NAC ( Network Admission Control ) and this becomes tightly coupled with the concept of a well developed and understood SOE ( Standard Operating Environment ).

Having a 'Scavenger' class at your disposal is also handy from a QOS perspective, as it allows you to protect the 'Control Plane' of the network from DSCP values you are unsure of.

So you either explicitly drop anything not in your DSCP fingerprint, or place it in a 'Scavenger' class that cannot harm your network - albeit it could still be a malicious packet that exploits your next host!

Maybe even cycle the markings in a sequential manner e.g. a rotating key based system....
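To make the rotating-key idea concrete, here is one hypothetical way to derive a per-organisation, per-time-window set of DSCP code points from a shared secret. DSCP is a 6-bit field (values 0-63); everything else here (function name, window granularity, choosing three marks) is an assumption for illustration:

```python
import hashlib
import hmac

def dscp_fingerprint(secret: bytes, window: int, n: int = 3) -> list[int]:
    """Derive n distinct DSCP code points (6-bit values, 0-63) for a window.

    Hosts holding the org secret mark packets with these values; the
    network permits only the current window's marks. Rotating 'window'
    (e.g. floor(time / 3600)) gives the rotating-key behaviour.
    """
    marks: list[int] = []
    counter = 0
    while len(marks) < n:
        msg = window.to_bytes(8, "big") + counter.to_bytes(4, "big")
        digest = hmac.new(secret, msg, hashlib.sha256).digest()
        value = digest[0] & 0x3F  # fold the first byte into the 6-bit DSCP range
        if value not in marks:
            marks.append(value)
        counter += 1
    return marks

current = dscp_fingerprint(b"org-secret", window=407_000)
print(current)  # three distinct values, each in 0..63
```

With only 64 possible code points the keyspace is tiny, so this raises the bar against blind spreading rather than a determined observer who can sniff marked traffic.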

Saturday, June 17, 2006

BYO RFC

'Risk in a Box': IP Bubblewrap

A technology risk framework and data visualisation methodology for an IP organisation, based upon empirical data, end-to-end flow / type, control / data planes, connectedness, trust levels, and *known* vulnerabilities and threats. This framework hopes to assist individuals and organisations in quantifying risks to an IP infrastructure and its services by enumerating current traffic and threats, rather than a 'perception' of risk and threat in relation to the unknown - which, incidentally, is infinite and unquantifiable with the elapsing of time.

The author hopes, more so, to foster an enumeration and enablement of the positive or 'valid' data / services in extreme or unforeseen circumstances, rather than attempting to enumerate the unknown or 'invalid' negative conditions. However, enumerating the *currently* known negative will help to quantify and frame technology risk overall. Due to the challenges of complexity, a very generic breakdown is applied, which will hopefully be improved upon and refined in the future. The focus is on the actual flows, relationships, dependencies and interfaces to IP services, rather than on specific valuation of data at rest.

Aside: the term 'IP' in the title may represent either 'Internet Protocol' or 'Information Protection', but not 'Intellectual Property'; it is used in the 'Internet Protocol' sense throughout this paper.

Technology risk is viewed as a subset of overall business risk and may be attributed different weightings based upon an organisation's or individual's perception of their reliance upon certain technology areas.


Problem Statement
==================

The complexity and velocity of information technology in IP enabled organisations is such that irrespective of dimensioning and design, the majority of organisations face similar challenges relating to information based asset classification and information protection; thanks mainly to the underlying intricacy of protocols, node relationships, operating systems, applications etc. and the management, operation, integrity and stability of all of the above.

Unfortunately, from the macro-organisational view it can be hard to prioritise information flows and services, as different groups or individuals may not possess enough knowledge about the transports and technologies that enable their distinct applications or business processes. This in turn leads to a sub-optimal allocation of resources and / or a skewed view of the organisation's IP-enabled world. It is very easy to see the end result of a product or new application, but very hard to see the intrinsic use of the IP network, the operating systems and the dependencies that enable the actual operation and availability of said application or product.

From the micro-view, a single entry in a logfile, a certain vulnerability, an attack or a potential loss in data integrity may cause catastrophic consequences to the ability of an organisation to operate effectively - or, in some cases, at all. Some circumstances may result in systematic degradation of service over time, others in immediately quantifiable revenue loss ( albeit generally not with any form of guaranteed accuracy ).

There exists no common or easily applied methodology or framework to help contextualise the connectedness, dependencies, threats and risks to an organisation which finds itself heavily dependent upon IP based services.


Overview
========

Certainly, different individuals ( be they IT management, IT executive or not ) may attribute different values to information assets; however, common or shared infrastructure, protocols, services and / or data have more far-reaching consequences to an organisation's function and ongoing stable operations than they may initially be privy to. It is with this understanding that a baseline or sliding window must be established across an organisation to facilitate macro classification of services, data and nodes. It should be noted this is only one way to interpret and view an IP enabled organisation.


Nodes
=====

a) transit / infrastructure node

( Routers, firewalls, load balancers, VPN concentrators, GGSN, MMSC, IDP, or anything that facilitates a flow between endpoints. Traffic may be generated by a 'transit node' for the purposes of management, reporting etc however these flows should be attributed to (c) below... this node type includes nodes that store related configuration and management data e.g Network Management Systems / MOMs / OSS )

IT workers' laptops and desktops are defined as type (a) nodes, as they are viewed as supporting and adding value to the infrastructure and services - as are servers running services such as Active Directory, native LDAP, DNS etc...

b) endpoint / business node

( Customers, clients, servers or any standard end to end connectivity / flow that terminates or generates explicitly business or revenue related traffic. These nodes should not be related to the operation, management and reporting functionalities of the underlying infrastructure or network... and do include nodes that store related billing, company financial, customer, ordering / provisioning data etc )


Services
========

c) infrastructure control and protocols ( business infrastructure and process support )

( BGP/OSPF/EIGRP/MPLS suite/RIP/DNS/NTP/SYSLOG/LDAP/Radius/SSH(SFTP/SCP)*/TELNET/TFTP/FTP/SNMP/RPC/ICMP/SIP/H323... )

d) data / payload and protocols. ( business product / service / customer / transaction information related )

( SMTP, HTTP, HTTPS, FTP, Radius, SSH(SFTP/SCP)*, NFS, CIFS, SQL based, Mediation / Billing based, CRM based, ERP based, Financial / HR based )


With the above in mind, a node of type (a) may facilitate services or data of type (c) and / or (d) but should only generate (c) itself. A node (b) should only facilitate services and data of type (d), though may generate its own infrastructure traffic (c) such as SNMP traps, Syslog, authentication traffic and standard name resolution protocols etc.
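The generate/facilitate rules above can be captured as two small lookup tables, which makes it easy to flag rule-breaking flows ( e.g. a router originating payload traffic ). A minimal sketch; the function and table names are illustrative only:

```python
# Node/service rules from the framework: type (a) transit nodes may
# facilitate (c) and (d) but generate only (c); type (b) endpoints
# facilitate (d) and generate (d) plus housekeeping (c) traffic
# (SNMP traps, syslog, name resolution etc.).

GENERATES = {
    "a": {"c"},
    "b": {"c", "d"},
}
FACILITATES = {
    "a": {"c", "d"},
    "b": {"d"},
}

def flow_ok(node_type: str, service_type: str, generated: bool) -> bool:
    """True if this node type may generate (or facilitate) this service type."""
    table = GENERATES if generated else FACILITATES
    return service_type in table[node_type]

print(flow_ok("a", "d", generated=True))   # False: routers shouldn't originate payload
print(flow_ok("b", "c", generated=True))   # True: endpoints emit syslog, DNS lookups
```

Anything that returns False here would be a candidate for the 'invalid / negative' traffic category discussed later.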

More granularity will be introduced at a later date for protocol types assigned to service types (c) and (d), which can be weighted to influence metrics. Tunnelled protocols may be viewed as separate flows during and after encapsulation, and would be attributed similar values unless transiting different 'trust' bases. ( See 'Trust' section. )

* Note: the author would like to express the desire to tag traffic with a 16-character word rather than just DSCP / MPLS labels, and hopes that with the rise of 'Service Orientated Architectures', protocols such as those used by Tibco may be 'peered' into by IPFIX-based flow reporting, for example. An application that could prefix or mark its actual payload in clear text with a 'subject', which may be matched via something like FPM ( Flexible Packet Matching ) and treated uniquely by the network, is believed to be beneficial.


Control (Infrastructure) plane / Payload (Data) plane
=====================================================

As the 'Risk in a Box' framework views the nodes and services breakdown, the question is 'where do they fit?'. From a business context it is here that one may view the IP enabled organisation in two operational planes.

1. The "Payload" plane containing (b) - endpoint / business nodes and (d) data / payload services

2. The "Control" plane containing (a) transit / infrastructure nodes and (c) infrastructure services


Trust
=====

The concept of 'trust' is applied to different IP segments and / or hosts based upon their overall IP reachability, posture and user / systems access; it can be somewhat qualitative, though it needn't be.

There exist three trust domains, which should be assigned by those knowledgeable of the organisation's overall IT architecture:

1. Trusted ( e.g. internal only services and networks on which only trusted employees or systems operate upon, Intranet etc. )

2. Semi-Trusted

3. Un-Trusted ( e.g. Internet etc )

Many may disagree here and say that some, if not all, segments and hosts should be treated as 'un-trusted'; however, this is not the reality we find ourselves occupying. With 'defense in depth' strategies and layered security models, certain factors including cost, resources, technology and expertise dictate trade-offs such that hosts or networks are viewed in these categories or treated as such.


Connectedness and Importance (CI)
=================================

Value (CI) 0-1

Inherent in the degree of connectedness is indeed an intrinsic measure of value, residing more so in the context and frequency of transiting or terminating flows. This quantification applies more accurately to controlled or trusted organisational segments, as only valid business traffic should exist on the aforementioned 'Control plane' and 'Payload plane'.

Entropy may be increased for nodes facing 'non-trusted' segments like the Internet, or Extranet paths that do not invoke rate-limiting or QOS when required.

Example argument: something like a 'flash crowd' to some piece of static content on a web server may not actually increase the value of an access/border gateway router or firewall that carries no real service of type (d) business product / service. It would however highlight a delta in traffic and a possible DOS ( Denial of Service ) condition to any other services that utilised that shared connectivity path such as external DNS resolution. Theoretically at any time the maximum number of flows either to or transiting a device is constrained by a combination of factors such as total IP host reachability, upstream bandwidth and resources in servicing such IP flows from a CPU, memory and hardware perspective. This may temporarily raise the importance metric enough to warrant attention or to highlight the need for certain corrective measures.

From the 'Control plane' it is possible to extract flow information regarding all endpoint IP-enabled devices, their unique and common relationships, flow types and frequencies. This facilitation of flows highlights the importance of the 'Control plane'; the degree of connectedness is most easily, accurately and economically drawn from 'flow'-enabled nodes of type (a). Flows may be garnered from nodes of type (b) with host agents such as Argus etc., but these can be platform-specific and do not scale as easily. In future, off-box, auditable host flow records may be warranted / recommended. For more information about IPFIX or flows, please see RFC 3917.

For the moment the concept of the number of flows transiting a node of type (a) shall give it increased value over an assigned base value. The additional value of the type (a) node shall be calculated by virtue of the number and frequency of type (c) and (d) flows with weightings being attributed accordingly. ( as they undoubtedly will differ per organisation and associated usage of non-standard or arbitrary high ports. This will be discussed in detail at a later stage and also depends upon positioning and trust values. )

Nodes of type (b), be they server or client, may also be attributed a base value and assigned additional value by virtue of the number and frequency of type (c) and (d) flows they entertain. Naturally a server should host more sessions than a client, whether client-to-server authentication, server-to-server traffic, or server-to-database etc. Should a client-side device experience high volumes of *valid* traffic, this may highlight the actual importance of that client machine's function, be the sessions user-driven or automated. It may also help to identify devices that should be deemed servers and treated as such, or some other anomalous or non-acceptable/invalid use or traffic. There will always be exceptions to this rule.

Aside: an 'End System Multicast' or legitimate 'Peer to Peer' application may break this concept, though multicast should occupy a distinct address range, and legitimate Peer to Peer traffic may be re-classified into a less weighted flow type. Auto-discovery and port-scanning nodes should be known in advance and treated as a special case of type (a) nodes; anything else would suggest 'invalid / negative' traffic, and should a workstation peak in terms of flow frequency, that would be deemed grounds for investigation.
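As a minimal sketch of the exception-spotting described above — a workstation peaking in flow frequency relative to its peers — here is one way it could look. The node names and the 10x threshold are illustrative assumptions of mine, not part of any product:

```python
# Flag type (b) nodes whose flow frequency is anomalously high compared to
# their peer group -- e.g. a workstation behaving like a server.
from statistics import median

def flag_anomalous_nodes(flow_counts, factor=10):
    """flow_counts: dict of node -> flows observed per time period 't'.
    Returns nodes whose count exceeds 'factor' times the peer median."""
    if not flow_counts:
        return []
    med = median(flow_counts.values())
    return [node for node, count in flow_counts.items()
            if count > factor * max(med, 1)]

workstations = {"ws-01": 40, "ws-02": 55, "ws-03": 4200, "ws-04": 38}
print(flag_anomalous_nodes(workstations))  # ws-03 warrants investigation
```

A real deployment would baseline per group and per time-of-day rather than use a flat multiplier, but the shape of the check is the same.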

k = total classifications of C flows decided upon in terms of priority where k is a whole number and x=1 is the most important flow or flow group.
s = total classifications of D flows decided upon in terms of priority where s is a whole number and x=1 is the most important flow or flow group.


Table 1 ( Partial) :

---------------------------------------

C Flows weighting( Classification/Priority x = {1..k} )

c(x) = k / ( x² + k )

D Flows weighting ( Classification/Priority x = {1..s} )

d(x) = s / ( x² + s )

----------------------------------------
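The two weighting formulas in the table can be computed directly; a quick sketch ( the value of k here is purely illustrative ):

```python
def c_weight(x, k):
    """Weighting for a C flow of priority x, where x = 1..k and x = 1
    is the most important: c(x) = k / (x^2 + k)."""
    return k / (x**2 + k)

def d_weight(x, s):
    """Weighting for a D flow of priority x: d(x) = s / (x^2 + s)."""
    return s / (x**2 + s)

k = 5
for x in range(1, k + 1):
    print(x, round(c_weight(x, k), 3))
# Weightings decay from k/(1+k) for the top priority down to k/(k^2+k)
# for the lowest, so high-priority flows dominate the value calculation.
```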

z(c1) = number of C1 priority flows per time period 't' in seconds, where z is a count from 0 upwards ( hmmm, problems with upper bounds.. )
z(d1) = number of D1 priority flows per time period 't' in seconds, where z is a count from 0 upwards ( hmmm, problems with upper bounds.. )

It is recommended that 't' is set low initially ~ 1 week.


Payload plane node: Connectedness / Importance (CI) = not sure of my maths here yet but some sort of integral over a set!

Control plane node: Connectedness / Importance (CI ) = not sure of my maths here yet but some sort of integral over a set!
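One possible concrete form for that CI figure — purely my own sketch, not a settled formula — is a weighted sum of the per-priority flow counts z(x) using the c(x)/d(x) weightings above, squashed into (0, 1) with an exponential so the unbounded-count problem noted earlier goes away:

```python
import math

def ci_score(flow_counts, n):
    """flow_counts: dict mapping priority x (1..n) -> number of flows z(x)
    seen in period 't'. Each count is weighted by n / (x^2 + n) per the
    table, then the unbounded raw sum is squashed into (0, 1) via
    1 - exp(-raw). This is an assumed form, not the author's final maths."""
    raw = sum((n / (x**2 + n)) * z for x, z in flow_counts.items())
    return 1 - math.exp(-raw)

# A node seeing mostly priority-1 flows scores close to 1; a node seeing
# only low-priority flows scores lower for the same volume.
print(round(ci_score({1: 10, 2: 3}, n=5), 3))
print(round(ci_score({5: 13}, n=5), 3))
```

The squashing step is one answer to the "problems with upper bounds" aside: any monotonic map from [0, ∞) onto [0, 1) would do.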


Vulnerabilties and Threats (VT)
===============================

Value (VT) 0-1

Without re-inventing the wheel, the FIRST ( http://www.first.org/ ) CVSS ( Common Vulnerability Scoring System ) shall be used as a metric to help enumerate known vulnerabilities in the organisation. The vulnerability / threat concept shall be that of a 0-1 value and may have other properties based upon the 'Trust' level as viewed by the organisation.

Actually relating these metrics to the organisation will require a 'Vulnerability Assessment' of sorts, which may take the form of an automated tool or a manual process. It is hoped that calculations may be done automatically in the future, based upon some form of 'risk' or correlation engine that can take feeds from CVSS-enabled vulnerability scanners. It is recommended that a vulnerability scanner be given ubiquitous access to segments, either locally or such that if future ACL or FW changes occur, all known/existing vulnerabilities are still capable of being enumerated. It should be noted that vulnerability scanning carries risks of its own regarding stability and availability. Such scanners would be treated as type (a) nodes.

IDS/IDP/IPS may also feed in to these figures as confirmation and escalation of threat levels.

A known vulnerability or multiple vulnerabilities for a node generates a (V) value, which is multiplied should the trust profile and/or known flows, posture or IDS confirm the threat (T).
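A minimal sketch of that V-times-confirmation idea: CVSS base scores run 0-10, so dividing by 10 gives the 0-1 value; the escalation multipliers below are illustrative assumptions of mine, not defined anywhere above:

```python
def vt_value(cvss_base, confirmed_by_ids=False, untrusted_zone=False):
    """Map a CVSS base score (0-10) to the 0-1 VT value, escalating when
    IDS confirmation or un-trusted zone placement confirms the threat.
    The 1.5x and 1.25x multipliers are illustrative assumptions only."""
    v = cvss_base / 10.0
    if confirmed_by_ids:
        v *= 1.5
    if untrusted_zone:
        v *= 1.25
    return min(v, 1.0)  # VT is capped at 1 per the 0-1 definition

print(vt_value(7.5))                         # 0.75, unconfirmed
print(vt_value(7.5, confirmed_by_ids=True))  # 1.0 (capped)
```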


Data Visualisation 'IP Bubblewrap'
==================================

The concept is that of a 3D / isometric cube which plots two distinct planes made up of equi-sized bubbles ( a 'bubblewrap' malleable plane each, if you will... ). Bubbles may be considered as individual nodes but, more generally, will be groups of nodes with similar connectedness / flows / posture / IP prefixes. Bubbles shall attempt to cling together ( i.e. have some form of stickiness ) to provide a single viewable plane, but when queried directly will represent exact figures.

The Control plane [type (a) nodes/groups] starts as one horizontal plane at the base of the cube, and the Payload plane [type (b) nodes/groups] is another horizontal plane starting at the middle; thus the cube is sub-divided horizontally into a Control space and a Payload space. All values vary between 0 and 1, and as such the actual planes that may be inhabited form two short, squat vertical cylinders.

The four sides of the cube represent 'Trust' levels, e.g. 2x Trusted ( opposite faces ), 1x Semi-Trusted and 1x Un-Trusted. This allows for the majority of hosts ( which should be 'trusted' ), but the plot may be skewed towards the 'non-trusted' or 'semi-trusted' by graphing along a diagonal border of two zones for visual purposes.

Distance from the cube's sides to the centrepoint, measured horizontally, is the measure of connectedness and importance (CI). The closer a bubble is to the centre, the more connected and important it is; towards the edge, less connected and less important. A node may have thousands of flows per second that are of no major importance to the business, or a host may have few flows of major importance to the business.

Risk is calculated as the inverse of the distance between the node and the apex of the cube/cylinder and is a value between 1 and 100.

Basically, the closer to the centre of the cube and the higher up, the greater the risk.

Multiple views may be taken and filtered upon, including a node or 'bubble' in the Payload plane that has relationships or reliance upon nodes in the Control plane. This can be easily addressed via flows and / or SNMP with normal topological data, but would correlate risk very quickly and give good visual interpretations thereof. Also, most topological data uses graphs to visualise, but as this is a risk map, two 'bubbles' [type (b) nodes or groups] may sit next to each other, even touch, yet not have direct connectivity. They may only speak down to the Control plane and back up to the Payload plane ( unless they are similar segments with different risk ratings ).


Risk
====

Risk = 1 ÷ √ ( ( 1 - VT )² + ( CI )² )
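That formula in code, as a sketch: note it is singular at VT = 1, CI = 0, and unscaled it actually runs from about 0.707 up to infinity rather than 1-100, so some scaling or cap is evidently intended — the cap below is my assumption:

```python
import math

def risk(vt, ci, cap=100.0):
    """Risk = 1 / sqrt((1 - VT)^2 + CI^2), with VT and CI both in [0, 1].
    The value blows up as VT -> 1 and CI -> 0 (a maximally vulnerable,
    maximally central node), so we cap it; the 1-100 range stated in the
    text suggests a cap or scaling step like this is assumed."""
    denom = math.sqrt((1 - vt)**2 + ci**2)
    return cap if denom == 0 else min(1 / denom, cap)

print(risk(0.0, 1.0))  # least risky corner: ~0.707
print(risk(0.9, 0.1))  # highly vulnerable and central: ~7.07
print(risk(1.0, 0.0))  # the singular apex, capped at 100
```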


[ Image: bubblewrap2 ]

Thursday, June 08, 2006

Looking forward?

So what about IPv6's innate ability to source-route? Perhaps it could allow for the types of 'sinkhole' discussed below, such that a valid / active node on the network would opt in to route through a 'security gateway'... and everyone else is deemed bad?

Just a thought... don't shoot the messenger!

Monday, June 05, 2006

Why can't I have my 'Intelligent Packets' and eat them too?

This may sound like a form of network DRM.. however let's motor on shall we?

Idea I: could an app call the local host TCP/IP implementation ( an additional API/feature? ) to write a bit to the IP header, similar to TOS/DSCP/Precedence, to define the *Confidentiality* or *Usability* of a packet, and would it be honoured in transit or at the endpoint? Could you just reserve an existing 'Class', or have you used them all already? Is IP-in-IP a possibility for 'intelligent packets', or must it be an 'intelligent network'? Does this all go out the window with IPv6, or should we be building more security into the type of data object with our v6 stacks and be more in sync with our 'shrinking perimeters'?
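The nearest thing that exists today is the socket-level TOS/DSCP mark: an application can already ask the local stack to set the TOS byte on outgoing packets ( whether transit networks honour or re-mark it is exactly the open question above, and a 'Confidentiality' bit would be a new protocol extension, not this ). A sketch; the DSCP value chosen is just an example:

```python
import socket

# Ask the local IP stack to mark outgoing packets with DSCP EF (46).
# The 6-bit DSCP sits in the upper bits of the 8-bit TOS byte, hence << 2.
DSCP_EF = 46
tos = DSCP_EF << 2  # 0xB8

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0xb8 on Linux
s.close()
```

This marks the packet, but says nothing about enforcement: any router in the path is free to ignore or rewrite the byte, which is the gap between 'intelligent packets' and an 'intelligent network'.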

Idea II: why not use intentional enterprise 'sinkholes' to achieve TIA ( Total Information Awareness ) on your cores... and watch everything in realtime? Lots of negatives here, but bear with me a minute... this is more for the Enterprise than the Telco...

Over-ride the standard default route, or force edge traffic to the regional or local core first... whereupon you can watch and/or look at session data? It would be too hard to change hosts' default gateways, and not practical/achievable anyway. Why not turn the IDS / sniffing mentality inside out? Force the traffic to high-bandwidth cores for inspection, cleaning/recording/scrubbing? Combine it with edge IPFIX/Flows? C'mon, bandwidth is not a problem anymore... well, only in ASIAPAC ;)

For real-life examples ( only from a 'billing' perspective.. ) see Cisco's Intelligent Service Solution:
http://www.cisco.com/en/US/products/ps6588/products_ios_protocol_group_home.html

and

Service Control Engine:
http://www.cisco.com/en/US/products/ps6150/products_data_sheet0900aecd801d8564.html


But from an Enterprise security and assurance perspective.... anyway watch this space.. I will think about this some more....

Sunday, March 26, 2006

A Ph.D, product or pet project?

Been thinking lately that I would like to continue learning Python by building something other than my Netscreen config parsers ( note: this is also helping me get back to OO programming... )

So right now I am still thinking of my distributed gaming / network viz platform ( http://bsdosx.blogspot.com/2005/07/i-wish.html ) that turns close to all supported nodes into collectors, engines and viewing platforms. ( Would love to do touch-screen also, a la: http://www.youtube.com/watch?v=iVI6xw9Zph8 , but that's Phase 2 :p )

Constituent parts thus far include VTK http://public.kitware.com/VTK/index.php , Twisted Python http://twistedmatrix.com/trac/ , and perhaps an ESM ( End System Multicast ) http://esm.cs.cmu.edu engine for scaling, congestion etc, rather than relying upon network-based multicast in heterogeneous environments.

Bittorrent to distribute updates, policy and new functionality. DNS as a common C&C ( Command and Control ) channel for a 'Nematoad' ( nematode ) under a new sub-domain. Maybe use the domain as a test with RFC 1918 IP address space to stay within your enterprise borders ( urk, 'cept for extranets and other examples of double/single NAT that may impinge! ). Also maybe use intentional defects in the SOEs to replicate to valid hosts, rather than actual exploits.

If anyone is aware of Australian courses offering 'Security Metrics' http://www.securitymetrics.org/ and Network Visualization I would be happy to hear from you!

Saturday, March 18, 2006

Musings on the ebb and flow of packets....

An interesting perspective from XXXX... notwithstanding the fact that he hasn't played with these types of device, it is a good high-level theoretical approach with some definite weaknesses pointed out.

I guess what interests me, and has done for some time on large national and international networks, is that there is not very much contextualised information around what *is* normal and what is anomalous ( sometimes no idea at all! ). Also, being able to tie many streams of information together in real time, such as netflow, signature-based IDS, anomaly detection and CIDR awareness, helps a great deal. The complexity of today's networks ( hosts/apps/user behaviour too! ) is such that the same issues of tuning, false positives and meaningful data are tough; however, depending upon the degree of network segmentation, homogeneity of platforms and the type of business you are in, this can be greatly reduced and made more effective. I recently spent some time with a friend from Cisco over a beer or two, watching the 80mb DDOS flows to cisco.com and how their Peakflow reported upon it... some quite interestingly defined thresholds for some other attacks also. They also have a 'clean-pipes' solution... anything to help you stay afloat when millions go through your website each day... admittedly you always need bigger pipes than an attacker, but they don't always have to be yours. What really adds value, though, is in the core...

The PeakFlow SP solution is also interesting and can really help; more so to define traffic and billing ( peering/ratios etc ) and is somewhat BGP-aware. It becomes very interesting for Tier 1 and Tier 2 transit carriers, who are trying to traffic-engineer some very interesting issues and bill accordingly, to actually know what is going on. PeakFlow allows for contextualised information and can see both the flows and certain defined BGP attributes, amongst other things, and can help to address the issues of time and error in trace-back, to facilitate targeted sinkholes at the edge... understanding the data and control planes on networks is crucial.

One of the biggest problems we face right now on enterprise networks is the transparency, visibility and auditability challenge. Unfortunately the network *is* more aware and is being expected to influence traffic in many new ways, not just to route packets: QOS, triggered changes in routing, complex traffic engineering, and more 'application' awareness such as Cisco's NBAR to influence flows....

Today, one has to be across every transiting technology and node, including the topology, security and criticality of each conceptual and physical business area. Tools to help alleviate this are good(tm)... and using extrusion detection and flow monitoring helps to identify infected hosts, worm sign, reconnaissance and probing ( internally ), and contributes to the quality and usefulness of empirical data when dealing with incidents. I am not negating inherent host and application issues, but at the end of the day the hosts, applications and network all play a role and impose themselves upon each other with different, unique characteristics and behavioural signatures... perhaps some day IPv6 and IPSec will allow for close to 100% encryption, but right now we have limited edge use of IPSec, and payload encryption is not so much an issue... data has to be mobile to be useful or destructive; once it moves, it leaves traces... and hopefully one still owns the control channel of one's network. ( Hopefully! )

Boxes that can do good Capacity Management, flows, basic/light flow-based NIDS and are somewhat 'network-aware' ( routing protocol attributes ), such that they can somewhat contextualise the data and control planes of a network, are in my opinion still immature; but better than anything else out there right now!... I am waiting to combine such a tool with an onboard routing daemon that can interrogate the enterprise routing table ( negating certain sloppy summarisation! ) and see what address space is currently 'dark' within private ranges, to provide an 'aware' darknet that ebbs and flows as address space usage does.
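That 'aware darknet' idea can be sketched with nothing more than set arithmetic on prefixes: take the RFC 1918 ranges, subtract whatever the routing table currently advertises, and what remains is the dark space to watch. The routing table below is a toy example of mine, not real data:

```python
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def dark_space(advertised):
    """Return the RFC 1918 prefixes NOT covered by the advertised routes.
    A real implementation would pull 'advertised' from a routing daemon
    and recompute as the table ebbs and flows."""
    adv = [ipaddress.ip_network(a) for a in advertised]
    dark = []
    for block in RFC1918:
        remaining = [block]
        for a in adv:
            nxt = []
            for net in remaining:
                if net.subnet_of(a):
                    continue  # fully advertised -> not dark
                elif a.subnet_of(net):
                    nxt.extend(net.address_exclude(a))  # carve out the lit part
                else:
                    nxt.append(net)  # disjoint -> stays dark
            remaining = nxt
        dark.extend(remaining)
    return dark

# Toy routing table: one /16 of 10/8 and one /24 of 192.168/16 in use.
d = dark_space(["10.1.0.0/16", "192.168.10.0/24"])
print(sum(n.num_addresses for n in d), "dark addresses to monitor")
```

Anything sourcing traffic to (or from) those remaining prefixes is, by definition, worth a look.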

I guess my point is that the convergence of tools and methodologies to gain insight and awareness into your network is better than not having such tools, or having them distributed amongst groups that are unaware or unable to 'share' or contextualise data. In responding to some major worm outbreaks, un-intentional internal DOS, traffic engineering / billing issues etc. on some pretty large networks, I would have given the left part of some of my anatomy for such tools / visibility.... there is no silver bullet, I'll give you that, but in this arms race the 'Nuclear' Holocaust will do no one any favours -> it only provides leverage.... which points back to more subtle cooperative, political and legal ways to address such global threats.... terrorism is alive and well, and packets make good suicide bombers!

Donal