
Sunday, June 24, 2007
Damn straight
Creativity expert Sir Ken Robinson challenges the way we're educating our children.
Thursday, June 21, 2007
Dear Sirs..
Another bold question if I may. The topic is trust. The subjects are sheeple and computer systems. The framework is IT Security. The context is always changing. The goals are the same. Intent is irrelevant. Miscreants abound.
Excuse me arguing by analogy, but this online age-verification system for access to movie trailers sums up many of the major issues and blind spots in IT Security.
http://blogs.csoonline.com/dirty_trailers_cheap_tricks
As the depth, pace and breadth of technology increase, no one can be expected to be an expert in all the systems and subsystems they use, interface with or build upon. Knowing what's going on 'under the hood' is becoming increasingly abstract and esoteric, especially for the standard consumer of computing resources. The issue is compounded by depth of code, system complexity, legacy systems, and third-party drivers and modules, which are either knowingly or unknowingly part of a solution. Users require protection from both themselves and others while interfacing with systems or when having their information stored or utilised.
Unfortunately, global systems span geo-political boundaries, and they can be hijacked and used to attack more innocents. (Unfortunately, systems will continue to be, or will become, vulnerable over time!) And I am talking about any node here: routers, switches, firewalls and traditional endpoints.
I am leaning towards the belief that more services should be available to end-users in their local cloud. Not necessarily mandated, but available, depending upon the environment. This is a highly complex and potentially volatile area, and arguments abound; however, the question should be 'what's effective?'. DAMN -> fast, reliable and cheap. Though I like reliable!
How can you trust unmanaged systems and users (also known as information processing nodes!)? See previous post.
How can you trust managed systems and users?
How can you trust infrastructure nodes?
Expect them all to fail. Expect them to be compromised. Expect to lose trust in them.
Now where does that leave us?
Let's look at the enforcement points on a simple systems trust model again... See previous post. (I like to think of the diagram as the equivalent of a Feynman diagram for IT Security, tee hee!)
So, some stuff to think about. Here's a new acronym/phrase for you, akin to SOA (Service Orientated Architecture).
SOV (Service Orientated Vulnerability): can be a compound or blended vulnerability.
SS (Service Surface): interface, network, user, back-end, etc.
IS (Interface Surface): a subset of the above that takes into account multiple new input vectors, as the future interface will have more than one API/endpoint/processor per endpoint, utilising new input devices and virtualisation.
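Sketching these surfaces makes them concrete. A minimal toy model (the categories and names here are my own invention, not any standard schema), treating each entry point as a string label:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceSurface:
    """One node's Service Surface (SS): every face it exposes."""
    interfaces: set = field(default_factory=set)  # APIs, input devices, ...
    network: set = field(default_factory=set)     # listening ports/protocols
    users: set = field(default_factory=set)       # interactive principals
    backend: set = field(default_factory=set)     # databases, queues, ...

    def all_points(self) -> set:
        return self.interfaces | self.network | self.users | self.backend

    def exposure(self) -> int:
        """Naive metric: count of distinct entry points."""
        return len(self.all_points())

def blended_sov(*surfaces: ServiceSurface) -> set:
    """A Service Orientated Vulnerability (SOV) compounds across nodes:
    the blended surface is the union of every participating node's SS."""
    total: set = set()
    for s in surfaces:
        total |= s.all_points()
    return total
```

The point is only that a blended SOV is at least as large as any single node's surface, which is why composition keeps surprising us.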
Fun, fun, fun.
Every node will be a client.
Every node will be a server.
Every node will be a cache.
So now, do you trust the node, or introduce another trusted node to watch the node?
This could go on ad infinitum. At some point you hope there are enough checks and balances to watch the watchers.
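The watcher regress can at least be made mutual rather than infinite. A toy sketch, assuming every node can read its neighbour's state and a known-good fingerprint list exists somewhere (both big assumptions in practice):

```python
import hashlib

def fingerprint(state: bytes) -> str:
    """Checksum a node's state (config, binaries, whatever you trust)."""
    return hashlib.sha256(state).hexdigest()

def ring_audit(states: list, expected: list) -> list:
    """Arrange nodes in a ring; node i audits its neighbour (i+1) % n.
    Returns indices of nodes whose watcher raised an alarm, so every
    node is watched without introducing any extra watcher node."""
    n = len(states)
    alarms = []
    for i in range(n):
        j = (i + 1) % n
        if fingerprint(states[j]) != expected[j]:
            alarms.append(j)
    return alarms
```

A single compromised node gets flagged by its honest watcher; colluding neighbours defeat it, which is exactly the checks-and-balances limit above.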
Can we checksum people, anyone?
Schneier gets credit for leading me to the age verification system... http://www.schneier.com/blog/archives/2007/06/age_verificatio.html
This morning, the New York Times has a nice story on gateways to online movie trailers that contain adult content. Trailers online will be preceded by colored tags, just like the green one you see in theaters indicating the preview is acceptable for anyone watching. A yellow tag indicates the trailer may include PG-13ish content, and a red one indicates an R-rated trailer, though red tags are rarely used in theaters.
The trailers that appear on the studios' movie sites, the story said, also have time-of-day restrictions, ostensibly viewable only between 9 p.m. and 4 a.m.
More here
Sunday, June 17, 2007
Friday, June 15, 2007
What do IT Security and an HIV/STD test have in common?
Answers on a S.A.E. ( Self Addressed Email )
Thursday, June 14, 2007
Symbiosis
If one doesn't separate the human from the endpoint system (which is what client-side security is really all about), then, and only then, will we make progress in the IT security battle. The human, peripherals and machine comprise the client-side endpoint, which needs to be protected in its entirety! Now let's think about Integrity, Availability and Confidentiality again.

Aside: Lines are being blurred between the conceptual client and server roles each day. Service orientated enterprise architectures are only a minor part of the puzzle... Let us never forget the users, administrators, operators and developers as part of the overall puzzle. (Or is it a mystery?)

Dorky is right!
IT Security needs more than this Open University style waffle.
I prefer the 'Look Around You' approach to learning ;)
Wednesday, June 13, 2007
A text from the ether
I got this message via text from a friend today:
"How many wasted thought cycles do we have each day, each month, each year, in a lifetime? How does fear rule our actions, control our thoughts, overrule our instincts, and dictate our emotions? Are we conditioned how to act and react? Are we bred to slave over data in the workplace? Have our minds been turned in to computers? Have our bodies been bred to consume? Are we drugged from childhood? Are we awake and if we were, how would we know?"
And here is a nice TED talk from Tenzin Bob Thurman (Uma Thurman's Dad!), who became a Tibetan monk at age 24, about a topic I would refer to as 'enlightened self-interest':
On my tech mind.
- Complexity Crunch
- Feedback Loops
- Change Management
- Reliability(Integrity)
- Loosely Coupled
- Mobile
- Everything is a client, everything is a server, everything is a cache
- Distributed content inventories
- Intelligent packets
- Metrics
- Quality of information
Sunday, June 03, 2007
The Blue Packet

This is a great post from a site I like about the mobile Telco industry. It made me laugh out loud. Things that evoke an audible response from you are special, whether good or bad!
Link ( also in image ) : http://the.taoofmac.com/space/blog/2004/11/08
Tuesday, May 29, 2007
Friday, May 25, 2007
Ack, Ack, Ack
Just wanted to reiterate something from Wade's blog:
Watch your thoughts: They become your words.
Watch your words: They become your actions.
Watch your actions: They become your habits.
Watch your habits: They become your character.
Watch your character: It becomes your destiny.
OSX'ers... please don't create a monoculture!
Well, there is an argument out there that the security framework of OS X/BSDs is far superior to that of Windows. However, aside from the MOAB (Month of Apple Bugs), which incidentally didn't yield an unassisted, wormable, arbitrary remote code exploit, it's nice to see some of my trusted analysts chime in.
"Apple running OS-X is the clear operating environment of choice today for most normal users and most businesses, especially for notebook computers."
Report here from Fred Cohen and Associates: http://all.net/Analyst/2007-06.pdf
Monday, May 21, 2007
On the up and up.
Glad to see Bruce Schneier sums up nicely my emergent view and business plan.
Link here: Do we really need a security industry?
[ http://www.schneier.com/blog/archives/2007/05/do_we_really_ne.html ]
Saturday, May 19, 2007
How to assign value to digital objects and flows
This may be my next programming project. As a wise man once said, "You either code, or you don't!". Hmmm.. I think it was me actually. As a student of life once said, "...
Anyway, here's the new pitch: a statically linked, cross-platform binary to implement my 'Doobies' scheme for information evaluation in an enterprise. It takes advantage of multicast DNS and unicast DNS, so the paths are already there! The client shoots off reports every so often to the 'reporter', which is the first entry in the 'value' subdomain, i.e. 'reporter.value.companyx.com'.
Building blocks for the client: Zeroconf, Netconf, BeePy, mDNS, nProbe and, for some unknown reason (maybe resiliency?), DHTs come to mind, as does Anycast!
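As a sketch of the client half, assuming the 'reporter.value.companyx.com' convention from the pitch (the report shape and field names here are my guesses, not a spec):

```python
import json
import socket
import time

VALUE_ZONE = "value.companyx.com"  # per-organisation convention (assumed)

def reporter_address(zone: str = VALUE_ZONE) -> str:
    """The reporter is, by convention, the 'reporter' entry in the value
    sub-domain; resolution rides whatever DNS (unicast or multicast) the
    host already uses."""
    return socket.gethostbyname("reporter." + zone)

def build_report(node_id: str, counters: dict) -> bytes:
    """Package this node's per-type 'dooby' counters for shipping off
    to the reporter."""
    return json.dumps({
        "node": node_id,
        "ts": int(time.time()),
        "doobies": counters,  # e.g. {"dns-flows": 12, "customer-ssn": 1}
    }).encode()
```

The transport is deliberately left open; the point is that discovery and addressing cost nothing because DNS is already deployed.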
My head hurts ...
The web is about to explode all over again, and I mean in a 2001-2003 CodeRed/Slammer/Nimda/Blaster/Nachi type of way. With services like Dapper and the new flavours of mashup AJAX'y apps, it's hard to get your head around how information will be mangled by consumers, hobbyists and MISCREANTS.
I believe soon everyone will be running their own OpenID servers or will require SSO services to reduce the identity overhead of all these network-centric services. No one has addressed the old issues of domain ownership and transfer, though. These are generally rooted in silly things like confirmation by fax, whereby no one bothers to check the calling party's number. Don't get me started on headed notepaper.
I used to "dis" the Jericho Forum, but the web is morphing from the inside out. Combine this with mesh, mobility and multicast/p2p, and the funny thing is... we need to secure even more rather than less in enterprises. We've known this for a while. Anyone who throws out their firewalls yet might as well take the doors off their house too. Decommissioning is expensive at all levels and hard to do well. Legacy kit and issues abound.
However, the paradigm has already changed. It's still the Internet and the World Wide Web; there's just more of it, and the information is being atomised and made even more malleable and 'remixable'.
This scared me today, though I had heard of the previous incidents of self-replicating XSS ...
Funny thing is, all these open API's are creating another type of wider monoculture built on more layers than just TCP/IP.
Doobies.
I have joked before about units called 'doobies', but the idea is simple and flexible. Assume secure DNS. Use DNS as the dynamic database that it is, to create a sub-domain that relates to value. Each organisation may have a different value/exchange rate to its own country's currency unit.
Once you break down your traffic into objects and flows and start quantifying different types, you can assign arbitrary amounts to atomic entities to begin with and tweak from there.
value.companyx.com
dns-flows.value.companyx.com
dns-packets.value.companyx.com
dns-records.value.companyx.com
customer-ssn.value.companyx.com
customer-address.value.companyx.com
This could get very complicated very quickly, but could also be as basic and simple as one wanted. Using either any part of the IPv4 address space, or just bogon/Martian space (RFC 1918/RFC 3330), current values are resolved, and the scheme could have huge scope depending on the organisation.
This value is your 'dooby' value. Devices report back, or are queried, on how many of each type of object they have processed or stored in the interim. Devices then supply flexible stats and can consult a central value database. (Kinda like SNMP/RMON, only better, unless I am missing something!)
DNS is ubiquitous. Kernel hooks into a special accounting/reporting client are required.
A device processed x 'doobies' of type y. What is the current 'dooby' exchange rate for my organisation?
Maybe you could re-use SNMP, but I think the centralised DNS store of current values is more flexible.
Thoughts? This is just a beer-mat scribble of an idea on my behalf.
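One hypothetical encoding for those records, strictly beer-mat grade: resolve the name to an RFC 1918 A record and read the current value out of the low 24 bits, so dns-flows.value.companyx.com resolving to 10.0.1.44 would mean an exchange rate of 300:

```python
import ipaddress

def dooby_value(a_record: str) -> int:
    """Decode a 'dooby' exchange rate from an A record kept in
    RFC 1918 space: the low 24 bits of the address carry the value."""
    ip = ipaddress.IPv4Address(a_record)
    if not ip.is_private:
        raise ValueError("dooby records should live in RFC 1918 space")
    return int(ip) & 0xFFFFFF  # 10.0.1.44 -> (1 << 8) + 44 == 300
```

Any ordinary resolver and cache then distributes the value table for free; updating an exchange rate is just a zone edit.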
Thursday, May 17, 2007
Horse and cart? Cart and horse?
Donal to Securitymetrics mailing list.
(snippet)
Is not our problem that of assigning value to digital objects and/or their contents? First we need a good handle on our objects.
So intrinsic in 'Security Metrics' I posit are 'Non-Security Metrics' of sorts ;)
Are we putting the cart before the horse?
(snippet)
Basically, the thrust here is that we are trying to measure security and risk without first fully measuring the playing field, the players and the game. This is self-defeating, as we then only sell FUD. One must first assign a value to digital objects, no matter how hard that may be. I have suggested interim value units in the past that can subsequently be assigned dynamic financial values on a per-organisation basis. This could be achieved with DNS (though DNS is a target in itself!).
"Security metrics deal with risk and risk is not about security - it's about the utility of content." ( From a highly respected individual in the field. )
So how do we measure our content and track it in the first place?
We cannot assign a value to something if we don't know it's actually there, what exactly it is, how many of them there are, where they live, etc. A flexible, real-time, distributed content inventory is required. This harks back to my emerging belief in a form of 'Total Information Awareness' and digital surveillance of networks: distributed endpoint file/object indexing, keylogging, etc. This then also raises issues regarding the security of said goldmine of information.
Yes, I am steering back towards the 'network computer'... thin everything!
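Before anything can be valued it has to be found and counted. A minimal content-inventory sketch (a directory walk plus SHA-256; a real deployment would need incremental, distributed indexing and its own protection, per the goldmine problem above):

```python
import hashlib
import os

def inventory(root: str) -> dict:
    """Walk a tree and fingerprint every file. Returns digest -> paths,
    so duplicate content collapses onto one key and 'how many and
    where' falls out for free."""
    index: dict = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            index.setdefault(digest, []).append(path)
    return index
```

Once content is keyed by digest, assigning a per-type value (a 'dooby') is a lookup rather than a guess.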
Wednesday, May 16, 2007
Watch the bits go bye!
More Infosec stuffing:
Haven't brushed up on 'information geometry' yet ;) but this reminds me of what I was trying to map out with raw real data here:
http://static.flickr.com/47/174233556_2c39eb159b_o.jpg
Long rambling post lives here if anyone is interested, but very network centric and is garrulous and overblown: http://bsdosx.blogspot.com/2006/06/byo-rfc.html
Basically, should we be mapping everything in real time at the data object and/or flow level, from an operational perspective? Could every managed node actively stream back data? Should there be secure management covert channels (think Sebek: http://www.honeynet.org/tools/sebek/sebek_intro.png) to constantly feed back a node's state, message passing and flows?
When you think about it, are nodes too independent and not surveilled enough? Rather than configure something to monitor/watch them (OpenView, IDS, Argus), assuming initial trust, could they *constantly* advertise/disseminate statistical/session data that could be baselined (other than syslog/SNMP traps etc.)? I am thinking initial Zeroconf and MANET-style operation here, or MMORPG gaming clients? libkstat on steroids?
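Baselining those constantly advertised counters needs very little machinery to start. A sketch, assuming each node streams one scalar per interval and we flag samples more than k standard deviations from recent history:

```python
from collections import deque
from statistics import mean, stdev

class Baseline:
    """Rolling baseline over a node's self-advertised counter; flags
    samples that stray more than `k` standard deviations from the
    recent window."""

    def __init__(self, window: int = 30, k: float = 3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 5:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        self.history.append(value)
        return anomalous
```

syslog and SNMP traps tell you what a node chose to say; a baseline like this tells you when what it says stops looking like itself.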
I know Verdasys have Digital Guardian and CA have Audit... but will Enterprise Digital Rights Management scale, or does it have the same problems as PKI?
Surveillance and adhocracy scale. With utility computing, servers will move and be re-purposed, and the clients are already on the move.