Been thinking lately that I would like to continue learning Python by building something other than my Netscreen config parsers ( note: it's also helping me get back into OO programming... )
So right now I am still thinking of my distributed gaming / network viz platform ( http://bsdosx.blogspot.com/2005/07/i-wish.html ) that turns close to all supported nodes into collectors, engines and viewing platforms. ( Would love to do touch-screen also, a la: http://www.youtube.com/watch?v=iVI6xw9Zph8 , but that's Phase 2 :p )
Constituent parts thus far include VTK http://public.kitware.com/VTK/index.php , Twisted Python http://twistedmatrix.com/trac/ , and perhaps an ESM ( End System Multicast ) http://esm.cs.cmu.edu engine for scaling, congestion control etc., rather than relying upon network-based multicast in heterogeneous environments.
BitTorrent to distribute updates, policy and new functionality. DNS as a common C&C ( Command and Control ) channel via a new 'Nematoad' ( nematode ) sub-domain. Maybe use the domain as a test with RFC 1918 IP address space to stay within your enterprise borders ( Urk, 'cept for extranets and other examples of double/single NAT that may impinge! ). Also maybe use intentional defects in the SOEs to replicate to valid hosts rather than actual exploits.
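Since the constraint above is that any 'Nematoad' replication stays inside RFC 1918 space, a quick scope check is easy to sketch in Python ( the target list here is made up for illustration ):

```python
import ipaddress

# RFC 1918 private ranges -- the replication experiment should never
# reach past these enterprise-internal blocks.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(addr):
    """True if addr falls within RFC 1918 private address space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

# Gate would-be targets on address scope before doing anything else.
targets = ["10.1.2.3", "172.31.255.1", "8.8.8.8", "192.168.0.42"]
internal = [t for t in targets if is_internal(t)]
print(internal)  # the public 8.8.8.8 is filtered out
```

As the parenthetical above already warns, though, double/single NAT at extranet borders can make an address *look* internal when the far side is not, so a check like this is necessary but not sufficient.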
If anyone is aware of Australian courses offering 'Security Metrics' http://www.securitymetrics.org/ and Network Visualization I would be happy to hear from you!
Sunday, March 26, 2006
Saturday, March 18, 2006
Musings on the ebb and flow of packets....
An interesting perspective from XXXX... notwithstanding the fact that he hasn't played with these types of devices, it's a good high-level theoretical approach with some definitive weaknesses pointed out.
I guess what interests me, and has done for some time on large national and international networks, is that there is not very much contextualised information around what *is* normal and anomalous ( sometimes no idea at all! ). Being able to tie many streams of information together in real time, such as NetFlow, signature-based IDS, anomaly detection and CIDR awareness, also helps a great deal. The complexity of today's networks ( hosts/apps/user behaviour too! ) is such that the same issues of tuning, false positives and meaningful data are tough; however, depending upon the degree of network segmentation, the homogeneity of platforms and the type of business you are in, this can be greatly reduced and made more effective. I recently spent some time with a friend from Cisco over a beer or two watching the 80 Mbps DDoS flows to cisco.com and how their Peakflow reported upon it... some quite interestingly defined thresholds for some other attacks also. They also have a 'clean-pipes' solution... anything to help you stay afloat when millions go through your website each day... admittedly you always need bigger pipes than an attacker, but they don't always have to be yours. What really adds value though is in the core...
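Tying those streams together in real time can start as something very simple: bucket events from each source by time window and source address, and flag wherever independent sources agree. A toy Python sketch ( the event feeds and field layout are hypothetical ):

```python
from collections import defaultdict

# Hypothetical event feeds: (epoch_seconds, source_ip, detail) per stream.
netflow = [(100, "10.0.0.5", "flow: 445/tcp fan-out"),
           (160, "10.0.0.9", "flow: large transfer")]
ids     = [(105, "10.0.0.5", "ids: MS-RPC exploit signature"),
           (400, "10.0.0.7", "ids: port scan")]

def correlate(streams, window=60):
    """Group events that share a source IP within the same time bucket,
    keeping only buckets where more than one stream agrees."""
    buckets = defaultdict(list)
    for name, events in streams.items():
        for ts, ip, detail in events:
            buckets[(ts // window, ip)].append((name, detail))
    return {k: v for k, v in buckets.items()
            if len({name for name, _ in v}) > 1}

hits = correlate({"netflow": netflow, "ids": ids})
print(hits)  # only 10.0.0.5 trips both streams in the same minute
```

Real correlation engines obviously do far more ( weighting, topology and CIDR context, dedup ), but agreement across independent telemetry is the core idea.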
The PeakFlow SP solution is also interesting and can really help; more so to define traffic and billing ( peering/ratios etc. ), and it is somewhat BGP-aware. It becomes very interesting then in actually knowing what is going on for Tier 1 and Tier 2 transit carriers that are trying to traffic-engineer some very interesting issues and bill accordingly. PeakFlow allows for contextualised information, can see both the flows and certain defined BGP attributes, amongst other things, and can help to address the issues of time and error in traceback, facilitating targeted sinkholes at the edge... understanding the data and control planes on networks is crucial.
One of the biggest problems we face right now on enterprise networks is the transparency, visibility and auditability challenge. Unfortunately the network *is* more aware and is being expected to influence traffic in many new ways... not just to route packets... relating to QoS, triggered changes in routing, complex traffic engineering and more 'application' awareness, such as Cisco's NBAR, to influence flows etc....
Today, one has to be across every transiting technology and node, including the topology, security and criticality of each conceptual and physical business area. Tools to help alleviate this are good(tm)... and using extrusion detection and flow monitoring helps to identify infected hosts, worm sign, reconnaissance and probing ( internally ), and contributes to the quality and usefulness of empirical data when dealing with incidents. I am not negating inherent host and application issues, but at the end of the day the hosts, applications and network all play a role and impose themselves upon each other with different unique characteristics and behavioural signatures... perhaps some day IPv6 and IPSec will allow for close to 100% encryption, but right now we have limited edge use of IPSec, and payload encryption is not so much an issue... data has to be mobile to be useful or destructive; once it moves, it leaves traces... and hopefully one still owns the control channel of one's network. ( Hopefully! )
Boxes that can do good capacity management, flows, basic/light flow-based NIDS and are somewhat 'network-aware' ( routing protocol attributes )... such that they can somewhat contextualise the data and control planes of a network... are, in my opinion, still immature; but better than anything else out there right now!... I am waiting to combine such a tool with an onboard routing daemon that can interrogate the enterprise routing table ( negating certain sloppy summarisation! ) and see what address space is currently 'dark' within private ranges, to provide an 'aware' darknet that ebbs and flows as address space usage does.
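The 'aware darknet' part can be sketched as plain prefix arithmetic: take a private supernet, carve out whatever the routing table currently advertises, and what remains is dark. A minimal Python sketch against a made-up routing table:

```python
import ipaddress

def dark_space(supernet, advertised):
    """Prefixes inside `supernet` not covered by any advertised route."""
    dark = [ipaddress.ip_network(supernet)]
    for route in advertised:
        route = ipaddress.ip_network(route)
        nxt = []
        for block in dark:
            if block.subnet_of(route):
                continue  # block fully advertised ('lit'), drop it
            elif route.subnet_of(block):
                nxt.extend(block.address_exclude(route))  # carve route out
            else:
                nxt.append(block)  # disjoint, still dark
        dark = nxt
    return sorted(dark)

# Hypothetical enterprise routing table inside 10.0.0.0/8:
lit = ["10.0.0.0/16", "10.1.0.0/16"]
for net in dark_space("10.0.0.0/8", lit):
    print(net)  # everything in 10/8 that nothing currently advertises
```

Fed from a live routing daemon instead of a static list, the result would ebb and flow with real address usage, and traffic to any of the remaining prefixes becomes suspect by definition.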
I guess my point is that the convergence of tools and methodologies to gain insight and awareness into your network is better than not having such tools, or having them distributed amongst groups that are unaware of each other or unable to 'share' or contextualise data. In responding to some major worm outbreaks, unintentional internal DoS, traffic engineering / billing issues etc. etc. on some pretty large networks, I would have given the left part of some of my anatomy for such tools / visibility.... there is no silver bullet, I'll give you that, but in this arms race the 'Nuclear' Holocaust will do no one any favours -> it only provides leverage.... which points back to more subtle cooperative, political and legal ways to address such global threats.... terrorism is alive and well, and packets make good suicide bombers!
Donal
Sunday, September 25, 2005
CIA
Confidentiality, Integrity and Availability...
Sometimes we forget exactly how to tackle the last one: HA, Load-Balancing, BCP, Geographical Redundancy, Clustering, Primary/Secondary, Active/Active etc. etc...
Don't forget 'backups', http://taobackup.com/ ( nice but vendor related! )
Fun http://www.backuptrauma.com/video/default2.aspx?r=1 from John Cleese!
Sunday, September 18, 2005
Once more in to the breach....
So I have started to recite this phrase to myself on a Sunday evening ( over a beer.. or two.. ) before stepping once again into my job on a Monday morning...
I am an 'Information Security' practitioner for a large national mobile Telco and the landscape _is_ always changing... ( though we face the most basic challenges of yesteryear also..)
...out of the trenches and march forward into the (semi-)unknown! Perhaps someone will allow the 'Red Cross' in and sing 'Stille Nacht' over Christmas, while we bunker down and play a MMORPG.. however I doubt it, as the Internet never sleeps! ( And nor should SecOps! )...
I have been aware of Marcus Ranum for a while but revisited his site recently after a link was sent around for 'The Six Dumbest Ideas in Computer Security'.. http://www.ranum.com/security/computer_security/index.html
I would like to share with you some of the 'nuggets' on this 'Prophet's' site, which not only _pre-date_ but echo most of my sentiments -> if you have been here before:
Aside: I am only a mere mortal vs. this 'security-techno-demi-god' !
Quotes like:
1) Set up the production systems
2) Make them work
3) Test them
4) While true; do
If they are working; Continue; Endif
If they are not working; GOTO 2; Endif
5) Done
( Maybe OpenBSD + layered security + quality userland software.. )
or:
The mainframe programmers of the 70's and 80's used to write of a practice called "Change Control" - in which production systems were managed with care and forethought. During the late 90's the last of the Change Control believers were taken out and shot, and their cubicles were given to the consultants who were there to mark everything up in XML in order to make everything better in some manner nobody understands yet.
maybe the 'calendar', based upon the classic 'Motivations' calendars:
http://www.ranum.com/security/computer_security/calendar/index.html
Saturday, September 17, 2005
BadStreets
My old/new movie is online @ http://www.undergroundfilm.org/films/detail.tcl?wid=1020131
Go check it out!
Friday, September 09, 2005
Anomaly or progress...
Hmm.. again I love the advances in 'polymorphic worm' behaviour, traffic normalization, IDS, IPS etc. etc. etc...
But I really think we are missing the fundamental point entirely. My favourite phrase is 'Complexity is the Enemy', especially as it relates to fast-paced, ever-changing environments. 'Change Control', 'Change Management' or 'Release Management' is great.. but I have never seen it done really effectively. Even in one of the best networking companies in the world, it is still a form of controlled chaos, as best-effort / guesstimate work is done in identifying host dependencies in downstream networks, or similar service dependencies in downstream / upstream applications or code. ( Let alone full appreciation for business and supporting processes... ). Who _are_ these guardians of 'Change Control' who _really_ understand the _Infrastructure_ in all its glorious levels and depths... -----> 'techno-demigods' I think they would be called :)
"Well, that's the security guys / operations manager's role... oh, well then, it's the um administrators or engineering, or implementations guys....", I hear you all say in tandem.... well perhaps, but do they really know what's going on? Who actually did what, when, where and why? And could you really tell what was done and how?
Who are the implementors? Are they insourced, outsourced or was the update or change performed by some 'fly-by-night' technorati....? Relax my friends, it's all ok you uber-geeks, we all know the CIO knows exactly what's happening and is responsible for the whole shebang!
Take for example a business with a large dependency on IT ( any medium to large business desperate to bring an IT-based service or product to market -> think of Microsoft in the early days; some may argue still now...! ), sprinkle that with a lack of _quality_ in employees' experience and training and a lagging behind the pace of technology... then add a dollop of rapidly trying to use said latest and greatest technology, and has _anyone_ really got a handle on what's going on? Do they have the policies, management support / comprehension and business backing to inherently understand the risks to existing and future services? The risk to the products and current or projected revenue streams is vast while driving the pace at full kilter. Only experience lends itself to an instinctual appreciation of the hidden costs of _rushing_ something out the door without the necessary QA, UAT, SIT.... ( Quality Assurance, User Acceptance Testing, Systems Integration Testing )....
Remember that millions of lines of code are wrapped around all Operating Systems and Applications or Services, whether supporting the business or tied up in the business' delivery of products and services to its customers... then introduce the standard network users - driving the equivalent of virtual computer tanks and nuclear warheads with no proof of 'licensed to operate' or without the requisite training and experience. Mix this with network and system administrators, developers and database administrators with about as much scientific appreciation of computational logic and determinism ( in so far as _computer-systems_ are deterministic :) as the Incas had in believing in Sun Gods, and that engaging in human sacrifice and voodoo-like 'hibbidy-gibbidy' would appease said Gods of the time. Add to this a light sprinkling of 'management' who now find themselves in some _key_ technically related role, who have about as much experience with technology as those assembling their first 'Kinder Egg', with similar measures of people-management skills: akin, if you will, to the archetypal high-school-gym 'last pick' ability to inspire confidence, lead a team or score goals.
You are now ready to bake in the binary oven of success or failure, wait 30 minutes at 'Homeland Security' defcon 4 for the inevitable results.
'Baked Alaska' is not something you can get right with beginner's luck...
So back to the key theme: with such complexity, and a general lack of appreciation of said complexity.. it actually needs to be reduced to facilitate some form of control. Most solutions these days actually _increase_ the complexity to try and control the complexity! ( which doesn't really work without the correct resourcing, comprehension and management! )
Let's take a step back and focus on the basics. Let's cut out the fluff and focus on solid and secure systems and services that allow us to work on the real 'add-value' to the business or customers. Why is it we require an army of incompetents who create their own microcosms of increased complexity, entropy and cost, when computers are supposed to save us time so we can get on with what we're actually really good at?
Wednesday, August 31, 2005
Sunday, August 21, 2005
S.O.E. ( Standard Operating Environment )
Well, even if you use NetFlow on routers / switches, why not include something like Argus [ http://www.qosient.com/argus/index.htm ] in all your standard host builds, limited to its own slice / filesystem ( or implement some log rotation.. ), so the system or host itself builds a historical log of network relationships for troubleshooting, forensics etc. etc.
Sunday, August 07, 2005
How do we know about History? What are we doing wrong today....
Hmmmm.. simple premise.... we only uncovered much of what we know today about previous civilisations due to the mark they made upon the world, whether the information was intentionally created for recording purposes or was an unintentional byproduct of something else they did, used or created.
Here, the concepts of the intentional lifetime of data and the medium of storage chosen are of utmost importance. ( additionally data format / language and physical / logical interface to the data are of concern )
Some remnants of a society, such as architecture, may be considered a byproduct; however many buildings, such as the pyramids of Egypt and South America, were built to last the ages and were intended to be a legacy of the then rulers or of the civilisation itself. Funny that in the current modern era we have sprawling metropolises of concrete and steel which will in theory also stand the test of time, but we don't in essence continue to write or record anything on mediums with similar longevity. Cave paintings and vellum scrolls, when kept in the right conditions, can last for thousands of years and convey stories and records of life as it was, and lessons for future generations, whether intentional or not. Imagine, if you will, that the Rosetta Stone had been written on a sheet of modern paper, saved on a hard drive in a proprietary format, or burnt to a CD-R or DVD...... how long would it last, and what would future generations lose out on or be deprived of?
Maybe you are starting to see my point? We have seen amazing advances in the current and last century, mainly attributable to the rapid increase and spread of information. Cumulative knowledge allows for rapid progress. More raw human processing power, if you will, all connected and digesting reams of information, making inferences, connections, theories, statistical observations and basically learning, refining and increasing the sum knowledge of all humankind. Hopefully making things better and not worse!
Now imagine .... we destroy ourselves in a nuclear holocaust accompanied by a huge EM pulse ( electromagnetic pulse ) that wipes out most, but not all, of the digital data on the planet. The end of the current global age of the internet and digital data. We need to start from scratch, but most of the engineers, the basic information for building complex circuits, and the means to access any of the remaining survivable information are gone to us. Operating systems, source code and close to all raw data would be gone.
One could well ask whether any of it ever actually existed. How much are we missing from that which was daily life for the Egyptians, Greeks, Romans, Incas etc. etc.? From what we have found thus far.. e.g. ruins, artifacts, personal effects, architecture, farming and certain amounts of business and governmental records of the time - we build a picture of the politics, philosophy, medicine, science, mathematics, law, ethics etc. of these people's lives and overall civilisations....
Many of these civilisations either fell, mutated, diverged or imploded.... again imagine if you will the sum of all human knowledge available to us should we have had a cumulative repository over the past few thousand years... maybe we would be more advanced or maybe we would have wiped ourselves out properly, once and for all!!!!
We are creating and learning at a pace never seen before in human history ( well, to the best of our current knowledge, based upon what we have found.... how would we really know, unless they wrote it down or *all* the information and records had survived? ). What happens when our civilisation comes to an abrupt or bitter end?
Should we be doing more to ensure the information we create and learn about ourselves, our civilisation and our environment is given the intrinsic longevity it deserves.. if not for our children, for future races of humans, for the historians, researchers, teachers and perhaps those that once again one day try to rebuild society out of the dark ages?
Aside: This question also begs an answer to the issue of complexity in many of our sciences and systems and how we represent them. Ask yourself if it would be easy to rebuild, reproduce or look at a current high level system, grasp the underlying concepts and reproduce the outcomes. These systems I speak of could be anything from computer systems, to law systems to social systems. We are building a house of cards with no thought for my favourite question, not "Why" but "What if?".....
Aside II: Do we really actually care about the "What if?" or would it stifle our creativity and the speed of advancements if we were to spend more time ensuring the integrity and longevity of our cumulative knowledge? Why are we rushing so far and fast ahead in to the unknown, we'll still get there eventually... it will still be unknown a week from next Tuesday! Maybe it's time to slow down and take a timeout, have a 'kit-kat' and then take a really good long hard look at what we as a race are really doing and trying to achieve... Also is it sustainable and recreatable should we break it? The wonderful concept not of "How well does it work?", but "How well does it break?" comes to mind...
Right now I see a huge risk to society at large.
"How we represent, store, archive and share digital information. "
I believe the sum of all human knowledge is in danger... let's at least start by ensuring we could start from scratch again. ( or someone or something else could.. what would an alien archaeologist make of all this, should they visit earth after we have reduced ourselves to another pre-industrial age? "Bloody amateurs!" )... oh, and here's an interesting byproduct.... maybe a new paradigm would create a template for human learning: a code or method for how to educate children so they do not need to spend X times as long comprehending a mish-mash of overlapping disciplines, starting from scratch each time they enter a new field of study. Some may argue that that is the role language fulfils: a symbolic representation of ideas and concepts that allows them to be expressed and communicated.
There is no handbook for parents, and there is no common teaching system other than repetitively hammering information into children's skulls. Imagine the advances we could make as a race if we harnessed and guided the abstract thought processes of children. ( possible focus: "edutainment" )
Libraries and museums perhaps need a bit of a rethink and some real funding?
If we want to build a new society the 'Tipping Point' will start with the children.
If we want to 'keep' and progress our society we need to focus on 'keeping' the cumulative information safe and healthy.
If we want to advance our society we need to eliminate fear, greed and inequality.
Hmmm... rant over.. time to watch the Simpsons....
Some fun links to projects / papers:
Information Longevity http://sunsite.berkeley.edu/Longevity/
NARA National Archives and Records Administration ( American, http://www.archives.gov/ ) ERA ( Electronics Records Archive ) http://www.archives.gov/era/index.html
OSTA Optical Storage Technology Association ( http://www.osta.org/ )
Here, the concepts of the intentional lifetime of data and the medium of storage chosen are of utmost importance. ( additionally data format / language and physical / logical interface to the data are of concern )
Some remnants of a society, such as architecture, may be considered a byproduct; however, many buildings, such as the pyramids of Egypt and South America, were built to last the ages and were intended as a legacy of the then rulers or of the civilisation itself. Funny that in the current modern era we have sprawling metropolises of concrete and steel which will, in theory, also stand the test of time, but we don't in essence continue to write or record anything on media with similar longevity. Cave paintings and vellum scrolls, kept in the right conditions, can last for thousands of years and convey stories and records of life as it was, and lessons for future generations, whether intentional or not. Imagine if you will if the Rosetta Stone had been written on a sheet of modern paper, saved on a hard drive in a proprietary format, or burnt to a CD-R or DVD...... how long would it last, and what would future generations lose out on or be deprived of?
Maybe you are starting to see my point? We have seen amazing advances in the current and last century, mainly attributable to the rapid increase and spread of information. Cumulative knowledge allows for rapid progress: more raw human processing power, if you will, all connected and digesting reams of information, making inferences, connections, theories and statistical observations; basically learning, refining and increasing the sum knowledge of all humankind. Hopefully making things better and not worse!
Now imagine.... we destroy ourselves in a nuclear holocaust accompanied by a huge EM ( electromagnetic ) pulse that wipes out most, but not all, of the digital data on the planet. The end of the current global age of the internet and digital data. We would need to start from scratch, but most of the engineers, the basic information for building complex circuits, and the means to access any of the remaining survivable information would be lost to us. Operating systems, source code and close to all raw data would be gone.
One could well ask whether it ever actually existed. How much are we missing of what was daily life for the Egyptians, Greeks, Romans, Incas etc etc... From what we have found thus far - ruins, artifacts, personal effects, architecture, farming, and certain amounts of business and governmental records of the time - we build a picture of the politics, philosophy, medicine, science, mathematics, law and ethics of these peoples' lives and overall civilisations....
Many of these civilisations either fell, mutated, diverged or imploded.... again, imagine if you will the sum of all human knowledge available to us had we kept a cumulative repository over the past few thousand years... maybe we would be more advanced, or maybe we would have wiped ourselves out properly, once and for all!
We are creating and learning at a pace never seen before in human history ( well, to the best of our current knowledge based upon what we have found.... how would we really know unless they wrote it down, or *all* the information and records had survived? ). What happens when our civilisation comes to an abrupt or bitter end?
Should we be doing more to ensure that the information we create and learn about ourselves, our civilisation and our environment is given the intrinsic longevity it deserves... if not for our children, then for future generations of humans, for the historians, researchers and teachers, and perhaps those who may once again one day try to rebuild society out of the dark ages?
Aside: This question also begs an answer to the issue of complexity in many of our sciences and systems, and how we represent them. Ask yourself whether it would be easy to rebuild or reproduce a current high-level system, grasp the underlying concepts and reproduce the outcomes. These systems I speak of could be anything from computer systems, to legal systems, to social systems. We are building a house of cards with no thought for my favourite question: not "Why?" but "What if?".....
Aside II: Do we really actually care about the "What if?", or would it stifle our creativity and the speed of advancement if we were to spend more time ensuring the integrity and longevity of our cumulative knowledge? Why are we rushing so far and fast ahead into the unknown? We'll still get there eventually... it will still be unknown a week from next Tuesday! Maybe it's time to slow down and take a timeout, have a 'kit-kat', and then take a really good long hard look at what we as a race are really doing and trying to achieve... Also, is it sustainable and recreatable should we break it? The wonderful concept not of "How well does it work?", but "How well does it break?" comes to mind...
Right now I see a huge risk to society at large.
"How we represent, store, archive and share digital information. "
I believe the sum of all human knowledge is in danger... let's at least start by ensuring we could start from scratch again. ( Or someone or something else could... what would an alien archaeologist make of all this should they visit Earth after we have reduced ourselves to another pre-industrial age? "Bloody amateurs!" )... oh, and here's an interesting byproduct.... maybe a new paradigm would create a template for human learning: a code or method for how to educate children so they do not need to spend X times as long comprehending a mish-mash of overlapping disciplines by starting from scratch each time they enter a new field of study. Some may argue that this is the role language fulfils: a symbolic representation of ideas and concepts that allows them to be expressed and communicated.
There is no handbook for parents, and there is no common teaching system other than repetitively hammering information into children's skulls. Imagine the advances we could make as a race if we harnessed and guided the abstract thought processes of children. ( possible focus: "edutainment" )
Libraries and museums perhaps need a bit of a rethink and some real funding?
If we want to build a new society the 'Tipping Point' will start with the children.
If we want to 'keep' and progress our society we need to focus on 'keeping' the cumulative information safe and healthy.
If we want to advance our society we need to eliminate fear, greed and inequality.
Hmmm... rant over.. time to watch the Simpsons....
Some fun links to projects / papers:
Information Longevity http://sunsite.berkeley.edu/Longevity/
NARA National Archives and Records Administration ( American, http://www.archives.gov/ ) ERA ( Electronic Records Archive ) http://www.archives.gov/era/index.html
OSTA Optical Storage Technology Association ( http://www.osta.org/ )
Sunday, July 31, 2005
Meta-info... coz' my time is short...
Great aggregated blog @ Planet Security
http://www.dayioglu.net/planet/ for all things Information Security.
And another @ InfosecDaily http://infosecdaily.net/securitynews/ .... and.. TaoSecurity http://taosecurity.blogspot.com/
Here's a fun WormBlog @ http://www.wormblog.com/ and here's a similar one from F-Secure http://www.f-secure.com/weblog/
Microsoft Response Center Blog http://blogs.technet.com/msrc/
Microsoft Security Wiki http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.HomePage
All I need to do now is get my new team's membership in FIRST.... I miss my FIRST list with my coffee in the mornings!

Saturday, July 30, 2005
I wish.....
The network _is_ the computer....
I have really good spatial comprehension and am mostly a visual person; this is how I think. Career-wise I am currently an Information Security practitioner and Network Engineer. I like building, fixing and securing / protecting things. ( Substitute paternal instinct, as I have no kids? )
Anyway, I digress.... as you move from job to job you tend to build up your stash of happy tools, resources, methods etc etc... _however_ when arriving in a large organisation it is very hard to get a handle on what's going on and build a mental map of the network and the nodes that contain the information you are supposed to be securing / defending / protecting ( especially if that company's documentation is bad or non-existent, or they have never used any visualisation tools or mapped / diagrammed anything! ). Also, sometimes the company can be in a high-growth phase where things change daily or weekly - and we all know that devices are not always built, deployed, alarmed or documented properly..
For quite some time I have been formulating an idea on how to get a handle on this. It also applies to the actual NOC / SOC [ Security Operations Centre ] guys and how they view their operating world.... these days we need to know what's going on second by second, not day by day or week by week. Internet time is just too fast, and so are the releases of worms following proof-of-concept code, zero-day exploits, or reverse-engineered vendor patches.
Complexity is also the enemy - and that beast is getting larger, not smaller ( as node numbers, services and the depth of code / processes on hosts increase ), which I believe leads to the true gap right now: the ability [or inability] of us mere mortals to ingest, comprehend, correlate and appreciate changes / incidents and outages _properly_, including the ability to take decisive action to mitigate, fix or even just improve the situation. Inherent in this model is the ultimate accountability and responsibility for the decisions made in mitigating or remediating said issues. This is where supposed 'silver bullets' like intelligent IPSs, intelligent networks and sandboxing policies will invariably fail. Too many overheads. Configuration needs to be done before the fact, and this administration can be forgotten, overlooked or just ignored. We still need to create the rules, tune the IDS and define the actions for them to take, and even then no one I know in the industry will let a system issue, of its own accord, an ACL [ Access Control List ] change, a TCP reset, or blackhole / sinkhole routing to /dev/null, Null0 or a 'scrubber' of sorts. They are too worried about customers and mission-critical platforms, and rightly so? A.I. is still rule-based / heuristic and often incomplete, as humans still need to re-write or tweak the frameworks and sample spaces to achieve the desired results. Neural networks still rely on 'us' humans for their playing fields.
I don't believe machines will ever be able to do real-time business risk modelling by drawing the correct inferences at the right times; this is still a skill humans are better at. When associating patterns, schedules and dependencies from the information we are presented with, what's fundamental is the type, quality, amount and correctness of the data presented to the human operator. Most humans are visual creatures - even the blind, who build connections and patterns in their minds....
Aside: ( one of the best Cisco Routing and Switching CCIE's in the TAC [ Technical Assistance Centre ] they had in Brussels, Belgium was actually blind and supported large complex enterprises remotely on the phone! )
For now though, let's think about having the right information, easily represented, at the right time. Take a peek at the OODA loop ( in a previous post below ) and the concept of a CERT or CSIRT, if you are not familiar with them. ( I am bundling the NOC / SOC and the concept of a CERT into the same teams / functions here... )
The pitch: a near-realtime 3D network map, separating out a rough OSI / ISO 7-layer model into connected 2D visualisation planes that can be manipulated in real time, possibly with a touch screen. ( Alternatively, and probably more pragmatic, would be the 5-layer TCP/IP Sun/DOD model. ) Other features would include nodes giving off visual alarms when there are issues and when thresholds are reached. Screens could be split to render multiple parts of the network simultaneously. Employees / clients could access standard templates / defined sub-maps remotely. These clients may run on normal users' or operators' desktops, with the realtime rendering done on the client. Clients may have different roles as they relate to the network and get separate streams overlaid on their maps. ( Traps, Anti-Virus, IDS, Flows with filters, syslog alerts.... )
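To make the layered-planes idea concrete, here's a toy Python sketch of how nodes might be given an ( x, y, z ) position, with z taken from the 5-layer TCP/IP model. The layer names and coordinates are purely illustrative, not from any real tool:

```python
# Toy sketch: each node sits on a 2D plane whose height (z) is its layer
# in an assumed 5-layer TCP/IP model. Purely illustrative names.

TCPIP_LAYERS = ["physical", "link", "network", "transport", "application"]

def plane_position(node_xy, layer):
    """Return an (x, y, z) position: z is the index of the node's layer."""
    x, y = node_xy
    return (x, y, TCPIP_LAYERS.index(layer))

# A router interface lives on the network plane; a web server process sits
# on the application plane directly above its host's (x, y) footprint.
print(plane_position((10, 4), "network"))      # (10, 4, 2)
print(plane_position((10, 4), "application"))  # (10, 4, 4)
```

The nice property is that traffic between layers on the same host renders as a short vertical line, while traffic across the network renders within a plane.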
DBAs see overlaid maps of JDBC, ODBC, SQLNet etc.
Network Operators see ICMP, SNMP, SSH, SCP, TFTP, RCP, RSH, Telnet, Syslog etc.
Security can see everything, but pick known 'bad' ports or recent outbreaks that use certain ports?
Content guys can see their product moving around...
Web guys can see their piece of the pie etc etc etc
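In Python, the per-role overlays above could be as simple as filtering flow records against a role's port set ( the role names, port lists and flow fields here are my own illustrative guesses ):

```python
# Minimal sketch of per-role overlay filtering: each role sees only the
# flows on ports relevant to it. Role names and port sets are illustrative.

ROLE_PORTS = {
    "dba":    {1433, 1521, 3306, 5432},     # SQL Server, Oracle, MySQL, Postgres
    "netops": {22, 23, 69, 161, 162, 514},  # SSH, Telnet, TFTP, SNMP, syslog
    "web":    {80, 443, 8080},
}

def overlay(flows, role):
    """Keep only flows whose destination port belongs to the role's view."""
    ports = ROLE_PORTS[role]
    return [f for f in flows if f["dport"] in ports]

flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dport": 3306},
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dport": 443},
    {"src": "10.0.2.1", "dst": "10.0.1.9", "dport": 161},
]
print(overlay(flows, "dba"))   # only the MySQL flow (dport 3306)
```

A real implementation would filter server-side before streaming, so each client only pulls its own slice of the firehose.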
Note: Suddenly at any point in time, all your observers become your distributed operations and network monitors!!! An Open Source model to keep the _network_ smooth and efficient...
A client / server architecture similar, in a sense, to an MMORPG's method of passing state and object information in a highly compressed format, whereby the rendering engine primarily uses client-side resources. This might include multicast or peer-to-peer distribution of information to reduce bandwidth consumption. As with the gaming model, administrators may change information or influence the network in realtime. Operators could push; clients could only pull. As this mapping would be graph-based, holding state and inter-node relationship information ( think link-state / hybrid routing protocols ), each client would have a world view but _build_ his or her own "routing table", or view of the world, as a normal router would ( including endpoints too! ) and then receive _state_ changes which, in the message-passing syntax, could be anything from a threshold alert, to a node state change, to a change in the graphical representation of a node in relation to some pre-defined event etc...
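A tiny Python sketch of the MMORPG-style model: the client holds a local graph of the world and applies compact state deltas from the server, rather than rebuilding the map each time. The message format here is invented for illustration:

```python
# Sketch: clients hold a local graph and apply server-sent state deltas.
# The "kind"/"attrs" message fields are invented for this example.

class MapClient:
    def __init__(self):
        self.nodes = {}     # node id -> attribute dict
        self.edges = set()  # undirected adjacency, stored as sorted pairs

    def apply(self, msg):
        """Apply one state-change message instead of rebuilding the world."""
        if msg["kind"] == "node":
            self.nodes.setdefault(msg["id"], {}).update(msg["attrs"])
        elif msg["kind"] == "link":
            edge = tuple(sorted((msg["a"], msg["b"])))
            (self.edges.add if msg["up"] else self.edges.discard)(edge)

client = MapClient()
client.apply({"kind": "node", "id": "rtr1", "attrs": {"state": "ok"}})
client.apply({"kind": "link", "a": "rtr1", "b": "sw1", "up": True})
client.apply({"kind": "node", "id": "rtr1", "attrs": {"state": "alarm"}})
print(client.nodes["rtr1"]["state"])   # alarm
```

This is exactly the link-state analogy: build the world once, then interpret changes, so bandwidth is spent only on deltas.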
So, to recap: the 'network game server', as we'll call it, handles most of the topology information, message scrubbing and overall admin rights. ( Think of it as a shiny front-end MOM / NMS / Event Correlation engine that understands flows... ) Clients - be they desktop users, network administrators, remote NOC teleworkers or customers who wish to see how their relevant part of the network or hosts are performing from a network perspective - all get to see what's going on, when and where, and _hopefully_ in a distributed environment _we_ can get to the ever more elusive why in a reduced amount of time?
Transparency drives growth, change and improvement.
As information and events are all realtime and streamed in somewhat of a pipeline ( including flows ), it should be possible ( with accurate network-wide NTP ) to perform limited tracebacks of incidents, although the event must be recognised or pre-defined in some form. This is where baselining and normalisation are extremely important. SourceFire seem to be doing pretty well in this regard with RNA...
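Baselining in its crudest form is just a rolling history with a deviation threshold. A minimal Python sketch, assuming we sample some counter ( packets per interval, say ) and flag anything more than k standard deviations from recent history - window size and k are arbitrary choices here:

```python
# Crude baseline sketch: flag a sample as anomalous when it strays more
# than k standard deviations from a rolling history. Window/threshold
# values are illustrative only.
from collections import deque
from statistics import mean, stdev

class Baseline:
    def __init__(self, window=20, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def anomalous(self, sample):
        if len(self.history) >= 5:
            mu, sigma = mean(self.history), stdev(self.history)
            hit = sigma > 0 and abs(sample - mu) > self.k * sigma
        else:
            hit = False          # not enough history to judge yet
        self.history.append(sample)
        return hit

b = Baseline()
for pkts in [100, 102, 98, 101, 99, 100, 103, 97]:
    b.anomalous(pkts)            # learn what "normal" looks like
print(b.anomalous(100))          # False - within the baseline
print(b.anomalous(5000))         # True - a DDoS-sized spike
```

Real normalisation would of course be per time-of-day and per link, and would exclude flagged samples from the learned baseline; this is only the skeleton of the idea.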
Sounds futuristic? Maybe it's out there already?
Perhaps, but most of my previous posts, in theory, contain close to the correct tools to do this ( well, nearly anyway ).... the closest I have seen in operation thus far is a good independent 2D map built by QualysGuard's Vulnerability Assessment tool, and OpNet's SPGuru ( perhaps their new 3DNV product? ) which feeds itself from existing NMSs and MOMs like CiscoWorks Information Centre, HP OpenView etc..
a) get all related SNMP read strings for routers, switches and firewalls ( if you use them... )
b) ensure your platform has full ACL rights for the above
c) ensure your platform has full port connectivity ( ICMP/TCP/UDP ) through firewalls etc.
d) allow your platform to fingerprint hosts and nodes, and make it an iterative behaviour...
e) allow your object-orientated mapping engine to attribute status to graph leaves in real time as it's rendered
f) have a concept of trending / difference
g) allow your platform to parse routing tables and understand topology ( Hmmm, stateful or stateless mapping... it needs to build a consistent view rather than rebuild each time, to reduce overheads... as with gaming, build the world, then interpret changes? )
h) perhaps overlay NetFlow(tm) information for close-to-real-time ( +-5 min ) traffic overlays, top talkers etc. ( NetFlow(tm) is not realtime but is exported at intervals to collectors, where it can be aggregated )
i) perhaps use this engine to allow a form of touchscreen IPS ( Intrusion Prevention System ) on your whole network, so the final realtime responsibility lies with the Network Operators?
j) X3D http://www.web3d.org/ as a framework instead of the supposedly outdated VRML?
k) you would possibly need a fast-rendering game engine to achieve basic visualisation, depending on network size and complexity, if not using X3D / VRML
l) could feed and help with Capacity Management? RMON + real-time fault-tracking ( ICMP sweeps / SNMP traps )?
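Steps a) through g) above really boil down to one polling loop: discover, fingerprint, diff, render. A Python skeleton with placeholder bodies ( a real version would use SNMP via something like pysnmp, ICMP sweeps and a routing-table parser; everything named here is a stand-in ):

```python
# Skeleton of the discovery loop behind steps a)-g): discover, fingerprint,
# diff against the last pass, then hand changes to the renderer.
# All function bodies are placeholders.

def discover(seeds):
    """Walk seed devices (SNMP / ICMP / routing tables) -> set of nodes."""
    return set(seeds)                       # placeholder

def fingerprint(node):
    """Guess platform / services for one node; refined on later passes."""
    return {"node": node, "os": "unknown"}  # placeholder

def diff(old, new):
    """Trending / difference: what appeared, what vanished since last pass."""
    return new - old, old - new

previous = set()
for _ in range(2):                          # two polling passes
    current = discover({"rtr1", "sw1"})
    added, removed = diff(previous, current)
    for node in added:
        fingerprint(node)                   # iterative behaviour (step d)
    previous = current                      # consistent view, not a rebuild
print(sorted(previous))                     # ['rtr1', 'sw1']
```

The key design choice, as in g), is that `previous` persists between passes: the world is built once and only the deltas are interpreted and pushed to clients.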
Just a thought, but it's kinda where I see the defensive perimeter paradigm being turned inside out as it relates to Information Security, with the keywords _realtime_ _complexity_ _perimeter_ _defense_ _ips_... Imagine also if the host OS or NOS could tag confidential enterprise information and insert this boolean tag in the IP header somewhere ( DSCP / TOS -> QOS -> Public || Confidential ), and NetFlow could then see and report on it... you could then see when the information was walking out the network door? This is hugely simplified from the host, file and application context, I know... but it's a thought, as it would need to be a standard and built into document formats. Users could perhaps turn it off... maybe it could be enforced at a policy level, but most host-based agents don't run on, or aren't supported across, all platforms... and alas, engineers will always want to run their own OS, or have root privileges anyway.
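The DSCP / TOS tagging idea in miniature: a host process could mark the sockets carrying confidential data with a distinct codepoint, which flow collectors downstream could then report on. The codepoint value below is my own arbitrary pick, not any standard marking:

```python
# Sketch of host-side DSCP tagging: mark a socket so every packet it sends
# carries a "confidential" codepoint. DSCP value 12 is an arbitrary,
# illustrative choice, not a standard.
import socket

CONFIDENTIAL_DSCP = 12
CONFIDENTIAL_TOS = CONFIDENTIAL_DSCP << 2   # DSCP sits in the top 6 bits of TOS

def tag_confidential(sock):
    """Set the IP TOS byte so flow collectors can spot 'confidential' traffic."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CONFIDENTIAL_TOS)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tag_confidential(s)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))   # 48
s.close()
```

The obvious weakness, as noted above, is that the sender controls the bit: anyone with root can clear it, which is why it's a label for auditing at the network edge, not an enforcement mechanism.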
This of course does not take into account making copies onto removable media... that's another issue... but it would be a start. It probably impinges on DRM [ Digital Rights Management ], but not *really*, as it's targeted at a corporate environment only and would be a label / watermark, not an endpoint restriction ( though it could be; I am mainly referring to the network gateways / edge! ). It also lends itself to being auditable, and to the concept of the "Shrinking Perimeter" being popularised by Dan Geer http://www.verdasys.com/site/content/whitepapers.html
Most of the time companies just drop keyword searches for the term "company confidential", or take a copy of encrypted emails for future use. This does not address DNS, HTTP, FTP etc... FTP access is not always granted, but HTTP(S) is, either through proxies or direct. Maybe we should give up trying to control the data leaving the network, just audit it, and focus on employee visibility and compliance? At what point do the complexity, entropy and technology allowing access to information really remain manageable, controllable and auditable by humans anyway?
Work and Personal
So I'd like to address 2 topics and what's going on with me right now, both somewhat technologically impacted ( and then of course some interesting links etc.. ):
Personal:
I am having great fun right now with a mixture of Podcasting and the content @ Zencast.org. Free Buddhist classes for the masses - who said Podcasting wouldn't catch on? Today I sat in the sun on Manly beach for two hours, learning and meditating :)
Work:
With no IT Security Strategy, comprehensive policy, budget or resources, and incorrect internal reporting chains... for an outsourcer trying to drive the client's Information Security Policy and Information Security Management System, the emphasis has to be on initially enumerating information assets and classifying them as part of the company's risk profile / attack surface before engaging in anything else. This unfortunately means, in the absence of any current snapshot of information / physical assets or full knowledge of business processes, that an independent audit is needed to establish a baseline, with subsequent scans / audits building upon it, and with special focus paid to the outsourcing interface and the contractual obligations on all parties. ( ...including the other outsourced services / interfaces from other companies / organisations.. )
It also points to the need for a base-level strategy and methodology. The most workable framing in information security right now is the C.I.A. triad ( Confidentiality, Integrity and Availability ), a subset of the Parkerian Hexad http://www.answers.com/Parkerian%20Hexad , combined with the OODA loop http://www.answers.com/ooda%20loop developed by John Boyd, for gathering intelligence and then executing in Information Warfare.
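The asset-classification step above can be sketched very simply: rate each asset against the C.I.A. attributes and sort by a crude score to decide where the audit starts. The assets, the 1-3 scale and the scoring rule here are all invented for illustration, not a real methodology.

```python
# Toy asset register rating each asset against the C.I.A. triad
# (a subset of the Parkerian Hexad). Assets and the 1-3 scale are
# made up for this example: 1 = low impact if compromised, 3 = high.
ASSETS = {
    "customer-db": {"confidentiality": 3, "integrity": 3, "availability": 2},
    "public-site": {"confidentiality": 1, "integrity": 2, "availability": 3},
}

def risk_score(ratings: dict) -> int:
    """Crude prioritisation: sum the three attribute ratings."""
    return sum(ratings.values())

# Highest score first, so the baseline audit starts with the riskiest assets.
for name, ratings in sorted(ASSETS.items(), key=lambda kv: -risk_score(kv[1])):
    print(name, risk_score(ratings))
```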
General News:
I'd also like to mention the speech Michael Lynn, an ex-ISS security researcher, recently gave at Black Hat, because many see it as a huge threat... basically, apart from the DNS root servers, everyone seems to forget about the routers(tm). With Cisco's near-monopoly running IOS on most backbone infrastructure, why own thousands of hosts when you can own the network? Ask yourself what else is ubiquitous... remember the SNMP issues, and what about BGP, or goofin' with the common implementations of the TCP/IP stack out there?
Some really cool people I admire in the Industry ( you gotta be known when you're in Wikipedia/Answers.com? ):
Rob Thomas http://www.cymru.com/
Dan Kaminsky http://www.doxpara.com/
Dan Geer http://www.answers.com/topic/dan-geer
Bruce Schneier http://www.answers.com/bruce%20schneier
Paul Graham http://www.answers.com/Paul%20graham
Some cool Penetration Testing / Information Security Consulting companies:
Security-Assessment http://www.security-assessment.com/
Corsaire http://www.corsaire.com/
NGS http://www.ngssoftware.com/
Information Security Testing Methodologies:
OSSTMM http://www.isecom.org/osstmm/
OWASP http://www.owasp.org/index.jsp
Back to the concept of network visualisation and graphing I have updated:
Sunday, July 10, 2005
Building blocks and giant's shoulders....
Newish stuff.... and references that are always good:
Risk related definitions:
Risk Management http://www.answers.com/risk%20management
Risk Assessment http://www.answers.com/topic/risk-assessment
Law as it relates to IT:
http://www.groklaw.net/
Internet Modelling / Risk Modelling:
NetworkViz http://networkviz.sourceforge.net/
CAIDA http://www.caida.org/
OpenQVIS http://openqvis.sourceforge.net/
Opte Project http://www.opte.org/
LGL http://bioinformatics.icmb.utexas.edu/lgl/
( java'ish )
JASPVI http://lab.verat.net/Jaspvi/ ( very cool ASN mapping )
Tom Sawyer http://www.tomsawyer.com/home/index.php
yFiles http://www.yworks.com/
( http://www.yworks.com/en/products_yed_about.htm )
GINY http://csbi.sourceforge.net/
JUNG http://jung.sourceforge.net/
Piccolo (2d) http://www.cs.umd.edu/hcil/piccolo/
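All of the visualisation toolkits above render node-and-edge data. A toy adjacency-list model of a small network, with a per-node degree count ( the kind of metric a viz tool might size nodes by ), looks like this; the node names are invented for illustration.

```python
# A tiny network graph as an edge list -- the raw material the
# graphing toolkits listed above would lay out and render.
# Node names are made up for this example.
edges = [
    ("core-rtr", "edge-rtr-1"),
    ("core-rtr", "edge-rtr-2"),
    ("edge-rtr-1", "host-a"),
    ("edge-rtr-2", "host-b"),
]

def degrees(edge_list):
    """Count links per node; a viz tool might scale node size by this."""
    deg = {}
    for a, b in edge_list:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

print(degrees(edges))  # core-rtr ends up with the highest degree (2)
```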
OSS Routing Daemons:
ZEBRA http://www.zebra.org/
OpenBGPd http://www.openbgpd.org/
OpenOSPFd ( coming ) http://www.openbgpd.org/
BGP/RADB/Whois type stuff:
http://bgp.potaroo.net/
http://www.dnsstuff.com/
http://www.traceroute.org/
http://www.bgp4.as/tools
More Netflow tools / info:
Netflow info http://www.cisco.com/warp/public/cc/pd/iosw/ioft/neflct/tech/napps_wp.htm
Extreme Happy Netflow Tool http://ehnt.sourceforge.net/
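The flow tools above all boil down to aggregating flow records. A minimal sketch of the idea, summing bytes per source to surface a top talker, follows; the records are invented stand-ins for NetFlow export data, not any tool's real format.

```python
from collections import defaultdict

# Toy flow records (src, dst, bytes) standing in for NetFlow export
# data; the addresses and counts are invented. Tools like flow-tools
# and EHNT do this kind of aggregation at scale.
flows = [
    ("10.0.0.5", "192.0.2.1", 1500),
    ("10.0.0.5", "192.0.2.1", 9000),
    ("10.0.0.9", "192.0.2.7", 400),
]

def bytes_per_src(records):
    """Total bytes per source address -- a first cut at spotting a top talker."""
    totals = defaultdict(int)
    for src, _dst, nbytes in records:
        totals[src] += nbytes
    return dict(totals)

print(bytes_per_src(flows))  # 10.0.0.5 dominates with 10500 bytes
```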
Quick NOS ( Network Operating System ) Emulation:
http://www.dcs.napier.ac.uk/~bill/emulators.html
DNS:
RDNS Project http://www.ripe.net/rs/reverse/rdns-project/
Cisco OSS Related:
COSI http://cosi-nms.sourceforge.net/
Local Internet Registries AU:
http://www.ripe.net/membership/indices/AU.html
- IBE ( Identity Based Encryption ) http://crypto.stanford.edu/ibe/
- SPAM ( The Penny Black Project ) http://research.microsoft.com/research/sv/PennyBlack/
- UDDI ( Universal Description, Discovery and Integration ) http://www.uddi.org/
- PyXML http://pyxml.sourceforge.net/
- XML Services http://www.xml.org/
- Unicode http://www.unicode.org/
- W3C http://www.w3.org/
3G Related:
- @Stake Research http://www.atstake.com/research/reports/
- IEE Secure Mobile Communications Forum http://www.iee.org/events/securemobile.cfm
- GSM Security http://www.gsm-security.net/
- UMTS Forum http://www.umts-forum.org/
- UMTS TDD http://www.umtstdd.org/
- UMTS World Information http://www.umtsworld.com/
- UMTS TD-CDMA http://www.ipwireless.com/
- Cell Phone Hacks http://www.cellphonehacks.com/
Wednesday, June 22, 2005
Mind Mapping
Been looking for a Java-based, cross-platform alternative to TheBrain http://www.thebrain.com/ for some time and just found this: FreeMind http://freemind.sourceforge.net/
Also here is a curses based 'hierarchical notebook' called HNB http://hnb.sourceforge.net/ .. enjoy!
Monday, June 13, 2005
Standards, standards, standards... open?
- ITU ( International Telecommunication Union ) http://www.itu.int/publications/default.aspx
- NGN ( Next Generation Network ) http://www.itu.int/ITU-T/2001-2004/com13/ngn2004/index.html
- ITU-T ( Recommendations ) http://www.itu.int/rec/recommendation.asp?type=series&lang=e&parent=T-REC
- ITU-T ( Recommendations X ) Information technology - Open Systems Interconnection - Security frameworks for open systems. http://www.itu.int/rec/recommendation.asp?type=products&lang=e&parent=T-REC-X
- X.805, X.810-X.816
- Audiovisual and Multimedia Systems http://www.itu.int/rec/recommendation.asp?type=products&lang=e&parent=T-REC-H
- International Numbering Resources http://www.itu.int/ITU-T/inr/index.html
- RFC's ( Request For Comments )
- IETF http://www.ietf.org/rfc.html
- FAQs.org http://www.faqs.org/faqs/ ( Internet RFC's and 'other' RFC's )
- ITIL ( IT Infrastructure Library ) http://www.ogc.gov.uk/index.asp?id=2261
- 3G http://www.3gpp.org/
- COSO and COBIT ( Control Objectives for Information and Related Technologies ) http://www.sox-online.com/coso_cobit.html
- ISACA ( Information Systems Audit and Control Association ) http://www.isaca.org/
- ITGI ( IT Governance Institute ) http://www.itgi.org/
- Sarbanes-Oxley http://www.sarbanes-oxley.com/ ( publicly traded U.S. companies )
Sunday, June 12, 2005
Password posts / blogs and tech blogs....
Passwords and Passphrases:
- [Windows] Robert Hensing http://blogs.technet.com/robert_hensing/archive/2004/07/28/199610.aspx
- [The Great Debates: Pass Phrases Passwords 1] http://www.microsoft.com/technet/security/secnews/articles/itproviewpoint091004.mspx
- [The Great Debates: Pass Phrases vs. Passwords 2] http://www.microsoft.com/technet/community/columns/secmgmt/sm1104.mspx
- [The Great Debates: Pass Phrases vs. Passwords 3] http://www.microsoft.com/technet/security/secnews/articles/itproviewpoint110104.mspx
Password and Recovery Tools ( Commercial and Free ):
- Winternals ERD http://www.winternals.com/products/repairandrecovery/
- @Stake LC http://www.atstake.com/products/lc/
- UBCD ( Ultimate Boot CD ) http://www.ultimatebootcd.com/
- STD Knoppix http://www.knoppix-std.org/
- Rainbow Tables ( Pre-computed attacks ) http://www.antsight.com/zsl/rainbowcrack/ http://www.rainbowcrack-online.com/
- Bart PE http://www.nu2.nu/pebuilder/
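The pre-computation idea behind rainbow tables can be shown with a toy flat lookup table: hash a candidate wordlist once, then reverse any captured hash with a dictionary lookup instead of re-hashing per guess. Real rainbow tables use chained reduction functions to trade time for space; this sketch is only the flat-table version, and the wordlist is invented.

```python
import hashlib

# Pre-compute hashes for a (tiny, invented) wordlist once up front.
WORDS = ["password", "letmein", "qwerty"]
TABLE = {hashlib.md5(w.encode()).hexdigest(): w for w in WORDS}

def reverse(md5_hex: str):
    """Return the plaintext if its hash was pre-computed, else None."""
    return TABLE.get(md5_hex)

# "Captured" hash is reversed by lookup, with no hashing at crack time.
captured = hashlib.md5(b"letmein").hexdigest()
print(reverse(captured))  # letmein
```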
Tech Blogs:
[MSDN - Microsoft Developer Network] http://blogs.msdn.com/larryosterman/default.aspx
Sunday, May 29, 2005
Longevity = Portability, Security, Mass acceptance?
OK, so I'm basically wondering why I am frustrated ( spiritually! )... one possible answer is that I haven't *created* anything in a long time. I add value to projects and script now and again in relation to work, but most of what I do is related to 'Risk Management' and 'Information Security' from a process, network and application / system standpoint... thus it's mainly advice, recommendations and some design and architecture ( this bit does usually involve 'creating'... )
I haven't done any art, cartooning, flash, web pages, video etc in a long, long time ( the last was probably http://indigo.ie/~nodecity )... I have sort of decided to go back to programming [something I used to hate in University... http://www.cs.ucd.ie/ ] but am finding more uses for it these days... usually, however, I 'script' with Perl for some quick and dirty stuff.. e.g. text parsing + regexps, and bolting together other apps and scanning scripts to automate real-time network reports etc...
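That kind of quick-and-dirty parsing translates directly to Python. Here is a small sketch pulling fields out of a made-up firewall log line with a regexp; the log format and field names are invented for illustration.

```python
import re

# A made-up firewall log line, standing in for the kind of text the
# Perl one-liners mentioned above would chew through.
LINE = "DENY tcp 10.1.2.3:51515 -> 203.0.113.9:443"

PATTERN = re.compile(
    r"(?P<action>\w+)\s+(?P<proto>\w+)\s+"
    r"(?P<src>[\d.]+):(?P<sport>\d+)\s+->\s+"
    r"(?P<dst>[\d.]+):(?P<dport>\d+)"
)

match = PATTERN.match(LINE)
fields = match.groupdict()
print(fields["action"], fields["src"], "->", fields["dst"])  # DENY 10.1.2.3 -> 203.0.113.9
```

Named groups ( `(?P<name>...)` ) keep the extraction readable, which matters once the one-liner grows into a report generator.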
Firstly, it struck me that if I was going to create something I should get the most 'bang for my buck': it should be cross-platform, interoperable, open source / standards based, and have a great deal of flexibility ( from GUI development down to low-level access to memory etc [speed and efficiency must be taken into account here also] ). It should also be beautiful, concise, intuitive and easy to hack on - easy to prototype on... after reading Paul Graham's take in 'The Python Paradox' http://www.paulgraham.com/pypar.html and Eric S. Raymond's take in 'Why Python' http://www.linuxjournal.com/article/3882 I decided to put some time and effort into the Python language. I first had to do some bits and pieces on my new Mac Mini to get Python playing happily with an extra toolkit called Tkinter ( Tk Interface ) to allow for some GUI programming...
Secondly, I started thinking about OS choice again... perhaps NetBSD or my beloved OpenBSD would make more sense? [Again depends on function || hardware and / or also desktop (FreeBSD) / server / intranet / internet / extranet / ] Maybe go back to Fedora Core? Try out Solaris 10 ?
Then perhaps standardise on a window manager like twm ( as it comes with X ), FVWM, or go for something lightweight and extensible like Fluxbox; shell-wise, sh / bash, but what about rc, anyone? ( Guess that breaks the 'lowest common denominator' thrust? ) .....
I guess when you come full circle you have to really look at the title of the post.... I am looking for longevity and a perceivable 'Return on Investment' on the time and energy I am going to invest both professionally and personally.. and subsequently many factors [ some external / market related ] come in to play...
Anyway for your and mine own viewing pleasure; some interesting links for posterity:
- Python http://www.python.org/
- DivX based Python video tutorials http://ourmedia.org/node/11134 and Dive Into Python http://diveintopython.org/ not forgetting O'Reilly's http://python.oreilly.com/
- Tkinter http://www.pythonware.com/library/tkinter/introduction/ and http://wiki.python.org/moin/TkInter
- DJB stuff.. http://cr.yp.to/ focus especially on his software... qmail, djbdns, daemontools and ucspi-tcp
Note: I also recently switched to Camino http://www.caminobrowser.org/ as my browser [on Mac OSX] as Firefox 1.0.4 kept crashing!
Note: Camino now seems to be grumpy with Blogger http://www.blogger.com/ :( , back to Safari http://www.apple.com/safari/ which is not fully supported by Blogger either? Double DOH! Anyone wanna' run three browsers?
Thursday, May 26, 2005
Lands I have visited....
Some very interesting Internet research sites and handy stuff you may not have seen before (some *very* techy and some not! ) :
- [research] CAIDA Cooperative Association for Internet Data Analysis http://www.caida.org/
- [research] Internet II http://international.internet2.edu/partners/
- [security, flows] Arbor Networks PeakflowX http://www.arbor.net/
- [security, flows] SILK System for Internet Level Knowledge http://silktools.sourceforge.net/
- [security, flows] Flow-Tools http://www.splintered.net/sw/flow-tools/
- [security, flows] Argus Audit Record Generation and Utilization System http://www.qosient.com/argus/
- [security, flows] nProbe http://www.ntop.org/nProbe.html
- [network / enterprise management] OpenNMS http://wiki.opennms.org/
- [network / enterprise management] NetDisco http://www.netdisco.org/
- [network / enterprise management] Nagios http://www.nagios.org/
- [network / enterprise management] http://www.enterprisemanagement.com/
- [network, BGP] Netlantis http://www.netlantis.org/ [Not back up fully yet...]
- [network, INFORMATION] http://www.networksorcery.com/
- [netblocks] RadB http://www.radb.net/
- [research] Registrar Stats http://www.registrarstats.com/
- [live cd's / dvd's] http://www.frozentech.com/content/livecd.php
- [organisations] Nanog http://www.nanog.org/
- [research] NetCraft http://news.netcraft.com/
- [news] BBC World Service http://www.bbc.co.uk/radio/aod/networks/wservice/wmp.shtml?6hi#
- [news] Slashdot http://www.slashdot.org/
- [news] The Register http://www.theregister.co.uk/
- [news] Kuro5hin http://www.kuro5hin.org/
- [encyclopedia] Wikipedia http://www.wikipedia.org/
- [news] Shirky.com http://www.shirky.com/
- [security, news] SANS Internet Storm Center http://isc.sans.org/
- [security, research] CyberInsecurity paper http://www.ccianet.org/papers/cyberinsecurity.pdf
- [security, research] The Shrinking Perimeter paper http://www.verdasys.com/site/content/pr_040222.html
- [archive] Wayback Machine http://www.archive.org/
Wednesday, May 25, 2005
Make me better... "on the shoulders of giants"..
It struck me that most blogs I read only have X number of postings on the front page - and as with Google these days, it's very rare to go past the initial front page. This doesn't quite hold true if you have been a long term reader of a blog or get updates from Bloglet http://www.bloglet.com/ , but the point being 'less' is 'more'... quality over quantity per se..
I hereby set myself the challenge to keep this blog at one page, almost like a Wiki... but it's still a blog OK? This means reduced graphics, shorter explanations and more links to let you guys go 'walkabout', to read around the edges -> as most things have been said before anyway...
So I have some filters on my Gmail namely 'Efficiency / Productivity' and 'Reading List', and I thought I'd share some of them with you..
Efficiency and Productivity
- How to Write Micro Content http://www.useit.com/alertbox/980906.html
- WikiWikiWeb based TiddlyWiki Remote http://phiffer.org/tiddly/
- Read this for a deep, deep insight into the workplace http://www.changethis.com/ go to the 'View Manifestos' link and then make sure you read Slacker@Work and How to Manage Smart People . Then maybe spend some time at http://www.slackermanager.com/
- Motivate yourself and 'develop personally' ;) with http://www.stevepavlina.com/blog/ , I particularly enjoyed reading How to Become an Early Riser and then went to http://headrush.typepad.com/ and goofed around with a focus on Users aren't Dangerous and F**k the Rules
- The aptly titled http://www.lifehacker.com/ just too cool for school !
- The Josh Kaufman "Personal MBA" Program http://www.joshkaufman.net/archives/2005/03/the_josh_kaufma_1.html ( could kinda' be in the 'Reading List' section ! )
- GTD 'Getting Things Done' based excellent blog http://www.davidco.com/blogs/david/ and a brief startup list http://merlin.blogs.com/43folders/2004/09/getting_started.html to get you going with a nice PDF workflow etc http://www.davidco.com/pdfs/gtd_workflow_advanced.pdf
- 43Folders 'A bunch of tricks, hacks & other cool stuff' http://www.43folders.com/
Reading and Listening List
- O'Reilly Radar http://radar.oreilly.com/ , Wired and Wired Magazine respectively http://www.wired.com/ , http://www.wired.com/wired/
- Joel on Software 's reading list.. http://www.joelonsoftware.com/navLinks/fog0000000262.html
- Professional Integrity nowhere better explained as in 'The Fountainhead', by Ayn Rand
- Really great chats with the big guys like Paul Graham, Steve Wozniak, Tim O'Reilly and Bruce Schneier on ITConversations http://www.itconversations.com/
- Keep an eye on Plan 9 and the GNU's HURD , let's not forget the 'to be' Open Sourced Solaris 10
- Solaris 10 sounds great and is possibly a more designed version of the below.
- Some interesting blogs / essays I like: [Security, General] Bruce Schneier http://www.schneier.com/blog/ , [Security, Microsoft] Robert Hensing http://blogs.technet.com/robert_hensing/default.aspx , [Security, Microsoft] Tim Rains http://blogs.msdn.com/tim_rains/default.aspx , [Hacking, Programming] Paul Graham http://www.paulgraham.com/articles.html , [General, Techy] Robert Scoble http://radio.weblogs.com/0001011/ , [General, Techy] Steve Gillmor http://blogs.zdnet.com/Gillmor/
Fun Stuff
- Happy Tree Friends http://happytreefriends.atomfilms.com/index.html
- Boing Boing http://boingboing.net/
- PodCast Alley http://www.podcastalley.com/
- Paradise Engineering anyone? http://www.bltc.org/
- MSN Messenger through most firewalls http://webmessenger.msn.com/
Monday, May 23, 2005
Here's a thought.... or two.... or three...
- learn Chinese ( Mandarin / Cantonese )
- play squash competitively
- learn to properly defend yourself
- take a night class in something interesting
- your body is a nutrient / drug filter, only put nutrients and good drugs into it! http://moodfoods.com/
- spend time developing your mind and soul
- as you are the center of your universe, learn about yourself
- want less, expect more
- be patient, sometimes doing nothing is something
- do not watch random tv, specific channels or programs only, dl programs...
- read the classics
- look for the good in people, if you can't find any... move on...