Friday, November 17, 2006

Machine and service integrity

What if, instead of worrying about compromised services and data in the short term with fingerprints/hashes of binaries and files, we applied the concept of re-keying and cycling to the actual services and machines? Think TKIP re-keying, or perhaps PFS in IPsec, at a macro service-and-machine scale?
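For contrast, the conventional static approach looks something like the sketch below: verify a binary against a digest recorded at deployment time. The path and the known-good value here are purely hypothetical; this is the kind of point-in-time check the cycling idea would go beyond.

```python
import hashlib

# Known-good SHA-256 digest recorded at deployment time (hypothetical value).
KNOWN_GOOD = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large binaries need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("/usr/sbin/httpd") != KNOWN_GOOD:
    print("integrity check failed: binary differs from verified image")
```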

Think of load-balanced web servers constantly rebooting from verified images, either sequentially or in some complex pre-computed pseudo-random pattern, reducing the time an attacker has on any given box, service or version (a rough sketch of the rotation loop follows below). I will think more about this, but VMs, load balancing and operational management would require a lot of planning, thought and overhead. Re-use of TCP connections (TCP multiplexing, for example) is already common in many optimisation and load-balancing products.
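Here is a minimal sketch of that rotation loop. Everything in it is assumed for illustration: the server names, the seed, the cycle time, and the drain/reimage/restore helpers, which stand in for whatever the load balancer and imaging infrastructure would actually provide. The point is that seeding the PRNG makes the pattern pre-computable by operators yet hard for an outsider to predict.

```python
import random
import time

SERVERS = ["web01", "web02", "web03", "web04"]  # hypothetical pool
SEED = 0x5EED          # shared secret seed; schedule is pre-computable from it
CYCLE_SECONDS = 3600   # maximum lifetime of any one live instance

def rotation_schedule(servers, seed):
    """Yield servers forever in a pseudo-random order reproducible from
    the seed, so the pattern can be pre-computed but not easily guessed."""
    rng = random.Random(seed)
    while True:
        order = servers[:]
        rng.shuffle(order)
        for s in order:
            yield s

def drain(server):
    # Hypothetical: pull the box from the LB pool, wait for connections to end.
    print("draining %s from the load balancer" % server)

def reimage_and_reboot(server):
    # Hypothetical: reboot the box from a verified, read-only image.
    print("rebooting %s from verified image" % server)

def restore(server):
    # Hypothetical: return the freshly imaged box to the LB pool.
    print("returning %s to the pool" % server)

for server in rotation_schedule(SERVERS, SEED):
    drain(server)
    reimage_and_reboot(server)
    restore(server)
    time.sleep(CYCLE_SECONDS // len(SERVERS))  # stagger reboots across the pool
```

Only one box is out of rotation at a time, so capacity degrades gracefully while every instance still gets recycled within the cycle window.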

If, as some in the industry have, you throw in the towel per se and worry more about compromise, detection and the time to restore a machine to a known-good state, then why not take that to its logical conclusion? Almost a macro-level analogue of StackGuard and ProPolice in OpenBSD, which randomise stack canaries and memory layout to make attacks harder to predict, calculate and reproduce with standard results.

Let's limit the static state of a live machine (harder for databases and anything requiring synchronisation, though). An interesting thought nonetheless.

Maybe you'd need a farm of diskless head-end servers, with the monkeys constantly re-imaging the OS/app stack from a set of bootable flash drives?

No one has adequately addressed micro-time in information security; we rely on intractability and macro-time as a defence instead. Please correct me if I am wrong here...
