I'm not sure yet whether this is a good idea in practice, but the concept is quite fascinating. In essence, malicious software assumes your computer will operate in a certain way, so why not confuse it by being unpredictable?
The idea is being worked on by Daniela Oliveira, amongst others. The paper she wrote was presented at last year's USENIX, but I'm only just getting around to reading it. I'm surprised I hadn't heard of it before.
The principle has a sound basis in military strategy: The Art of War (孫子兵法) by Sun Tzu offers many suggestions on how to outwit your enemy, which would appear to be quite applicable here, if they can be made to work in practice.
I understand that Prof Oliveira is working on an operating system called Chameleon in which she and her colleagues aim to encompass the principles set out in the paper and presentation.
We've already seen some of the ideas suggested for Chameleon in honeypots. However, in addition to allowing malware to operate in a façade environment whilst the system collects data about it, Chameleon looks as if it goes further by having common operating system functions respond in unpredictable ways.
And it's at that point that any operating system designer would throw their hands up and say this will make the operating system unusable: the very essence of good operating system design is to have it behave as predictably as possible, even when the inputs are slightly unpredictable. The concept of perturbing system calls in the kernel of an operating system literally doesn't compute for most of us.
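To make the idea concrete, here is a minimal sketch of what "unpredictable responses" might look like, purely as my own illustration and not Chameleon's actual design: a wrapper around a notional read operation that behaves normally for trusted processes but randomly short-reads or returns nothing for untrusted ones. The function name, the trust flag, and the probabilities are all invented for this example.

```python
import random

def perturbed_read(data: bytes, trusted: bool, rng: random.Random) -> bytes:
    """Illustrative only: return the requested data, but for untrusted
    callers sometimes truncate the buffer or return nothing, as a crude
    stand-in for an OS responding unpredictably to suspected malware."""
    if trusted:
        return data                  # trusted processes see normal behaviour
    roll = rng.random()
    if roll < 0.3:
        return b""                   # pretend the read produced no data
    if roll < 0.6:
        cut = rng.randrange(1, len(data) + 1)
        return data[:cut]            # silent short read
    return data                      # sometimes behave normally, to stay unpredictable

rng = random.Random()
print(perturbed_read(b"secret", trusted=True, rng=rng))   # always b"secret"
print(perturbed_read(b"secret", trusted=False, rng=rng))  # maybe b"", maybe a prefix
```

The point the sketch makes is the designer's dilemma from above: any legitimate process misclassified as untrusted would suffer the same unreliability, which is exactly the usability trade-off the research has to resolve.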
But the initial research does suggest that the security gains from such an approach are considerable, so it is well worth further study to see whether a suitable trade-off can be found.
Having only just found the concept, I've no idea whether this approach will prove practical, but it's certainly an area of research to keep an eye on, and I look forward to seeing Chameleon in action.