The recent Panda AV catastrophe got me thinking. Why aren’t safeguards in place to prevent bad definition updates from flagging and deleting core system files?
In Panda's case, I read that some companies had 1,400 systems broken by the bad update.
So how can we fix this, or prevent it from happening in the first place?
Some AV companies might already be doing what I'm about to suggest below. If some are, it would be nice to have a list of which vendors do this and which don't.
Many critical system files in Windows are signed by Microsoft (Authenticode). If even 1 bit of the file changes, the signature no longer validates. AV companies should be checking the status of a system file's signature (if it has one) before carelessly deleting it.
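As a rough sketch of what that pre-deletion check could look like (this shells out to PowerShell's `Get-AuthenticodeSignature` cmdlet for simplicity; a real AV engine would call the WinVerifyTrust API directly, and the file path here is just an example):

```python
import subprocess

def signature_status(path: str) -> str:
    """Ask PowerShell for the Authenticode signature status of a file.

    Returns e.g. 'Valid', 'NotSigned', or 'HashMismatch' (tampered file).
    A production AV engine would call WinVerifyTrust directly instead
    of shelling out like this.
    """
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"(Get-AuthenticodeSignature -FilePath '{path}').Status"],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

# Hypothetical guard before deleting a flagged file:
flagged = r"C:\Windows\System32\svchost.exe"  # example path only
if signature_status(flagged) == "Valid":
    print(f"Refusing to delete {flagged}: Microsoft signature is intact.")
else:
    print(f"{flagged} is unsigned or tampered; handle it as usual.")
```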
Create hashes of all of the Windows system files. This would be done not on the user's system but on the AV company's servers, or maybe Microsoft could host a public server with APIs anyone can use.
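To sketch what querying such a service could look like (the endpoint URL and response format here are entirely invented for illustration; no such public API exists that I know of):

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint; nothing like this is publicly offered today.
KNOWN_GOOD_API = "https://filehashes.example.com/api/v1/lookup"

def sha256_of(path: str) -> str:
    """Hash the file locally; only the digest leaves the machine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_good(path: str) -> bool:
    """Ask the (hypothetical) server whether this digest matches a
    genuine Microsoft-shipped version of the file."""
    digest = sha256_of(path)
    with urllib.request.urlopen(f"{KNOWN_GOOD_API}?sha256={digest}") as resp:
        return json.load(resp).get("known_good", False)
```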
It would work like this:
The database would include hashes of every version of every file which has ever been included in or added to Windows by Microsoft.
A bad definition file gets pushed out to a user's machine. The AV gets the signal from the bad update saying "delete these infected Windows system files!!!" … the AV responds by saying "Umm. OK, but these are critical Windows system files, hold on one second while I hash the files in question and compare them to the hash database. I want to double-check that we aren't making a mistake." It then checks the hash. "Oh crap… these system files are the real deal, not bad in any way" … it then aborts the file deletion.
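Putting the two checks together, that pre-deletion guard might look something like this (reusing the hypothetical `signature_status` and `is_known_good` helpers sketched above):

```python
def safe_to_delete(path: str) -> bool:
    """Double-check a flagged Windows system file before deletion.

    Abort if either the Authenticode signature is intact or the file's
    hash matches a genuine Microsoft-shipped version in the database.
    """
    if signature_status(path) == "Valid":
        return False  # signature intact: the file hasn't been tampered with
    if is_known_good(path):
        return False  # hash matches a real Microsoft file version
    return True  # fails both checks: treat the detection as credible

def handle_detection(path: str) -> None:
    if safe_to_delete(path):
        print(f"Quarantining {path}")
    else:
        print(f"Aborting: {path} appears to be a genuine system file; "
              "flagging the definition update for review instead.")
```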