Trustworthy Computing

The new Microsoft standard for security holes.

(See also: http://www.theregister.co.uk/content/4/23715.html)

I wonder if the phrase is meant to conflate the ideas of security and digital rights management?

We probably could use a good discussion of security, trustworthiness, and digital rights. The terms are being co-opted for marketing purposes, but what does each really entail? Does each one rely upon the foundation of the other? Can we build a layered implementation model with security at the bottom, then trustworthiness, then digital rights management, adding little pieces to certify each additional layer? And if so, why should we allow Microsoft to dictate adherence to this model?

"Digital rights" is incompatible with user security. If a user has any kind of control over their machine, they will be able to violate "digital rights". And since user control is the cornerstone of user security, "digital rights" contradicts user security.

This is certainly true... as an example, look at how folks hacked into the Xbox. They made the point that, given physical access and the right tools, no piece of hardware, firmware, or software is secure. To get around this, Microsoft may require hardware vendors to provide sealed boxes; your next PC may come to resemble your cable descrambler box. Of course, people hack into cable descramblers as well.


One of the best points I've heard about this is that what we really want is "Trustless Computing": basically, trust no one. Put everything in a sandbox, give it only the rights it needs, and guard even those carefully. The reason for this is that MicroSoft's "TrustworthyComputing" appears to be based on the idea that you can trust signed code. They tried this once before with ActiveX, and it ran afoul of people writing code that you wanted (it displayed a pretty picture!) but couldn't trust (it had buffer overflows). If your email program ran in a sandbox that could only read and write your mailboxes and address book, and only open network connections to authorised SMTP and POP3 servers, a lot of the various email viruses would be blunted (but not eliminated). Everything runs in a sandbox and communicates over tightly controlled channels with other programs running in their own sandboxes. -- PerryLorier
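
To make that concrete, here is a minimal sketch (in Python) of the kind of mediation Perry describes. MailSandbox, the directory, and the server list are made-up illustrations, not any real sandboxing API; a real implementation would have to live in the OS or a virtual machine rather than inside the program being confined.

 import socket
 from pathlib import Path

 class SandboxViolation(Exception):
     pass

 class MailSandbox:
     """Hypothetical sandbox: the mail client gets only these rights and nothing else."""
     def __init__(self, mailbox_dir, allowed_servers):
         self.mailbox_dir = Path(mailbox_dir).resolve()
         self.allowed_servers = set(allowed_servers)  # e.g. {("smtp.example.net", 25), ("pop.example.net", 110)}

     def open_file(self, path, mode="r"):
         # Right to read/write files, but only inside the mailbox directory.
         p = Path(path).resolve()
         if p != self.mailbox_dir and self.mailbox_dir not in p.parents:
             raise SandboxViolation("no capability for %s" % p)
         return open(p, mode)

     def connect(self, host, port):
         # Right to open network connections, but only to the authorised servers.
         if (host, port) not in self.allowed_servers:
             raise SandboxViolation("no capability for %s:%s" % (host, port))
         return socket.create_connection((host, port))

A mail client handed only such an object could still mangle your mailboxes, but it could not overwrite system binaries or contact arbitrary hosts, which is the "reduced but not eliminated" point above.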

Security based on capability lists and direct manipulation would eliminate all viruses, email or otherwise.

I wouldn't bet on it. Consider a virus that claims to be an improved remote desktop client: the capabilities it legitimately needs (read the screen, inject input, talk to the network) are exactly the ones it would abuse. -- BottomMind


Well, there are actually two "flow" ideas happening here. The first is that, as a computer user, when I run code I want to be able to attribute it to a specific legal entity. In other words, if I run the program and it damages my data files or corrupts other programs, then I want to know whom I should sue, and be able to provide convincing, non-repudiable evidence to the court that the virus writer was indeed at fault.
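
For this first "flow", the usual mechanism is a digital signature over the code. A minimal sketch, assuming the third-party Python "cryptography" package; the key and the program bytes are placeholders. Note what the check actually establishes: who signed the code, not that the code is safe to run, which is exactly the ActiveX lesson above.

 from cryptography.exceptions import InvalidSignature
 from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

 # Publisher side: sign the released program (key generation shown inline for brevity).
 publisher_key = Ed25519PrivateKey.generate()
 program = b"...the executable bytes..."
 signature = publisher_key.sign(program)          # shipped alongside the program

 # User side: verify before running, using the publisher's public key.
 try:
     publisher_key.public_key().verify(signature, program)
     print("signature checks out -- we at least know whom to sue")
 except InvalidSignature:
     print("signature does not match -- refuse to run")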

The other "flow" though is that, when I run the program (or access some data), the provider can identify who the client is. That way, if I don't pay my one-tenth of a cent for listening to the music, they can sue me.

Many people agree with the identification of the provider, but would prefer privacy of the client.

Instead of suing someone (an AmericanCulturalAssumption no doubt), what about:

As a computer user, when I run code, I want it to be unable to access anything to which I have not granted specific (revocable) access. Then the only possible way for a program to damage my data files or corrupt other programs is if I deliberately, consciously, knowledgeably, and freely decide to let it do so. In that case, if anything gets screwed up on my computer, then I am responsible and can blame no one else.
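
A minimal sketch of what "specific (revocable) access" could look like: instead of ambient authority over the whole disk, the program is handed a forwarder to one object, and the user can cut the forwarder at any time. The Revocable class and budget.txt are illustrative only, not an existing mechanism.

 class RevokedError(Exception):
     pass

 class Revocable:
     """Forward calls to the granted object until the user revokes the grant."""
     def __init__(self, target):
         self._target = target
     def revoke(self):
         self._target = None
     def __getattr__(self, name):
         if self._target is None:
             raise RevokedError("this capability has been revoked")
         return getattr(self._target, name)

 # The program is given a revocable handle to one file, not to the file system.
 grant = Revocable(open("budget.txt", "w"))
 grant.write("draft\n")   # allowed while the grant stands
 grant.revoke()
 grant.write("more\n")    # raises RevokedError -- the program's access is gone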

But once you get rid of the stupid, worthless lawsuits, it becomes that much more disgusting that the music industry can sue you. Has a lawsuit ever restored corrupted data? Unbroken someone's arm? Undone their suffering? Brought the dead back to life? Then what good are they?!

They provide negative reinforcement to the culprits, which discourages other members of society from causing future problems.


Perhaps we have an AntiPattern here.

1. User runs some new software

2. Bad things happen

3. User gets upset

Our first attempt at a solution goes something like

1. Lock down the OS (or the firewall or whatever) so bad things can't possibly happen

2. Then good things are also blocked

3. User gets upset

Our next attempt goes something like

1. Improve the OS (or the firewall or whatever) so every time the software wants to do something, the user must specifically approve it (a sketch of this follows below).

A. The user clicks "approve" just one too many times. Bad things happen. The user gets upset.

B. The user carefully considers each request. User spends more time setting preferences and approving/rejecting requests than doing anything productive. No data is lost, but user gets upset anyway.
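
A sketch of what this second attempt amounts to in code (the function, prompts, and file names are invented for illustration): every operation is funnelled through a question to the user, which is precisely why outcomes A and B follow.

 def mediated(operation, description):
     """Hypothetical per-request gate: nothing runs until the user approves it."""
     answer = input("Allow '%s'? [y/N] " % description)
     if answer.strip().lower() != "y":
         raise PermissionError("user denied: " + description)
     return operation()

 # Every single action becomes a question. Click "y" once too often and you
 # get outcome A; read every prompt carefully and you get outcome B.
 mediated(lambda: open("report.txt", "w").write("hello"), "write to report.txt")
 mediated(lambda: print("pretending to open a network connection"), "connect to smtp.example.net:25")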

In an ideal world, we would

1. Have some sort of tool that would allow us to make sure there is no trojan or virus or accidental data-corruption error in the software before we ran it. And in a really ideal world, we could fix those problems. Or if it *does* do something bad, we could undo it. Does the HaltingProblem really make this impossible?

This particular thing was achieved in Plan 9 using its native file system: causing intentional permanent corruption would have required gaining kernel mode on the file server, and rolling back to yesterday's directory tree was a nearly trivial operation.
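
The rollback idea is easy to sketch outside Plan 9 too. The following illustrates "roll back to yesterday's tree" with dated snapshot copies; it is not Plan 9's actual mechanism (its dump lived in the file server and was browsable by date), just the shape of the idea. The "home" and "dump" paths are placeholders.

 import datetime
 import shutil
 from pathlib import Path

 LIVE = Path("home")   # the live tree (placeholder path)
 DUMP = Path("dump")   # where dated, read-only snapshots accumulate

 def take_snapshot(day=None):
     """Copy the live tree into dump/YYYY/MMDD, once per day."""
     day = day or datetime.date.today()
     dest = DUMP / day.strftime("%Y") / day.strftime("%m%d")
     if not dest.exists():
         shutil.copytree(LIVE, dest)
     return dest

 def roll_back(day):
     """Throw away the live tree and restore the snapshot from the given day."""
     src = DUMP / day.strftime("%Y") / day.strftime("%m%d")
     shutil.rmtree(LIVE)
     shutil.copytree(src, LIVE)

 # e.g. roll_back(datetime.date.today() - datetime.timedelta(days=1))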

