TCPA and Palladium Technical Analysis
This article presents a technical analysis of the TCPA hardware system and the Palladium operating system. Palladium and TCPA have been covered in some depth on Slashdot and in various FAQs. Unfortunately, much of the information available from these sources is highly subjective and confusing (for example, TCPA and Palladium are often presented as if they were the same thing). Reliable, objective technical information on
Palladium and TCPA has been hard to come by, and Microsoft's actions have not made obtaining such information any easier.
My personal highest security concern is privacy. To evaluate the ability of TCPA and Palladium to protect privacy, I needed technical "facts" about these systems, not just marketing hype or politically motivated criticism of Microsoft. The investigations that led to this article started specifically to evaluate the privacy-protection characteristics of TCPA and Palladium.
After conducting my investigation of TCPA and Palladium, I have come to the conclusion that TCPA has some very positive characteristics and
some very negative features. Among the most negative features of TCPA are the Certification Authorities and the unique identification they perform on users wherever these CAs are involved. However, the single most
negative feature of TCPA and Palladium is the nature of Palladium itself, and the philosophy that has driven Microsoft's development and promotion of it.
I don't pretend to be without bias on the topics of security and privacy. I'm trying to be up front about my personal bias so that readers can better shape their own opinions, and use this article to effectively supplement the other information more readily available on TCPA and Palladium. The nature of this article is technical, but I have attempted to make the most important parts accessible to a wide range of readers, because I feel the emergence of TCPA and Palladium may have broad social and political impact.
This text excludes analysis of some important TCPA technical features, most notably the local user-authentication methods, platform acquisition, and the TPM call details described in the TCPA specification. I consider it unnecessary to understand these details
to appreciate what TCPA and Palladium are all about, or what the likely effects of their widespread use will be.
This article is the final product of several months of investigation, including a reading of the published specifications. The author would greatly appreciate any technical or stylistic suggestions on how to improve it.
1.- TCPA introduction
1.1.- TCPA origins
1.2.- TCPA implications
2.- TCPA Analysis
2.1.- Which components change?
2.2.- CRTM (Core Root of Trust)
2.3.- TPM (Trusted Platform Module)
2.3.1.- System measurement values
2.3.2.- Cryptographic algorithms
2.3.3.- User-TPM authentication
2.4.- PCR logs
2.4.1.- PCR registers detail
2.4.2.- PCR changes reaction
2.5.- System boot-up
2.6.- TPM functions
2.6.1.- TPM Drivers
2.6.2.- Functions on the BIOS driver
2.6.3.- Memory Present Driver
2.6.4.- Protected Storage
2.6.5.- New identities and the TTP
3.- Palladium Analysis
3.1.- Palladium introduction
3.2.- Palladium's kernel implementation
3.3.- The external TORs
3.4.- Digital Rights Management
5.- Appendix A: Bibliography
6.- Appendix B: Greetings
[ 1.- TCPA introduction ]
[ 1.1.- TCPA origins ]
There is a great deal of disinformation about TCPA and Palladium, encouraged by media ignorance and by Microsoft's marketing techniques.
Newsweek released an article, later copied by MSNBC, presenting what now seems to be the received wisdom on the subject: that there is a chip called "Fritz" that Microsoft made for its operating system Palladium, some kind of obscure device attached to our PCs which decides what programs we may use and which ones we may not.
Though the truth of all this is worrying, it has little in common with the vision I've just described. Microsoft has contributed to this view by misrepresenting things, talking about TCPA security features as if they belonged to Palladium, and so trying to convince its potential customers that TCPA and Palladium are inseparable. Microsoft goes so far as to claim on its website that Palladium (a
product that hasn't even been coded yet) offers "security no other operating system can offer now", basing this assertion not on its product's security but on TCPA's security features (which could be used by ANY operating system).
TCPA is an alliance of some of the most important computer, financial and communications businesses, aimed at creating a common specification dedicated to "growing user trust" in information security. That is the "official version"; as we will see, an operating system
like Palladium would have mostly negative consequences for the end user, and the same is true of some of TCPA's characteristics.
TCPA is a public standard: an architectural change to the PC accomplished by installing two new "passive" components. Passive means they have no control over normal computer use, but merely provide it with some features; the problem is how those features are being used...
This alliance was first established by Compaq, HP, IBM, Intel and Microsoft, though many other companies have joined since (a rough total of 200 in September 2002). Some of these are Adobe, American Express, American Megatrends, AMD, Dell, Fujitsu, Motorola, National Semiconductor, NEC, Novell, Philips, Samsung, Siemens, SMSC, Toshiba, Tripwire, Verisign and many more (the full list seems to have disappeared from trustedcomputing's homepage).
As you can easily deduce from this huge list of companies, some of them the main semiconductor producers in the world,
TCPA is something really serious: a joint effort by the most important computing and telecommunications companies in the world to radically change the concept of computer equipment. We need to know what is happening before it's too late.
[ 1.2.- TCPA implications ]
This new system isn't as efficient and secure as its proposers tell us; it has some features that could strengthen computer security, but all that glitters is not gold.
Their FAQ says that with TCPA "access to data can be denied to malicious code such as virus in a platform, because this intrusion necessarily
changes the platform software state". As you will later deduce from the technical analysis, this isn't true; nor is the claim that "you can trust the software environment on the platform is operating as desired".
Still, some statements that follow in the FAQ are true: it is said this system would strengthen trust in public/private key systems. This is perhaps the only place where TCPA could mean something genuinely positive, since the private key can only be broken by brute force and is sealed and hardware-protected.
The negative face of TCPA is the Certification Authorities. The user identities generated by TCPA (which do not directly identify the platform) need to be certified by third parties in which we are supposed to trust, authorities to whom we send uniquely identifying data about our system (much as Intel intended when it tried to put software-accessible unique serial numbers in its processors). The method TCPA uses, described in this article, is more indirect but just as dangerous to the user.
[ 2.- TCPA Analysis ]
Here we fully analyze the behaviour of the TCPA system on PC platforms, as described in the public standards published at www.trustedpc.org, together with the complementary external specifications those standards reference in the TCPA documentation.
The specification provided by TCPA itself is quite complete and leaves little space for misinterpretation (TCPA-compliant hardware does not need
to be certified by any authority). However, it contains sub-references to other specifications and systems described elsewhere, and everything is written in a "specification language" that really needs to be "deciphered" to acquire real knowledge of how it all works. The real implementation of TCPA should match the details I provide almost exactly; since all the companies united in TCPA are expected to produce hardware fully compatible with the specification, we can know with a high degree of confidence how this system will work.
[ 2.1.- Which components change? ]
Today, the architectural organization of a PC (following the TCPA nomenclature) is the following, the highest level being the most external and the lowest the most internal:
| System | - Peripherals, drivers, applications
| Platform | - Disk units, cards, power supply
| Motherboard | - CPU, memory, connection buses
| Microprocessor |
The new model proposed by TCPA (a smaller change than they claim, however) makes these architectural changes:
| System | - Without changes
| Platform | - "TCPA subsystem" is added
| Motherboard | - Without changes
| Microprocessor | - Same
| TBB             | - Composed of the TPM and CRTM
TCPA tells us there are two changes to the generic PC architecture: a TCPA subsystem is added at the platform level, and a block called the TBB (Trusted Building Block) is added at a level lower than the processor itself. This block is considered the only part of the system that can be trusted initially.
The TBB is composed of two parts: the CRTM (Core Root of Trust) and the TPM (Trusted Platform Module). When we examine them in detail, we'll notice that this architectural classification isn't very accurate. The CRTM is just a "trusted BIOS" where execution begins after a reset.
The TPM, which we'll also cover in detail, is just an integrated peripheral that performs some specific functions (which hardly squares with the way TCPA has been presented as a complete architectural change). The TCPA subsystem is the mechanism that communicates these elements and attaches them to the PC architecture.
In any case, the TPM can be disabled by the user at boot time, and the whole TCPA system is unnecessary for a TCPA-compliant PC to work: an option is given to deactivate it.
[ 2.2.- CRTM (Core Root of Trust) ]
This is the place where execution always begins when the system starts running, so TCPA considers it absolutely necessary that its integrity can be assured: it must not be modifiable in any way if the system is to be considered secure, and the specification requires that every reset make the processor start executing inside the CRTM. It is essentially the equivalent of the BIOS in our PCs and, like the BIOS, it will be updateable (supposedly, only by the CRTM vendor).
One of the most interesting points here is that the company that builds the CRTM is responsible for providing its updates and code maintenance. TCPA says nothing about how this is done; they just say it's necessary to provide mechanisms so these actions can be performed, and "forget" to talk about security at this point.
When execution starts at the CRTM, it checks its own integrity, the system components, the Option ROMs of the peripherals, and the code to be executed next (the IPL, for example), extending what they call the "chain of trust".
[ 2.3.- TPM (Trusted Platform Module) ]
This is the most important component, and it must be bound to the motherboard in one of two possible ways:
- The TPM is physically bound to the platform.
- The TPM is a SmartCard placed outside the PC (communicating through a USB port or similar). Communication between the TPM and the platform would then be protected by some cryptographic method such as a shared secret between the platform and the SmartCard, but in such a way that only one TPM can be related to one platform.
Regardless of how it is implemented, the TPM works as a special kind of SmartCard. It provides functions that strengthen the system's integrity through a rewritable memory and a sealed memory (not accessible from the outside, and never revealed by the TPM), and it contains several microprogrammed cryptographic algorithms.
Now, the TPM components are described.
2.3.1.- System measurement values
In order to assure the integrity of a system's components, the TPM uses eight registers called PCR[0] to PCR[7] to store values that reflect system measurements (fully described in section 2.4.1). Since the measurements are SHA-1 digests, each register is 160 bits wide.
The design dilemma was that eight registers are very little to measure the integrity of a whole system, while many more registers would make the chip too expensive. A circular memory would be insecure because data could be overwritten for lack of space, and a stack of values would suffer the same lack of space and could generate inconsistencies.
So the TPM initializes every register to a known value at the beginning, and every time a new element needs to be attached to the sequence, a hash is computed over the concatenation of the current value and the new one. Let's see an example:
* The PCR[x] register is initialized by the TPM itself to a value it knows, and we want to add a measurement to it (e.g., a hash of the hard disk's partition table data):
| PCR register | + | Hash of the data |
* We concatenate the 160 bits of the PCR register with the 160 bits of the hash we computed, producing a 320-bit sequence:
| Old value concatenated with the new one |
* Now we hash that sequence, and the resulting 160 bits are what is finally stored in PCR[x], where we wanted to add the new measurement:
| Hashed sequence | - New PCR[x] value
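A minimal Python sketch of this extend operation, assuming SHA-1 (per the spec); the all-zero initial value and the measured strings are illustrative assumptions, not the TPM's real data:

```python
import hashlib

PCR_LEN = 20  # SHA-1 digest size in bytes (160 bits)

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """New PCR value = SHA-1(old PCR value || measurement digest)."""
    return hashlib.sha1(pcr + measurement).digest()

# The PCR starts from a known value (all zeros here, as an assumption)
pcr = b"\x00" * PCR_LEN

# Measure two components, e.g. the MBR code and the partition table
m1 = hashlib.sha1(b"MBR code").digest()
m2 = hashlib.sha1(b"partition table").digest()

pcr = extend(pcr, m1)
pcr = extend(pcr, m2)
```

Note that the result depends on the order of the extends, which is exactly why the chronological logs described below are needed to re-verify a PCR.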
These values are the root of the trust the system places in its dependent data and devices. They are communicated to the external system by signing them with a private key the TPM never reveals (and uses only for these signatures), so the TPM can authenticate itself as the author of the PCR information being released, and the receiving entity can be sure it comes from the TPM.
At the same time, this communication is protected against replay attacks: every request made to the TPM carries an attached random value, and both the data and that random value are signed by the TPM, proving the TPM is working at that moment and answering exactly that request.
Now, there's a problem: if we need to check that nothing in the system has changed, the operations need to be replayed from beginning to end, so that the final result stored in the TPM is reproduced. That is, if a PCR[x] contains the result of measuring three components, integrity checking requires a way to repeat the same analysis in the same chronological order and compare the result with the one stored in the PCR registers. That's why "activity logs" are used (detailed in section 2.4).
2.3.2.- Cryptographic algorithms
Inside the TPM several cryptographic algorithms are microprogrammed; because they can't be modified by software, they can be trusted. These are:
* SHA-1: Hash algorithm used for the system integrity measurements stored in the PCRs and their logs.
* RSA: Several uses: a private key signs the data the TPM provides to the external world; the algorithm is also used to sign data when the TPM's identity needs to be verified, and to encrypt/decrypt data and the keys of the storage sub-trees. There is only one root key, but several signing identities can be created (detailed in other sections).
* RNG: Pseudo-random number generation, used for liveness checks and against replay attacks. It approximates randomness by applying a hash function to semi-random data. The source of the random seed must be cheap to implement and could be a weak point; temperature measurements or keystroke timing are proposed for this purpose.
* 3DES: The use of Triple DES is not specified and is not considered important. Symmetric ciphers are discouraged in general, though one might be useful in configurations where the TPM is an external SmartCard and communication between the TPM and the platform is based on a shared secret.
2.3.3.- User-TPM authentication
The TPM's internal logic is based on four basic states, the combinations of an ownership flag (permanent or not) and an activation flag:
- Permanent/inactive: The user has decided that his data is stored in a non-volatile way; the TPM belongs to him, so that he is the only one who can use it. The TPM being inactive means the user has not authenticated himself yet.
- Not permanent/inactive: The TPM hasn't stored any information about an owner and isn't active; this is the state in which the TPM is shipped, awaiting an owner.
- Active (whether permanent or not): This is how the TPM is meant to work (for security reasons, the platform wouldn't work without the TPM). This doesn't mean the TPM can't later be deactivated by software, but at minimum the user needs to authenticate herself to use the system.
The Active/Not Permanent configuration isn't desirable at all: a TPM without an owner can only perform a few operations, such as telling the outside world it exists, but it won't let the platform work.
One of the biggest problems the TCPA people themselves recognize is the TPM's radical deactivation policy. Apart from software-performed deactivation, the TPM deactivates itself if it receives an unauthenticated message (which can even be a remote command, if something can reach the TPM that way), forcing a complete system reset. This opens the door, and the TCPA people admit it, to a whole range of DoS attacks against any TCPA platform.
[ 2.4.- PCR Logs ]
Now, back to the PCRs, there's a problem: the TPM holds the PCR values in its protected space, but how can it check whether these values are correct when different things have been measured, hashed, concatenated, and rehashed into the same PCR registers? A series of "logs" of these actions is stored, along with a description of what was measured and of the measurement itself, making it possible to check the PCR registers against the actual system.
Here's one of the most interesting parts of TCPA: although they insist that the TPM achieves great security precisely because it has only eight fixed-length registers, they need a series of logs to reconstruct how the operations were performed, and these logs have a variable size (exactly the property they disliked when I first discussed the PCR registers). All they have done is move the problem outside the TPM: to keep the TPM secure, they delegate the variable-length storage and its access problems to another device.
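The verification this enables, replaying the log to recompute the expected PCR values, can be sketched in Python. The (index, digest) log format and the all-zero initial value are illustrative assumptions, not the spec's actual on-disk layout:

```python
import hashlib

def replay(log, num_pcrs=8):
    """Recompute the expected PCR values by replaying a measurement log.
    `log` is a list of (pcr_index, digest) pairs in chronological order."""
    pcrs = [b"\x00" * 20] * num_pcrs
    for idx, digest in log:
        # Same extend rule the TPM applies: SHA-1(old value || digest)
        pcrs[idx] = hashlib.sha1(pcrs[idx] + digest).digest()
    return pcrs

# Toy log: IPL code measured into PCR 4, its config data into PCR 5
log = [
    (4, hashlib.sha1(b"IPL code").digest()),
    (5, hashlib.sha1(b"partition table").digest()),
]
expected = replay(log)
# A verifier compares `expected` against the signed PCR values quoted
# by the TPM; any tampering with a measured component yields a mismatch.
```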
These logs are stored in the system's firmware, using a standard called ACPI (Advanced Configuration and Power Interface) that was created by Microsoft, Phoenix and Toshiba, and that will now be promoted as a necessary standard (a kind of monopolistic trap benefiting ACPI and the businesses behind it).
The ACPI specification defines a BIOS-level interface between a motherboard and its devices, and describes how they relate to the operating system (and its API), with the stated objective of building more robust Plug&Play systems and finer peripheral control (configuration, power saving, etc.).
This ACPI machinery is now being reused for TCPA. The important part is its table system: these tables are mapped into the operating system's kernel space so it can deal with them directly.
The beginning of these tables is something you can locate yourself on your home PC (though it won't have the TCPA capabilities yet ;-) ). On i386 platforms, these tables are reached through the RSDP (Root System Description Pointer) structure, which holds the physical address of the RSDT. The RSDP structure is described below:
| RSDP structure, mapped into kernel memory |
| Offset | Length | Description |
| 0 | 8 | Text identification string, "RSD PTR " |
| 8 | 1 | Checksum |
| 9 | 6 | OEM identifier |
| 15 | 1 | Version number |
| 16 | 4 | RSDT table PHYSICAL address |
| 20 | 4 | Table length in bytes |
| 24 | 8 | 64-bit XSDT address |
| 32 | 1 | Extended checksum |
| 33 | 3 | Reserved |
The thing we're interested in is the pointer to the RSDT (Root System Description Table), which is where all the TCPA additions to standard ACPI hang. There we'll find a new pointer leading to the "TCPA Table".
Finding these subtables is easy: 36 bytes after the beginning of the RSDT there is an array of 32-bit pointers we can walk until we find the table we want. The first data in each table is an identifier carrying the table's name (e.g., the RSDT itself begins with "RSDT" as its first four bytes).
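That walk can be sketched in Python over a toy flat memory image (the addresses here are illustrative assumptions; real code would read physical memory through the OS):

```python
import struct

def find_table(mem: bytes, rsdt_addr: int, signature: bytes) -> int:
    """Walk the RSDT's pointer array (which starts 36 bytes into the
    table) and return the address of the table whose first four bytes
    match `signature`. `mem` is a toy flat memory image."""
    length = struct.unpack_from("<I", mem, rsdt_addr + 4)[0]
    n_ptrs = (length - 36) // 4
    for i in range(n_ptrs):
        addr = struct.unpack_from("<I", mem, rsdt_addr + 36 + 4 * i)[0]
        if mem[addr:addr + 4] == signature:
            return addr
    raise LookupError(signature)

# Toy memory image: an RSDT with one entry pointing at a TCPA table
mem = bytearray(4096)
tcpa_addr = 512
mem[tcpa_addr:tcpa_addr + 4] = b"TCPA"
rsdt_addr = 64
mem[rsdt_addr:rsdt_addr + 4] = b"RSDT"
struct.pack_into("<I", mem, rsdt_addr + 4, 40)          # length: 36-byte header + one pointer
struct.pack_into("<I", mem, rsdt_addr + 36, tcpa_addr)  # the pointer array
```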
This RSDT will point to two important places:
- FACP Table (Fixed ACPI Description Table): This is the less important one. It contains information about the system's devices and parameters for configuring their Plug&Play characteristics, as well as pointers to other tables such as the DSDT (an extended table describing things like automatic hardware temperature measurement and others that didn't fit in the FACP) or the FACS table (dedicated to synchronization and control). In any case, the only information here that could matter (the FACS data identifying the hardware configuration) is now mostly ignored, since the new TCPA structures hold it.
|---------| |--------| |--------|
|  RSDP   | -------) |  RSDT  | -------) |  TCPA  |
|---------| |--------| |--------|
----) | FACP | --) ... DSDT & FACS ...
- TCPA Table: Here's the important stuff. This new table is where the logs are stored. It is specifically stored inside the BIOS-related information in the usual ACPI way, so the system maps it (with some small differences, as it can't be reclaimed by the OS for other uses), and it has a variable length where, after some header fields about the table (its length, vendor data and so on), the logs are stored:
* TCPA entry:
|Offset| Length | Stored data |
| 0    | 4      | Text string 'TCPA' |
| 4    | 4      | Complete TCPA table length |
| 8    | 1      | Revision number of the table |
| 9    | 1      | Checksum |
| 0Ah  | 6      | Vendor identifier (text) |
| 10h  | 8      | Vendor's model identifier |
| 18h  | 4      | TCPA revision number for this model |
| 1Ch  | 4      | TCPA table vendor's identifier |
| 20h  | 4      | Serial number for the value above |
| 22h  | 2      | Reserved (default: 0000h) |
| 24h  | 4      | Maximum length (bytes) of the log area in the |
|      |        |system before booting is performed |
| 28h  | 8      | 64-bit physical address where the event log |
|      |        |area is stored. |
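The leading header fields above can be parsed with a few lines of Python. This is a sketch over a toy in-memory copy of the table; only the first fields, whose offsets are unambiguous, are decoded, and the vendor string is a made-up example:

```python
import struct

def parse_tcpa_header(raw: bytes) -> dict:
    """Parse the leading fields of the TCPA ACPI table as laid out in
    the offset table above."""
    sig, length, revision, checksum = struct.unpack_from("<4sIBB", raw, 0)
    if sig != b"TCPA":
        raise ValueError("not a TCPA table")
    vendor = raw[0x0A:0x10]  # 6-byte vendor identifier at offset 0Ah
    return {"length": length, "revision": revision,
            "checksum": checksum, "vendor": vendor}

# Toy table image for illustration
raw = bytearray(0x30)
raw[0:4] = b"TCPA"
struct.pack_into("<I", raw, 4, 0x30)  # complete table length
raw[8] = 1                            # revision
raw[0x0A:0x10] = b"ACME\x00\x00"      # hypothetical vendor id
hdr = parse_tcpa_header(bytes(raw))
```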
The log itself is stored in the ACPI firmware and mapped into a reserved BIOS address range, so it can be read by the operating system. The TCPA table differs from the other ACPI tables in that it is "non-reclaimable": a reclaimable table, once it is no longer in use, can have its memory space reclaimed by the OS for whatever it wishes. The TCPA table is non-reclaimable because a hibernation performed by the operating system might otherwise destroy the possibility of performing the integrity checks.
The log area in the system (following the TCPA table) is a variable-length data structure of TCPA_PCR_EVENT entries, each with this format:
|Offset| Length | Data |
| 0 | 4 | Event identifier (EventID) |
| 4 | 4 | Length of the EventData for this entry |
| 8 | ? | EventData |
The value stored in EventID tells us what kind of information EventData holds. For example, POST-BIOS strings have EventID=3h, and their EventData is the hashed data; for the CMOS, EventID=4h is used, but the EventData is the raw, unhashed CMOS contents.
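A sketch of a parser for this event area (Python, over toy data; the per-EventID interpretation of EventData follows the description above):

```python
import struct

def iter_events(log: bytes):
    """Yield (event_id, event_data) pairs from a TCPA_PCR_EVENT area:
    4-byte EventID, 4-byte data length, then the variable-size data."""
    off = 0
    while off + 8 <= len(log):
        event_id, data_len = struct.unpack_from("<II", log, off)
        yield event_id, log[off + 8:off + 8 + data_len]
        off += 8 + data_len

# Toy log: a POST-BIOS string event (id 3) followed by CMOS data (id 4)
log = (struct.pack("<II", 3, 4) + b"\xaa" * 4 +
       struct.pack("<II", 4, 2) + b"\xbb" * 2)
events = list(iter_events(log))
```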
These tables can be accessed in the standard way specified by ACPI, through its drivers; INT 15h is useful here, since its function 0E820h locates the memory blocks holding the ACPI tables. The operating system can access these while booting up:
* Call INT 15h / function 0E820h with:
EAX = 0E820h
EBX = "Continuation value": 0 the first time, then the value returned by the previous call
ES:DI = Buffer where the BIOS will write the data
ECX = Buffer length (minimum of 20 bytes)
EDX = 'SMAP' signature
On return, the "continuation value" comes back in EBX, ECX holds the number of bytes written, and CF is set if there was an error.
The buffer structure (on read) is:
| Buffer |
| Offset | Length | Description |
| 0 | 4 | 32 lower bits (base address) |
| 4 | 4 | 32 higher bits (base address) |
| 8 | 8 | Length |
| 10h | 4 | Kind of memory block |
We then have to check the memory block type if we want to locate the tables this way: type 1 is normal memory, type 2 is "reserved", type 3 is for the ACPI tables, and type 4 is ACPI NVS memory. Note, however, that the TCPA table is not in the type-3 space with the other ACPI tables (it would be reclaimable there); it lives in reserved space, though it remains reachable through the ACPI root tables (and, where possible, through the RSDT pointer).
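Decoding one returned buffer can be sketched as follows (Python for illustration; a real consumer would read these 20-byte entries from the BIOS call loop above, and the sample addresses are hypothetical):

```python
import struct

E820_TYPES = {1: "usable RAM", 2: "reserved",
              3: "ACPI reclaimable", 4: "ACPI NVS"}

def parse_e820_entry(buf: bytes):
    """Decode one 20-byte buffer as returned by INT 15h/0E820h:
    64-bit base address (low dword first), 64-bit length, 32-bit type."""
    base, length, mem_type = struct.unpack_from("<QQI", buf, 0)
    return base, length, E820_TYPES.get(mem_type, "unknown")

# A hypothetical reserved area, such as the one holding the TCPA table
entry = struct.pack("<QQI", 0x000E0000, 0x20000, 2)
base, length, kind = parse_e820_entry(entry)
```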
2.4.1.- PCR registers detail
Every PCR register has a very specific purpose. Here is what each is used for:
PCR[0]: Logs all the CRTM executable code and the system's firmware.
PCR[1]: CPU microcode updates, peripheral configuration in the platform, CMOS and ESCD (Extended System Configuration Data) if it exists, and SMBIOS data (System Management BIOS: information about peripherals and their serial numbers, the BIOS, physical and cache memory, slots, etc.).
PCR[2]: Option ROM code, that is, executable read-only memory from non-booting peripherals such as a graphics card. (A booting peripheral's code is hashed as IPL code, not as Option ROM.)
PCR[3]: Option ROM data and configuration.
PCR[4]: IPL code, that is, boot code; e.g., for a hard disk the IPL would be the MBR code.
PCR[5]: Configuration and IPL data; e.g., for a hard disk this would be the partition table.
PCR[6]: State transitions (ACPI events such as putting the PC to sleep, etc.)
2.4.2.- PCR changes reaction
One detail remains unexplained after all this: how will the system react if a measurement has changed, meaning the system itself has changed? The TPM only provides functions to check whether this configuration has changed; the reaction is left unspecified by the TCPA specification. Wondering what the system's reaction would be (or at least what would be recommended), I e-mailed the TCPA staff, and they answered:
"How a consumer of the PCR contents (application, OS, etc.) uses the values in the PCR are up to that consumer.[...]
The reporting of changed contents is also an option for the consumer of the PCR. The application using the PCR can hide that fact that a value is changed and go through an upgrade process or it could ask the platform user to participate in the upgrade. Again these are all options that the application designer must take into account."
So security is ultimately delegated to the user and the programmer: how much security there is will depend on how the system manages these PCR changes. A bad software implementation could leave room for malicious code to install itself without the user noticing.
[ 2.5.- System boot-up ]
When we push the power button, the first thing to control our computer is the CRTM (equivalent, as I said, to the BIOS we all know). It checks whether anything has changed in itself (PCR[0]), in the platform (PCR[1]), or in the Option ROMs (PCR[2] and PCR[3]); it then hashes (or lets the measured Option ROMs hash) the boot code, i.e., the IPL such as the MBR, into PCR[4]. When all this is done, the IPL finally takes over execution.
The IPL then measures the IPL configuration data into PCR[5] and checks the beginning of the operating system, extending the "chain of trust" to it, so the system can be considered "secure".
[ 2.6.- TPM functions ]
2.6.1.- TPM Drivers
The TPM's functionality is exposed through several drivers.
The first of them uses INT 1Ah as the interface through which its features can be used, and is available only to the BIOS (which
deactivates it later). The BIOS also installs a driver into the non-reclaimable ACPI memory called the "Memory Present Driver", which is used later by the operating system.
The tunnelable functions in the API these drivers provide are dedicated to key generation for protected storage, user authentication, hashing, and event generation and certification, which have been described in the sections above or will be detailed below (subsections 2.6.4 and 2.6.5).
2.6.2.- Functions on the BIOS driver
These are the functions provided through the BIOS driver (others can be "tunneled" through to the TPM, but these are the ones specifically implemented as INT 1Ah functions):
* StatusCheck: The TPM answers with an "I exist!" message, providing its version number and, in ESI, a pointer to the event logs in memory (giving us another way to reach them without walking the ACPI tables).
* HashLogExtendEvent: Hashes the selected portion of memory, extends the result into the PCR register selected by the call, and generates the corresponding log entries.
* Auto-deactivation: Function AL=03h deactivates this driver, leaving the system able to execute without the TCPA subsystem's presence.
2.6.3.- Memory Present driver
Calls to the Memory Present Driver take a structure of four 32-bit fields (how it is passed to the TPM is something the driver developer will have to deal with). The fields are:
- pbInBuf:  DD ? ; Pointer to the input data
- pbInLen:  DD ? ; Maximum length of the input data
- pbOutBuf: DD ? ; Pointer to the output data buffer
- pbOutLen: DD ? ; Maximum length of this buffer; on return, the number of bytes read
- AL: Function selector.
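For illustration, the four-field parameter block could be packed like this (a Python stand-in for what a real driver would do in assembly; the addresses and lengths are hypothetical):

```python
import struct

def make_mp_param_block(in_addr: int, in_len: int,
                        out_addr: int, out_len: int) -> bytes:
    """Pack the four 32-bit fields (pbInBuf, pbInLen, pbOutBuf,
    pbOutLen) described above into a little-endian 16-byte block;
    the field names follow the text."""
    return struct.pack("<IIII", in_addr, in_len, out_addr, out_len)

# Hypothetical buffers: input at 0x1000 (64 bytes), output at 0x2000
blk = make_mp_param_block(0x1000, 64, 0x2000, 256)
```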
This driver implements three specific functions, the other ones being "tunnelable":
* A StatusCheck in the style of the BIOS driver's, to confirm the TPM is working as expected.
* An initialization function (MPInitTPM, AL=01h) that initializes the driver and establishes a communication channel with the TPM.
* An MPCloseTPM function (AL=02h) to close communication with the TPM.
2.6.4.- Protected Storage
TCPA provides several public/private key pairs. For security reasons, no key used for encryption may also be used for signing.
The TPM basically contains one RSA key pair called the SRK (Storage Root Key), which is generated inside the TPM and cannot be extracted in any way (hardware protection). With the help of sub-keys, the TPM acts as a portal to secure data stored outside itself, accessible only by means of the TPM's features. We can think of the SRK as the root of two trees: a non-migratable one made of TPM-generated keys, and a migratable one that can only be composed of externally generated keys.
One of the data types that can be stored outside the TPM but sealed by it is other public/private key pairs, arranged in a tree whose root is the SRK, whose inner nodes and branches are keys dedicated to encryption/decryption, and whose leaves are the signing keys:
|----------------| |------------------| |---------|
| Non migratable | | Ciphering and | | Signing |
| key inside TPM | ----> | deciphering keys | ----> | keys |
|----------------| |------------------| |---------|
The concept behind this is that the SRK protects the encryption/decryption keys: these intermediate keys are decrypted by the SRK, and they in turn decrypt both the data they protect (which was encrypted through TPM features, with a signed hash attached) and the signing key hanging beneath them (which verifies that signed hash by performing the inverse operation on the sealed data). Finally, the TPM provides mechanisms so this encrypted data can be migrated to and shared with other platforms.
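The tree-unwrapping idea can be modelled in a few lines. This toy uses XOR with a SHA-256 keystream as a stand-in for the TPM's RSA wrapping, purely to show the chain from the SRK downwards; it is not real cryptography, and all key values are made up:

```python
import hashlib

def toy_wrap(parent_key: bytes, child_key: bytes) -> bytes:
    """Toy stand-in for RSA key wrapping: XOR the child key with a
    keystream derived from the parent key. Illustrative only."""
    stream = hashlib.sha256(parent_key).digest()
    return bytes(a ^ b for a, b in zip(child_key, stream))

toy_unwrap = toy_wrap  # XOR is its own inverse

# SRK (root, never leaves the "TPM") -> storage key -> signing key
srk = b"S" * 32
storage_key = b"K" * 32
signing_key = b"G" * 32

wrapped_storage = toy_wrap(srk, storage_key)          # held outside the TPM
wrapped_signing = toy_wrap(storage_key, signing_key)  # child of the storage key

# Unsealing walks the tree from the root downwards: first the SRK
# recovers the storage key, which then recovers the signing key.
```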
An example use of this system is authentication on multiuser systems: user keys are stored in the nodes, and each user authenticates through them, using the encryption/decryption keys for their activities and the signing keys in the leaves to prove their identity (another feature that could be positive or negative depending on the OS implementation).
2.6.5.- New identities and the TTP
The TPM contains a unique identifier in order to assert its own identity to others; however, this identity is never used directly, only through a Certification Authority (CA), also called a Trusted Third Party (TTP). By having these identities certified by a TTP, TCPA tries to ensure that anyone who makes a request with a TCPA identity is the owner of a real TPM.
The idea behind this system is similar to Protected Storage: a public/private key pair dedicated to signing, along with an external certification attesting that it belongs to a TPM. Moreover, the TPM will only produce an identity through an internal function known as "TPM_MakeIdentity", which requires this external certification. Several identities can coexist in a TPM, but each NEEDS to be validated by one (and only one) Certification Authority.
Here we have the most serious privacy breach in TCPA, because of the intrusive steps we need to follow in order to create an identity:
* The TPM creates an internal key pair which will be used for signing as a new identity.
* It sends evidence to the Certification Authority that the TPM is genuine, consisting of platform data (signed by this newly generated key) together with the new key pair's public key; the CA validates the signature by reversing the signing operation, assuring itself that it comes from the signing key. Among the signed data sent to the CA is the CA's own public key, so the CA can be sure the request is directed to it and not to any other CA. In short, the CA checks that the data sent from a platform corresponds to a genuine TPM.
* The CA encrypts the certificate with that newly generated public key and sends it back, thereby indicating to the TPM which identity the certificate is for (of course, the one that needed certification).
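The exchange above can be sketched as follows. Real TCPA uses RSA signatures; here an HMAC key stands in for the identity key pair so the flow is runnable with the standard library alone, which means this CA re-computes the tag with the key itself instead of verifying a public-key signature. All names are hypothetical, not taken from the TCPA spec.

```python
import hashlib, hmac, os

# Illustrative sketch of the identity-certification exchange described
# above. HMAC stands in for RSA signing (an acknowledged simplification:
# a real CA would verify with the public key, not recompute the tag).

def make_identity(ca_pubkey: bytes):
    """Steps 1-2: TPM creates a new signing key and signs the evidence."""
    identity_key = os.urandom(32)            # stands in for an RSA key pair
    platform_data = b"platform-credentials"  # evidence about the TPM
    # The signed blob covers the platform data AND the target CA's key,
    # so the request cannot be redirected to a different CA.
    proof = hmac.new(identity_key, platform_data + ca_pubkey,
                     hashlib.sha256).digest()
    return identity_key, platform_data, proof

def ca_certify(ca_pubkey, identity_key, platform_data, proof):
    """Step 3: CA verifies the proof and issues a certificate."""
    expected = hmac.new(identity_key, platform_data + ca_pubkey,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(proof, expected):
        raise ValueError("proof not valid for this CA / this key")
    # In reality the certificate is sent back encrypted under the new key.
    return b"certificate-for-" + proof[:8]

ca_pub = b"CA-public-key"
key, data, proof = make_identity(ca_pub)
cert = ca_certify(ca_pub, key, data, proof)
```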
So the idea is that a key is certified in such a way that, when you use it, no one can tell which TPM it belongs to, while still knowing that it does belong to some TPM (so there can be no unique identification of the TPM holder, as its identity is aliased).
The TCPA FAQ fiercely defends this identity aliasing as working for the user's privacy. There is, they say, no unique identification of the TPM holder. Sadly, this statement is blatantly false, for two reasons:
- First of all, even though the identity is not related to the main (SRK) key (which encrypts the keys that make up our new identity), identification can be performed by other means. For example, our source IP address can identify the owner on the net when certifying with a CA, combined with internal data, or even while browsing if Internet access is obtained under another identification of the same system.
- Even worse, the data about our platform that is sent to the Certification Authority, known as TCPA_IDENTITY_PROOF, is a structure built from credentials referring to our platform and the TPM. TCPA claims these credentials aren't unique and can be repeated across different configurations (e.g., on the same model/version of a platform, the number representing the PlatformCred would remain the same). The TCPA specification becomes particularly obscure at this point: on the one hand we're told there is a unique identifier for this CA operation, while at the same time they say that the data sent in TCPA_IDENTITY_PROOF is not unique.
It's better to dig deeper into the specification, where we find that, among the data sent to the CA, there are three certificates about our system:
* TPM Endorsement Credential (endorsementCred)
* TPM Platform Credential (platformCred)
* TPM Conformance Credential (conformanceCred)
And in the endorsement credential structure, a public key unique to our system is sent (the TPM's public endorsement key), so there is an easy way to identify exactly which TPM we have.
So the TPM can be identified by the Certification Authority when it issues a certificate for a new identity generated by a TPM, even though the identity itself is not related to the TPM except for being held encrypted under a higher key in the key tree that can be related to that TPM. In our day-to-day browsing, or whatever activities we use that identity for, there is no unique identification of the owner; but there is always the possibility of finding out who the identity belongs to, because the CA knows it.
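The asymmetry above can be made concrete. In this sketch the field names are simplified (see the TCPA specification for the real layout), and the credential values are invented: the point is only that two machines of the same model share their platform and conformance credentials, yet differ in the endorsement credential, which carries the per-TPM public endorsement key.

```python
from dataclasses import dataclass
import os

# Sketch of the point above: even if other credentials repeat across
# machines, the endorsement credential carries a per-TPM public key.
# Field names are simplified; values are invented for illustration.

@dataclass
class IdentityProof:
    endorsement_cred: bytes   # contains the unique public endorsement key
    platform_cred: bytes      # same for every unit of this platform model
    conformance_cred: bytes   # same for every conforming TPM design

def proof_for(tpm_endorsement_pubkey: bytes) -> IdentityProof:
    return IdentityProof(
        endorsement_cred=tpm_endorsement_pubkey,
        platform_cred=b"model-X-rev-1",
        conformance_cred=b"tcpa-1.1-conformant",
    )

# Two different machines of the same model:
a = proof_for(os.urandom(32))
b = proof_for(os.urandom(32))
assert a.platform_cred == b.platform_cred        # not unique, as TCPA says
assert a.endorsement_cred != b.endorsement_cred  # uniquely identifies the TPM
```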
[ 3.- Palladium Analysis ]
[ 3.1.- Palladium introduction ]
Now we start with the operating system idea that's driving us mad: Palladium.
This article started by noting that Microsoft has talked to the media about some TCPA features as if they belonged to Palladium, in another of this company's marketing games. If we look at Microsoft's press releases on Palladium, the "big advantages" they describe for their still-uncoded operating system are simply TCPA characteristics: the file security they describe (based on TCPA's public/private key pairs), and even the hardware/software component trustworthiness they offer, are part of TCPA.
Microsoft also tells us some things that are absolutely false. They say, for example, that "Trusted code runs in memory that is physically isolated, protected and inaccessible to the rest of the system, making it inherently impervious to viruses, spyware or other software attacks", referring to their kernel code. We have seen nothing in TCPA about physically isolated memory, and the explanation is easy: Microsoft, as we'll see, is lying again.
[ 3.2.- Palladium's kernel implementation ]
Microsoft tells us about two basic components in the kernel:
- TOR: the component that would control system calls from programs running under Palladium and store critical data from these programs. It is what would protect the memory zone where the kernel is kept, along with the encrypted data held for applications and user information. That is, this "Trusted Operating Root" (that's what the acronym means) is just a part of the operating system kernel.
- Trusted Agents: programs executed in user mode but inside what MS calls the "trusted space". They would use TOR functions to encrypt data and store it in kernel space, where it could only be retrieved by these same agents. A trusted agent's integrity would be checked by the TOR hashing the zone of the application that is performing the system call, assuring that it is being used correctly. This also applies to system calls for memory management and any other critical system function; that is, Microsoft calls any caller of the kernel API a "Trusted Agent".
Up to this point I've used Microsoft's language. Now I'll use plain language: the TOR is just an ordinary kernel that, of course, sits in a part of memory protected from user processes, but it is not physically isolated from ordinary memory; it is protected by standard memory protection, like EVERY kernel. "Trusted Agents" is a lot of babbling that only means: "a part of a program that can call the kernel API; the kernel hashes that part so it can verify its integrity". The system structure remains the same as the old one, communicating by messages (the old Minix style that led to Windows NT and so on). The system works with an escalation of privileges, as these "Trusted Agents" are more privileged when calling the kernel's API than ordinary user programs.
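Stripped of the babbling, the integrity check amounts to this kind of mechanism, sketched here with invented byte strings standing in for real process memory; nothing in this snippet comes from actual Palladium code.

```python
import hashlib

# Minimal sketch of the integrity check described above: the kernel keeps
# a known-good hash of the code region allowed to call its API, and
# re-hashes that region on each call. Byte strings are stand-ins for
# real process memory.

KNOWN_GOOD = {}

def register_agent(name: str, code: bytes) -> None:
    KNOWN_GOOD[name] = hashlib.sha256(code).hexdigest()

def check_call(name: str, code_now: bytes) -> bool:
    """Allow the call only if the caller's code region is unmodified."""
    return KNOWN_GOOD.get(name) == hashlib.sha256(code_now).hexdigest()

register_agent("media_player", b"\x90\x90\xc3")         # original code
assert check_call("media_player", b"\x90\x90\xc3")      # intact: allowed
assert not check_call("media_player", b"\xcc\x90\xc3")  # patched: refused
```

Note what this does and does not buy: it detects a modified caller, but once an attacker is running in ring 0 he can modify the table of known-good hashes itself.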
So Microsoft brings us nothing new in security, and Palladium will still have viruses. When someone finds a vulnerability and gets the processor into ring 0, it will be as ring 0 as it can be now, and no TOR super-strength mechanism will prevent it. The other thing Microsoft says about this OS's security is that user-space processes can't access the TOR because it's in protected kernel memory, and so they can't access the private data in kernel memory. That data can only be retrieved by a certified entity that has its own data space registered with the TOR.
Now we've learnt that Palladium will have kernel memory and user memory (which they present as a great new feature :-) ), with the TOR holding the key to the user's private data stored in kernel space. So, apart from the TCPA-backed hashing of the functions that make system calls, and the data storage in kernel space (which could be revealed if kernel space is reached by a rogue program), Palladium brings us nothing new, security-wise.
That is all the innovation Microsoft offers in Palladium's security; but as little as they offer there, they take that much more from privacy. And that's what we're talking about when we start looking at the "external TORs".
[ 3.3.- The external TORs ]
To give the user what is, IMHO, a false sense of security, Palladium will use external TORs: external entities we are supposed to trust to authenticate parts of the operating system, so that we know they have not been modified. They would take OS/application data, hash it, and tell us whether it is safe.
Now there are two possibilities. The first is that only a very specific part of our data could be sent to the entities that certify our system's integrity (a far more perverse version of the CAs we discussed for TCPA!). This could be trusted if Microsoft's code were open source, but unfortunately it is not and will not be. The other possibility is that the organization itself decides which data is sent. Either way, we would not know what we are sending to these external entities that are meant to look after our "distributed security".
It doesn't stop here, because Microsoft's target isn't user security but the pretence of it. The real importance of the external TORs is what has been singled out as the real evil of this operating system, something that could bring us to a reality like the one Richard Stallman described in his "Right to Read" story. Here Palladium's application of DRM, Digital Rights Management, comes in.
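The external-TOR idea reduces to a remote approval check, sketched here under stated assumptions: the "approved" list and component names are invented, and the real protocol is unspecified by Microsoft, which is exactly the problem described above.

```python
import hashlib

# Hedged sketch of the external-TOR idea: the local system sends a hash
# of an OS component to an external entity, which compares it against
# its own list of "approved" hashes. All names and values are invented.

APPROVED = {hashlib.sha256(b"palladium-kernel-v1").hexdigest()}

def local_report(component: bytes) -> str:
    # The user has no way to audit which data gets hashed and reported.
    return hashlib.sha256(component).hexdigest()

def external_tor_verdict(digest: str) -> bool:
    # The external entity, not the user, decides what counts as "safe".
    return digest in APPROVED

assert external_tor_verdict(local_report(b"palladium-kernel-v1"))
assert not external_tor_verdict(local_report(b"patched-kernel"))
```

Note that both possibilities in the text fit this shape; they differ only in who chooses the `component` bytes that get reported.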
[ 3.4.- Digital Rights Management ]
Now we reach DRM, a Microsoft system consisting of a series of protocols designed so that a user can only watch a movie, read a book, hear a song or read an e-mail after authenticating himself. Restrictions can be established so that access is time-limited, or so that a specific number of views is allowed before the content is erased from our hard disk, et cetera. DRM is a system whose main goal is defending copyright schemes. But DRM doesn't stop there: it includes ways to monitor "intellectual property accesses", as they frighteningly call them.
In fact, DRM itself states that its two main objectives are managing reproduction permissions for anything dealing with electronic property, and monitoring how that property is used (even when the use is legal). That is, they explain that the "tracking" system built into DRM knows how many times, and when, we watch a specific movie we have bought permission to play 10 times. Another example Microsoft proposes is "preview" and "free" (no-money) versions of electronic documents: even previews and free versions of content (like watching the first 5 minutes of a movie or the first 15 pages of a book) are controlled and monitored.
DRM is already at work in Windows Media Player, where this technology is meant to be fully functional even before Palladium. Files are encrypted and Microsoft holds the key that opens them, so the user needs to buy a license holding data such as how many times the video can be played or the period during which we can view it.
Microsoft's idea is that Palladium will let them base DRM protocols on hardware keys. Today they use software, and that is not good enough for them. With Palladium, they can act as an external TOR. This is not fully specified (though Microsoft wants DRM to work with Palladium using the external TORs), but it's easy to deduce how it would happen:
- Our average Joe wants a movie, so Joe opens Palladium Media Player and sends a public key from the TPM along with his request (which would also be encrypted, as it holds bank data), so he can watch Matrix 4 on his computer for one day. The server records that Joe has rented Matrix 4 for a day and transfers the corresponding money to the copyright holders.
- Now Joe wants to watch the movie he just rented. When he runs Palladium Media Player, it can behave in two different ways (the second is more realistic, as it uses the external TORs):
A) Palladium Media Player checks the license the TOR has stored inside the OS kernel (I wonder how big the Palladium kernel will become after some time of use; will it look like the registry?) and checks whether Joe is allowed to watch this file (how many times he has watched the video, the system date, etc.).
B) Even worse, Palladium Media Player connects to the place where Matrix 4 was rented for one day and tells it the user wants to watch the movie; the license is checked there and, if it is valid, a decryption key for the video file is sent to Palladium Media Player, to be deleted once the movie has ended. The server now knows that Joe tried to watch the movie, the time he did it, whether his licence was valid, and whether the operation succeeded (and likewise for books, music...).
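The server side of possibility B can be sketched as follows. Every field name, the license contents and the logging are invented for illustration; no real DRM protocol is being reproduced, only the logic deduced above. The privacy breach is the log: the server records every attempt, successful or not.

```python
import time
from typing import Optional

# Hypothetical sketch of the server-side license check in possibility B.
# Licenses, keys and log format are invented for illustration.

LICENSES = {"joe/matrix4": {"expires": time.time() + 86400, "plays_left": 3}}
ACCESS_LOG = []   # the privacy problem: every attempt is recorded

def request_key(user_file: str) -> Optional[bytes]:
    lic = LICENSES.get(user_file)
    ok = (lic is not None and time.time() < lic["expires"]
          and lic["plays_left"] > 0)
    # The server learns who watched what, when, and whether it worked.
    ACCESS_LOG.append((user_file, time.time(), ok))
    if not ok:
        return None
    lic["plays_left"] -= 1
    return b"per-play-decryption-key"   # deleted client-side after playback

assert request_key("joe/matrix4") is not None   # valid rental: key sent
assert request_key("joe/terminator") is None    # no license: refused
assert len(ACCESS_LOG) == 2                     # both attempts were logged
```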
So Palladium's aim isn't only to provide greater security for intellectual property: there is a great privacy breach in all this. Today the average Joe uses Outlook and Internet Explorer just because they come installed with Windows. The future Palladium user might use Palladium Media Player, Palladium Music Player, document-reading programs (such as Word), and other software standardized and distributed with the operating system itself. Then, if I go to Amazon and get a trial copy of a book that only lets me view 10 pages, Amazon will monitor through DRM how many times, and when, I viewed those pages (which also makes me wonder: will public libraries disappear in the future? How do they deal with electronic data that can be copied?).
[ 4.- Conclusions ]
Now I'll be more subjective, though I expect the reader has formed his own opinion on the matter by now. At first glance it would seem that TCPA is mostly good and that the complete evil is Palladium. TCPA is operating-system agnostic, it's an open standard, and it doesn't even consider whether a device is "TCPA approved" or not.
Even so, TCPA has a hidden and terrible face. It assumes we can rely on the Certification Authorities, who would issue the certificates that allow us to identify ourselves with the semi-anonymous new identities a TCPA system can create. The problem is, these CAs can identify anyone who made a certification request and, in the end, relate a user's actions to a unique identity.
That's probably the worst part of TCPA (they tell us this kind of certification is "necessary"). We need to trust these authorizing entities that DO know who we are, and the TCPA system relies on that trust. If TCPA finally arrives, the fight against it might focus on fighting the Certification Authority idea. If TCPA becomes a standard, an interesting opposing action would be creating autonomous and anonymous CAs that break the thread relating a user to his TCPA identity (though we would still have a big problem if, for example, a Microsoft application required a Microsoft certificate: every user of that application would be easily identified, putting a stop to this way of disobeying the CAs). The only way out would be open source systems acting as CAs that destroy every relationship between the identification and the system's owner.
What does this mean? My personal view is that TCPA+Palladium doesn't mean we are identifying ourselves to everyone (while browsing, for example). That identification could only be achieved by the Certification Authorities, companies which would hold the key to ourselves. They might identify us, but only a few will have the privilege of doing so.
Even worse is the Palladium OS, because Microsoft will probably still be the most common OS provider if TCPA arrives.
We might then face terrible scenarios, such as the indiscriminate use of DRM as explained in this article and in the Palladium FAQs. Protocols could be used, in proprietary systems, to identify us whenever messages were signed with a TPM identity. That is, an application could work so that, when sending a message to a server over the Internet, it hashed the message tied to the identity; that would authenticate the client, but... the server would also know his every movement, and could relate this authentication in message protocols to his personal TPM via an intermediate CA, or by whatever other method Palladium chose, since, in the end, Palladium's code won't be open.
[ 5.- Appendix A: Bibliography ]
In this section you'll find some of the web pages holding more information on TCPA and Palladium, official and unofficial, for and against.
The materials used for this article can be found in these addresses:
TCPA/Palladium FAQ by Ross Anderson
And the most fun (check them out: the press releases and the FAQ contradict each other every now and then; Microsoft refuses to give further information on their closed system called Palladium, and while they often act as if they were going to turn into a hardware manufacturer, sometimes they admit that it really is a TCPA implementation, though this is usually kept hidden for the sake of making Palladium sound bigger. Another thing I fear right now is that Microsoft wants to make its own proprietary version of TCPA, as their latest FAQ modifications suggest... but that would lead to the biggest computer-related business fight that has ever existed o_O)
A Business Overview (technical): http://www.microsoft.com/PressPass/features/2002/jul02/0724palladiumwp.asp
Microsoft's fun/interesting press releases on Palladium
[ 6.- Appendix B: Greetings ]
To the Kuro5hin readers, especially Randall Burns, who rewrote the introduction to this article in English ;)