TCPA and Palladium technical analysis

By wintah in Technology
Mon Oct 28, 2002 at 11:27:44 AM EST
Tags: Security

This article aims to be an objective technical analysis of the TCPA hardware system and the Palladium operating system. It covers the most important technical details of TCPA and the (dis)information Microsoft has given about Palladium.


The full text is reproduced in this article, though you can also view updated versions in English (TCPA and Palladium technical analysis) or Spanish (Análisis técnico de TCPA y Palladium).


TCPA and Palladium Technical Analysis

by Wintermute

wintrmute@retemail.es

v1.01

This article presents a technical analysis of the TCPA hardware system and the Palladium operating system. Palladium and TCPA have been covered in some depth on Slashdot and in various FAQs. Unfortunately, much of the information available from these sources is highly subjective and confusing (for example, TCPA and Palladium are presented as if they were the same thing). Reliable and objective technical information on Palladium and TCPA has been hard to come by, and Microsoft's actions have not made obtaining such information any easier.

My highest personal security concern is privacy. To evaluate the ability of TCPA and Palladium to protect privacy, I needed technical facts about these systems, not just marketing hype or politically motivated criticism of Microsoft. The investigation that led to this article started specifically as an evaluation of the privacy protection characteristics of TCPA and Palladium.

After conducting my investigation of TCPA and Palladium, I have come to the conclusion that TCPA has some very positive characteristics and some very negative ones. Among the most negative are the Certification Authorities and the unique identification of users they make possible wherever these CAs are involved. However, the most negative single feature of TCPA and Palladium is the nature of Palladium itself and the philosophy that has driven Microsoft's development and promotion of it.

I don't pretend to be without bias on the topics of security and privacy. I am trying to be up front about my personal bias so that readers can better shape their own opinions, and use this article to supplement the other information more readily available on TCPA and Palladium. The nature of this article is technical, but I have attempted to make the most important parts accessible to a wide range of readers, because I feel the emergence of TCPA and Palladium may have broad social and political impact.

This text excludes analysis of some important TCPA technical features, most notably the local user authentication methods, platform acquisition, and the TPM call details described in the TCPA specification. I consider these details unnecessary for understanding what TCPA and Palladium are all about and what the likely effects of their widespread use would be.

This article is the final product of several months of investigation, including reading the published specifications. The author would greatly appreciate any technical or stylistic suggestions on how to improve it.

Index

1.- TCPA introduction
 1.1.- TCPA origins
 1.2.- TCPA implications

2.- TCPA Analysis
 2.1.- Which components change?
 2.2.- CRTM (Core Root of Trust for Measurement)
 2.3.- TPM (Trusted Platform Module)
  2.3.1.- System measurement values
  2.3.2.- Cryptographic algorithms
  2.3.3.- User-TPM authentication
 2.4.- PCR logs
  2.4.1.- PCR registers detail
  2.4.2.- PCR changes reaction
 2.5.- System boot-up
 2.6.- TPM functions
  2.6.1.- TPM Drivers
  2.6.2.- Functions on the BIOS driver
  2.6.3.- Memory Present Driver
  2.6.4.- Protected Storage
  2.6.5.- New identities and the TTP

3.- Palladium Analysis
 3.1.- Palladium introduction
 3.2.- Palladium's kernel implementation
 3.3.- The external TORs
 3.4.- Digital Rights Management

4.- Conclusions

5.- Appendix A: Bibliography
6.- Appendix B: Greetings


[ 1.- TCPA introduction ]

[ 1.1.- TCPA origins ]

There is a great deal of disinformation about TCPA and Palladium, encouraged by media ignorance and by Microsoft's marketing techniques.

Newsweek released an article, later copied by MSNBC, which presented what now seems to pass for the only truth on the subject: that there is some chip called "Fritz" that Microsoft made for its Palladium operating system, some kind of obscure thing attached to our PCs that decides which programs we can use and which we can't.

Though the truth behind all this is worrying, it doesn't have much in common with the picture I've just described. Microsoft has contributed to this point of view by lying: it talks about TCPA security features as if they belonged to Palladium, trying to convince its potential customers that TCPA and Palladium are inseparable. Microsoft even lies so flagrantly that on its website it says Palladium (a product that hasn't even been coded yet) offers "security no other operating system can offer now", basing this assertion not on its product's security but on TCPA's security features (which could be used by ANY operating system).

TCPA is an alliance of some of the most important computer, financial and communications businesses, aimed at creating a common specification dedicated to "growing users' trust" in information security (this being the "official version"; as we will see, an operating system like Palladium would have mostly negative consequences for the end user, and the same goes for some of TCPA's characteristics).

TCPA is a public standard, an architectural change to the PC accomplished by installing two new "passive" components; that is, they don't control normal computer use, but provide it with some features. The problem is how those features are being used...

This alliance was first established by Compaq, HP, IBM, Intel and Microsoft, though many other companies have joined (roughly 200 in total as of September 2002). Some of these are Adobe, American Express, American Megatrends, AMD, Dell, Fujitsu, Motorola, National Semiconductor, NEC, Novell, Philips, Samsung, Siemens, SMSC, Toshiba, Tripwire, Verisign and many more (the full list seems to have disappeared from the Trusted Computing homepage).

As you can easily deduce from this huge number of companies, some of them the main semiconductor producers in the world, TCPA is something very serious: a joint effort by the most important computing and telecommunications companies in the world to radically change the idea of computer equipment. We need to know what's happening before it's too late.


[ 1.2.- TCPA implications ]

This new system isn't as efficient and secure as its proposers tell us; it has some features that could strengthen computer security, but all that glitters is not gold.

Their FAQ says that with TCPA "access to data can be denied to malicious code such as a virus in a platform, because this intrusion necessarily changes the platform software state". As you will later deduce from the technical analysis, this isn't true, and neither is the claim that "you can trust the software environment on the platform is operating as desired".

Still, some statements that follow in the FAQ are true: it is said that this system would strengthen trust in public/private key systems. This is perhaps the only place where TCPA could mean something positive, as the private key can only be broken by brute force and is sealed and hardware protected.

The negative face of TCPA is the Certification Authorities. The user identities generated by TCPA (which do not identify the platform directly) need to be certified by third parties in which we are supposed to trust, authorities to which we send data that uniquely identifies our system (much as Intel intended when it tried to put software-accessible unique serial numbers in its processors). The method TCPA uses, described in this article, is more indirect but still dangerous to the user.



[ 2.- TCPA Analysis ]

Here we analyze in full the behaviour of the TCPA system on PC platforms, as described in the public standards published at www.trustedpc.org and in the complementary external specifications referenced by the TCPA documentation.

The specification provided by TCPA itself is quite complete and leaves little room for misinterpretation (TCPA-compliant hardware does not need to be certified by any authority). However, it contains sub-references to other specifications and systems described elsewhere, and they are all written in a "specification language" that really needs to be deciphered to gain real knowledge of how it all works. The real implementation of TCPA should be almost identical to the details I provide; since all the companies united in TCPA may start producing hardware fully compatible with the specification, we can know with a high degree of confidence how this system will work.

[ 2.1.- Which components change? ]

Today, the architectural organization of a PC (following the TCPA nomenclature) is the following, the higher levels being the most external and the lower ones the most internal:

 |--------------------|
 |       System       | - Peripherals, drivers, applications
 |--------------------|
 |      Platform      | - Disk units, cards, power supply
 |--------------------|
 |    Motherboard     | - CPU, memory, connection buses
 |--------------------|
 |   Microprocessor   |
 |--------------------|

The new model proposed by TCPA (the change is, however, smaller than they claim) introduces these architectural changes:

 |--------------------|
 |       System       | - Without changes
 |--------------------|
 |      Platform      | - "TCPA subsystem" is added
 |--------------------|
 |    Motherboard     | - Without changes
 |--------------------|
 |   Microprocessor   | - Same
 |--------------------|
 |        TBB         | - Composed of the TPM and CRTM
 |--------------------|

TCPA tells us there are two changes to the generic PC architecture: a TCPA subsystem is added at the platform level, and a block called the TBB (Trusted Building Block) is added at a level below the processor itself. This block is considered the only part of the system that can be trusted initially.

The TBB is composed of two parts: the CRTM (Core Root of Trust for Measurement) and the TPM (Trusted Platform Module). When we describe them in detail, we'll notice that this architectural classification isn't very accurate. The CRTM is just a "trusted BIOS" where execution begins after a reset.

The TPM, which we'll also cover in detail, is just an integrated peripheral that performs some specific functions (which doesn't fit the way TCPA has been presented as a complete architectural change). The TCPA subsystem is the mechanism that connects these elements and attaches them to the PC architecture.

In any case, the TPM can be disabled by the user at boot time, and the whole TCPA system is unnecessary for a TCPA-compliant PC to work: an option is provided to deactivate it.

[ 2.2.- CRTM (Core Root of Trust for Measurement) ]

This is where execution always begins when the system starts running, so TCPA considers it absolutely necessary that its integrity can be assured: it must not be modified in any way if the system is to be considered secure, and the condition is imposed that every reset must make the processor start executing inside the CRTM. It is effectively the equivalent of the BIOS in our current PCs and, like the BIOS, it will be updateable (supposedly, only by the CRTM vendor).

One of the most interesting points here is that the company that builds the CRTM is responsible for providing updates and code maintenance for it. TCPA doesn't say anything about how this is done; they just say it is necessary to provide mechanisms so these actions can be performed, and "forget" to talk about security on this point.

When execution starts at the CRTM, it checks its own integrity, the system components, the Option ROMs of the peripherals, and the code that will be executed next (the IPL, for example), extending what they call the "chain of trust".

[ 2.3.- TPM (Trusted Platform Module) ]

This is the most important component, and it must be bound to the motherboard in one of two possible ways:

 - The TPM is physically bound to the platform.

 - The TPM is a SmartCard placed outside the PC (communicating over a USB port or similar). Communication between the TPM and the platform would be protected by some cryptographic method, such as a shared secret between the platform and the SmartCard, but in a way that only one TPM can be related to one platform.

Regardless of how it is implemented, the TPM works as a special kind of SmartCard. It provides functions that strengthen the system's integrity by means of a rewritable memory and a sealed memory (not accessible from the outside, and never revealed by the TPM), and it has several microprogrammed cryptographic algorithms.

The TPM's components are described below.

 2.3.1.- System measurement values
 ---------------------------------

In order to assure the integrity of a system's components, the TPM uses eight registers, PCR[0] to PCR[7], each 160 bits wide (the size of a SHA-1 digest), to store values that record system measurements (fully described in section 2.4.1).

The design dilemma was that eight registers are far too few to measure the whole system's integrity, while many more registers would make the TPM too expensive. A circular memory would be insecure because data could be overwritten (lack of space), and a stack of these values would also suffer from lack of space and could generate inconsistencies.

So the TPM works by initializing every register to a known value at the start, and every time a new element needs to be added to the sequence, a hash is computed over the concatenation of the current register value and the new measurement. Let's see an example:

* The PCR[x] register is initialized by the TPM itself to a value it knows, and we want to add a measurement to it (e.g. a hash over the hard disk's partition table data):

  |-------------------------------|   |---------------------|
  |         PCR register          | + |      Hash data      |
  |-------------------------------|   |---------------------|

  * So, we concatenate the 160 bits from the PCR register with the 160 bits of the hash we computed, generating a 320-bit sequence:

  |----------------------------------------|
  | Sequence concatenated with the new one |
  |----------------------------------------|

* Now, we take this sequence and hash it again, giving the 160 bits that are finally stored in PCR[x], to which we wanted to add the new measurement:

  |-------------------------------|  
  |        Hashed sequence        |  - New PCR[x] value
  |-------------------------------|  
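
To make the extend operation above concrete, here is a minimal C sketch of it. OpenSSL's SHA-1 is used purely for illustration; in a real platform this computation happens inside the TPM itself, not in host code, so treat this only as a description of the arithmetic.

  #include <string.h>
  #include <openssl/sha.h>

  #define PCR_SIZE SHA_DIGEST_LENGTH              /* 20 bytes = 160 bits */

  /* new_PCR = SHA-1( old_PCR || measurement ), both values 20 bytes long */
  void pcr_extend(unsigned char pcr[PCR_SIZE],
                  const unsigned char measurement[PCR_SIZE])
  {
      unsigned char buf[2 * PCR_SIZE];

      memcpy(buf, pcr, PCR_SIZE);                    /* current PCR value */
      memcpy(buf + PCR_SIZE, measurement, PCR_SIZE); /* new measurement   */
      SHA1(buf, sizeof buf, pcr);                    /* hash of the concatenation
                                                        becomes the new PCR value */
  }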

These values are the root of the trust the system places in its dependent data and devices. They are communicated to the external system by signing them with a private key the TPM never reveals (and that is only used to make these signatures), so the TPM can authenticate itself as the author of the information being released from the PCR registers, and the receiving entity can be sure it comes from the TPM.

At the same time, this communication is protected against replay attacks: every request made to the TPM has a random value (nonce) attached, and both the data and the random value are signed by the TPM, proving that the TPM is working at that moment and answering exactly that request.

Now, there's a problem: if we need to check that nothing has changed in our system, the measurement operations need to be replayed from beginning to end (so that the final result stored in the TPM is reproduced). That is, if a PCR[x] contains the result of measuring three components, then at integrity-checking time a way must be provided to repeat the same analysis in the same chronological order and compare the result against the one stored in the PCR register. That's what the "activity logs" are for (these are detailed in section 2.4).

 2.3.2.- Cryptographic algorithms
 --------------------------------

The TPM contains several microprogrammed cryptographic algorithms, which can be trusted insofar as they cannot be modified by software. These are:

* SHA-1: Hash algorithm used for the system integrity measurements stored in the PCRs and their logs.

* RSA: Used for several things: a private key signs the data the TPM provides to the external world; the algorithm is also used to sign data when the TPM's identity needs to be verified, and to encrypt/decrypt data and the sub-tree of crypto keys. There is only one source key, but several signing identities can be created (this is detailed in other sections).

* RNG: Semi-random number generation, used to check that the system is alive and to protect against replay attacks. It tries to achieve randomness by applying a hash function to semi-random data. The source of the random seed needs to be cheap to implement and can be a weak point; temperature measurements or key presses are proposed for this purpose.

* 3DES: Triple DES use isn't specified and isn't considered important. Using symmetric ciphers is discouraged, though it might be useful in configurations where the TPM is an external SmartCard and communication between the TPM and the platform is based on a shared secret.

 2.3.3.- User-TPM authentication
 -------------------------------

The TPM has an internal logic based on four possible basic states:

- Permanent/inactive: The user has decided that his data is stored in a non-volatile way (the TPM belongs to him, so he is the only one who can use it), but the TPM being inactive means the user has not authenticated himself yet.

- Not permanent/inactive: Here the TPM hasn't stored any information about its owner and isn't active; this is the state the TPM ships in, awaiting an owner.

- Active (whether permanent or not): This is the way the TPM is meant to work (for security reasons, the platform wouldn't work without the TPM). This doesn't mean the TPM can't later be deactivated by software, but at the very least the user needs to authenticate to use the system.

The Active/Not Permanent configuration isn't desirable at all: a TPM without an owner can only perform a few operations, such as telling the outside world it exists, but it wouldn't let the platform work.

One of the biggest problems the TCPA people themselves recognize is the radical deactivation policy of the TPM. Apart from deactivation performed by software, the TPM can be deactivated if it receives an unauthenticated message (this message can even be a remote command, as long as it can reach the TPM that way), forcing a complete system reset. This opens the door, and the TCPA people admit it, to a whole range of DoS attacks against any TCPA-based platform.

[ 2.4.- PCR Logs ]

Now back to the PCRs, there is a problem: the TPM holds the PCR values in its protected space, but how can it check whether these values are correct when different things have been measured, hashed, concatenated and rehashed into the same PCR registers? A series of "logs" of these actions is stored, along with a description of what has been measured and of the measurement itself, making it possible to check the PCR registers against the system.

Here is one of the most interesting parts of TCPA: although they insist that the TPM provides great security precisely because it has only eight fixed-length registers, they need a series of logs to reconstruct how those operations were performed, and these logs have a variable size (exactly the property they wanted to avoid, as I noted when first discussing the PCR registers). All they have done is move the problem outside the TPM. The objective is to make the TPM secure, but to do so they delegate the length variability and access problem to another device.

These logs are stored in the system's firmware, using a standard called ACPI (Advanced Configuration and Power Interface), created by Microsoft, Phoenix and Toshiba, which will now be promoted as a required standard (making it something of a monopolistic trap in favour of ACPI and the businesses behind it).

The ACPI specification itself aims to define a BIOS-level interface describing the relationship between a motherboard and its devices, and how they relate to the operating system (and its API), with the stated objective of building more robust Plug&Play systems and better peripheral control (configuration, power saving, etc.).

This ACPI implementation is now being used by TCPA. An important part of it is its table system: these tables are mapped into the operating system's kernel space so the OS can deal with them directly.

The start of these tables is something you can locate yourself on your home PC (though it won't have the TCPA capabilities yet ;-) ). On i386 platforms, the entry point is the RSDP (Root System Description Pointer) structure, which can be located by its signature in memory and which points to the RSDT. The RSDP structure is described below:

|----------------------------------------------------------------------|
|           RSDP structure (Root System Description Pointer)           |
|---------|------------|-----------------------------------------------|
| Offset  |  Length    |  Description                                  |
|---------|------------|-----------------------------------------------|
| 0       |  8         |  Identifying text string, "RSD PTR "          |
| 8       |  1         |  Checksum                                     |
| 9       |  6         |  OEM identifier                               |
| 15      |  1         |  Version number                               |
| 16      |  4         |  RSDT table PHYSICAL address                  |
| 20      |  4         |  Table length in bytes                        |
| 24      |  8         |  64-bit XSDT address                          |
| 32      |  1         |  Extended checksum                            |
| 33      |  3         |  Reserved                                     |
|---------|------------|-----------------------------------------------|

What interests us is the pointer to the RSDT (Root System Description Table), which is where all the TCPA additions to standard ACPI hang. There we'll find a new pointer leading to the "TCPA" table.

The way to find these subtables is easy: 36 bytes after the beginning of the RSDT (that is, after its standard header) there is an array of 32-bit pointers we can walk until we find the table we want. The first data in each of these tables is an identifier giving the table's name (e.g. the RSDT itself begins with "RSDT" as its first four bytes).
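
As an illustration of the paragraph above, here is a hedged C sketch of walking the RSDT's pointer array to find a table by its signature (for example "TCPA"). The structure follows the standard ACPI table header; acpi_map() is a hypothetical helper standing in for whatever mechanism the OS uses to map a 32-bit physical address into its own address space.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  struct acpi_header {              /* standard 36-byte ACPI table header */
      char     signature[4];        /* e.g. "RSDT", "FACP", "TCPA"        */
      uint32_t length;              /* total table length in bytes        */
      uint8_t  revision;
      uint8_t  checksum;
      char     oem_id[6];
      char     oem_table_id[8];
      uint32_t oem_revision;
      uint32_t creator_id;
      uint32_t creator_revision;
  } __attribute__((packed));

  /* Hypothetical mapping helper: a real kernel would use its own primitive. */
  extern const struct acpi_header *acpi_map(uint32_t phys_addr);

  const struct acpi_header *acpi_find_table(const struct acpi_header *rsdt,
                                            const char sig[4])
  {
      /* The 32-bit entry pointers start right after the 36-byte header. */
      const uint32_t *entries =
          (const uint32_t *)((const uint8_t *)rsdt + sizeof *rsdt);
      size_t n = (rsdt->length - sizeof *rsdt) / sizeof(uint32_t);

      for (size_t i = 0; i < n; i++) {
          const struct acpi_header *t = acpi_map(entries[i]);
          if (t != NULL && memcmp(t->signature, sig, 4) == 0)
              return t;
      }
      return NULL;                  /* table not present */
  }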

 This RSDT will point to two important places:

 

- FACP Table (Fixed ACPI Description Table): This is the less important one. It contains information about the system's devices and parameters for configuring their Plug&Play characteristics, as well as pointers to other tables such as the DSDT (an extended table describing hardware temperature self-measurement and other items that didn't fit in the FACP) or the FACS table (dedicated to synchronization and control). In any case, the only information here that could matter (the FACS data identifying the hardware configuration) is now mostly ignored, as the new TCPA structures hold it.

 |---------|          |--------|          |--------|
 |  RSDPS  | -------) |  RSDT  | -------) |  TCPA  |
 |---------|          |--------|          |--------|
                          |
                          |      |--------|
                          ----)  |  FACP  | --) ... DSDT & FACS ...
                                 |--------|

- TCPA Table: Here's the important part. This new table is where the logs are stored. It is kept with the BIOS-related information in the ACPI manner, so the system maps it (with some small differences, as it cannot be reclaimed by the OS for other uses), and it has a variable length where, after some header fields describing the table (vendor data and so on), the logs are stored:

 * TCPA entry:

 |------|--------|--------------------------------------------------|
 |Offset| Length |                  Stored data                     |
 |------|--------|--------------------------------------------------|
 | 0    | 4      | Text string 'TCPA'                               |
 | 4    | 4      | Complete TCPA table length                       |
 | 8    | 1      | Revision number of the table                     |
 | 9    | 1      | Checksum                                         |
 | 0Ah  | 6      | Vendor identifier (text)                         |
 | 10h  | 8      | Vendor's model identifier                        |
 | 18h  | 4      | TCPA revision number for this model              |
 | 1Ch  | 4      | TCPA table creator's identifier                  |
 | 20h  | 4      | Revision number for the value above              |
 | 24h  | 2      | Reserved (default: 0000h)                        |
 | 26h  | 4      | Maximum length (bytes) of the log area in the    |
 |      |        | system before booting is performed               |
 | 2Ah  | 8      | 64-bit physical address of the area where the    |
 |      |        | event log is stored                              |
 |------|--------|--------------------------------------------------|
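
The same layout can also be written as a packed C structure; the field names below are mine, chosen only to mirror the table above, and are not taken from the specification.

  #include <stdint.h>

  struct tcpa_acpi_table {
      char     signature[4];        /* "TCPA"                                  */
      uint32_t length;              /* complete table length                   */
      uint8_t  revision;
      uint8_t  checksum;
      char     vendor_id[6];
      char     vendor_model_id[8];
      uint32_t model_revision;
      uint32_t creator_id;
      uint32_t creator_revision;
      uint16_t reserved;            /* default 0000h                           */
      uint32_t log_max_length;      /* maximum size of the pre-boot log area   */
      uint64_t log_start_address;   /* 64-bit physical address of the log area */
  } __attribute__((packed));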

The log itself is stored in the ACPI firmware and is mapped into memory at a reserved BIOS address, so it can be read by the operating system. The TCPA table nevertheless differs from the other ACPI tables in that it is "non-reclaimable": reclaimable means that once a table is no longer in use, the OS can reclaim its memory space and use it as it wishes. The TCPA table is non-reclaimable because a hibernation of the system performed by the operating system might otherwise destroy the possibility of performing the integrity checks.

The log area in the system (following the TCPA table) is composed of a variable-length series of data structures called TCPA_PCR_EVENT, each entry having this format:

 |------|--------|--------------------------------------------------|
 |Offset| Length |                    Data                          |
 |------|--------|--------------------------------------------------|
 | 0    | 4      | Event identifier (EventID)                       |
 | 4    | 4      | Length of the EventData for this entry           |
 | 8    | ?      | EventData                                        |
 |------|--------|--------------------------------------------------|

The value stored in EventID tells us what kind of information EventData holds. For example, the POST-BIOS strings have EventID=3h, and their EventData is the hashed data; for the CMOS, EventID=4h is used, but the EventData is the unhashed CMOS data.
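
Below is a hedged sketch of how an OS component might walk this log area once it has been located (for instance through the log address field of the TCPA table above) and mapped into memory. The exact termination rule belongs to the firmware, so the sketch simply stops at the end of the area or at an empty entry.

  #include <stddef.h>
  #include <stdint.h>

  struct tcpa_pcr_event {
      uint32_t event_id;    /* e.g. 3h = POST-BIOS string, 4h = CMOS data */
      uint32_t data_len;    /* length of the event data that follows      */
      uint8_t  data[];      /* EventData: hashed or raw, depending on ID  */
  } __attribute__((packed));

  size_t tcpa_count_events(const uint8_t *log, size_t log_len)
  {
      size_t count = 0, off = 0;

      while (off + sizeof(struct tcpa_pcr_event) <= log_len) {
          const struct tcpa_pcr_event *ev =
              (const struct tcpa_pcr_event *)(log + off);

          if (ev->event_id == 0 && ev->data_len == 0)
              break;                 /* treat an all-zero entry as the end */
          if (off + sizeof *ev + ev->data_len > log_len)
              break;                 /* malformed entry: stop              */

          off += sizeof *ev + ev->data_len;
          count++;
      }
      return count;
  }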

These tables can be accessed in the standard way specified by ACPI, using its drivers; INT 15h, function 0E820h, can also be useful for locating the memory blocks holding the ACPI tables. The operating system can access them while booting up:

* Call to INT 15h / function 0E820h:

 EAX = 0E820h

 EBX = "Continuation value", 0 the first time, and the returned value in the subsequent calls

 ES:DI = Buffer where the BIOS is writing the data

 ECX = Buffer length (minimum of 20 bytes)

 EDX = 'SMAP' string

The "Continuation value" is returned at EBX, ECX will hold the number of bytes written, and CF will be activated if there was an error.

 The buffer structure (on read) is:

|----------------------------------------------------------------------|
|                                Buffer                                |
|---------|------------|-----------------------------------------------|
| Offset  |  Length    |  Description                                  |
|---------|------------|-----------------------------------------------|
| 0       |  4         |  32 lower bits (base address)                 |
| 4       |  4         |  32 higher bits (base address)                |
| 8       |  8         |  Length                                       |
| 10h     |  4         |  Kind of memory block                         |
|---------|------------|-----------------------------------------------|

We then have to check the memory block type if we want to locate the tables this way: type 1 is normal memory, type 2 is "reserved", type 3 is for the ACPI tables, and type 4 is ACPI NVS memory. The TCPA table, however, is not in the type-3 space with the ordinary ACPI tables (it would be reclaimable that way); it sits in reserved space, though it remains reachable through the ACPI root tables (and, where possible, via the RSDT pointer).
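
As a small aid, here is the same 20-byte buffer entry as a C structure, with the memory types noted in the comments; this is just a restatement of the table above, not an interface definition.

  #include <stdint.h>

  struct e820_entry {
      uint32_t base_low;     /* lower 32 bits of the block's base address  */
      uint32_t base_high;    /* upper 32 bits of the base address          */
      uint64_t length;       /* block length in bytes                      */
      uint32_t type;         /* 1 = usable RAM, 2 = reserved,
                                3 = ACPI reclaimable, 4 = ACPI NVS         */
  } __attribute__((packed));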

 2.4.1.- PCR registers detail
 ----------------------------

Every PCR register has a very specific use. Here is a description of what each one is used for:

PCR[0]: Records the CRTM executable code and the system's firmware.

PCR[1]: Covers CPU microcode updates, the platform's peripheral configuration, the CMOS and ESCD (Extended System Configuration Data) if present, and the SMBIOS (System Management BIOS: information about peripherals and their serial numbers, the BIOS, physical and cache memory, slots, etc.).

PCR[2]: Option ROM code, that is, executable read-only memory from non-booting peripherals such as a graphics card. If it is a booting peripheral, its code is hashed as IPL code, not as Option ROM.

 PCR[3]: Option ROM data and configuration.

PCR[4]: IPL code, that is, boot-up code; e.g. for a hard disk the IPL would be the MBR code.

PCR[5]: IPL configuration and data; e.g. for a hard disk this would be the partition table.

PCR[6]: State transitions (ACPI events such as putting the PC to sleep, etc.)

 PCR[7]: Reserved

 2.4.2.- PCR changes reaction
 ----------------------------

One of the details that remains unexplained after all this is how the system will react if a measurement has changed, meaning the system has changed. Since the TPM only provides functions for checking whether the configuration has changed, this part is left unspecified in the TCPA specification. Wondering what the system's reaction would be (or at least what would be recommended), I e-mailed the TCPA staff, and they answered:

"How a consumer of the PCR contents (application, OS, etc.) uses the values in the PCR are up to that consumer.[...]

 The reporting of changed contents is also an option for the consumer of the PCR. The application using the PCR can hide that fact that a value is changed and go through an upgrade process or it could ask the platform user to participate in the upgrade. Again these are all options that the application designer must take into account."

So security is ultimately delegated to the user and the programmer; how well the system manages these PCR changes will determine whether security is greater or lesser, and a bad software implementation might leave room for malicious code to install itself without the user noticing.

[ 2.5.- System boot-up ]

When we push the power button, the first thing in control of our computer is the CRTM (equivalent, as I said, to the BIOS we all know). It checks whether there has been any change to itself and the firmware (PCR[0]), to the platform configuration (PCR[1]), or to the Option ROMs (PCRs 2 and 3), and then hashes (or lets the already-measured Option ROMs do it) the IPL code, e.g. the MBR, into PCR[4]. When all this is done, the IPL finally takes over execution.

The IPL, in turn, measures the IPL data into PCR[5] and the beginning of the operating system, extending the "chain of trust" to it, so the system can be considered "secure".

[ 2.6.- TPM functions ]

 2.6.1.- TPM Drivers
 -------------------

The TPM's functionality is exposed through several drivers.

The first of them uses INT 1Ah as the interface through which its features can be used, and is only available to the BIOS (which deactivates it later). At the same time, it installs a driver in the non-reclaimable ACPI memory called the "Memory Present Driver", which is used later by the operating system.

The functions that can be tunneled through the API these drivers provide are dedicated to key generation for protected storage, user authentication, hashing, and event generation and certification; they have been described in the sections above or are detailed below (subsections 2.6.4 and 2.6.5).

 2.6.2.- Functions on the BIOS driver
 ------------------------------------

These are the functions provided by the BIOS driver (others can be "tunneled" to the TPM, but these are the ones specifically implemented as INT 1Ah functions):

* StatusCheck: The TPM answers with an "I exist!" message, providing its version number and, in ESI, a pointer to the event logs in memory (which gives us another way of avoiding having to deal with the ACPI tables).

* HashLogExtendEvent: Performs a hash over the selected portion of memory, extending the result into the PCR register selected by the call and generating the corresponding log entries.

* Auto-deactivation: Via the AL=03h function, it deactivates this driver, leaving the system able to run without the TCPA subsystem's presence.

 2.6.3.- Memory Present driver
 -----------------------------

We have a structure of four 32-bit fields for the MPDriver calls (how it is passed to the TPM is something the driver developer has to deal with). They might in any case be these:

 - pbInBuf: DD ?    ; Pointer to the input data

 - pbInLen: DD ?    ; Maximum length of the input data

 - pbOutBuf: DD ?   ; Pointer to the output data buffer

 - pbOutLen: DD ?   ; Maximum length of this buffer and, on return, the number of bytes read.

 - AL: Will indicate a function selector.
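
For clarity, here is the same parameter block as a C structure; the field names come from the listing above, and since how the block actually reaches the TPM is left to the driver writer, this is a sketch rather than a defined ABI.

  #include <stdint.h>

  struct mpd_params {
      uint32_t pbInBuf;    /* pointer to the input data                    */
      uint32_t pbInLen;    /* maximum length of the input data             */
      uint32_t pbOutBuf;   /* pointer to the output buffer                 */
      uint32_t pbOutLen;   /* buffer size on entry, bytes read on return   */
  };
  /* The function selector (01h = MPInitTPM, 02h = MPCloseTPM, described
     below, plus the tunneled functions) is passed separately in AL.       */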

This driver implements three specific functions, though the other functions are "tunnelable":

* A BIOS-driver-style StatusCheck, to confirm the TPM is working as expected.

* An initialization function (MPInitTPM, AL=01h), which initializes the driver and establishes a communication channel with the TPM.

* An MPCloseTPM function to close communication with the TPM (AL=02h).

 2.6.4.- Protected Storage
 -------------------------

TCPA provides several public/private key pairs. For security reasons, no key used for encryption may also be used for signing.

The TPM basically contains one RSA key pair called the SRK (Storage Root Key), which is generated inside the TPM and cannot be extracted in any way (hardware protection). With the help of sub-keys, the TPM acts as a portal to secure data stored outside itself, accessible only by means of the TPM's features. We can think of the SRK as the root of two trees: a non-migratable one made of TPM-generated keys, and a migratable one that can only be composed of externally generated keys.

One of the data types that can be stored externally to the TPM, yet sealed by it, is other public/private keys, arranged so that they form a tree whose root is the SRK, whose nodes or branches are keys dedicated to encryption/decryption, and whose leaves are the signing keys:

 |----------------|       |------------------|       |---------|
 | Non migratable |       |   Ciphering and  |       | Signing |
 | key inside TPM | ----> | deciphering keys | ----> |   keys  |
 |----------------|       |------------------|       |---------|

The concept behind this is that the SRK protects the ciphering/deciphering keys: these intermediate keys are deciphered by the SRK, and they in turn decipher both the data they protect (which was enciphered using TPM features, with a signed hash added) and the signing key (deciphered by the cipher/decipher key it hangs from), which verifies that signed hash by performing the inverse operation on the sealed data. Finally, the TPM provides mechanisms so this encrypted data can be migrated to and shared with other platforms.
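
What follows is a purely conceptual sketch, in C, of the hierarchy just described. It implies no real TPM data structure or API; the types are mine and exist only to show the shape of the tree.

  #include <stdbool.h>
  #include <stddef.h>

  enum key_role { KEY_STORAGE, KEY_SIGNING };

  struct key_node {
      enum key_role    role;         /* inner node = storage, leaf = signing  */
      bool             migratable;   /* the TPM-generated tree is not         */
      struct key_node *parent;       /* the key that keeps this one enciphered;
                                        NULL only for the SRK, which never
                                        leaves the TPM                        */
  };

  /* To use a signing key at a leaf, each ancestor must first be deciphered
     by its own parent, all the way up to the SRK inside the TPM.            */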

An example of this system's use is authentication in multiuser systems: user keys are stored in the tree's nodes, and users authenticate through them, using the ciphering/deciphering keys for their activities and the signing keys in the leaves to prove their identity (another feature that could be positive or negative depending on the OS implementation).

 2.6.5.- New identities and the TTP
 ----------------------------------

The TPM contains a unique identifier in order to prove its own identity to others; however, this identity is never used directly, but only through a Certification Authority (CA), also called a Trusted Third Party (TTP). By having these identities certified by a TTP, TCPA tries to ensure that anyone who makes a request with a TCPA identity is the owner of a real TPM.

The idea behind this system is similar to Protected Storage: a public/private key pair dedicated to signing, along with an external certification assuring that it belongs to a TPM. In fact, the TPM will only produce an identity through an internal function known as TPM_MakeIdentity, which requires this external certification. Several identities can coexist in a TPM, but each NEEDS to be validated by one (and only one) Certification Authority.

Here we have the most serious privacy breach in TCPA, because of the intrusive steps we need to follow in order to create an identity:

* The TPM creates an internal key pair which will be used for signing as a new identity.

* It sends evidence that the TPM can be considered genuine, consisting of platform data (signed by this newly generated key), and also sends the new key pair's public key to the Certification Authority, which validates it by reversing the signing operation to verify that it comes from the signing key. Among the signed data sent to the CA there is also the CA's public key, so the CA can be sure the petition is directed to it and not to any other CA. In short, the CA checks that the data sent from a platform corresponds to a genuine TPM.

* The CA encrypts the response with that newly generated public key and sends the certificate back, thus indicating to the TPM which identity the certificate is for (of course, the one that needed certification).

So the idea is that a key is certified in such a way that when you use it, no one can know which TPM it belongs to, only that it does belong to a real TPM (so there should be no unique identification of the TPM holder, as its identity is aliased).

The TCPA FAQ fiercely defends the claim that this identity aliasing protects the user's privacy. They say there is no unique identification of the TPM holder. Sadly, this statement is blatantly false, for two reasons:

- First of all, even though identification doesn't involve the main (SRK) key (which encrypts the keys that form our new identity), identification can be performed by other methods. For example, our origin IP address can identify the owner on the net when certifying with a CA, together with internal data, or even when browsing, if the Internet access is tied to some other identification of the system requesting it.

- Even worse, the data sent to the Certification Authority about our platform, known as TCPA_IDENTITY_PROOF, is a structure based on credentials referring to our platform and the TPM. TCPA says these credentials aren't unique and can be repeated across different configurations (e.g. for the same model/version of a platform, the number representing the PlatformCred would remain the same). The TCPA specification becomes particularly obscure at this point: on the one hand we're told that there is a unique identifier involved in this CA operation, while on the other they say the data sent in TCPA_IDENTITY_PROOF is not unique.

It's better to go deeper into the specification, and there we find that, among the data sent to the CA, there are three credentials describing our system:

  * TPM Endorsement Credential (endorsementCred)

  * TPM Platform Credential (platformCred)

  * TPM Conformance Credential (conformanceCred)

And in the endorsement credential structure, a public key unique to our system is sent (the TPM's public endorsement key), so there is a way of easily and uniquely identifying which TPM we have.
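
To summarize the argument, here is a hedged sketch of what travels to the CA during TPM_MakeIdentity, based on the credentials listed above. The field names are illustrative only, not the exact structure from the specification.

  /* Illustrative only: the pieces the CA receives when certifying a new
     identity. The endorsement credential carries the TPM's unique public
     endorsement key, which is what lets the CA tell TPMs apart.           */
  struct identity_proof_sketch {
      const void *identity_pubkey;   /* public half of the new identity key  */
      const void *endorsement_cred;  /* contains the unique endorsement key  */
      const void *platform_cred;     /* platform make/model credential       */
      const void *conformance_cred;  /* conformance credential               */
      const void *chosen_ca_pubkey;  /* binds the request to one specific CA */
  };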

So the TPM can be identified by the Certification Authority when it issues a certificate for a new identity generated by a TPM, even though the identity itself is not related to the TPM except for being held enciphered under a higher key in the key tree, which can be related to that TPM. In our day-to-day browsing, or whatever activities we use that identity for, there is no unique identification of the owner, but the possibility of knowing who the identity belongs to exists, because the CA knows it.



[ 3.- Palladium Analysis ]

[ 3.1.- Palladium introduction ]

Now we come to the operating system idea that's driving us mad: Palladium.

This article started by noting that Microsoft talked to the media about some TCPA features as if they belonged to Palladium, in another of this company's marketing games. If we look at Microsoft's press releases on Palladium, the "big advantages" they describe for their still-uncoded operating system are simply TCPA characteristics: the file security they describe (based on public/private key pairs from TCPA), and even the hardware/software component trustworthiness they offer, are also part of TCPA.

Microsoft also tells us some things that are absolutely false. They say, for example, that "Trusted code runs in memory that is physically isolated, protected and inaccessible to the rest of the system, making it inherently impervious to viruses, spyware or other software attacks", talking about their kernel code. We haven't seen anything in TCPA about physically isolated memory, but the explanation for this is easy: Microsoft, as we'll see, is lying again.

[ 3.2.- Palladium's kernel implementation ]

Microsoft tells us about two basic components at the kernel level:

- TOR: The component that would control system calls made by programs running under Palladium and store critical data from those programs. It is what would protect the memory zone where the kernel is kept, as well as the encrypted data held for applications and user information. That is, this "Trusted Operating Root" (that's what the acronym means) is just a part of the operating system kernel.

- Trusted Agents: Programs executed in user mode but inside what MS calls the "trusted space". They would use TOR functions to encrypt data and store it in kernel space, where it could only be retrieved by those same agents. A trusted agent's integrity would be checked by the TOR hashing the zone of the application performing the system call, to assure it is being used correctly; this also applies to system calls for memory management and any other critical system function. That is, Microsoft calls any caller of the kernel API a "Trusted Agent".

Up to this point I've used Microsoft's language. Now I'll use common language: the TOR is just an ordinary kernel that, of course, lives in a part of memory protected from user processes, but that memory is not physically isolated from common memory; it is protected by standard memory protection, like EVERY kernel. "Trusted Agents" is just a lot of babbling that only means "a part of a program that can call the kernel API; the kernel hashes that part so it can assure its integrity". The system structure remains the same as the old one, communicating by messages (the old Minix style that led to Windows NT and so on). The system works on a scale of privileges, these "Trusted Agents" being more privileged when calling the kernel's API than ordinary user programs.

So Microsoft doesn't bring us anything new in security, and Palladium will still have viruses. When someone finds a vulnerability and makes the processor go ring 0, it will be as ring 0 as it can be now, and no TOR super-strength mechanism will prevent that. The other thing Microsoft says about this OS's security is that user-space processes can't access the TOR because it's in protected kernel memory, and so they can't access the private data held there. This data can only be retrieved by a certified entity that has its own data space registered with the TOR.

Now we've learnt that Palladium will have a kernel memory and a user memory (which they present as a great new feature :-) ), the TOR being the one who holds the key to the user's private data stored in kernel space. So, apart from the TCPA-assisted hashing of the system-calling code and the data storage in kernel space (which could be revealed if kernel space is reached by a rogue program), Palladium doesn't bring us anything new, security-wise.

That is all the innovation Microsoft has to offer in Palladium as far as security goes; but as little as they offer there, they take away that much more in privacy. And that's what we see when we start looking at the "external TORs".

[ 3.3.- The external TORs ]

To give the user what is, IMHO, a false sense of security, Palladium will use external TORs. There will be external entities that we are supposed to trust to authenticate parts of the operating system, so we know they have not been modified. They would take OS/application data, hash it, and tell us whether it is safe.

Now there are two possibilities. The first is that only a very specific part of our data can be sent to the entities that certify our system integrity (a far more perverted version of the CAs we talked about in TCPA!). This could be trusted if Microsoft's code were open source but, unfortunately, it is not and will not be. The other possibility is that the organization itself decides which data is sent; in either case, we wouldn't know what we're sending to these external entities that are meant to look after our "distributed security".

It doesn't stop here, because Microsoft's target isn't user security but pretending that it is. The real importance of the external TORs lies in what has been singled out as the real evil of this operating system, something that could bring us to a reality like the one Richard Stallman described in his "Right to Read" story. This is where Palladium's application of DRM, Digital Rights Management, appears.

[ 3.4.- Digital Rights Management ]

Now we reach DRM, a system made by Microsoft consisting of a series of protocols designed so that a user can only watch a movie, read a book, hear a song or read an e-mail if he authenticates himself. Restrictions can be established so that this access is time-limited, or so that a specific number of views is allowed before the content is erased from our hard disk, and so on. DRM is a system whose main target is defending copyright schemes. But DRM doesn't stop there: it includes ways to monitor "intellectual property accesses", as they frighteningly call them.

In fact, DRM's own description says its two main objectives are managing reproduction permissions for anything dealing with electronic property, and monitoring how this property is used (even when the use is legal). That is, they explain that the "tracking" system built into DRM knows how many times, and when, we watch a specific movie we've bought permission to play 10 times. Another example Microsoft proposes is "preview" and "free" (no-money) versions of electronic documents; that is, even previews and free versions of things (like watching the first 5 minutes of a movie or the first 15 pages of a book) are also controlled and monitored.

DRM is already at work in Windows Media Player, where this technology is meant to be fully functional even before Palladium. Files are encrypted and Microsoft holds the key that opens them, so the user needs to buy a license which holds data such as how many times the video can be played or the time window in which we can view it.

Microsoft's idea is that Palladium will let them use DRM protocols based on hardware keys. Right now they use software, and that is not good enough for them. With Palladium, they can act as an external TOR. This is not fully specified (though Microsoft wants DRM to work with Palladium through the external TORs), but it's easy to deduce how it would happen:

- Our average Joe wants a movie, so Joe opens Palladium Media Player and sends a public key from the TPM with his request (which would also be encrypted, as it holds bank data), so he can watch Matrix 4 for one day on his computer. The server registers that Joe has rented Matrix 4 for a day, and transfers the corresponding money to the copyright holders.

- Now Joe wants to watch the movie he just rented. When he runs the Palladium Media Player, it can behave in two different ways (the second one is more realistic, as it uses the external TORs):

A) Palladium Media Player checks the license the TOR has put inside the OS kernel (I wonder how big the Palladium kernel will become after some time of use; will it look like the registry?) and checks whether he's allowed to watch this file (checking how many times he has watched the video, the system date, etc.).

B) Even worse, the Palladium Media Player connects to the place where Matrix 4 was bought for one day and tells it the user wants to watch the movie; the license is checked there and, if it is valid, a decryption key for the video file is sent to the Palladium Media Player and deleted once the movie has ended. The server now knows that Joe tried to watch the movie, when he did it, whether his license was valid and whether the operation was performed successfully (and likewise with books, music...).

So the aim of Palladium isn't only to provide stronger protection for intellectual property: there is a great privacy breach in all this. Today the average Joe uses Outlook and Internet Explorer just because they're installed with Windows. The future Palladium user might use Palladium Media Player, Palladium Music Player, document-reading programs (such as Word), and other software standardized and distributed with the operating system itself. Then, if I go to Amazon and get a trial copy of a book that only lets me view 10 pages, Amazon will monitor via DRM how many times and when I viewed those pages (which also makes me wonder: will public libraries disappear in the future? how would they deal with electronic data that can be copied?).

[ 4.- Conclusions ]

Here I become more subjective (though I expect the reader has formed his own opinion on the matter by now). At first glance, it would look as if TCPA were mostly good and the complete evil were Palladium. TCPA is operating-system agnostic, it's an open standard, and it doesn't even consider whether a device is "TCPA approved" or not.

Even so, TCPA has a hidden and terrible face. It assumes we can rely on the Certification Authorities, who would issue the certificates that allow us to identify ourselves with the semi-anonymous new identities a TCPA system can create. The problem is that these CAs can identify anyone who makes a certification request and, in the end, relate a user's actions to a unique identity.

That is probably the worst part of TCPA; we are told that certifying this way is "necessary". We need to trust these authorizing entities that DO know who we are, and the TCPA system relies on this trust. The fight against TCPA, if it finally arrives, might focus on fighting the Certification Authority idea. If TCPA becomes a standard, an interesting opposing action would be creating autonomous and anonymous CAs that break the thread relating a user to his TCPA identity (though we would still have a big problem if, for example, a Microsoft application required a Microsoft certificate: every user of that application would be easily identified, putting an end to that way of disobeying the CAs). The only way out would be open source systems acting as CAs that destroy every relationship between the identification and the system owner.

What does this mean? My personal point of view is that TCPA+Palladium doesn't mean we would be identifying ourselves to everyone (for example, while browsing). This identification could only be achieved by the Certification Authorities, companies which would hold the key to ourselves. They could identify us, but only a few will have the privilege to do so.

Even worse is the Palladium OS, because Microsoft will probably still be the most common OS provider if TCPA arrives.

We might then face terrible scenarios such as the indiscriminate use of DRM as explained in this article and in the Palladium FAQs. Protocols could be used, in proprietary systems, to identify us if messages were signed with a TPM identity. That is, an application could work in such a way that, when sending a message to a server over the Internet, it hashed and signed the message with that identity; this would authenticate the client, but... the server would also know his every movement, and could relate this authentication in message protocols to his personal TPM through an intermediate CA, or by whatever other method Palladium preferred, since in the end Palladium's code won't be open.

[ 5.- Appendix A: Bibliography ]

In this section you'll find some of the web pages that hold more information on TCPA and Palladium, official and unofficial, for and against.


The materials used for this article can be found at these addresses:

TCPA Standards

http://www.trustedpc.org

ACPI Standards

http://www.acpi.info/index.html

ACPI Functions

http://www.heise.de/ct/english/98/20/166/#Table1

TCPA/Palladium FAQ by Ross Anderson

  http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

And the most fun of all: Microsoft's own material. Note that the press releases and FAQ contradict each other every now and then; Microsoft refuses to give further information on their closed system called Palladium, and while they often act as if they were about to become a hardware manufacturer, they sometimes acknowledge that it's really a TCPA implementation, though this is usually kept quiet for the sake of making Palladium sound bigger. Another thing I fear right now is that Microsoft wants to make its own proprietary version of TCPA, as the latest modifications to their FAQ suggest... but that would lead to the biggest computer-business fight that has ever existed o_O

A Business Overview (technical)

  http://www.microsoft.com/PressPass/features/2002/jul02/0724palladiumwp.asp

Microsoft's fun/interesting press releases on Palladium

  http://www.microsoft.com/presspass/Features/2002/Jul02/07-01palladium.asp

Palladium FAQ

  http://www.microsoft.com/technet/security/news/PallFAQ2.asp?frame=true#g

[ 6.- Appendix B: Greetings ]

To the Kuro5hin readers, and especially to Randall Burns, who rewrote the introduction to this article in English ;)



TCPA and Palladium technical analysis | 73 comments (42 topical, 31 editorial, 0 hidden)
What RMS thinks (4.28 / 7) (#1)
by r00t on Sun Oct 27, 2002 at 04:20:54 PM EST

RMS (Richard Stallman, founder of GNU and the GPL) has his own take on the technology. Scary read.

-It's not so much what you have to learn if you accept weird theories, it's what you have to unlearn. - Isaac Asimov

RMS is indeed scary (2.24 / 25) (#5)
by theboz on Sun Oct 27, 2002 at 04:48:54 PM EST

I wouldn't take anything that crackpot commie says to heart. I mean, crap like this is simply trolling:

Imagine if you get an email from your boss telling you to do something that you think is risky; a month later, when it backfires, you can't use the email to show that the decision was not yours. "Getting it in writing" doesn't protect you when the order is written in disappearing ink.

He has no evidence that this scenario is ever going to be the case, nor that an employer could simply get one of the network admins to log onto your work computer and delete the email now. RMS is just a troll, and a worse one than E r i c. In this story he took what was a valid point, stretched the truth to the point of absurdity, and slashbots and other morons who listen to him are all up in arms at RMS's trolling. He ought to be ashamed of himself, and anyone who pays serious attention to him ought to be ashamed as well. Yeah, he might have wrote some strange software thirty years ago, but he's done nothing but harm to computer industry and the reputation of us all since then. If the guy isn't a crack addict, then he's even scarier than I first imagined.

Stuff.
[ Parent ]

It shouldn't even be an issue (4.40 / 5) (#16)
by r00t on Sun Oct 27, 2002 at 08:16:31 PM EST

He has no evidence that this scenario is ever going to be the case

It's true that he doesn't have any evidence it will be used in this way, however, the technology creates the possibility, a very real possibility. Why not stop it before it starts so it is no longer an issue?

-It's not so much what you have to learn if you accept weird theories, it's what you have to unlearn. - Isaac Asimov
[ Parent ]

Calm Down and Learn Some Respect (4.76 / 21) (#17)
by ph317 on Sun Oct 27, 2002 at 09:08:46 PM EST


This world needs fringe people to make it go 'round.  Without RMS and ESR who would balance Ballmer and Gates?  That's a bit too simple, but you know what I mean.  In all sorts of things, there's a decent middle, and there's extremists tugging at either side, and they're a necessary part of balance.  If you want to have reasonable rights and freedoms, you need some nutcases way out on the fringe arguing for things that go too far for your tastes and seem absurd to make your point look acceptable.

On top of that, I take offense at your equating him to a mere troll and saying "He might have wrote some strange software thirty years ago, but he's done nothing but harm to[sic] computer industry and the reputation of as all since then."

What the fuck are you thinking with that sentence?  The establishment of fundamental projects like GCC (which RMS had everything to do with) and glibc (not sure on his involvement there, but it's still GNU) were crucial to the open source revolution we had down the road.  Whether you want to hear RMS whine or not, you have to respect the fact that he built with his own two bloody raw hands from scratch the foundation upon which all we know and love now exists.  Without FSF/GNU's foundation work covering everything you need that's unix-like with the exception of a kernel and the kitchen sink we wouldn't have had the kind of environment that fostered the development of other projects like Apache, Perl, etc...

So unless you're just wrapping up the finishing touches on your own compiler suite, C library, and set of build tools and command-line tools, which you're releasing for the general good of all programmers tomorrow, which I doubt is the case, you're nobody to talk down to this man and you should just shut the fuck up.

[ Parent ]

You're wrong (1.00 / 3) (#24)
by Stick on Mon Oct 28, 2002 at 12:55:36 AM EST

I'm a total nutcase. I could be worse though. I might be one of those maniacs who think they're me.


---
Stick, thine posts bring light to mine eyes, tingles to my loins. Yea, each moment I sit, my monitor before me, waiting, yearning, needing your prose to make the moment complete. - Joh3n
[ Parent ]
No kitchen sink??? (none / 0) (#48)
by pde on Mon Oct 28, 2002 at 06:17:42 PM EST

Without FSF/GNU's foundation work covering everything you need that's unix-like with the exception of a kernel and the kitchen sink

I'm sorry, but I beg to differ. RMS became famous for his production of a kitchen sink.

Visit Computerbank, a GNU/Linux based charity
[ Parent ]

Hah (none / 0) (#61)
by ph317 on Tue Oct 29, 2002 at 08:50:05 AM EST


Yeah I hate emacs too.

[ Parent ]
yuo=fagit (3.41 / 12) (#20)
by 0xdeadbeef on Sun Oct 27, 2002 at 11:36:10 PM EST

With Palladium, it is possible, and even likely, that there will be email systems that prevent you from printing or archiving your email. The only reason such things aren't common now is that they are easy to defeat. With Palladium, you can't patch your operating system or intercept system calls, and it will favor widespread adoption of DRM. DRM systems usually provide misfeatures such as "can't print", "read once", "can't copy", etc.

For liability reasons, corporations will make email ephemeral. It is the automatic paper shredder. And if the head honchos use it to protect their butts from the next Enron, you can bet your ass that your dickhead of a boss will use it to cover his own mistakes.
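Purely as illustration, and not from any Microsoft specification, the restrictions named above amount to a policy record that a trusted mail client would be expected to enforce. A minimal Python sketch, with entirely hypothetical field names:

  from dataclasses import dataclass

  @dataclass
  class MessagePolicy:
      allow_print: bool = False    # "can't print"
      allow_copy: bool = False     # "can't copy" to the clipboard or disk
      max_opens: int = 1           # "read once"
      retention_days: int = 30     # the automatic paper shredder

  # An ordinary client could simply ignore these flags; it is the platform's
  # attestation that would let the sender refuse to deliver to such a client.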

[ Parent ]

No I'm not (1.50 / 6) (#23)
by Stick on Mon Oct 28, 2002 at 12:54:09 AM EST

I'm GNU/scary. Please correct this.


---
Stick, thine posts bring light to mine eyes, tingles to my loins. Yea, each moment I sit, my monitor before me, waiting, yearning, needing your prose to make the moment complete. - Joh3n
[ Parent ]
Why the gratuitous slam? (4.20 / 5) (#41)
by phliar on Mon Oct 28, 2002 at 04:13:49 PM EST

Don't forget, as someone (I forget who) said, all progress comes because of unreasonable people. Life today would really suck if it weren't for all the "lunatic fringe" people of the past who were mad as hell and were not going to take it any more.
Yeah, he might have wrote some strange software thirty years ago
And your claim to fame is...? (Aside: you should have used written instead of wrote.)

I still use the "strange software" he wrote and is still writing. In addition he has influenced many other people who wrote the software I use every day. Perhaps you were just trolling and I'm being hopelessly naïve by responding.

BTW, why should a few pictures of him make anyone adjust his or her viewpoint towards him to any degree whatsoever? I have talked to him in person, and I do not get along with him; his personal habits are, shall we say, unsettling. Makes no difference to me, I don't need him to be my friend. His views are well thought-out, consistent, and he has the courage of his convictions.


Faster, faster, until the thrill of...
[ Parent ]

You're flat out wrong (4.00 / 2) (#43)
by greenrd on Mon Oct 28, 2002 at 04:47:18 PM EST

He has no evidence that this scenario is ever going to be the case

Uh, yes he does. MS have boasted that Digital Rights Management could be used to control the dissemination of emails. It's a real example they gave! They've used it to try and distract people's attention from the fact that DRM will be primarily used to take away their freedoms, something that is of no benefit to the consumer at all.


"Capitalism is the absurd belief that the worst of men, for the worst of reasons, will somehow work for the benefit of us all." -- John Maynard Keynes
[ Parent ]

Ad Hominem? (4.80 / 5) (#44)
by Lagged2Death on Mon Oct 28, 2002 at 05:15:08 PM EST

RMS said: Imagine if you get an email from your boss telling you to do something that you think is risky; a month later, when it backfires, you can't use the email to show that the decision was not yours. "Getting it in writing" doesn't protect you when the order is written in disappearing ink.

TheBoz said: He has no evidence that this scenario is ever going to be the case, nor that an employer could simply get one of the network admins to log onto your work computer and delete the email now.

Virtually no one who has spent any significant amount of time working in Corporate America would doubt RMS's scenario for a second. The unwritten rules that actually govern corporate behavior have a lot more to do with what's politically feasible than what's right, fair, just or legal.

Your suggestion that an exec could get an admin to do his evil bidding and erase e-mails is missing the point. The exec is trying to cultivate deniability; dragging a witness into things would be a pretty bone-headed move. Furthermore, depending on how the current e-mail system is set up, the admin may not be able to delete copies of the e-mail that have already been transmitted to client machines. But a DRM-based e-mail system might change that.

Frankly, given recent events (i.e., Enron, WorldCom, Microsoft in court) I'm surprised anyone would doubt that a corporation would use technology like this to shield itself from blame.

Do you have an actual argument, or do you just despise RMS?

Starfish automatically creates colorful abstract art for your PC desktop!
[ Parent ]

George Bernard Shaw (4.00 / 1) (#45)
by epepke on Mon Oct 28, 2002 at 05:30:13 PM EST

He's the one who said that.

Also worth noting is Frank Zappa's "Without deviation, progress itself is impossible."


The truth may be out there, but lies are inside your head.--Terry Pratchett


[ Parent ]
Fuck labels (4.00 / 1) (#49)
by r00t on Mon Oct 28, 2002 at 07:45:52 PM EST

Commie or Capitalist, Christian or Muslim, Black or White. I couldn't care less. There is only right and wrong. Stallman is right and this technology is wrong.

-It's not so much what you have to learn if you accept weird theories, it's what you have to unlearn. - Isaac Asimov
[ Parent ]

Except, of course... (none / 0) (#59)
by tjost on Tue Oct 29, 2002 at 04:24:44 AM EST

"Right" and "wrong" are also labels.

[ Parent ]
Busted! (none / 0) (#60)
by r00t on Tue Oct 29, 2002 at 06:13:54 AM EST

True

-It's not so much what you have to learn if you accept weird theories, it's what you have to unlearn. - Isaac Asimov
[ Parent ]

no they're not (none / 0) (#68)
by werner on Fri Nov 01, 2002 at 07:32:17 PM EST

'right' and 'wrong' are states. I am often right, and also wrong, but I will always be white etc. Indeed, people define 'wrong' and 'right' very differently, but 'white', 'black' tend to stay the same. Very different indeed.

[ Parent ]
For people who want even more information :) (5.00 / 1) (#30)
by jacoplane on Mon Oct 28, 2002 at 05:40:25 AM EST

http://www.cryptome.org/palladium-mit.htm

More information? (none / 0) (#51)
by killthiskid on Mon Oct 28, 2002 at 07:59:17 PM EST

Good lord? Even more info?

Here's the thing, though... this is such a massively complex issue that wanting to understand it all is out of reach of almost everybody. Most, if not nearly all, of us must rely upon someone else to tell us what it does. So we are stuck with Stallman's views, Microsoft's marketing, and whatever else those in the know are willing to say about it.

What makes me nervous is that such a large amount of potential action and information is moving through such a small pipe, and even then in a watered-down form.

One thing seems somewhat sure: the potential uses and the potential abuses. The hard question is where we will actually end up. I guess that is the usual hard prediction: what will the future hold?



[ Parent ]
goddamn they're crazy :) (none / 0) (#53)
by wintah on Mon Oct 28, 2002 at 09:27:18 PM EST

a. Curtained memory. The ability to wall off and hide pages of main memory so that each "Palladium" application can be assured that it is not modified or observed by any other application or even the operating system.

Mmmmm... this is the only new thing the abstract tells us... makes me wonder, have they gone crazy? OK, it's good that they plan to forbid one user process from accessing another (great idea hahahah) but... hiding user processes from the operating system? wha-wha-what????
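As a toy model only (real curtained memory is enforced by hardware; nothing here comes from the Palladium design), the access rule being described looks roughly like this:

  class CurtainedMemory:
      def __init__(self):
          self._pages = {}                 # page number -> (owner, data)

      def write(self, owner, page, data):
          self._pages[page] = (owner, data)

      def read(self, caller, page):
          owner, data = self._pages[page]
          if caller != owner:              # neither other apps nor the OS may peek
              raise PermissionError(f"{caller} may not read {owner}'s page")
          return data

  mem = CurtainedMemory()
  mem.write("drm-player", 0, b"decrypted frame")
  assert mem.read("drm-player", 0) == b"decrypted frame"
  # mem.read("os", 0)   # would raise PermissionError: even the OS can't look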

[ Parent ]
hiding user pages (none / 0) (#64)
by coffee17 on Tue Oct 29, 2002 at 04:49:48 PM EST

that way, a user can be sure that their sysadmin isn't spying on them. Or more likely, so that one can't simply modify their kernel to take the decrypted bits from a DVD2 (or whatever will come out) and pipe them over to transcode.


-coffee


[ Parent ]

that's it (none / 0) (#71)
by wintah on Sat Nov 02, 2002 at 10:58:01 AM EST

Yes, protecting user processes from the admin can only have that objective... though at the same time it's an absurd security policy: rogue programs might go undetected in that kind of system... so unless Microsoft makes the "ultimate secure system with no way to exploit it" (which I really doubt; just look at their ultra-secure X-Box :-) ), Palladium will be even less secure than typical Windows, as the sysadmin's control over the system would decrease...

[ Parent ]
Open Source, competition and MS (none / 0) (#39)
by Quila on Mon Oct 28, 2002 at 10:24:27 AM EST

How does this impact open source software? Will this raise a barrier to entry for other OSs, especially free ones that wouldn't be able to pay licenses and royalties? Can MS use this as a lever for other monopolistic practices?

Open standard (4.00 / 1) (#42)
by wintah on Mon Oct 28, 2002 at 04:15:39 PM EST

TCPA is an open standard (the specifications are public, so any hardware developer can use them and produce TCPA-compatible hardware) and there are no royalties for it, so - at least as TCPA stands now - there's no problem for Linux or any other OS to be implemented on that platform.

If it weren't for all the Certification Authority machinery in TCPA it might be considered "harmless", since the role CAs play in harming privacy is the main problem TCPA by itself can bring... There's also the likelihood that most people who use TCPA will use Palladium, just as most people who use a PC today work with Windows; still, there's no visible threat to Linux and Open Source from the TCPA system alone.

[ Parent ]
"Open standard" is meaningless today (none / 0) (#54)
by kcbrown on Tue Oct 29, 2002 at 12:39:01 AM EST

TCPA is an open standard (the specifications are public, so any hardware developer can use them and produce TCPA-compatible hardware) and there are no royalties for it, so - at least as TCPA stands now - there's no problem for Linux or any other OS to be implemented on that platform.
Says you and the TCPA ... for now.

Are you willing to bet that there are no patents pending on any of this stuff?

Yeah, that's what I thought.

We should all be fully aware of submarine patents by now, don't you think?

[ Parent ]

No patents pending, but... (none / 0) (#70)
by wintah on Sat Nov 02, 2002 at 10:54:34 AM EST

Mmm, IMHO there are no submarine patents (though ACPI could have some), but the problem could lie in specific hardware companies' implementations: for example, Intel's LaGrande (which seems to be Intel's advanced TCPA implementation designed for Palladium) might have patented characteristics, and that could bring the problems you talk about... TCPA, a joint effort by a big group of companies, is the basis for all this; but different "advanced implementations" can be derived from that basis, and that's where problems can arise.

[ Parent ]
I know MS has patents, plus killing Linux (5.00 / 1) (#55)
by Quila on Tue Oct 29, 2002 at 02:36:05 AM EST

I'm sure royalties on the patents will be on the "RAND" (reasonable and non-discriminatory) basis, but even that is a killer for free software and most open source.

Even worse, I'm wondering how TCPA affects the very foundation of open source (if I've read this right): if you're playing restricted content, everything below the player has to be trusted. This means the OS has to be trusted. The OS will never be trusted if the user can make changes and recompile, because the user could disable the restrictions in the source. Bye bye any hope of Linux on the desktop, because no one will be able to watch anything.
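To see why recompiling breaks the chain, here is a rough sketch of the measurement idea: a simplified, hypothetical model of TPM-style PCR extension, not the actual TCPA command interface.

  import hashlib

  def extend(pcr: bytes, component: bytes) -> bytes:
      """TPM-style extend: new PCR = SHA-1(old PCR || SHA-1(component))."""
      return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

  # Placeholder stand-ins for the real measured binaries.
  boot_chain = [b"BIOS image", b"boot loader", b"kernel image", b"player binary"]

  pcr = bytes(20)                 # PCRs start out at all zeros
  for component in boot_chain:    # each component is measured before it runs
      pcr = extend(pcr, component)

  print(pcr.hex())                # a recompiled kernel yields a different value,
                                  # so content keyed to the expected measurement
                                  # would no longer be released to the platform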

Does anyone really think MS didn't, in part at least, have killing Linux in mind when they thought of this?

[ Parent ]

It would depend on hardware implementation (none / 0) (#69)
by wintah on Sat Nov 02, 2002 at 10:51:02 AM EST

The OS needs to be trusted, but the "protection policy" is implementation-dependent. Nothing is specified in that regard, as section 2.4.2 of the article explains: it all depends on the specific TCPA hardware implementation you use. It could stay silent about a change to the OS, raise a pop-up window alerting you that there has been a change, or use any other method the specific hardware company comes up with...

Still, I don't believe it will be a flat "no, you *can't* change it", as even Palladium will need security updates; if the hardware didn't allow updates, that would kill the system... In any case, we should keep an eye on how hardware manufacturers implement TCPA (if it ever arrives) and how that can affect open source and dynamic software/OSs...

[ Parent ]
Paranoid **AAs (none / 0) (#73)
by Quila on Thu Nov 07, 2002 at 09:22:45 AM EST

I understand the aspect of TCPA as you stated it, allowing for OSS. But what I'm referring to is that the content producers' media files will have controls embedded, relying on the player software and the operating system to honor those controls (whereas GhostScript, for instance, can ignore the "no print" tag in a PDF). I'm sure they will rely on TCPA to confirm that a particular system is "clean" and can play their content.

But if a programmer can alter Linux or any open source OS so that it ignores controls and allows a bitstream copy, then the content producers will not allow their content to be played on those systems. The only possibility for Linux would be a signed, binary-only distro, but that would be against the letter and spirit of the GPL.

[ Parent ]

TCPA ? (5.00 / 2) (#46)
by kaltan on Mon Oct 28, 2002 at 05:39:25 PM EST

I read the teaser box: no explanation of what TCPA stands for. Then I read the introduction; again, no explanation of this four-letter acronym. I read some further, then I quit.

Timeless Confusing Peculiar Acronym ?



Re: TCPA? (5.00 / 1) (#47)
by YetAnotherDave on Mon Oct 28, 2002 at 06:07:12 PM EST

Trusted Computing Platform Alliance

from the FAQ:
---
So why is this called `Trusted Computing'?

In the US Department of Defense, a `trusted system or component' is defined as `one which can break the security policy'. This might seem counter-intuitive at first, but just stop to think about it. The mail guard or firewall that stands between a Secret and a Top Secret system can - if it fails - break the security policy that mail should only ever flow from Secret to Top Secret, but never in the other direction. It is therefore trusted to enforce the information flow policy.

A Trusted Third Party is a third party that can break your security policy.
---
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

[ Parent ]

Identity in a nutshell (5.00 / 1) (#50)
by imrdkl on Mon Oct 28, 2002 at 07:51:13 PM EST

  1. The TPM has a built-in RSA key generator, presumably using the RNG (random number generator), along with hardware-based RSA code, presumably ASIC-based. Alternatively, a smart card may be used to create the root (SRK) key.
  2. The new owner authenticates to the TPM.
  3. The TPM builds its own internal key hierarchy and creates a new identity for the new owner.
    1. The TPM generates the root (SRK) key, either from a symmetric shared key agreed with the smart card or from its own RSA key generator.
    2. The TPM now reads the fully populated PCR registers, which will have at least some of the owner-identity bits laid onto them with the hashing tool.
    3. The TPM now generates, and the SRK protects, the cipher/decipher keys.
    4. The TPM now finally creates the signing (end-user) key for the owner; these keys are protected by the cipher key.
    5. TPM_MakeIdentity creates a certreq for the new owner signing key, encoding the DN in a manner agreed with a friendly neighborhood CA.
    6. The TPM sends the request from its new signing pair to the CA, along with at least one unique identifier from the machine, although that's not supposed to have anything to do with the owner's own keypair. The CA, at least, can't positively identify the user, only the machine.
    7. The CA issues a certificate, with the new key and other bits, to be installed as an identity in the TPM.
    8. Now the new cert may be stored and reused on this machine any time the owner identifies to it.
  4. Repeat steps 2 and 3.4 - 3.8 for each additional user of the machine.
  5. Oh yeah, once you get this far, you can disable the nosy little bugger as well. Except that, well, once you get this far, why bother...? It does appear that most reasonable doubt could be eliminated in an investigation of the usage of a machine, because of all the logged and signed values.

Is that a fair summary? This stuff is dense (a rough software sketch of steps 3.1 - 3.8 follows below), so you might want to check back on this article over the next few months to see if there are any new questions. :-)
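A very rough software simulation of steps 3.1 - 3.8, assuming the third-party Python "cryptography" package. Real TPM calls (TPM_MakeIdentity and friends) stay inside the chip; everything here is a hypothetical stand-in meant to show the shape of the flow, not the actual TCPA command set.

  import hashlib
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  # 3.1 -- root (SRK) key; 3.4 -- the owner's signing (identity) key.
  # (3.2/3.3: in a real TPM the SRK would wrap the storage keys; omitted here.)
  srk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  identity_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # 3.5 / 3.6 -- build a "certreq": the identity public key plus a unique
  # platform identifier, sent to the CA. Note that the request identifies
  # the machine, not the person sitting at it.
  identity_pub = identity_key.public_key().public_bytes(
      serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
  platform_id = hashlib.sha1(b"some unique machine identifier").digest()
  certreq = identity_pub + platform_id

  # 3.7 / 3.8 -- the CA signs the request, issuing an identity certificate
  # that the TPM stores and reuses whenever this owner identifies to it.
  ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  identity_cert = ca_key.sign(certreq, padding.PKCS1v15(), hashes.SHA256())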

Fixing a Computer w/ Palladium (none / 0) (#52)
by bjlhct on Mon Oct 28, 2002 at 08:20:46 PM EST

Is easy.
*

kur0(or)5hin - drowning your sorrows in intellectualism

It is as simple as stealing profit. (2.00 / 1) (#56)
by jforan on Tue Oct 29, 2002 at 02:52:57 AM EST

The RIAA and the MPAA (and other content creators/holders) are in Microsoft's sights.  Microsoft wants everyone who makes any media to make it through their software.  That way, they can take a whole shit load of "small cuts" (which may be large to you and me), which businesses are happy to pay because it is not as big a cut as they are presently used to.

Then, microsoft's competition (riaa/mpaa/etc) will suffer.

Microsoft is just following the money - and they are ten times smarter and more efficient than their content-rights-grubbing competition.  Yes, this technology may not be perfect for the end user, but it is a (WAY) better deal than what is out there now, and whatever pains it brings the end user, it will bring 100 times the pain for those who are reaping profits from keeping the artists' and media-creators' works from ever reaching my computer (which is very reachable) presently.  And I will probably take their offers (assuming they make decent ones), both as an artist and as a consumer.  

This technology will cut down some of the differences between business capabilities and consumer capabilities.

The main thing I will be watching out for is Microsoft locking me into a technology.  They may capture a bunch of people, but they will not get me.

long live competition.

Jeff

I hops to be barley workin'.

Heh. (none / 0) (#58)
by i on Tue Oct 29, 2002 at 03:45:38 AM EST

They may capture a bunch of people, but they will not get me.

DReaM on.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]

Stopped reading at the "FAQA" (1.00 / 1) (#57)
by Rogerborg on Tue Oct 29, 2002 at 03:00:38 AM EST

c0z j00 R 2 1337 4 m33

"Exterminate all rational thought." - W.S. Burroughs

Article is just... not good (none / 0) (#62)
by Silent Chris on Tue Oct 29, 2002 at 12:29:55 PM EST

I read through this article, and while [some] of the technical merits are sound, the author is really driving the story with opinion and politics.  That's fine, if this was in the op-ed section.  It's not.

I would suggest getting more opinions.

that's just how it is (3.00 / 1) (#65)
by florin on Wed Oct 30, 2002 at 12:07:25 AM EST

Perhaps that's because the whole Palladium issue is about politics. ;-)

[ Parent ]
Has anyone considered the possibility (3.00 / 3) (#63)
by mingofmongo on Tue Oct 29, 2002 at 04:16:26 PM EST

That we don't need all this excessive security? What does anyone need with all this? A guy who practices with lock picks can break into most houses in less than a minute, and with a big hammer it's even faster. Computers are already far more secure than houses, even with windows.

Why are people so worried about computer security, when they routinely hand their credit card to minimum-wage workers at a store that doesn't shred anything?

I say we need less security. If any security is going to work, it needs to be a special-case measure. Security that people don't think about is about as secure as an unattended tent in the woods. People need to know they are doing a security thing when they are doing it. And the rest of the time things should be wide open.

You know you are not supposed to eat food you find on the ground, right? You learned this, and act on it, don't you? There is no protective device that keeps you from eating food you find on the ground, and none needed. Why can't the same logic apply to sending confidential data by unencrypted links?

I feel an article coming on...

"What they don't seem to get is that the key to living the good life is to avoid that brass ring like the fucking plague."
--The Onion

Sigh (none / 0) (#66)
by imrdkl on Thu Oct 31, 2002 at 04:18:06 PM EST

I was afraid of this. You people chased the lad away with all your criticism.

How would this work in practice? (none / 0) (#67)
by jeti on Fri Nov 01, 2002 at 08:25:34 AM EST

Does anyone know how this is supposed to look in practice?

Let's say I want to develop and distribute a shareware program for Palladium-PCs.

Would I be able to do development at all when Palladium is activated?
What hoops would I have to jump through before being able to distribute?
Are we talking about signing oneself with a certified private key? Or would I have to submit binaries or source for certification?

Thank you, Jens


Answers. (none / 0) (#72)
by i on Sun Nov 03, 2002 at 04:43:44 AM EST

Would I be able to do development at all when Palladium is activated?

Probably yes.

What hoops would I have to jump through before being able to distribute?

Probably none.

Are we talking about signing oneself with a certified private key? Or would I have to submit binaries or source for certification?

You will be able to sign your app with your own key, but nobody else will trust it (because nobody knows you). If you are writing a networked game, or distribute your own "content" and a player that can play it, this is not a problem, because you only need one piece of your software to trust another piece of your software.

OTOH if you want to play, say, Disney's content, you will somehow have to convince Disney that your player is trustworthy so that they'll add your public key to their database of trusted 3rd-party software. This will probably involve a certification authority of some sort (and you will have to pay for its services).
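A minimal sketch of that trust decision, with entirely hypothetical names (nothing here is a real Palladium or Disney interface): the content owner keeps a list of public keys of players it has certified, and a self-signed key simply isn't in it. Assumes the third-party "cryptography" package.

  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  # The shareware author signs the player with a self-generated key.
  my_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  my_pub = my_key.public_key().public_bytes(
      serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
  player_binary = b"shareware player 1.0"
  signature = my_key.sign(player_binary, padding.PKCS1v15(), hashes.SHA256())

  # The content owner's database of certified player keys -- empty of yours.
  trusted_player_keys = set()

  if my_pub in trusted_player_keys:
      print("content owner will hand content keys to this player")
  else:
      print("signature may be valid, but the key is unknown: no premium content")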

An OS that can run only certified software is probably of no interest to anyone, including Microsoft.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
