Why Freenet is Complicated (or not)
By scgmille in Technology
Mon Feb 18, 2002 at 02:25:45 AM EST
Tags: Freedom (all tags)
This article is primarily a friendly rebuttal to Steven Hazel's
CodeCon 2002 talk entitled "libfreenet: a case study in horrors
incomprehensible to the mind of man, and other secure protocol design
mistakes". Hazel presents the Freenet protocol as an
overly complicated, self-designed protocol. In fact, though
somewhat complicated, literally every step in the protocol was
carefully thought out to resist certain attacks and to strengthen
certain properties desirable for Freenet operators and the network as a whole.
First, not all of the components of the protocol were designed by
Freenet developers. Whenever possible, analyzed cryptographic
components were used. The entire key exchange protocol, for example,
is the Station-To-Station (STS) protocol, modified only slightly to
allow for a mode of operation called Silent-Bob (which allows nodes to
masquerade as other TCP services until a valid Freenet node is detected).
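As a rough illustration of what STS builds on (this is a sketch, not the Freenet wire format), the heart of the protocol is an ephemeral Diffie-Hellman exchange whose public values both sides then sign. The core agreement, using only Java's standard crypto API, looks like this:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;

public class StsCoreSketch {
    // Returns true if both sides derive the same shared secret.
    static boolean agree() throws Exception {
        // Alice generates an ephemeral Diffie-Hellman key pair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        kpg.initialize(2048);
        KeyPair alice = kpg.generateKeyPair();

        // Bob generates his pair over the same group parameters.
        KeyPairGenerator kpgB = KeyPairGenerator.getInstance("DH");
        kpgB.initialize(((DHPublicKey) alice.getPublic()).getParams());
        KeyPair bob = kpgB.generateKeyPair();

        // Each side combines its private key with the peer's public key.
        KeyAgreement ka = KeyAgreement.getInstance("DH");
        ka.init(alice.getPrivate());
        ka.doPhase(bob.getPublic(), true);
        byte[] aliceSecret = ka.generateSecret();

        KeyAgreement kb = KeyAgreement.getInstance("DH");
        kb.init(bob.getPrivate());
        kb.doPhase(alice.getPublic(), true);
        byte[] bobSecret = kb.generateSecret();

        // What makes it STS rather than plain DH: each side also *signs*
        // the exchanged public values, which defeats man-in-the-middle
        // attacks. The signing step is omitted here.
        return Arrays.equals(aliceSecret, bobSecret);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(agree() ? "shared secrets match" : "mismatch");
    }
}
```

Silent-Bob then layers on top of this: the node simply refuses to speak first, so a port scanner sees whatever service the node is masquerading as.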
Differences between 0.3 and 0.4
The primary link-level change was the final merge of a full
public/private-key encryption system to prevent Man-in-the-Middle
attacks. With this capability, we make it difficult for an attacker
to become a cancer node by generating its own keyspace. The keyspace
discovery protocol Hazel alludes to attempts to involve enough nodes
(and spread far enough across the network) to include one
non-malicious node. It is designed such that if even one non-malicious node
is involved, the resulting keyspace will be sufficiently random. In addition,
it 'bootstraps' nodes into the network in a way that very quickly makes a
new node useful for storage.
Freenet Client Protocol
Freenet developers realized quite quickly that the node-level
protocol was too difficult for use by general client developers. So,
in February of 2001 (yes, that long ago), the Freenet Client Protocol
was developed. This is a simple protocol that is trivial to
implement, uses no crypto, and speaks only between a client and the
locally running Freenet node to handle all aspects of Freenet
required by a client. This includes insertion, retrieval, and key
generation. It hides nearly all the complexity of document structure,
metadata, file splitting, etc. FCP has already been written as a
library for several languages.
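To give a feel for the style of protocol involved (the message and field names below are illustrative only, not the actual FCP wire format), FCP is the kind of line-oriented, crypto-free exchange that can be serialized and parsed in a few lines:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch of an FCP-*style* message: a type line, Name=Value fields,
// and a terminator. Hypothetical names; not the real FCP grammar.
public class FcpStyleMessage {
    static String serialize(String type, Map<String, String> fields) {
        StringBuilder sb = new StringBuilder(type).append('\n');
        for (Map.Entry<String, String> e : fields.entrySet())
            sb.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        return sb.append("EndMessage\n").toString();
    }

    static Map<String, String> parse(String wire) {
        Map<String, String> fields = new LinkedHashMap<>();
        String[] lines = wire.split("\n");
        fields.put("_type", lines[0]);                 // first line names the message
        for (int i = 1; i < lines.length && !lines[i].equals("EndMessage"); i++) {
            int eq = lines[i].indexOf('=');
            fields.put(lines[i].substring(0, eq), lines[i].substring(eq + 1));
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> req = new LinkedHashMap<>();
        req.put("URI", "freenet:KSK@example");         // hypothetical key
        String wire = serialize("ClientGet", req);
        System.out.print(wire);
        System.out.println(parse(wire).get("URI"));
    }
}
```

A client speaks this over a plain socket to the local node; the node does all the cryptography.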
Other points of note
Informal unit testing has been in the code since very early in the
development. For the 0.4 iteration, formal unit testing was adopted
using the JUnit framework.
Freenet uses strong cryptography. Much of this cryptography relies
on secure random numbers. Early on a decision was made (for the
reference implementation) to implement the Yarrow PRNG from
Counterpane Labs. One characteristic of the Yarrow generator is that
it frequently rekeys the block cipher it uses. When choosing between
AES and Twofish the choice was easy: Twofish has an expensive
key-setup stage, and with frequent rekeying that would have been a
severe performance bottleneck for the PRNG. Once AES was chosen there,
it was decided that using it pervasively throughout the protocol would
save implementers time.
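To see why cheap key setup matters, here is a toy counter-mode generator in the spirit of Yarrow's output stage: output blocks are the block cipher applied to an incrementing counter, and "rekeying" is just re-initialising the cipher with a fresh key. This illustrates the design pressure, not Freenet's actual Yarrow code:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class CtrGeneratorSketch {
    private final Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
    private final byte[] counter = new byte[16];

    CtrGeneratorSketch(byte[] key16) throws Exception {
        rekey(key16);
    }

    // With AES this step is cheap; a cipher with an expensive key
    // schedule (like Twofish) would pay this cost on every rekey.
    void rekey(byte[] key16) throws Exception {
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key16, "AES"));
    }

    // Next 16 bytes of output: encrypt the incremented counter.
    byte[] nextBlock() throws Exception {
        for (int i = counter.length - 1; i >= 0; i--)
            if (++counter[i] != 0) break;          // increment with carry
        return aes.doFinal(counter.clone());
    }

    public static void main(String[] args) throws Exception {
        CtrGeneratorSketch g = new CtrGeneratorSketch(new byte[16]);
        byte[] a = g.nextBlock();
        byte[] b = g.nextBlock();
        System.out.println(!Arrays.equals(a, b));  // successive blocks differ
    }
}
```

Yarrow rekeys like this routinely to limit how much output any one key ever protects, which is exactly why the key-schedule cost dominated the choice of cipher.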
- Twofish was originally used as the document-level cipher to hint to
developers that more than one cipher may be used, and to prevent those
developers from hard-coding their systems for any particular one. The
document-level cipher is configurable, however, and AES can be used
there as well.
- Cryptographic primitives can be difficult to implement. In the 0.4
and 0.3 protocols, all used primitives could be implemented by calling
other libraries, such as OpenSSL. Only one primitive in 0.4 cannot be
implemented this way, the DHAES signature algorithm. However, that
algorithm is built on top of primitives also provided by OpenSSL, so
implementing and verifying it would take no more than a day or two.
- Why not just use SSL? SSL relies on a PKI for encryption, which
does not allow the flexibility in trust that Freenet's crypto allows.
SSL is also a far more complicated protocol than the Freenet
link-level protocol. Secondly, concerns have been raised about the
anonymity lost when using SSL; these remain a matter of discussion.
- Hazel comments on the difficulty of finding implementations of
ciphers. A quick trip to Counterpane Labs reveals three C, two VB,
one Java, and three assembly implementations of Twofish. As an AES
candidate, Twofish also required reference implementations in portable
C and Java, for two more. Rijndael? Five C implementations, several
assembly implementations, Delphi, Perl, Matlab, VB, Java, the NIST
reference code, Ada95, and Emacs Lisp! These libraries aren't difficult
to use either; check them out sometime.
- Hats off to the audience member who pointed out some of the *real*
reasons for unique IDs and connection agnosticism: reliability,
efficiency, and the ability to move easily to other transport layers
(even non-IP-based protocols).
- We are *not* implementing TCP over TCP. TCP is a complicated
protocol that provides reliability and congestion control. Freenet's
protocol does not attempt to reinvent any of this. Freenet's logic
provides resilience to dropped connections and efficiency through
connection pooling. Freenet is not unique in this. Check out the
Blocks protocol which does many of the same things (and much more,
more than Freenet needed).
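The unique-ID point above can be made concrete with a small sketch: when every message carries its own ID, many logical conversations can share one pooled connection, and a reply can be routed to the right waiter no matter which connection it arrives on. This is an illustration of the idea, not Freenet's actual message format:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Dispatch replies by message ID rather than by socket, so requests
// survive connection churn and share pooled connections.
public class MessageMux {
    private final Map<Long, Consumer<String>> pending = new HashMap<>();
    private long nextId = 0;

    // Start a conversation: tag it with a fresh ID and remember who
    // is waiting for the reply.
    long send(String request, Consumer<String> onReply) {
        long id = nextId++;
        pending.put(id, onReply);
        // ...write (id, request) to whichever pooled connection is free...
        return id;
    }

    // A reply can arrive on *any* connection; the ID routes it home.
    void onReply(long id, String reply) {
        Consumer<String> handler = pending.remove(id);
        if (handler != null) handler.accept(reply);
    }

    public static void main(String[] args) {
        MessageMux mux = new MessageMux();
        long a = mux.send("get key A", r -> System.out.println("A got " + r));
        long b = mux.send("get key B", r -> System.out.println("B got " + r));
        mux.onReply(b, "dataB");   // replies may come back out of order
        mux.onReply(a, "dataA");
    }
}
```

Nothing here duplicates TCP's sequencing or congestion control; it only adds the request/reply correlation that TCP, being a byte stream, does not provide.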
- Base64 in URIs was chosen because of the length of Freenet
URIs. Cryptographically protected URIs have at least 23 bytes of
data to encode. Hex encoding is the obvious first choice, but it means
a 46-character URI before adding optional components like the
plain-text path of Signed-subspace keys. Base64 gave us the most bang
for the buck. And the change Hazel refers to? Removing characters that
don't play well with the browser and replacing them with ones that do.
Nearly all implementations of Base64 keep a character array holding
the Base64 alphabet; adapting one would require changing less than 10
keystrokes. Besides, a client needn't do this anyway, as FCP handles
all key generation and parsing.
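The arithmetic above is easy to check, and the alphabet swap really is a two-character change. Java's standard library now ships exactly such a variant; note the alphabet shown here is the generic URL-safe one ('+' and '/' replaced by '-' and '_'), not necessarily Freenet's exact choice:

```java
import java.util.Base64;

public class UriEncodingSketch {
    public static void main(String[] args) {
        byte[] keyData = new byte[23];          // minimum cryptographic payload
        for (int i = 0; i < keyData.length; i++) keyData[i] = (byte) (i * 11);

        // Hex: two characters per byte -> 46 characters.
        StringBuilder hex = new StringBuilder();
        for (byte x : keyData) hex.append(String.format("%02x", x));

        // Base64 (unpadded): four characters per three bytes -> 31 characters.
        String b64 = Base64.getUrlEncoder().withoutPadding().encodeToString(keyData);

        System.out.println(hex.length() + " vs " + b64.length());  // 46 vs 31
        // The URL-safe alphabet avoids '+' and '/', which are awkward
        // inside URIs, by substituting '-' and '_'.
        System.out.println(b64.indexOf('+') == -1 && b64.indexOf('/') == -1);
    }
}
```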
- Documentation is difficult for a non-stable piece of software.
Documenting prematurely has always resulted in out-of-date documents
that help no one. However, documentation is a priority, and is likely
to appear sometime after the 0.5 release.
- With respect to experimentation: the reference implementation is not
a platform for experimentation. When experimenting with network
behavior, the developers use simulation tools. The reference
implementation is therefore meant to have a respectable level of
performance and reliability.
- Some perceived minor irritations may arise due to the implementation of
Freenet in Java. Java is not like C, so some porting issues are bound to arise. Porting is hard sometimes.
- An audience member mentioned the Java Message Service, or JMS.
JMS, firstly, is a relatively new standard (Aug 2001) that was not
around when Freenet started; in fact, it didn't appear until well
after the 0.3 release. Secondly, Freenet, by policy, does not rely on
any code with a restrictive license and a non-free implementation. And
re-implementing JMS was not a good use of any developer's time.
There are few projects with requirements for security and anonymity as
strong as those found in Freenet. There is no way around some amount of
complexity to meet these goals. The developers of Freenet know this, and
also know that complexity is a barrier to entry. The protocol is made
only as complicated as necessary to meet its requirements and to allow
flexible implementation. We realize that the inter-node protocol is
too complicated for general client use. The Freenet Client Protocol
was designed to provide a simple client-to-node protocol that could
be implemented easily. It's unfortunate that Steven became frustrated
writing to a moving target, but watering down the protocol to make it
easy to implement would produce an ineffective system. In short,
software development is hard sometimes.