Jon Lasser, author of Think Unix and project coordinator of Bastille Linux, wrote what had the makings of an excellent piece. He has written several columns in the past that raised
awareness of a variety of security-related issues. Unfortunately, this piece revolves around several weak premises, which are discussed in varying depth below.
For starters, identical configurations, binary compatibility and identical libraries are good for hackers.
While this is correct in essence, it hardly paints a reasonable picture, because it leaves out an awful lot. Identical configurations are nothing new in the world of Linux:
several distributions are widely deployed in virtually default configurations at numerous sites. At the same time, many sites are run by capable administrators
who do not use the default packages, configurations, and so forth. The point is that sane default configurations do not constitute a security flaw merely because malicious hackers know
what they are. To claim otherwise is to advocate a seriously flawed notion of security through obscurity.
Furthermore, the concept of "binary compatibility" is somewhat misleading. From an ABI (Application Binary Interface) standpoint, all
machines running Linux with the same version of the kernel are absolutely "binary" compatible. That
still leaves a potentially large number of unresolved dependencies, depending on what the binary is, but this does not
introduce any greater risk of hacking. Attackers do not typically rely on dynamically linked binaries, which
means that as long as the kernel interface for the necessary system calls hasn't changed, the binary is more than likely going to run completely unhindered no matter what Linux system you put it on.
Just as "binary compatibility" is an advertising scheme when distributions use it as a hook, it is equally an advertising scheme when security gurus try to make it out to be a sin.
Identical binary builds are an even more serious issue. Many exploits, such as buffer overflows, need to hard-code magic numbers like system calls and addresses that vary by Linux distribution, and by builds of the binaries.
Again, this is true on the surface, but it is a definitely overblown claim. Red Hat is currently one of the most popular distributions in use worldwide, if not the most popular. That means a large block of users is already running "identical" binaries in most cases, installing whatever comes in the RPMs Red Hat provides. The same is true of Debian users, most of whom use apt-get to install their software, and of Mandrake, SuSE, and so forth. The only major distributions that escape the "identical binary" problem are Gentoo and Sourcerer, and only because of the source-based nature of those distributions. If this
hasn't bitten us in our collective asses yet, why would one more monolithically popular distribution change that? This sounds an awful lot like advocating security through obscurity: they can't overflow the buffer they can't find. That is utter crap, and I suspect the author of the original piece knows it.
As United Linux will have identical binaries for base system software, an exploit that runs against one distribution built atop it will run against all other distributions.
I begin to wonder what exactly "base system software" means. The most common targets for exploits are network services, such as web servers or FTP servers. These are certainly not base system software; in fact, they are the most likely candidates for differentiation among the United Linux distributions. Each vendor will most likely be tuning and tweaking its own builds of Apache, MySQL, proftpd, or whatever other network servers it intends to bundle with the base distribution. The homogeneity among these distributions should be rooted in binaries like cp, mv, su, and so forth. That is a much smaller concern, since these binaries are all relatively stable in versioning:
major revolutions in the concept of copying a file haven't come along in years, and bugs in them are few and far between at this point.
That means if United Linux is successful, it will allow automated exploits to proceed with a ruthless efficiency, reminiscent of CodeRed, Nimda, and other worms targeting software mono-cultures.
This is just plain old FUD. A major factor in the success of those worms, and the reason nothing comparable has hit Unix since the Morris Worm, is a fundamental difference in design philosophy between the crew in Redmond and the Linux crowd. IIS runs with system privileges; indeed, that is the only way it can run. It is tied into the core Windows operating environment at such a fundamental level that it cannot divorce itself from the privileges it runs with. That makes bugs resulting in security vulnerabilities a much more serious matter, because a compromised IIS is a compromised Windows OS. Contrast this with Apache on Linux, which by default runs as the user "nobody", an account with virtually no privileges on almost all default systems. You must consciously change this to achieve the same level of vulnerability as on a Windows server. Examine the software running in each environment piece by piece and you find this tends to hold throughout, except for those packages that run in both environments, which tend to follow the more Unix-like philosophy.
Even further than that, has the author given any consideration to the thought that perhaps United Linux will actually bundle truly secure software such as Apache, QMail, TinyDNS, ProFTPd (or PublicFile), and other such excellent servers? Obviously everything has bugs, and there have been numerous security holes in virtually all of those packages (qmail and tinydns being the exceptions), but they have been resolved quickly, and most vendors have made fixes widely available within quite reasonable time periods.
Another serious problem with United Linux will likely be coordination between vendors for security fixes.
How will this be any more serious a problem than coordination between the actual software vendor and the distribution firm? In fact, I would think there is an opportunity for packages to be available faster, since there will be no single United Linux distribution; it is instead an abstract framework for building compatible distributions. This means that if SuSE is excellent at packaging and shipping fixes, then in the worst case, users of the other vendors' distributions
built from United Linux can use SuSE's packages with little or no modification. Beyond that, it's not as if users cannot install the fixed packages from source, and large installations can always package the fixed versions themselves, as so many shops already do.
Also questionable is United Linux's mandatory availability of SNMP (Simple Network Management Protocol) software. While most Linux distributions already include this, it is rarely installed by default. The default installation of SNMP is almost always insecure, and, in fact, unchanged SNMP community strings made number seven on SANS' list of the most serious Unix vulnerabilities.
There are several holes in this assertion. First, just because SNMP is installed does not mean it will be running. Second, just because it is installed does not mean the community strings won't be forced to non-default values during installation. Finally, the assertion assumes it is the distribution vendor's responsibility to guarantee the user's security, rather than to provide users with all the tools to do it themselves.
The author made several interesting points, and there is definitely a danger in having a large identical installed base, but every point raised has mitigating factors that went largely ignored in this article. These issues deserve further exploration, rather than the vague and unsubstantiated
worries expressed in this column. That exploration needs to be grounded in fact, not fancy, and should cite concrete examples rather than vaguely speculating about what might be possible. Frankly, I would have expected better from someone like Mr. Lasser.