Inoshiro is really the one to answer this question, but he's off writing his next article, so I'll take a stab at it. :-)
Basically, things like tcp_wrappers let you define, in a rough way, which hosts have access to which services on a machine. A firewall is more general, and more secure, in that it intercepts and examines all traffic for any number of machines, and takes action as needed. So I could have a dozen webservers, put them all behind one firewall, and have that firewall machine filter traffic based on a bunch of criteria. The firewall takes all incoming traffic and, if it approves the request, forwards the packets on to the destination machine, and likewise in the other direction, from the server back out to the world.
The line can be a bit blurred in the unix world, as a machine can serve as its own firewall, to an extent. Things like ipchains let you run all incoming traffic for the local host through rulesets, and do pretty much the same thing you'd be doing on a dedicated firewall box. The advantage of making it a dedicated machine is that you only have to manage the rulesets in one place, and you get a convenient traffic gateway for every machine on the "private" side of the firewall.
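To give a flavor of what that looks like, here's a minimal ipchains ruleset for a standalone Linux box protecting itself. This is just an illustrative sketch: it assumes a default-deny policy on incoming traffic, with holes punched for loopback, established TCP connections, and a local web server on port 80.

```shell
# Flush the input chain and default to denying incoming packets
ipchains -F input
ipchains -P input DENY

# Always allow the loopback interface
ipchains -A input -i lo -j ACCEPT

# Allow TCP packets that are part of established connections
# (! -y matches packets without the SYN flag set)
ipchains -A input -p tcp ! -y -j ACCEPT

# Allow new connections to the local web server on port 80
ipchains -A input -p tcp -d 0/0 80 -j ACCEPT
```

On a dedicated firewall you'd be doing much the same thing on the forward chain instead, deciding which packets get passed through to the machines behind it.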
Oh yeah, and hosts.deny and hosts.allow are just the config files for tcp_wrappers. They let you say which hosts can do what, for services that run through tcp_wrappers.
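For example, a common setup is to deny everything in hosts.deny and then open specific holes in hosts.allow. The service names and addresses here are made up, just to show the daemon-list : client-list syntax:

```
# /etc/hosts.deny -- deny anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then allow specific hosts per service
sshd: 192.168.1.
in.ftpd: .example.com
```

hosts.allow is checked first, then hosts.deny, and access is granted if neither file matches, which is why the catch-all in hosts.deny matters.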
Not the real rusty