Kuro5hin.org: technology and culture, from the trenches

The ethics of linkage

By zonker in Op-Ed
Sat Jan 04, 2003 at 08:03:23 PM EST
Tags: Internet (all tags)

If you read "meta" sites like Slashdot, Kuro5hin, Fark, Met4filter (natch), and Memepool, you've probably encountered links to stories that you can't reach -- because the act of linking to a server not prepared for massive traffic has brought the server down or, worse, pushed the hapless owner over their bandwidth cap, denying use to anyone for the rest of the month, day, or whatever period the ISP or hosting provider uses to allocate bandwidth.

Somebody builds the Sears Tower in 1/10,000th scale in Legos, puts up a few photos for his buddies to check out, and the next thing they know their server is flooded or their bandwidth cap is maxed out and the server is unusable. Or, even worse, the provider tacks on a fee for bandwidth usage and some poor schlub is facing a monster bill that they didn't anticipate.

I don't believe that people should have to get permission to link to another site, in general. If you put something on the Web without putting a password on it, you're implicitly allowing others to link to it -- at least in my opinion. Don't put something on the Web that you don't want other people to see. However, the folks who run Slashdot, et al., know that they carry an inordinate amount of influence when it comes to driving people to a website. (Hence the term "slashdotted.") If they link to a JPEG of a used tissue, at least 1% of their audience -- which is of considerable size -- will click on the link just for grins. 1% of a site that probably gets hundreds of thousands of page views per day comes to quite a few people. They can easily disable a site for a few days, until the post is off their front page, making life miserable for someone who just wanted to put up images for their buddies. (Often these things are not submitted by the owners of the site.) The moderators/owners of these sites should be doing the following:
  1. Ask permission if the site isn't a major site that can handle the traffic. Obviously, Met4filter linking to Slashdot isn't going to cause a problem for Slashdot, nor will Slashdot linking to CNN cause a problem for CNN.
  2. Mirror the site. If the site owner is amenable to having their content exposed to the world but doesn't have the bandwidth or server resources to handle it, the linking site should ask permission to mirror the content -- at least temporarily -- to handle the load. It hasn't happened yet, but I can see a "reckless linking" lawsuit in which someone sues Slashdot or another site for causing monetary damages.
  3. Drop the link. If the site owner isn't willing to be mirrored, and the site is obviously going to suffer if linked, then the post should be dropped or not put up in the first place. Not because Slashdot and the rest lack the legal right to link -- but out of common courtesy. Not everyone wants to share their Lego picture gallery with the rest of the world, at least not all at once.
I was just thinking about this because I ran into another unreachable link via Slashdot this morning, and yesterday I followed a link off Fark where the site owner had replaced the content with a plea for donations to pay for bandwidth because they had been unexpectedly Farked. It was one thing when this was an unexpected side effect; the first few sites slashdotted were kind of a surprise, I'm sure. However, now that the effect is well documented, these folks should be a little more careful -- and a lot more courteous -- about the links that they post.




Should site owners change their practices?
o No, link away. 24%
o Maybe, depends on how much traffic they get. 34%
o Yes, I'm sick of being Slashdotted already! 17%
o I wanna get Slashdotted! 23%

Votes: 198

Related Links
o Slashdot
o Kuro5hin
o Fark
o Met4filter (natch)
o Memepool

The ethics of linkage | 139 comments (137 topical, 2 editorial, 0 hidden)
Mirroring (3.00 / 6) (#2)
by jaymz168 on Sat Jan 04, 2003 at 01:38:12 PM EST

I think mirroring the content is a great idea, and it has worked in several cases. I think that the major meta sites should start mirroring these smaller sites themselves, cancelling the mirror once the link is off their front page. The only problem that arises in this situation is contacting the webmaster: the owner of the website might not reply to a mirroring request for a day or so -- but then, nobody really needs to see Lego architecture right now.

Mirroring (3.28 / 7) (#3)
by Anonymous 7324 on Sat Jan 04, 2003 at 02:02:12 PM EST

has legal issues with copyright.

The best I've heard of is having a set of proxies that caches the content in question. (And of course, only "proxies" to the site about to be Slashdotted, to prevent its use as an open proxy). Since the use of content is solely as a cache, some of the legal issues with copyright violation might be circumventable, as I understand it.

Of course, obviously IANAL.

Response to copyright complaint (3.50 / 3) (#11)
by pyro9 on Sat Jan 04, 2003 at 07:07:30 PM EST

One response might be to immediately and without question remove the mirror and let the site in question get clicked into oblivion.

What we really need is a legally defined default consistent with the design of the web (and the net in general). The default is to link freely and anywhere (but don't try to make the content look like yours through 'clever' frame tricks) and to cache freely. That is, after all, the entire intent of the web!

Should the content owner wish otherwise, the burden is on them to specify the non-default through no-cache headers and by disallowing page display if the referrer is not internal. The latter should be supplemented with text on the page disallowing links, preferably some sort of standardized information so automated tools can recognize it.

The future isn't what it used to be
[ Parent ]
Advance permission (4.10 / 10) (#5)
by pyro9 on Sat Jan 04, 2003 at 02:08:08 PM EST

An eventual solution might be a statement of permission in advance. A mirrors.txt much like a robots.txt. The question is what to do if a site has not yet set up a mirrors.txt (which might only be a de-facto standard, if that).

One answer would be to look at the metadata: if it would allow proxy caching, mirror it (with periodic updates) until the story goes to archive mode. Another is to look at robots.txt: if Google is allowed to spider it, cache it (since Google will).

It may be more reasonable (and correct) to cache it through a cgi on the linking site such that ad views will be retrieved and credited to the original site (which would certainly turn a potential negative into a positive for the site owner).
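The "if Google may spider it, we may cache it" heuristic above is easy to sketch. Purely as illustration (the function name and the example URLs are made up), Python's standard-library robots.txt parser can answer the question:

```python
from urllib.robotparser import RobotFileParser

def google_may_spider(robots_txt: str, url: str) -> bool:
    """pyro9's heuristic: if robots.txt lets Googlebot crawl the
    page, assume a temporary mirror is acceptable too."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("Googlebot", url)

# A robots.txt that blocks everyone except Google's crawler:
robots = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""
print(google_may_spider(robots, "http://example.com/legos/sears.html"))  # True
```

Note this is only a hint, not permission: a site can welcome Google's slow crawl and still not want a Slashdot flood.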

The future isn't what it used to be
In addition.. (4.50 / 5) (#20)
by Eloquence on Sat Jan 04, 2003 at 09:39:49 PM EST

.. have two checkboxes on Slashdot, K5 etc. below the story submission form:

[ ] I am the copyright holder of this site and I agree that its contents may be cached.

[ ] I have permission from the copyright holder and he agrees that the contents may be cached.

Sure, some people may choose option 2 without actually having permission, but then you can shift the blame to someone else (preferably an anonymous coward ;-).
Copyright law is bad: infoAnarchy Pleasure is good: Origins of Violence
spread the word!
[ Parent ]

That could work (none / 0) (#27)
by pyro9 on Sat Jan 04, 2003 at 10:30:17 PM EST

That would be an excellent way to remove all question.

The future isn't what it used to be
[ Parent ]
Sort Of Pointless (4.50 / 2) (#47)
by DarkZero on Sun Jan 05, 2003 at 07:03:28 AM EST

The question is what to do if a site has not yet set up a mirrors.txt (which might only be a de-facto standard, if that).

If the site were smart enough to expect a slashdotting, they would've done many other things in advance before getting to your "mirrors.txt" idea. They would've made their images as small as possible, used mostly static pages, and set something up so that they could watch the bandwidth they're using and be able to switch the site over to one or two static pages after the first hour or so of a slashdotting.

Your idea is a good one, but most of the problem here is the people that aren't taking the time to protect themselves, so it's more of an addition to the solution than an actual solution.

[ Parent ]

There is a point (3.00 / 1) (#51)
by pyro9 on Sun Jan 05, 2003 at 08:04:13 AM EST

Presumably, a default mirrors.txt would be preset by a provider such as geocities or the many virtual hosting providers when the account is created.

The rest of the post describes how to handle the likely case that there is no mirrors.txt. The lack of a mirrors.txt would be taken as permission (based on the idea that if you have published a page on the web without password restriction, you presumably want the world to be able to see it). The existence of a standard would place the onus on site owners who want to scream and get legally anal about copyright and caching.
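No mirrors.txt standard ever existed; purely as illustration, a robots.txt-style file might look like this (every directive name below is invented for the sketch):

```text
# Hypothetical /mirrors.txt -- no such standard exists.
Mirror-allowed: yes            # blanket permission to mirror this site
Max-age: 7d                    # stop serving the mirror after a week
Contact: webmaster@example.com # whom to ask about anything else
```

A hosting provider could drop a default like this into every new account, which is what would make the "no file means permission" reading defensible.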

The future isn't what it used to be
[ Parent ]
Attitude much? (none / 0) (#134)
by davidduncanscott on Tue Jan 07, 2003 at 10:28:40 PM EST

If the site were smart enough to expect a slashdotting...most of the problem here is the people that aren't taking the time to protect themselves
Maybe I'm misreading you here, but you sound a bit, well, cold-hearted.

If, say, my daughter is reported to be a pretty girl (and she is, I might add) and suddenly 10,000 young men knock on my door and ask to meet her, my lawn is going to be a mess, nobody will be able to park in my neighborhood, and my life will generally become hellish. I don't think it's reasonable to say that I should have astro-turfed the yard and built a garage or that I should have provided private security.

[ Parent ]

No, but... (none / 0) (#135)
by Dephex Twin on Wed Jan 08, 2003 at 03:32:47 PM EST

That kind of thing can and does happen (although simply being pretty isn't itself newsworthy enough to have a huge number of visitors).

I don't think your analogy exactly works though, because you never intended for anyone to visit the house in that situation... but when a website gets slashdotted, the viewers are doing exactly what the website owner intended, just *more* than he has bandwidth to facilitate.

Say, you put up x-mas lights in your yard, and your particular decorations are especially clever and are pointed out in a news report, and that causes a sudden HUGE influx of visitors one day.

Yes, it sucks that an excessive number of people are around looking at it, but... you put up these lights for the public to see, right?  If you don't put down at least some cursory security to control those visitors and protect your yard and privacy, then, well... you lost that gamble.

That's my opinion of it.  It still sucks of course.

Alcohol: the cause of, and solution to, all of life's problems. -- Homer Simpson
[ Parent ]

Funny you should mention lights (none / 0) (#136)
by davidduncanscott on Wed Jan 08, 2003 at 04:20:50 PM EST

I live a few blocks from a local Christmas light extravaganza, and it's something of a pain in the ass for the last few nights before Christmas, with streams of tourists and even less parking than usual.

However, I agree that the folks on that block are indeed deliberately attracting visitors. Nonetheless, not every web site is set up to draw a crowd. Many are created as a way for a few people to share the odd picture, and while they could of course slap on password protection or somesuch, this is not too far off saying that if you didn't want tour groups going through your antique shop you should have hired a bouncer. My feeling is that the tour guides have some responsibility here as well.

[ Parent ]

Or a linkban.txt? (none / 0) (#110)
by Elkor on Mon Jan 06, 2003 at 09:03:43 AM EST

Much like "robots.txt", it would check the referrer (i.e. what site the person followed the link from) and could exclude people following links from k5 or /.

Or perhaps k5 could run links through a cgi filter during article submission that would check the robots.txt file. If the robots.txt file didn't allow crawling of the site, then k5 would bounce the article back saying "sorry, can't link to that site."

But then, IANAW(eb)P(rogrammer) so I have no idea whether that idea would work at all or is complete garbage.
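For what it's worth, the referrer check doesn't even need a new "linkban.txt" standard; Apache's mod_rewrite can already do it on the server side. A sketch (the busy.html page name is made up, and the rules assume mod_rewrite is enabled):

```apache
# Hypothetical .htaccess: send /. and k5 visitors to a cheap static page.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/busy\.html
RewriteCond %{HTTP_REFERER} slashdot\.org [NC,OR]
RewriteCond %{HTTP_REFERER} kuro5hin\.org [NC]
RewriteRule .* /busy.html [R=302,L]
```

The first condition keeps the redirect from looping on busy.html itself. This is the defensive, site-owner-side version of the idea; a linkban.txt would let the linking site check *before* posting.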


"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
Other option and ignorant editors. (3.42 / 7) (#6)
by snowmoon on Sat Jan 04, 2003 at 02:17:29 PM EST

How about attempting to ask permission before inundating them with visitors?  Create a mirror BEFORE posting the story in case of outage?

How about the ignorant and reprehensible act of posting a link to a phpBB or other computationally intensive site, knowing full well that those sites max out at user loads somewhere in the hundreds? The editors at Slashdot have often posted links to "active" pages that are down within seconds, leaving readers to rely on the Google cache.

Site admins should also take more preventative action to make sure that their servers fail in a graceful and predictable fashion. It doesn't take much to have an uptime kill switch that reverts to a static page until the load clears, or a caching proxy to offload repetitive requests for the same page.
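The kill-switch idea is a few lines of code. A minimal sketch in Python (the threshold, the function name, and both page names are invented; a real setup would wire this into the web server rather than print a choice):

```python
import os

LOAD_LIMIT = 10.0  # arbitrary threshold; tune for your hardware

def pick_page(load1: float) -> str:
    """Above the limit, serve a cheap static page instead of the
    expensive dynamic one -- the 'uptime kill switch' idea."""
    return "static_fallback.html" if load1 > LOAD_LIMIT else "dynamic.cgi"

# On Unix, feed it the real 1-minute load average:
#   pick_page(os.getloadavg()[0])
print(pick_page(42.0))  # static_fallback.html
print(pick_page(0.5))   # dynamic.cgi
```

Run from cron every minute, a script like this could also swap a symlink that the web server's document root points at, which is about as graceful as failure gets.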

Personally I say shame on the editors and linkers for failing to properly warn the target site of impending doom.

Automatic migration caching (4.00 / 7) (#7)
by ogre on Sat Jan 04, 2003 at 03:54:52 PM EST

This problem would go away if the internet protocols encouraged or required sites to cache pages that pass through them. With any reasonable cache strategy, this would prevent slashdotting, because as soon as a page becomes that popular it moves to the backbone and other major servers, and links no longer hit the server that carries the original.

Of course there are some problems with this method; page hit counters would no longer work and active content in general would not work well. In my own humble opinion, anything that discourages active content is actually an advantage though.

There are network protocols that do this, and it may even be in IPv6. I don't recall specifically, but I do recall that when I looked at IPv6 it contained lots of good ideas.

Everybody relax, I'm here.

IPv6 doesn't go that high-level (4.30 / 10) (#10)
by fluffy grue on Sat Jan 04, 2003 at 07:00:27 PM EST

I think the closest IPv6 gets to that is multicasting, which is for a different (related, but not really adaptable) problem domain.

However, it is a pervasive part of HTTP/1.1; a huge chunk of the HTTP/1.1 spec is geared towards cache issues.

IMO, 'hit counters' aren't worth worrying about, and it's easy to make everything on the page except the hit counter get cached anyway. As for active content, there are plenty of ways to keep a site totally active and yet still cache-friendly. For example, in the case of K5 and /. and so on, non-logged-in views could become cacheable simply by making the non-logged-in view (for things like comment display mode and so on) based purely on URL passing (/. is already like this, and K5 used to be, but I don't think it is anymore), fetching ads client-side (using iframes which point to a CGI), and putting out appropriate Last-Modified: and Expires: headers.
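The header side of that is small. A sketch in Python (the helper name and the one-hour TTL are arbitrary choices, not anything K5 or /. actually did) of the Last-Modified:/Expires: pair that lets an HTTP/1.1 cache hold a page:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def cache_headers(last_modified: datetime, ttl_hours: int = 1) -> dict:
    """Headers that let an HTTP/1.1 proxy cache a page."""
    now = datetime.now(timezone.utc)
    return {
        "Last-Modified": format_datetime(last_modified, usegmt=True),
        "Expires": format_datetime(now + timedelta(hours=ttl_hours), usegmt=True),
        "Cache-Control": "public, max-age=%d" % (ttl_hours * 3600),
    }

hdrs = cache_headers(datetime(2003, 1, 4, 20, 3, tzinfo=timezone.utc))
print(hdrs["Last-Modified"])  # Sat, 04 Jan 2003 20:03:00 GMT
```

With Last-Modified: set, a proxy can also revalidate cheaply with If-Modified-Since and get a bodyless 304 back, which is most of the bandwidth win right there.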

Anyway. The other part of the equation is to get a nice hierarchical HTTP proxy setup going. If the backbone routers were to add a bunch of large transparent Squid caches, that would do a lot to help everyone out, though it'd probably not change the net cost of bandwidth. Bandwidth itself is free; it's just a metric used for fairly billing infrastructure costs based on usage. So even though less bandwidth would be "physically" used, the infrastructure costs would only go up, and the backbones would still have to charge based on something. Though if they were nice about it and charged based on real bandwidth, they'd not charge for cached stuff, and would instead charge more for those who insist on non-cache-friendly content.

Another idea is to take advantage of Squid's peering stuff and just setup a semi-grassroots network of transparent peering Squid caches which anyone could join in on (ISPs in particular, who could then run the cache transparently)... and then using Squid to make sure that all of the stuff is compressed when transmitted means that everyone wins, because even sites which don't use mod_gzip or equivalent will still take less actual bandwidth once it passes through a proxy, and so on.

Or, there's always freenet, which uses essentially the same idea...
"Is said in tentacle rape" is said in tentacle rape.
"Is not a quine" is not a quine.

[ Hug Your Trikuare ]
[ Parent ]

more on caching (3.16 / 6) (#16)
by ogre on Sat Jan 04, 2003 at 08:43:48 PM EST

Yeah, now that I think about it, it wouldn't make sense for the IP layer to do any caching. On the other hand leaving it up to each individual high-level protocol doesn't seem like a great idea either.

You make a good point about HTTP 1.1, the issue isn't the protocol any more, it's the infrastructure. But when you say "bandwidth is free" I'm sure you mean over-the-wire bandwidth. Every time a packet goes through a machine it takes up resources on that machine which aren't free. How about a charging scheme which charges based on some function of the number of nodes a packet passes through? Then there would be an incentive to keep frequently accessed data close to where it is needed.

Everybody relax, I'm here.
[ Parent ]

I meant in-computer too (3.85 / 7) (#18)
by fluffy grue on Sat Jan 04, 2003 at 09:00:13 PM EST

A CPU processing a packet doesn't itself cost anything. But the CPU power as a whole is a limited resource, which is part of the infrastructure costs.

Same goes with telephones and cellphones and long-distance and airtime connection charges and so on.

I mean, look at it this way: it's a resource which you can't save up for later. You can't "conserve bandwidth" so that you can have it later. If it's not used, it's still thrown out. It's not like water or gasoline or any other physical resource.

The cost of bandwidth is an artificial, prorated measure to charge based on utilization, so someone who uses a system twice as much gets charged twice as much. If that person were to immediately stop using the resource (and everyone else continued on normally), it wouldn't reduce the provider's total cost of operation (ignoring the upstream charges which are still the same issue, just offset a bit).
"Is said in tentacle rape" is said in tentacle rape.
"Is not a quine" is not a quine.

[ Hug Your Trikuare ]
[ Parent ]

On the spot (4.00 / 1) (#58)
by xL on Sun Jan 05, 2003 at 10:35:59 AM EST

This is the traditional reason why ISPs prefer to sell bandwidth and not traffic. If they offer you 100 GB traffic and you use it all up evenly, they need a capacity of around 350 Kb/s to carry your data without delay. If you use that same amount of traffic in a single day, you are bursting at 30 Mb/s, which is scary to an ISP owner.
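A back-of-envelope check of those figures (taking GB as 10^9 bytes and a 30-day month; the 30 Mb/s figure evidently assumes the day's traffic is further concentrated into a few peak hours, since an even spread over 24 hours comes out lower):

```python
GB = 10**9          # decimal gigabytes assumed
MONTH = 30 * 86400  # seconds in a 30-day month
DAY = 86400

bits = 100 * GB * 8
sustained = bits / MONTH  # 100 GB spread evenly over a month
burst_day = bits / DAY    # the same traffic crammed into one day

print("%.0f Kb/s" % (sustained / 1e3))  # ~309 Kb/s
print("%.1f Mb/s" % (burst_day / 1e6))  # ~9.3 Mb/s
```

Either way the point stands: the burst rate is 30x the sustained rate, and it's the burst the ISP has to provision for.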

This, contrary to what you say, does not make IP packets that much different from oil, sugar or electricity. To sell each, your distribution infrastructure has to be designed to cater for a certain demand. If demand temporarily exceeds your infrastructure, your capacity has to be expanded. If demand soon drops thereafter, you are left with infrastructure whose capacity exceeds demand by several orders of magnitude.

This is why, in reality, there is nothing special going on that mandates pricing models that are substantially different from regular commodities. ISPs, like magazine publishers and oil barons, must act wisely and keep their capacity at a level slightly higher than the measured demand, and only increase this capacity when the demand structurally grows.

Economy of scale helps here, too. If you host 5000 websites that pay for 10GB and 90% of them actually only use 1GB (these numbers are not really that fictional), the burden of the occasional customer that incidentally hits 100 GB does not mandate charging intra-nasally for the excess, it is not like the ISP is going to roll out a second OC-192 "just in case". Except if they're WorldCom :)

[ Parent ]

The end result is that it's a commodity (4.00 / 1) (#80)
by fluffy grue on Sun Jan 05, 2003 at 03:39:33 PM EST

But that's still an artificial layer on top of it. Selling "bandwidth" is a mechanism for prorating customers based on usage. It still doesn't make the bandwidth itself cost money; it's just a way of billing for infrastructure based on usage. It's the infrastructure which costs money, not the bandwidth itself.
"Is said in tentacle rape" is said in tentacle rape.
"Is not a quine" is not a quine.

[ Hug Your Trikuare ]
[ Parent ]

They're still not selling infrastructure (none / 0) (#85)
by xL on Sun Jan 05, 2003 at 05:10:44 PM EST

It's not that I want to be stubborn, I see your point. But agree with me that what is actually sold is still bytes. You could argue that a newspaper publisher is actually selling infrastructure for advertising and newspapers are just virtual ;).

Perhaps the best comparison is with utilities. There, too, you can see the capacity vs. usage weirdness.

[ Parent ]

xL has a point (4.00 / 1) (#90)
by ogre on Sun Jan 05, 2003 at 07:50:34 PM EST

At first I agreed with you, but in retrospect the same kind of argument could be made about any renewable resource. Farmers don't sell corn, they sell the infrastructure to plant, grow, harvest and deliver corn. The corn is free (it just appears naturally after you plant it); it's the fields, the labor, and the trucks that cost money. Charging by weight of corn is just a way of billing for the infrastructure (land, labor, equipment) based on usage. The more corn you use, the more infrastructure is required to support your use.

Non-renewable resources are the same except for the additional factor that when the resource runs out, the infrastructure becomes useless.

Everybody relax, I'm here.
[ Parent ]

I see your point (nt) (none / 0) (#91)
by fluffy grue on Sun Jan 05, 2003 at 08:04:37 PM EST

[ Parent ]
isn't that what they do at akamai? (4.00 / 1) (#37)
by martingale on Sun Jan 05, 2003 at 02:04:28 AM EST

'xcept it's not cheap. It would be great to have a free community service like that though, could also come in handy for freenet.

[ Parent ]
good subject (4.25 / 5) (#8)
by Matt Oneiros on Sat Jan 04, 2003 at 04:13:36 PM EST

how about using the google cache? I'd assume if something was on the front page long enough for anyone to see it then google probably saw it too.

Also, although I think it's a very polite thing to do I don't think it should necessarily be imposed on others to do. People should assume this may happen.

Although such politeness is a good idea, imposing it on others will limit the growth and diversity of the web.

Lobstery is not real
signed the cow
when stating that life is merely an illusion
and that what you love is all that's real

The problem is: (4.57 / 21) (#9)
by TheOnlyCoolTim on Sat Jan 04, 2003 at 06:48:21 PM EST

The Slashdot editors are a bunch of asshats about this. They know that www.geocities.com/robotlegoanimefurrycasemod is going to be down in about three seconds after it hits the Slashdot front page, but they've never even tried to do anything about it as far as I know.

From the Slashdot FAQ where they make excuses for not caching:

"I could try asking permission, but do you want to wait 6 hours for a cool breaking story while we wait for permission to link someone?"

However, the linked sites that suffer most from the Slashdot effect are generally small hobbyist pages or small companies and the stories linking to them are not time sensitive. The New York Times or CNN might have a "breaking story," but they can resist the Slashdot effect just fine. Basically the Slashdot editors don't want to bother.

"We are trapped in the belly of this horrible machine, and the machine is bleeding to death."

Malda & Co. (4.18 / 22) (#13)
by Trollificus on Sat Jan 04, 2003 at 07:43:03 PM EST

"I could try asking permission, but do you want to wait 6 hours for a cool breaking story while we wait for permission to link someone?"

Considering that most of the news on Slashdot is a few weeks out of date anyway, I don't think a few more hours will kill them.

Although, in defense of Slashdot, some Admins should get a clue after the third or fourth Slashdotting and start denying any referrers from the Slashdot domain. A lot of admins have caught on and have started doing just that. Props to them.

You have to remember that the Slashdot editors found a major cash cow when Slashdot went mainstream. They could sit around all day making bucketloads of dot-com play money for doing absolutely jack shit. And they exemplify this ideology to this day. Just look at their sub-par editing skills and third grade-level grammar for some examples.

If they actually gave a damn about anything but making easy money, do you think we would see any dupe stories on the frontpage? Would the news be two weeks behind the mainstream news sites? Would Taco start using a dictionary, or even start checking the links to the story he posts?

Who knows, if Slashdot editors actually felt like they had something to lose, they might clean up their act.
To the rest of the world, they come off as a bunch of lazy, illiterate pimply-faced teenagers who would rather whack off to Anime pr0n and play Warcraft 3 than actually manage their website with even the slightest semblance of professionalism.
But until that day comes, we're just going to have to put up with duplicate stories and Windows vs. Lunix essays written to incite rather than educate.

"The separation of church and state is a fiction. The nation is the kingdom of God, period."
--Bishop Harold Calvin Ray of West Palm Beach, FL
[ Parent ]

My wish... (3.50 / 2) (#28)
by /dev/trash on Sat Jan 04, 2003 at 10:50:43 PM EST

Amazon.com to go out of business.
VA to finally pull the plug on Slashdot.

Updated 02/20/2004
New Site
[ Parent ]
why? (none / 0) (#69)
by Xcyther on Sun Jan 05, 2003 at 12:32:00 PM EST

i love Amazon.com

"Insydious" -- It's not as bad as you think

[ Parent ]
why?! (none / 0) (#73)
by /dev/trash on Sun Jan 05, 2003 at 01:21:37 PM EST

  1.  They are a .com
  2.  Software patents suck.

Updated 02/20/2004
New Site
[ Parent ]
Re: why?! (none / 0) (#77)
by elemental on Sun Jan 05, 2003 at 02:22:07 PM EST

1. They are a .com


I love my country but I fear my government.
--> Contact info on my web site --

[ Parent ]
Blame VA, not slashdot (4.30 / 10) (#36)
by rde on Sun Jan 05, 2003 at 12:06:14 AM EST

It is still VA, isn't it?
Anyway, to all who like nothing better than slashdot bashing: lighten up.

You have to remember that the Slashdot editors found a major cash cow when Slashdot went mainstream
Bullshit. The eds found that they'd created a cash cow. They didn't sit up one day and say "it'd be nice if we could create a web site that does nothing but link to the hard work of others." Taco wrote slashwhateveritscalled, and provided links that others found useful and/or interesting. Enough of us found it so useful that we still visit it daily.

Just look at their sub-par editing skills and third grade-level grammar for some examples.
They're nerds, not grammarians. Granted, the two aren't necessarily exclusive, but given the amount of bad punctuation I've seen in comments on this site, it's obvious that they're far from unique in this regard. My feeling is that VA should've hired a proofreader when they bought the site, but I'm not going to blame the good Cmdr for his lack of grammatical expertise. I've never had a problem understanding what he was talking about.

If they actually gave a damn about anything but making easy money, do you think we would see any dupe stories on the frontpage?
Yep. If you're worried about dupes, do what I do: don't read the second instance. However annoying you find dupes, you can't possibly find them as annoying as I find the assholes who'll open the story and take the time to tell us all that it's already been posted.
Even better are the assholes who'll write about it being a dupe without checking whether one of their fellow assholes has already made his snide little comment.

But until that day comes, we're just going to have to put up with duplicate stories and Windows vs. Lunix essays written to incite rather than educate.
I'm not one to make radical suggestions, but in this case I'll make an exception.

Don't fucking read them.

Is slashdot perfect? Far from it. However, I do have sympathy for their argument that sites could lose advertising revenue if caches are invoked. What I'd like to see is links to sites that don't carry advertising accompanied by a link to the google cache. If the site carries adverts, then fuck 'em. If they're in it for the money, they should be a) delighted with the attention and b) prepared to spend a few units of local currency on bandwidth. Or, who knows, maybe even changing the max number of connections on their server to accommodate the bandwidth they've already got.

[ Parent ]
Heh (none / 0) (#82)
by Anonymous 7324 on Sun Jan 05, 2003 at 04:16:26 PM EST

But until that day comes, we're just going to have to put up with duplicate stories and Windows vs. Lunix essays written to incite rather than educate.
I'm not one to make radical suggestions, but in this case I'll make an exception.

Don't fucking read them.

I'll go you one better: don't read the site at all. I know I don't anymore, and I don't feel any loss.

[ Parent ]

Wrong. They really *dont* care (4.00 / 1) (#118)
by brunes69 on Mon Jan 06, 2003 at 12:13:28 PM EST

How would you like it if your copy of the New York Times had the same headline story posted twice in the same week, with the same text and no new insight at all? (oh wait, this happened about 100 times after 9/11... but you get the idea). When your site has become a major media outlet with hundreds of thousands of viewers a day, you should put a *BIT* of effort into the management of it. For crying out loud, we aren't talking about dupes from months or even weeks back here, I have seen the *same story* posted on the *same front page* twice on the same day more times than I can count. It is almost as if the editors never even look at the site, all they do is type into their little web form. The worst part is I could fix this problem by adding about 10 lines of perl code to the submission page checking the last 30 days of submissions or so for duplicate links... but do they do it? No, cause they don't give a flying f*ck anymore.
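The "ten lines of perl" claim is about right. A sketch of the same idea in Python (the function names and the normalization rules are mine; a real version would pull `recent` from the last 30 days of the submission database):

```python
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    """Crude canonical form so http://Foo.com/bar/ matches http://foo.com/bar."""
    parts = urlsplit(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def is_dupe(new_url: str, recent_urls) -> bool:
    """Flag a submission whose link already appeared in the recent archive."""
    seen = {normalize(u) for u in recent_urls}
    return normalize(new_url) in seen

recent = ["http://example.com/legos/sears.html"]
print(is_dupe("http://EXAMPLE.com/legos/sears.html/", recent))  # True
print(is_dupe("http://example.com/other.html", recent))         # False
```

It wouldn't catch a dupe submitted under a different URL for the same story, but it would catch the same-link-twice-in-one-day case the comment is complaining about.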

---There is no Spoon---
[ Parent ]
My thoughts exactly (4.00 / 1) (#43)
by kholmes on Sun Jan 05, 2003 at 06:21:39 AM EST

I just want to say that I agree with everything you said. Has anyone figured out what their posting/rejection policy is? I am starting to seriously think that they incite people on purpose.

If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.
[ Parent ]
Duplicate stories make /. problem worse (3.66 / 4) (#17)
by Arcturax on Sat Jan 04, 2003 at 08:51:03 PM EST

Given that /. editors seem to chronically duplicate article postings, that can make it even worse on sites, since even though it's a dupe, sometimes people go for a second look anyway.

I listen to the best music on Earth! http://www.digitalgunfire.com
[ Parent ]
Apache module for the Google cache? (3.57 / 7) (#12)
by dagg on Sat Jan 04, 2003 at 07:40:04 PM EST

It would be interesting to see an apache module that could automatically forward requests to the Google cache when the site starts to melt due to a slashdotting.

This obviously wouldn't always work, but I bet it would help out in a high percentage of cases. And if you could just add a simple line to your Apache config file to implement it, that'd sure save a lot of people a lot of headaches (excepting maybe Google :-)).
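Building the cache URL itself is the easy half of such a module; detecting the meltdown would be the hard part. A sketch of the URL construction, assuming Google's `cache:` query form (the example page is made up):

```python
# Sketch: construct a Google cache URL for a given page, the sort of
# target such an Apache module might redirect to under load.
# Assumes Google's "cache:" search-query syntax.
from urllib.parse import quote


def google_cache_url(url):
    # Percent-encode everything so the original URL survives as one
    # query token after the "cache:" operator.
    return "http://www.google.com/search?q=cache:" + quote(url, safe='')


print(google_cache_url("http://example.com/legos.html"))
```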

Find Yer Sex Gateway
In advance (4.25 / 5) (#21)
by ensignyu on Sat Jan 04, 2003 at 09:42:45 PM EST

Only if you think you're going to be Slashdotted in the future. Not everybody thinks their site is Slashdotworthy.

Also, you could automatically redirect any request with a slashdot.org referrer. Which of course would be Slashdot specific (set up one for fark also, etc), but easier to detect than a meltdown.

Of course, being listed on Slashdot doesn't necessarily lead to a Slashdotting (especially if it's just a comment or in some obscure section), but maybe it could be an indication to watch out.
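The referrer check described above would normally live in the web server config (mod_rewrite can test the Referer header), but the decision logic itself is tiny. A rough Python sketch; the site list and the static target page are invented for illustration:

```python
# Sketch of referrer-based deflection: requests arriving via known
# high-traffic sites get sent to a lightweight static page instead.
# The site list and target URL are illustrative, not a standard.
HEAVY_REFERRERS = ("slashdot.org", "fark.com", "kuro5hin.org")


def deflect(referer_header):
    """Return a redirect target if the request came from a heavy site, else None."""
    if not referer_header:
        return None
    # Crude host extraction from e.g. "http://slashdot.org/article.pl"
    host = referer_header.split('/')[2] if '//' in referer_header else referer_header
    if any(host == s or host.endswith('.' + s) for s in HEAVY_REFERRERS):
        return "/static/lite.html"
    return None
```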

[ Parent ]

Something Awful vs Slashdot (4.20 / 15) (#14)
by egg troll on Sat Jan 04, 2003 at 08:24:28 PM EST

I remember way back in the day when Slashdot posted a front page story about Something Awful, after Lowtax complained that his bandwidth was maxed out. Or something like that. It was nice for Taco to then dump a ton of additional bandwidth on him.

Lowtax responded by redirecting anyone who came to his page from Slashdot to everyone's favorite website!

He's a bondage fan, a gastronome, a sensualist
Unparalleled for sinister lasciviousness.

Why don't people still do this? (4.40 / 5) (#24)
by MessiahWWKD on Sat Jan 04, 2003 at 10:21:44 PM EST

I'm serious. If they want the jerks at Slashdot to show them any respect, they have to ensure that any referral from Slashdot sends the user to everyone's favorite website! In fact, that should be a requirement for all websites. It would certainly tell the asses at Slashdot that their abusiveness won't be tolerated.
[ Parent ]
Time (5.00 / 5) (#30)
by egg troll on Sat Jan 04, 2003 at 11:09:03 PM EST

Well, some people might enjoy being on the front page of Slashdot. Others might not know how to redirect. But mostly I think it's because people don't know they're on Slashdot until well after the fact. By then it's too late to do anything.

He's a bondage fan, a gastronome, a sensualist
Unparalleled for sinister lasciviousness.

[ Parent ]

Penny Arcade (5.00 / 3) (#46)
by DarkZero on Sun Jan 05, 2003 at 06:56:11 AM EST

I'm surprised that more people don't do this, even if just for the humor value. I remember laughing my ass off when Gabe over at Penny Arcade got mad at the EFF and redirected all links from the EFF to You'reTheManNowDog.com (it's currently down, but it used to be a ridiculous picture of Sean Connery and was based on him saying "You're the man now, dog" in Finding Forrester).

[ Parent ]
Resource cap (3.71 / 7) (#15)
by El Volio on Sat Jan 04, 2003 at 08:35:22 PM EST

Frequently, the problem isn't bandwidth, it's the CPU resources on the server. This happens when everything on the site is dynamic — even stuff that could be static. Problem is, by the time the admin realizes it would be a good idea to at least temporarily create a static page, it's too late. The slashdotting has commenced.

The real problem (3.57 / 14) (#19)
by Krueger Industrial Smoothing on Sat Jan 04, 2003 at 09:23:45 PM EST

The real problem is that the Internet is controlled by telephone monopolies who keep bandwidth costs artificially high.

The operator of a website has no control over who visits his site - the idea of shutting him down, or charging him huge sums of money because he has exceeded his "bandwidth cap" is just plain ridiculous.

Imagine running a business and being informed by the owner of the building that you have to shut down for the rest of the month because too many people have come into your store.

It should be like phones (3.33 / 3) (#40)
by Nickus on Sun Jan 05, 2003 at 04:58:52 AM EST

I know this is very difficult to implement but ISP charging should be like phones. The person who causes the traffic should pay. If someone calls me on my phone line then they pay. If someone visits my website they should pay for the bandwidth.

Perhaps you could implement this by making a state table of the whole Internet :-)

Due to budget cuts, light at end of tunnel will be out. --Unknown
[ Parent ]
Cry me a fucking river. (1.30 / 39) (#22)
by Phillip Asheo on Sat Jan 04, 2003 at 10:19:45 PM EST

Who gives a fuck whether some page on the Internet is unavailable. We are about to bomb Iraq into the stone age, for no real valid reason, and you want to talk about 'the ethics of linkage' ? For goodness sake GET SOME FUCKING PRIORITIES!

"Never say what you can grunt. Never grunt what you can wink. Never wink what you can nod, never nod what you can shrug, and don't shrug when it ain't necessary"
-Earl Long

There Are Solutions (4.66 / 6) (#25)
by FlightSimGuy on Sat Jan 04, 2003 at 10:22:08 PM EST

The suggestions I've seen in this discussion are excellent ones, and are also quite simple to implement. Although there is no Google-cache-redirection module to my knowledge, there is a very powerful bandwidth throttling module, mod_throttle. With this, it shouldn't be hard to create your own throttling policies based on your available bandwidth and such. If this one doesn't do what you want, search the Apache modules site for "bandwidth", and I'm sure one of the modules in the results will.

Additionally, there are only a handful of sites which could cause you problems by linking to you and are arbitrary enough in their linking decisions that they might actually do it (say, you don't have to sweat CNN linking to you). In this case, you can just come up with a mod_rewrite rule to forward requests from those sites to a "YOU'RE NOT WANTED HERE" page or something. When the slashdot editor clicks the link in the submission, he'll get the message. If not, do as somebody else here suggested and forward them to goatse. The story will be pulled off Slashdot's homepage in seconds -- and it'll be really funny to watch it happen.

Indeed, if your bandwidth is so limited that you're sweating this, there is no excuse not to have implemented one of the above solutions before your site ever goes up. Or better yet, just get a nice dedicated server from RackShack ($100 per month) and host your site there. These babies come with 400GB of monthly transfer, so now that I've got one, the slashdot editors can link to me 'til they're blue in the face, and I'll still be up and running. :)

And a cheap host is going to let you install mods? (none / 0) (#138)
by mozmozmoz on Sun Jan 12, 2003 at 07:02:45 AM EST

Really. I'm just going to email support@5dollarhosting.com and say "I want this module installed, and I want to be able to configure it *my* way." And they're going to do it. Right.

There's lots of comedy on TV too. Does that make children funnier?
[ Parent ]

Actually.. (none / 0) (#139)
by FlightSimGuy on Fri Jan 17, 2003 at 11:35:54 PM EST

I was mostly referring to dedicated servers where you are the admin. In the shared hosting situation, the admin is responsible for making any such changes in a way the customers find satisfactory. With that in mind, you just go to them and explain that you would rather have your HTTP bandwidth throttled than be charged extra if you get slashdotted, and leave the technical measures of how exactly to accomplish that up to them. If they don't agree to do it, leave. There are plenty of good (and cheap!) hosts out there that are dying because customers insist on sticking with pathetic ones who won't do what they want. Read the WebHostingTalk bulletin boards to find them.

Plus, how could you possibly sleep well at night knowing that you might get a $10000 bandwidth bill tomorrow, a bill you pretty much already agreed to pay when you signed up with a shared host that doesn't throttle?

[ Parent ]

How times have changed (4.25 / 4) (#26)
by X-Nc on Sat Jan 04, 2003 at 10:29:40 PM EST

I remember back in the prehistoric days of the web it was common practice to politely ask permission, or at least give warning, before linking to anyone. OC, there was nothing like the /. effect then either.

Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.
Oh yes there was... (5.00 / 3) (#45)
by JKew on Sun Jan 05, 2003 at 06:35:48 AM EST

Mosaic's "What's New" page was the biggie to get onto at the time.

[ Parent ]
Not quite the same (3.00 / 1) (#123)
by X-Nc on Mon Jan 06, 2003 at 03:39:19 PM EST

Yes, I was one of them, too. But the effect of everyone viewing "What's New" going to your web site was minimal. Remember, I said that there wasn't anything like the /. effect, not /. itself.

Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.
[ Parent ]
Things are different now. (4.00 / 1) (#61)
by pberry on Sun Jan 05, 2003 at 11:27:50 AM EST

Back then, the web felt "small" and there was a sense of community. It was easy to ask. But seriously, I would wager that more people asked for links than asked permission to make them. It was one of the few ways to get traffic back then. How many of you asked Yang to link to yahoo.stanford.edu?

Now you know that email you just fired off to webmaster@site.com probably won't be answered for days, if at all.

The one area it seems that people still communicate about linking is the weblog area. There are a couple of interesting systems that have been devised to discover who is linking to you like TrackBack, PingBack, and referer log parsing. Of course, it's done after the fact, but there is still some communication going there, even if it is automated.

[ Parent ]
webmaster@site.com? (none / 0) (#102)
by nstenz on Mon Jan 06, 2003 at 12:25:27 AM EST

Every time I find a web page I'd really like to visit that doesn't work properly in Mozilla, I attempt to contact the site's owners. If I can't find a 'contact us' page, I do the obvious and write the 'webmaster' address.

Every single time I've written to webmaster@somewebsite.com, the mail has bounced. Every single time.

So much for being helpful.

[ Parent ]

spammers are somewhat to blame (none / 0) (#103)
by coryking on Mon Jan 06, 2003 at 12:35:29 AM EST

I mean, if you are a spammer - why not try webmaster@blah.com? And while you are at it - why not sales, info, postmaster, etc. Most are highly likely to exist.

In the end - many website owners have probably given up on the old webmaster@xyz.com. It's just not worth the hassle of the large quantity of spam such an account might receive.

[ Parent ]

There's a reason for that. (none / 0) (#104)
by Andy P on Mon Jan 06, 2003 at 01:02:00 AM EST

It's because along with sales@, info@ and help@, webmaster@ is one of the best spam catchers.

Barrels are just crates with delusions of grandeur
I masturbate to AOL commercials

[ Parent ]

Bah. (4.66 / 9) (#29)
by paine in the ass on Sat Jan 04, 2003 at 11:02:12 PM EST

This shouldn't be too hard to deal with for a competent webmaster; simply password-protect any pages you put up on the following topics:
  • Things made out of Legos.
  • Case mods.
  • Really old computers (as in "Hey, I got Linux/Apache running on the Analytical Engine!").
  • Linux advocacy.
  • MS-bashing.
  • Rumors about the next LOTR/Star Wars movie.
Then if Slashdot finds out about it, they can't link to it, and if someone gives out/cracks the password, you sue under the DMCA to recoup your bandwidth costs, adding delicious irony to the process.

I will dress in bright and cheery colors, and so throw my enemies into confusion.
analytical engine? (none / 0) (#114)
by ethereal on Mon Jan 06, 2003 at 10:15:53 AM EST

I'm imagining some webmaster cranking really, really fast as the slashdotting occurs :) Or was it run by steam?


Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

On being Slashdotted (and memepooled) (4.96 / 33) (#31)
by johnny on Sat Jan 04, 2003 at 11:22:14 PM EST

I've been Slashdotted twice and I hope that it happens again. I'm a writer of self-published novels aimed at the kind of people who read Slashdot, and a good portion of whatever success I've had selling these books can be attributed directly or indirectly to that one site.

I had heard of the site and surfed it a few times, but I only began to understand its influence at Boston's Geek Pride Festival in April, 2000, when Rob Malda (CmdrTaco) spoke and a giant crowd of people hung on his every word as if he were Jesus come back or something. I had actually met him the night before and given him a copy of my book; I had not known at that time that he was a quasi-deity.

Anyway he gave the book to Hemos, who liked it and gave it a nice review in late May 2000. At that time my site was hosted on a local mom-and-pop ISP, long since gone out of business, on the island of Martha's Vineyard, where I live. I don't know how much traffic that review generated because my ISP didn't have any tracking software available at that time and the little freeware hit-counter that I had installed just panicked. According to it, over the three days of the most intense Slashdot effect I got -200 visitors to my site. (That's negative 200). But the ISP performed admirably, and the site stayed up. I really have no idea how many hits I got, but within 20 hours of my Slashdotting the rank of my book went from 79,000 to 64 on Amazon's list. (It wasn't until a few days later that I realized that it had only taken about 300 sales to thus catapult me. I had had visions of selling 20,000 copies!). Also I got about 60 emails within a few days of the Slashdot review.

After that Hemos and I began a sometime correspondence by email, and actually met in person a few times (and had one famous miss). (It was Mr. Hemos who suggested to me that I check out a certain new site called Kuro5hin.) Over lunch I told him that my second novel Cheap Complex Devices was almost ready, and asked him if I could count on a Slashdot review. He said yes, but of course he could not guarantee that it would be a positive review. At that time I was planning to publish both novels, "Acts" and "Devices," in one upside-down volume. Some while later I finished writing the book and sent him a copy in PDF. I told him that I would really appreciate a review in time for LinuxWorld San Francisco, which was about two months in the future at that point. Hemos said that he would probably be able to do it, but that he could not make any promises. So I signed a contract to rent a booth, at great expense, to sell my book at LinuxWorld. And then I drove across the United States of America with a truckload of books.

I went out to LinuxWorld, foolishly counting on a Slashdot review to drive thousands of people to my little booth. No review appeared. Sales were good but I had counted on "great." The venture was turning into a disaster. From the floor of LinuxWorld I sent Jeff (Hemos) an urgent note asking if he could post that review. He responded 12 hours later saying, as I recall, that he had other things on his mind than my stupid book. His company was in turmoil, he had just had to tell several friends that they no longer had jobs, he was in Japan while his wife was home with two very young children, and he did not much appreciate my breathing down his neck. His note did not tell me outright that I was being an asshole for pestering him, but in fact I was being an asshole. Such is the power of Slashdot that I had driven 3,000 miles in anticipation of one review and had placed the value of being Slashdotted over that of a friendship. This was not a pleasant realization. The Slashdot effect had become my "one ring." I felt silly and a little ashamed of myself.

Having lost about $1500 on the LinuxWorld venture, I flew back to Massachusetts. I sent several apologies to Hemos for having been such an asshole, for having pestered him and so presumed on our friendship. He did not answer, and I figured that I had really pissed him off beyond repair. And then out of the blue he sent me a note saying that he liked the book and that a review would be posted soon. Almost immediately thereafter I was Slashdotted a second time. Hemos's review was positive but confusing, and a good part of the confusion came from his saying that both books were available in one volume. That had been my plan, but I had changed it.

By the time that this review appeared my site was hosted on a new ISP, which promptly shit the bed. Within 1 hour of the review's going up my site was down due to lack of bandwidth. I was in the middle of contacting my ISP's customer support to buy more emergency bandwidth when I got a call from Dear Wife Betty. Her car had broken down, again, this time in the middle of the intersection of Franklin and Greenwood, about a 1/2 mile walk from our house. She was not in a good mood. "I'm at work," she said. "You go deal with that damn car. I left it sitting in the middle of the street." (The car had conveniently broken down about 1/4 mile from the library where she works.)

So I ran down the street, pushed the car out of the intersection, spent half an hour futzing with the ignition, and got the car started and drove it home. Just when I arrived I got a phone call from youngest daughter, at the soccer field. She was on the mend from a bout of Lyme disease, and Coach was concerned that she was not well. Coach wanted me to come get daughter right away and to call her doctor to make sure she was well enough to play soccer. So I went and got her, and called doctor and took care of all that. About that time my wife asked me if I could bring her car to the library, as she had a meeting of the Vineyard Committee on Hunger for which she was late. So I did that. Then I went home and got in touch with the ISP and arranged for them to increase my bandwidth. By that time the review had scrolled off the page. Many of the Slashdot comments were to the effect of "another wretched loser gets slashdotted and his site goes down. How lame!"

I don't know how many hits that second review generated because I had not yet installed the traffic-watching software provided by my new ISP. I installed it the next day. In any event the effect on sales was less pronounced than that first review two years earlier.

Despite the disappointing results of the second Slashdotting, it's hard to overestimate the value of those Slashdot reviews. It's not only that they led to sales; it's also that they gave me credibility that allowed me to get other reviews, such as one on Salon.com. It was only by virtue of having been Slashdotted that I was able to garner the attention of Salon--and that in turn has opened up lots of other doors. The guys over at that site can link to me any time they want.

One of my Kuro5hin diaries was allegedly memepooled. I found this out when a friend of mine told me about a neat story that he had read from a memepool link and I recognized it as my own. Here again I have no idea how much traffic that link may have generated, but in any event it would have brought the traffic to K5, not to my site. But now google gives ambiguous results, so maybe my friend dreamed it all up.

By the way, can somebody explain mirroring to me? If I knew I were about to be slashdotted and wanted to go about getting my site mirrored, how would I do it?

yr frn,
Get your free download of prizewinning novels Acts of the Apostles and Cheap Complex Devices.

How much bandwidth did you have... (none / 0) (#32)
by Stick on Sat Jan 04, 2003 at 11:43:56 PM EST

With the second ISP? Was it a 40gb per month or 400gb per month job?

Stick, thine posts bring light to mine eyes, tingles to my loins. Yea, each moment I sit, my monitor before me, waiting, yearning, needing your prose to make the moment complete. - Joh3n
[ Parent ]
Ten GB, and (none / 0) (#34)
by johnny on Sat Jan 04, 2003 at 11:56:03 PM EST

My current ISP is Superb.net; I've been with them about a year and have been happy. (I was with another outfit for about six months; what a disaster. The site was down more than it was up.) I had had the 'U fully virtual' and upgraded one level after the slashdot incident. The nice thing about the current arrangement is that I can allocate the bandwidth as I want; the other package just offered linear. Also I can now purchase extra bandwidth in increments if I need to. That hasn't happened, but we can always hope for some kind of monster publicity.

yr frn,
Get your free download of prizewinning novels Acts of the Apostles and Cheap Complex Devices.
[ Parent ]
Diary was memepooled. (3.00 / 1) (#53)
by ffrinch on Sun Jan 05, 2003 at 08:37:54 AM EST

Mmm, coincidence: I only visited that site a few times and I remember following the link.

If you search memepool by "Axhole Rose" then it's right there at the top.

"I learned the hard way that rock music ... is a powerful demonic force controlled by Satan." — Jack Chick
[ Parent ]

thanks that explains it (none / 0) (#54)
by johnny on Sun Jan 05, 2003 at 08:57:10 AM EST

I was googling for "memepool axl rose."

yr frn,
Get your free download of prizewinning novels Acts of the Apostles and Cheap Complex Devices.
[ Parent ]
Mirroring (3.00 / 1) (#106)
by nstenz on Mon Jan 06, 2003 at 01:14:20 AM EST

Just means replicating your site's content at another location to take some strain off your server. You make a copy of your web page and put it somewhere else, then direct people to go there if your primary site is not working or is too slow. When a web site asks you to pick a location to download a program from, it's giving you a list of mirrors.
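The simplest kind needs no server-side magic at all: the download page just hands out mirror URLs in turn from a hand-maintained list. A toy sketch of that round-robin handout in Python; the mirror addresses are invented for illustration:

```python
# Toy round-robin mirror picker, as a download page might use.
# The mirror list is made up for illustration.
from itertools import cycle

MIRRORS = [
    "http://mirror1.example.org/site/",
    "http://mirror2.example.net/site/",
]

_rotation = cycle(MIRRORS)  # endless iterator over the mirror list


def next_mirror():
    """Hand out mirrors in round-robin order, spreading the load evenly."""
    return next(_rotation)
```

Fancier setups do the same thing one layer down, e.g. with multiple DNS A records, so the visitor never has to choose.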

[ Parent ]
that's what I thought it meant, but (none / 0) (#113)
by johnny on Mon Jan 06, 2003 at 10:12:38 AM EST

I wondered if there was any server-side magic involved to do the load-balancing? In any event I think I've got enough bandwidth in reserve on the chance that I get /.'d again, and if not, that will only be a good thing! Thanks,

yr frn,
Get your free download of prizewinning novels Acts of the Apostles and Cheap Complex Devices.
[ Parent ]
yeah but there is this slashdot disease (3.71 / 7) (#33)
by turmeric on Sat Jan 04, 2003 at 11:47:54 PM EST

'im not responsible for anything. everything is someone elses fault'. thus, slashdot feels no reason whatsoever to care about the people it crashes. any more than it cares about its users.

fame sucks doesn't it? (2.36 / 19) (#35)
by circletimessquare on Sun Jan 05, 2003 at 12:01:10 AM EST

fame sucks doesn't it? i mean, the internet is supposed to be about the dissemination of information. you put the info out there because you want to share it with others. if that bit of info happens to be wildly popular beyond your dreams and your server's bandwidth, well that just sucks for you doesn't it?


the problem with this story is that it goes against the nature and purpose of the internet. it is similar to the deep-linking controversy. it is just crazy for a site to not expect people to deep link to their site, or ask permission to, or expect remuneration. likewise with this story. both scenarios go against the spirit and purpose of the internet.

the rule should be if you make it available on the internet, you get what you deserve. and what do you deserve? by placing it out there on the internet, you are giving up your right to decide what you deserve. public information is public information is public information. end of story.

and if you get fame... i don't really understand what your problem is.

it's like someone putting information out there for the whole world to see, and then complaining when the whole world wants to see it. i mean come on, you can't have it both ways.

this story is hypocrisy and goes against the spirit of the internet and the freedom of information it represents. if you don't want to whine and bitch and moan about your server getting capped DON'T POST ANY WEBPAGES.

if you want whatever your site is about to be only for your little circle of friends, put up a password, put up a robots.txt, etc. the internet is for EVERYBODY.

this post whiffs of snobbery. whine, whine, bitch and moan. welcome to the world wide frickin web.

The tigers of wrath are wiser than the horses of instruction.

Get more bandwidth. (1.66 / 3) (#38)
by br14n on Sun Jan 05, 2003 at 03:16:37 AM EST

I know that isn't a useful suggestion, but I still think it's the best solution. We can't be reaching for the stars and doing all the neat stuff we should be doing if we're worrying about bandwidth caps.

Short term (4.00 / 1) (#44)
by ensignyu on Sun Jan 05, 2003 at 06:23:13 AM EST

Slashdottings and the like are quick strikes that go away after a couple of days. It's pointless to upgrade from an 8 Mbit/s connection to a 12 Mbit/s connection just because your site happened to draw interest for its 15 minutes or 15 hours of fame. I suspect a lot of the sites that get Slashdotted don't normally see hundreds of hits a second.

You might buy an extra block of 10 gigabytes of transfer for the month so people don't miss out on your site. By the time you contact your ISP and it gets through the ISP's task list, the Slashdotting may be over anyway.

Unless, of course, you happen to offer something that's more than a passing glance, and tens of thousands of users suddenly sign up for your site. Then you'd definitely want to upgrade your bandwidth for the long term, and *quickly*, before people forget about the site.

[ Parent ]

In all fairness (3.40 / 5) (#39)
by Martigan80 on Sun Jan 05, 2003 at 04:04:57 AM EST

I agree that the slashdot effect can be crippling, but not many people out there know about slashdot or other websites with such a powerful influence. Anyhow, I would put the burden of this on the owner of the website. Just like normal laws in America, ignorance is not a good enough excuse. If you're going to have your web site on a server that will charge for extra bandwidth, you should check up on that. Frankly, as some other readers have said, the Internet is about the dispersion of information, not a selective reading.

actually... (3.80 / 5) (#41)
by boxed on Sun Jan 05, 2003 at 05:21:14 AM EST

...what offends me the most about this is that /. doesn't give a shit about its readers. Think about it. They KNOW 50% or so of the pages they link to will DIE within seconds. This means that a huge majority of the readers will never get to see what they are linking to. Does /. care? No. Then why in God's name did they link to it in the first place? Seems to me like they just look at the pages themselves, then link them up and don't give a flying fuck whether the faithful readers of their site will ever see it.

OK, let's be serious here... (none / 0) (#83)
by mdpye on Sun Jan 05, 2003 at 04:45:44 PM EST

> They KNOW 50% or so of the pages they link to will
> DIE within seconds. This means that a huge
> majority of the readers will never get to see what
> they are linking to.

OK, I've read slashdot daily for something like the last 6 months, and perhaps once or twice a week I would come across a site which has folded under the pressure.

Yes, the problem has become common enough to acquire a name and be widely joked about, but it's not THAT prevalent.

Your "50% or so" renders the rest of your post a simple hate campaign against slashdot. Now slashdot has many faults, but despite them all, it's provided me with enough enjoyable reads not to actually wish them dead or anything!


[ Parent ]

That's why the web is going Commercial (4.40 / 5) (#42)
by opendna on Sun Jan 05, 2003 at 05:33:11 AM EST

NEWS FLASH: The costs of the world wide web are borne disproportionately by the hosters of content.

It's the exact opposite of email, where the costs of spam are borne disproportionately by the recipients.

The logical effect of the "slashdot effect" is that sites which live and die on traffic - those that sell advertising or products - will proliferate while those that are just there for the entertainment of the owner and visitors will become relatively rare.

If you have no sympathy for the poor sops who get slashdotted off the web or get hit with high bills for traffic, you really should shut up about how annoying banner ads and pop-ups are. Likewise, you should recognize that you are encouraging the net to evolve in a direction which allows censorship of pirated materials and unpopular politics. Bandwidth discrimination could effectively turn a many-to-many network into a few-to-many broadcast medium.

I'm not a tech, so I've got no solution. It seems that somehow the host should get paid for the traffic it attracts - perhaps that means overpaying for bandwidth on the downstream to subsidize traffic on the upstream. I don't know. The Canadians' solution to music sharing comes to mind: It is legal to copy a tape or CD and share it with friends because the royalty has been added to the price of the blank media.

I donate to sites I visit often, but I'm not about to hit Paypal just to send somebody the $0.25 worth of pleasure their lego opus gave me.

This is why the web should go wireless (5.00 / 1) (#55)
by pyramid termite on Sun Jan 05, 2003 at 09:28:52 AM EST

... on a new internet that is based on a wireless network. No one owns the bandwidth that would be used, and although I'm sure that traffic jams would still be possible, at least people wouldn't be charged money for receiving them. Unless someone comes up with an innovative means to figure out distribution on the current net, we are in danger of heading towards the web you've described.

On the Internet, anyone can accuse you of being a dog.
[ Parent ]
innovative distribution (5.00 / 2) (#56)
by martingale on Sun Jan 05, 2003 at 09:58:48 AM EST

That will, eventually, be the answer. What we need to figure out for that, in each case, is what exactly is valuable to authors and what isn't. For example, if spreading information is important, then mirroring is the solution. Like usenet news feeds, software repositories, etc. Those are examples of structures where the author of the content is not penalized. Of course, with software and email messages, the author can be properly credited, and free software doesn't need a lot of registration infrastructure.

Keeping the above in mind, I think in the future we'll see formalized structures where your typical lego swilling, beer building (or the other way around, for conventional geeks :-) author can create his content and it gets automatically propagated (ie hosted) on lots of machines if and only if it is popular. Bandwidth bottleneck solved. The formal structure is needed to decide about intellectual property issues (which ones are important for each type of content, etc) and solve them in standard ways. Like the GPL and related licenses do for Free software.

Overall, I don't think it's so much a technical problem as a problem of figuring out what rights and obligations (ie licenses) are needed for a variety of different types of content production. At one extreme, commercial distributors want complete control over their servers, and hence should pay for everything, while at the other extreme Free software and Free speech doesn't care about copies and modifications (an oversimplification, of course) and therefore can take advantage of full mirroring capabilities and associated alteration risks.

[ Parent ]

Impractical (5.00 / 1) (#57)
by xL on Sun Jan 05, 2003 at 10:14:19 AM EST

Wireless networks are not going to replace solid infrastructure any time soon. At the current capacity and reach of wireless networks, congestion is inevitable. Pipes running between ISPs are already mostly in the Gigabit range, several orders of magnitude higher than what affordable wireless connections can offer.

The internet can deal with multiple path situations, but not very gracefully. Current routing architectures can deal easily with links that disappear, but not with links that get congested upstream. The price we pay for that little bit of stability is already quite high: networks that want the benefit of multiple paths must invest heavily in router hardware with enough capacity to keep tabs on the best available routes to every possible network that exists.

If you apply these solutions to a wireless network, where routing information has to be exchanged at the level of every access point, you soon reach an event horizon of complexity: the number of available routes and of routing updates per unit of time grows to the point where you need a Cray-class computing device to calculate the best path for any given packet while still getting measurable throughput.

To get rid of commercially provided bandwidth completely, you will also have to find a creative way to get your WaveLAN card to burst across the Atlantic Ocean and other natural barriers, all provided your government will let you operate transmitters of such strength without an expensive license.

[ Parent ]

Canada Copyright and Fair Use (none / 0) (#120)
by PunchMonkey on Mon Jan 06, 2003 at 01:37:31 PM EST

The Canadians' solution to music sharing comes to mind: It is legal to copy a tape or CD and share it with friends because the royalty has been added to the price of the blank media.

No! It certainly is not legal to do this. See this section of the Copyright Act, which states what constitutes fair use.

[ Parent ]

Heh. Fooled me! (none / 0) (#125)
by opendna on Mon Jan 06, 2003 at 05:54:08 PM EST

I have a booklet on I.P. laws from the Government of British Columbia which says I got it right: copies still can't be sold, but sharing a tape with a friend is specifically listed as fair use.

This is one of those publications handed out to folks who want to start their own businesses and such; it's a companion to "How To Incorporate in BC".

Oh well.

[ Parent ]

(subject here) (4.66 / 3) (#48)
by kurodink on Sun Jan 05, 2003 at 07:28:40 AM EST

I checked fark for the first time ever(!), and found a link to this site.
The galleries are gone, because someone FARKED it. Over 56674 megs of bandwidth has been consumed in less than 2 days. If any of you could PLEASE help me out, DONATE a few dollars, ten dollars, twenty dollars.. Anything!! It's all going to the server bill! PLEASE HELP ME OUT! I've got to pay over $400 in bandwidth charges.. every little bit helps. Thanks.

Irony. (heh) n/t (none / 0) (#109)
by opendna on Mon Jan 06, 2003 at 04:04:53 AM EST

[ Parent ]
Why /. doesn't mirror (4.50 / 2) (#49)
by anno1602 on Sun Jan 05, 2003 at 07:31:32 AM EST

The reason why /. doesn't mirror is, according to them, a legal one. They have the capacity for doing so, but mirroring a site means copying its content, and that requires permission from the owner of said content. A lot of sites don't want to be mirrored, especially the big news outlets, as they generate revenue by page views. The argument continues that asking each and every site linked for permission to mirror prior to posting the story would take too much time, so no mirror it is.

The question remains whether this is a fig leaf because Slashdot doesn't want to pay for the bandwidth. After all, Google mirrors all the sites they crawl and thus is a frequent recourse for /. readers trying to view a dead site. However, I remember reading an article somewhere that the Google Cache is actually legally problematic. In conclusion, before you whine about /. not caring about their readers' needs, keep in mind that mirroring is not as easy as copying the page(s).

"Where you stand on an issue depends on where you sit." - Murphy
the big news sites don't get /.ed (4.50 / 2) (#71)
by modmans2ndcoming on Sun Jan 05, 2003 at 12:39:36 PM EST

it is the small dude... and I think that Malda should, when it is an off-the-beaten-path site, ask them if it is OK to be cached, to prevent them being billed up the butt for bandwidth they cannot afford.

[ Parent ]
Mirroring versus caching (5.00 / 1) (#98)
by jpeisen on Sun Jan 05, 2003 at 10:40:06 PM EST

What Slashdot should do is cache, not mirror, the linked content.  Caching is a way of life for web content providers.  Smart ones use it to their advantage while still getting the "hit count" they so desire.  By simply caching the content and obeying the various cache-related headers Slashdot should be able to avoid the legal issues...

Most of the other technical suggestions I've read here are, well, lame.  Not everyone on the net needs the ability to serve thousands of hits per day, much less per second.  Bandwidth costs are not high due to a Telco conspiracy.  Special modules might save the web site, but then the Slashdot readers can't get to the content -- I'm not sure that's a win...


[ Parent ]

Yeah, right. (4.00 / 1) (#105)
by NFW on Mon Jan 06, 2003 at 01:09:07 AM EST

Sure, there are legal issues if you mirror without asking, but, really... how many DoS victims were offered mirroring services before Slashdot unleashed the hordes upon them?

My guess is zero. I'm sure that if the /. editors asked this question routinely, at least one site would have been properly mirrored by now. Far as I know, voluntary mirroring has never happened. If that is indeed the case, it strongly suggests that the people who run /. never bother to ask, because they just don't care.

Got birds?

[ Parent ]

DMCA rider lets US residents cache content (5.00 / 1) (#115)
by pin0cchio on Mon Jan 06, 2003 at 11:09:32 AM EST

They have the capacity for doing so

I'd question whether Slashdot could support the bandwidth for such a cache for non-subscribers.

but mirroring a site means copying its content

True, a public cache must make a copy of the web page in question, but...

and that requires permission by the owner of said content

Not in the USA, according to 17 USC 512(b), a rider to the DMCA.

A lot of sites don't want to be mirrored

HTTP/1.1 defines cache-control header lines.
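A minimal sketch (not any site's actual code) of the cache-control check a polite shared cache would perform before storing a copy, honoring the HTTP/1.1 directives mentioned above:

```python
def is_cacheable(headers):
    """Return False if the origin server opted out of shared caching."""
    cc = headers.get("Cache-Control", "").lower()
    directives = {d.strip() for d in cc.split(",") if d.strip()}
    # "no-store" and "private" forbid a shared cache from keeping a copy;
    # the older HTTP/1.0 "Pragma: no-cache" is honored for compatibility.
    if {"no-store", "private"} & directives:
        return False
    if headers.get("Pragma", "").lower() == "no-cache":
        return False
    return True

print(is_cacheable({"Cache-Control": "public, max-age=3600"}))  # True
print(is_cacheable({"Cache-Control": "private"}))               # False
```

A site that doesn't want its pages held by caches only has to send the right header; the burden on the cache operator is to check it.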

[ Parent ]
It doesn't have to be like this (4.87 / 8) (#50)
by xL on Sun Jan 05, 2003 at 07:42:42 AM EST

A lot of the problems that arise when normally rarely-visited sites get an unexpected rise in hits are generated by the business models of a lot of hosting ISPs, which are basically designed to milk smaller customers on a volume market as cash cows. Most of these ISPs are almost forced into this model because bandwidth providers use similar models to screw the smaller customers.

In the ISP world, bandwidth economics use a model where the costs go down as the volume goes up. This is normal and healthy, but it strikes me that the actual parameters for this are way out of balance. Watching the prices of a lot of ISPs, traffic at the 10GB level typically costs, per effective Megabit, about 20 to 50 times as much as traffic at a level of 5 Megabit (I've seen ISPs where this ratio was actually 100 or 200). On top of that, most ISPs know that of all people taking 10GB, fewer than 10% will typically use more than 25% of it.

Where ISPs, in my opinion, really screw over their customers is that they will keep charging smaller customers those over-inflated prices even if their volume rises (occasionally or structurally), and will only let their customers smell those lower prices if they commit themselves to buying larger volumes for at least a year.

The amount of margin made on your typical small website with, say, 10GB of traffic is scary: average usage lies at 2GB, which amounts to a peak usage of about 0.01 Mb/s. A provider that has a bit of scale (a total volume of, say, 60 Mb/s) and is a bit smart about peering will typically pay $45 per Mb. That boils down to $0.45 in bandwidth costs per user on average, on a product they will most likely charge $10 or more for. They can easily stick 1000 such sites on one machine (again perfectly doable, I've seen joints where they could keep 4096 sites on a single Intel server), so hardware costs per user are negligible.
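The margin figures above can be checked with back-of-the-envelope arithmetic. The ~0.01 Mb/s peak treats peak usage as a small multiple of the monthly average, and the $45/Mb transit price is the comment's own assumption:

```python
# Verify the per-user bandwidth economics sketched above (decimal units).
avg_gb_per_month = 2.0
seconds_per_month = 30 * 24 * 3600                 # 2,592,000 s
avg_mbps = avg_gb_per_month * 8 * 1000 / seconds_per_month  # ~0.006 Mb/s
peak_mbps = 0.01                                   # assumed peak, per the comment
transit_price_per_mb = 45.0                        # $/Mb/month, assumed
cost_per_user = peak_mbps * transit_price_per_mb   # $0.45
retail_price = 10.0
margin = (retail_price - cost_per_user) / retail_price
print(f"avg {avg_mbps:.4f} Mb/s, cost ${cost_per_user:.2f}/user, margin {margin:.1%}")
```

Even before subtracting hardware and support, bandwidth alone leaves well over 90% of the $10 fee as margin, which is the comment's point.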

The support costs are usually partially deflected to 0900 call charges. That leaves them with a fat margin of, say, 95%. With margins that high, there is very little reason to screw your customers sideways if they incidentally take more traffic, but they do it anyway.

I predict that, as the market matures, these prices will become less inflated and these tactics will become less offensive. In fact, in Europe I am seeing the signs of a beginning price war on bandwidth at an ISP level, since struggling transit carriers are dropping their prices dramatically to increase volume. I suspect it will not take long until ISPs (some of them struggling themselves) will use these lower prices to increase their competitive advantage.

Reasons for bulk charges (5.00 / 3) (#60)
by idiot boy on Sun Jan 05, 2003 at 11:27:31 AM EST

There is one simple reason why all ISPs don't just switch to a "pay as you use" model and that is volume prediction. By having customers pay for job lots of bandwidth, the ISP makes the job of working out how much BW is *actually* required to service its customers much easier.

The reason for this is that they have sold X Gbps, they know that they only therefore need to provision X Gbps (an oversimplification I know).

In order to supply pay as you go provision to all their customers and still provide SLAs that they can stick to, the ISP would necessarily have to overprovision the BW that they purchase from their downlevel supplier.

In the telecoms industry, this problem has traditionally been overcome by using peak and off-peak charges (demand is high during the day, therefore high price, and vice versa: supply and demand). The problem for ISPs is that because the object of their business (the internet) ignores time zones to a far greater extent than telecoms networks do, they cannot implement such tariffs.

The end result is block purchasing to allow the ISP to make provisioning decisions.

Oh, and to gouge the customer too. Though it should be remembered that there aren't many ISPs out there actually making a *lot* of money. Even in the colo industry, the server, rack space and services that go along with them make far more cash than selling "commodity bandwidth"

Science is a way of trying not to fool yourself
[ Parent ]

Yes, very well (5.00 / 1) (#66)
by xL on Sun Jan 05, 2003 at 12:13:54 PM EST

I didn't argue that overcharging at the low end had no merits. It always has. My point is that the rate of overcharge is in no way mandated by actual real-world usage patterns. Even if you host at a moderate scale, the occasional slashdotting of one of your customers does not affect your bandwidth in such a way that the overinflation in surcharges makes any sense beyond charging for it "because you can".

Also, it's the ISP's choice not to throttle sites when they start generating truckloads of traffic that will most likely create astronomical bandwidth charges. This is pretty close to a swindle. Most people find out about the rip-off deal they closed with their hosting provider when it's already too late. The ISP could even be paying idiots on competitors' cable networks to browse to customer sites every day, post them to Slashdot, or otherwise incite traffic from unsuspecting victims.

The volume effect on prices will never disappear, but the bad cases, where someone overshoots his quota by 50 GB in one month and has to pay for that privilege at a rate of $2400 per Megabit when an ISP can buy transit at $60, are likely to fade away. Competition will take care of that at some point.

I've seen far too many stories in the past two years about $poor_student getting a $15,000 bandwidth surcharge for traffic on his homepage, with huge PR backfires for the hosting parties involved, to think that no ISPs will arise that will treat hosting as a conventional business with conventional profit margins, cornering the market or forcing it to follow suit.

[ Parent ]

Yup - Gouging isn't good (5.00 / 1) (#75)
by idiot boy on Sun Jan 05, 2003 at 01:39:35 PM EST

Sorry about that, I didn't mean to come off as critical (though I probably did ;)). I was trying to provide an explanation for their behaviour. I actually agree completely that at the moment, the ISPs are taking any opportunity to gouge their customers.

This is still a new market and like any, it's very much a case of "caveat emptor". The problem is that the market is very opaque. It's extremely difficult at the low end to work out which ISP or colo is actually the cheapest.  

A similar situation exists in the UK with the mobile phone networks. All have a collection of extremely complex tariffs. It's nigh on impossible to work out which is "cheapest". Rather you go for the one that has the best deal on your "usual" activity (i.e. off peak calls), and hope that you don't have to use the network at any other time (as you'll pay through the nose).

As with BW, you only start to get transparency at the top end where you pay a fortune up front in order to then receive low tariffs.

I think that all we can reasonably do about it is avoid those ISPs that we know are gouging their customers.

The bottom line is that we always get stiffed ;)

Science is a way of trying not to fool yourself
[ Parent ]

Whuh? (4.50 / 2) (#63)
by p3d0 on Sun Jan 05, 2003 at 11:36:46 AM EST

...it is typical for 10GB of bandwidth to cost, per effective Megabit, about 20 to 50 times as much as the price for traffic at a level of 5 Megabit...
Can you explain that again? I don't understand. What are you comparing here?
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
What I am comparing (5.00 / 1) (#64)
by xL on Sun Jan 05, 2003 at 11:54:30 AM EST

Is the amount of money for the actual bandwidth you are consuming, you are paying at the low level versus at the bulk level.

[ Parent ]
I still don't get it (none / 0) (#133)
by p3d0 on Tue Jan 07, 2003 at 11:13:56 AM EST

Which one is the low level, and which one is the bulk level?
Patrick Doyle
My comments do not reflect the opinions of my employer.
[ Parent ]
Precedent? (3.50 / 2) (#52)
by idiot boy on Sun Jan 05, 2003 at 08:26:42 AM EST

Surely asking permission in selected cases would have the potential to generate the odd lawsuit. Let's say that, day to day, Slashdot checks with the authors of "Lego 'Citizen Kane'" that they're happy to be linked. All is fine and dandy.

Along comes the day that someone finds a document on an MS website detailing a nefarious conspiracy involving BillG to have Linus T kidnapped and murdered. Malda et al. link away and the next day get a mail from MS lawyers arguing that they should have been asked whether or not they could be linked to.

What I'm trying to say is that it seems to set a precedent that website owners can *expect* to be asked before being linked to. I think that this is dangerous (look at theinquirer.net who got a nasty letter from Sun when they posted a link to a PDF on Sun's website that shouldn't have been public).

We don't want a situation where meta or RealNews(TM) sites have to *warn* sites that they're gonna get linked to.

I'm not a lawyer but any excuse seems to be used these days. Does this argument hold any water?

Science is a way of trying not to fool yourself

The irony of slashdot not mirroring sites (4.22 / 9) (#59)
by ennui on Sun Jan 05, 2003 at 11:16:03 AM EST

On the one hand, you have /. saying "there's copyright issues involved if we mirror the site," so they don't. However, the easiest path to rack up karma is to cut and paste the content of a slashdotted site into a comment, so /. effectively is mirroring the site anyway, with the added bonus that whatever was cut and pasted might be altered or incomplete. In spite of their "comments are owned by whoever posted them" pseudodisclaimer they do and will delete comments when scared enough, so it's not a great leap to say they are ultimately responsible for copyright violations in comments on some level.

So, instead of biting the bullet and trying to do real caching, they let commenters do it and pretend it doesn't exist or say "a poster did it, we're not responsible for what they do" until they get a scary enough letter from a lawyer.

kirby loves you

Which ones? (4.00 / 1) (#65)
by Donblas on Sun Jan 05, 2003 at 11:58:00 AM EST

they do and will delete comments when scared enough

I remember seeing comments deleted that were purported to have threatened the life of the POTUS, and heard about some resulting from Scientology threats. Are there others?

[ Parent ]

Yes (none / 0) (#86)
by damiam on Sun Jan 05, 2003 at 05:44:39 PM EST

There were some that exploited a Javascript bug, turning all links on the page into goatse.cx. I seem to remember that there were one or two other cases, but I can't think of the details.

[ Parent ]
Copyright isn't an issue for caching web proxies (5.00 / 1) (#97)
by pin0cchio on Sun Jan 05, 2003 at 10:07:59 PM EST

On the one hand, you have /. saying "there's copyright issues involved if we mirror the site," so they don't.

Since October 1998, there is no copyright issue involved in running a caching web proxy in the United States. A rider to the Digital Millennium Copyright Act, codified as Title 17, U.S. Code, Section 512, permits any person to run a caching web proxy so long as he provides a contact to remove works that the owner doesn't want published and hides any content in question.

AOL Time Warner Inc. already runs a caching web proxy for members of the America Online service. Slashdot could do the same as an extra perk for subscribers.

[ Parent ]
/.ed, twice (4.57 / 7) (#62)
by zygo on Sun Jan 05, 2003 at 11:30:08 AM EST

We have been slashdotted twice. The first time, one of the team posted an article with the link (and the mirrors); unfortunately the mirror died because it was also mirroring RedHat 8.0 and Mandrake 9.0 at the time. The main page stayed up. We had also sent an email to one of the sysadmins some days before posting to /., but he didn't fully understand the term "being slashdotted" and so couldn't imagine the amount of traffic generated. We were /.ed at 2am, and when at 8am the sysadmins saw the traffic generated by one single machine, they reacted by:
  • Panic!
  • Suspect a virus and/or attack on the machine and send email to the owner of the machine
  • Cut the connection to the machine
  • Get all angry because of the huge shock and the big traffic
  • Bring the connection back at 10kbps instead of 100kbps

They may still hate us for the /.ing. Some of the sysadmins said that it's good for the school's image, but others consider it a waste of bandwidth.

I think it is more the poster's duty to tell the webmaster about the possible /.ing.

The second time it wasn't us posting the article but as soon as we noticed it we informed the sysadmins.

communication would solve so much (none / 0) (#128)
by kpaul on Mon Jan 06, 2003 at 07:35:54 PM EST

re: "I think it is more the poster's duty to tell the webmaster about the possible /.ing. "

[ Parent ]

Preemptive Panhandling (5.00 / 4) (#67)
by Khuzud on Sun Jan 05, 2003 at 12:28:38 PM EST

I do think sites like Slashdot, which have the potential to direct a damaging amount of bandwidth at an unsuspecting site, should have a corresponding responsibility to take care that the sites they link to are not harmed.

Other posters here have talked about Apache mods that would redirect Slashdot traffic, to be turned on after you discover you've been Slashdotted. I think it would be interesting to install such a mod preemptively.

Any traffic from Slashdot would be redirected to a page which said "I'm sorry, I can't support a link from a high-traffic site like Slashdot. Please click the PayPal link below to make a donation to my bandwidth fund, and you'll be forwarded to the article in question. A donation of $x will pay for 100 of your fellow Slashdotters to view the article along with you."

A nicely sophisticated module would track total donations, and the panhandling page would go away once there were enough funds to pay for bandwidth.

Would people pay so that other people could see the article? Maybe. But more importantly, Hemos or CmdrTaco could see the page and, if they felt it was good enough, pay enough so that their Slashdot readership wouldn't have to pay anything. The Slashdot editors cover the cost of bandwidth, which feels right because it's their fault you're being slashdotted, right?

And it's all automatic. Install the mod on any site you think might ever be linked to by a high-readership meta site. If you never get linked to by the site in question, no problem, nobody will ever see the donation request. But when it does happen, it's all taken care of automatically.
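The donation-tracking logic described above could be sketched roughly like this; the class name, the bill estimate, and the amounts are all hypothetical:

```python
# Toy sketch of the "preemptive panhandling" module's core logic: show
# the donation page only while cumulative donations fall short of the
# estimated bandwidth bill.
class BandwidthFund:
    def __init__(self, bill_estimate):
        self.bill_estimate = bill_estimate  # expected bandwidth bill, $
        self.donations = 0.0                # running total of donations

    def donate(self, amount):
        self.donations += amount

    def show_panhandle_page(self):
        """True while the fund still needs money."""
        return self.donations < self.bill_estimate

fund = BandwidthFund(bill_estimate=400.0)   # e.g. the $400 bill mentioned earlier
fund.donate(150.0)
print(fund.show_panhandle_page())  # True: still short
fund.donate(300.0)
print(fund.show_panhandle_page())  # False: bill covered, forward everyone
```

The interesting design question is who tops up the fund: individual readers, or the linking site's editors paying once to unlock the page for everyone.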

hey...that might actualy help the subscription (none / 0) (#70)
by modmans2ndcoming on Sun Jan 05, 2003 at 12:32:49 PM EST

since Taco and Hemos have problems with how to get folks to pay for /., they could offer, as part of their service, a way to avoid that redirect by paying for each member.

[ Parent ]
COuple of probs (4.00 / 2) (#108)
by resquad on Mon Jan 06, 2003 at 03:10:33 AM EST

While the concept sounds awesome to me too, there are a few problems.
  1. Some clients don't send referer info, so ya wouldn't know.
  2. I click on a link from Slashdot and get the stupid donate message because of the referer... OK, type in the link instead, no referer... problem solved (for me).

[ Parent ]
But it still reduces the bandwidth.... (none / 0) (#112)
by Elkor on Mon Jan 06, 2003 at 10:09:23 AM EST

Most people use IE or Netscape, which do send the referer info, so that would work.

And most users probably wouldn't think to type the link in (gasp) MANUALLY!

While I agree that there are the problems you pointed out, for the most part it would reduce the amount of traffic to the site. Which, to counteract Slashdotting, would slow down the bandwidth consumption.

Short of putting a password on the page, there will be a way to get around any redirect or reftag check.

Such as using one of the sites that provide auto-redirects for long URLs, and putting a link to the redirect in their post. Then the referer comes up from the redirect site, not from /. It then turns into a long laundry list of sites that need to be put into the file.
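The referer check being discussed amounts to something like the sketch below. The blocklist entries are hypothetical, and the last example shows the redirect-service bypass: the referer is the redirect site, which isn't on the list.

```python
# Minimal sketch of a referer-based blocklist check. Trivially bypassed
# by clients that omit the Referer header or arrive via a redirect site.
from urllib.parse import urlparse

BLOCKED_REFERRERS = {"slashdot.org", "fark.com"}  # hypothetical blocklist

def should_redirect(referer_header):
    """True if the request claims to come from a blocked high-traffic site."""
    if not referer_header:          # many clients send no Referer at all
        return False
    host = urlparse(referer_header).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_REFERRERS)

print(should_redirect("http://slashdot.org/article.pl?sid=123"))  # True
print(should_redirect(""))                                        # False
print(should_redirect("http://redirect-site.example/abc"))        # False (bypass)
```

As the comment says, this reduces traffic from the casual majority but can't stop anyone determined to get through.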

"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
I think linkage is ok...but (2.66 / 3) (#68)
by modmans2ndcoming on Sun Jan 05, 2003 at 12:29:22 PM EST

Fark desperately needs to implement threads... blahh... god, get some Slashcode or something.

fark is to meta... (none / 0) (#127)
by kpaul on Mon Jan 06, 2003 at 07:28:18 PM EST

what drudge is to news:
nothing new, nothing new...

[ Parent ]

Worrying about a non-issue (4.55 / 9) (#72)
by pla on Sun Jan 05, 2003 at 01:01:29 PM EST

For the purposes of the Slashdotting itself, yes, a site can stop responding to (most) visitors. That does not equal the site actually "going down", however, nor does it equal a huge bill for bandwidth.

Any sane admin should fully EXPECT the possibility of an extremely large burst of activity, and configure the web server accordingly.

Two obvious steps that come to mind, without even looking into 3rd party tools:

Apache - Use the "MaxClients" field (it defaults to 150). Set it to two or three times the highest you normally see, and you'll have no problem. Although a burst will result in a LOT of people not actually seeing your page, you will not unexpectedly get a $25k bill for bandwidth at the end of the month. For "personal" web servers, that get basically no traffic (ie, very rarely more than one visitor at a time, two or three would count as a mini-burst), a value of "6" will suffice. For light-duty public web servers (most smaller corporate/organizational POPs), 30 should do just fine (keep in mind, that means 30 people AT A TIME have to try to get to your site before the server starts turning people away).

Linux (BSD presumably has similar functionality) - Enable QoS, and rate-limit your web server to something that won't cost you a few thousand dollars if a three-day burst occurs. 64kbps (kiloBITS) provides decent responsiveness under normal use (unless you serve something huge, like full-length movies), yet incurs a maximum throughput of roughly 0.7 gigabytes per day. Since bursts usually last less than one day, plus a day or two of tapering off, this shouldn't put *anyone* over their monthly limit.
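The worst-case transfer implied by that 64 kbps cap is easy to bound exactly (decimal units, 1 GB = 10^6 kB):

```python
# Upper bound on daily transfer through a 64 kbps rate limit.
rate_kbps = 64                       # cap, in kilobits per second
seconds_per_day = 24 * 3600
max_kbits_per_day = rate_kbps * seconds_per_day   # 5,529,600 kbits
max_gb_per_day = max_kbits_per_day / 8 / 1_000_000  # kilobits -> gigabytes
print(f"max {max_gb_per_day:.2f} GB/day")  # ~0.69 GB
```

So even a burst that saturates the cap for several days stays within a typical monthly quota, which is the point of rate-limiting preemptively rather than after the bill arrives.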

Basically, although I agree that sites with enough viewers to cause the Slashdot effect should show a bit more responsibility, geeks *constantly* slam companies for using security models that depend on the end user to "play fair". Why should we hypocritically exhibit that exact same trait? Technological measures for preventing the damage (if not the burst) already exist. Use them, or give up any right to complain.

our enemies (3.00 / 1) (#87)
by rkh on Sun Jan 05, 2003 at 06:11:43 PM EST

I don't think that the "slashdot effect" is as simple as too many http requests hitting your box. In the past (say, 2-3 years ago) this may have been the case, but things have shifted a bit now that the computer industry has lost some of the wind that was in its sails. Slashdot/Fark/etc. have enemies now, enemies that are doubtless knowledgeable in cramming countless poisons down a victim's pipe. If one of my sites was ever posted on one of these large sites I wouldn't be afraid of the site slowing, I would be afraid that it would get rooted. I say that if the robots.txt is willing, then a local cache link should be given, with the caveat that page hits be tallied and sent to the site's maintainer.

[ Parent ]
read this (2.00 / 18) (#74)
by anonymous pancake on Sun Jan 05, 2003 at 01:21:59 PM EST

here is a relevant article I posted a few months ago about the slashdot effect....

As an assistant member of the security team of a large fortune 500 company, I have discovered a new form of terrorism stemming from the deepest underground of the Internet.  A site catering to hackers, communists and anti-Americans called Slashdot.org has created a new type of denial-of-service attack known as `the Slashdot effect'.  This attack has been used against what are seen as the enemies of the `Open source movement' which include many large American companies such as Microsoft as well as many American media companies such as Time-Warner-AOL. The Slashdot Effect could have a potentially crippling effect on the American computer industry and I feel it is justified to offer my own advice on this problem.

What is the Slashdot Effect?

The Slashdot Effect (also known as Slashdotting) is a new form of denial-of-service attack stemming from the site Slashdot.org. Once they find a `target' (whether it be a large media company or small personal homepage) the URL of the site is posted on the front page of Slashdot.org. Members of this site attempt as quickly as they can to follow these links and overload the target server. This causes the `target' website to slow to a grinding halt before going offline. It can sometimes take days or even weeks for the site to recover from such a surge of traffic, and often the servers can be damaged beyond repair (that is, they cannot be fixed with a simple defrag!).

Who is normally the target of the Slashdot Effect and how is it done?

Many American companies have already been attacked by the Slashdot Effect. Targets often include news sites such as the New York Times as well as well as large American companies such as Intel. Sites that criticize the open-source movement are a prime target. For example, lets say an American media website such as the London Times does a review of a little known operating system known as Linux. Linux is an operating system developed by a hacker from communist Finland, which is based on code stolen from an American operating system known as Unix. It was created in cooperation with a communist group known as g.n.u. (Which stands for Glorified Novelty Unix) and is generally unusable by non-hackers. Obviously since it is such an archaic and unstable operating system compared to those made by American companies such as Microsoft it would get a bad review on the London Times. Once a Slashdot member discovers this honest review the URL would be posted on the front page of Slashdot.org. A flood of users would follow the link to the site and bring the server to a grinding halt. Since most of these users are terrorists they would probably have ads disabled using European hacking software. This would mean a potential loss of thousands of dollars worth of ad revenue. To top it off, members of Slashdot.org often plagiarize the articles and post it on illegal mirrors, furthering the loss of ad revenue. Members of Slashdot are rewarded for plagiarizing in the form of `Karma', a form of hacker currency, on Slashdot.org.    

  What can I do to avoid the Slashdot Effect and how would I deal with it if it happened?

The easiest way to avoid the Slashdot effect is to refrain from posting anything about any open-source software, especially Linux. Focus your website on fine American companies such as Microsoft. You can also set up your server to reject any links from Slashdot.org, something many people have done. If you think your site is being attacked by the Slashdot Effect, contact the authorities immediately and report this act of terrorism. The penalties against hacker/terrorists are stiff and you can feel confident that the perpetrators of this terror will be punished in the harshest possible means.

. <---- This is not a period, it is actually a very small drawing of the prophet mohhamed.

P2P Geocities... (3.50 / 4) (#76)
by dipierro on Sun Jan 05, 2003 at 02:01:45 PM EST

The solution of course is P2P. Basically Freenet, only without all the privacy crap.

P2P caching would be easier (5.00 / 1) (#79)
by MfA on Sun Jan 05, 2003 at 03:14:13 PM EST

If you take out all the privacy crap from Freenet, there is no more reason to do everything in a distributed manner. A single authoritative source for content then becomes the most robust approach again; we don't need P2P hosting but P2P caching.

The original server can simply keep a list of caches and redirect new clients. Not as elegant as more complex schemes, but practical. You would need a ridiculous number of hits before the bandwidth for simple referrals became an issue.
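The redirect scheme described here could look roughly like the sketch below: the origin pays only for tiny redirect responses while volunteer caches serve the actual content. The class and URLs are hypothetical.

```python
# Sketch of an origin server that bounces clients to registered P2P
# caches via cheap HTTP redirects instead of serving content itself.
import itertools

class Origin:
    def __init__(self):
        self.caches = []
        self._cycle = None

    def register_cache(self, url):
        """A volunteer cache announces itself to the origin."""
        self.caches.append(url)
        self._cycle = itertools.cycle(self.caches)  # simple round-robin

    def handle_request(self, path):
        if not self.caches:
            return ("200 OK", "serve " + path)      # no caches yet: serve directly
        mirror = next(self._cycle)
        return ("302 Found", mirror + path)         # a few hundred bytes, not the content

origin = Origin()
origin.register_cache("http://cache-a.example")
origin.register_cache("http://cache-b.example")
print(origin.handle_request("/legos.html"))
```

The open problems are the ones raised elsewhere in this thread: keeping the cache list fresh, purging stale copies, and crediting hits back to the author.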

[ Parent ]

Same thing? (1.00 / 1) (#81)
by dipierro on Sun Jan 05, 2003 at 03:48:32 PM EST

If you take out all the privacy crap from freenet there is no more reason to do everything in a distributed manner.

Redundancy. Same reason that Akamai exists. It's faster, and less likely to go down.

A single authoritive source for content then becomes the most robust approach again, we dont need P2P hosting but P2P caching.

Assuming static webpages, isn't this the same thing?

[ Parent ]
Only if I misunderstood (5.00 / 1) (#94)
by MfA on Sun Jan 05, 2003 at 08:38:05 PM EST

From Akamai's PR on EdgeSuite:

Site owners maintain a minimal "source" copy of the Web site and EdgeSuite provides global delivery ...

So it is just one big specialized cache.

Since you mentioned Freenet and Geocities, I thought you meant an approach where the P2P network truly supplied the hosting for the content, instead of just being a cache. This has a lot of problems: you need a distributed directory service to find anything, and keeping stuff in sync is hard. It also requires a lot more dedication from the "peers" on the P2P network, since at all times the content must be kept alive and online by at least one "peer".

This is less P2P and more a distributed server.

[ Parent ]
Akamai? (1.00 / 1) (#116)
by dipierro on Mon Jan 06, 2003 at 11:12:32 AM EST

This has a lot of problems: you need a distributed directory service to find anything, and keeping stuff in sync is hard.

We already have a distributed directory service - DNS. Keeping stuff in sync is easy if you use Freenet's method, basically versioning. Versioning isn't what causes Freenet to be so shitty; it's the lack of a good directory and the need to pass pages around so much, both results of the privacy requirements.

It also requires a lot more dedication from the "peers" on the P2P network, since at all times the content must be kept alive and online by at least one "peer".

Just like Freenet it shouldn't be a guaranteed service. If you want your site up 24/7 guaranteed, then you can have a main site and then use it as a cache. Eventually maybe a trust mechanism could be incorporated to allow permanent sites, but that's just gravy.

As for Akamai, I am fairly certain that no redirects are involved; rather, DNS tricks are used. Since one of the main points of Akamai is redundancy, it would be kind of stupid if they relied on a central server through which all hits propagate.

[ Parent ]
Peers are not servers (5.00 / 1) (#132)
by MfA on Tue Jan 07, 2003 at 09:43:33 AM EST

Versioning doesn't help without a list of "peers" which have a copy; otherwise the older versions around the network can't be efficiently purged or updated (unless you rely on diffusion, like Freenet).

DNS isn't fully distributed; it can't be, for the same reason a fully P2P hosting system would have problems ... there have to be authoritative sources.

Of course a DNS server doesn't take too much bandwidth, so maintaining a non-distributed dns.p2phosting.org server somewhere isn't too big of a problem. We are getting a little far away from P2P, though ... DNS was not designed to accommodate resolving names to endpoints which change frequently.

I'm sure Akamai does not use redirection; I was only suggesting using that for the P2P caching. But they do use a central authoritative source for the content, for which their network is an elaborate cache. To you the cache just seems like the original server, since you never communicate with it directly.

Akamai has reliable servers to act as the cache, though; the only reliable point in the P2P caching solution would be the original server ... so it has to be visible, and the one to receive clients' initial requests.

BTW, I'm doubtful Akamai relies on DNS alone; it is too coarse-grained, and updates spread too slowly. It is fine for load balancing between different subnets and for localization, but I'm sure they use IP-based redirection once the traffic gets inside their server centres. They can do that because they control their internal networks; we can't control the Internet, so we can't just use DNS.

[ Parent ]

Bittorrent (5.00 / 1) (#92)
by jacoplane on Sun Jan 05, 2003 at 08:18:07 PM EST

BitTorrent could be used to solve most of these issues, at least if the technology were built into major browsers like Mozilla. If you're hosting any big image files, make the links BitTorrent links, and they will be downloaded P2P through BitTorrent. Of course, as I said, this kind of thing would only really be useful if it were built into major browsers...

[ Parent ]
Solutions for everyone.... (1.50 / 2) (#78)
by Niha on Sun Jan 05, 2003 at 02:51:57 PM EST

  I have to say I don't have much idea of the technical stuff, but I still think this should be solved to satisfy everyone as much as possible. Mirroring the site sounds like a good idea.

Adaptive Rate-Limiting? (4.75 / 4) (#84)
by mawa on Sun Jan 05, 2003 at 05:00:29 PM EST

What I'm thinking could help out on the side of a 'victim' webserver is a bandwidth rate-limiting thing that changes over time. Some people have recommended capping bandwidth at, say, 64 kbps. This, however, is not very practical for people with broadband, and if numerous people are trying to get to it, it's even slower.

What I think would work far better is an 'adaptive' rate-limiting thing. Normally, the server could be allowed to use 1.5 Mbps (or whatever). However, if it's been at, say, 1 Mbps for more than fifteen minutes, the usage cap drops to 1 Mbps. Ten minutes later, it's still at 1 Mbps, and it drops some more. It would keep dropping over time until a pre-determined limit was met -- you could tell it, for example, "I'm allowed 50 GB of transfer a month," and it would be able to say "I can stay at 64 kbps now, and if it lasts all month, I'll still be all set." (And as the connections subside, the bandwidth quota would go back up.)

What this does is allow people to have a very high-speed connection, but when they're suddenly hammered with unexpected traffic, they can still regulate the speed appropriately, without exceeding whatever limits they have. Now... Does anyone know if there is software/hardware out there that does this?
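A minimal sketch of the stepwise scheme described above. The rate steps and the load/idle thresholds here are invented for illustration; a real deployment would hook something like this into a traffic shaper:

```python
# Hypothetical adaptive cap: step the limit down under sustained load and
# back up as traffic subsides. RATES and the 0.66/0.33 thresholds are
# made-up example values.
RATES = [1.5e6, 1e6, 512e3, 256e3, 128e3, 64e3]  # allowed caps, bits/sec

def next_cap(cap_idx, window_avg_bps):
    """Given the current cap index and the average throughput over the
    last measurement window, return the cap index for the next window."""
    if window_avg_bps >= 0.66 * RATES[cap_idx] and cap_idx < len(RATES) - 1:
        return cap_idx + 1   # running near the cap: tighten one step
    if window_avg_bps < 0.33 * RATES[cap_idx] and cap_idx > 0:
        return cap_idx - 1   # traffic subsided: relax one step
    return cap_idx
```

A cron job or shaper hook would call next_cap every few minutes and program the new limit; deriving a floor from a monthly quota ("I'm allowed 50 GB of transfer a month") is a straightforward extension.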
mySig v.0.0.1-pre -- new sig to come soon
Adaptive Rate Limiting (4.00 / 2) (#88)
by waveclaw on Sun Jan 05, 2003 at 06:24:35 PM EST

The networking tool to do this is called a 'traffic shaper.' These devices help manage very bursty traffic from the server side. The need for this is largely due to issues with the HyperText Transfer Protocol, which is designed to be simple, not robust.

On the client side there is the gopher protocol and its clients, but that is largely a matter of Internet history since the mid-90s rise of HTTP+HTML.

[ Parent ]

robots.txt (4.25 / 4) (#89)
by dazk on Sun Jan 05, 2003 at 06:40:43 PM EST

Search engines look at robots.txt; maybe a similar text file could be placed that is meant for meta news sites and similar. Let's call it mirror.txt, and you put in there something like


etc. That way smaller sites could indicate that they want to be mirrored to escape being slashdotted.

Just my 2c...
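The example file contents did not survive in this archived page, but a hypothetical mirror.txt along the lines dazk describes, borrowing robots.txt syntax, might look like this (every directive here is invented for illustration):

```
# mirror.txt -- hypothetical format modeled on robots.txt
User-agent: *            # which mirroring sites this applies to
Allow-mirror: /photos/   # paths that may be mirrored
Mirror-ttl: 3d           # destroy the mirror after three days
Attribution: required    # mirrors must link back to the origin
```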
----- Copy kills music! Naaah! Greedyness kills Brain! Counter: Bought 17CDs this year because I found tracks of an album on fileshare and wanted it all.

Good idea (4.00 / 1) (#93)
by hex11a on Sun Jan 05, 2003 at 08:23:44 PM EST

I don't know enough technically, but if this isn't already available, it should be. A mirror file or tag saying that people can mirror this site for x days, so long as nothing is changed, should take care of the slashdot effect, leaving Slashdot to deal with all the bandwidth for its mirrors. It shouldn't be too hard for linkage sites to implement a mirror-for-x-days-then-destroy script.

Of course, this doesn't solve the problems of those who don't want to be mirrored or slashdotted but it's a big step in the right direction. Good thinking.


[ Parent ]

Use a gateway instead (none / 0) (#95)
by MfA on Sun Jan 05, 2003 at 09:09:07 PM EST

Why not set up a URL-based gateway? (i.e. http://gateway.slashdot.com/original-url) It could use all the usual HTTP/HTML caching directives to make sure the site works as intended, and the site can even still be updated transparently.

It's a rather limited solution to a limited problem, though ... I'd rather see something more far-reaching which would allow clients to cache content for sites like online cartoons, in addition to solving the slashdot problem.

[ Parent ]

linkage (3.33 / 3) (#96)
by el on Sun Jan 05, 2003 at 09:57:47 PM EST

If you don't want your bandwidth run up, take precautions: password protect, or take the files down after a few days. It's your account that's being assaulted with visitors; take care of it.

some ideas for "victims" (5.00 / 4) (#99)
by ryochiji on Sun Jan 05, 2003 at 11:17:17 PM EST

A couple of friends of mine, whose site I'm hosting on my server, posted a story about a little movie they made to Slashdot and it's been "pending" for close to two months now. We asked around and were told that it's most likely being held for a "quickie" that may or may not come at some indeterminate point. This gave us the unique opportunity of preparing for a potential slashdotting.

Since my server only comes with 15GB/month of transfer and the movie file is 10MB, we brainstormed ways to make the file available to people without having to shoulder additional bandwidth costs. Some of the ideas we came up with are:

  • Self destructing file - One idea was to set up a script that would allow people to access the file a fixed number of times, and then delete the file once that cap was reached.
  • Mirrors - We put the file everywhere we could. Our university web accounts, ISP web spaces, my work computer (with a semi-static global IP).
  • Lower number of clients - On both my server and my work machine, I set Apache's MaxClients value to relatively low numbers. This will allow some requests to get through without killing the machines (and will slow down bandwidth consumption).
  • P2P networks - We put up a note on the website asking people to share the file through P2P networks. It doesn't look like this has quite worked yet...
If we do indeed get slashdotted, I'll be monitoring bandwidth usage very closely. At the end of the day, whether or not people can access the site and files is the lesser of my concerns. As far as I'm concerned, I'll be shutting down the site as soon as bandwidth usage approaches the limit, unless someone agrees to pay for the additional bandwidth costs.
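The "self destructing file" idea in the first bullet can be sketched in a few lines. The cap value and counter-file name are made up here, and a real version would also need file locking, since concurrent requests could race on the counter:

```python
# Hypothetical access-capped download gate: serve the file at most `cap`
# times, tracking the count in a small counter file on disk.
import os

CAP = 1500  # assumed allowance; e.g. 1500 hits x 10MB stays under 15GB

def may_serve(counter_path="counter.txt", cap=CAP):
    """Increment the hit counter; return False once the cap is reached."""
    count = 0
    if os.path.exists(counter_path):
        with open(counter_path) as f:
            count = int(f.read() or 0)
    if count >= cap:
        return False  # at this point the script could delete the file
    with open(counter_path, "w") as f:
        f.write(str(count + 1))
    return True
```

A CGI wrapper would call may_serve() before streaming the file, and return a "sorry, quota spent" page once it starts returning False.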

IlohaMail: Webmail that works.
pending since 9-11-2002 (none / 0) (#117)
by dirvish on Mon Jan 06, 2003 at 11:47:12 AM EST

2002-11-09 21:25:25 Mmmmmm, beer floats. (articles,humor)

Technical Certification Blog, Anti Spam Blog
[ Parent ]
[that other site] *ate* my story... (none / 0) (#126)
by kpaul on Mon Jan 06, 2003 at 07:18:43 PM EST

Seriously. Really strange. Got this cryptic email:

Your story has been accepted. Please stand by the stairs so we can push your story on the site.

2002-10-07 23:58:45 Don't Link to Us! (articles,internet) (accepted)

Never saw it (section or otherwise), though, and as yet haven't received a ransom note. Very curious indeed.

My host (plug to spinweb) just told me to let them know so they could keep an eye on the box. ;)

[ Parent ]

Damnit (3.50 / 2) (#119)
by Psycho Les on Mon Jan 06, 2003 at 12:55:13 PM EST

The only honorable thing to do is to redirect to goatse.cx

[ Parent ]
Hello, that's the business model (4.00 / 3) (#100)
by bill_mcgonigle on Sun Jan 05, 2003 at 11:37:31 PM EST

Look, none of these sites are going to put a cache in because that would ruin their business model.

The whole point of a portal site is to serve up low-cost text pages while offering the content of the linked pages to the customer.  The linked pages pay the real bandwidth costs, 'cause, let's face it,  most interesting stuff on the web is bandwidth-heavy.

The real problem that no one's talking about is that unless you run your own Apache, most ISPs make it very difficult to cap your account's bandwidth.  Funny how they make money by not letting you automatically cap.

Some cron jobs with some accounting tools might work.
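A sketch of what such a cron job might compute, assuming Apache's Common Log Format with the response size in the last column. The budget number is arbitrary, and what to do when over budget (swap in a placeholder page, email the owner) is left to the surrounding script:

```python
# Hypothetical accounting helper for a cron job: sum bytes served from an
# Apache Common Log Format file and report whether a byte budget is blown.
def bytes_served(log_lines):
    """Sum the size column (last field; '-' marks bodyless responses)."""
    total = 0
    for line in log_lines:
        fields = line.split()
        if not fields:
            continue
        size = fields[-1]
        if size.isdigit():
            total += int(size)
    return total

def over_budget(log_lines, budget_bytes):
    """True once the log accounts for more than budget_bytes of transfer."""
    return bytes_served(log_lines) > budget_bytes
```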

slashdotting (3.66 / 3) (#101)
by kha0z on Mon Jan 06, 2003 at 12:10:12 AM EST

Ugh... the thought of how easily little free home pages can end up being such a burden on an unsuspecting victim is scary at times. I suppose common courtesy is an ideal solution, but in truth this is never really the case. I think a possible solution to this problem comes with the responsibility of an ISP to protect its customers. A modified Apache installation or a log monitor can check for a rapid influx of identical HTTP Referer values in incoming GET requests and watch for the slashdot effect, essentially preventing access when such an event occurs and notifying the customer. This in essence should prepare the customer for such an event and let the customer choose whether or not to accept the incoming traffic.

This is another question of business ethics, which we all know do not really exist.
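The Referer-based detector described above could start out as something like this. The threshold is invented, and window handling is omitted; a real monitor would tail the access log and count per time window:

```python
# Hypothetical Referer-spike detector: count hits per referring host and
# flag any host that exceeds a threshold within the current window.
from collections import Counter
from urllib.parse import urlsplit

def spiking_referers(referer_urls, threshold=100):
    """Return the set of referring hosts seen more than `threshold` times."""
    counts = Counter(urlsplit(u).netloc for u in referer_urls if u)
    return {host for host, n in counts.items() if n > threshold}
```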

i disagree (3.25 / 4) (#107)
by jmd2121 on Mon Jan 06, 2003 at 01:21:08 AM EST

While I think your intentions are noble, I disagree.

It is the responsibility of the data provider to manage his costs.  There ARE solutions out there to prevent excessive bandwidth use, and they should be used if it is an issue.

It is NOT the responsibility of provider one to look out for the costs of provider two.  While I understand it may seem courteous, it is simply not workable.  Who do you contact? What if they say no? Should you not link to them?  I see content on the web largely as free speech.  Links are speech, and just because someone else doesn't want you to post their link, too bad.  It is their responsibility to protect themselves.

That said, there are cases where you can be an asshole, and I'm not saying you should do that either...  it is a slippery slope to ask permission all the time, because soon you have people expecting that you will ask permission, then demanding it, and then suing you if you don't.

As for copies, mirroring presents clear copyright issues.  Not OK.

This can happen any time (none / 0) (#111)
by Quila on Mon Jan 06, 2003 at 09:47:01 AM EST

It's the nature of the net. After some guys who started a barbecue with liquid oxygen (three seconds!) got put in an article by Dave Barry, they almost maxed out their university's bandwidth. And that was several years ago. The discussion of possible solutions pops up on Slashdot from time to time, but the admins never do anything about it. Mostly, the suggestions are to notify a site in advance and offer to mirror, but I guess that takes too much effort. Mirroring without permission -- which would be the most effective method -- has legal problems. Google cache often helps, though.

Mirroring vs. Caching? (5.00 / 1) (#121)
by PunchMonkey on Mon Jan 06, 2003 at 02:00:35 PM EST

Mirroring without permission -- which would be the most effective method -- has legal problems. Google cache often helps though.

Can anyone tell me why there would be copyright problems with mirroring a page, but not with caching the page? (Isn't that the same thing?)

[ Parent ]

Very good idea! (none / 0) (#129)
by Quila on Tue Jan 07, 2003 at 02:45:34 AM EST

You should go over there and bring it up.

Slashdot could set up a cache engine that would cache a page after the first link from Slashdot is followed.  If the number of hits from Slashdot gets too high (track the response time, maybe), people could be redirected to the cached page.

Sounds perfectly legal to me.
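The redirect decision Quila suggests could be sketched like this, using measured response time. The latency limit is an invented number, and how latencies are sampled is left out:

```python
# Hypothetical chooser: send readers to the cached copy once the origin's
# median recent latency suggests it is melting down under load.
def choose_url(origin_url, cache_url, recent_latencies_ms, limit_ms=3000):
    """Return cache_url when the median sampled latency exceeds limit_ms."""
    lat = sorted(recent_latencies_ms)
    median = lat[len(lat) // 2] if lat else 0
    return cache_url if median > limit_ms else origin_url
```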

[ Parent ]

Would slashdot forego its own banners? (none / 0) (#130)
by MfA on Tue Jan 07, 2003 at 04:37:38 AM EST

That is the biggest problem, I think: would Slashdot be willing to provide the bandwidth without putting their own banners in front? Without consent, that would almost certainly be illegal.

Another problem is implementation. Caching it on their own HTTP proxy would be legal, but this would be awkward ... I don't think anyone would use it.

Caching it behind a different URL which still has the original URL as its tail is already treading on thin ice; it could be argued that it is misrepresenting the source ... but hell, Google is getting away with more than that.

[ Parent ]

Good questions. (none / 0) (#131)
by Quila on Tue Jan 07, 2003 at 06:14:39 AM EST

For one thing, Google doesn't have ads when caching, but they do have their own bit up there identifying it as a cached page. Slashdot could do the same. Since cache is specifically exempted in the DMCA, they should be legally okay, especially if any Slashdot cache only has a three-day lifespan, which is probably longer than most heavy Slashdottings.

[ Parent ]
please... (none / 0) (#122)
by kpaul on Mon Jan 06, 2003 at 02:24:55 PM EST

Don't link to us!

Thank you.

We now return you to your regularly scheduled k5 post.


Dot That Slash (5.00 / 2) (#124)
by jefu on Mon Jan 06, 2003 at 05:27:54 PM EST

This article brings up some interesting problems, problems that probably have no good solutions.

Many posts have mentioned Slashdot as a prime example of a troublesome site. Certainly the verb "to slashdot" comes from that site, and certainly much slashdotting comes from slashdot. But the same thing can occur with memepool or metafilter or even K5. Indeed, with the size and interconnected nature of the blogging community, a popular link hitting the right blogs can do essentially the same thing.

Is this a bad thing? Probably not. It is certainly a pain for the people on the receiving end who wind up paying for bandwidth they did not anticipate, but if they had not wanted people to read their web pages they'd not have put them up.

I do think in general that it is unreasonable for network service providers to cost out bandwidth in that way. Suppose, for example, that a user has a web site with a typical bandwidth usage of 1 BTU (Basic Transfer Unit) per month (all numbers completely hypothetical and fanciful) and they're paying for 5 BTU/month (that's the only level at which the service provider will charge them). After five months they've paid for 25 BTU and used 5 BTU. Now, in the sixth month they get slashdotted and over the course of the month use 20 BTU. Over six months the total usage was 25 BTU and they paid for 30 BTU. But the service provider is going to charge them for this usage as though it is going to continue to be 20 BTU/month, instead of dropping back to their usual 1 BTU.

The service provider is probably really complaining because they maxed out their own bandwidth - and that is probably because they bought bandwidth at a rate corresponding to their average usage, which is computed from the expected usage of their clients (that is, the 1 BTU/month). I'd be surprised, in most cases, if the service providers would not find themselves snowed under if all of their users were to use all the bandwidth they're ostensibly paying for.

Perhaps this just means that the pricing model needs to be adjusted a bit. It certainly means that service providers need to be ready to limit bandwidth to protect their users. The mismatch between instantaneous bandwidth and average bandwidth is part of the problem, but not all of it. Should a service provider cap someone's bandwidth if it peaks at (for the above user) 25 BTU/month, even if that only lasts for one day and gets nowhere near the limit?

I think the notion proposed in an earlier comment is also valuable. That is, to provide a file "mirror.txt" that describes allowable mirroring behavior, so that Slashdot (or Fark or whoever) could automatically mirror linked files when needed. For this to work, linking sites would need to be willing to mirror pages, but I suspect that, as this could provide another page to put advertising on, it wouldn't bother them too horribly much.

This doesn't solve all the problems, though, as some nice service provider could say "never mirror the files here" and thus collect more money from their users. Maybe this needs to be part of the file format (a defined meta keyword, perhaps).

The most intriguing notion, though, was the idea of using peer-to-peer, distributed replication services. Obviously, this would work only for static content (at least with today's technology), but I think it looks very interesting indeed.

Some in this discussion have said that the users should never put anything on their websites that might be of interest. Or that users should be ready to take down their websites if they get too much use (what if the user is out of town?). Or perhaps sites like slashdot should be prohibited. Or linking should be prohibited.

Perhaps, some might say, only sites prepared for being linked to should be allowed to publish at all. Ick.

Farked! (none / 0) (#137)
by jefu on Thu Jan 09, 2003 at 12:58:34 PM EST

Fark recently posted several links (they're still appearing) to stories about the police department of Cookeville, Tenn., officers of which killed someone's dog after a traffic stop in the belief that it was attacking the officers involved.

The police department in question put up a web page about the incident (I'll not link to it here; you can find it easily enough at Fark). Today the website has been very effectively slashdotted. Or, to borrow a bit from the good Mr. Carroll:

Then their pages got jammed by the surfers today
A thing, as the Bellman remarked,
That frequently happens when fame comes its way
And the website is, so to speak, "farked"

This is especially interesting to me after reading this discussion and then noting that the page in question is the department's version of events. Other web pages with (possibly) other slants have been hosted by newspapers and the like and are still functioning normally. I'm not sure what it means, but I suspect it means something. (Of course, it may well depend on what "means" means.)

The ethics of linkage | 139 comments (137 topical, 2 editorial, 0 hidden)
Display: Sort:


All trademarks and copyrights on this page are owned by their respective companies. The rest © 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.
Kuro5hin.org is powered by Free Software, including Apache, Perl, and Linux. The Scoop Engine that runs this site is freely available under the terms of the GPL.
Need some help? Email help@kuro5hin.org.
My heart's the long stairs.

Powered by Scoop create account | help/FAQ | mission | links | search | IRC | YOU choose the stories!