Kuro5hin.org: technology and culture, from the trenches

Multicasting, IPv6, QoS - the Internet's failures

By jd in Technology
Fri Apr 20, 2001 at 09:59:17 AM EST
Tags: Politics

At first, one might quite reasonably ask, "What failure? These are wonderful technologies!" And that is perfectly true. Wonderful though they are, they're simply not getting deployed in the Real World.


OK, before continuing, I'll back up a little so that those less familiar with these concepts can see what I'm talking about.

First, Multicasting. This is a technology that lets you stream data to multiple people without overloading the network or the server. The server transmits exactly one copy, and the network duplicates the data as and when necessary.

Since streaming data (newscasts, music, concerts, etc.) is now a significant part of the Internet, and one of its biggest problems is overload caused by servers sending a separate copy to each and every recipient in turn, multicasting offers some significant advantages.

(A classic example was the Leonid meteor shower, a few years back. Webcams in areas that could see the shower were hopelessly overloaded, and many collapsed entirely under the demand. With multicasting, you can serve one person or one billion; it's all the same to the computer.)
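For the technically curious, joining a multicast group takes only a few lines of code. Here's a minimal Python sketch of the receiving side (the group address 224.1.1.1 and the helper names are illustrative examples, not taken from any particular product):

```python
import socket
import struct

def make_membership_request(group: str) -> bytes:
    """Pack an ip_mreq structure: the multicast group address plus
    INADDR_ANY, i.e. join on whatever interface the kernel picks."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))

def open_receiver(group: str, port: int) -> socket.socket:
    """Bind a UDP socket and ask the kernel to join the multicast group.
    From here on, one copy sent by the server reaches every member."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

The server never knows who joined; it simply sends one datagram to the group address, and the routers duplicate it only where paths diverge.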

Next comes IPv6. This is a complete rewrite of the Internet Protocol, providing support for much larger addresses, transparent migration from one network to another, automatic configuration, flow control, security, a simpler packet structure (leading to faster communication), "anycasting" (where you can request a service without knowing, or caring, what the server's address is) and greater flexibility for programmers.

You don't need to understand every single one of these advantages to notice that there are a lot of them. About the only one likely to matter to the average user is migration. This allows people with handhelds, laptops and other small computers to stay online while travelling, without interruption. As soon as you reach the limits of one ISP, the computer will automatically switch to another, ensuring that all your existing connections stay up and running. (For the technical purist, this is Mobile IP, with IP Migration.)
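To give a sense of the scale of those larger addresses, here's a short sketch using Python's standard ipaddress module (the two addresses are documentation examples, not real hosts):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # a 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # a 128-bit IPv6 address

# IPv4 offers 2^32 addresses; IPv6 offers 2^128, a factor of 2^96 more.
growth = (2 ** v6.max_prefixlen) // (2 ** v4.max_prefixlen)
assert growth == 2 ** 96
```

That surplus is what makes "an address for every device, always on-line" thinkable at all.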

QoS is the most interesting of all, though there isn't a particularly good website on it, at least as far as I know. QoS basically tries to distribute available network resources among applications, so that no one application can hog the network, and applications that need extra resources are guaranteed them.

Typical QoS mechanisms include CBQ (Class-Based Queueing - allocate resources by the type of service used) and RSVP (the Resource Reservation Protocol - reserve a certain percentage of network resources for the use of a specific application).
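As a rough illustration of how an application asks the network for better treatment, here's a hedged Python sketch that marks a socket's outgoing packets with a DSCP code point (EF, "Expedited Forwarding", is one common choice for latency-sensitive traffic; whether routers actually honour the marking is entirely up to the network operator):

```python
import socket

# DSCP occupies the upper six bits of the old IP TOS byte, so the
# Expedited Forwarding code point (decimal 46) becomes 46 << 2 = 184.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

def open_marked_socket() -> socket.socket:
    """A UDP socket whose outgoing packets carry the EF code point."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    return sock
```

This is the application side only; the CBQ or RSVP machinery in the routers decides what, if anything, the marking buys you.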

So far, so good. So why are these failures?

This is where we move from the technical to the political. The main reason these aren't in common use is precisely that they allow more efficient use of the Internet.

X-Files! No, not quite. Let me explain. Multicasting means that one server could transmit radio-style or TV-style broadcasts to an unlimited number of users. Since the load no longer scales with the size of the audience, there is no reason anyone would need to buy fast connections from ISPs for this kind of work.

As a result, deploy multicasting and you kill one of the bigger sources of income for ISPs. Further, since there's no way of determining how many people are receiving, you can't bill by access.

This last point has a flip-side, too. Since there IS no way of determining how many people are receiving, there's no means for advertisers to know how large their audience is. They'll know if it's zero or not zero, but that's it. Result: since many such netcasts depend on advertisers to survive, broadcasters have absolutely NO incentive to obliterate their bank balance just to provide a superior service to more people.

IPv6 is moribund for fairly similar reasons. The very strengths of IPv6 are killing it in the eyes of the people who need to buy into it, if it is to survive. Mobile IP? Switching ISPs??? You think that any ISP in its right mind is going to encourage you to buy a rival service?!

Then, there's anycasting. If you're not tied to a specific resource, but can scan the net for the one best-suited to your needs and location, where is the benefit in paying some ISP for some specific one that you might end up never using, because it's just not suitable?

Simpler headers & simpler network topology mean simpler, faster routers. Cisco is having a hard enough time as it is, without their top-of-the-line cash cows being turned into hamburger.

Built-in security? That is going to go down well with companies selling expensive crypto technology, isn't it? IPv6 builds in IPsec, which in turn can use 3DES - reasonably fast and, at least at the present time, practically unbreakable.

Now we move on to the last part: QoS. By now, you should see that most ISPs and Internet backbone providers would rather be eaten by lions than provide superior-quality service for less money. That's not out of some kind of paranoia; it's simply that these people are in business to make money, not hand it out.

And THAT is the point of this entire article. Those in a position to deploy or promote superior technology are ALSO required by shareholders and the profit motive to not do so. Ignoring the money factor would be suicide for any company, which (in turn) makes any new technology they deploy meaningless, because they're no longer there to give anyone access to it.

This leads me to the ultimate question -- is the Internet too important to leave to the private sector?

Poll
These technologies...
o Should be deployed immediately. Call the National Guard! 30%
o Are important, if the Internet is to grow. 51%
o Are too prone to abuse and are much harder to censor. 0%
o Are of no practical value. Keep things as they are. 0%
o Are so much piffle. I say we re-install IPv1 or Token Ring. 5%
o Were developed by aliens. That's why they keep crashing. 2%
o Are Media Fiction, intended to sell more Linux Journals. 4%
o Would eliminate the Slashdot Effect. Go for it! 5%

Votes: 72

Related Links
o Multicasting
o IPv6
o Also by jd


Multicasting, IPv6, QoS - the Internet's failures | 51 comments (38 topical, 13 editorial, 0 hidden)
Too important for the private sector? (3.33 / 3) (#3)
by daystar on Thu Apr 19, 2001 at 09:46:07 AM EST

Do you generally get better or worse service from government-run organizations, or private companies? The private sector generally does a pretty good job of giving people what they want. I don't know how you can conclude that internet technologies would develop/be adopted faster if more non-technical people (in the form of government) were involved.

--
There is no God, and I am his prophet.
Hmmm. (3.00 / 2) (#6)
by jd on Thu Apr 19, 2001 at 11:00:43 AM EST

Let's see...

  • Amtrak vs British Rail or Euro Rail.
  • Overpriced health insurance that's been the subject of endless inquiries vs. the NHS
  • AOL vs. JANET
  • UNIX vs. MULTICS
  • BASICA vs. ADA
  • 757 vs. X-34
  • K-Mart telescope vs. the Hubble Telescope

Let's also remember that the Internet =was= originally a Government project, and was only handed completely over to the private sector relatively recently.

I'd say that, when/if Governments put their minds together, small & limited as they are, they're capable of producing something infinitely superior to anything Bill Gates has turned out.

The Government is not the enemy. It's simply an organization that someone compiled with -g3 rather than -O3. Yes, there are stupid people in politics. There are also greedy people in politics. BUT, unlike some quango corporation, you do actually have a choice as to whether they STAY there. I don't recall companies having term limits or public voting rights.

You also have the choice of running, yourself. You want to see some technological marvel come to pass? Can't afford to set up your own company? Think your idea would solve all America's problems? Willing to actually see what America thinks? You are? Then what's your problem?

[ Parent ]

I still dont get it (3.00 / 2) (#12)
by eLuddite on Thu Apr 19, 2001 at 12:41:25 PM EST

When the NSF bowed out, the Internet became a cloud of private networks. If you regulate technologies such that they become a barrier to entry for these private networks, you must want to do so for a reason:

  • ip6 - more ip space for more private networks. I.e., it's coming, gov't or no.
  • multicasting - free shit subsidized by private networks who would rather assume the considerable expense of laying fiber so that you can enjoy a superior point-to-point connection. Why is multicast a win if they're paying for unicast protocols? I realize multicasting gets more mileage out of current bandwidth, but that bandwidth is privately held and I'd rather see efforts at surpassing the limits of 'current,' myself.
  • QoS - Sounds like a competitive advantage in the making.
  • Security - I agree with regulation on this point. You should be liable for the damage caused by your insecure network and network stacks should be certifiably secure.
Apart from the (debatable) merits in the last bullet point, where is there a requirement for or an advantage in govt regulation of internet protocols and applications?

---
God hates human rights.
[ Parent ]

its not for "their" benefit its for soci (5.00 / 1) (#23)
by akb on Thu Apr 19, 2001 at 07:35:15 PM EST

Why is multicast a win if they're paying for unicast protocols?

It's not for the network owners, which is exactly the point the author is making.

Why is the Internet revolutionary? Because it has the potential to put everyone on the same footing with respect to information distribution. Text and images are essentially flat-rate on the Internet; the barriers to reaching a very large audience are not very high. Audio and video on the Internet, on the other hand, cost about $70 per user, which is a barrier to reaching an audience of any size. Removing this barrier with multicast would be of great value to society.

The author is asking whether there is sufficient incentive available to the private sector to remove this barrier. The Internet is valuable because it functions as a public space, despite it being privately owned.

Collaborative Video Blog demandmedia.net
[ Parent ]

Incentive can be there... (5.00 / 1) (#39)
by jason on Fri Apr 20, 2001 at 06:16:55 PM EST

The incentive is getting there, too. Two good starting points (imho) are Shenker, Clark, Estrin, and Herzog's Pricing in Computer Networks: Reshaping the Research Agenda and Chuang and Sirbu's Pricing Multicast Communication: A Cost-Based Approach. The latter recommend a group-size-based pricing scheme up to a saturation point. The growth rate of group size will be astronomical the first couple of years multicast is widely available. That type of growth looks really good to investors. Unfortunately, the initial costs of billing and accounting are likely to be high, but they should grow slowly with group size.

There's incentive, but it naturally carries the risk that multicast for entertainment (the money-maker, I'm sure) will flop. A few entrepreneurs doubtless will try; we'll see what happens.

(BTW, if you've never tried it, Research Index is a wonderful tool.)

[ Parent ]

yeah its for society (none / 0) (#46)
by OzJuggler on Sat Apr 21, 2001 at 09:09:59 AM EST

In the Western world (and other financially wealthy countries) we generally don't have to worry too much about whether we will starve to death or if NATO will bomb our family home, so there are plenty of more idealistic things that we have time to think about.

One of these things is the individual equality that is offered by the Internet.

When water and electricity were deemed necessary to everyone's (shared) ideal way of life, they both became fundamental commodities - water and electricity are infrastructure that is now taken for granted and upon which people depend for their lives.

I would argue that the same is becoming true of the Internet. As it becomes more important to have peer-to-peer electronic communication, so the Internet (and access to it) will (and should) become basic infrastructure upon which people's freedom and integrity will depend.

In many places, the government still runs common infrastructure such as roads, water, and electricity - so why not with Internet access?
It makes sense, and one day it will happen.

-OzJuggler.
"And I will not rest until every year families gather to spend December 25th together
at Osama's homo abortion pot and commie jizzporium." - Jon Stewart's gift to Bill O'Reilly, 7 Dec 2005.
[ Parent ]

Multicast and QoS are more common than you think (none / 0) (#49)
by drhyde on Tue Apr 24, 2001 at 06:24:51 AM EST

> multicasting - free shit subsidized by private
> networks who would rather assume the
> considerable expense of laying fiber so that you
> can enjoy a superior point to point connection.
> Why is multicast a win if they're paying for
> unicast protocols? I realize multicasting gets
> more mileage out current bandwidth but that
> bandwidth is privately held and I'd rather see
> efforts at surpassing the limits of 'current,'
> myself.

The company I work for does broadband satellite internet services - using the satellite both ways, so the latency is kinda sucky, but if you're in a remote area that will never get DSL or cable, we're the only game in town. Anyway, despite us offering "broadband" services, bandwidth really is very limited. So when we do remote upgrades on the *nix boxes that do the magic between the clients' networks and their satellite dishes, we use multicast. We also use CBQ to ensure that we always have remote access to those boxen regardless of how much porn the client is downloading, and things like that. OK, we don't use IPv6 :-)

I believe that my DSL provider at home also uses multicast for similar purposes, and plenty of companies have some kind of QoS on their routing boxes - particularly on the smaller lines they use for backup purposes when they have a plumber argue with the E1.

Just because you don't see these things being used as an ordinary user doesn't mean that they're not being deployed. They most definitely are being deployed, and are being used to solve thorny problems which could not easily be solved otherwise.

[ Parent ]
There is no conspiracy here (3.20 / 5) (#5)
by Fireblade on Thu Apr 19, 2001 at 10:40:32 AM EST

Implementation and adoption of these technologies is going to take time. If it's any consolation, your list of "failures" reads an awful lot like the list of must have features for the next release of our router.

I'd agree, in part. (4.33 / 3) (#7)
by jd on Thu Apr 19, 2001 at 11:08:16 AM EST

Yes, technology takes time to filter through to the private sector. On the other hand, multicasting was introduced in the 70's, and IPv6 was first proposed in the mid 80's.

That means it takes in excess of 30 years for a protocol to reach the "common folk". That's a long time for what is nothing more than an adjusted bit-stream.

With hardware, I can understand that. That's a perfectly reasonable trickle-down time for virtual reality, CAVE systems, parallel processing arrays, etc. These things aren't developed overnight, and have major consequences if they fail.

On the other hand, if you lose a few packets every so often from an unreliable datagram protocol, where packets are going to be lost all the time, anyway, it's not going to seriously scar your life. You can afford to deploy early.

Indeed, if you look at the software that has been successful (e.g. Linux), it's software that follows the philosophy of "Release Early, Release Often". Those programs that -don't- follow that guiding principle are either VERY slow on the uptake (e.g. the *BSDs) or fail completely & utterly (e.g. early Mac OS).

Without feedback, you're dead. Without mindshare, you're dead. Without growth, you're dead. And none of the technologies I've outlined have any of those.

[ Parent ]

But there is growth (3.33 / 3) (#10)
by Fireblade on Thu Apr 19, 2001 at 12:01:42 PM EST

Without feedback, you're dead. Without mindshare, you're dead. Without growth, you're dead. And none of the technologies I've outlined have any of those.

This is where I disagree with you. The router that I am working on is targeted for, and only for, the major telecomm service providers. These are the people that are driving our product requirements and what they want, nay, what they demand, is QoS, QoS and QoS. We have had to supply timelines for the introduction/completion of each technology that you have mentioned in order to even be considered by these companies.

This time, the check really is in the mail.

[ Parent ]

timelines? (2.50 / 2) (#14)
by Michael Leuchtenburg on Thu Apr 19, 2001 at 01:15:23 PM EST

Can you tell us what said timelines are? I want to know when I'll be able to buy an ipv6 connection from any-odd ISP. :)

[ #k5: dyfrgi ]
[ TINK5C ]
[ Parent ]
RE: timelines (3.00 / 2) (#16)
by Fireblade on Thu Apr 19, 2001 at 03:16:27 PM EST

We have promised IPv6 by this fall. Everything else jd mentions by July of this year. From my POV, I wouldn't expect IPv6 to be available in the private sector any time soon, though, as that one is lower on the list than the others.

[ Parent ]
Try Japan. (4.00 / 1) (#40)
by jason on Fri Apr 20, 2001 at 06:34:06 PM EST

ISPs in Japan (IIJ at least, NTT?) offer native IPv6 now. Not tunnelled. There's one in Australia, too. Or go to the IPv6 Forum and look at the deployment section. Most are academic, but there are a few commercial offerings. I just wish Sonic (or even PacBell) would offer native v6.

[ Parent ]
I still don't understand why you're surprised... (4.50 / 2) (#25)
by tankgirl on Thu Apr 19, 2001 at 11:57:49 PM EST

...by the 30-year adoption timeline. How long did it take the _Internet Protocol_ to reach the "common folk"? Thirty-plus years (see my comment below about the book "Where Wizards Stay Up Late"). BTW, early introductions of these technologies were not scalable. It took quite a while to get these issues straightened out, so early deployment wasn't really an option.

Also, your example of Linux doesn't apply: an end user can install it on _a single machine_ and make use of it. These technologies are more like languages; if you don't speak French you can't talk to the French guy in the corner (unless he happens to be bilingual in your language). Everyone has to speak multicasting for a packet to be properly routed on the MBONE and make a difference. So the best way to cultivate its use is to remind the network operators that there's a demand for it. It's like that old American shampoo commercial - where you tell two friends, and they tell two friends, and so on....

Tell your ISP "I want my IPv6, Multicasting, and QoS." Get your friends to do the same. IMHO, that's what will help to get this stuff introduced into the mainstream in a more timely manner.

cheers,
jeri.
"I'm afraid of Americans. I'm afraid of the world. I'm afraid I can't help it." -David Bowie
[ Parent ]
Crap, I tell you! (4.00 / 10) (#9)
by trhurler on Thu Apr 19, 2001 at 11:19:05 AM EST

Your whole hypothesis is ridiculous. The reason these technologies haven't seen widespread deployment is that they're expensive and time consuming. Outside of the open source world, IPv6 isn't really ready for primetime, and even there implementations are still not really polished.

Multicasting is largely a matter of cooperative agreements; business model is hardly the problem - maybe you've heard of television, radio, and so on? The problem is, it requires a lot of people to sign on the dotted line, and those people have to see some benefits. The current shakeout in ISPs will help; it reduces the number of people involved and increases the value of signing the agreements for those who are involved.

QoS is dying on the vine largely because it isn't necessary. Most sites only run one kind of traffic in any significant quantity, and that's usually http and/or https. The few that run other high bandwidth stuff are usually doing mirrored ftp/http access to a subrange of their http document roots, and couldn't care less which protocol gets more use; it's all one big thing to them. It is easy to say "well, but it COULD be useful!" but nobody wants to pay for what customers won't buy.

Now then, about this private sector thing: you ARE aware that the whole internet is privately owned, pretty much - right? If you start regulating it and someone doesn't like your rules, he's free to just take his toys and go home. Or maybe you mean to nationalize every business that's on the net?:) Simply put, if you regulate the internet in the way you're proposing, it won't be ten years before the internet will be a dead wasteland, and all the companies and people that matter have created a different network built on the same protocols that thrives. If governments prevent that, then the variety you see today will end up being the same corporate shit we have on television and radio. Thanks, g-man. I really appreciate the way you create artificial barriers to entry so that things can SUCK MORE.

--
'God dammit, your posts make me hard.' --LilDebbie

Things take time. (4.57 / 7) (#11)
by Merekat on Thu Apr 19, 2001 at 12:39:52 PM EST

There is no vast conspiracy to deny people access to these new technologies. They are also not really failing either. It just takes time to implement. In the very early days of the web, it was expected that if you moved a document, you made an effort to let people know you had done so to avoid broken links. What I'm getting at is that although the technology was there, for smoothest running it needs human being communicating behind it.

Looking first at multicasting. The technological capabilities are there in current, deployed Cisco IOS and have been for quite some time. So hurray, you can configure your router for multicast. But who are you going to peer with? Where are you going to receive those broadcasts from, or send them to? It really is of little use unless you group into communities. And since the Internet is a little bigger than it used to be, this all takes time. In the past, everybody knew everyone else and things could get done quickly; this doesn't happen any more.

On to IPv6. Just as with IPv4, unless it is for internal use only (in which case it is certainly not a conspiracy by the ISPs), you cannot just pluck IP addresses from thin air. In Europe at least, you need to receive addresses from a registry, which imposes certain technical requirements - more so if you aim to be a regional registry, as most large ISPs would like to be. Fair enough - no point doling out addresses to people who are not going to use them. This means taking time out for trialling etc., which is not often easy, especially for commercial ISPs. "Excuse me, Mr. Paying Customer, would you like to join in a project for the advancement of an internet protocol for no guaranteed monetary return?" It is easier for national research networks, but it still takes time and manpower, something most ISPs are short of.

QoS - I have little to say about this that hasn't already been said except to emphasise that for telcos, as well as just ISPs, this is a very big deal indeed.

And on another note, if you are reading this with mixed or topical comments only, specifically set your preferences to all comments, because there is some interesting topical discussion stuck in editorial threads.
---
I've always had the greatest respect for other peoples crack-pot beliefs.
- Sam the Eagle, The Muppet Show

Conspiracy, heh, lack of Clue more likely. (4.80 / 5) (#13)
by tankgirl on Thu Apr 19, 2001 at 01:06:18 PM EST

I find your arguments ridiculous, especially for Multicasting.

The Internet is like a highway system for cars in the real world. Big packets are the sixteen-wheeler semis and little packets are the econo cars. Who creates more wear and tear on the roads? The big guys.

Streaming media uses big packets that tax a network's backbone much more. The backbone of the Internet is an "irregular mesh topology", where certain 'routes' are more popular than others. Keeping duplicate traffic to a minimum on these popular routes with multicasting would _save_ any backbone provider tons of money in the long run. The real problem is Clue and adoption: for multicasting to be useful, all the major backbones (Sprint, UUNet, Genuity/BBN, Cable&Wireless, etc.) would have to adopt it. Then smaller providers would have no trouble implementing it. The larger providers find it harder to adopt because it requires a large pool of technicians with Clue to start with, and as we all know there's a limited amount of Clue in the universe (an entirely different topic ;-). This comes up at IETF and NANOG all the time. Witness all the RFCs on it with a search for keyword "multicast" here.

Expect it to be adopted _someday_, because it's the only way the backbone of the Internet can survive in the long term and support the growing user load. I suggest reading Where Wizards Stay Up Late for a better perspective on why it takes so long to get anything into the mainstream, while the entity you're attempting to upgrade continues growing exponentially. Education, in this environment, becomes a continuous necessity.

To go back to my original analogy, I can't understand why you think any large network backbone would _want_ an end node transmitting ten 'semis' carrying the same data over its single most popular highway. It doesn't make sound financial sense from a backbone perspective. That's the part that costs ISPs the most, as it's a shared resource. When a streaming media vendor joins an ISP's miniature community that doesn't support multicast, it creates a 'tragedy of the commons' situation where no one benefits.

jeri.
"I'm afraid of Americans. I'm afraid of the world. I'm afraid I can't help it." -David Bowie
Why multicast? (2.75 / 4) (#17)
by dennis on Thu Apr 19, 2001 at 03:20:42 PM EST

I guess I just don't get it--what's the point of multicast? I mean, I understand that you can simultaneously transmit to lots of people much more efficiently. But aside from the occasional special event (meteor shower, etc), who cares? We already have simultaneous broadcast technology--it's called TV. The nice thing about the Internet is you can get your content on demand. You can multicast a radio station if you want, but it's easier to just turn on the radio--the real value is in Napster, where you ask for the song you want and get it right then. Multicast doesn't help you do that.

Sure, multicast on the Net gives you access to more radio and tv stations. Maybe it'll be good for sports events. But for the most part, asynchronous communication just seems more useful to me.

Multicast works with _any_ streaming media... (4.00 / 2) (#18)
by tankgirl on Thu Apr 19, 2001 at 05:08:35 PM EST

...and as more people adopt video conferencing, Internet radio (more common than you think among cubicle workers), streaming video, etc., our Internet backbone is taxed further. So when your little packet requesting a web site heads out to the Internet at large, it has to fight heavier and heavier traffic. Via the 'trickle-down effect', you will someday be affected (if you're not already :) by other people's web habits.

cheers,
Jeri


"I'm afraid of Americans. I'm afraid of the world. I'm afraid I can't help it." -David Bowie
[ Parent ]
media democracy (4.75 / 4) (#22)
by akb on Thu Apr 19, 2001 at 07:09:45 PM EST

The Internet is hailed as revolutionary because it allows anyone to be a publisher. You don't have to have a billion dollars or prostitute yourself to one of the shrinking number of media companies; anyone with $20/month and a computer can make their voice heard to an unlimited number of people who also have $20/month and a computer. Viewpoints that can't be heard, geniuses that are ahead of their time, people helping people, information that wants to be free - this is the Internet I know and love.

But wait. Dang, it's only text and pictures. Audio and video? Sorry. That'll run you about $70 per listener. Buy a TV station, you say? That'll cost you $30 million.

I'm involved with a group that does alternative media. During the recent presidential inauguration we did an audiocast (archive clip, realaudio) that peaked at 7,000 listeners; we were rebroadcast on FM in the Netherlands, and we got a call-in from South Africa. It was only possible because the bandwidth was donated to us; otherwise it would have cost tens of thousands of dollars. Multicast would let anyone reach unlimited audience sizes for the same cost as text and pictures.
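A back-of-the-envelope Python sketch of why that is, with made-up numbers (the bitrate and the per-Mbps transit price are illustrative assumptions, not akb's actual figures):

```python
def unicast_cost(listeners: int, kbps: int, dollars_per_mbps: float) -> float:
    """Monthly transit cost when the server sends one stream per listener."""
    return listeners * kbps * dollars_per_mbps / 1000

def multicast_cost(kbps: int, dollars_per_mbps: float) -> float:
    """With multicast, the server sends one stream regardless of audience."""
    return kbps * dollars_per_mbps / 1000

# 7,000 listeners at 24 kbps, at a hypothetical $300 per Mbps per month:
print(unicast_cost(7000, 24, 300))   # 50400.0
print(multicast_cost(24, 300))       # 7.2
```

The unicast bill scales with the audience; the multicast bill doesn't, which is the whole point.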

Collaborative Video Blog demandmedia.net
[ Parent ]

The killer Multicast app: Games (3.50 / 2) (#44)
by Misagon on Fri Apr 20, 2001 at 09:28:16 PM EST

Multicast would be a real bandwidth saver for realtime computer games. Most realtime action games (think Quake) are client-server based, with the server in the loop because it is the simplest way to keep people from cheating (not that people aren't trying anyway). This means that each move a player makes first has to go to the server and then out to each of the other clients. If the last part could be done using multicasting, bandwidth requirements would be cut almost in half.
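A quick Python sketch of that saving, with made-up numbers for player count, update size, and tick rate:

```python
def server_send_rate(players: int, update_bytes: int, tick_hz: int,
                     multicast: bool) -> int:
    """Bytes per second the server must transmit for state updates.
    Unicast: one copy per player per tick. Multicast: one copy per tick,
    duplicated by the network only where paths diverge."""
    copies = 1 if multicast else players
    return copies * update_bytes * tick_hz

# 32 players, 200-byte updates, 20 ticks per second:
unicast = server_send_rate(32, 200, 20, multicast=False)  # 128000 B/s
mcast = server_send_rate(32, 200, 20, multicast=True)     # 4000 B/s
```

(The inbound client-to-server traffic is unchanged, which is why the overall saving is "almost half" of the server's total traffic rather than a factor of 32.)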

I have visited a company that is actually trying to market its own proprietary unicast/IPv4-based solution for multicasting to ISPs and game developers - with its own custom routers and protocols. It would be much better if we all used IPv6, because then multicasting would be much simpler - and open.
--
Don't Allow Yourself To Be Programmed!
[ Parent ]

It's not just 'special' media events (3.50 / 2) (#48)
by briandunbar on Sun Apr 22, 2001 at 12:47:39 PM EST

*Any* kind of data can be multicast.

I'm aware of places that are using multi-cast to beam software / application updates to thousands of desktop computers.


Feed the poor, eat the rich!
[ Parent ]

IPV6 (3.50 / 2) (#21)
by enterfornone on Thu Apr 19, 2001 at 06:30:08 PM EST

According to net engineers I work with, IPv6 isn't taking off because Cisco refuses to support it. Cisco's reason is that the routing tables require far more memory and processing power than they can currently put into their equipment.

--
efn 26/m/syd
Will sponsor new accounts for porn.
Re: IPV6 (3.50 / 2) (#30)
by UrLord on Fri Apr 20, 2001 at 07:48:17 AM EST

Check this out: http://www.cisco.com/warp/public/732/ipv6/ . I just found it a second ago while looking at links from another post. I haven't read all of it yet, though.

[ Parent ]
Real-life uses. (4.66 / 3) (#24)
by jason on Thu Apr 19, 2001 at 11:34:59 PM EST

Nokia is investing heavily in IPv6. Their next-gen cell network is IPv6 with Mobile IP. And you're missing the point on selling service... The AAA work will allow the local provider to charge the roaming agent. Even Microsoft's on board now (they also offer an open 6to4 gateway). And the built-in security standards are going over well for VPNs, etc. They've got the designs and the expertise, so they can entice other manufacturers to buy just that. Plus they can entice customers with standards.

And multicast is used inside many companies, etc. in multicast islands. That's how multicast usage seems to be shaping up... Reliable multicast is in the early implementation stages, and many `real' uses need it. There are definitely some great ideas out there, and they're being implemented. BTW, you can't track the size of a television audience, but tracking or estimating multicast group size is necessary for many multicast protocols. (Follow links through google, which can also take you to ways for ISPs to price multicast.)

The single main issue holding up multicast and IPv6 adoption is router state. Remember that IPv6 protocols use multicast heavily... Keeping track of multicast groups is a technological pain. That's why Cisco has been officially lukewarm (while dedicating non-trivial R&D resources to the issues).
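To make the router-state point concrete, here is a toy model of the per-group forwarding state a multicast router must hold. Every (group, outgoing-interface set) entry lives in the router's fast memory, and it grows with the number of active groups — which is exactly the scaling concern above. This is a sketch of the idea, not any vendor's actual data structure.

```python
# Toy model of per-group forwarding state in a multicast router.
class MulticastRouter:
    def __init__(self):
        # group address -> set of interfaces with at least one member
        self.groups = {}

    def join(self, group, iface):
        """An IGMP membership report for `group` arrived on `iface`."""
        self.groups.setdefault(group, set()).add(iface)

    def leave(self, group, iface):
        self.groups.get(group, set()).discard(iface)
        if not self.groups.get(group):
            self.groups.pop(group, None)  # prune state when the group empties

    def forward(self, group, in_iface):
        """Interfaces onto which a packet for `group` gets replicated."""
        return self.groups.get(group, set()) - {in_iface}

r = MulticastRouter()
r.join("239.1.1.1", "eth1")
r.join("239.1.1.1", "eth2")
print(sorted(r.forward("239.1.1.1", "eth0")))  # ['eth1', 'eth2']
```

Multiply that dictionary by every group crossing a backbone router, with joins and leaves churning constantly, and the memory/CPU objection becomes clearer.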

Quality of service is mixed into the above through the work I've cited and others. Best effort really seems good enough for huge numbers of uses, up to the congestion control problem. That's a focus of current research, and it looks solvable in many ways. Hard-core, guaranteed QoS through DiffServ works just fine and is available. Router state is an issue here, but DiffServ will be used far less than multicast.

None of these are failures. They've been pushing the boundaries of available knowledge and technology. You should (partially) credit them with the pushes for more intelligent routing, application-level framing, etc.

I'm not saying they'll be everywhere tomorrow, but IPv6, multicast, and QoS are on the way. If you want to hurry them, remember that the IETF is a volunteer organization... However, you need to learn a good deal first. Start by reading the current drafts in these areas, and then skim the mailing list archives. Many of your, um, theories will go away. (Many. Not all. There is a bit of truth in there...)

Have you....? (4.40 / 5) (#27)
by mystic on Fri Apr 20, 2001 at 03:59:35 AM EST

Did you talk to anyone who is actually working in any of the fields that you mentioned in the post? Did you ask them what they think about these technologies and why they have not been put to wide use?

If you had, you would know how damn difficult it really is. I know this at least for IPv6 and QoS. The issues related to IPv6 are enormous: the mapping involved, the translations needed for IPv6 and IPv4 networks to coexist, the algorithms needed for QoS, the ingress and policing algorithms that need to be in place... all of these are far from developed.

These things will flower when the time comes, you cannot hurry them. Hurrying them will just create more problems. Conspiracy? Nope. Difficulty? Yes.

Counting audience is not a problem (3.00 / 2) (#32)
by poor thing on Fri Apr 20, 2001 at 11:12:37 AM EST

This last point has a flip-side, too. Since there IS no way of determining how many people are receiving, there's no means for advertisers to know how large their audience is. They'll know if it's zero, or not zero, but that's it. Result: since many such netcasts depend on advertisers to survive, they have absolutely NO incentive to obliterate their bank balance, just to provide a superior service to more people.
Well, TV and radio can't exactly determine the size of their audience either, but it doesn't seem to be a problem for them. Probably the TV-style ad model needs a huge audience to work, which isn't found on the Net now. And the overall IPv6 infrastructure is not ready yet.

More reasons why... (4.00 / 2) (#33)
by DrEvil on Fri Apr 20, 2001 at 12:14:15 PM EST

I might also add: the users will have to find the stream somehow, right? Whether that's a click-through on a website or some other means (like the station buttons in RealPlayer, etc.), it can easily be tracked! I don't find this to be as big a problem as it seems. And, as the above poster mentioned, there is no true way of finding out what people watch from a television signal without surveys or the like, so this will probably measure popularity better than a TV broadcast would.

[ Parent ]
Excuse me. I'd just like to point out... (5.00 / 3) (#34)
by afreeman on Fri Apr 20, 2001 at 01:38:23 PM EST

X-Files! No, not quite. Let me explain. Multicasting means that you could have one server transmit radio-style or TV-style broadcasts to an unlimited number of users. Since it's not restricted by the speed of the connection, there is no reason why anyone would need to buy fast connections from ISPs, for this kind of work.

The author obviously hasn't understood a damn thing about multicast! Listen carefully:

Multicast takes load off the backbone. It makes no difference to the bandwidth required at the local loop!

Did I say that slowly enough? In other words, multicast makes no difference to ISPs, but it could make a lot of difference to telcos struggling to make their infrastructure meet the demands of broadband.

As many observers have already pointed out, the problem is that multicast requires end-to-end support from the internet fabric, which can only be guaranteed within certain subnets within the internet. The MBone is an academic example.
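The backbone-vs-local-loop distinction is just arithmetic, so here it is with made-up numbers (a 300 kbit/s stream to 10,000 viewers — purely illustrative figures):

```python
# Back-of-envelope check: multicast collapses the load at the server
# and the backbone links near it, but each viewer's local loop still
# carries one full copy of the stream either way.

STREAM_KBPS = 300      # hypothetical stream bitrate
VIEWERS = 10_000       # hypothetical audience size

# Unicast: the server emits one copy per viewer.
unicast_server_kbps = STREAM_KBPS * VIEWERS   # 3,000,000 kbit/s = 3 Gbit/s

# Multicast: the server emits exactly one copy; routers replicate it
# only where paths to listeners diverge.
multicast_server_kbps = STREAM_KBPS           # 300 kbit/s

# The receiving local loop is unchanged: one full stream per viewer.
local_loop_kbps = STREAM_KBPS

print(unicast_server_kbps, multicast_server_kbps, local_loop_kbps)
```

So the savings accrue to whoever runs the server and the core, not to the subscriber's last-mile connection.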

aF

"Men forget, but never forgive. Women forgive, but never forget."
You mean receiving local loop.... (3.75 / 4) (#35)
by icepick on Fri Apr 20, 2001 at 02:14:46 PM EST

You get bandwidth savings at the server all the way until it breaks out near the clients. Thus the ISPs get less money from the server.

They don't get money from the client so they don't care.



[ Parent ]
Think about the economics (5.00 / 3) (#41)
by sigwinch on Fri Apr 20, 2001 at 06:44:19 PM EST

You get bandwidth savings at the server all the way till it breaks out near the clients. Thus the ISP's get less money from the server.
Where is it written that "ISPs shall charge only for bandwidth"? There are lots of ways to charge for multicast:
  1. Office work teams need lots of face time. The only way to get that currently is to centralize the business at a single location. The main thing keeping us away from Snow Crash-style network collaboration is the lack of ubiquitous broadband multicast. The "ubiquitous broadband" part is being solved as I type, but without multicast it'll just saturate the backbone routers. When ISPs offer cheap multicast, whole new vistas of paying customers will open up.
  2. ISPs can just plain refuse to route multicast packets unless you pay their fee. I.e., $10k/month for your multicast packets to reach the world. Or they could charge high bandwidth rates for multicast traffic (say, 100X unicast). The latter case is nice, as the ISP can keep current revenues but their transport expenses fall through the floor.
  3. There are some bandwidth costs for multicast (router-level negotiation does use a few packets). ISPs could bill $5/octet for this traffic. ;-)
  4. ISPs could charge royalties to multicast large data feeds.
  5. Large multicast transmitters (movie studios, TV studios, software distributors, radio stations) will need highly-reliable multicast. This is a premium service that ISPs can make a lot of money on.
  6. Every data transaction has two ends: transmitter and receiver. They can just raise receiver bandwidth prices to compensate.
  7. A lot of ISPs (e.g., AOL) are receiver-heavy. The lion's share of streaming content over AOL is probably from somewhere else to AOL's customers. Ergo, anything that reduces AOL's transport costs will be deployed.
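Point 2's arithmetic is worth spelling out. With invented numbers (a 100x surcharge and 100 receivers, chosen so revenue comes out flat), the ISP's billings stay the same while the volume it must carry on core links drops by the receiver count:

```python
# Illustrating point 2 with made-up numbers: the ISP charges 100x its
# unicast rate for multicast; the sender's single multicast stream
# replaces one unicast copy per receiver.

RATE_CENTS_PER_GB = 10        # hypothetical unicast price
MULTICAST_MULTIPLIER = 100    # the "100X unicast" surcharge above
STREAM_GB = 50                # size of one copy of the broadcast
RECEIVERS = 100

# Unicast: bill (and transport) one copy per receiver.
unicast_revenue_cents = RATE_CENTS_PER_GB * STREAM_GB * RECEIVERS   # $500
unicast_transport_gb = STREAM_GB * RECEIVERS                        # 5,000 GB

# Multicast: bill one copy at the premium rate, but carry roughly one
# copy on the core links.
multicast_revenue_cents = RATE_CENTS_PER_GB * MULTICAST_MULTIPLIER * STREAM_GB  # $500
multicast_transport_gb = STREAM_GB                                              # 50 GB

print(unicast_revenue_cents, multicast_revenue_cents)
print(unicast_transport_gb, multicast_transport_gb)
```

Revenue is held exactly flat whenever the multiplier matches the receiver count; with a bigger audience the ISP makes less per viewer but its transport costs fall even faster.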

--
I don't want the world, I just want your half.
[ Parent ]

I think you missed his point... (4.00 / 3) (#43)
by Alhazred on Fri Apr 20, 2001 at 09:26:47 PM EST

The point was that BOTH the backbone AND the local loop are utilized to a much lesser degree with multicast.

Considering that most service providers are now worrying about overcapacity, it's not too surprising that they might think this way.

Not that I agree with the original hypothesis, but...
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]
Another use for multicast (2.50 / 2) (#36)
by evanbd on Fri Apr 20, 2001 at 02:54:05 PM EST

How about just multicasting the graphics for large web sites? Serve the active HTML on demand, but have it reference graphics that get multicast every second or two. Websites with lots of hits would benefit especially. Are there reasons no one has talked about this? Is it even possible, or am I way off base here?

For local bulk transfer, yes. (4.50 / 2) (#38)
by jason on Fri Apr 20, 2001 at 06:03:27 PM EST

See some of the uses in the reliable bulk data transfer design space RFC. The amount of overhead associated with reliable multicast makes it somewhat impractical for j. random site. Typical usage patterns are better served by local caches (passive or proactive). For advertising-like uses, however, it could be quite handy: provide one image on initial connection, then pump out one ad every k seconds. This could also be distributed in an Akamai-like fashion for really huge ad-like things.

Note that changing schedules, news tickers, etc. fit what I'm calling ad-like traffic.
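The "one ad every k seconds" carousel is appealing precisely because the server keeps no per-client state: which ad is on the wire is a pure function of the clock. A sketch, with invented filenames and an arbitrary period:

```python
# The server multicasts one ad every K seconds, cycling through its
# inventory. Every receiver tuned to the group at time t sees the same
# ad -- no per-client state anywhere.

K = 5  # seconds between ads (arbitrary)
ADS = ["ad-sprockets.gif", "ad-widgets.gif", "ad-gadgets.gif"]  # hypothetical

def ad_at(t_seconds):
    """Which ad is being multicast at a given moment."""
    return ADS[(t_seconds // K) % len(ADS)]

# The schedule repeats every K * len(ADS) = 15 seconds:
print([ad_at(t) for t in (0, 5, 10, 15)])
# -> sprockets, widgets, gadgets, then sprockets again
```

News tickers and changing schedules fit the same pattern: the payload rotates on a fixed clock and receivers just tune in.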

[ Parent ]

Multicasting software (4.50 / 2) (#42)
by sigwinch on Fri Apr 20, 2001 at 06:49:48 PM EST

Certain widely-distributed software can even be multicast, such as popular operating system distributions and patches. The trend towards downloaded rented/subscription software will encourage this.

--
I don't want the world, I just want your half.
[ Parent ]

since... (3.00 / 2) (#45)
by Shren on Sat Apr 21, 2001 at 04:45:46 AM EST

I'm sure that most of these threads are stored in some giant kuro5hinisk database, does anyone want to make any bets on when the last IPv4 machine will go offline, to finally be replaced by IPv6, making the net IPv6 across the board?

My guess is 2030.

stupid thing insists upon a subject line... (3.50 / 2) (#47)
by eudas on Sun Apr 22, 2001 at 10:28:34 AM EST

I'll call you an optimist and raise you 20 years, to 2050.

eudas
"We're placing this wood in your ass for the good of the world" -- mrgoat
[ Parent ]
sounds about right... (none / 0) (#51)
by Shren on Wed Apr 25, 2001 at 01:48:52 AM EST

30 years. Hmmm. 50 years. Hmmm. I guess the most scientific way of looking at this is predicting when the educational sector of the computing industry, such as colleges and technical schools, starts pumping out graduates for whom IPv6 is it and IPv4 is dead wood that should be cut. After all, it doesn't happen until people implement it, and you need a lot of IPv6-capable people to IPv6 the net.

I think it might be closer to 50 than 30, in retrospect.

[ Parent ]

Multicasting, IPv6, QoS - the Internet's failures | 51 comments (38 topical, 13 editorial, 0 hidden)