Streaming Across Multicasts

By conraduno in Technology
Fri May 18, 2001 at 09:42:10 AM EST

A technology from Digital Fountain that I read about the other day just blew my mind. They have produced a way of transmitting data across multicasts (networks such as the MBONE). The primary problem with transmitting data across such a network is packet loss: what do you do if client A suffers packet loss but client B does not? You're multicasting, so you can't just re-transmit to A. The solution is ingenious.


Basically, they have figured out a way to encode a chunk of data N bytes in size into a stream M bytes long (M > N). This stream is then streamed out across a multicast network, and in order to reconstruct the original chunk of data all a client must do is retrieve any N bytes of data from anywhere in the stream. These N bytes can come from any point in the stream and do not have to be contiguous, which eliminates the need for packet retransmission: if a client fails to retrieve a packet, it just gets the next one. The packets are encoded so that they are not the data in the original chunk, but rather representations of it. Whitepapers in PDF format are located here.

This technology has been around for a while, but in the past the algorithms for creating these streams only allowed them to be something like 4 or 5 times the size of N, which is not very practical: in a situation with massive packet loss, a client might not be able to retrieve N packets before the stream restarts itself. Digital Fountain's solution allows for much larger stream sizes (I'm not sure how much larger).

The implications of this are huge. For example, I have read that a number of Linux distributions have stopped putting ISOs on the web due to the immense traffic they generate. With this technology, Red Hat/Slackware/(insert your favorite distro here) could broadcast an ISO as a stream, and all one would need to do is retrieve 600 or so megs from that stream to recreate the original ISO. Right now multicast is only being used for video and audio streaming, because they are the only things that can handle a stream distribution method, but the potential for data streams could change the way we look at bandwidth. A software distributor no longer needs a huge pipe; they merely need an MBONE connection.

This is too cool.

Related Links
o Digital Fountain
o here.


Streaming Across Multicasts | 28 comments (27 topical, 1 editorial, 0 hidden)
Very interesting, but not well explained (5.00 / 1) (#1)
by DesiredUsername on Thu May 17, 2001 at 03:25:02 PM EST

If I understand you correctly (and I had to read the first paragraph of the body 4 times and the entire thing twice to get this far) here's the story:

Source sends out a continuous stream of bytes consisting of the Message M repeated N times (stream is therefore M x N bytes long). If the client misses any byte of M on repetition R (less than N), it just waits until R+1 and snags it then.

I have three thoughts:

1) Yes, this is neat and just the kind of thing I'd like to see MORE of on K5. Unfortunately, it doesn't seem all that original to me because of
2) Isn't this just what SETI people do and are looking for?
3) If my understanding *is* correct, this *only* works for things like ISOs. For (live) video and audio it's pretty much useless.

Play 囲碁
I'll take a stab (4.50 / 2) (#3)
by MisterX on Thu May 17, 2001 at 04:07:37 PM EST

I popped over to their site. I can't say I was thrilled with the technical content. Heck, they claim to have a patented technology but there's no reference to the patent number. Now that would be interesting, because I think their claim is this:

  • You wish to receive content C which consists of N packets of data.
  • You connect to the streaming server which is constantly sending the so-called "meta content" packets.
  • Your client receives N packets of meta content data - any N packets.
  • The client plug-in reconstructs C from the arbitrary sequence of received meta content packets.

So, if your content C is 3 packets long, you can connect to the server at any time, retrieve any three packets, and successfully reconstruct C. That's an impressive claim. I'd like to see it working. Unfortunately, the products don't really exist yet - the first public demonstration is next week.

The reason I wanted to see the patent is that I couldn't find an indication of the size ratio of the meta content packets to data packets. I wouldn't understand the math but I'm sure I could glean the information I want.

Also, since the client plug-in is proprietary (I didn't see any open source reference on the site), platform support may be patchy.

Bear in mind, this is their software downloading technology. I can't see how this would work with live streaming data. They have another product they want to sell you for that ;-)



[ Parent ]
Explanation (4.00 / 1) (#4)
by conraduno on Thu May 17, 2001 at 04:21:30 PM EST

Yep, that's what the technology is. Sorry I didn't explain it too clearly; it's somewhat difficult to explain. :P The white papers (linked in the article) contain the technical information you're wondering about; actually, they might be a bit too technical... They start discussing the actual algorithms and the use of Tornado codes based on Reed-Solomon codes, neither of which I am familiar with. And to answer DesiredUsername: yes, this is only for data transmission. Audio and video, which are already inherently stream based, would not benefit from this. But data, which is inherently non-streamable, would benefit tremendously.
[ Parent ]
Sometimes, it's all in the formatting (4.50 / 2) (#6)
by MisterX on Thu May 17, 2001 at 04:50:29 PM EST

that's what the technology is. Sorry I didn't explain it too clearly

Nah... your explanation was accurate. I just laid it out differently in a way that made more sense to me.

it's somewhat difficult to explain

And I'll tell ya why... it's magic. Until I see this actually working, hear of its success from a reliable source or gain access to the patent, it's all techno-babble bullshit designed to sell expensive pretty purple server boxes to morons.

That, my friend, is called "healthy cynicism". Or lack of sleep. Pick one. ;-)

With reference to your main article point about downloading a 600MB ISO: nowhere on the site could I find a statement saying that to get 600MB of content you download 600MB of data. They talk in "packets" and "meta content packets". These are most certainly not comparable.

I'll stick my neck out from my impregnable fortress of ignorance to say this: you'll be downloading more than 600MB of data to reconstruct your 600MB content. You just have to. If you are to reconstruct your content from an arbitrary packet stream there has to be some encoded positional and integrity information. That has to occupy space somewhere. The ratio of data packet size to meta content packet size is vital. If the meta packets are 5% larger than the data packets then they're probably onto a winner. If that figure is 100%, the technology doesn't look so good, eh?

Btw, I'm not a computer scientist, I don't have a degree. I've just been a coder for 20 years. Don't assume I know what the fuck I'm talking about, please. All this maths stuff is way out of my league - I just love speculating!



[ Parent ]
Dr. Dobb's to the Rescue (4.00 / 1) (#10)
by conraduno on Thu May 17, 2001 at 06:40:29 PM EST

Been doing some research, and found this Dr. Dobb's article. According to the article:

A Tornado code may require slightly more than k blocks to reconstruct the original k blocks, but the value of k may be on the order of tens of thousands. It is beneficial to increase the value of k as much as possible.

So it seems that you are correct: you do need more blocks, but not many more. How much is "slightly more" I am not sure, but Dr. Dobb's carries a fair amount of esteem, and I would think that if it were anything over 5% they would have made note of it. Also, Digital Fountain has published numerous white papers on this, and all information I have been able to find on them generally suggests that they are not a vapor-ware company, but the real deal. I guess they presented this at SIGCOMM 98 too.

Anyways, I'm not a computer scientist either; actually, I'm only 18, so you shouldn't assume I know what I'm talking about either. This just looks pretty cool to me, and it seems pretty legitimate ;)
[ Parent ]
Good ole DDJ (5.00 / 1) (#12)
by MisterX on Thu May 17, 2001 at 07:17:56 PM EST

Nice bit of leg-work. Sounds like something I should take a look at next time my brain is fully functional. If this technology works, I'll be impressed. When an open-source client is developed, I'll be happy.

Digital Fountain has published numerous white papers on this, and all information I have been able to find on them generally suggests that they are not a vapor-ware company

I was a bit harsh. I wasn't implying that they were a vapour company. But until I've seen some trustworthy evidence of their technology, that's how I'll think of them.

There's a lot of snake oil sold in the computer industry. There are many morons in the computer industry. Coincidence? I think not!

I guess they presented this at SIGCOMM 98 too

3 years from presentation to product launch. That's almost a reasonable product development time, in an industry which currently seems to view 3-month development cycles as still a touch too long. I like this company better already.

This just looks pretty cool to me, and it seems pretty legitimate ;)

This is why I've held off voting on your story. Initially, my confidence level in the technology and company was pretty low. I like your article but I'll be damned if I'm going to vote up a tech story which contains no real technology in it! Now, my confidence is higher. I'll do some leg-work of my own and hopefully the story will still be around for me to vote on.

Anyways, I'm not a computer scientist either; actually, I'm only 18, so you shouldn't assume I know what I'm talking about either.

So there you have it. You now know that in the next 14 years you are going to learn absolutely nothing. Hope you're not too disappointed. ;-)

Now, for a bit of a laugh, try imagining the whole sentence in my previous post from "Until I see this" to "morons" being spoken in a mid-tone posh Scottish accent. Add the words "fuck" and "fucking" in a few choice places and you'll know exactly what I sound like in real life.



[ Parent ]
How many bits (none / 0) (#27)
by KWillets on Mon May 21, 2001 at 05:15:52 PM EST

ECCs are fairly easy to estimate. If you're going to correct, say, 1-bit errors in a k-bit string, you need to expand each code point p to include each possible error string within one bit-flip of p. So p, and the k bit strings around p, all map to p when the ECC is applied. To figure out how many bits are needed (beyond k) to represent all the strings, we can count up all the p's and their error neighborhoods:

number of ECC code points = (2^k) * (1+k).

If we take the log2 of both sides, we get:

#bits = k + log2(1+k)

i.e. logarithmic overhead. That's a minimum figure; your ECC may vary.

(note: I made an error in using 1+k as the number of strings within 1 bit-flip of p - since p is more than k bits, there are more than k bitstrings within one bit-flip of it. But k is a good estimate).
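
Plugging a few values of k into that bound shows how quickly the relative overhead shrinks (a quick numeric check of the formula above, nothing more):

    import math

    # minimum extra bits for 1-bit error correction, per the counting
    # argument above: #bits = k + log2(1 + k)
    for k in (100, 10_000, 1_000_000):
        extra = math.log2(1 + k)
        print(f"k={k:>9}: ~{extra:.1f} extra bits ({extra / k:.4%} overhead)")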

[ Parent ]
so... (2.50 / 2) (#7)
by cicero on Thu May 17, 2001 at 04:56:21 PM EST

if, as in your example, the content C consists of 3 meta-packets,
and
they are continuously streamed,
and
you can receive any 3 packets to reconstruct C

would simply receiving one packet, and having the client duplicate it twice (triplicate?), work?
that just doesn't seem right.


--
I am sorry Cisco, for Microsoft has found a new RPC flaw - tonight your e0 shall be stretched wide like goatse.
[ Parent ]
Well spotted (none / 0) (#13)
by MisterX on Thu May 17, 2001 at 07:23:46 PM EST

My wooly words. I should have said "any three unique packets".

As an explanation for this ambiguity, I refer you to the last paragraph of this comment.



[ Parent ]
How it works (5.00 / 2) (#17)
by ikillyou on Fri May 18, 2001 at 11:29:19 AM EST

A message of M bytes is expanded to N specially coded bytes, where N>M. Now the cool thing is that you can reconstruct the original message from any M of the N bytes i.e. pick any M bytes out of the N bytes sent, and you can still reconstruct the original message.

This is extremely cool, but it's not a new idea - it's called forward error correction (FEC).

Sounds like magic, but the basic idea is not hard to grasp: you can think of the original message as a vector of length M. This vector is multiplied by an MxN (N > M) matrix to give a length-N vector (the transmitted signal). If the matrix is chosen such that any M of its N columns form an invertible MxM matrix, then with some thought you can see that the original message can be reconstructed from any M bytes of the N bytes received.

What Digital Fountain has done is to come up with a faster algorithm for performing FEC, which they call Tornado codes.

How is this useful in multicasting? Well, suppose you have two clients, A and B, receiving the multicast stream. Clients A and B have each dropped, say, 10 packets, but the packets which A has dropped are different from the packets which B has dropped.

Without FEC, you would have to send one set of the missing packets to A, and then another set of the missing packets to B. But with FEC, you only have to multicast one set of packets, which can be used by both A and B - even though the packets dropped by A and B are different!!!
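
Here's a minimal sketch of that construction in Python/numpy, written generator-matrix style with a Vandermonde matrix (any M of its N rows are linearly independent when the nodes are distinct). This is just an illustration of the linear-algebra idea, not Digital Fountain's actual code:

    import numpy as np

    M, N = 4, 8                               # 4 source symbols -> 8 coded symbols
    message = np.array([3.0, 1.0, 4.0, 1.0])

    # Vandermonde generator: any M of its N rows form an invertible MxM matrix
    nodes = np.arange(1.0, N + 1)
    G = np.vander(nodes, M, increasing=True)  # shape (N, M)
    codeword = G @ message                    # the N symbols you would multicast

    survived = [6, 0, 3, 5]                   # any M distinct packets that arrive
    recovered = np.linalg.solve(G[survived], codeword[survived])
    assert np.allclose(recovered, message)    # reconstructed from any M of the N

Real codes do this over finite fields (floating-point Vandermonde solves get numerically ugly as N grows), which is where the Reed-Solomon and Tornado machinery comes in.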



[ Parent ]

Reed Solomon Codes (none / 0) (#19)
by conraduno on Fri May 18, 2001 at 01:37:09 PM EST

And actually, Tornado codes are based largely on Reed-Solomon codes. The big advancement that Tornado has over Reed-Solomon is that, with N source blocks and M stream blocks, Reed-Solomon generally only works well up to something like N = 64 and M = 255, while Tornado codes allow an upper limit on M in the tens of thousands. That is much better for the stream, because you would have to restart the stream less often: if you only caught half of the N blocks in a stream and the stream restarted itself, you would have to discard the blocks you had received and start all over again. And as we can see, 64,255 does not allow for a very large stream size. :)
[ Parent ]
Before we get people too excited... (none / 0) (#24)
by tjb on Sun May 20, 2001 at 02:45:28 AM EST

You guys are going to have people using FEC for everything :)

As stunningly cool as it is, here are a few trade-offs to using FEC.

1) Distributed erroring is bad, especially if it consistently hits your parity bytes. RS encoding, and I'm assuming Tornado encoding (though I haven't read it that closely yet), are designed to recover from impulse errors. In DSPs (where I use RS), an impulse error is when your A-D goes out to lunch for a data symbol or two. In this case, an impulse error is a dropped packet. If you lose, say, 200 bytes consecutively, assuming your interleaver depth (matrix size) is high enough, the FEC can reconstruct what those bytes should be. But if you lose 200 bytes distributed across a long stretch of time (again, depending on your interleave depth), the FEC may do more harm than good, as your parity bytes are randomly hosed and the FEC is correcting where it shouldn't be. However, this shouldn't be an issue at all for what they are talking about, assuming the TCP/UDP stack checks for CRC errors, but it does limit the usefulness of FEC in general.

2) Latency. Before anyone goes and makes a Quake protocol using FEC, they should realize that the latency is a bitch. Even with the relatively small interleaver sizes of RS encoding, a codeword period of 250 us in a 128-deep interleaver gives a minimum end-to-end latency of 64 ms (worked out below), more if there are errors. Ouch. And it will be grotesquely more for general use of this Tornado encoding. But, again, not really an issue here, though I wonder if the creators of this realize that they may have created what is probably the first encoding scheme likely to have its latency measured in hours :)

3) Overhead. This one, I can see being a problem. A huge problem. As I said above, I haven't read the spec too closely yet, but I foresee this taking copious amounts of parity bytes to work properly. For RS, a 255-byte codeword will probably have 16 parity bytes and 239 data bytes. I can't see this scheme using a better ratio, given the enormous codeword size they plan on using. But then again, these are probably some smart people; maybe I should take a closer look at that spec... Anyway, I somehow doubt it's going to be 5% overhead. 10%, maybe... but likely higher.
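
Reading that 64 ms in point 2 back out, under the assumption that the interleaver has to fill at both the transmit and receive ends:

    minimum latency = 2 * interleave depth * codeword period
                    = 2 * 128 * 250 us
                    = 64 ms

So latency grows linearly with interleave depth, which is why a code tens of thousands of blocks deep could plausibly have its latency measured in hours.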

Tim

[ Parent ]
Well, (4.00 / 1) (#2)
by trhurler on Thu May 17, 2001 at 03:27:45 PM EST

Sort of, yes. However, such a scheme requires constant bandwidth utilization. What this really means is, if you have a multicast route and everyone involved is OK with it, you can do this. Keep in mind that even if most people don't get your multicast, your physical upstream still has to deal with the traffic at at least one and probably numerous points - constantly. Typical service arrangements are only economically feasible under the burst-mode usage model. When a few customers start streaming, that's a novelty for your amusement. When many do it, that's a disaster. Multicast helps eliminate redundant traffic, but the real solution is higher bandwidth, lower latency, and greater reliability. Don't worry, though - they're on their way :)

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
Streaming commonly used data (none / 0) (#5)
by sigwinch on Thu May 17, 2001 at 04:46:36 PM EST

When a few customers start streaming, that's a novelty for your amusement. When many do it, that's a disaster.
Unless it's something that the many would all get anyway, like daily software updates, weather maps, etc. I suspect there isn't enough of that traffic to warrant the effort, but I'm not paying the transport bill for a big ISP.

As to their special super-duper patented technology, it's just ordinary error-correcting techniques applied to large data sets. <yawn>

--
I don't want the world, I just want your half.
[ Parent ]

Not constantly! (none / 0) (#23)
by abo on Sat May 19, 2001 at 11:21:18 AM EST

"your physical upstream still has to deal with the traffic at at least one and probably numerous points - constantly"

What do you mean? If no one wants the data, it will not be sent, even if you're using multicast! Just put a router in the right place.


-- Buy BRUX!
[ Parent ]
Yes, but (none / 0) (#25)
by trhurler on Mon May 21, 2001 at 10:49:47 AM EST

If nobody wants the data, then you probably aren't streaming it anyway. This technology would be used in cases where someone has something that's going to be popular. The problem being that, with the rising popularity of free software and related trends, there is generally no necessary correlation between popularity and the ability to pay big bucks for things like bandwidth. Kuro5hin is an excellent case in point regarding the wealth/popularity disconnect; were this a purely commercial enterprise in the traditional sense, you can bet it would have physical facilities far superior to anything Rusty can afford as things stand (which, I must add, would not necessarily be a good thing).

--
And when you consider that Siggy is second only to trhurler as far as posters whose name at the top of a comment fill me with forboding, that's sayin
[ Parent ]
"Secret Sharing" (4.00 / 1) (#8)
by mwright on Thu May 17, 2001 at 06:01:18 PM EST

I don't actually know much about how this works, but it seems very similar to "secret sharing", a way of breaking data into m units where only n are needed to reconstruct the message. One way of doing this, discovered by Shamir (the "S" in RSA), works with polynomials.

A polynomial is of the form a + b*x + c*x^2 + ...
It is easy to see that its value at x = 0 is a (0 times any number is zero, of course). Also, any polynomial of degree n can be described completely using n+1 points lying on it. So a polynomial can be generated with the message a as its constant term (encoded as a number, of course) and degree n-1, and m points on it handed out. It's easy to see that any n of these points can reconstruct the polynomial, and hence the value of a.

The method used for streaming is probably different... but still, these are related (so I'm not completely offtopic!), and I find this really neat.
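
For the curious, here's a toy sketch of Shamir's scheme over a prime field (hypothetical parameters; real implementations use big primes and cryptographic randomness):

    import random

    P = 2**31 - 1                        # a prime; all arithmetic is mod P

    def make_shares(secret, n, m):
        # random polynomial of degree n-1 whose constant term is the secret
        coeffs = [secret] + [random.randrange(P) for _ in range(n - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, m + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 gives back the constant term
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(42424242, n=3, m=5)
    assert recover(random.sample(shares, 3)) == 42424242   # any 3 of 5 suffice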

It will be Rabin's IDA (none / 0) (#16)
by Paul Crowley on Fri May 18, 2001 at 11:02:43 AM EST

It won't be Shamir secret sharing; it'll be Michael O. Rabin's "Information Dispersal Algorithm". This allows you to choose any M, N, b (M < N) and break up M*b bytes of data into N packets of size (b + a few bookkeeping bytes), such that any M of those N packets are sufficient to reconstruct the original data. It's damn cunning stuff. Oh, and it depends on that linear algebra stuff K5 was decrying a few articles ago :-)
--
Paul Crowley aka ciphergoth. Crypto and sex politics. Diary.
[ Parent ]
How it works (4.00 / 1) (#9)
by dennis on Thu May 17, 2001 at 06:16:09 PM EST

I saw this in a financial magazine article about them, and it seemed like a decent summary. I haven't read the papers yet, but here's what the magazine said:

The basic idea is that each packet you send out is actually two or more randomly-selected packets of the file, all XOR'd together, with an indication of which packets in the sequence went into it. When the recipient gets all these packets, they can be xor'd together in various combinations to retrieve the original packets, since ((a xor b) xor b) = a. Apparently a random selection lets you find these combinations with high probability, without needing to increase the bandwidth too much.
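
Here's a toy version of that in Python: coded packets are XORs of random subsets of the file's chunks, and decoding "peels" them apart by substituting in chunks as they become known. (A sketch of the general idea only, not their actual algorithm, which picks the subset sizes from a carefully designed distribution.)

    import random

    chunks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]    # the original file pieces
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

    def encode():
        # one coded packet: XOR of a random nonempty subset, tagged with indices
        idx = random.sample(range(len(chunks)), random.randint(1, len(chunks)))
        out = chunks[idx[0]]
        for i in idx[1:]:
            out = xor(out, chunks[i])
        return set(idx), out

    received = [encode() for _ in range(12)]          # some overhead beyond 4
    known = {}                                        # recovered index -> chunk
    progress = True
    while progress:
        progress = False
        for idx, payload in received:
            unknown = idx - known.keys()
            if len(unknown) == 1:                     # XOR out the known pieces
                for j in idx & known.keys():
                    payload = xor(payload, known[j])
                known[unknown.pop()] = payload
                progress = True

    print(sorted(known.items()))   # with high probability, all four come back

With 12 random packets for 4 chunks the peeling almost always completes; the trick in Tornado-style codes is choosing the subset-size distribution so the overhead stays small even when the number of chunks is huge.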

Old stuff (3.00 / 2) (#11)
by darthaya on Thu May 17, 2001 at 07:04:25 PM EST

This technology has already been done, well implemented, and widely used in the satellite industry.

Check out www.kencast.com :)

The problem with streaming data (none / 0) (#15)
by captain soviet on Fri May 18, 2001 at 05:13:48 AM EST

Retrieving video or audio streams is not really that big of a problem. If you miss a packet, you can just pick the next one; you will have an unnoticeable error in your audio/video stream, but you have a high error tolerance.

If you had a data stream containing an ISO image, you have zero error tolerance. Let's say the server repeats its stream every thirty minutes and your connection allows you to catch only every other packet. You will get 50% of the data within thirty minutes. As you cannot request any packets from the server, you will get only 50% of the packets you still need in the following half hour. So after an hour you have 75% of the data, although you would have had all of it if you had been downloading it the classical way.

After two hours of downloading from the stream, you will still only have 93.75% of the data you needed. If you were unlucky (although this is only a mathematical possibility) you might be scanning the stream for a single packet forever and miss it every time.
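
(In general, for an uncoded repeating stream with independent 50% loss per pass, the fraction received after t passes is 1 - (1/2)^t: 50%, 75%, 87.5%, 93.75%, ... It approaches but never reaches 100%, which is exactly the long tail the encoding described in the article is designed to eliminate.)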



Read the article again (4.00 / 1) (#18)
by dbowden on Fri May 18, 2001 at 11:29:25 AM EST

Especially the part where it says:

all a client must do is retrieve any N bytes of data from anywhere in the stream. These N bytes can come from any point in the stream and do not have to be contiguous, which eliminates the need for packet retransmission: if a client fails to retrieve a packet, it just gets the next one.

The whole point of the article was that you don't need to get a contiguous image. You just need to get N packets. ANY N packets.

It's actually faster than downloading it the classical way because you don't need to bother with error checking. If you miss a packet, you just grab the next one.

[ Parent ]

Streaming - where this technology falls down (none / 0) (#28)
by andyclap on Tue May 22, 2001 at 08:26:58 AM EST

I think you hit the nail on the head here: streaming large amounts of content is where this technology is positioned (such as video on demand), but reading the white paper it seems you have to have a full complement of unique meta-content packets before you can start decoding them - i.e., no streaming from a single meta-content packet source.

It's proposed that large streams, e.g. video, will be broken down into segments of manageable length, so that when a segment is completely received the content can be played while the next segment is downloaded.

Your comment about successfully receiving the requisite number of unique meta-content packets indicates that the meta-content stream will have to be longer than the data length (relative to the tolerance required); therefore you'll need even more segments. The flaw here is that each segment requires a new source of meta-content packets.

It looks to me that as the required latency decreases and the content stream length increases, the number of segments increases. Surely the overall performance (server bandwidth and processing vs. client latency and bandwidth) scales linearly with content size in exactly the same way as several small looped regular multicasts would.

[ Parent ]

Use the source, Luke... (5.00 / 1) (#20)
by artemb on Fri May 18, 2001 at 03:08:24 PM EST

There are quite a few web pages and papers on the subject. Probably a good place to start is http://www.icsi.berkeley.edu/~luby/

Another generally helpful resource is CiteSeer ( http://citeseer.nj.nec.com/cs ).

For instance, here's an interesting paper: Accessing Multiple Mirror Sites in Parallel: Using Tornado Codes to Speed Up Downloads, and a bunch of related papers - http://citeseer.nj.nec.com/nrelated/942421/319843

For those who want to use the source, try http://www.people.fas.harvard.edu/~rross/cs222/ and get the source code from here.

Streaming over multicast (5.00 / 1) (#21)
by codemachine on Fri May 18, 2001 at 04:28:35 PM EST

My undergraduate research work for the summer includes working on a project that uses similar technology to stream media over multicast using erasure codes. From what I have read of Digital Fountain, it is more suited to downloadable content than to streaming media such as video content. Although they have a streaming product, I'm not sure whether it uses the same technology as their download server or not. So far I haven't been able to find out.

Anyhow, here are a few links to the work being done at my university on this subject.

Papers on the subject are available at: http://www.cs.usask.ca/faculty/eager/

an outdated page on the SWORD project: http://www.cs.wisc.edu/~vernon/sword.html

Unfortunately our most recent work is not going to be published until later this year, so the implementation details are left mostly untouched in the above links.

Full Disclosure: I work for Digital Fountain (5.00 / 2) (#22)
by MyEvilTwin on Fri May 18, 2001 at 08:37:06 PM EST

I can understand how the technology can seem like magic at first, but it's really not hard to understand the fundamentals. Basically, say you have a file, and you chop it up into sections A, B, and C. You send the client: (A) (A xor B) and (A xor B xor C). With these three packets, the client can reconstruct the original file. Of course, in practice it's much more complicated and I'm no mathematician, so I'll just say that you need to receive 105% of the size of the original file in "meta-content" in order to be certain of successful reconstruction.
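
In code, that three-packet example looks like this (a literal rendering of the toy chain; the real codes let any sufficiently large subset of packets work, not just these specific ones):

    A, B, C = 0b1010, 0b1100, 0b0110     # three sections of the file
    p1, p2, p3 = A, A ^ B, A ^ B ^ C     # the three packets sent

    # adjacent packets differ by exactly one section, so the XORs cancel:
    assert (p1, p1 ^ p2, p2 ^ p3) == (A, B, C)
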
As far as streaming video/audio goes, we sell a product specifically for streaming video (we demo'd this at NAB in Vegas about 3 weeks ago). The way it works is that the video is broken up into chunks, each of which is encoded and decoded separately. The client downloads and decodes the first chunk, starts playing it, and downloads and decodes further chunks while you watch the first. Of course, the download rate needs to be somewhat higher than the playout rate. Also note that you get a perfect copy of the original video, as opposed to, say, Real, where you drop frames if you lose packets.
Finally: it doesn't just work over multicast. With the addition of a Replicator, which is basically a custom NIC, you can replicate a meta-content stream to unicast clients. You give up the bandwidth savings you get with multicast, but you have less server load because you don't have to maintain many simultaneous TCP sessions. The server just hands off the meta-content streams to the Replicator, which sends them to the clients via unicast UDP. Anyone who wants more details should check out http://www.dfountain.com/technology/DFTechWhitePaper2.9.pdf . It explains the technology well, without being comprehensible only to math professors :).

Isn't a lossy solution preferable? (none / 0) (#26)
by sanity on Mon May 21, 2001 at 05:04:57 PM EST

Surely a solution which exploits the fact that frames can be dropped without significant interference, such as RealNetworks' solution, is preferable to one which requires more bandwidth under normal operation and presumably just stops the stream if there is too much packet loss?

[ Parent ]
