Kuro5hin.org: technology and culture, from the trenches

Humans and machines indistinguishable in 100 years?

By Justinfinity in Technology
Thu Nov 30, 2000 at 09:23:37 AM EST
Tags: Culture (all tags)
Culture

I'm a wanna-be 3D artist. I play with Blender, Strata3D, and TrueSpace in my spare time. Nothing big. I also pick up a couple of 3D-art magazines whenever I'm in the local Borders. In the August 2000 (Volume 23, Number 8) issue of Computer Graphics World, the Viewpoint column caught my attention. It contained a simple timeline of the next 100 years based on an essay by Ray Kurzweil, "The Evolution of the Mind in the 21st Century." The essay is based on Kurzweil's Siggraph 2000 keynote speech, entitled "The Human-Machine Merger: Why We Will Spend Most of Our Time in Virtual Reality in the 21st Century."


Here is my summary of the timeline:

<paraphrase>

2010

  • Machines equivalent to a modern high-end system are embedded throughout our environment.
  • Cables have virtually disappeared.
  • Interfaces with the machine consist of unnoticeable, wearable outputs (glasses and contacts that project directly onto the eye, microscopic earphones) and speech recognition.
  • Mortality due to cancer and heart disease is virtually eliminated with bioengineering.

2020

  • A $1000 (inflation notwithstanding) machine is roughly equivalent to the computational power of the human brain.
  • These machines are virtually invisible, being embedded everywhere, including clothing, jewelry, even bodies.
  • Holographic 3D displays are embedded in glasses and contacts.
  • The primary input to these machines will be spoken word and gestures.
  • The immersion of virtual reality enables anyone anywhere to do anything with anyone else.
  • Physically disabled persons overcome their disabilities with nerve-stimulated robotic prosthetics.
  • People begin to have relationships with virtual personalities.
  • Reports/rumors of computers passing the Turing Test run rampant.

2030

  • A $1000 (inflation notwithstanding) machine has the computing capacity of approximately 1000 human brains.
  • Direct neural pathways to the brain are perfected.
  • Nanobots in the bloodstream and other body-systems enhance perception, interpretation, memory and reasoning.
  • VR makes use of nanobots in the brain to directly affect the perceptions of the user.
  • Automated agents begin to learn on their own.
  • Human employment is almost nil. Machines do labor while humans are free to enjoy all of life.
  • Rights of computers begin to be contested.
  • Computers routinely pass valid Turing Tests. Controversy persists.
  • Machines themselves claim to be conscious.

2050

  • Nano-enhanced food is commonplace. All types of food are instantly available, anytime, anywhere.
  • Nanobot swarms create tactile projections of people and objects.

2100

  • There is no longer any distinction between human thinking and machine intelligence.
  • Most apparently conscious entities do not have a physical presence (outside of nano-projections), but exist on the 'Net itself.
  • The number of electronic brains vastly exceeds the number of carbon cell-based brains.
  • Even most carbon-brained entities use extensive nano-implants to enhance and extend their capabilities.
  • Life expectancy is no longer a viable term in relation to intelligent beings.

</paraphrase>

Now that you all can see it, let's analyze! I'm not trying to specifically target Kurzweil for criticism in any way. His essay just happened to be one that piqued my interest.

 

Whoa! A Matrix-esque environment by 2030! In the words of another Keanu Reeves character: Excellent! I'm very happy to be alive and relatively in the know technology-wise right now. This is a wonderful time. Even if all of Kurzweil's estimates are 100 years off, we're still in the midst of exploring some very new and uncharted territory. Nano-technology, AI, VR (the real stuff, not the early 90's stuff), etc.

One of the main things I feel will affect the accuracy of this timeline is the economy. I recently read an article about a space-based laser for shooting down missiles that is planned to be in operation by 2013. 12 years! Jeez, we already have ground-based lasers capable of taking out in-flight missiles. Is it really going to take 12 years to move that into space? The way NASA and the US government (the article was about a US-based system) waste money, I wouldn't be surprised.

I keep seeing articles about nano-technology with the scientists saying that although their ideas are coming together at a fantastic pace, the technologies won't be available to the consumer for many years due to the high cost of production. Wonderful. The great capitalists do it again. Greed taking precedence over an actual advancement of the human race.

On to the next part of the timeline: 2050. Star Trek replicators and The Jetsons' auto-kitchens become reality. And I'm sure this can be extended to all materials, not just food. Hungry? Have a pizza 'faxed' over. Need a new pair of pants? Get a custom fit pair instantly without leaving your house. Want to play tennis? Just order up some rackets and have your nano-swarm form up into an opponent. I'm sure by then AI will be more than sufficient to give us mere mortals a run for our money. Come to think of it, the Star Trek holodeck could become a reality with nano-projections. Although you will be able to get the same experience in Virtual Reality, some people may wish to actually use their bodies for something :-P

Ahead 50 more years, to 2100, and humans have become the machine, and vice-versa. If you have never read Isaac Asimov's short story "The Last Question", I urge you to find a copy and read it. As well as John Barnes' "Mother of Storms". Both stories, besides being fun and interesting to read, talk about the possibility of human consciousness being integrated into an electronic system. Oh and don't forget Arthur C. Clarke's "2001" series, with Dave Bowman and HAL becoming much like a program stored in the expansive memory of the monolith.

"2001" brings me to another point. A lot of science fiction writers have envisioned our present and near future to be greatly advanced compared to our reality. Clarke, for example, expected a moon terraforming process to already be established, Mars in the beginnings of settlement, and manned missions to Saturn to at least be feasible by the 21st century. Meanwhile, we haven't even gone back to the moon since the early 70's, never mind a manned mission to Mars or past the asteroid belt. Why? Was it the scare of Apollo 13? The tragic Challenger accident? Or was it the economy again holding the human race back?

HAL 9000 (which BTW stands for Heuristically programmed ALgorithmic computer, not "IBM" with each letter bumped down one) is far from being a reality. Although we're getting there, with some wonderful work on AI and voice & visual recognition taking place at MIT, Carnegie Mellon, Berkeley and the other research labs.

Again using an example from the "2001" series (I just finished the whole series, so it's in my head :-P ), Clarke put forth the idea that many lifetimes' worth of human memories could be stored in approximately one petabyte (a giga-gigabyte). He based this on Louis Scheffer's "Machine Intelligence", a paper in which it is estimated that the entire mental state of a 100-year-old human could be stored in a single petabit (2^50 bits). Well, we've got terabyte RAID arrays already, so a petabit is only about two orders of magnitude away. Soon!
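The gap is easy to check (a sketch; it takes a terabyte as 2^40 bytes and Scheffer's petabit as 2^50 bits):

```python
# Checking the storage arithmetic: Scheffer's estimate is one petabit
# (2**50 bits) for a whole mental state; a year-2000 terabyte RAID
# array holds 2**40 bytes.
petabit_in_bytes = 2**50 // 8          # = 2**47 bytes
terabyte_in_bytes = 2**40
ratio = petabit_in_bytes // terabyte_in_bytes
print(ratio)  # 128 -- about two orders of magnitude
```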

So...I guess I'm still undecided. If you take into account Moore's Law (although it may be broken soon unless we find something better than silicon to build processors out of), we may have the computing power to make this timeline a reality. On the other hand, the bureaucrats currently in charge and the people with the money do have this rather nasty trait of totally ignoring anything that won't give them an immediate monetary return, so these projects may never get off the ground in the near future. I'd like to see it all happen. I'll be the first to sign up to try out new interfaces between the human brain and the vast computing power we'll be developing.
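That Moore's Law arithmetic is easy to reproduce (a sketch; the 18-month doubling period is one common reading of the law, not a guarantee):

```python
# If computing power doubles every 18 months, how much more does 2030
# buy than 2000?
years = 2030 - 2000
doublings = years / 1.5
growth = 2 ** doublings
print(round(doublings), int(growth))  # 20 doublings: about a million-fold
```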

(Well, I'm getting distracted because I noticed that my download of the Seti@Home client for FreeBSD has been finished for a while now. I'll post a diary on why that is distracting me so much)

Poll
Could you "plug-in" forever?
o Definitely 17%
o I hope so 13%
o I don't need to plug-in 22%
o We already are, you just don't know it ;-) 11%
o No way! I'm addicted to IRC 2%
o No way! It'll fry my brain 3%
o My brain is already fried 14%
o Only if they have virtual bongs :-P 14%

Votes: 88

Related Links
o Computer Graphics World
o Ray Kurzweil
o Also by Justinfinity


Humans and machines indistinguishable in 100 years? | 136 comments (126 topical, 10 editorial, 0 hidden)
Who knows what the future will bring? (3.33 / 9) (#4)
by pb on Thu Nov 30, 2000 at 06:10:51 AM EST

Well, I hope I could be virtually immortal and indestructible by 2100, but we'll see. This timeline looks a little optimistic; remember, I don't have my own personal helicopter yet; people still drive to work. However, the subject fascinates me.

Who has read Earth, by David Brin? That's the future prediction novel with the best chance of (at least some of it) coming true. I can now see accessing the net on a large, flat, wall panel, but I wasn't sure if that would be possible 10 years ago, even.

Also, just because you have the computational power to do something doesn't mean you can build it. If computing speed continues to grow exponentially, I think software development will stay somewhat linear; that'll be our Malthusian dilemma of the future.

However, what's the point of arguing with a future prediction? I say, wait and see. The only thing I'm certain of now is that we'll all be snickering in 50 years no matter what we say, when we read this again...

...I just hope that by then, we still have all the articles and comments from Kuro5hin archived! Rusty? Who's for mirroring the database, burning CD's, or somehow ensuring that we have a time capsule? Space is cheap, but content is priceless; this is something I wish slashdot had actually done two years ago...
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
good point about software (2.50 / 2) (#6)
by Justinfinity on Thu Nov 30, 2000 at 06:22:54 AM EST

yes, we may have huge amounts of CPU power in the future, but without good programs to run on it, it's useless.

i'm into computer games (playing and making, hence the 3d art stuff) and i think that a lot of innovation is going to come from that area. Tim Sweeney, of Unreal/UT fame, is researching a new area of programming languages, expanding even further on OO. i'm interested to see where this goes, since Tim also likes to release the tools to make content for his games too :-). and some of the AI being developed for bots and NPCs is awesome compared to the old if (see(player)){ runto(player); shootat(player); } :-P
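for comparison, even a toy finite-state machine beats that one-liner (a sketch; the state and action names are made up):

```python
# A small step up from if see(player): runto(player); shootat(player) --
# a three-state NPC that patrols, chases, and retreats when hurt.
def npc_step(state, sees_player, health):
    """Return (next_state, action) for one AI tick."""
    if health < 25:
        return ("retreat", "run_away")        # survival overrides everything
    if state == "patrol":
        return ("chase", "run_to_player") if sees_player else ("patrol", "wander")
    if state == "chase":
        return ("chase", "shoot_at_player") if sees_player else ("patrol", "wander")
    if state == "retreat":
        return ("patrol", "wander") if health >= 50 else ("retreat", "run_away")
    return ("patrol", "wander")
```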

-justin
[ Parent ]

Yes... (4.50 / 2) (#10)
by pb on Thu Nov 30, 2000 at 06:38:26 AM EST

The strength of good software development is that once you've done it right once, you shouldn't have to do it again for a while. I don't know if OOP will provide that or not; it still has to be done right in the first place. Also, I've been pondering doing some OOP in C; maybe I'll implement something non-trivial in a few different languages, and see what I like best.

The old "if (see(player)){ runto(player); shootat(player); }" example reminds me...

Did you ever play C-Robots or P-Robots? Those were great games. There's also a J-Robots now, and a language-independent implementation called RealTimeBattle (http://realtimebattle.sourceforge.net/), where your robot is essentially a client that talks to a server.

Basically, your robot is given a little information about the world, and it uses this to seek and destroy other robots. It's somewhere between "Hunt The Wumpus" and "Doom", I guess. I remember making bots for P-Robots back in the day; my RandomBot wasn't bad (it was fast, at least) but it wasn't as good as some of the more sophisticated, predictive bots that people wrote.
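The client-talks-to-server idea can be sketched in a few lines (illustrative only: the `Radar`/`Shoot`/`Accelerate` message names here are stand-ins, not necessarily the real RealTimeBattle protocol):

```python
# A toy seek-and-destroy robot: the server sends one text message per
# tick, the robot replies with a command string.
def react(message):
    """Map one (hypothetical) server message to a robot command."""
    parts = message.split()
    if parts[0] == "Radar":
        distance, kind = float(parts[1]), parts[2]
        if kind == "robot":
            return "Shoot 1.0" if distance < 5.0 else "Accelerate 1.0"
        if kind == "wall" and distance < 2.0:
            return "Rotate 1 1.57"   # turn away before hitting the wall
    return "Accelerate 0.5"          # default: keep moving
```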
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
archiving (2.50 / 2) (#8)
by wolfie on Thu Nov 30, 2000 at 06:25:19 AM EST

hmm yeah, funny you should say that, i was thinking about how it sucked that old things didn't get kept.., it would be pretty cool if stuff was periodically archived, even if it was on CD, i'd probably shell out 15 bucks a year or so, for an archive set, or whatever, probably much easier to find someone to donate one of those cd-tower things, and just have them downloadable..

it would be a pretty cool time capsule.. as you say..

content is priceless.

[ Parent ]
Human - machine equivalence (4.00 / 13) (#9)
by Simon Kinahan on Thu Nov 30, 2000 at 06:34:24 AM EST

I notice a couple of places in this article where the author talks about computer power in terms of multiples of human brain computational power. As something of an AI skeptic, although I emphatically believe human consciousness to be a natural phenomenon, I have a few problems with this.

There's no evidence that anything important about being human rests on the computational power of our brains. Indeed, with regard to, say, obtaining conscious knowledge of a mathematical result, a computer+human can already thrash a human acting alone. To the extent that we do anything that even remotely corresponds to the kind of algorithms we write for computers faster than they do, it does not happen at a conscious level. Catching a ball, for instance. Until we can actually sort out how our brains produce our subjective experience, an area in which we are making a little progress in spite of the philosophical problems, the idea that computers and brains are the "same kind of thing" is merely an assumption, and one that raises more questions than it answers.

Even if we assume that brains and computers are the "same kind of thing", the idea that we can calculate the computational power of a human brain, and measure computers relative to that, when we can't even come up with a reliable way of determining the computational power of a computer, seems deeply suspect.

Simon

If you disagree, post, don't moderate
Well... (3.60 / 5) (#12)
by pb on Thu Nov 30, 2000 at 06:57:33 AM EST

If you assume that computers and brains aren't the same kind of thing, (that is, a brain as a system isn't Turing-based, but rather some superset, perhaps) then hopefully you'd be able to describe what kind of system the brain actually is. Otherwise, you'd be able to simulate a brain on a computer.

As for hardware design, the brain is a very complex and different sort of computing device, and it's linked with many other bizarre peripherals that we also can't precisely duplicate with today's technology.

If we could create a robot with an equivalent body and a very powerful brain, we'd still have to wire it to make it think or learn, and at the moment we don't really know how that works. Some sort of neural network, although it sounds promising, would have to be very large and complicated indeed.

Consider for the moment how much you actually have memorized. I know pi to at least 12 places; I don't multiply two-digit numbers by adding numbers, but rather, I look them up. I can touch-type without thinking about it. I can form entire sentences without really thinking about which words I want--they just appear to me. I can say them just as easily; in fact, when I think of them, I essentially am saying them to myself.

All of this is below the surface, handled by other parts of the brain, with a cache that remembers rote activities and hands them off as needed. That's why you never forget how to ride a bicycle, even if you have amnesia--those are different parts of the brain. It's also why it's much easier to remember a song from the beginning--that's a cue that moves you from one kind of memory to the other, so you know exactly how to sing it unconsciously.

Therefore, a well-implemented AI brain would probably cache as much as possible, far past the limits of current RAM; having a computer brain essentially entirely in RAM is a wondrous thing; the "huge lookup table" model of AI that seems so infeasible might actually be one of the approaches the brain takes for frequently used knowledge.
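The "cache rote activities" idea maps directly onto memoization; a minimal sketch:

```python
# The "huge lookup table" model in miniature: derive an answer slowly
# once, then recall it instantly from cache -- the difference between
# adding 7 eight times and just *knowing* 7*8.
from functools import lru_cache

slow_calls = 0

@lru_cache(maxsize=None)
def multiply(a, b):
    global slow_calls
    slow_calls += 1                   # count trips down the slow path
    return sum(a for _ in range(b))   # deliberate repeated addition

multiply(7, 8)   # computed the slow way
multiply(7, 8)   # recalled from the cache
print(multiply(7, 8), slow_calls)  # 56 1
```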

As to the rest of it, well, I'm sure we'll find out more about how it actually works in the future, but this is one example why computer AIs aren't virtual brains yet: not enough RAM to hold everything it might need, even if it knew it all. The closest thing we have to *that* is the CYC project, and that's been going on forever, but at least they're working on it.

Most interesting questions raise far more questions than they answer; that's what makes them so interesting.

We can definitely determine measures of computers or of the brain, but it all depends on what you're measuring. Benchmarks are just as suspect as are intelligence tests.
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]

I agree with most of that, but ... (4.00 / 2) (#26)
by Simon Kinahan on Thu Nov 30, 2000 at 09:53:49 AM EST

I have some niggles. To wit:

If you assume that computers and brains aren't the same kind of thing, (that is, a brain as a system isn't Turing-based, but rather some superset, perhaps) then hopefully you'd be able to describe what kind of system the brain actually is. Otherwise, you'd be able to simulate a brain on a computer.

I don't assume either that the brain is or is not Turing machine equivalent, and I'm not sure which is more likely. To my mind the key to the question is another question: how does the brain give rise to subjective experience? Once we know that, we'll know whether or not that phenomenon can arise in a digital computer. Until then, we can only guess.

One point that does need to be made: even everyday digital computers are not straightforwardly TM equivalent. They only have the required unbounded amount of storage once you take the rest of the world into account (I believe the normal analogy is to having a factory capable of turning all the matter in the universe into paper tape), and once you do that you have to consider that real computers have IO peripherals, which Turing machines don't. I'm not sure anyone has ever thought through the philosophical consequences of this.

If the brain does turn out to be TM equivalent, it'll have to be the brain+world combination in some form that actually has the formal equivalence, because of the need for unbounded storage, and we'll have to have pretty good formal models of both the brain and the world as it interacts with the brain. That knowledge is some way off. If the brain turns out not to be TM equivalent, my suspicion is the key will be that it is an analogue, not digital, device. There are systems, such as some chaotic systems, for which you cannot produce digital models. My hunch is that the brain falls into this category. This is, however, just a hunch.

Consider for the moment how much you actually have memorized. I know pi to at least 12 places; I don't multiply two-digit numbers by adding numbers, but rather, I look them up. I can touch-type without thinking about it. I can form entire sentences without really thinking about which words I want--they just appear to me. I can say them just as easily; in fact, when I think of them, I essentially am saying them to myself.

This is true. Most of our brain works below the level of consciousness. Bits of experience, memory and fantasy only seem to become conscious once they reach some level of intensity or significance. Consciousness almost seems to be a kind of monitoring system. For instance, as I type this, I form the ideas into sentences which fit into the predetermined structure, as they occur to me. While I'm certainly conscious of what I'm doing, and consciously checking for errors and inconsistencies, I can't really claim to be consciously determining what I type.

Actually this is one of the things that makes me most suspicious about the idea of equivalence. We've built computers to simulate some of the highest-level, most verbal activities of the human mind. They do logic, arithmetic and certain aspects of language perfectly, whereas we do them deeply imperfectly. However, we then expect to be able to build on top of that base something like the lowest level, the instinctual, semi-conscious level, of our minds. I see no particularly good reason why this is possible, and, indeed, I suspect the idea would never have been thought realistic had mathematicians and engineers not been such verbal, logical people.

As to the rest of it, well, I'm sure we'll find out more about how it actually works in the future, but this is one example why computer AIs aren't virtual brains yet: not enough RAM to hold everything it might need, even if it knew it all. The closest thing we have to *that* is the CYC project, and that's been going on forever, but at least they're working on it.

Personally, I think CYC is silly :) If such a project ever really does produce an AI, it's going to be an utterly different kind of mind to a human one. Human minds are resolutely fixed in their environment, whereas all CYC seems to be is a big collection of data and rules of inference, with next to no environment.

And yes, of course this question is interesting because it raises even more questions. My point was only that assuming that the answer is either way raises so many more questions that there's no net gain in knowledge. Better, I think, to observe both possibilities and the issues they raise than to jump either way.

Simon

If you disagree, post, don't moderate
[ Parent ]

Computers vs. people (3.00 / 4) (#22)
by reshippie on Thu Nov 30, 2000 at 09:32:38 AM EST

Computationally, computers have already passed us. Whenever I talk about programming with people who have absolutely no idea, I tell them that computers are extremely dumb; they're just very good at math.

I don't think anyone here will really argue with that. With gigahertz processors readily available, it sounds a little ridiculous that someone would compare human processing to computers.

OTOH, computers are only as smart as the people who program them. That's what I see as the real thing slowing down AI. A cluster of 1GHz computers with a TB of hard disk space and gigs of RAM isn't going to do anything unless it is told to.

So I guess the question isn't Where will technology be in 100 years, but Where will programming be in 100 years?

Those who don't know me, probably shouldn't trust me. Those who do DEFINITELY shouldn't trust me. :-)
[ Parent ]

Heuristics (4.00 / 1) (#28)
by Mad Hughagi on Thu Nov 30, 2000 at 10:21:10 AM EST

I'd have to agree with your comments on the importance of programming.

Recently I've taken to studying computer chess programs, and I've followed the history and the theory from their initial implementations up until the mid-1980's. One of the things indicated by all the GM chess players is that computer chess programs based on hardware simply 'search the possibilities mindlessly'. While this makes for good tactical play (where little overall thought is needed), most good chess players could easily beat these types of programs by playing a solid game with a good strategy. Strategy was one thing that the computers did not have - and the basis for strategy is the heuristics used by the computer to analyze its situation. Unfortunately, I haven't begun reading into the 90's, so I'm not too sure how Deep Blue was written, but I have heard that it could see 15 moves ahead, rendering it virtually unstoppable. I guess all I'm driving at here is that even in the world of computer chess the majority of the development has simply been making faster/larger computers, while the actual programming and theory hasn't progressed for over 30 years. I think your final question pretty much sums it up - we've already shown that we can make faster/bigger, now it's just a matter of implementing some of our 'natural algorithms'.
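"Searching the possibilities mindlessly" is just fixed-depth minimax; a toy sketch (the two-ply game tree here is made up):

```python
# Plain fixed-depth minimax.  All the "strategy" lives in evaluate();
# the search itself is brute force, which is why faster hardware alone
# bought mostly tactics, not strategy.
def minimax(state, depth, maximizing, moves, evaluate):
    """Value of `state` with `depth` plies of lookahead."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    values = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(values) if maximizing else min(values)

# Two plies: we pick l or r, the opponent then picks the worst leaf for us.
tree = {"root": ["l", "r"], "l": ["l1", "l2"], "r": ["r1", "r2"]}
leaf_value = {"l1": 3, "l2": 9, "r1": 5, "r2": 6}
best = minimax("root", 2, True,
               lambda s: tree.get(s, []),
               lambda s: leaf_value.get(s, 0))
print(best)  # 5 -- l yields min(3, 9) = 3, r yields min(5, 6) = 5
```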


HUGHAGI INDUSTRIES

We don't make the products you like, we make you like the products we make.
[ Parent ]

I thought... (1.00 / 1) (#30)
by tayknight on Thu Nov 30, 2000 at 10:44:11 AM EST

I thought Deep Blue could see all the foreseeable moves, right to the endgame, for all combinations left in that game. Even against a world-class player, I thought it could win because it could see every single possible move that would be made for the rest of the game. At that point, isn't it just a matter of finding the best line to play to win the game? At that level of prediction, I would assume that no human can win. Even so, Kasparov has beaten Deep Blue. Wow.
Pair up in threes - Yogi Berra
[ Parent ]
Deep Blue's advantages against human chess players (4.00 / 1) (#37)
by Anonymous 242 on Thu Nov 30, 2000 at 11:45:50 AM EST

Among other advantages, Deep Blue was given the complete set of chess games ever played by Garry Kasparov. Part of Deep Blue's advantage was that it could examine past games of its opponent and predict what moves its opponent was likely to make.

Another common trait of computer chess players (the advanced ones) is a database of complete games, so that the computer can choose future moves from a set of known games with identical layouts that result in victory for the computer.

IIRC, at the point Deep Blue played Kasparov, Deep Blue could calculate all possible moves about 8 moves into the future within the allotted amount of time, while Kasparov could "feel" about 12 moves into the future. However, the advantages listed above were sufficient to allow Deep Blue a number of victories. It is interesting to note that at least one of Deep Blue's victories came because Kasparov resigned a game it was possible for him to win; he was so flustered at Deep Blue's moves that he was emotionally unable to finish the game.
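The games-database trick amounts to an opening book: a lookup table consulted before falling back to search (a sketch; the moves and the search fallback here are made up, not how Deep Blue was actually coded):

```python
# A toy "book": known positions, indexed by the moves played so far,
# mapped to a remembered reply.  Only unknown positions get searched.
def choose_move(history, book, search):
    """history: tuple of moves so far; book: dict of known lines."""
    if tuple(history) in book:
        return book[tuple(history)]
    return search(history)

book = {(): "e4", ("e4", "c5"): "Nf3"}        # fragments of known lines
print(choose_move(("e4", "c5"), book, lambda h: "search!"))  # Nf3
print(choose_move(("d4",), book, lambda h: "search!"))       # search!
```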

[ Parent ]

Another prediction . . . I predicted that. (3.77 / 18) (#11)
by duxup on Thu Nov 30, 2000 at 06:41:24 AM EST

I hate to be so negative here, but I can't help but become less interested every time someone does a story (not specifically on k5) on humans, technology, and/or artificial intelligence and the future.

Every time, the article begins by citing how technology has changed our lives. No big discovery there. Then they make predictions for the future based on current research and theory. The predictions are rarely very complex and are often very similar (or identical) to other predictions we've heard already.

It is all mildly interesting right up until the point that I realize I've heard it all before. To make it worse, I can't help but think of those wonderful films and theories about the future that people came up with not too long ago. I loved the films from the early 1950s where we all were flying around in our own personal hovercraft :-) Sometimes I think watching the Jetsons is more insightful.

Now I would note that I don't mean to belittle Isaac Asimov or his intellectual peers. I am a fan of his and I believe that he proposed many original ideas that are still valid today. Yet the massive amount of thoughtless future predictions just makes me lethargic.

I would like to make one prediction: in the future there will be more predictions. Most of them will be wrong, and a massive number will be far too shallow and unoriginal to be of any use.

The guy's a pseud. (3.76 / 17) (#13)
by streetlawyer on Thu Nov 30, 2000 at 07:39:58 AM EST

Pseud clues:

"A $1000 (inflation notwithstanding) machine is roughly equivalent to the computational power of the human brain." Is there a standard measurement for "the computational power of a human brain"? Of course not. Is "the computational power of a human brain" an interesting property? Not really.

"Direct neural pathways to the brain are perfected". "Neural" here is being used in a way which clearly shows that the author doesn't know what he's talking about -- I'm nothing more than an occasional reader of popular science magazines and I know that there's much more to the brain than neurons.

"Human employment is almost nil. Machines do labor while humans are free to enjoy all of life. " Star Trek economics -- a pet fucking peeve of mine. If machines do all the labour and human employment is almost nil, then humans will be "living in a state of unemployment and misery", not "free to enjoy all of life". Unless we have a Communist revolution between now and 2030 (which I would have thought would be worth mentioning if he's predicting it), then the same people who own the capital stock now, roughly, will own it in 2030. So, if human employment is almost nil, who's going to pay the unemployed humans for doing nothing? Isn't it much more likely that humans will still be employed in a load of crappy service industries producing crap for each other to consume?

"VR makes use of nanobots in the brain to directly affect the perceptions of the user." Assumes that "perceptions" are represented in the brains of all human beings in an identical way. Also ignores combinatorial explosion.

"Computers routinely pass valid Turing Tests. Controversy persists." Anyone talking about Turing Tests in 2030 as if they were something that mattered will presumably have been recently taken out of cryogenic storage. No serious researcher in AI regards the Turing Test as more than a curiosity of the field's prehistory. Computers routinely pass valid Turing Tests *now*, for chrissake -- I bet there's twenty doing it right now on AOL. Just shows that the Turing Test isn't important.
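For what it's worth, the kind of bot that fools casual chatters takes only a few lines; a keyword-reflection sketch in the ELIZA tradition:

```python
# Keyword-spotting plus canned deflection -- no understanding required,
# which is the point: shallow tricks carry a surprising amount of chat.
RULES = [
    ("you", "We were talking about you, not me."),
    ("feel", "Why do you feel that way?"),
    ("?", "What do you think?"),
]

def reply(utterance):
    text = utterance.lower()
    for keyword, canned in RULES:   # first matching rule wins
        if keyword in text:
            return canned
    return "Tell me more."

print(reply("I feel tired"))   # Why do you feel that way?
print(reply("hello there"))    # Tell me more.
```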

And so on. The guy's a flake.

In the spirit of charity, however, I make one prediction of my own:

2030: Nanotechnology remains the preserve of wild-eyed futurists.






--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
Re: The guy's a pseud. (3.00 / 3) (#44)
by TetsuRyu on Thu Nov 30, 2000 at 12:58:25 PM EST

I would suggest you look a little deeper into Ray Kurzweil before calling him a pseud. He's done some really amazing things with pattern recognition, OCR, voice recognition, etc., which doesn't necessarily qualify him as Nostradamus. He has written 2 books of predictions: one in 1990, called The Age of Intelligent Machines, and one in 1999, called The Age of Spiritual Machines. The latter expounds on his first book and on where he went wrong about the 90's. The Age of Spiritual Machines is also where all the info in this article came from. That said, I will agree that his predictions are a bit generic and highly optimistic pipe dreams. -Tetsu

[ Parent ]
don't understand yr point (4.00 / 5) (#51)
by streetlawyer on Thu Nov 30, 2000 at 01:39:08 PM EST

So, he's an expert on a highly specialised subject, using the prestige gained in that field to pontificate on wider social and technical issues which appear to be closely related, but which on closer inspection turn out to be mere punditry. How, exactly, does this differ from the definition of a pseud, and what exactly might I find out about Mr Kurzweil that would make me change my mind? I spotted the tell-tale signs and, it would appear, called correctly.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
General agreement (none / 0) (#89)
by Simon Kinahan on Fri Dec 01, 2000 at 11:09:15 AM EST

I tend to agree that Kurzweil's prognostications are not worth much, and in addition the author of the article has filtered the book, which is a little more measured, into a kind of happy futurism that's pretty pointless. However, I have to disagree on some of your points.

On employment: If human labour really does become unnecessary (unlikely, to me, but that's not the point here), you'll end up either with a situation where all humans own some capital in order to pay for their lives, or with mass unemployment and starvation. If the latter occurs, social pressures are likely to lead either to violent revolution or to government action to redistribute wealth. Either way, you do get a Star Trek-like society if you make the appropriate assumptions.

On the Turing Test: nothing yet written can pass the Turing test as Turing originally proposed it - that is, no program can fool a human who is comparing a computer with a human in a free-ranging conversation, trying to determine which is which. The so-called passes recorded so far are either cases in which the domain of conversation was limited (to maths or chess, usually), or cases where the human was not instructed to try to determine whether they were talking to a computer. AI researchers very rarely express much interest in the philosophy of mind these days, having made so many stupid predictions in the past.
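The "limited domain" point is easy to demonstrate: the classic trick behind chatterbot "passes" is simple keyword reflection. Here is a toy, ELIZA-style sketch (the patterns and replies are invented for illustration); it recognizes nothing, it just reflects keywords back with canned templates:

```python
import re

# Toy ELIZA-style responder: ordered (pattern, template) rules.
# The first matching rule wins; anything unmatched gets a deflection.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(line):
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            # Reflect the captured fragment back inside a canned phrase.
            return template.format(*m.groups())
    return "Tell me more."  # default deflection

print(respond("I am worried about machines"))
# prints: Why do you say you are worried about machines?
```

Within a narrow conversational groove this kind of reflection can seem eerily responsive, which is exactly why an uninstructed human can be taken in while a deliberate interrogator is not.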

Simon

If you disagree, post, don't moderate
[ Parent ]
There are 3 common problems with the future... (3.33 / 6) (#14)
by porkchop_d_clown on Thu Nov 30, 2000 at 07:46:09 AM EST

Or, at least, predicting it.

  1. The predictor underestimates the difficulty of certain problems.
  2. The predictor overestimates the difficulty of certain other problems.
  3. The predictor fails (reasonably enough) to take the law of unintended consequences into account.

Thus, Jules Verne's books are full of things we would recognize as modern devices, but he placed them 100 years early. Larry Niven created a world where, by 1980 or so, organ transplants are handed out like so much candy and the UN has become the world government. Not to mention the oft-repeated claims that (a) the world will only need five computers and (b) 640K of RAM is enough for anyone.

It's the same with this timeline. I do think the time will come, someday, when wetware, hardware, and software become integrated. I look forward to it with a mixture of anticipation and dread. (Read Rudy Rucker's books for a hilarious look at the possible "unintended consequences" of AI and the merger of organic and inorganic computing.) But I think it's extremely unlikely that any of these things will happen in our lifetimes - particularly those bits about computing power vs the human brain. First, despite all the hype and the IQ tests, we still don't know how to measure true intelligence (as opposed to computing power). Second, what, really, is the economic incentive to develop intelligent machines? Or, rather, machines with personalities and egos? What corporation wants to deal with the lawsuits that would occur when their "Universal Appliance" starts unionizing?



People who think "clown" is an insult have never met any.
Lifetime. (3.00 / 1) (#18)
by B'Trey on Thu Nov 30, 2000 at 08:25:46 AM EST

"But I think it's extremely unlikely that any of these things will happen in our lifetimes - particularly those bits about computing power vs the human brain."

I think it's extremely likely that machines will meet or exceed the "computational power" of the human brain in our lifetime. (The average lifetime is over seventy years and increasing all the time. If you're 25 now, you can reasonably expect to live another 50 years, possibly another 75. How much has computer technology changed since 1950?) Although it's difficult to measure a human brain exactly in terms of computations, you can make some rough comparisons. According to a recent article in either Discover or Scientific American (can't remember which off the top of my head), a 1.2GHz x86 compatible system is roughly equivalent to a mouse brain in raw computational power.

Computations don't tell the whole story, of course. Having a computer roughly equivalent to the human brain in terms of computations doesn't mean an intelligent system. Whether we'll ever develop truly intelligent software is another question entirely.
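The "how much has computer technology changed since 1950" point can be made concrete with a back-of-envelope sketch. The two-year doubling period below is an illustrative Moore's-law-style assumption, not a figure from the article:

```python
# Back-of-envelope exponential-growth extrapolation.
# Assumption: computational capacity doubles roughly every 2 years.
def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years` at the given doubling period."""
    return 2.0 ** (years / doubling_period)

# Over a 50-year span (roughly 1950-2000), that compounds to:
factor = growth_factor(50)
print(f"50 years at 2-year doubling: ~{factor:.0e}x")
# prints: 50 years at 2-year doubling: ~3e+07x
```

Even if the doubling period is off by a factor of two in either direction, the compounding over a 50-year human lifespan is what makes the "in our lifetime" claim plausible on raw capacity grounds, whatever one thinks about intelligence.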

[ Parent ]

interesting speculations (3.50 / 2) (#36)
by Anonymous 242 on Thu Nov 30, 2000 at 11:33:52 AM EST

Computations don't tell the whole story, of course. Having a computer roughly equivalent to the human brain in terms of computations doesn't mean an intelligent system. Whether we'll ever develop truly intelligent software is another question entirely.

I agree entirely.

The real question is whether true intelligence can be programmed at all. I can think of three alternatives (I'm sure I've overlooked some) for answering the question.

(1) Materialism correctly describes reality and human intelligence is simply the product of "random" processes. In this case, the sure-fire way to produce machine intelligence is to develop a computer capable of mimicking the evolution of the human brain. We would need machines many, many orders of magnitude higher in computational power to develop machine intelligence within the lifetime of any one person. If the evolution of the machine mind is guided by human intelligence, we may be able to cut that time down some.

(2) Machine intelligence can be entirely designed by human intelligence. In this case, machine intelligence will take as long to develop as it takes human society to evolve the tools and intellectual understanding of intelligence necessary to design machine intelligence. Anybody's guess as to how long this would take is as good as anyone else's. We might hit a wall in computational power. We might hit a wall in understanding just what intelligence is. Someone might come up with the magic formula tinkering away on the weekend.

(3) Machine intelligence is an impossibility. If this is the case, machine intelligence will never exist. The interesting part is that even if this is the case, software may grow to be sufficiently complex to fool society into believing (1) or (2) is the case. So if (3) is the case and we have sufficiently advanced technology, we may never know.
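Alternative (1), evolving a capability rather than designing it, is at least easy to caricature in code. This is a toy mutate-and-select sketch with an invented bit-string "genome" and a trivial fitness function; it illustrates the loop, and nothing whatsoever about brains:

```python
import random

# Minimal evolutionary search: truncation selection plus point mutation.
# The bit-string "genome" and count-the-ones fitness are stand-in toys.
def evolve(genome_len=20, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy objective: maximize number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # one point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # typically converges toward the all-ones genome
```

The catch the comment already identifies: the fitness function here is known in advance, whereas evolving intelligence would mean evaluating candidates against an environment as rich as the one brains evolved in, which is where the "many, many orders of magnitude" comes from.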

[ Parent ]

Machine Intelligence impossible? (4.00 / 1) (#86)
by porkchop_d_clown on Fri Dec 01, 2000 at 09:34:11 AM EST

(3) Machine intelligence is an impossibility. If this is the case, machine intelligence will never exist. The interesting part is that even if this is the case, software may grow to be sufficiently complex to fool society into believing (1) or (2) is the case. So if (3) is the case and we have sufficiently advanced technology, we may never know.

Well, first I would agree with Turing - that there's no difference between being intelligent and "fooling" people into believing that you are.

However, I do agree that this is a very interesting point. One issue that technologists seem to overlook is that all existing computer designs are (a) deterministic and (b) synchronous. As near as we can tell, the human brain is neither. This may mean that human-style machine intelligence is possible, just not with our existing von Neumann machines.

I suspect that intelligence is a lot more likely to arise out of subsumption architecture machines, like M.I.T.'s artificial insects.



People who think "clown" is an insult have never met any.
[ Parent ]
why I don't believe in the Turing test (none / 0) (#99)
by Anonymous 242 on Fri Dec 01, 2000 at 01:07:38 PM EST

Well, first I would agree with Turing - that there's no difference between being intelligent and "fooling" people into believing that you are.

Aside from the fact that this would mean we already have true AI, no formulation of the Turing test that I have seen takes into account whose intelligence is doing the fooling. (Admittedly, I don't really follow AI aside from reading the occasional neural net article.)

I think a distinction between a program that is intelligent and one that merely acts intelligent is key. The computer that is intelligent is an agent. The computer that merely acts intelligent is controlled by an agent.

[ Parent ]

Computer power (none / 0) (#76)
by a humble lich on Fri Dec 01, 2000 at 12:38:23 AM EST

"According to a recent article in either Discover or Scientific American (can't remember which off the top of my head), a 1.2GHz x86 compatible system is roughly equivalent to a mouse brain in raw computational power."

I think that depends greatly on what you mean by raw computational power. There are many computations that my 8086 could do much faster than I can. I imagine I would perform quite poorly on a LAPACK benchmark (how many flops can *you* do?). On the other hand, even very simple animals can do remarkable image processing and AI-type stuff (although I guess theirs isn't "artificial" intelligence). I remember a neuroscience talk where the speaker described an insect with only around 42 neurons that could do things modern supercomputers couldn't.
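The "how many flops can *you* do" jab can be turned into a deliberately unfair measurement. A sketch: time a loop of multiply-adds in pure Python, where interpreter overhead swamps the hardware -- which is part of the point, since "raw computational power" depends entirely on what and how you measure:

```python
import time

# Crude flops estimate: time n iterations of one multiply and one add.
# Pure-Python loop overhead dominates, so this badly understates the
# hardware's peak rate -- by design, to show the number is slippery.
def estimate_flops(n=1_000_000):
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 1.0   # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed        # two floating-point ops per loop

print(f"~{estimate_flops():.2e} flops (pure Python)")
```

The same CPU benchmarked through optimized LAPACK routines would report a rate orders of magnitude higher, which is exactly why a single "computational power" figure for a mouse brain or a 1.2GHz chip should be taken loosely.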




[ Parent ]
I suppose it depends on how you define (none / 0) (#88)
by porkchop_d_clown on Fri Dec 01, 2000 at 09:58:01 AM EST

"computational power". In terms of math, many here have already pointed out that computers blow humans away. When it comes to the ability to process real-time sensory input, at high resolution, and to respond to that input, while regulating a large number of semi-automatic systems, I think that people underestimate how hard that is.

And I fundamentally disagree with people who think that just because the circuit counts are similar, computers will automagically start behaving like people. We still don't have an AI with the skills of a typical bumblebee.



People who think "clown" is an insult have never met any.
[ Parent ]
Economic incentive: efficiency (2.33 / 3) (#47)
by rexona on Thu Nov 30, 2000 at 01:05:56 PM EST

Quote:
"Second, what, really, is the economic incentive to develop
intelligent machines? Or, rather, machines with personalities and egos? What corporation
wants to deal with the lawsuits that would occur when their "Universal Appliance" starts
unionizing?"

Answer, based on developments in industrialized countries:
- we have shifted much of routine manual work to machines
- this improves our efficiency: one human can produce a lot more physical stuff
- we have shifted much of routine office work to machines
- this improves our efficiency: one human can produce a lot more intellectual stuff
- we will shift much of routine social responsibilities to machines
- this improves our efficiency: one human can maintain relations with a lot of people
- and it's always efficient to make machines smart enough to adjust dynamically to encountered situations, without disturbing us in our more important tasks
- so a machine can modify our personalities case-by-case, as seems best fit
* making a purchase, it will stubbornly complain about lower prices in next door store
* arranging a date, it will add some intelligent and empathic words to our stupid phrases
* discussing with the boss, it will add the latest hype issues that make you look smart
- in the meantime, we spend our time playing Quake XXII with friends (more important)
- now separate our personality from us, and voila: we have a machine with an ego
- so the demand comes from us small individuals, looking for easier lives


[ Parent ]
Nothing you said requires intelligence. (none / 0) (#85)
by porkchop_d_clown on Fri Dec 01, 2000 at 09:14:57 AM EST


Answer, based on developments in industrialized countries:
- we have shifted much of routine manual work to machines
- this improves our efficiency: one human can produce a lot more physical stuff

True. Which of those routine manual tasks require a human style intelligence, as opposed to a rule-following machine?

- we have shifted much of routine office work to machines
- this improves our efficiency: one human can produce a lot more intellectual stuff

I hate to break it to you, but economists are still waiting for some evidence that computers have improved office productivity. And, again, which office tasks would be improved by robotic intelligence?

- we will shift much of routine social responsibilities to machines
- this improves our efficiency: one human can maintain relations with a lot of people

"Routing Social Responsibilities?" What the hell are those? Remembering birthdays? You need an human-level AI to make gift suggestions? Amazon doesn't do that now?

- and it's always efficient to make machines smart enough to adjust dynamically to encountered situations, without disturbing us in our more important tasks

Making a machine adaptable and making it intelligent are two different things. Cockroaches are adaptable. Last time I checked, they weren't intelligent.

- so a machine can modify our personalities case-by-case, as seems best fit

I'm sorry, what? You want a machine to modify your personality?

* making a purchase, it will stubbornly complain about lower prices in next door store

which would no doubt be ignored by the machine it's complaining to.

* arranging a date, it will add some intelligent and empathic words to our stupid phrases

Which would no doubt be treated by the recipient with condescension, since they would know that you used a machine rather than taking the time to express yourself.

* discussing with the boss, it will add the latest hype issues that make you look smart

And your boss wouldn't realize you were doing this? And he wouldn't be doing it himself? And people wouldn't be using it on you? So that in short order everyone learns to ignore personal hype the way they ignore marketing hype now?

- in the meantime, we spend our time playing Quake XXII with friends (more important)

In the meantime, your friends program their 'bots to play quake for them, so that they can look better.

- now separate our personality from us, and voila: we have a machine with an ego

And if that machine has an ego, what is its incentive to do anything you tell it to, as opposed to doing what it wants? You've just invented a new class of slave labor.

- so the demand comes from us small individuals, looking for easier lives

The same way modern technology has already made our lives easier?



People who think "clown" is an insult have never met any.
[ Parent ]
Indeed, they are examples of a process (none / 0) (#121)
by rexona on Mon Dec 04, 2000 at 11:44:47 AM EST

The first two developments don't require machine intelligence. They simply show that humans want to get rid of routine jobs that can be fulfilled by other means. In the first case, industrial machinery has been around ever since the invention of water mills (grain into flour). In the second case, especially after the forties, accounting and calculation duties have been shifted from office workers to punch card machines and computers (cheaper, faster, fewer mistakes). They were just examples that show the underlying process.

As for the third one, the basic functionality is indeed very rudimentary and unintelligent, as you mentioned. Automata such as answering machines, email auto-replies, funny sigs. But they are being developed further: reply to your authenticated friends with an answer that says where you can be reached, change your sig according to the send-to addresses. Then relay more information in the messages and apply rules that depend on your and their moods, location, schedule. If you feel like going out and having a drink, and have a vacant slot in your evening, and are nearby, then get together. Let your device take care of the arrangements and message exchange; just accept proposals.

The more your own agent learns about you and your contacts, the easier it is to negotiate with other people (through their agents). Your agent helps you fence off unnecessary contact attempts that would eat your time. It will shield you from spamming and translate messages that you can't understand. The better it can imitate you and your preferences, the more you have time for things that interest you. It will be a long time before we can call it intelligent behavior, but eventually all improvements lead that way.

And now we get to the interesting part.

As the agent is your only means to communicate with people that you are never going to meet physically, it can also change the way you are presented. From the target's point of view, the agent has changed your personality because that is the only personality known to the target.

The counter-measures are certainly important. It will be an arms race. In a free market, the level of your agent's sophistication depends on your wealth. And the same applies to agent detectors, which are the flip side of the agents. As always, the rich and powerful are the first to get the tools that make their lives easier. If you can afford an advanced agent, you can fool some folks to an advantage. But if you (or your agent) assume that the target has a better agent, don't try to distract it.

It all boils down to trust. The farther you go from our present world where we interact physically, the less trust there is. It is a lot easier to pretend to be someone else in a phone call than in a real life conversation. If you catch your friends cheating in that Quake with bots, can you trust them anywhere else either? Where do you draw the line of "friends"? Given enough resources, enemies can fool you as they like. If you have such enemies, you have to be completely paranoid. So you'd better choose your enemies wisely, or not have them at all.

Yes, intelligent machines are like slaves, pets or watch dogs. Depending on the level of their intelligence, they may get their reward in continued existence/nourishment/resources, or getting some leisure time. Whatever, this isn't the issue here. Let's leave it to scifi writers.

Our own machines will make our lives easier, to balance the complexity caused by the agents adopted by other people. Nobody has to use them, just as nobody has to use a car. People can always walk if they want to. Some people don't like it, but do it because that's what society has adopted as standard practice.


[ Parent ]
The Mighty Micro (4.11 / 9) (#15)
by Paul Johnson on Thu Nov 30, 2000 at 07:47:44 AM EST

Anyone remember a book from (IIRC) 1979 called "The Mighty Micro" by Christopher Evans? It forecast "ultra-intelligent machines" by the mid-'90s which would cure all diseases by 2000. See any of that happening?

Before reading current prophecy it's always a good idea to go back and look at some old prophecy. Prophets seem to go wrong in some fairly consistent ways:

  • Over-estimating change in the near term. For example, above we have "cables disappear" and "cancer and heart disease eliminated" in the next 10 years. Not likely.
  • Under-estimating change in the long term, although I don't think that this particular prophecy is guilty of that.
  • Ignoring social and political change. In 1900 the videophone and cellphone were predicted, but the huge social change wrought by motor cars was not foreseen. Nor was the sexual revolution that started in the '60s and is still going on. These are much larger effects. But telling people that things they hold as fundamental truths are going to vanish like shadows when the light goes on is never popular. Incidentally, note that the videophone has been forecast as the next generation of telephone almost since the telephone was invented. We have the technology, but we don't have videophones. The reasons are social, not technical.

Paul.
You are lost in a twisty maze of little standards, all different.

Cancer /will/ be gone. (2.71 / 7) (#19)
by Crutcher on Thu Nov 30, 2000 at 08:58:19 AM EST

Cancer /will/ be gone, or at least almost completely so, by 2010. The reason has nothing at all to do with technology we 'will' have. It is the application of technology that we /do/ have, extended by two generations.

There is a company in California that is working on (awaiting approval for clinicals) a reactor that will let them take a blood sample and, a few days later, supply you with a /constant/ stream of several hundred times your body's normal white blood cell count, /and/ give those new white blood cells tags for whatever you want them to focus on killing. Talk about your /massive/ immune response.

And if that isn't enough, IBM is building a supercomputer for simulating protein folding, which, a generation down the line, will make it viable to take a cell sample of the cancer and of a normal cell, sequence them (using the ever cheaper/faster/smaller automated sequencers), figure out what tags are different on the surface of the cancer cell, and cook up a custom viral phage to kill not just cancer in general, but /your/ cancer, exactly.

As an aside:
Predicting the falsity of futurists' predictions also tends to be a bad approach: though we are usually overzealous in the short term, reality /always/ overshoots the long-term predictions.
Crutcher - "Elegant, Documented, On Time. Pick Two"
[ Parent ]
The once and future Cure for cancer (3.80 / 5) (#35)
by Anonymous 242 on Thu Nov 30, 2000 at 11:18:58 AM EST

Cancer /will/ be gone, or at least almost completeley so, by 2010.

Pretty much every six months to a year for as long as I can remember being interested in science, some medical company or research institution has made some breakthrough discovery that was supposed to bring about the end of cancer in the next few years. Fifteen to twenty years later, cancer is still among us.

Cancer may very well one day be only a dim memory (except for people who can not afford treatment), but I'll wait until we have clinical trials that demonstrate the cure before I consider that cancer will be eradicated from the richer societies on the planet within the next ten years or so.

This is no reason to be overly pessimistic, but it is a good reason to be realistic. As the saying goes, hope for the best, but prepare for the worst.

[ Parent ]

Videophones ... (2.60 / 5) (#23)
by StrontiumDog on Thu Nov 30, 2000 at 09:47:05 AM EST

... are constrained by bandwidth problems, not social problems.

[ Parent ]
Not really. (4.50 / 4) (#27)
by Simon Kinahan on Thu Nov 30, 2000 at 10:01:06 AM EST

You can get pretty decent pictures over ADSL, cable or ISDN like connections with modern streaming video compression, though not over the internet due to latency problems. If there were the same pressure for videophones as there is for faster internet connections, the telcos would be dealing with it. However, there is not.

I have used videophone systems for meetings, and they're worse than audio teleconference systems in many ways. Disregarding latencies, issues like where the camera is pointed, people walking in and out of shot, and positioning of microphones are all important.

To have anything like a usable videophone for home use, it would require you to sit in one place to keep the mic and camera in range, to modulate the volume of your voice differently to a phone or ordinary conversation, and, if the meeting were at all important, to wear makeup. Personally I'll stick with an audio telephone.

Simon

If you disagree, post, don't moderate
[ Parent ]
Teleconferencing is a different ball game ... (3.66 / 3) (#31)
by StrontiumDog on Thu Nov 30, 2000 at 10:51:04 AM EST

Telephone conversations have, until the advent of ADSL and ISDN, traditionally run over copper lines, where bandwidth is a problem (and ISDN does not solve this completely either). Miniaturization of the necessary hardware, and the attendant costs, have also been a problem until fairly recently. Teleconferencing differs strongly from one-on-one calls; the setup, intent, and choreography are different. There is no inherent technical difficulty in placing a fixed camera (optionally with smart tracking, such as on the basis of voice direction) to enable one-on-one videophoning: the popularity of webcams and their widespread use testifies to this. Webcams, unless they use some real-time protocol like RTP, suffer from latency problems even if both ends are connected by a high-speed broadband connection. Nokia and Ericsson are considering integrating video into future generations of mobile phones, and have shown some demos, but the general infrastructure simply isn't ready yet. Streaming video and webcams are popular enough, and socially acceptable, and are a driving factor in the move by telcos to upgrade bandwidth.

I cannot agree that purely social considerations are responsible for the lack of videophones; until recently a number of vital ingredients simply weren't available: bandwidth, advanced compression techniques, cheap miniature video cameras, widespread cable infrastructure. The advent of the internet is financing all this, and making videophones a practical possibility; I don't think the videophone market on its own would have been able to supply enough capital to make the investments by telcos worthwhile, certainly not in direct competition with the traditional telecom market.

But if you have any interesting pointers to material that suggests the contrary, I will be more than happy to read them. I am not an expert in the telecom industry, but after seeing the widespread and rapid acceptance of webcams, and given the general populace's fondness for home videos, mobiles, etc., I find it hard to believe there is any serious social resistance to videophones.

[ Parent ]

infrastructure vs. culture in video-conferencing (4.50 / 2) (#39)
by Anonymous 242 on Thu Nov 30, 2000 at 11:58:41 AM EST

I'm working as a consultant at a Fortune 500 company. At world headquarters we have at least two video-teleconferencing systems set up with an office in another state. Most of the time, people prefer to simply use a regular old audio-only bridge when setting up a meeting. Even in cases where more than one person at HQ needs to talk to more than one person in the other state, the audio bridges are the preferred form of teleconferencing.

The reasons are mostly cultural. In my experience, most people don't want the other people to see them most of the time. In audio-only teleconferences, once the mute button is hit, people can have a confidential chat about the meeting. Notes can be passed between meat-space people with no one at the other end being the wiser. Eyes can be rolled and no one can see. Grimaces and frowns don't carry over the audio wire.

And therefore, because of the lack of high demand, there is very little reason for video-conferencing manufacturers to even attempt to shift from low-output/high-margins to high-output/low-margins. I would contend that even if bandwidth were no barrier, video-phones would be far less popular than a good number of people think. Maybe others are different, but if my mother calls me from California and wants to visualize me, I'd rather she look at a canned picture of me than have me sitting in front of a video camera. I'd rather be able to walk through the house on the cordless phone and not have my mother see what kind of face I'm making at some of her comments.

[ Parent ]

Makes sense ... (3.00 / 1) (#42)
by StrontiumDog on Thu Nov 30, 2000 at 12:34:41 PM EST

... but why is it that people press the mute button on an audio phone in order to chat, but don't press on the "show canned picture" button when a similar situation arises in a video conference? I for instance, will happily talk on my mobile while on the john, and I can imagine that my conversation partner would rather not see that. But I could simply replace the live feed with a canned still ("Sorry, Jim, I'm putting you on 'screensave', gotta pee") or let the phone point at something innocuous, like the stack of TP rolls next to the potty. Making obscene gestures is also easy with a videophone; simply lift the finger out of camera range.

The problem I have with sociological causes for the lack of video phones is that the explanations are not entirely convincing. People and cultures adapt easily; while your audioconferencing example is interesting, audioconferencing is uncommon, certainly compared to normal phone usage. This is anecdotal evidence on my part, of course, but I know of absolutely no one who has had the opportunity to use, or even see, a consumer video phone in any way whatsoever -- there are simply no models within reach of the average consumer. Even if videophones were culturally unenticing, there should still be niche markets for them -- there are many legitimate situations in which personal videophones are desirable -- but no such market exists, or has ever existed, leading me to believe that the main obstacles are technical, rather than sociological.

(Sorry if this is getting long-winded; I have to do something between compiles :-)

[ Parent ]

niche markets do exist (4.00 / 1) (#45)
by Anonymous 242 on Thu Nov 30, 2000 at 12:59:21 PM EST

Your counterpoints miss the mark because of one oversight. There is no compelling reason for the average consumer to use a video-phone over an audio-phone. Given that the equipment is available (albeit not cheaply), the fact that there are few niche markets implies that there is little demand for video-conferencing.

Those niche markets do exist, they simply remain very small. Just because something can be done doesn't mean that people want to do it.

And you are correct that people are adaptable. In ten years or so, the next generation might very well desire video-phones in a way the current generation doesn't. If I were to attempt to prognosticate the advent of the video-phone, my guess would be that the likeliest way for it to gain popularity would be after the widespread adoption of game consoles capable of broadband multi-player games. I can envision a Quake VI clan wanting to see an inset window of their opponents' faces as they rack up the frags. I can't see my grandma or my sister especially wanting to see a live video feed of me as we talk on the phone.

I guess what I'm trying to say is that before video-conferencing becomes wide-spread, a certain critical mass of acceptance needs to take place. Currently, there are not enough people that want video-conferencing to make it worthwhile for suppliers to invest in the technology.

[ Parent ]

re: Makes sense (4.00 / 1) (#68)
by luethke on Thu Nov 30, 2000 at 06:08:16 PM EST

Well, one of the differences between pressing a "show canned image" button and the mute button is that the person on the other end doesn't know you pressed the mute button. They don't know you are on the can. If a boss were talking, telling somebody to do something, and all of a sudden the canned image popped up for a few seconds and then the person came back, you would have to wonder why. Lifting your finger out of camera range would still visibly show that you are hiding something: "John, you need to rewrite this whole proposal." John casually raises one arm until it is no longer on camera and then promptly lowers it. "Sure thing, boss." This is going to sound somewhat sexist, but in my experience it has been true: women do most of the phone talking (not that men don't talk any, but if most men have a three-hour phone conversation it is usually with a woman). Most people, and especially women, want people to see them when they look good (look at the huge number of fashion magazines produced for women as opposed to men). I can see it now: "Honey, get the phone, my hair is a mess and I don't want to be seen." You say send the canned image? Well, that is still a red flag saying "I am doing something I don't want you to see." Do you really want someone's imagination running wild about what you are doing?

As for niche markets, they still have to be large enough to justify the cost of manufacturing. Web cams now have most of the niche markets and are much cheaper than a video-phone. The technology needs to be very cheap (35-100 dollars US), look decent (picture-wise), and be portable to even think about overcoming these social problems. Unless we become less private, it may never be technologically feasible to overcome the social problems.

[ Parent ]
Funny book (none / 0) (#118)
by goonie on Mon Dec 04, 2000 at 02:29:07 AM EST

Well, at least it was when I read it in 1995. Though, to be fair, his vision of wireless devices and ubiquitous communication wasn't too inaccurate, even if the timeline was a little optimistic.

Futurists have such an appalling track record at predicting how long inventions will take that it's not worth even trying any more. What is interesting and useful are people who can accurately predict the consequences of technology. Childhood's End, for instance, makes some fascinating comments about the consequences of DNA paternity testing and the oral contraceptive, and some of Clarke's 80's-era work mentioning global communications networks was pretty perceptive (though by then, of course, he'd probably seen and used BBSs and ARPANET).

[ Parent ]

minor quibble... (3.14 / 7) (#17)
by daystar on Thu Nov 30, 2000 at 08:21:50 AM EST

I keep seeing articles about nano-technology with the scientists saying that although their ideas are coming together at a fantastic pace, the technologies won't be available to the consumer for many years due to the high cost of production. Wonderful. The great capitalists do it again. Greed taking precedence over an actual advancement of the human race.


Wouldn't capitalists be MORE interested in getting new technologies to market? Is there anything in history that makes new toys available faster than capitalists?

--
There is no God, and I am his prophet.
Re: minor quibble... (none / 0) (#24)
by Delphis on Thu Nov 30, 2000 at 09:47:58 AM EST

Wouldn't capitalists be MORE interested in getting new technologies to market? Is there anything in history that makes new toys available faster than capitalists?

Well, yes and no ... Quite a lot of investors (the ones with the SERIOUS cash to put into these things) want to see a RETURN on their investment, and in order to do that they like tangible results, not just a scientist saying they're still working on it and it'll be a while before the breakthrough comes. Also, if an investor decides to pull out of the whole deal, it can make further investors even more skeptical, and the project fades.. all due to capitalism AND the short-sighted view a lot of companies take.

I don't have a solution to it. Though maybe slowing the 'trend' of day trading and moving investments about so much (i.e. buying and selling stocks so often) would help. I heard the Japanese hold stocks for a lot longer and truly INVEST in a company, not try to make a quick buck off it. Then those companies go on to invent wonderful new things. Honda's ASIMO robot is something recent that I thought was incredible. Although having said that, the Japanese economy took a nose-dive a while ago, didn't it? .. Ah well, I dunno how to solve it.. I'm not an economist.. just a programmer.


--
Delphis : For Pay Distributed Processing
[ Parent ]
capitalism is an imperfect solution... (none / 0) (#34)
by daystar on Thu Nov 30, 2000 at 11:07:03 AM EST

But all of the other solutions SUCK.

If investor squeamishness is a problem, that doesn't mean there's something wrong with INVESTORS, just that we need to be breeding stronger ones. Also, I think that a lot of the time we technical people get all cranky about a cancelled project when the investor was, in fact, RIGHT to cancel. Large, unwieldy solutions are rarely good ones. Small, incremental change is generally more successful.

I just thought the initial statement that capitalists were somehow SLOWING DOWN market action a little odd.

--
There is no God, and I am his prophet.
[ Parent ]
Robots and future presence (3.00 / 3) (#21)
by Dries on Thu Nov 30, 2000 at 09:15:28 AM EST

In drop.org's submission queue is an interesting write-up on future robots and their relation to humankind (original source: redherring.com). It is still in our submission queue (at the time I write this), so feel free to vote it up to the front page! :)

The article itself particularly deals with robots and future presence. According to Rodney Brooks, director of MIT's AI Lab, consumers and industry will be able to project themselves into different locations. A repair person, for example, could project himself or herself into your computer and repair your boiler from the office.

One quote is actually amusing:

"In battle, you can't have the commander hacking away in C++," states Mr. Brooks. "He has to be able to tell the robot, 'Go down the hill, take a look, and tell me what's going on.'"

Check it out.

-- Dries
http://drop.org/
-- Dries

Speculations (3.00 / 4) (#25)
by Alternity on Thu Nov 30, 2000 at 09:53:17 AM EST

Here is my timeline of the future... 2000:
- Everyone will drive flying cars.
- We will be able to take a space shuttle to go spend our vacations on Mars.
- The army won't use men anymore, but cyborgs 100 times stronger and tougher.
- We will only eat pills of concentrated nutrients.
You see my point. While these suppositions are interesting and can lead to good discussion, I think we should not take them too seriously. Don't get me wrong, I am not saying that technology has not progressed fast enough; I'm just saying that it took different directions.


"When I was a little kid my mother told me not to stare into the sun...
so one day when I was six I did
"
Fyi... (3.50 / 4) (#29)
by 11223 on Thu Nov 30, 2000 at 10:36:32 AM EST

This isn't just an essay, but an entire book called The Age of Spiritual Machines. It's quite a good read. Besides, Kurzweil has an excellent track record for not just predicting things but working to make them happen as well. While I don't know about the timeline, it will happen eventually. What's a couple of hundred years among friends, anyway?

--
The dead hand of Asimov's mass psychology wins every time.

Economic reality (4.10 / 10) (#32)
by Global-Lightning on Thu Nov 30, 2000 at 10:51:05 AM EST

Kurzweil's outline fails to take into account economic realities and the reaction current institutions will have towards new technologies.

In particular, in 2030 he predicts that "Human employment is almost nil. Machines do labor while humans are free to enjoy all of life." The same fallacy has been predicted for every new technology since the invention of the steam engine. The reality is that every new exploitable technology is incorporated into the economy to increase productivity and efficiency, not to ease the workload on personnel. What changes is the nature of the job, while the time and resources dedicated to work aren't reduced. We see this in the progression from farming and crafts up to the 18th century, the industrial revolution and manufacturing in the 19th century, then the rise of clerical and later information technologies in the 20th.
Furthermore, he bypasses the current global economic imbalance. Virtual reality technology will mean absolutely nothing to impoverished nations that are still struggling to get a reliable telephone infrastructure in place...

His utopian vision doesn't take into account less noble uses of new technologies. For example, every one of the advances he foresees has great military value. The advances in interface devices can be used to create more accurate weapon systems. Bio-engineering technology will create more lethal and precise pathogens for waging biological and chemical warfare. Imagine a nanotech "cloud" that envelops an adversary and molecularly dissipates anything trapped inside.
Speaking of nanotech, what if some future cracker creates the first (and maybe last) uncontrollable, self-replicating carbon-based nanobot?

It may be cliche, but this is another example of "Don't apply a Star Trek solution to a Babylon 5 problem."

At some point in time... (none / 0) (#115)
by cryon on Sat Dec 02, 2000 at 03:00:32 AM EST

Things that have been predicted because there are reasons for them to come to pass (i.e., humans desire that they come to pass), but that have not yet done so, will probably come to pass at some point in the future; the question is when. So pointing out that past predictions have not come to pass really means little, although it resonates emotionally.
HTGS75OBEY21IRTYG54564ACCEPT64AUTHORITY41V KKJWQKHD23CONSUME78GJHGYTMNQYRTY74SLEEP38H TYTR32CONFORM12GNIYIPWG64VOTER4APATHY42JLQ TYFGB64MONEY3IS4YOUR7GOD62MGTSB21CONFORM34 SDF53MARRY6AND2REPRODUCE534TYWHJZKJ34OBEY6

[ Parent ]
Why we haven't been back to the Moon (3.30 / 10) (#33)
by baberg on Thu Nov 30, 2000 at 11:06:08 AM EST

It's really quite simple why the U.S. hasn't been back up to the moon (or why any other countries haven't gone there yet). There's no reason to. The only reason the U.S. wanted to get to the moon was because of Cold War propaganda that basically said "Sputnik?!?! Oh no! We can't let those Russians take control of space, we have to make it American and prevent the other planets from falling to Communism!". Hence the incredible amount of money spent to put a U.S. team on the moon before any Russians got there.

Since there is no Cold War to win, the U.S. has lost its interest in space exploration and colonization. It largely comes down to a monetary issue (doesn't everything?). Senators in the late 1960s knew that the U.S. just had to beat those "ignorant red commies" to the punch and "secure the universe for democracy" (for the record, this is sarcasm). But once that was accomplished, nobody really cared anymore. Now senators only care about getting re-elected and protecting their special interest groups (but don't get me started...)

These predictions for the future always bring a smile to my face. Things don't change half as fast as "visionaries" expect, and they certainly don't change in the ways "visionaries" expect. I mean, honestly: aside from computers and the Internet, with some advances in medicine, how has life changed in the past 20-30 years? We still drive cars, we still war over petty borders, and we still go to work day after day after day.

The more things change, the more they stay the same.

Well... (3.50 / 2) (#46)
by Zeram on Thu Nov 30, 2000 at 01:01:01 PM EST

People are much more in touch with each other than ever before. One of the biggest changes in life in the past 20-30 years is the fact that people can easily stay in contact with each other 24/7 without being physically together. Which has fundamentally changed (and really is still changing) the way people live.
<----^---->
Like Anime? In the Philly metro area? Welcome to the machine...
[ Parent ]
my impression is the opposite (2.00 / 1) (#65)
by speek on Thu Nov 30, 2000 at 05:17:13 PM EST

It seems to me people are less in touch with each other than ever before. I guess it depends on how you look at things.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

oh... (none / 0) (#72)
by chrisbolt on Thu Nov 30, 2000 at 09:58:45 PM EST

I thought it was because we found out it wasn't made of cheese...

---
<panner> When making backups, take a lesson from rusty: it doesn't matter if you make them, only that you _think_ you made them.
[ Parent ]
Cheese (none / 0) (#77)
by baberg on Fri Dec 01, 2000 at 01:31:18 AM EST

Shhhhh... That's our little secret. See, I plan to confuse everybody on K5 by talking like I actually know something...

You see, it really IS made of cheese... it's below the powdery top layer, which I call "topsoil". Beneath that, there's layers upon layers of stuff I call "cheese".

Sorry to throw so many technical terms at you, but I've been thinking about this a lot, and I just KNOW that there's cheese up there. I like cheese.

[ Parent ]

You forgot about the hormones... (2.00 / 3) (#38)
by Luke Scharf on Thu Nov 30, 2000 at 11:55:52 AM EST

Humans aren't just intellectual beings. We're very much animal-like, too.

Ok, so the machines are doing all the work. They have rights, so I can't just smack them into line like I do now. They can beat me in a philosophical discussion.

What would I do with my time? Being a young guy, I think I'd spend most of my time trying to make more people. ;-)

I think our basic nature will keep us from becoming machines. In the happy view of humanity[0], we all want to care about family, enjoy friends, and try to make a living. If making a living becomes easier, that's nice. It won't change human nature any more than the automobile has; it won't change it at all.

[0] I'm in a good mood right now.



The Rich won't allow it to happen (2.33 / 3) (#57)
by Nelson Sandalwood on Thu Nov 30, 2000 at 04:04:40 PM EST

If making a living becomes easier, that's nice. It won't change human nature any more than the automobile has; it won't change it at all.

The Rich folks of this world won't allow it to happen. Sure, nano will arrive, but a world where all people have everything they could want won't arrive. If it did, how would the mega rich be any different from you and me?

If I could have my house redecorated by simply saying "Computer, reconfigure the house for the summer setting we had last year", then what use has Bill Gates for his billions?

[ Parent ]
"The Rich" are a caste - WTF? (4.50 / 2) (#59)
by Luke Scharf on Thu Nov 30, 2000 at 04:28:03 PM EST

The Rich folks of this world wont allow it to happen, sure Nano will arrive, but a world where all people have every thing they could want wont arrive.

Well, they could get richer by selling this stuff. It's not like they're a conspiracy or anything.

If it did how would the mega rich be any different from you and me.

Are you suggesting that a rich man is different from me? The only reason I'm not "mega rich" is that I haven't worked long enough or hard enough, and haven't come up with the one thing that a lot of people buy but never thought of (e.g. Windows). I take offense at being put into a caste. BTW, I'm an opinionated American with a Libertarian bent. :-)

If I could have my house redecorated by simply saying "Computer, reconfigre the house for the summer settting we had last year" then what use has Bill Gates for his billions?

Well, he could buy stuff that he wants. I mean, he earned the money just like everyone else[0]. Just like everyone else, he can buy goods & services.

Silicon Valley is built on the fact that businesses will develop a better product to outsell the competition. Someone is selling a 4MHz processor. I develop a 6MHz processor that does the same thing and costs less. Gee, I think I'll sell it and make some money. I'm not going to let the 4MHz guy sell his processor out of any sort of "we're both rich" stupidity.

There are good reasons why there won't be a world where we just get everything we want, but a conspiracy by the rich (at least in America) isn't one of them.

[0] Granted, he's much better at earning money than I am. But it's possible that I could do the same thing, given the motivation.



[ Parent ]
Some truth to that.... (none / 0) (#114)
by cryon on Sat Dec 02, 2000 at 02:57:14 AM EST

Wealth is basically relative, and those who have it now will use it to stay ahead; that is the way the game is played by social animals everywhere in the animal kingdom.
HTGS75OBEY21IRTYG54564ACCEPT64AUTHORITY41V KKJWQKHD23CONSUME78GJHGYTMNQYRTY74SLEEP38H TYTR32CONFORM12GNIYIPWG64VOTER4APATHY42JLQ TYFGB64MONEY3IS4YOUR7GOD62MGTSB21CONFORM34 SDF53MARRY6AND2REPRODUCE534TYWHJZKJ34OBEY6

[ Parent ]
Lacking biological contact (2.25 / 4) (#41)
by Knile87 on Thu Nov 30, 2000 at 12:10:36 PM EST

Are we going to reproduce solely in vitro? And what about our mother's touch? There are some things our bodies, minds, and souls require to function.
You can kiss a computer, but it won't kiss back.

"We're all on a big ship! We're on a big cruise, across the world!" -- Iowa Bob, in Hotel New Hampshire


my own prediction (2.50 / 2) (#58)
by GreenCrackBaby on Thu Nov 30, 2000 at 04:18:19 PM EST

You can kiss a computer, but it won't kiss back... yet.

[ Parent ]
Venture capitalists (2.66 / 3) (#43)
by Zeram on Thu Nov 30, 2000 at 12:53:43 PM EST

Venture capitalists seem to love tech stuff right now; I read an article on CNET the other day that said something like $20 billion will be raised by venture capitalists to be funneled into tech startups.

As much as I agree that society at large will have a hard time swallowing new technologies like Nanobots and machine intelligences, the money is certainly there in the private sector.
<----^---->
Like Anime? In the Philly metro area? Welcome to the machine...
Human supremacy to machines (3.00 / 4) (#48)
by dj@ on Thu Nov 30, 2000 at 01:19:05 PM EST

I don't like to fashion myself a supremacist, but I really feel that humans always have been and always will be supreme where it matters, in areas such as compassion, courage, love, kindness, humility, caring, creativity, grace, etc.

Since the challenge these days is to differentiate ourselves from the tasks that machines can take over, this might push the drive for enhancing the more human qualities in us. Machines definitely have the knack in areas like automation and brute force, but who wants to hold on to those anyway?

Finally, I think we will definitely learn a great deal about ourselves as we develop more sophisticated machines. We will probably "borrow" many elements from the natural design of the human in designing machines, and the end result will be to appreciate just how incredible the human body, mind, and soul are. Even given unlimited time, we could not produce something even slightly as sophisticated.

The problem (3.75 / 8) (#49)
by Zeram on Thu Nov 30, 2000 at 01:27:52 PM EST

Is that the author of the article really does an injustice to Ray Kurzweil by not completely posting his thoughts. In "The Age of Spiritual Machines" Kurzweil does actually address the fact that computers are basically autistic: capable of great feats of math, but not really capable of even simple human feats. His thought is that once we hit rock bottom on modern processor technology, whatever we use to replace the logic gates of today will have a sort of co-processor that is capable of "fuzzy" human-type actions (essentially, it'll be capable of pattern matching).

And that is where the "processing power of the human brain" idea comes in. It becomes relevant because computer storage media will have sufficient space to actually record the "information" stored inside a human brain. And before you pick me apart, #1 it's not my concept, and #2 the idea is to replicate the firing of the neurons. Digitally recreating a human intelligence would require a large amount of storage and a processor capable of the speed of thought we are accustomed to. And really, that is the point of saying "A $1000 (inflation notwithstanding) machine is roughly equivalent to the computational power of the human brain": it's a segue to AI and the uploading of wetware into hardware.
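The storage half of that claim is easy to ballpark. Here's a minimal sketch; the neuron and synapse counts are common textbook order-of-magnitude figures, not numbers from Kurzweil's essay, and the one-byte-per-synapse assumption is purely illustrative:

```python
# Back-of-envelope estimate of the storage needed to record the
# connection strengths of a human brain. All constants are ballpark
# assumptions, not measured values.

NEURONS = 1e11              # ~100 billion neurons (common ballpark)
SYNAPSES_PER_NEURON = 1e3   # ~1,000 synapses per neuron (common ballpark)
BYTES_PER_SYNAPSE = 1       # assume one byte per connection strength

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e12:.0f} TB")  # prints "100 TB"
```

Whatever the exact constants, the point stands: the number is large but finite, which is why the argument hinges on storage and processing catching up rather than on any qualitative barrier.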
<----^---->
Like Anime? In the Philly metro area? Welcome to the machine...
Has anyone else noticed... (3.83 / 6) (#50)
by dennis on Thu Nov 30, 2000 at 01:37:17 PM EST

...that they're assuming processing power doubles every year? Last I heard, Moore's Law was still holding at 18 months, which would give us 15 years to improve by a factor of a thousand.

This timeline also makes pretty low assumptions about the processing power of the human brain. There are some indications that the processing power within individual neurons is several orders of magnitude higher than previously assumed.
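The arithmetic behind the 12-versus-18-month quibble is easy to check. A minimal sketch (illustrative only; the two doubling periods are the ones debated here, not measured constants):

```python
# How much processing power grows over a horizon, for a given
# Moore's-law doubling period.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative increase after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# At an 18-month (1.5-year) doubling, ~15 years gives a thousandfold gain:
print(growth_factor(15, 1.5))  # 2**10 = 1024.0
# At the 12-month doubling the timeline seems to assume, the same 15 years
# gives a factor of 32768 -- a 32x discrepancy:
print(growth_factor(15, 1.0))  # 2**15 = 32768.0
```

So whether the doubling period is 12 or 18 months changes the endpoint of a 15-year prediction by more than an order of magnitude, which is exactly why the assumed rate matters so much to these timelines.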

Don't forget about time to market (4.00 / 2) (#56)
by Wah on Thu Nov 30, 2000 at 04:04:27 PM EST

If you're predicting major social/cultural changes, the processing power has to propagate to a large enough percentage of the population before any real effects will be felt. The effects of broadband net access on the overall media spectrum are just starting to be felt. People still drive 40-year-old cars. New technology advancements "that cushion the impact of accidents with near-instantaneous pillows of air, eradicating serious and fatal car injuries" are lost to these people.

Personally, I think people are underestimating the complexity of the brain, as well as its storage capacity. Not to mention the incredible variation between individuals. But then again, massive information storage and brute-force searching (like the Deep Blue memorizing all of Kasparov's game history example mentioned in another post) will allow a convincing illusion of intelligence, which is probably as far as many people want to go.
--
Fail to Obey?
[ Parent ]
I hear that (3.66 / 3) (#70)
by spezz on Thu Nov 30, 2000 at 08:02:28 PM EST

We always say that we'll make machines with X times the processing power of a human brain.

How are they benchmarking this?

I'm misquoting and I can't cite a source (you want vague? I got it), but some sciency type person said something to the effect of:

"if our brains were simple enough for us to understand, we wouldn't be bright enough to try to understand them in the first place"

[ Parent ]

Computational Power of the Human Brain? (3.40 / 5) (#52)
by Khedak on Thu Nov 30, 2000 at 01:49:24 PM EST

As people have pointed out, the scaling may be a bit off. That's 18 months, not 12.

Anyone who saw me whining in the Programming Language and Language discussion knows that I think (just an opinion, don't ask me for a proof) that human minds aren't Turing machines, so how exactly do you calculate the processing power of the human brain?

He says that about the same time machines become 1000 times more powerful than a human brain, they will claim to be conscious. Whatever that means. So they aren't conscious when they're of equal power to a human brain? Then what do you mean by equivalent to a human brain?

Also, please don't make fun of capitalists for withholding nanotechnology because of its high production price. There's plenty of more important stuff (like medical research and plain old food production) that capitalism withholds from the needy. Asking for nanotech too makes us look whiny (by "us" I mean those concerned with greed destroying human life). :)

Greed (3.00 / 3) (#54)
by trhurler on Thu Nov 30, 2000 at 03:03:32 PM EST

You're a real piece of work. Medical research, being held back? Are you on drugs? Corporations fight tooth and nail to overcome GOVERNMENT obstacles to bringing as many medical discoveries as possible to as wide an audience as they can, every single day. A single new drug typically costs several hundred million dollars to bring to market, largely because of the government; who are you to tell the people who put that kind of money into it that they now have to give the drug away, thereby losing those hundreds of millions and going out of business, never again producing a new drug, and very quickly ensuring the death of medical science? This argument is all the more compelling when you actually go and read about the pricing policies of the various drug companies; typically they sell drugs in poorer countries for a tiny fraction of what they charge in the US, and they give away huge quantities as well.

The food argument is similarly stupid; the US is the closest thing we have to capitalism, and it isn't really capitalist, but notice that the US is the largest exporter and sells at the lowest prices, and gives away more food than all other nations combined. Notice that it does this because otherwise, it could not maintain food prices high enough to pay its bills in the US, which is a result of capitalist EFFICIENCY. Notice that the government, not the capitalists, lessens that effect by buying up a lot of food and letting it rot deliberately.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Thanks for your opinion! (4.00 / 4) (#60)
by Khedak on Thu Nov 30, 2000 at 04:31:05 PM EST

Regarding the withholding of food, the government does that for the capitalists. The government intervenes on behalf of capitalists, because the richest people with the most power, those making decisions, are using the government to direct resources and protection to them. You would say "ah-hah, those people are not true capitalists", but so what? Are you saying that if it weren't for the pesky government buying food, corporations would give food away? No, they wouldn't, they'd probably extort people into indentured servitude in exchange. Oh wait, that's already happened in history, so either they have to do it elsewhere or with slightly different rules. The whole point of capitalism is to exploit the rest of the world for everything you can make it give you.

US food production (and production of everything else for that matter) isn't better because we're more efficient, it's because we cheat and break the rules and exploit other countries for all they're worth, through force and threat of force where necessary (as we have throughout history). Colonization stopped for one reason: no more places to colonize. We have the resources because our capitalist elite and our government collaborate to keep our country on top, by sabotage, insurrection, terrorism, tariff, scandal, murder, theft, poisoning, pillage, rape, and of course, market capitalism.

And yes, I never said companies weren't pursuing medical research as quickly as they are allowed. What I said was that they withhold their results from the public, sometimes because they don't see a potential market for a drug that takes a single dose to cure a disease. A drug that takes several hundred doses over a lifetime, ah, that is preferred. Unfortunately, it takes a lot of research to find the difference. I don't doubt the research is happening. But I do doubt that companies are fighting to save lives. They're fighting to earn cash, and if they want to withhold information that could save human lives because it'll make their stock jump three-fourths of a point higher, they'll do it. Your example doesn't even make sense:

A single new drug typically costs several hundred million dollars to bring to market, largely because of the government; who are you to tell the people who put that kind of money into it that they now have to give the drug away, thereby losing that hundreds of millions and going out of business, never again producing a new drug, and very quickly ensuring the death of medical science?

So, you're telling me that if that company decides their drug costs too much to produce to be profitable, and doesn't produce it, that's for the benefit of medical science? Because if they release (even one) drug based on potential for treating disease and not based on profit, they'll go out of business? Even if everyone plays by the same rules? This will end medical science? So the system as it stands is perfect? None of that follows.

The government may be evil too, but that doesn't mean we should give the capitalists free rein. The only thing that keeps them from being just as greedy is the fact that they can simply have the government do it for them. Duh?

[ Parent ]
More capitalism (3.00 / 7) (#66)
by trhurler on Thu Nov 30, 2000 at 05:26:17 PM EST

Regarding the withholding of food, the government does that for the capitalists.
You'll be happy to know that if the government worked the way I think it should, this would not be possible.
The whole point of capitalism is to exploit the rest of the world for everything you can make it give you.
There are a great many of us capitalists who do not think so. Notice that the US never needed any kind of government social programs until income taxes stunted the economy and government protectionism caused the stock market to crash. It wasn't that people were unkind, but rather that they were so kind that there would have been no point to government programs.
US food production (and production of everything else for that matter) isn't better because we're more efficient, it's because we cheat and break the rules and exploit other countries for all they're worth,
So then we AREN'T the world leader in developing new farming techniques, or in leveraging economies of scale in agriculture. I see. I suppose there's a secret cult in the Midwest here where I live that I've never heard of that enslaves foreigners and makes them work the fields to keep prices down. This is the stupidest argument I've ever heard.
What I said was that they were withholding their results from the public, sometimes because they don't see a potential market for a drug that takes a single dose to cure a disease. A drug that takes several hundred doses over a lifetime, ah, that is preferred.
Can you cite an example, or is this just the 100-mpg gasoline engine urban legend differently clothed and spewed forth all over again? Making up highly improbable fairytales to support your point of view isn't going to help you any.
So, you're telling me that if that company decides their drug costs too much to produce to be profitable, and doesn't produce it, that's for the benefit of medical science?
The cost of a drug is already sunk by the time you decide whether or not to produce it; that was my point. Companies ALWAYS want to produce them, because every one they sell, even if it isn't enough to break even, is less loss they take. You apparently know absolutely nothing about the economics of medical research; I suggest you get an education before continuing to carry on about how drug companies are withholding miracle cures to make profits off of people who don't have any money anyway.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
My Response (4.00 / 4) (#69)
by Khedak on Thu Nov 30, 2000 at 07:33:27 PM EST

You'll be happy to know that if the government worked the way I think it should, this would not be possible.

Um, thanks? Oh, how about: irrelevant.

There are a great many of us capitalists who do not think so. Notice that the US never needed any kind of government social programs until income taxes stunted the economy and government protectionism caused the stock market to crash. It wasn't that people were unkind, but rather that they were so kind that there would have been no point to government programs.

Okay, so are you talking about pre-Depression, then? You're right, before the Depression nobody starved and nobody was exploited. Oh, except that the Native American population had been subdued by this time, so the government was free to do with the land as it pleased. And of course inequality was a fact of life, with women and non-whites having fewer freedoms than white males. Children were being exploited for labor in the United States (as elsewhere), and this is about the time we started 'convincing' Latin American countries to support us (Cuba, Panama, Mexico). You're right, before the stock market crash this was truly a utopia. For the rich white Christian males, maybe.

So then we AREN'T the world leader in developing new farming techniques, or in leveraging economies of scale in agriculture. I see. I suppose there' s a secret cult in the Midwest here where I live that I've never heard of that enslaves foriegners and makes them work the fields to keep prices down. This is the stupidest argument I've ever heard.

Well, if the Midwest hadn't been stolen from the Native Americans (during the pre-stock-market golden age you're so fond of), they wouldn't have the land to farm at all. And in the South, slavery up until the Civil War is what enabled the plantation owners to get rich and establish the agricultural economy. Try some historical context. And as I said, think of more than just food production.

Can you cite an example, or is this just the 100mph gasoline engine urban legend differently clothed and spewed forth all over again? Making up highly improbable fairytales to support your point of view isn't going to help you any.

I exaggerate with the "one pill cure" example, but it's not far off. Here's a link to some information on what is being done, for example, with possible treatments for HIV. This is just one example that I got with a quick search on Google. And regardless, even in a hypothetical situation, the argument stands.

The cost of a drug is already sunk by the time you decide whether or not to produce it; that was my point. Companies ALWAYS want to produce them, because every one they sell, even if it isn't enough to break even, is less loss they take. You apparently know absolutely nothing about the economics of medical research; I suggest you get an education before continuing to carry on about how drug companies are withholding miracle cures to make profits off of people who don't have any money anyway.

Uh, if you think that economics is that simple, you're wrong. Read the article provided. Companies don't just "sell drugs"; they use political leverage and every other means available to ensure the greatest profits, which was my point. "Every one they sell is less loss for them" is true, but the company has all the time in the world to negotiate the most profitable trade agreement, while people suffer and die in the meanwhile. If you think that companies always sell their product for a fair price as soon as they can, and never discontinue products or decide not to release products, even after considerable research investment, then that shows "You apparently know absolutely nothing about the economics of medical research," because it's a demonstrable fact that "companies are withholding miracle cures to make profits off of people who don't have any money anyway." Here are some more references:

This one talks about inflated drug prices in the United States.
This one talks about how the United States' pharmaceutical industry is dealing with post-Apartheid South Africa.
This one talks about the environmental impact of biotechnology patents. This is a little OT, so if you don't really care about environmental issues, just skip this one.
Here is some more information about possible AIDS treatments and donation spending, with lots of specifics.
And of course, here is a good lecture by Chomsky on pretty much the general topic we've been discussing: The United States as a capitalist power in world affairs, as related to our current (and past) prosperity.

If you want more references, I think you can find the sources yourself. This was just to help you get started.

[ Parent ]
Irrelevant? I don't think so. Also... (3.00 / 7) (#71)
by trhurler on Thu Nov 30, 2000 at 08:17:36 PM EST

Um, thanks? Oh, how about: irrelevant.
If someone says "capitalism is good" and you disagree, and he then says that his idea of capitalism is different from yours, that isn't irrelevant.
Oh, except that the native american population had been subdued by this time,
I was talking about governmental social programs being unnecessary in an unregulated economy. I never said there weren't problems at the time. Do you have any ability to follow a logical argument whatsoever?!
Try some historical context.
Ok. Here's some historical context for you. Everyone's ancestors have done unconscionable things to other peoples' ancestors at one time or another, stealing land, raping, murdering, and so on. Therefore, by your logic, the whole planet is screwed and we should all just give up and die, because nothing we do can ever be right again. Or, we can give up on this inane doctrine of secular original sin you seem to be promulgating, admit that, as one example, I personally have never stolen from anyone or killed anyone, and try to move forward with life despite the past, which we cannot change no matter how horrific it might be. Which do you prefer?
And as I said, think of more than just food production.
Ok. The US pretty much invented mass production of goods, mechanical refrigeration, mass transportation, controlled distributed electrical power grids, and most of the other core technologies that underlie modern civilization. I'm not saying that individual people and companies haven't done things that were wrong; some certainly have. However, all too often, it was government influence rather than private clout that actually did the deed, and moreover, the fact that some people screw up, no matter how badly, is not an excuse to accuse everyone involved of being an evil, greed-driven monster. Your animosity towards all things productive is not rational.

By the way, the drug companies' policies are entirely made possible by ridiculous patent laws, which are a tool of statism, not capitalism. They derive lineage directly from old laws made by kings to help ensure loyalty, and they should be thrown out. If they were, then most of what you complain about simply could not exist. But hey, go on blaming businessmen for the failings of government; I'm sure it feels good to attack people who are more successful than yourself, doesn't it?

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
No Problems? (3.00 / 3) (#73)
by a humble lich on Thu Nov 30, 2000 at 10:36:42 PM EST

The US before the Depression needed "social programs" more than anything. There was a reason the Sherman Anti-Trust Act was passed. The reason there weren't social programs or government intervention wasn't that they weren't needed but that the politicians were all bought. The conditions of immigrant sweatshops were truly horrible. That is what started government intervention (with Teddy Roosevelt and the Progressives), not the Depression.

I would say that without the Roosevelts' intervention (I'm talking about both now) there was a good chance that we would have seen a communist/socialist revolution. Both Roosevelts said that they were interfering with business not because they were socialists but because they were worried that if something wasn't done there would be a class war.



[ Parent ]
History is not well covered by public schools... (3.25 / 4) (#93)
by trhurler on Fri Dec 01, 2000 at 12:09:11 PM EST

The US before the Depression needed "social programs" more than anything. There was a reason the Sherman Anti-Trust Act was passed.
The Sherman Act was passed by conservatives who were acting as cronies of businessmen who were failures and could not compete. If you actually read the history of the Sherman Act, you'll find out that every time it broke up an industrial trust, prices rose and quality dropped. Standard Oil, ALCOA, US Steel, all of them were more efficient and produced better products left alone than after Sherman Act pillaging. The Sherman Act makes not a single reference to protecting consumers; this is because the people who passed it were honest enough to openly admit that it was going to harm consumers, but that they believed that it was necessary to protect the "diversity of businesses" in the US. Of course, it is an open question whether they believed with their minds or with their wallets, and it seems likely to be the latter. You obviously learned history in a public school; I suggest you go and fill in the gaps they didn't bother to cover for fear that you might actually develop an intelligent point of view.
The reason there weren't social programs or government intervention wasn't that they weren't needed but that the politicians were all bought.
Every last one. Wow, that's impressive, since there were no political action committees back then and a great many candidates weren't really affiliated closely with any major political organization. Hint: TODAY they're all bought. Back then, there were some honest men here and there, albeit not as many as we'd all like.
The conditions of immigrant sweatshops were truly horrible.
And yet it was better than what they could do any other way, or they wouldn't have done it. It isn't like they were chained up and enslaved.
I would say that without the Roosevelts' intervention (I'm talking about both now) there was a good chance that we would have seen a communist/socialist revolution.
This is because you have little if any knowledge of the history of the communist movement in the US. It was immensely popular - amongst academic eggheads. The common man reviled it, and rightly so. The Roosevelts may or may not have thought themselves socialists, but they did socialism a greater favor in the US than anyone else ever has or hopefully ever will.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
the pseuds are out in force, I see (3.25 / 8) (#83)
by streetlawyer on Fri Dec 01, 2000 at 09:02:27 AM EST

statism, not capitalism

False dichotomy, and a clear sign of someone who's learned his politics and economics from alt.libertarian. "Statism" is orthogonal to capitalism, particularly in a capitalist state. A capitalist state passes patent laws which benefit capitalists. Stop pretending that your cranky laissez-faire is the One True Capitalism.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Look! Streetlawyer, wrong again! Amazing!!! (3.33 / 6) (#92)
by trhurler on Fri Dec 01, 2000 at 11:58:47 AM EST

False dichotomy, and a clear sign of someone who's learned his politics and economics from alt.libertarian.
I didn't even know there was such a group, because I don't waste my time on such things. Maybe you do, which would explain why your discourse is so content-free and flame-filled. As for capitalism, I'd say the problem here is that you probably learned about it from a macroeconomics professor.
"Statism" is orthogonal to capitalism, particularly in a capitalist state. A capitalist state passes patent laws which benefit capitalists. Stop pretending that your cranky laissez-faire is the One True Capitalism.
People who use the law to their benefit rather than respect the law as an independent arbiter of disputes are not capitalists. They are oligarchists at best, and thugs at worst. Yes, my laissez-faire IS the One True Capitalism, and until this century, everyone pretty much agreed on that statement. Only recently, with the need to have an "enemy" for communism, has the word capitalism been distorted to mean what the US does today.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
bollocks (3.42 / 7) (#94)
by streetlawyer on Fri Dec 01, 2000 at 12:09:13 PM EST

until this century, everyone pretty much agreed on that statement.

That's simply not true. Adam Smith didn't. David Ricardo didn't. Vilfredo Pareto didn't. And in this century, Friedrich (von) Hayek didn't. That's why they *argued* for laissez-faire as opposed to other forms of capitalism (corporatism, fascism, social market, autarchy). The first person to avoid all this hard work and just define capitalism as laissez-faire was Ayn Rand, and nobody outside her cult followed her in it.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Apparently, (3.75 / 4) (#95)
by trhurler on Fri Dec 01, 2000 at 12:17:59 PM EST

you know nothing about von Mises or any of his descendants, or perhaps you just don't care to talk about them. Smith, as I understood, thought that the other "variants" of capitalism had more in common with feudal guild systems or the licensing issued by kings in later times than with any sort of free market (and was right, if he thought so). The essential distinction of capitalism that makes it something other than oligarchy or rule by thugs is unfettered trade among people who own the goods they are trading; without this, capitalism would just be another word for some other system of economic and political positions, although which one depends quite heavily on exactly HOW you would change the definition, and nobody seems to agree on that. It could be oligarchy and fascism, it could be any number of other things, but it certainly lacks the element of liberty which makes trade a valuable concept at the individual level.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Quickly, this time. (3.20 / 5) (#101)
by Khedak on Fri Dec 01, 2000 at 01:41:31 PM EST

you know nothing about von Mises or any of his descendants, or perhaps you just don't care to talk about them. Smith, as I understood, thought that the other "variants" of capitalism had more in common with feudal guild systems or the licensing issued by kings in later times than with any sort of free market (and was right, if he thought so).

Adam Smith thought that division of labor and labor as capital would make humans "as stupid and ignorant as it is possible for a creature to become." That's from the Wealth of Nations.

The essential distinction of capitalism that makes it something other than oligarchy or rule by thugs is unfettered trade among people who own the goods they are trading;

Well, if you think government and market are totally separate, as you said before, then this sentence is a non sequitur. Oligarchy and thug rule are governments, and capitalism doesn't have anything to do with government in your view.

On the other hand, if you admit that governments and systems of trade go hand in hand, then it's still a non sequitur. How can you insist that free trade will end rule by the few or the powerful, when capitalism allows people to gain wealth and power more than any other system known? That's why Bill Gates is richer than the richest monarch of any country in the world. Since markets and governments aren't separate, it's clear that capitalism encourages centralization of power and resources. Pure capitalism is oligarchy, where the few = the rich.

without this, capitalism would just be another word for some other system of economic and political positions, although which one depends quite heavily on exactly HOW you would change the definition, and nobody seems to agree on that. It could be oligarchy and fascism, it could be any number of other things, but it certainly lacks the element of liberty which makes trade a valuable concept at the individual level.

It lacks the element of liberty which makes trade valuable? "It" refers, I believe, to capitalism without unfettered trade. So you assert that capitalism, without free trade, lacks the freedom which makes trade valuable. I'm not sure what your point is; this looks like more circular logic.

Actually, as I just said, either capitalism has nothing to do with oligarchy and fascism, if market and government are separate (which I assert they are not), or capitalism, being intertwined with government as it is, causes oligarchy (rule by the rich). It looks like you're trying to argue two contradictory things. Why don't you take some time off and contemplate what you're talking about?

[ Parent ]
An interesting lack of clarity on my part... (4.50 / 2) (#103)
by trhurler on Fri Dec 01, 2000 at 02:03:34 PM EST

Well, if you think government and market are totally separate, as you said before, then this sentence is a non sequitur.
Replace "are" with "should be" and you will likely cease to be confused as to my meaning. To the extent that so-called capitalists use the government to coerce people rather than respecting it as an independent arbiter of disputes, they are are not capitalists - they are thugs. It is proper that this should be made as close to impossible as can be managed - but the means of doing so is precisely NOT to regulate, because regulation is the power the men with money need in order to clobber one another; you first complain that the government is run by these men, and then you say you want it to regulate them, but what this means is that those of them who have the firmest control over it at any one point will "regulate" the others to their own benefit - a ludicrous situation. If there is no regulation, then there can be no interference by businessmen with political pull in the business of others using government force.

Most if not all of the rest of what you wrote hinges on this idea that businessmen will inevitably centralize power, which I have here disputed to some extent, so I think I should wait to reply to whatever you have to say in response to this instead of responding to it now. I feel no particular need to repeat myself senselessly, and I doubt you will benefit from it either.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
you don't understand your own subject (3.25 / 8) (#104)
by streetlawyer on Fri Dec 01, 2000 at 02:26:39 PM EST

To the extent that so-called capitalists use the government to coerce people rather than respecting it as an independent arbiter of disputes, they are not capitalists - they are thugs.

(actually, I learned political economy from Hayek's biographer, so I think I can be expected to know what I'm talking about). This is not true. You are smuggling "all things bright and beautiful" into your definition of capitalism. A capitalist is a capital owner, or one who believes in the system of production characterised by the private ownership of the means of production. There are laissez-faire capitalists, social market capitalists, fascist capitalists, corporatist capitalists and all manner of other varieties. Von Mises is entirely wrong in his Randite moods when he asserts otherwise, and indeed said the exact opposite in his earlier papers which won him his reputation.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Hayek's biographer? (3.20 / 5) (#107)
by trhurler on Fri Dec 01, 2000 at 02:54:54 PM EST

Who cares? First off, this is a blatant argument from authority. Second, the vast majority of famous people in any field of intellectual endeavor, among their other traits, happen to be -wrong.- And third, as yet, you have done nothing but argue semantics; it does not even matter whether we decide to call what I advocate "capitalism" or some other term, except insofar as we must both know what it is that I am advocating. In the end, you still will need actual arguments rather than name games if you want to demonstrate that I am incorrect.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
You are incorrect (3.00 / 2) (#110)
by Khedak on Fri Dec 01, 2000 at 03:40:00 PM EST

Who cares? First off, this is a blatant argument from authority.

Maybe, but it was a parenthetical remark and not the thrust of his argument.

Second, the vast majority of famous people in any field of intellectual endeavor, among their other traits, happen to be -wrong.-

If you say so. Regardless, that's why argument from authority is a fallacy. Fortunately, he didn't argue from authority, his argument has its basis elsewhere, notably outside the parenthetical opening comment.

And third, as yet, you have done nothing but argue semantics; it does not even matter whether we decide to call what I advocate "capitalism" or some other term, except insofar as we must both know what it is that I am advocating. In the end, you still will need actual arguments rather than name games if you want to demonstrate that I am incorrect.

Okay, so you admit then that your definition of capitalism doesn't match any of the experts you're attempting to cite. As long as we're clear on that, sure.

[ Parent ]
Experts (5.00 / 1) (#111)
by trhurler on Fri Dec 01, 2000 at 05:03:33 PM EST

I don't base my arguments on experts. "Streetlawyer" does. I base my arguments on reasons. Occasionally I quote or cite a particular work if I think the wording is particularly good or the subject is so broad that I cannot possibly post a reasonable explanation of it all, but desire to point at one.

At any time, either of you is welcome to quit arguing over what words mean and start arguing things that matter. I don't expect that I have to worry about it actually happening.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
OK, let's hear your reason (2.33 / 3) (#119)
by streetlawyer on Mon Dec 04, 2000 at 03:12:09 AM EST

When you know that there is a perfectly good term for "laissez-faire", (clue: it starts with "l"), why do you persist in referring to it as "capitalism"? Here are my three reasons for criticising that:

1. The crucial difference between capitalism and other systems has nothing to do with liberty and everything to do with ownership. If you are advocating capitalism and libertarianism, that is two positions, not one, and you should not conflate them.

2. By using the word "capitalism", you are often tempted to refer to important thinkers and to successful economies as if they were examples of laissez-faire when in fact they are not.

3. It is the accepted meaning of the word. When the issue is the meaning of a technical term of academic discourse, it is simply not possible for "the vast majority" to be wrong.

And finally, I'll note that trotting out dull lists of rhetorical fallacies, as if they were items on a loading docket, is a true mark of a pseud.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
Actually (4.00 / 1) (#108)
by Khedak on Fri Dec 01, 2000 at 03:26:13 PM EST

You give me little to reply to, but here you go:

If there is no regulation, then there can be no interference by businessmen with political pull in the business of others using government force.

So you're saying that government should not use force except to prevent others from using force, a minarchist point of view. You are an objectivist, aren't you? Or don't you realize that force isn't the only means of coercion? You can starve people by buying their food, you can leave them to freeze to death by withholding electricity or gas, you can trick them into dangerous situations, and many other possibilities. Most of these are easily within your reach if you have some money. That's why regulation exists, because people aren't stupid. The reason regulations are exploited, on the other hand, is greed, but that doesn't mean that lifting regulations will end greed. To argue otherwise doesn't follow.

And you say briefly that capitalism doesn't centralize power. Well, that's not true. Power is roughly equivalent to money in a pure capitalist state, and, well, the people with the most money are the people who have all the same requisites as the most savvy "honest" businessman you can think of. Except that on top of that they are willing to rape, murder, cheat, steal, rob, lie, extort, bribe and defraud and are lucky enough to get away with it. That's why capitalism favors a lack of morals, because a lack of morals allows you to acquire a greater amount of profit. It's really simple.

In general, people don't like this, and so we try to regulate the market to prevent this sort of thing. Of course, corruption turns up in the government regulation too, so that's not an ideal solution, but it's dumb (sorry, I mean non-sequitur) to suggest that without regulation greed would vanish.

[ Parent ]
yet more nonsense (3.33 / 6) (#106)
by streetlawyer on Fri Dec 01, 2000 at 02:37:33 PM EST

Smith, as I understood, thought that the other "variants" of capitalism had more in common with feudal guild systems or the licensing issued by kings in later times than with any sort of free market(and was right, if he thought so.)

You're utterly wrong. Have you read "Wealth of Nations"? Even Smith's famous "Invisible Hand" passage makes it clear in context that he was speaking of localised markets in which all participants were bound by social codes.

The essential distinction of capitalism that makes it something other than oligarchy or rule by thugs is unfettered trade among people who own the goods they are trading;

The essential feature of capitalism is the private ownership of the means of production. Oligarchy is a natural tendency of capitalism.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

god you're ignorant (2.70 / 10) (#84)
by streetlawyer on Fri Dec 01, 2000 at 09:07:29 AM EST

The cost of a drug is already sunk by the time you decide whether or not to produce it; that was my point.

As a general statement about the pharmaceuticals manufacturing business, this is incredibly ignorant.

Notice that the US never needed any kind of government social programs until income taxes stunted the economy and government protectionism caused the stock market to crash.

As a statement about American monetary history, this is incredibly ignorant.

It wasn't that people were unkind, but rather that they were so kind that there would have been no point to government programs.

As a statement of American social history, this is incredibly ignorant

So then we AREN'T the world leader in developing new farming techniques, or in leveraging economies of scale in agriculture. I see.

You very certainly aren't. Europe is on a per agricultural worker basis, SE Asia on a per cultivated hectare basis. "Leveraging economies of scale in agriculture"! My sainted aunt. Agriculture canonically has no economies of scale. What you mean is "having big farms". Incredibly ignorant.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Let's have a look at what you said... (3.00 / 5) (#90)
by trhurler on Fri Dec 01, 2000 at 11:41:56 AM EST

I've quoted every line in the post so that people don't have to go back and read your highly overrated tripe in its original "glory."
As a general statement about the pharmaceuticals manufacturing business, this is incredibly ignorant.
Really? That's interesting, since it costs hundreds of millions to get a drug past the trials that determine whether it is even feasible to bring to market, and generally only a few million to produce it in such quantity that it can be sold worldwide at whatever price you set. Of course, you don't bother to provide ANY backup for your insult, which I've noticed is pretty typical for you.
As a statement about American monetary history, this is incredibly ignorant.
So then what you're saying is that either we had government social programs prior to 1900 (false), or that there was widespread homelessness and/or starvation (false), or what? Again, you provide no argument, no citations, and no anything else; you're just saying what you know many people will agree with, presumably to get a post with a high rating or somesuch stupidity.
As a statement of American social history, this is incredibly ignorant
Ok, so you're saying that the US at that time did not have the highest per capita, adjusted-for-income charitable donation rates in human history (false). Again, I've snipped nothing; you literally provide no response except to blather about how I'm wrong.
You very certainly aren't. Europe is on a per agricultural worker basis, SE Asia on a per cultivated hectare basis. "Leveraging economies of scale in agriculture"! My sainted aunt. Agriculture canonically has no economies of scale. What you mean is "having big farms". Incredibly ignorant.
Europe has more agricultural workers because it is less efficient; France in particular drags Europe down the tubes by legally enforcing methods of production that are considered fossils in the US. SE Asia has more land cultivated, and yet somehow, they produce a lot less - imagine that. And yes, just as in ANY material production and distribution effort, there are MASSIVE economies of scale in farming; this is a good part of why Americans have such a ridiculous surplus of food despite having less farmable land than either Asia or Africa and not much more than Europe. There are economies of scale in the use of larger, more efficient machines. There are economies of scale in a flatter distribution model, because you largely eliminate the accumulation stage. There are economies of scale in being able to buy supplies in volume. There are economies of scale in having more clout when you sell larger quantities. There are economies of scale in greater degrees of automation. There are economies of scale in industrial quality control techniques. You obviously have absolutely no clue about what you're saying, so please, please, either find out a bit more about just WHY the US produces so much more food both per worker AND per unit land than anyone else or else quit calling people ignorant, because it obviously is you who are ignorant.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Everyone can pretty much see the facts here (3.62 / 8) (#96)
by streetlawyer on Fri Dec 01, 2000 at 12:20:02 PM EST

1. For some drugs, most of the cost is in trials, for some, most of the cost is in manufacturing. A number of major heart disease drugs have been approved but are not produced because there is no way to synthesise the active compounds in commercial batches. Typically, you need to reduce the manufacturing process to no more than four steps for production to be economic; this is often a far more difficult feat than merely detecting and testing a compound in a lab.

2. Your ignorant claim was that government protectionism caused the great crash of 1929. Since the crash was comparable in size to numerous previous stock market crashes, and since the catastrophic effects of the crash were directly attributable to *lack* of government regulation, this is ignorant.

3. Your statement that America was able to provide for its unemployed and/or destitute to any satisfactory degree via private charity during the 19th century is directly contradicted by contemporary eyewitness accounts.

4. Europe has fewer agricultural workers than the USA, not more. Southeast Asian agriculture is more productive per cultivated hectare, not less (Thailand, for example, is the world's third largest exporter of food).

5. At least two of the things you describe as "economies of scale" (purchasing power and seller's power) have nothing to do with the production process. This leads me to the inescapable conclusion that you don't know what you're talking about.

etc, etc, etc. You're dull.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]
I'm starting to think I'm arguing with a bot... (3.20 / 5) (#97)
by trhurler on Fri Dec 01, 2000 at 12:49:02 PM EST

For some drugs, most of the cost is in trials, for some, most of the cost is in manufacturing.
If the drug cannot be manufactured, then there is no "decision" to be made. If it can be manufactured (not synthesized in tiny quantities, since I know you apparently don't know the meanings of English words, but "manufactured"), then it will be cheaper to do so than it was to get past the FDA. If you can point out a single exception, I'll gladly agree that this isn't _always_ true, but it certainly is the general case, and if you inspect the procedures required to actually get a drug past even the initial, much less the final, human trials, you'll see why. This doesn't even take into account the losses a company accrues while the FDA sits on its thumbs for months or years before actually even reviewing a given drug's application.
Since the crash was comparable in size to numerous previous stock market crashes,
You've been reading public school textbooks again. Here's a couple of little tidbits they didn't mention: First, overconfidence in financial markets was directly attributable to government sponsored "insurance" and "regulatory" efforts. Second, the government made the Depression orders of magnitude worse than it had to be by attempting to use "monetary policy" and taxation to abate it. Third, previous stock market crashes, regardless of their size, did not totally wreck the US economy.
Europe has fewer agricultural workers than the USA, not more.
Total, -maybe,- although I doubt it. Per unit produced, I know it is false.
Southeast Asian agriculture is more productive per cultivated hectare,
If you only produce grains and such to the point that your agriculture does not provide a reasonable diet, then yes, you can get more out of your land in terms of mass. However, US grain/rice/etc. farming, which is what you're talking about in almost all of Asia (Thailand in particular), is considerably more efficient than anyone else's; the fact that we also grow other things is a reflection of the fact that you only need so much carbohydrate... Notice that Thailand also imports a lot of things they either can't or won't produce, and that contrary to popular sentiment, the diet of their common man is not particularly healthy nor is it particularly enjoyable.
At least two of the things you describe as "economies of scale" (purchasing power and seller's power) have nothing to do with the production process. This leads me to the inescapable conclusion that you don't know what you're talking about.
"economy of scale"(n) - an efficiency increase whose magnitude is a function of an increase in production quantity

Since being able to buy and sell at better prices is an efficiency increase, and since it is a direct result of doing more production, I think perhaps it is you who, as usual, does not know what he is talking about.
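The definition quoted above can be sketched numerically. A minimal illustration (all figures invented; `fixed_cost` and `marginal_cost` are hypothetical parameters, not anything from the thread): a fixed setup cost spread over more units lowers the cost per unit, which is the textbook economy of scale.

```python
# An economy of scale in miniature: a fixed setup cost spread over
# more units lowers the cost per unit produced (numbers invented).
def unit_cost(quantity, fixed_cost=1000.0, marginal_cost=2.0):
    return marginal_cost + fixed_cost / quantity

print(unit_cost(100))     # 12.0
print(unit_cost(10_000))  # 2.1
```

Whether better buying and selling prices also count as an "efficiency increase" is exactly what the two posters dispute; the sketch only covers the uncontroversial production-side case.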

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
[ot] carb based diets (none / 0) (#98)
by Anonymous 242 on Fri Dec 01, 2000 at 12:59:57 PM EST

If you only produce grains and such to the point that your agriculture does not provide a reasonable diet, then yes, you can get more out of your land in terms of mass. ... the fact that we also grow other things is a reflection of the fact that you only need so much carbohydrate... Notice that Thailand also imports a lot of things they either can't or won't produce, and that contrary to popular sentiment, the diet of their common man is not particularly healthy nor is it particularly enjoyable.

The problem with grain-based diets (at least in the West and more and more often in the East) is that we bleach virtually all of the nutrients out of our grains. For example, wheat and rice have a tremendous amount of protein, but once you bleach the wheat or rice kernels, that protein gets cut in half at best. When you process out most of the nutrients in grains, it is not surprising that people become undernourished. The problem is not with carb-based diets, but with processed-carb diets. Complex carbohydrates (such as whole grains) are one of the best food types to base one's diet on.

And at least in my opinion, processing grains to the point of nutritional death also removes most of the flavor. This is why the twenty-five cent loaf of white bread at the local store tastes like paper.

[ Parent ]

You are right, (5.00 / 1) (#100)
by trhurler on Fri Dec 01, 2000 at 01:22:18 PM EST

although there are certainly things you need that you won't get from even whole grains. I tend to eat whole grains a lot, because, as you point out, they are tastier. I despise white bread, although some forms of white rolls are ok.

I've never understood why so many kids hate wheat breads; I can see some kids not liking them, because there's always someone who doesn't like any given thing, but the massive numbers lead me to believe that the problem is that kids equate "healthy" with "tastes bad." Of course, if so many parents didn't force lima beans and other similarly tasteless gooey crap down their kids' throats...:)

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
godchrist (3.57 / 7) (#105)
by streetlawyer on Fri Dec 01, 2000 at 02:32:05 PM EST

First, overconfidence in financial markets was directly attributable to government sponsored "insurance" and "regulatory" efforts.

OK, let's see your reference to the existence of federal deposit insurance prior to 1930. I hereby cease arguing with you on this point, as you don't know elementary facts and I don't want to run a remedial education class.

If you only produce grains and such to the point that your agriculture does not provide a reasonable diet, then yes, you can get more out of your land in terms of mass.

An admission, thank you.

Since being able to buy and sell at better prices is an efficiency increase

Of course it isn't -- efficiency refers to production, not exchange. I hereby cease, etc, etc.

Second, the government made the Depression orders of magnitude worse than it had to be by attempting to use "monetary policy" and taxation to abate it. Third, previous stock market crashes, regardless of their size, did not totally wreck the US economy.

I hereby resume, etc, etc, as this is quite fun. The US government did not attempt to use monetary policy, nor could it, as it was on a Gold Standard. When it came off the Gold Standard and began to use monetary policy, the recession was brought to an end. Previous stock market crashes (that of 1870, for example) did in fact cause very severe recessions.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

That isn't what I said... (3.33 / 3) (#109)
by trhurler on Fri Dec 01, 2000 at 03:27:13 PM EST

OK, let's see your reference to the existence of federal deposit insurance prior to 1930.
If I had said "federal deposit insurance" this claim would have a lot of validity. However, I did not. When/if you know enough about the history of US government intervention in the economy to know that federal deposit insurance was not the only possible referent of "government sponsored insurance," please come back to play. Meanwhile, kindly either quit bothering me or find a topic you know something about.
Of course it isn't -- efficiency refers to production, not exchange.
Why? Because some idiot you worship who wrote a book said so? Guess what? That doesn't matter. The concept of efficiency of a business as a whole certainly does include supply costs and sale prices, regardless of what term you choose to apply to it. In evaluating whether US farms or other farms produce the most food at the lowest prices, which was, getting down to brass tacks, what we were arguing about, it CERTAINLY matters.
The US government did not attempt to use monetary policy, nor could it, as it was on a Gold Standard.
Perhaps you don't understand gold standards. You can't print more money than you have gold, but you do get to do any or all of the following: reduce the money in circulation to less than the gold you have, alter the ratio of metal to money, borrow from various sources to cover metal deficiencies (indeed, this was a prominent argument, that borrowing metal from other governments or private institutions made the so-called gold standard a joke), and/or borrow money and then spend it on your pet projects and/or dump existing government cash reserves into the economy. Now, they didn't DO all of that, but they did play games with the amount of money in circulation, and you simply cannot deny this fact; most of those who favor government interventions proudly brag about it, actually.
the recession was brought to an end.
This is silliness; it is like Bill Clinton taking credit for the present economy. The government had less to do with this than did the simple passage of time and effort of people to better themselves.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
all that's wrong with libertarian economics . . . (2.00 / 2) (#120)
by streetlawyer on Mon Dec 04, 2000 at 09:44:46 AM EST

... in this one little post.

Because some idiot you worship who wrote a book said so? Guess what? That doesn't matter.

As we see, trhurler is living in the Humpty-Dumpty world, where "a word means just what I choose it to mean". When you're talking about production, that doesn't mean you're not also talking about exchange, in this world. Of course, this has some important drawbacks, such as the fact that the price system no longer works, but Austrians don't believe in macroeconomics, except when it suits them. Curiouser and curiouser ....

The concept of efficiency of a business as a whole

And now we're whirling down the spiral staircase. Here we're talking about a "business", there about an industry, over there about an economy as a whole. You can't seem to put your finger on the exact subject when you're talking to an Objectivist, because their whole view of economics depends on not being clear about these important distinctions.

certainly does include supply costs and sale prices, regardless of what term you choose to apply to it.

Of course, it doesn't. If you're measuring productive efficiency, you measure it in terms of units of output per unit of input. The separability of production and exchange was, of course, the first great insight of Adam Smith, not that one can expect the modern laissez-faire ditto-head to know anything about Smith beyond the words "invisible hand". Supply costs and sale prices are part of the profitability of a firm, not its efficiency, and in the real world, it is in fact very rare for the most efficient producer to be the most profitable firm.

This is all interesting, as it allows us to observe the three greatest weaknesses of libertarian economics in their natural habitat:

  • Fallacies of composition. What's good for a firm is good for the economy. The price level is fixed for the firm, and it's always possible for everyone to do the same thing.
  • Missing crucial distinctions. No difference between consumption and investment, or between producing something and selling it. No transactions costs of any kind. After all, Austrian economists don't believe in ever doing quantitative work, so they don't need to bother about trivial things like adding up constraints, or consistent modelling. Or any modelling at all, for that matter.
  • Relentless dismissal of all other schools. Anyone who's actually thought about the matter is per se disqualified from having an opinion. They're just "people who wrote books", and we all know how wrong books are. Attempting to do economics in any systematic way is not dealing with the libertarian's version of the "real world" (never checked against the actual real world). And appeals to anyone other than Ayn Rand are just "appeals to authority", and therefore wrong.

    truly pitiful. but amusing

    --
    Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
    [ Parent ]

I've had enough of you. (5.00 / 1) (#122)
by trhurler on Mon Dec 04, 2000 at 01:42:22 PM EST

From the first time I saw a post from you, you have been rude, condescending, and inconsistent. You attach labels to people you've never met, then relabel them in mutually exclusive ways. You argue semantics, then try to claim that your opponent is wrong because he cares about the meanings of the terms he uses. In short, you are such an ass that your best use would probably be for dogfood, presuming that animal would be willing to consume you.

This is the last of what I have to say, and it is only a response to those things which I feel are particularly obviously either inflammatory or outright factually incorrect:
"a word means just what I choose it to mean"
I have never met a thinker of any quality who did not insist that words are a means of communicating ideas, and that therefore, no definition of a word except that which is intended is appropriate to any interpretation of what he says, nor have I read one. Yes, there is some obligation, especially on demand, to explain the use of terms, but arguing that this or that is the One True Definition of some term is a direct admission of incompetence.

Austrians
I quote an Austrian, therefore I am one. Typical of your kind.
Objectivist
And now I'm a believer in a philosophy that advocates an economic theory similar to but incompatible with the Austrians. Amazing! Apparently, I can be whatever is needed in order for this third-rate anal sculpture to make fun of me, at his whim. I do wish I had his ability to cause reality to warp on command!
Or any modelling at all, for that matter.
In real sciences, models are expected to both fit known data and predict unknown data. In economics, you can't even make a model fit the known data, much less predict anything accurately. Here's a hint: an economy is more complex than a weather system. Models are useless, and will be for a long, long time unless we make some amazing sudden leaps in information processing capabilities.
appeals to anyone other than Ayn Rand
Have I done this? No. Have I uttered her name, or the word "Objectivist," or anything similar? No, but if you call yourself "streetlawyer," I guess facts don't have to matter. What a moron. When you get out of kindergarten and grow a forebrain, come back. Meanwhile, I simply will not respond to any more posts by you, for the same reason that I basically quit reading usenet about 3 years ago. Like 99.999% of posters there, you're a waste of amino acids.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Oh, I forgot number 4 (2.00 / 2) (#123)
by streetlawyer on Mon Dec 04, 2000 at 01:56:44 PM EST

"forecasting is impossible, no matter how often it is done". A commonplace among Austrianists etc.

In economics, you can't even make a model fit the known data, much less predict anything accurately

You have to studiously avoid all the existing empirical work which rather proves that you can fit models to data in order to maintain this, but when you're ignoring so much other empirical evidence, it's not much marginal burden. Of course, there's an element of "Humpty Dumpty" to this again, as the ideal economy beloved of laissez-faire types with its perfect competition and supply and demand curves is, of course ... a model. But it's not the wrong kind of model that nasty statists use, so it's not nice to call it a model.
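As an aside, the fit-versus-forecast distinction the two posters are circling can be shown with a toy model. The data below are invented and the exercise is purely illustrative: a flexible model can reproduce every known data point exactly and still forecast badly outside the data.

```python
# Toy illustration (invented data): a model can fit known data perfectly
# yet predict unknown data poorly.

def lagrange(points, x):
    """Evaluate the unique polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Known data": roughly the line y = 2x, with a little noise.
known = [(0, 0.1), (1, 2.2), (2, 3.9), (3, 6.1), (4, 7.8)]

# The degree-4 interpolant reproduces every known point (near-)exactly...
fit_error = max(abs(lagrange(known, x) - y) for x, y in known)

# ...but far from the data it diverges wildly from the ~2x trend.
prediction = lagrange(known, 10)
print(fit_error, prediction)
```

Here the interpolant's error on the known points is at floating-point level, while its forecast at x = 10 lands far below the trend value of about 20 -- fitting and forecasting really are different tests of a model.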

Alan Greenspan (interestingly enough, a good economist with form as a Randite) seems to be very much of the opinion that he can forecast the economy to a close enough approximation. And indeed, he's right.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

Without (none / 0) (#124)
by aphrael on Mon Dec 04, 2000 at 06:08:43 PM EST

getting into the substance of your debate with streetlawyer --- i'm a political scientist but political economy was not my specialty --- I do find it disturbing that you keep needing to say things like: When you get out of kindergarten and grow a forebrain, come back. How can comments like that possibly lead to a more rational, coherent discussion? Using emotion-based attacks on your sparring partner is just more likely to cause them to become emotional, causing the quality of discussion and thought to deteriorate further.

Since you claim to have left usenet because you were frustrated with it, i'm amazed that you missed this point; i know i saw it in operation often enough to be terrified of accidentally walking down that road.



[ Parent ]
Well, yes... (3.00 / 2) (#126)
by trhurler on Mon Dec 04, 2000 at 08:28:27 PM EST

but I wasn't the first to open up with the cheap shots and emotional attacks in that exchange. The number one rule is, don't start. The number two rule is, if the other guy starts, be better at it than he is.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Reverse thrusters (none / 0) (#127)
by aphrael on Mon Dec 04, 2000 at 08:30:22 PM EST

The number one rule is, don't start. The number two rule is, if the other guy starts, be better at it than he is.

I disagree; if the other guy starts, disengage and seek a better conversation somewhere else.



[ Parent ]
Oh, yeah, the "pretend ad hominem" trick (2.50 / 2) (#125)
by Estanislao Martínez on Mon Dec 04, 2000 at 06:29:51 PM EST

I have never met a thinker of any quality who did not insist that words are a means of communicating ideas, and that therefore, no definition of a word except that which is intended is appropriate to any interpretation of what he says, nor have I read one. Yes, there is some obligation, especially on demand, to explain the use of terms, but arguing that this or that is the One True Definition of some term is a direct admission of incompetence.

Strawman. jsm clearly said that he was insisting on using the words as used in the academic discipline which is being discussed. This is not a case of claiming a word has "One True Definition", but that it has a standard, conventional definition, and asking that this be respected -- and this is a fundamental convention of academic discourse.

Your entire response wholly ignores his substantive arguments. Do you address his claims that you are collapsing the distinction between production and exchange, between business, industry and economy as a whole, and between efficiency and profitability? No. You just use the rhetorical trick of "pretend ad hominem".

--em
[ Parent ]

Interesting... (3.00 / 2) (#128)
by trhurler on Mon Dec 04, 2000 at 08:36:51 PM EST

So what you're claiming, against all evidence I presently possess (my academic experience is mostly in computer science and philosophy), is that academics don't define terms in order to allow for the expression of their ideas, but rather engage in their debates according to formally recognized terms in all cases.

In the case of philosophy, this is laughable. EVERY philosopher redefines terms liberally unless he or she is purely derivative of someone who has already done so.

In the case of computer science, it is even more laughable. If a term can be usefully redefined, it IS done, period, end of discussion.

In the case of economics, I honestly don't know how often this is done; what I do know is that refusal to pay attention to what is being said because of the definitions of terminology used is properly demonstrated by refusal to respond, rather than by ridicule. If a professor at a university published what "streetlawyer" has written recently (that is, if he COULD publish it), the odds are that for all intents and purposes, his career would be over. Nobody would ever again take anything he said seriously, and probably most people wouldn't even read it. So much for trying to defend him using the academic tradition.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
Strawman arguments (4.00 / 3) (#129)
by Estanislao Martínez on Mon Dec 04, 2000 at 09:33:55 PM EST

So what you're claiming, against all evidence I presently possess (my academic experience is mostly in computer science and philosophy), is that academics don't define terms in order to allow for the expression of their ideas, but rather engage in their debates according to formally recognized terms in all cases.

Strawman. Nowhere did I say anything of the sort.

Academics who specialize in a field are expected to know the general ideas and concepts used in their field, to know the term the field uses to talk about each of those, and to use that term when they refer to that concept. You define *new* terms when you are presenting a concept which doesn't have a standard term.

None of these cases are what you're presently accused of-- you're accused of using preexisting technical terms with conventional meanings within the field you are discussing with a different meaning. You are using words to express things they don't conventionally express. Streetlawyer hit the nail on the head by calling it "Humpty-Dumptyism".

In the case of philosophy, this is laughable. EVERY philosopher redefines terms liberally unless he or she is purely derivative of someone who has already done so.

But what particular terms are you talking about, and can they *really* be redefined arbitrarily, with the acceptance of the practitioners of the field? I mean, there *is* a convention that "Ethics" refers to a certain area of philosophy, with some problems recognized as "classical" problems of Ethics. People may disagree about the precise definition, but there *is* a convention about what things are clearly Ethics, which aren't, and which are controversial.

Let's put it another way. If you have two philosophers with different definitions of "Metaphysics", and get them to talk about the topic, they will, for the most part, agree they are talking about the same thing, though they may disagree about the specifics of this thing.

In the case of computer science, it is even more laughable. If a term can be usefully redefined, it IS done, period, end of discussion.

The sense in which you could "redefine" logico-mathematical notions such as "algorithm" is very special-- you can give a "different" definition *of the same thing*. If you "redefine" such a notion, there is a rigorous standard you must meet-- it must be provably equivalent to the original. The underlying notion, that of "effective process" or "algorithm", remains the same if you define it as Turing machines or recursive functions.
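That rigorous standard can be illustrated with a hypothetical example (factorials, not anything from the thread): two different-looking definitions count as definitions *of the same thing* only if they provably agree.

```python
# Two definitions of the factorial. A "redefinition" in the sense above
# is legitimate only if it is provably equivalent to the original.
def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Short of a proof, we can at least check agreement on a sample of inputs.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(20))
```

This mirrors the Turing-machines-versus-recursive-functions point: the surface definitions differ, but the underlying notion is the same precisely because the definitions are provably extensionally equal.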

In the case of economics, I honestly don't know how often this is done;

And why should we take you, a Philosophy/CS guy, as an authority on what is the conventional meaning of terms in economics? And especially, since you use the terms in such a way that fails to make distinctions that have standardly been made in the field. Do you believe there is no difference between efficiency and profitability, for instance?

If a professor at a university published what "streetlawyer" has written recently (that is, if he COULD publish it), the odds are that for all intents and purposes, his career would be over. Nobody would ever again take anything he said seriously, and probably most people wouldn't even read it. So much for trying to defend him using the academic tradition.

If he is using the terms in question in the standard way, and deviates from this vocabulary when appropriate, everybody who understands the vocabulary of the field would follow his argument perfectly.

Capitalism is not "laissez-faire". You can redefine "capitalism" to be such, but then you're using the term contrary to academic standards, and clobbering distinctions that the vocabulary makes. Same for "efficiency"-- it is a term about possible and actual outputs given the factors available, not about profits. Hell, all the economics I know is from looking through an introductory economics book, and even I know that much.

--em
[ Parent ]

Talk about straw men... (3.00 / 2) (#133)
by trhurler on Tue Dec 05, 2000 at 01:12:00 PM EST

you're accused of using preexisting technical terms with conventional meanings within the field you are discussing with a different meaning.
Your entire post missed my entire point: "using preexisting technical terms with conventional meanings within the field you are discussing with a different meaning" is done ALL THE TIME by experts in those fields, and as long as they provide explanations of what they're saying, while there is often some whining, everyone goes on about their business. Your assertion that it does not happen is flat out wrong. I do not claim to be an expert on conventional economics; I have certainly read some of it, but frankly, it reminds me of reading old philosophy: I do it because it provides historical context, rather than because it is correct. I -will- claim to be fairly expert in the specific viewpoints I advocate; I've put more time and effort into them than most people with masters' degrees.
And why should we take you, a Philosophy/CS guy, as an authority on what is the conventional meaning of terms in economics?
I never once said you should. Perhaps if you read what I wrote, instead of making up something to respond to, you would have more success in actually being relevant.
Do you believe there is no difference between efficiency and profitability, for instance?
Can you say, "begs the question?" This depends ENTIRELY on how you define the terms; yes, if you define them conventionally, then there is obviously a difference, but that is precisely what you're accusing me, correctly, of NOT doing. As it happens, I think there -is- a subtle distinction, but I also think conventional economic theory fits reality about the same way that a mesh of 64 polygons fits the topography of a square mile of the Rockies: approximately at best, and at worst, it doesn't even begin to look right. Part of the reason is that the distinction between efficiency and profitability, or between production and distribution, and so on, while it exists in that they are different, is massively overblown; the fact of not being the same thing does not imply having fundamentally different effects nor does it imply having fundamentally different causes nor does it imply that one does not cause the other or vice versa, and traditional economic theory tends to assume that if two things are different, they both exist and act in isolation, which simply is not true.

--
'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
ridiculous (2.00 / 2) (#134)
by streetlawyer on Tue Dec 05, 2000 at 01:59:27 PM EST

Part of the reason is that the distinction between efficiency and profitability, or between production and distribution, and so on, while it exists in that they are different, is massively overblown; the fact of not being the same thing does not imply having fundamentally different effects nor does it imply having fundamentally different causes

Here are the differences:

  • The profitability of an enterprise depends on production possibilities in the rest of the economy; efficiency does not.
  • The profitability of an industry depends on the wage rate; its efficiency does not.
  • The profitability of a company depends on the tax rate and on its financing structure; efficiency does not
  • Profitability, not efficiency, determines trade patterns.
  • The efficiency of a production unit can be judged in isolation; profitability implies a price system.
And most importantly, given the context of your actual argument, the fact that a production unit has access to a cheaper set of inputs means that it is more profitable; it does not mean that it is more efficient, or that, if given access to the same inputs, another production unit could produce more output. Purchasing economies are scale economies at the level of the firm, but not necessarily at the level of the industry. For example, if Microsoft were to manage to buy up all the software engineers and monopsonise their services, it would be more profitable, but not more efficient.

traditional economic theory tends to assume that if two things are different, they both exist and act in isolation, which simply is not true.

It does not. Traditional economic theory goes to a lot of trouble proving separation theorems which demonstrate that, under assumptions at least as reasonable as those made by the Von Mises Institute, certain problems are separable, such as production and exchange, operation and financing, and investment and consumption. These theorems are rare gems in the field; usually, economics assumes that all things affect all others, which is why the phrase "general equilibrium" is so common in the field.
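The bulleted distinction above can be made concrete with a toy calculation (all quantities and prices invented for illustration): a producer can be strictly more efficient yet strictly less profitable once input prices differ.

```python
# Efficiency: units of output per unit of input, judged in isolation.
# Profitability: revenue minus costs, which also depends on input prices.
def efficiency(output_units, input_units):
    return output_units / input_units

def profit(output_units, sale_price, input_units, input_price):
    return output_units * sale_price - input_units * input_price

# Producer A turns inputs into outputs better than Producer B...
a_eff = efficiency(100, 50)   # 2.0 units out per unit in
b_eff = efficiency(100, 80)   # 1.25
# ...but B buys its inputs far more cheaply (monopsony, subsidy, etc.).
a_profit = profit(100, 3.0, 50, 4.0)  # 300 - 200 = 100.0
b_profit = profit(100, 3.0, 80, 1.0)  # 300 - 80  = 220.0

assert a_eff > b_eff and b_profit > a_profit
```

This is only the narrow textbook reading of the two terms; whether that separation matters in practice is precisely what the thread is arguing about.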

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

redefining words (4.00 / 2) (#130)
by jlb on Mon Dec 04, 2000 at 09:49:38 PM EST

In some cases, like when discussing new ideas, it is necessary to change or twist the functional definition of the word for purposes of the discussion.

However, none of what you're discussing is new or different enough that there aren't appropriate words for the discussion.

I think he made a fair point, and you're trying to make it seem like he's being pedantic. I disagree.

Adequacy.org.
[ Parent ]

just to clear this up (2.50 / 4) (#131)
by streetlawyer on Tue Dec 05, 2000 at 02:38:16 AM EST

The difference between consumption and investment is not a difference in terminology.

The difference between a single business, an industry and the economy as a whole is not a difference of terminology.

Whatever terminology you use, these things are actually distinct. Philosophers and computer scientists have the luxury either of never having to check things against the real world, or of being able to choose the manner in which they relate to the real world. Economists do not have this luxury, unless they are Austrians and pretend that they do.

--
Just because things have been nonergodic so far, doesn't mean that they'll be nonergodic forever
[ Parent ]

What about the planet? (2.00 / 6) (#53)
by maketo on Thu Nov 30, 2000 at 02:07:35 PM EST

It sounds to me like the Earth will fall apart long before that from overpopulation and all the sh*t we are pouring onto it... So better move through space and find some more "M-class" planets, cause this one sure is on its way to burst.
agents, bugs, nanites....see the connection?
Enough, already... (3.90 / 10) (#55)
by trhurler on Thu Nov 30, 2000 at 03:07:54 PM EST

A hundred years ago, we were all going to have our own aircraft by now, disease was going to be a thing of the past, physics was a closed subject with nothing new to discover, and the idea of a computer had not occurred to anyone but a freakish mathematician who died before he could build it.

Now, some of what people predicted did come true, but most did not, and many things happened that nobody ever predicted. This is the way predictions always have worked, and always will. Here's my prediction:

In 2050, there will be lots of cool new toys and the economy will be doing better than ever. People will be looking back and laughing at the "nearsighted fools" who failed to predict 2050 so completely - but the same people will be nearsightedly trying to predict 2100, and getting it all wrong. People such as myself will be quietly sitting in the back of the room, trying to keep the laughter low enough that nobody notices.

--
'God dammit, your posts make me hard.' --LilDebbie

Future of machines. (3.00 / 1) (#61)
by Emma_Peel on Thu Nov 30, 2000 at 04:32:15 PM EST

The only sad thing about these assumptions is that I will certainly die before I can get my grubby little hands on that kind of technology. Then again I digress as usual, wishing for technology similar to that of Ghost in the Shell. Why is there not more of a "panic" on over-population these days? Obviously, if we do not destroy ourselves with that, why bother. What happened to the thought inspired by the 60's, "Take care of Mother Earth" -- be careful or our population will double by the 90's. "Surprise," we're here. Once again breeding quantity over quality. :)
------------------------------------------------ When you get the message saying "Are you sure?", click on that Yes button as fast as you can. Hell, if you weren't sure, you wouldn't Be doing it, would you?
Problem is ... we are all dying... (none / 0) (#113)
by cryon on Sat Dec 02, 2000 at 02:50:49 AM EST

That's the catch with this biological stuff--you die, at least all biological beings so far have died; of course that doesn't mean the day won't come when they stop dying.
As far as what happened to the ZPG movement? Corporate money solidified its control over the popular media: it's kinda like the rancher and his sheep -- the rancher always wants more sheep, not less. Of course that's what happened to West Texas, dontcha know -- too many sheep. Corporation == rancher; us == sheep. Immigrant labor == more sheep. ZPG movement of the seventies == bad for the rancher. Rancher uses media to control sheep. End of story.
Cry cynicism if you must, but that's the song that money sings, and it's a very old tune....
HTGS75OBEY21IRTYG54564ACCEPT64AUTHORITY41V KKJWQKHD23CONSUME78GJHGYTMNQYRTY74SLEEP38H TYTR32CONFORM12GNIYIPWG64VOTER4APATHY42JLQ TYFGB64MONEY3IS4YOUR7GOD62MGTSB21CONFORM34 SDF53MARRY6AND2REPRODUCE534TYWHJZKJ34OBEY6

[ Parent ]
Limitation of the human mind (1.50 / 2) (#62)
by GreenCrackBaby on Thu Nov 30, 2000 at 04:34:39 PM EST

Futurists are a funny bunch. They're a bit like psychics, in that they are blowing air out their ass and calling it a prediction, but unlike psychics, their predictions can almost always be proven incorrect once the time they predicted for actually arrives.

The problem with predicting the future is that you only have the information of today to base it on. Ask someone in 1901 what the future would be like and you'd get a tale of steam-powered automatons. They had no notion of the computer. 20 years from now, who is to say what will be invented or discovered by humans that may change the course of humanity? If all you base a prediction on is what is currently available (along with a hash of star trek episodes), then you'll end up with a nice plot for a star trek episode.

Question... (3.75 / 4) (#64)
by eskimo on Thu Nov 30, 2000 at 05:04:49 PM EST

Would a tactile representation have an appendix?

Just a couple things about predictions while I am here. First of all, Asimov foresaw robots taking over the galaxy, but he still had people using paper on space ships. Now I ain't Asimov, but there was a leap there that just didn't get made. And they even had means to store information electronically then.

Also, Arthur C. Clarke talked about Virtual Reality in Childhood's End, and he talked about how there was initially much hope for it, but that as an art form, it proved to be less than remarkable. Of course Clarke probably didn't have an inkling of nanotechnology at the time.

So all I want to say, really, is that progress is not linear. It seems linear because even the most enlightened among us aren't that enlightened. There are all kinds of influences on technology that people never foresee. We are ascending, I agree, but technology is usually advanced not from the shoulders of giants, like science, but spontaneously. My current outline has just three Roman numerals: I. Gunpowder, II. Mass Production, and III. Transistor.

Predicting what 'IV' will be is the catch. Assuming it is nanotechnology is just that, an assumption. In 1945, everybody was sure that the 'III' was Nuclear Power.

I am my own home. - Banana Yoshimoto

Steam Engine (none / 0) (#67)
by eskimo on Thu Nov 30, 2000 at 06:04:20 PM EST

I guess the steam engine should go between gun powder and mass production.

I am my own home. - Banana Yoshimoto
[ Parent ]

There's a reason in Asimov's stories... (4.00 / 1) (#87)
by porkchop_d_clown on Fri Dec 01, 2000 at 09:53:04 AM EST

Asimov didn't exclude computing machines from all his stories. But his Foundation stories take place in a particular context - the context being that humans had, first, rebelled against having robots among them and, later, were being steered into being as self-sufficient (self-actualized?) as possible. Hence, people did not use computers because those were mental crutches. By the time of the Foundation itself, few people realized that such devices were even possible - or desirable.



People who think "clown" is an insult have never met any.
[ Parent ]
Okay, but...Eskimo Loses Focus (none / 0) (#102)
by eskimo on Fri Dec 01, 2000 at 01:59:27 PM EST

Two things: first of all, doesn't that sort of go back to my initial comment, that there are all kinds of influences on technology, both cultural/societal, and scientific?

Second, I was not bagging on Asimov. I was just talking about the impracticality of selling nuclear trinkets, but having people sign real contracts and keeping real books on a giant spaceship. Imagine Darth Vader having to initial a bunch of documents every day on the Death Star. Imagine him fretting over giant bundles of blueprints. Imagine R2 and C3P0 lugging blueprints all over...

I wandered a little. Sorry. Let's just stick to the first point. But it is funny to imagine Darth Vader doing paperwork. And it just dawned on me, there were no women on the Death Star. Storm Trooper doesn't seem like such a good deal now.

I am my own home. - Banana Yoshimoto
[ Parent ]

Only if they have virtual bongs :-P (none / 0) (#74)
by pistols on Thu Nov 30, 2000 at 10:53:12 PM EST

What about 'virtual' happiness/ecstasy? If I'm going to live in a 'net world, there better be something in it for me. Yeah, immortality and all the interesting things you could do as a 'virtual being' would be nice, but if I'm not "Happy", is it really worth it? Would easily attainable happiness drive me insane (a la 3001)?

Another thought: how would individuality work in a society where beings are manifested as the processes of a vast machine? Would people be able to become more 'powerful' by occupying more machine time? Could you 'kill' someone?

Actually I'm making the assumption that people would not . How would humans be transferred into the 'net world? Physical links into the brain? Emulation of the brain inside the computer? If so, would this occur on the neural level? The molecular level? Something below?


Doug


typo (none / 0) (#75)
by pistols on Thu Nov 30, 2000 at 10:55:04 PM EST

> Actually I'm making the assumption that people would not

possess physical bodies.

[ Parent ]
Why The Singularity is bogus... (3.50 / 2) (#78)
by Sunir on Fri Dec 01, 2000 at 02:42:17 AM EST

I discussed this not too long ago on MeatballWiki. I'll repost my essay here.

Many so-called pundits have suggested that humanity is moving towards "TheSingularity"--the convergence of everything; man and technology, technology and art, art and man. Taking a cue from MooresLaw, they see the graph of technology's growth stretching toward an asymptote, which could only mean that technology will overwhelm everything else. Humankind will be merely a small part in technology's great thrust towards the future.

JaronLanier defines what TheSingularity is aptly in this quote from [1],

[B]iology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what's going on in the familiar neighborhood of Earth at some moment when a new "criticality" is achieved- maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.

And really, Lanier (and other Digerati) thinks this is a neat thing. A good thing. An amazing, transcendent thing.

I disagree.

If anything has been shown in the past five years, it's that the limitations of humans have restrained technology time and time again. For instance, consider that computers want to get smaller, but they cannot because we can't use displays one square centimeter large. Consider the amount of effort put into making computers even easier to use--and consequently much less powerful.

But this is not a bad thing. For, what purpose does technology have if not to solve our problems? It may be somewhat romantic to suggest science for science's sake, but technology for technology's sake never works out. I mean, who cares what neat thing you can do if it isn't doing anything for me? Whizbang wonderful, sure, but I'm already onto the next thing.

Really, I believe in quite an opposite form of TheSingularity. Technology won't subsume humans. Humans will continue to subsume technology. PervasiveComputing will make computers part of the background, but not of our body. Indeed, technology will dissolve the substrate of our society like rock dissolves in water. We will be enhanced by the material only in the sense that we aren't detracted by it as much.

Moreover, the network may be the computer, but society is the network. We can only enhance and strengthen that truism as we move forward. BarnRaising is a fact of life; making that efficient drives our world ever faster. It's not as if you will get more free time; always less. Imagine a world where you can never get away from the office. That's what interconnectedness means... failure to disconnect.

A singularity of people.

"Look! You're free! Go, and be free!" and everyone hated it for that. --r

Actually... (3.00 / 2) (#79)
by -ryan on Fri Dec 01, 2000 at 04:13:23 AM EST

You said:
JaronLanier defines what TheSingularity is aptly in this quote from [1],

[B]iology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what's going on in the familiar neighborhood of Earth at some moment when a new "criticality" is achieved- maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.

And really, Lanier (and other Digerati) thinks this is a neat thing. A good thing. An amazing, transcendent thing.

If you read Jaron's half of a manifesto you will see that he believes these concepts to be just the opposite, absurd. He refers to this convergence as "Cybernetic Totalism".

The manifesto is really a good read; I believe it was even republished in Wired. You should read it.

[ Parent ]

You're right, I blew it. (2.00 / 1) (#80)
by Sunir on Fri Dec 01, 2000 at 05:12:55 AM EST

> If you read Jaron's half of a manifesto you will see that he thinks
> these concepts are just the opposite, absurd.

You're right, I blew it. When I read it a long time ago (relatively), I must have stored that fact away reversed. So, when I was searching for a good description of The Singularity and his manifesto came up, I immediately "remembered" he was on the wrong side of my argument.

So, my deepest apologies to Lanier for misrepresenting him. And my greatest thanks to you for pointing out that error (which is really bad).

But I continue my attack on The Singularity itself...

"Look! You're free! Go, and be free!" and everyone hated it for that. --r
[ Parent ]

Lanier is anti- (none / 0) (#112)
by cryon on Sat Dec 02, 2000 at 02:37:44 AM EST

Lanier actually is a Neoluddite and against change. What makes it easy for him and others to pick apart the "singularity" hypothesis is the unrealistically optimistic timelines of the most ardent of the transhumanist/extropian/etc faction. But all this probably will come to pass in one form or another, if not in N days, then in N+1 days, and if not in N+1 days, then in N+2 days, ad infinitum....
HTGS75OBEY21IRTYG54564ACCEPT64AUTHORITY41V KKJWQKHD23CONSUME78GJHGYTMNQYRTY74SLEEP38H TYTR32CONFORM12GNIYIPWG64VOTER4APATHY42JLQ TYFGB64MONEY3IS4YOUR7GOD62MGTSB21CONFORM34 SDF53MARRY6AND2REPRODUCE534TYWHJZKJ34OBEY6

[ Parent ]
Holodecks and Replicators. (3.00 / 1) (#81)
by erotus on Fri Dec 01, 2000 at 05:19:02 AM EST

"On to the next part of the timeline: 2050. Star Trek replicators and The Jetson's auto-kitchens become reality. And I'm sure this can be extended to all materials, not just food. Hungry? Have a pizza 'faxed' over. Need a new pair of pants? Get a custom fit pair instantly without leaving your house. Want to play tennis? Just order up some rackets and have your nano-swarm form up into an opponent."

Imagine living in a Star Trek type world where people own replicators in their homes. Imagine using a tricorder to scan common ordinary things so you could replicate them. You could replicate Coca-Cola, quarter pounders with cheese, computer hardware, or even clothing - what then? Would tricorders then fall under the DMCA? Will McDonald's sue people who put Happy Meal images on the internet? Imagine that kind of warez page!
click here to download: happymeal.img
click here to download: coke_classic.img

How will tennis rackets be copy-protected when things can be replicated ad infinitum, much like mp3s today? After all, replication will make everything software, even if the end result is a tangible physical object. Sure, I can pay to have Dillard's beam a bottle of Eternity cologne to my replicator, but why do that when I can tweak my replicator to 'copy' my friend's bottle? The more things change, the more they stay the same, hence hackers will still be hackers. If you think SDMI and the DMCA are draconian, just wait for this type of future.

What about holodecks (for those Star Trek types)? There could be many positive aspects. I'll never have to fly out of state to go skiing or leave my house to go to a gym. I could work out in my own personal gym. Like the author said, I could form a tennis opponent with ease. What about the other implications? I could create the perfect mate (or mates) and spend my days sexing it up in my holodeck. Hell, I could scan that cute chick next door with my tricorder and have my way with her simulated double. What about paedophiles? They could program their decks to do all sorts of unthinkable things with simulated children. (Oh God!) The gaming industry could sell Unreal Tournament as a holodeck simulation where you could virtually kill opponents in a very realistic environment. The holodeck would be a great tool indeed, and since it is just a tool, it could be used for good or bad. I'm sure many of you could think of more examples.

There are people today who are "addicted" to the net. There are people who play too much Counter-Strike. What if holodecks become a reality? There will be those who will lose themselves in their holodeck fantasies and will lose touch with reality. The future technologies will solve many problems; however, these technologies will create many more problems as well. I don't believe there will ever be a utopia where "Machines do labor while humans are free to enjoy all of life."


Related article in Reason Magazine (4.00 / 1) (#82)
by Ray Dassen on Fri Dec 01, 2000 at 08:29:10 AM EST

Quoting an Extrodot item: "Reason Magazine is featuring an excellent article on Ray Kurzweil's keynote speech at the Foresight Institute's 8th Molecular Nanotechnology Conference in early Novemeber." [sic]

Wrong, wrong, wrong... (3.50 / 2) (#91)
by Sheepdot on Fri Dec 01, 2000 at 11:49:45 AM EST

In 2020 (only twenty years from now), he believes that a $1000 computer will be equivalent to the human brain. The human brain has the equivalent of 400,000,000,000,000 bytes of information available, and can be likened more to RAM than a hard drive.

That is 400 terabytes of memory that could be reached if we continue to work at the same rate on information technology. What futurists (unlike myself) always fail to predict is that limitations and setbacks abound.
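For what it's worth, the "same rate" extrapolation can be checked on the back of an envelope. This is only a sketch: the 400-terabyte brain figure comes from the comment above, but the 256 MB starting point for a $1000 machine in 2000 and the 18-month doubling period are my own assumptions, not anything claimed here.

```python
import math

BRAIN_BYTES = 400e12       # the comment's estimate of brain capacity: 400 TB
START_BYTES = 256 * 2**20  # assumed $1000 machine in 2000: 256 MB of RAM
DOUBLING_YEARS = 1.5       # assumed Moore's-law doubling period

# Number of capacity doublings needed, then convert to calendar years.
doublings = math.log2(BRAIN_BYTES / START_BYTES)
years = doublings * DOUBLING_YEARS
print(round(2000 + years))  # roughly 2031 under these assumptions
```

Under those assumptions the crossover lands around 2030, not 2020, which is at least consistent with the suspicion that the published timeline runs a decade or more fast.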

The world is on the verge of serious, large-scale Internet regulations, and believe me, the world is ripe and ready for them. They will settle down upon us and no one will bitch; it'll be debated, but the utility of having the regulations will always prevail.

This will dampen the ability of information technology to progress at the same rate it has been.

So look to the next step: nanotechnology. The nanotech age will soon (20-30 years) be replacing the information age, and computers will eventually reach the computing power of a human brain, but that will be around 40 to 50 years from now.

And of course, this is assuming that the United States (which I am a citizen of) doesn't collapse from internal struggles around 12 years from now for various reasons, amongst those: Gun rights, Civil Unions, and nationalization of corporations.

Think I'm crazy? Maybe I am.. Send me an email and I'll explain it to you. It has to do with a fracturing of political views and expressions happening in the country and the move towards an authoritarian state. Good stuff if you have the patience.

With the US gone or humiliated by its own internal problems, China will reacquire Hong Kong and Taiwan, thus cutting off the major suppliers of RAM in computers. This will be a serious blow to information technology, and a serious threat to the country since it will not focus on outside issues.

I have no timelines for my events, but they would be much different from the one listed. I would expect a serious drop in Information Technology around 10 to 15 years from now. It will pick up again, but there is always a setback of some sort.


Could Be... (none / 0) (#116)
by Triton on Sat Dec 02, 2000 at 03:43:07 AM EST

I find the timeline to be a distinct possibility. I don't see computers being overshadowed by some new revolutionary technology. I can only see computing technology advancing into new fields, such as VR and nanotechnology, but not AI. A computer is simply a logic machine; it has no imagination or uniqueness. We have had machines that rely on logic for as long as human beings have existed, as far back as the lever: push down on one side, the other side lifts up. What is so different about a computer, except that it can do more varied things? At this current point in technology, the use of the majority of computers can be abstracted to a machine that controls a matrix of multi-colored lights on the back of a flat piece of glass. All we have done is condense the function of machines, and now we call it a computer. Most likely we will continue this trend and simply condense more and more functions into this machine we call a computer.

there must be a point.. (none / 0) (#117)
by Justinfinity on Sun Dec 03, 2000 at 09:00:23 PM EST

where the huge number of logic gates combined with the fact that nothing in the universe will ever be perfect produces what we consider "thinking". with 50 septillion transistors, we really don't know what will happen. could have just a really powerful machine. could have some of those little logic units doing their own thing, creating, thinking, yet still working somewhat towards the goal of the entire system. i guess we'll just have to wait and see :-)

-justin
[ Parent ]
The future ain't what it used to be (none / 0) (#132)
by PenguinWrangler on Tue Dec 05, 2000 at 08:17:29 AM EST

The one thing we can reasonably predict about the future is that it won't be how we're expecting it.

2001 is less than four weeks away, and when I was a lad I'd read learned articles which told me I'd be driving a nuclear-powered car and commuting to the moon by now... In the late 60s, Pan Am was selling tickets on the moon shuttle. Those tickets are now only of use as collectors' items.

Dig up any old articles from the sixties predicting Life In The Year 2000. Have a good laugh at how hopelessly, horribly wrong they all are.


"Information wants to be paid"
Foolish predictions (none / 0) (#135)
by JonesBoy on Thu Dec 07, 2000 at 10:02:47 AM EST

HAHAHAHA... The author of those predictions must like eating his words! C'mon, really. What has changed since, say, 1960? We are still driving cars burning gasoline, using paper, working. Heck, the fastest (reported) airplane in the world is one designed with a slide rule, not a computer! How have computers changed the world, really? I think they haven't. Sure, things are a little more glitzy, but that's about it. Pull Joe (Jane) Schmoe out of 1930 and put him in today's world. Betcha (s)he will be adjusted in a week. I doubt the 2030 predictions will be even partially true in 250 years.

Let's look at the year 2020 by that timeline. We will have computers with brains the size of a small planet in everything from jewelry to clothing. What will my wash bag do all day? I think it will watch TV. I guess most people will be making TV programming for their clothing and appliances in the future. These predictions are pointless, foolish, and only good for shock value.

2030 - $1000 for the power of 1000 human brains?? Why? Do two hammers put a nail in faster than one? If we have computing power surpassing our own brains, and it can interact with all 5 senses, what's the point of needing all this other stuff anyway? Nano-enhanced food my BUTT! Just run the fudge-covered Big Mac program while eating porridge. Why waste the time and energy farming/creating food, flavor-enhancing nanobots, etc. when you have what sounds like perfect VR technology? I think people are missing the point about VR. You want to use VR to make things that were too complex/abstract in RR (real reality) to make/experience.

What about overpopulation? Will that be controlled by an overpowering government? Won't people fight for their freedom? Wild animals hate to be caged, and yes, we are wild animals. Look at the current 'net. All this free info and ideas, but there are still malicious hackers. I personally figure the near future to be more like it was in the movie "Demolition Man": basic life as usual, with no privacy, integration of business and government, major class separations, and reasonable technological advances (voice recognition, a self-driving option on cars).
Speeding never killed anyone. Stopping did.
Humans and machines indistinguishable in 100 years? | 136 comments (126 topical, 10 editorial, 0 hidden)