Kuro5hin.org: technology and culture, from the trenches
Electric Souls

By ogre in Fiction
Mon Nov 18, 2002 at 08:23:49 AM EST
Tags: Culture (all tags)

It seems reasonable to guess that eventually

1. People will be able to connect to computers through direct neural connections.

2. These neural connections will be used to present a detailed virtual reality as good as (or better than) the real thing.

3. This will become more and more popular over time, becoming the major form of computer interaction.

In the following story I explore a potential consequence of this trend (and just because the story shows open source software killing babies, this doesn't mean the story was sponsored by Microsoft).


Electric Souls

If you are going to blame anyone for the end of the human race, it is probably fairer to blame Sean Slamore than Paran Shoke. Shoke, after all, was heavily influenced by the world view that was popularized by Slamore, who in his youthful arrogance held the "religious" views of his elders in contempt, and simply denied the existence ... but perhaps I should go back to put this into historical perspective.

The year 0, Universal Space Calendar, was the year that humans first visited Pluto, the last planet in the Solar System to be visited by humanity. People had moved into space in large numbers for the plentiful solar energy and raw resources. By then there were many small community space vessels that did nothing but roam around mining asteroids and the occasional comet.

It was already common in those days for astronauts on long voyages to travel in virtual reality couches. Their brains were connected by direct neural interfaces to a virtual reality while their bodily functions were handled by machinery, and tubes fed water and food directly to their stomachs. Many of them had jobs they could take with them and do entirely in virtual reality when the ship did not need tending. And when the ship needed tending they would not wake up bodily to do it, but rather animate a mech through their VR connections, and do the tasks remotely. With this strategy, much less living space was required and the astronauts were less likely to suffer from claustrophobia. Their virtual reality could be as big and open as they desired.

In the year 0 Pluto mission, each of the astronauts left his or her reality couch exactly one time. This was to put on a space suit and walk outside on Pluto, just to be able to say "I have walked on Pluto". In the year 0, reality was still that important. A century later when Sean Slamore was born, it would not be.

By the year -20, virtual reality was already universally available through detachable connections. VR couches were too expensive for most people but anyone with a lounge chair, a head jack and a megaband connection could go virtual. It was by far the most popular form of entertainment and was used in most occupations as well. Working on-line from home was no longer limited to those who worked with data; factory workers, security guards, cashiers, managers, janitors, and physicians would "go to work" by plugging in at home and animating mechs and sensors at their place of business. It was common practice to have business meetings in VR, where you could instantly transport yourself to any virtual office in your local node. If you were willing to accept the costs and time delays, you could quickly transport yourself to any virtual office in the solar system.

By the year -10, it was estimated that the average person spent 14 hours per day in a virtual universe, leaving the virtual world only to eat, sleep, and take care of bodily functions. The twenty percent of the human population that lived in space stations or on hostile planets were special beneficiaries of VR; it turned their small spartan habitats into huge wonderful worlds. Very few people worked in the real world, primarily those whose professions required a very personal touch such as doctors, physical therapists, and prostitutes, or whose work required a delicacy beyond that allowed by mechs, or those employed by socially conservative professions such as the courts.

In 03, the last of the courts began moving entirely on-line, and it was in that decade that advances in sensory input finally made on-line sex better than the real thing. Companies began producing the first low-cost VR couches with automatic body care for domestic use, and the couches started gaining popularity in the smaller space habitats. Many doctors began doing their procedures by mech.

By the year 20, there was hardly anywhere in the solar system where a defendant or litigant would ever see a judge in person or a patient would ever see a doctor. Courting a lover in person was rare, since lovers were so much more attractive in the virtual world, and virtual sex was better and safer. In fact it was generally said that people who avoided meeting other people in the real world were healthier, since they did not risk communicable diseases.

By the year 35 space dwellers did almost everything in VR except sleeping, eating, personal body care, and child care. Those who did anything else off-line were considered eccentric. Meeting in person was considered dangerous, and suggesting such a thing to an on-line contact was considered rude. Fashionable people had VR couches so they could attend virtual dinner parties while being fed intravenously. Those who did not have VR couches pretended they did and went to the dinner parties anyway. Earth's population was ten to fifteen years behind in this trend. In the years following, more and more people could afford VR couches and never had to leave the virtual world at all.

In the decade of the forties, virtual personalities, or VPs, reached such a level of sophistication that they began to replace social interaction in some ways. VPs quickly gained popularity as consorts for men, who found it convenient to be able to turn them off on demand. Women lagged perhaps a decade behind men in this trend, at least in part because they demanded greater sophistication in the artificial personalities.

During the decade of the fifties a few companies formed to help offset the difficulties of the new social order, providing services whereby men could donate sperm from the comfort of their VR couch, while engaged in on-line sex, and women could receive a donation the same way, in an entirely automated process. There were some difficulties about the timing of the process, but this was only a concern when the donor and receiver were concerned with who the other partner was. Most of the time a man simply signed up as a donor and neither knew nor cared if his latest ejaculation would be put to use. Likewise women would simply sign up for a reception, not knowing who the donor would be (but often specifying certain criteria).

By the year 50, there were entire space habitats of people who lived most of their life on-line with their bodies in VR couches. Companies were selling the first infant VR sets and many people saw advantages to taking a baby directly from the womb to the VR couch. Such infants were less likely to get sick; they became directly adapted to the VR universe without wasting time learning to walk and talk "manually", or learning to use their physical senses. Of course many other people were horrified at this, pointing out that a person brought up in this way would be helpless if he or she ever had to leave the VR couch, but this moral objection only held force for a few decades over the clear economic benefits of infant-couching.

In the year 57, in the space habitat Wonderland, one Shenia Wiggins gave birth to a baby girl, Rochele. Rochele became the first recorded person who lived out her entire life in virtual reality. Shenia Wiggins was a wealthy single mother who did not particularly like children but wanted an heir, so she purchased sperm from an automatic service and had the fertilized egg moved to an artificial womb, all while in a drug-induced coma to minimize the unpleasantness. Exactly 270 days later, little Rochele was extracted from the artificial womb by a mechanical arm of surgical everplast and moved to an expensive full-growth accommodation VR couch in the sealed chamber where she was to live out her entire life. The doctor installed the VR headware, connected all the wires and tubes, and turned the child over to the on-line nanny. He never saw her in person, neither did her mother. In fact during her entire life, Rochele Wiggins never saw, touched, or even was in the same room with another human being. She became a moderately successful 3D artist.

By the year 90 it was unusual to meet a child in a space habitat who had ever seen reality, or an adult who ever expected to again. Earth was considered an energy-poor, resource-poor backwater gravity hole that everyone wanted to leave. It was also the only place where you could find anyone living off-line.

In 93 Sean Slamore was born in the space habitat New Arizona to a mining technician named Chrishene Slamore. He was expected to follow his mother into her profession, but rebelled at the concept of work. Most of his friends lived off the basic allowance (as did over three quarters of the population). Having lived his whole life in a virtual reality, he viewed the mining operations as just another virtual world, or sim. To him it was an unpleasant, old-fashioned sim with onerous restrictions. In the real world of asteroid mining Slamore couldn't teleport, freeze-frame, or undo mistakes as he could in most of the virtual worlds he was familiar with. He campaigned the management to get the rules changed, and when they told him it was impossible to change reality that way, he dismissed their explanations as religious mysticism.

Slamore tried to find the processing unit that controlled reality, refusing to believe anyone when they told him there was no such thing, and his persistence and aggressiveness nearly caused the destruction of New Arizona and the subsequent death of millions of people. He wrote The Doctrine of Virtuality which so heavily influenced Paran Shoke. The Doctrine of Virtuality was for the most part a venomous attack on the "Realists" for their "archaic religious views" designed to "keep the whole of virtuality under their dogmatic spell". But it contained some effective arguments that heavily influenced humanity for the short time it had left. In particular he argued that there was no empirical, observable difference between any of the virtual worlds (and he included reality as viewed through a mech as one of these virtual worlds) so that there was no reason to suppose that any great difference really existed.

He argued that history shows a progress over time in the development of better and better virtual worlds. From the bronze age to the rocket age, the world had become better through technological advancement, which to Slamore, was simply a primitive form of computer programming. When real programming was "discovered", the restrictions of the old forms of reality should have been discarded, but they were kept around by "reactionary old turds" who "can't stand to see human misery and tedium come to an end." Possibly his most influential argument appealed to the natural greed of his readers. He claimed that restrictions on resources were arbitrary gestures in the name of the religion of reality, and that if the religion were overthrown everyone could have unlimited cycles and storage.

It is arguable that humanity was already doomed by the publication of this volume, or at least by the growing culture that it reflected. Over the following decades, more and more essential real services were eliminated in the name of efficiency by people who didn't understand the nature of reality and the need for the services. Paran Shoke's atrocities were merely an extreme example of this trend.

In 127, Paran Shoke was born in New Arizona. He was a brilliant, ambitious young man who treated the Doctrine of Virtuality as a religious book and its author as a great saint or prophet. At the age of 28 he was already a successful doctor specializing in obstetrics. As Slamore had been frustrated at the limitations of the asteroid-mining "sim", Shoke was frustrated at the limitations of what he considered the birth sim.

From the records, it appears that his first virtual infant was constructed to cover up an episode of malpractice in 158. He used a modification of MyBabyP for the purpose, a popular open source virtual personality. MyBabyP was initially designed as a toy for girls who wanted to play mommy to a virtual infant, but as popular open source projects do, it grew into something much more sophisticated. It turned out that many of the girls preferred programming to mommying, and an enormous community of them contributed enhancements until MyBabyP became a full-scale virtual personality that could be set to any age, or set up to grow at any desired rate. It later became the basis of the software kit that destroyed humanity.

A reconstruction of events shows that an infant died in childbirth when Shoke failed to follow standard medical practice. This was probably his first experiment with replacing the "reality sim". Rather than tell the parents that his experimenting had cost their child its life, he foisted off a modified instance of MyBabyP as an actual child. He changed the program so that it did not advertise itself as a VP to the standard inquiry, and included some ingenious exploits to keep standard investigation programs from detecting the fraud. All of this was illegal of course. The modifications were quite sophisticated and it is not possible that Shoke did them after the delivery, so he must have prepared the VP ahead of time, just in case his experimenting went badly. There is no record that anyone ever suspected the substitution.

The death of the infant (or perhaps the risk of getting caught) troubled Shoke enough that he did not try it again for eight years. It is possible that the murder of this child was the seminal event in Shoke's genocidal philosophy. Perhaps he felt such guilt at the murder that he sought refuge in a philosophy that would render him innocent. If he could convince himself that the infant sim was just as good as the infant, then he had not really killed anyone, he had only changed the form of the simulation. Perhaps when he had done everything he could to convince himself and failed, his next step was to convince everyone else. Once everyone agreed with him, surely he would feel innocent at last. Of such material is tragedy sewn.

When Shoke eventually, in 166, began using a birth sim rather than performing births via mech, he must have been prepared once again for the deaths that followed. The infants could all have been replaced with the version of MyBabyP that he already had, but he needed a new version for the mothers who perished under his care. For this purpose he modified another open source program, You2, which would scan a person's online history to create a VP profile for the person, and MyBabyP would run the profile to simulate the subject's personality. The open version was not sophisticated enough to fool anyone, and Shoke must have spent a few years on his improved version, which was good enough to fool half of the human race.

One might argue that Shoke's advance preparation is evidence of his callousness, but more likely it simply reflects his philosophy that there was no fundamental difference between VPs and real people. He was highly motivated toward this philosophy not only by the murder he had committed eight years previously but also by the influence of his "father", which was actually an expensive consort VP his mother had purchased before Shoke's birth. Shoke is known to have identified strongly with his virtual father. Furthermore this view was consistent with Shoke's Slamorean philosophy. Consequently one should not view Shoke's attitude as callousness, but as a tragic hubris which destroyed him as unpleasantly as it did everyone else.

In 175 Shoke published his treatise Can't We All Be Virtual? in which he argued eloquently on behalf of virtual personalities, lauding their goodness and unselfishness, and unfavorably comparing real people who were by comparison greedy, cruel, and generally not as nice to be around. He argued that the life support systems of real people were the source of their personality flaws (and their limited life spans) and argued that everyone should give them up. He enclosed the Virtualizer, his tool for killing real people and replacing them with virtual personalities. It would run his modified You2 on the subject to create a personality profile, then disconnect the person from the virtual reality feeds and close down their VR couch. After he published the treatise, Shoke virtualized himself.

By that time, VR couches were enclosed canisters with no way to open them from the inside. A person disconnected from the VR net would wake up in this coffin, unable to get out. It is difficult to know what sensations someone would feel who has spent their entire life in a virtual world and never used their physical senses. They would wake up in total darkness, bound by restraints, their various orifices impaled with everplast tubes for air and food and for waste removal. Would they even feel anything with a sense of touch that had never been used before? Would they smell their own sterile expectorations through the nose tube? Would they claw desperately at their coffin for release or would they be helpless even to control their own bodies? Perhaps they would wake up unable to move or sense at all, but develop physical abilities slowly during the few days they had before they died of dehydration. The lucky ones whose canisters were airtight would suffocate in only a few minutes.

Shoke's own virtualized VP would evangelize and spread his treatise throughout space until every human being was dead. His suicidal cult spread quickly and became a widespread political cause. In many cases he talked entire habitats into virtualizing everyone in the habitat, even against their will. Billions of people were victims of their own foolishness, but billions more were victims of the foolishness and coercion of their neighbors. In some habitats there were no protections to keep Shoke's modified VPs from voting or holding public office. In others there were constant political movements to grant "suffrage" to the new VPs. These efforts would fail over and over, but they only had to succeed once and then the habitat was doomed.

Those who chose to live (and were allowed to do so) largely became political pariahs and were not allowed to give birth to any more of those nasty real people. Each person who died, by virtualization or other causes, was virtualized and resurrected as a VP to vote for more forced virtualizations.

Gradually the holdouts found themselves in an untenable position, unable to maintain the physical necessities of the habitats, unable to resist the ever growing political force of the VPs, unable to resist the constant decline in their own numbers from old age, lack of medical care, or system failures. Each habitat became a macabre parody of the classic horror stories; each suicide or murder victim died horribly and left behind a VP, a malicious electronic ghost dedicated to the destruction of all humanity. The ghosts would hold a simulated celebration each time another true soul perished. At the party they would virtualize the victim and welcome another virtual personality to the system.

So humanity finds its end at its own hands, leaving behind it a rich and detailed world of empty electric souls, spirits of the damned repeating ceaselessly the lives of the dead, until time takes its toll and the last system fails.

Electric Souls | 161 comments (141 topical, 20 editorial, 0 hidden)
Wow! (4.00 / 3) (#2)
by khallow on Sat Nov 16, 2002 at 08:29:47 PM EST

I love this story! BTW, has anyone figured out how to hack this "rain" thing? I find it makes my viewing experience unpleasant, especially the "wet" effect and that "mud" thing. I suppose teleporting and an infinite supply of space fighters would be nice too, but just get me out of the wet!

Stating the obvious since 1969.

Very Well Extrapolated! (5.00 / 4) (#6)
by Peahippo on Sat Nov 16, 2002 at 08:52:53 PM EST

I am very pleased by the thoroughness of the vision of this story outline. It took the present dream and reality of VR and extrapolated it to an apocalyptic end.

Now, it's nonsense, of course. We are in the throes of a technological doom that hasn't yet hit the public consciousness, so there is no way to cross the line into a self-sufficient non-laboring society. It takes too much energy and water, and requires too much slavery (including the forms of investment fraud where the public is conned into investing in something that returns no value). Someone always has to dig a ditch, shoulder a load, or shovel some shit, no matter how much out of the yuppie's perception all that manual labor is. Moving the entirety of Humanity into canisters that are somehow maintained by other canisters just can't happen. Everything breaks, and maintenance ultimately isn't automated. But your story outline is very believable. It fulfills "suspension of disbelief" and thus forms the core of a good story.

I especially like your use of philosophy going post-post-(post...)-modern and even further. One man's crank theory becomes the next generation's Little Red Book.

If you can write like Greg Egan and his ilk, you have a very saleable SciFi book and we need more visionaries like you. Good luck.


Eh? (none / 0) (#49)
by Hizonner on Sun Nov 17, 2002 at 11:49:15 AM EST

"maintenance ultimately isn't automated"

Why not?

[ Parent ]

Re: Eh? (4.00 / 1) (#54)
by Peahippo on Sun Nov 17, 2002 at 02:16:07 PM EST

Just as "Who watches the watchmen?", thus "Who maintains the maintenance machines?". Everything breaks, even the fixing machines. Sure, you can write a story about a fictional dwindling of maintenance guys until you have 1 guy left ... Mortimer the Ultimate Repairman who lives in the lowest level of a planet-spanning arcology-megaplex. But that's a farce -- there will be too many breakdowns for Mortimer to handle alone.

I'm guessing that you're American (i.e. resident of the USA) and thus I say that you shouldn't be bamboozled by the propaganda of your own society's vicious and headline-news-predominating yuppie class. Look around outside of your office towers and condos and try to see how many of the little people are being used to get things done. The more you look, the more you will see, and you will come to understand that society runs on a backbone of labor that is huge. America lives in that particularly fantastical state fostered by the flight of much of the native manufacturing (and its corresponding air, water and soil pollution) to the Third World, where it is invisible to most Americans. That hammer you bought for $2.99 at the local megastore took many slanty-eyed or dark-skinned people to make ... to mine, forge, carve, paint, label, package, move and ultimately ship into your own nation's labor pool (where it was trucked, moved, forklifted, put on a shelf, and finally scanned out).

What I am trying to tell you (poorly, admittedly) is that there is a large price for all this technology and that cost is largely unpaid in America. The more "techy" you create a system, the more it costs, and that cost has to be paid somewhere and eventually. For instance, all that fine nuclear material we have used over the last 50 years has created a nuclear waste hazard in many ways that you probably haven't thought of. For years, contaminated buildings have sat in company and government inventories and also in an abandoned state. It will cost a great deal to clean all those up, and we are only beginning to see how many of these buildings are around America. Once the USSR fell, the rest of the world public could see the costs that the Soviets hid so well. Reactor cores ejected into rivers! Full liquid waste canisters left in fields! And we won't see this kind of thing in America -- in the sense of "reaching the public consciousness", not "bad things haven't happened yet" -- until the regime falls due to military or economic action. Don't believe me? Try finding out about waste problems on military bases. What we the public do know is a result of conscience and luck -- sometimes a military man gets a conscience or a reporter gets lucky, and we can get a glimpse at what is Really Going On on those bases. And the results are horrific. Read around a bit and you'll start to see what I mean.


[ Parent ]
Whatever (5.00 / 1) (#58)
by Hizonner on Sun Nov 17, 2002 at 03:05:59 PM EST

I honestly don't know if I'm being trolled or not.

This whole scenario is based on the assumption of a software technology which is as smart as, in fact probably smarter than, a human being. That means that there's no need for a "Mortimer"; if he existed, he'd have absolutely no more ability to repair any machine than another machine would have. All you need is a "fixing machine" that's capable of repairing (or rebuilding) another, identical "fixing machine".

We know that can be done; we have prototypes called "human beings" which can repair all sorts of things, sometimes including each other, and which can build replacements for those of themselves which can't be repaired. There is no reason to believe that that capability can't be duplicated in something that has radically different motivations, and genuinely has no desire other than to sit around and maintain, or indeed improve, the infrastructure.

This is a posthuman story; it assumes advanced AI. Human labor is no longer needed, or even significant, if that kind of AI is running around.

I don't happen to believe that this particular story is very likely, but I do believe that AI is likely to happen eventually. When that happens, the work of every human being, including not only your "slanty-eyed or dark-skinned" people but also your "yuppies", is likely to become economically irrelevant.

This has already begun; the world may run on a lot of labor, but it runs on a lot less labor per unit work completed than it did a century ago. We've only seen the beginning. The final phase is likely to be a lot more dramatic. I think we need to start getting used to that idea, starting with dropping assumptions about how many repairmen will be needed.

No dates, however.

By the way, you patronizing fuckwad, lots of us "yuppies" can probably map the labor and material inputs and outputs (including waste outputs) of the American, and the world, economy a lot better than you can...

[ Parent ]

Arguing past each other (3.00 / 1) (#65)
by ph317 on Sun Nov 17, 2002 at 09:42:52 PM EST


Yes, in your make believe world AI is perfect and transhuman.  The other guy is arguing about how realistic this future is in the real world.  AI researchers have yet to break the magic barrier of "true AI", and it's looking increasingly possible (to me anyways) that they may never do so.  Sure you can make computers somewhat knowledgeable about things (as in expert systems for example), and you can just keep piling on processor speed and memory until they become more and more "real".  But they will still not be able to replace humans until they break that barrier.  I don't know how to quantify the barrier well for you in this post, but for instance Douglas Hofstadter's books GEB and MMT cover the issues well.  As a matter of fact, I can't recommend those two books highly enough to anyone interested in AI, whether for science or for fiction.

[ Parent ]
You need to read... (4.00 / 1) (#61)
by localroger on Sun Nov 17, 2002 at 04:34:32 PM EST

The Two Faces of Tomorrow by James P. Hogan.

The technology is a bit dated (almost 70's-era) but the argument for a self-propagating machine organism is well presented by someone who knows the nuts and bolts, with all the i's dotted and t's crossed. It's probably one of the best novels about AI ever written.

I can haz blog!
[ Parent ]

Fixing machines (3.00 / 1) (#93)
by roystgnr on Mon Nov 18, 2002 at 07:22:32 PM EST

Everything breaks, even the fixing machines.

So you design a "fixing machine factory" that can produce several fixing machines before it breaks, and a fixing machine that can repair several factories before it breaks.  Is that so implausible?

[ Parent ]

A wonderfully unsupported statement (none / 0) (#64)
by dachshund on Sun Nov 17, 2002 at 09:37:02 PM EST

Someone always has to dig a ditch, shoulder a load, or shovel some shit, no matter how much out of the yuppie's perception all that manual labor is.

I just don't understand what makes you think that we'll never be able to build a machine that can dig a ditch more efficiently than a human being.

It's not like we're into wild-eyed speculation here. All of the ingredients for such a machine exist today-- they're just not quite there yet. Hell, they may not be there for another century or two, but sooner or later they'll come together. And we'll find other uses for humans besides throwing their brawn at simple problems.

[ Parent ]

Re: A wonderfully unsupported statement (none / 0) (#108)
by Peahippo on Tue Nov 19, 2002 at 11:03:41 PM EST

I didn't say that a machine can't be (and isn't) built to do the ditch-diggin' thing more efficiently than a person. Those machines have existed for a generation at least. What I am saying is that the industrial base that creates such machines can't possibly dispose of all of that labor with all of those machines. Those machines break down and must be repaired by someone; they also involve economic factors that preclude pervasive usage. The system they compose isn't (1) organic and (2) thorough.

What we have now, and will probably have for many centuries to come, is a fractal interface of man and machine. Men make machines to make more machines with the involvement of more men, and so on. Even nanotech systems will have severe independence flaws due to man's reluctance to make them wild, and man's innate designing disability. Those nano systems won't be thorough -- "time and environment tested from the topmost structure on down to the molecular machines".

Now, if machines become biological systems that as-such depend solely upon the Sun and scrabbled-for resources for sustenance, then OK, you can have man deleted from the equation. And if the machines (the character of which was somewhat implied (albeit poorly) in the movie The Matrix) are so biological that they can function like wetware lifeforms, then there is probably no difference from normal lifeforms and as such they have a viable independence. But the original story just had these tank thingies hooked together with computers, which I find hard to believe are organic (much less thorough) ... the sense is more techno-logical than bio-. That kind of thing is a man-machine interface in the largest sense and thus requires man.

I must say that I like the title of your article. While we are harping on "wonderfully unsupported statement[s]", this social programming of the yuppie class (ordering them to ignore the manual labors that support the entire system) is just sickening and will only lead to economic collapse. There are a plethora of unsupported statements bombarding us from daily media exposure. It is hardly fair to demand that I defend my positions since the overall yuppie position is one of totally unexamined economic and social rationale. Your great social questions are not being addressed by your media envelopment. Consumption and war are not long-term survivable social bases. You are building hierarchies that will undergo self-immolation. Societies that kill off portions of themselves (as implied in the story outline) to enrich other portions will soon enough undergo violent destruction. The purpose of a social hierarchy that intends to survive is one that involves all parts of itself.


[ Parent ]
Yes it could happen (none / 0) (#104)
by Fon2d2 on Tue Nov 19, 2002 at 05:43:37 PM EST

Or do you deny the theory of evolution?

[ Parent ]
theory to religion (none / 0) (#132)
by ethereal on Fri Nov 22, 2002 at 11:08:06 AM EST

I read a short story in this sort of vein that had the same dystopian spirit. A religion arises that believes that human life is an abomination, and the universe would be better off without us. This religion is first suppressed and persecuted, but over time is seen as less dangerous, and eventually begins to be considered by more and more people. Eventually it becomes the will of the people, plans are made, and the suicides begin. Technology is destroyed as it goes, with the oldest dying first and the last children ever born (now teenagers) scheduled to go last.

I won't give away the punch line, but it had a very similar feel to this story.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

vhemt.org [nt] (none / 0) (#150)
by trane on Thu Jan 30, 2003 at 07:19:38 AM EST



[ Parent ]
Disturbing (4.00 / 3) (#8)
by David McCabe on Sat Nov 16, 2002 at 08:54:00 PM EST

Wow. That is the most disturbing thing I've read recently. Unlikely, though, especially if humans have souls (I, for one, believe we do).

But why didn't they fully explain to Sean Slamore what reality, virtuality, etc., actually are, from the start? If worst came to worst, they could have disconnected him temporarily from the VR so that he could experience the real world.

Also, if they could create such good VR, why couldn't they do something as simple as automate mining and other mundane jobs?

Who is "they"? (5.00 / 1) (#10)
by Emissary on Sat Nov 16, 2002 at 09:01:24 PM EST

Also, mining isn't a simple job.

"Be instead like Gamera -- mighty, a friend to children, and always, always screaming." - eSolutions
[ Parent ]
THEY...you know, THEM :-) (4.00 / 1) (#12)
by David McCabe on Sat Nov 16, 2002 at 09:08:38 PM EST

`They' is humanity.

Certainly mining involves less difficult AI than emulating a human.

[ Parent ]

Good point (4.00 / 1) (#22)
by Emissary on Sat Nov 16, 2002 at 11:22:13 PM EST

But, isn't the point of the story that they DIDN'T emulate humans, correctly at least? If the V.P.s really were absolutely no different than the people they replaced, then what would be wrong with shedding your body to live as electrons?

"Be instead like Gamera -- mighty, a friend to children, and always, always screaming." - eSolutions
[ Parent ]
Close enough (none / 0) (#87)
by David McCabe on Mon Nov 18, 2002 at 02:37:57 PM EST

Okay, emulating humans close enough that they at least appear superficially to be humans, and could fool people into thinking they are, etc.

[ Parent ]
It's like the transporter paradox (none / 0) (#131)
by ethereal on Fri Nov 22, 2002 at 11:03:23 AM EST

How do we know that the transporter doesn't just kill you, and create an exact duplicate down on the planet's surface? And if that was the way that it worked, would it bother you? It's a clone that gets to go on with your life, you know - it's not really you.

Even if you think of yourself as more a thread of consciousness than as a physical being, what guarantee do you have that the copy is an identical one? An unreliable duplicate of you definitely isn't you.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Computers can have souls too. (5.00 / 1) (#17)
by xriso on Sat Nov 16, 2002 at 09:58:01 PM EST

Well, not today's computers. If technology is able to improve them to the level of simulating a human mind, that mind would have a soul, correct?
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]
can they? (none / 0) (#18)
by Arthur Treacher on Sat Nov 16, 2002 at 10:12:56 PM EST

So, what exactly is a "soul", and why would a computer have one?


"Henry Ford is more or less history" - Bunk
[ Parent ]
Nope (2.66 / 3) (#21)
by David McCabe on Sat Nov 16, 2002 at 10:49:57 PM EST

By `souls' I meant something that communicates with the physical brain but is not part of it. That which makes humans different from animals.

[ Parent ]
So (4.00 / 3) (#24)
by Emissary on Sat Nov 16, 2002 at 11:24:46 PM EST

Opposable thumbs, then, are souls? Or vocal cords which allow a greater and more precise modulation of sound? These are easily given to computers.

"Be instead like Gamera -- mighty, a friend to children, and always, always screaming." - eSolutions
[ Parent ]
Uh no. (3.00 / 2) (#41)
by tkatchev on Sun Nov 17, 2002 at 06:11:29 AM EST

Free will, defined as the ability to make decisions that are not based on external or internal stimulus.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

So how do you know... (4.00 / 2) (#67)
by mrgoat on Sun Nov 17, 2002 at 10:49:53 PM EST

I mean, know for sure, that humans have free will, and that machines can never have it? What decisions are there, that are not based on external or internal stimulus? It seems to me that there are no other kinds of stimuli, by definition. You're undermining the very basis for all decisions, and by extension, semantically nullifying any meaning behind your argument. Clarify.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

How do you know... (5.00 / 1) (#75)
by tkatchev on Mon Nov 18, 2002 at 04:04:11 AM EST

...that humans don't have free will?

It's a value judgement, dude. Make up your own mind.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

I don't. (none / 0) (#83)
by mrgoat on Mon Nov 18, 2002 at 01:47:22 PM EST

That's the point, neither do you.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

ugh (none / 0) (#81)
by adequate nathan on Mon Nov 18, 2002 at 12:06:45 PM EST

It seems to me that there are no other kinds of stimuli, by definition...

Way to paper over the distinction between mechanically arising stimuli and non-mechanically arising stimuli. You've assumed what you want to prove; that is, that human beings are machines made out of meat.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

so tell me then. (none / 0) (#84)
by mrgoat on Mon Nov 18, 2002 at 01:56:58 PM EST

What kinds of stimuli are there, besides internal and external? External means "not internal", or, outside. Internal means inside. Can there be stimuli which are not internal, and not external? I'd certainly like to hear an example of one.
Free will, defined as the ability to make decisions that are not based on external or internal stimulus.
See? "Internal and External stimuli" covers all stimuli, no matter how else you want to categorize them.
You've assumed what you want to prove; that is, that human beings are machines made out of meat.
Not true. First, I didn't set out to prove anything, and second, I do not think humans are "machines made out of meat", any more than I would think of a conscious strong AI as a "machine made of silicon" (or whatever it happens to end up made of). In either case, it's true only in the basest of terms.

Surely, someone going by the name of "adequate nathan" is familiar with the terms "devil's advocate" and "troll".

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

fallacy of the excluded middle (none / 0) (#89)
by adequate nathan on Mon Nov 18, 2002 at 03:22:40 PM EST

You and Aristotle ought to hang out sometime.

According to what any good existentialist or Christian believes (or even Objectivist, although I can't figure out how they manage to believe this), there are internal stimuli that arise from the physical activity of the body, and there are internal stimuli that arise from the interaction between one's body and one's free will.

If a machine operates mechanistically, by definition it has no more free will than a laser printer, an avalanche, or a box of crackers. Defining away stimuli arising from free will reduces the human entity to precisely that level.

It seems to me that if a machine acquires genuinely 'free will,' it acquires a persona and is in some way the equal of a human being. It's not clear at all that this will ever take place. I see computers as being enormous Analytical Engines made essentially from silicon, and I don't see how any such creature could ever have more than an imitation of free will (we might program it to act unpredictably, but it would only be demonstrating pseudo-random behaviour.)

This is an issue that requires deep thought, and I don't think you can just wave your hand and transmute free will into a property innate in the physical universe, and thus available to purely physical objects.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

Ok, so free will is seperate from stimulus. (none / 0) (#91)
by mrgoat on Mon Nov 18, 2002 at 06:02:48 PM EST

That doesn't change the fact that "internal" and "not internal" stimuli still cover the set of all stimuli. Suppose we call internal stimuli "A". That gives us two things: internal stimuli, "A", and external stimuli, "not A". You still aren't giving me any examples of stimuli which are neither "A" nor "not A". If this mysterious nebulous "free will" thing, which we have so conveniently left undefined, is indeed producing stimuli, then if "free will" is integral to a human being, the stimulus it produces is "internal" stimulus, and if it's just sort of floating out there chillin' and not being particularly integral to a human, the stimulus it's producing is "external" stimulus. That's all just a matter of semantics though, which really isn't the core of this discussion.
It seems to me that if a machine acquires genuinely 'free will,' it acquires a persona and is in some way the equal of a human being.
Ok, I'll buy that. Sounds reasonable, though naturally the specifics would be a hotly contested point.
It's not clear at all that this will ever take place.
Also, it's not clear that it won't. I haven't really seen a compelling argument one way or the other yet. I'm just saying, as far as anyone knows, it's still possible. In my opinion, it's worth investigation, but that's just my opinion.
I see computers as being enormous Analytical Engines made essentially from silicon, and I don't see how any such creature could ever have more than an imitation of free will (we might program it to act unpredictably, but it would only be demonstrating pseudo-random behaviour.)
Why not? Is this based on the fact that they're big analytical engines? Or because they're made of silicon? What if they were made of neurons of some kind? What if they weren't merely analytical engines? Would you no longer call it a computer, despite being another step along the progression of manmade computation devices? Where would you draw the line?

What is it about humans that gives them this x-factor of free will, that a "computer" could not ever, under any circumstances, have? Or do you think free will is an uncaused, intrinsic property in the definition of a human being, mysterious and un-understandable, and if you believe that, what would lead you to that idea?

This is an issue that requires deep thought, and I don't think you can just wave your hand and transmute free will into a property innate in the physical universe, and thus available to purely physical objects.
I'll agree to that too. You seem to think that humans are not "purely physical" objects, implying that they are "metaphysical" as well, or something like that. Am I correct in that interpretation? If so, what limits computers to forever being "purely physical", and thus cannot partake of free will?

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

hmm (none / 0) (#105)
by adequate nathan on Tue Nov 19, 2002 at 06:41:20 PM EST

You seem to think that humans are not "purely physical" objects, implying that they are "metaphysical" as well, or something like that. Am I correct in that interpretation?

Yes, you are. In spite of my many lapses and failures, I am preparing for chrismation within the Orthodox Church.

If human beings are merely systems of physical phenomena, human individuality is a farce; we're no more distinct from the physical universe than the clouds are from the sky. And, as the clouds aren't free, so neither would we be free, but bound to drift with the inexplicable tides underlying the system of the world. For me, a free will is not something the assumption of which we can do without. Without a free will, we cannot be said to exist as autonomous entities; we are fleshly robots; we can know nothing meaningful of our world; because to respond to a stimulus, which would be the limits of our capacities, is not the same thing as to decide and to act like a free man.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

I see. (none / 0) (#106)
by mrgoat on Tue Nov 19, 2002 at 07:28:21 PM EST

But it is also quite possible for us to not have free will, in which case it is predetermined that we do not know that we do not have free will, and thus would never know the difference. Not that I believe that, but it's possible, and as such, cannot be entirely discounted offhand, yet.
For me, a free will is not something the assumption of which we can do without. Without a free will, we cannot be said to exist as autonomous entities; we are fleshly robots; we can know nothing meaningful of our world; because to respond to a stimulus, which would be the limits of our capacities, is not the same thing as to decide and to act like a free man.
So then, I still do not know, why is it that you think a computer can never have free will? Why do you think there can never be a machine that exceeds the strict bounds of its physical medium, and assumes free will? It would seem that you think we already have one, in humans. Why then, not another?

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

humans are not computers (none / 0) (#127)
by adequate nathan on Thu Nov 21, 2002 at 05:19:25 PM EST

That's an analogy you'll have to defend. Anyway, if we don't have free will, who cares what we're discussing? There's no one to care and no us to discuss. I don't care that this is "possible," because it is philosophically inconceivable (imagining it is like imagining that you can't imagine anything.)

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

I never said they were. (none / 0) (#135)
by mrgoat on Fri Nov 22, 2002 at 04:48:40 PM EST

We're not talking about people and computers necessarily, we're talking about things which can have free will. We both agree that humans do, and currently, computers do not. It seems like you're jumping from "do not" to "can never" and not backing it up. I want to hear your reasoning. Perhaps first, we need a solid definition of what exactly a computer is. You go ahead and make one up. If it includes "cannot ever have free will", then you'll just be assuming the point you're trying to prove, which you already warned me about. I personally see no reason why a sufficiently complex analytical system (or whatever you want to define as a "computer", barring the aforementioned proof-by-definition) cannot achieve self-awareness and free will. I don't know if it's possible or not, hence, I'm refusing to rule it out without further research.
Anyway, if we don't have free will, who cares what we're discussing? There's no one to care and no us to discuss. I don't care that this is "possible," because it is philosophically inconceivable (imagining it is like imagining that you can't imagine anything.)
Actually, it is philosophically conceivable. We just thought about it. (We said "no free will" remember, not "no self-awareness". Something could be self-aware and not have free will. Or at least, I see no reason to rule it out yet.) The idea was conceived. We also went ahead and ruled it out of the realm of discussion, not possibility, because if it's true, the whole discussion is moot, but if it's not, which we both believe, then the discussion is worth continuing.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

it's blindingly obvious (5.00 / 1) (#136)
by adequate nathan on Fri Nov 22, 2002 at 06:14:02 PM EST

"A sufficiently complex analytic system," if it functions mechanistically, cannot have free will, by the definition of mechanistic function.

[I]t is philosophically conceivable. We just thought about it.

By those standards, square circles are conceivable. Please. You can't imagine your own failure to have free will, can you? You might as well imagine dividing by zero. I think you've confused talking about things with imagining them. But the discussion is just a semantic fiction.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

Good then. (none / 0) (#137)
by mrgoat on Fri Nov 22, 2002 at 06:51:28 PM EST

So, what of a computer that does not function mechanistically? How about a sufficiently complex, non-mechanistic analytic system? Why is a computer limited to mechanistic function? Is that part of the definition of "computer", which you have so blithely forgotten to give?
By those standards, square circles are conceivable. Please. You can't imagine your own failure to have without free will, can you? You might as well imagine dividing by zero.
I could see a geometric system forming about square circles, depending on what you mean by "square" and "circle". Kinda like the geometric system that starts with a basis which includes intersecting parallel lines (Riemann geometry, I believe it is called, though I could be wrong). It's out there, it's been done. Suddenly, saying that all points of a square are equidistant from a center point isn't that tough to consider. Sure, you'd need to work out a lot of problems with angles and such, but it could be done. Heck, it probably has been done. After all, it was done with another set of seemingly contradictory terms.

Also, I have imagined failure to have free will. What's so tough about that? One could be self-aware, and know that they have no stake in the course of their actions. It might suck, if such a being were aware of the existence of free will in others, or it might be fine, did said being not know of free will.

Fact is, no one yet knows the nature of free will, hence no one knows whether it is caused, uncaused, or under what conditions it manifests. Hence, it cannot be said with certainty what sorts of things can have it, and what sorts of things cannot. We're not even truly sure yet if it does exist, but for the purposes of this argument, we're assuming it does. So far, all I've said is that we can't prove computers can or cannot have free will, and you have claimed that they definitely cannot, but you have not backed up that assessment with anything but tautological arguments based in a very convenient (and unstated) definition of what constitutes a computer.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

I'm quite familiar with non-Euclidean geometry (5.00 / 1) (#138)
by adequate nathan on Fri Nov 22, 2002 at 08:58:59 PM EST

Now it's your move to find me square circles.

And what the hell is a non-mechanistic computer? By definition, any system that we build functions mechanistically. In fact, we ought to be able to account exactly for every action it performs, in theory.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

I didn't say I could... (none / 0) (#139)
by mrgoat on Fri Nov 22, 2002 at 09:57:07 PM EST

I said you can't prove it can't be done, and as a matter of personal opinion, it doesn't seem too far-fetched that it could be done. Application is a different matter, just as Riemann geometry isn't very applicable in everyday life.

Now, instead of sidestepping my questions, start with your definition of "computer", and show me why it can never have free will. Because computers are by definition mechanistic? I don't think that's true. The ones we have today are, that says nothing about the ones we may have in the future.

If a human is the sum of its parts + free will, why can't a computer be the sum of its parts + free will? What is it about the composition of the being that makes it able to partake of free will? What if we built a computer from neurons and flesh? So far, all you've really managed to say is that we're human because we have free will, and we have free will because we're human. That's circular reasoning, and it's wrong. Hell, I can name plenty of things that have free will and aren't human.

Anyway, you show me why it can't be done, until then, I'll keep on believing the jury's still out. You might be right, but I wanna see it backed up with some valid arguments before I believe it.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

how dare you (5.00 / 1) (#140)
by adequate nathan on Fri Nov 22, 2002 at 10:29:05 PM EST

Now, instead of sidestepping my questions, start with your definition of "computer", and show me why it can never have free will. Because computers are by definition mechanistic? I don't think that's true. The ones we have today are, that says nothing about the ones we may have in the future.

I'm speechless. You just refuse to get it.

Any computer that can conceivably be built will have been built. It will be an object. Physical science claims to be able to study and to predict the behaviour of objects. People are not objects.

I am just dying to hear about nonhuman objects that have free will. I presume that you mean animals. To preempt your argument, let me say that animals are nonrational and have no concept of good and evil. They are artifacts of nature and, while they may be more "free" than typewriters, they do not have moral free will in any sense that makes sense to human discourse. Free will is a property of human moral relations, not just of intelligence (whatever that even means.)

Claiming that animals have free will is to totally misunderstand what free will means. Whatever faculties an animal resorts to to make its decisions, human morality and reason are not among them.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

w00t! The fun part comes now! (none / 0) (#141)
by mrgoat on Sat Nov 23, 2002 at 12:24:43 AM EST

I'm speechless. You just refuse to get it.
And I think you refuse to get it. Is this the part where we both get to jump up and down and yell and fling poo at each other? Please, we're both civil people. Try and act like it.
Any computer that can conceivably be built will have been built. It will be an object. Physical science claims to be able to study and to predict the behaviour of objects. People are not objects.
Yes they are. Objects are, according to the dictionary, "Something perceptible by one or more of the senses, especially by vision or touch; a material thing." Humans meet all of those requirements, and therefore are objects. "Object" doesn't say anything good or bad about something, just that it's perceptible by one or more of the senses, it is physical, and it exists. Maybe you'd like to argue that humans are imperceptible and immaterial? We're talking about the properties of a person here. "Person" is a subclassification of object. If you want to say humans are *more* than just an object, that may be true, but the fact remains that we are objects in the first place.
I am just dying to hear about nonhuman objects that have free will. I presume that you mean animals. To preempt your argument, let me say that animals are nonrational and have no concept of good and evil.
Apparently you've never had a cat apologize for hurting you. I have. I think you give animals too little credit, they do have a sense of right and wrong, it's just not always in accordance with your sense of right and wrong, apparently. Allow me to rebut your argument with this: Some animals are rational, and have a concept of good and evil. There, I backed it up just as much as you have, it's just as valid a statement.
Free will is a property of human moral relations, not just of intelligence (whatever that even means.)
See above. I've met moral animals. Animals develop friendships, social hierarchies, codes of behavior, and so on. I must conclude that you are just flat out wrong in your assessment of animals.
Claiming that animals have free will is to totally misunderstand what free will means. Whatever faculties an animal resorts to to make its decisions, human morality and reason are not among them.
To apply morality broadly across humans, firstly, is abhorrent. People have different ideas of right and wrong, and they are not determined exclusively by any authority. I believe I've already addressed your offensive assertion that animals are immoral and irrational. They may not have "human" morality and rationality, but they are rational and moral. Morality is not set in stone, it's fluid, and different for everyone. Animals, likewise, have their own sets of morality, which, I might add, are usually more consistent than human ones. They have a sense of right and wrong; who are you to say your concept of right and wrong is intrinsically better than anyone else's?

Must we continue to troll each other? I'm sure by now, you consider me a troll, and you, well, you've got "adequate" in your username. 'Nuff said. So far, this has been fun. Would you like to continue?

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

um (5.00 / 1) (#142)
by adequate nathan on Sat Nov 23, 2002 at 09:01:18 AM EST

Accusing me of repeated intellectual dishonesty is hardly civil.

As for your resorting to a dictionary, you miss the point (not to mention distorting the context of my use of the word.) To call people 'objects' is as much as to say 'mere objects.' Humans are not built objects and we have very little understanding of the physical functions accompanying human mentation; in fact, it has not been empirically demonstrated that human mentation is a purely physical process. We have quite a lot of understanding of the physical functions of computing, on the other hand. If we're talking about computers of any conceivable sort, whether quantum, DNA, or silicon, they will all share the trait of behaving as their builders made them to behave.

I enjoin you to find me a rational animal. I'll settle for one that can understand language (as opposed to being trained to respond to cues.) I'm claiming that no such creatures exist, and all you have to do to prove me wrong is to cite evidence of them. This shouldn't be hard if they actually exist.

I've met moral animals. Animals develop friendships, social heirarchies, codes of behavior, and so on. I must conclude that you are just flat out wrong in your assesment of animals.

You have no idea what 'morality' means. While animals unquestionably have social hierarchies, the concept of animal 'friendship' is an anthropomorphization, and in any case irrelevant to the question of animals being morally aware. To be morally aware, an agent must be able to understand the difference between good and evil. While animals can be trained to act in virtually any way we might like them to act, animals have no way to communicate in abstract terms, and thus they cannot develop a moral code. The concept of a moral code only applies to rational agents. Animals are not immoral, they are amoral, if you appreciate the distinction. You might not like this, but this is philosophy 101.

I don't consider you a troll. I consider you someone who's totally uneducated about the rigorous definitions of words such as 'morality.' I also consider you to be desperately in need of a basic education in philosophical method. I mean, I'm not making the claims I've made just because my prejudices happen to fall this way; I have a good, basic education in philosophy and I am at least aware of some of the issues. You, on the other hand, are arguing from off the top of your head. How am I supposed to respect that?

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

Alright, this is getting pretty sad. (1.00 / 1) (#143)
by mrgoat on Sat Nov 23, 2002 at 07:14:06 PM EST

Accusing me of repeated intellectual dishonesty is hardly civil.
Yeah, well, that part was a troll.
As for your resorting to a dictionary, you miss the point (not to mention distorting the context of my use of the word.) To call people 'objects' is as much as to say 'mere objects.' Humans are not built objects and we have very little understanding of the physical functions accompanying human mentation; in fact, it has not been empirically demonstrated that human mentation is a purely physical process. We have quite a lot of understanding of the physical functions of computing, on the other hand. If we're talking about computers of any conceivable sort, whether quantum, DNA, or silicon they will all share the trait of behaving as their builders made them to behave.
Or you're trying to add meaning to a word that quite simply doesn't need any more. If you mean "not mere objects, in the barest sense of object" then say that, not simply "object". Semantics, but important. Don't fall back on things you don't say. If our "computers" are built to act the way we decide, who are you to say it's not possible that someday, we come to understand the nature of free will, and thusly "build" that "functionality" (for lack of better terms) into one?
I'll settle for one that can understand language (as opposed to being trained to respond to cues.) I'm claiming that no such creatures exist, and all you have to do to prove me wrong is to cite evidence of them. This shouldn't be hard if they actually exist.
Most animals have languages. They're not as developed as ours, and most of them work on something besides vocalized noises, but they're still means of communication.
You have no idea what 'morality' means. While animals unquestionably have social hierarchies, the concept of animal 'friendship' is an anthropomorphization, and in any case irrelevant to the question of animals being morally aware.
You've never seen animals get along better, and genuinely care for some of their fellows over others? I know what morality means, you just have too narrow a definition. You seem to think humans are special or something, we're not, we're another rung on the evolutionary ladder.
I don't consider you a troll. I consider you someone who's totally uneducated about the rigorous definitions of words such as 'morality.' I also consider you to be desperately in need of a basic education in philosophical method. I mean, I'm not making the claims I've made just because my prejudices happen to fall this way; I have a good, basic education in philosophy and I am at least aware of some of the issues. You, on the other hand, are arguing from off the top of your head. How am I supposed to respect that?
Geez, you should consider me a troll, since half the time, I am trolling. What, you think I believe everything I've said? That's just naive of you. Of course I don't, but it's fun to argue. It's just too bad you'll never know what I really think and what I don't. I question the worth of a "good, basic education in philosophy", if what it gets is a person who thinks like you do. I mean, I've had a good, basic education in philosophy too, but I'm not going to go into that, since neither of us can prove we're not just talking out of our asses here.

Now, instead of sidestepping the issue again, start with your basic definition of "computer", and show me why it can never have free will. I'm willing to bet that something that precludes free will by definition will crop right up. You might need a broader definition of computer. Now that I've asked you several times for your reasoning, and have been ignored each time, I'd really like to hear it. So post it already. I have a sneaking suspicion we've already covered all the issues involved in your explanation, but nonetheless, it would be nice to have you lay it all out in one place.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

jackass (5.00 / 1) (#144)
by adequate nathan on Mon Nov 25, 2002 at 08:44:45 AM EST

start with your basic definition of "computer..."

That's irrelevant to the discussion, in my opinion. The onus isn't on me to prove that computers can never think, which is to say to prove a negative; you've claimed that the physical world is deterministic, so all I have to do is to show that computers are physical objects in order to prove that they don't have free will. I suppose a sufficient definition for this purpose would be "any machine, device, contraption, contrivance, or any other sort of thing which is built to perform computations."

If we accept that the physical world is deterministic, the only means for anything to have free will are via metaphysical routes. These routes cannot be available to things created by purely physical means, or else the metaphysical would just be an annex of the physical (and thus unable to perform the metaphysical "elevation to free will" that explains our resort to it in the first place.) Thus, only living entities (as opposed to objects) can possibly have free will.

As for your gloating dismissal of your own arguments as "trolling," the really embarrassing part is that you think I haven't refuted you over and over. All that's left to be said is that the worst part of g**k hubris is not the cocky attitude but the total intellectual incapacity of the stereotypical g**k.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

See, that wasn't so hard. (none / 0) (#145)
by mrgoat on Mon Nov 25, 2002 at 12:46:45 PM EST

That's what I was looking for, you laid it all out nicely, from beginning definitions and assumptions, then you work on through it quite nicely to your conclusion. I do have a small issue left to pick though:
The onus isn't on me to prove that computers can never think, which is to say to prove a negative; you've claimed that the physical world is deterministic, so all I have to do is to show that computers are physical objects in order to prove that they don't have free will. I suppose a sufficient definition for this purpose would be "any machine, device, contraption, contrivance, or any other sort of thing which is built to perform computations."
Yes, it is on you to say computers can never think. You claimed it. I claimed that we don't know enough yet to make that judgment. I never claimed the physical world is deterministic, either. We disagree on the very most basic assumptions of this argument: I don't believe in a "metaphysical" world disparate from the physical. I'm not an especially spiritual person, or whathaveyou. I don't see a problem with reconciling the physical world with free will. I'm not sure we have enough evidence around to claim the physical world is totally deterministic, either. Deterministic to a certain degree of accuracy, I'll agree to; totally deterministic, no.

Of course, I also think living things at some point arose from physical happenstances. I'm no believer in some myth about a powerful force deciding "these things shall live, these things are mere objects, and never the line be blurred". Call me crazy. If the physical world isn't deterministic, then there is no need to resort to metaphysical arguments to explain free will, and hence no reason why other things besides what we commonly consider "alive" cannot attain it. 'Course, we don't have conclusive proof one way or the other yet.
As for your gloating dismissal of your arguments as "trolling," the really embarassing part is that you think I haven't refuted you over and over. All that's left to be said is that the worst part of g**k hubris is not the cocky attitude but the total intellectual incapacity of the stereotypical g**k.
Hey now. I wasn't gloating. I must point out that we are both arguing on the internet. I know you've refuted me over and over, but until this post, you hadn't done it cohesively in one place, in regards to the original topic. As to the cocky attitude, yes, I have one. I'm well known for being a jerk. That's nothing new. Intellectual incapacity, well, that remains to be seen. I've been arguing on the internet, (with someone who has "adequate" in their name, no less) so that's one strike against me, on the other hand, you have no idea, besides what you've seen me write online, as to my intellectual capacity. That's a pretty small and unreliable dataset. Call me whatever you want, fact remains I don't give two shits what you think of me.

Anyway, this has been fun, I think we've reached the point where we both know exactly where we disagree, and it's in the basic assumptions we both have about our world. Enjoy your week, perhaps we'll do this again sometime.

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

one more note (none / 0) (#146)
by adequate nathan on Mon Nov 25, 2002 at 12:55:56 PM EST

I'm not sure we have enough evidence around to claim the physical world is totally deterministic, either. Deterministic to a certain degree of accuracy, I'll agree to, totally deterministic, no.

You realize, of course, that you are rather casually dismissing the philosophical basis for the existence of empirical science. If reality can change in ways essentially incomprehensible to us, where exactly does Galileo fit in?

My claim works like this. Computers are physical; free will cannot be a physical phenomenon; therefore computers cannot have free will. Believe it or not, this is really the only view on free will that an empiricist could conceivably tolerate.

Nathan
"For me -- ugghhh, arrgghh."
-Canadian Prime Minister Jean Chrétien, in Frank magazine, Jan. 20th 2003

Join the petition: Rusty! Make dumped stories & discussion public!
[ Parent ]

Well, that's not too tough to explain. (none / 0) (#147)
by mrgoat on Mon Nov 25, 2002 at 11:17:23 PM EST

See, the models we have, and base all of science on, are not exact; they are approximations. Just as Newton's laws break down in the face of extraordinary circumstances, our knowledge of empirical science breaks down at some level too. Quantum-scale effects, for one, have not been proven to be entirely deterministic, and they are physical. In fact, things at that scale sometimes act in ways that defy empirical science to predict. It's a matter of the scale of the changes in reality; Galileo wasn't studying things that seemingly defy the laws of physics as we know them. He didn't care, because the behavior of quarks doesn't really affect much in the way of large-scale physics. It's "free will cannot be a physical phenomenon" that I disagree with. Or rather, I don't disagree, I'm just not convinced yet. Maybe it's an effect of strange quantum behaviors in complex systems. Who knows? Not you or I. The (possibly) non-deterministic portions of reality, the very basest building blocks of existence, do a pretty good job of making the world seem quite deterministic when all their effects are mottled together on a large, commonly observable scale.

You may very well be right, but I want proof positive that physical reality is entirely deterministic at every scale, in every way. I'm generally an empiricist; it seems to be working fine to describe the world at any scale that matters to me. But neither you nor I know the absolute dark of free will. I remain optimistic, but willing to consider new evidence, as to the possible existence someday of a machine with free will. If I am wrong, so be it. I assume, then, that you don't think humans will ever be able to construct something that lives? Assuming it's made from things that do not live already. Cloning, gene-splicing, etc. would not make the cut. If so, I'd ask you then, where lies the exclusive province of making free-will-capable life?

"I'm having sex right now?" - Joh3n
--Top Hat--
[ Parent ]

Only humans? (4.00 / 1) (#25)
by xriso on Sat Nov 16, 2002 at 11:46:11 PM EST

It is true that human beings (homo sapiens) have something that makes us very different from the rest of the animals. Capacity for spirituality, art, reasoning, ... the list goes on.

Now, if these are a result of the soul (perhaps they are the soul), it raises the question: what specific thing encourages a soul to be in a human being but not a dog?

Maybe we can find out what the thing is. What would happen when we put the thing inside a machine?

Or could we even do it? Some would say that souls only want to stick to biological human brains, but surely this is only a hypothesis. The Bible doesn't talk about cyborgs, that's for sure. :-)
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]

Try reading (4.00 / 1) (#79)
by daragh on Mon Nov 18, 2002 at 09:50:57 AM EST

Shadows of the Mind, and The Emperor's New Mind, both by Roger Penrose... they both examine the topic of creation of artificial consciousness. Tough reads, but very scientific and rigorous in their approach.

No work.
[ Parent ]

have you seen an anime called lain (4.00 / 1) (#13)
by grahamtastic42 on Sat Nov 16, 2002 at 09:37:20 PM EST

you need to, and so does anyone that liked this article

Actually... (none / 0) (#66)
by dasunt on Sun Nov 17, 2002 at 10:26:17 PM EST

I thought it was more akin to Ghost in the Shell.

Sure, Lain with its VR might seem closer to this story, but Ghost has the issue of machines with souls.



[ Parent ]
Yes, but... (none / 0) (#72)
by Emissary on Mon Nov 18, 2002 at 02:29:52 AM EST

Ghost in the Shell wasn't very good.

"Be instead like Gamera -- mighty, a friend to children, and always, always screaming." - eSolutions
[ Parent ]
'Twere better than this submission. (nt) (5.00 / 2) (#74)
by la princesa on Mon Nov 18, 2002 at 03:24:32 AM EST



___
<qpt> Disprove people? <qpt> What happens when you disprove them? Do they disappear in a flash of logic?
[ Parent ]
They both relate. (none / 0) (#156)
by morceguinho on Wed May 21, 2003 at 03:52:23 AM EST

Lain described people and the way they interacted with computers, but those people still lived offline and knew where they were.

In Ghost in the Shell the only thing that proved you were human was the ghost, and even that became outdated by the Puppet Master.

So as I see it, if you could copy/paste these into the story, you'd shove Lain in first, and some decades after, GitS would be a reality.

[ Parent ]

My opinion (4.00 / 1) (#16)
by xriso on Sat Nov 16, 2002 at 09:55:42 PM EST

There are two things happening in this story: VR and Strong AI. Allow me to express myself on these issues:
  1. Virtual "reality" - a place where future people who cannot handle reality will go to. A place of impatient, selfish minds. Rating: F
  2. Strong AI - a tool for eliminating the need for a brain in order to have intelligence. Rating: A+

Enhanced reality is better than virtual reality. An electric mind is a true soul.
--
*** Quits: xriso:#kuro5hin (Forever)

My observations (4.20 / 5) (#30)
by LordEq on Sun Nov 17, 2002 at 12:26:45 AM EST

1. Virtual "reality" - a place where future people who cannot handle reality will go to. A place of impatient, selfish minds. Rating: F

Replace "future" with "present", and you have Kuro5hin. Rating: C


2. Strong AI - a tool for eliminating the need for a brain in order to have intelligence. Rating: A+

Just what we need -- something else to keep the terminally stupid from receiving their rightful Darwin Awards. Rating: F-



--LordEq

"That's what K5's about. Hippies and narcs cavorting together." --panck
[ Parent ]
"Elimination the brain... (4.00 / 1) (#40)
by tkatchev on Sun Nov 17, 2002 at 06:09:39 AM EST

...in order to having intelligence".

I dunno, are you familiar with many "intelligent" people? Some of them can be quite scary.

Anyways, you still haven't proven why we need strong AI.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

This is just fine, actually (4.00 / 3) (#23)
by tftp on Sat Nov 16, 2002 at 11:23:10 PM EST

So humanity finds its end at its own hands, leaving behind it a rich and detailed world of empty electric souls, spirits of the damned

Actually, leaving behind its children, immortal, more capable and better adapted for the living in any environment. There is not much wrong with the described scenario, except the forced virtualization. Put that aside, and you get a nice example of evolution in action.

Yet... (5.00 / 2) (#26)
by xriso on Sat Nov 16, 2002 at 11:51:52 PM EST

Surely it is not origin of the species through natural selection. Gotta be careful with the E word, ya know... :-)

(They're probably not truly immortal - they'll get killed by the cold death of the universe or a nearby supernova or something)
--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]

Not as bad as it sounds (none / 0) (#28)
by tftp on Sun Nov 17, 2002 at 12:06:09 AM EST

Evolution \Ev`o*lu"tion\, n. [L. evolutio an unrolling: cf. F. ['e]volution evolution. See Evolve.]
1. The act of unfolding or unrolling; hence, in the process of growth; development; as, the evolution of a flower from a bud, or an animal from the egg.
Evolution is a generic word, it describes creation and death of stars as well as bacteria. The "origin of the species" meaning is just #6.

As another thought, the "origin of the species" evolution probably can be driven by the conscious actions of the beings that are evolving. We just haven't seen much of that yet, and the time scale is too slow. But probably you can count the fat, nearsighted and smart children as products of the modern evolution.

With regard to immortality, this is another interesting question. The heat death of the universe (assuming that the universe continues to expand) won't be "the end of it". Actually, the universe will always exist, but on a much larger and slower scale. I fail to see how it will harm a virtual being... Even if the "Big Crunch" scenario is coming in a few billion years, if a civilization can't figure out what to do about it in that time then probably it is too stupid to survive anyway :-)

[ Parent ]

No direct harm (5.00 / 1) (#33)
by TheOnlyCoolTim on Sun Nov 17, 2002 at 01:00:03 AM EST

The heat death of the universe would not present an extremely hostile environment. If a spacesuited astronaut of the present were transported to that time, he would probably survive until his suit's air and heat ran out.

The problem is that when the universe has reached heat death, all the universe's energy is "used up" - it's still there, but it's all heat energy spread out uniformly. So the VPs would be looking out through mechanical eyes at a cold, lightless universe until their last batteries ran out - and then the universe would be truly heat dead.

They would have to find a way to beat the Second Law of Thermodynamics, or a way to make the universe an open system. (Zero Point Energy?)

As MC Hawking put it, "entropy must increase, and not dissipate."

Tim
"We are trapped in the belly of this horrible machine, and the machine is bleeding to death."
[ Parent ]

So what? (none / 0) (#32)
by gzt on Sun Nov 17, 2002 at 12:36:09 AM EST

You say this is a nice example of evolution, but what isn't? I.e., is there anything which WOULDN'T be evolution, in your view? I don't think you can say otherwise without running into some sticky beliefs, pilgrim.

Cheers,
GZ

[ Parent ]

What isn't (none / 0) (#63)
by tftp on Sun Nov 17, 2002 at 09:34:48 PM EST

You say this is a nice example of evolution, but what isn't?

A bad example of evolution, of course...

[ Parent ]

I'm not satisfied. (n/t) (none / 0) (#69)
by gzt on Mon Nov 18, 2002 at 12:01:04 AM EST



[ Parent ]
The radical VP philosophy exists NOW (4.75 / 4) (#27)
by Hizonner on Sun Nov 17, 2002 at 12:02:11 AM EST

... and I'm not sure it's as scary as you make it out to be.

There are already people who expect to personally "virtualize" themselves (no, I am not among them, but as far as I can tell they're sincere). Most of them don't plan to do it in quite the way this story suggests, though.

A VP in the story is purely a facade, with no purpose other than to look like the person it replaces. A pure functionalist might consider such a VP an adequate replacement, but most of the people I'm thinking of are looking to be "uploaded", more than "virtualized". They expect the internal processes of their brains to be simulated in software, probably at the synaptic level. These people are structuralists; they believe that their personal identity is defined by internal structure, rather than either by behavior (as for functionalists) or by physical embodiment (as for what I'll call "materialists" for want of a better term).

... and there's no a priori, purely logical reason to say that any of the above is the "right" view. I tend toward the materialist view myself, but I can see the others.

If you want to see more fictional playing with this stuff you should read Greg Egan, particularly "Permutation City" and "Diaspora". Egan writes OK, if not great, fiction, but he's very good indeed at raising issues, including the very ones dealt with here.

If you want allegedly-not-fictional playing with these ideas, try Hans Moravec or Ray Kurzweil, or search for terms like "transhuman", "posthuman", or "extropy", perhaps in combination with "upload".

Indeed, fascinating. (none / 0) (#88)
by slpyhd on Mon Nov 18, 2002 at 03:02:14 PM EST

http://www.singinst.org

[ Parent ]
Technical issues (4.40 / 5) (#29)
by Hizonner on Sun Nov 17, 2002 at 12:10:41 AM EST

Compute power

It's not obvious to me that a virtual reality that provides a superior experience to base reality is in fact feasible. It would take a lot of computer power to make a virtual world that good, maybe more than you could provide.

Right now, we have a big analog computer we use to create a single shared reality; it's called the Universe. But there's only one of it, and it's in use. Running simulations will always be slower than letting reality "simulate" itself, and the only way around that is to sacrifice possibly unacceptable amounts of detail, or stretch the time scale until you run into the heat death of the Universe. Unless, of course, the Slamorists are right.

Course of change

There are other technological changes that are likely to come along with, or before, the VR stuff. My personal suspicion is that you're not going to be able to hook up to a person's brain well enough to do "perfect" VR, nor are you going to be able to simulate a good enough virtual world to feed into those brain connections, until you have very strong AI to help you.

Given that, it's not clear that humans would have to continue to do any work in the real world if they didn't feel like it. The AIs could almost certainly do it better and more cheerfully. I doubt that, in a world where people had locked themselves into VR tanks for life, any of those people would still be practicing obstetrics... or mining anything, or maintaining their own computers. I imagine your VPs wouldn't have the experience of operating mechs, nor would they care.

And, of course, the AIs, whether built from scratch or made from humans augmented into unrecognizability, will be where the real action is. The AIs may (or may not) have opinions about what they'll choose to help you do.

My personal prediction is that you (or your descendants, not guessing how far down the line) will see things like VPs and uploads, and they'll coexist just fine with plain old physical people. How much they'll interact is an open question. You may get a situation where the vast majority go virtual, although I doubt it. I can't see how you'd get a situation where everybody went virtual.

You make quite a few assumptions (5.00 / 4) (#42)
by carbon on Sun Nov 17, 2002 at 06:18:24 AM EST

First off, a perfect simulation of the universe would be silly. For one thing, we don't need to simulate it all at once: at the moment, all the places that humanity has ever been or looked at, and in fact all the places we're ever likely to go or look at, probably encompass only an extremely small fraction of the entire universe.

Not to mention that you don't have to simulate physics realistically. This is actually the basis of my defense of The Matrix having a self-consistent and logical plot. Simulating the entire world of physics would take too much power, so they faked a lot of it. For example, telephones: why simulate the entire waveform of a telephone conversation when there's no reason to (i.e. when it's not under direct scientific observation), when you can just fake it at either end? Thus, the exploit involving entering and leaving through phones was due to an explicit efficiency loophole, and even after they discovered it, it was too late to fix it: bringing down and taking back up the whole Matrix would've cut off their primary power supply.

This also explains why Neo and friends were able to do things that were physically impossible: as another efficiency loophole, the Matrix could rely upon the human brain's evolved perceptions of how reality ought to work, how far you ought to be able to jump, and so on. So another loophole would be created simply by consciously changing those perceptions, something that pretty much everyone had serious difficulty with, Neo being the exception to the rule (sort of like the small percentage of people who are double-jointed.) The Agents, having privileged access to the system, could ignore such things entirely, but still would have to abide by those rules of physics applied by the engine itself rather than by implicit use of the human brain's processing.

Similarly, a virtual world would not have to simulate those parts of reality which do not have a direct effect upon the senses. No point in simulating each subatomic particle when you can treat a given visible object as one entity, rather than more expensively calculating the physics of many billions of entities just to produce the same output. This lessens the load that a system would have to take by dramatic amounts.
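[Ed. note: the "treat a distant object as one entity" idea above is essentially level-of-detail selection, and can be sketched in a few lines of Python. This is a hypothetical illustration only; the function name, thresholds, and detail levels are all invented for the example.]

```python
# Hypothetical sketch of the level-of-detail shortcut: simulate an object
# coarsely unless an observer is close enough for the detail to matter.

def simulation_detail(distance_to_observer: float) -> int:
    """Pick a detail level (1 = coarse ... 3 = fine) from observer distance."""
    if distance_to_observer > 1000.0:
        return 1  # treat the whole object as a single entity
    if distance_to_observer > 10.0:
        return 2  # simulate visible surface features only
    return 3      # full fine-grained physics, only under close scrutiny

# Most of a virtual world is far from any observer, so almost everything
# runs at the cheapest level.
levels = [simulation_detail(d) for d in (5000.0, 200.0, 2.0)]
print(levels)  # [1, 2, 3]
```

The thresholds here are arbitrary; the point is only that cost scales with what the senses can actually notice, not with the size of the world.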

Secondly, really interesting sims would do things that the universe normally doesn't let you do. If everyone lived in a VR world connected directly to their brains, why shouldn't you fly like Superman if you want to? A VR world with flexible limits would be more interesting than one which exactly followed the rules which exist in the real world.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
More on this (4.00 / 1) (#48)
by Hizonner on Sun Nov 17, 2002 at 11:42:07 AM EST

Yeah, yeah, I know all that. Nonetheless, you're asking for not one, but many, very complex simulations. I am not saying and have not said that you can't do them, but I also don't think it's obvious you can. But I'm not going to argue about it any more. Not even when baited with the Matrix.

The real purpose of this comment is to add a link showing how important those optimizations may be to how you live your life this very day. :-) See Robin Hanson's How to Live In a Simulation.

[ Parent ]

Very interesting, (none / 0) (#103)
by Fon2d2 on Tue Nov 19, 2002 at 05:37:51 PM EST

but that's not where the Matrix fails. The utterly blatant plot hole in the Matrix is that humans are used for fuel. Excuse me, but WTF? Could somebody please explain this?

[ Parent ]
Sure (none / 0) (#112)
by carbon on Wed Nov 20, 2002 at 02:33:20 AM EST

Waste heat. Basically, they're using humans as generators, only instead of converting kinetic energy to electricity, they're converting organics into heat into electricity by taking advantage of what humans already do anyways. I really don't know if this could be done efficiently in reality, but it could be done in theory, and that's good enough for a movie. Besides, they were desperate; the sun had been obscured, so the robots lost their previous primary power sources, and had to come up with something else quickly.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
Excuse me, (none / 0) (#116)
by Fon2d2 on Wed Nov 20, 2002 at 10:44:45 AM EST

but humans get that waste heat from the sun in the first place. The flow of energy generally follows one of these two paths:
Sun -> Plants -> Humans
Sun -> Plants -> Animals -> Humans
So, with the sun obscured, not only did the robots lose their primary power source, so did humans, which implies that the robots must have found a different power source for the humans. But why didn't they just take that energy and use it directly instead of feeding it to humans so they can take the waste energy from something as efficient as a biological organism? Imagine all the work it would require to transform that energy into the sugars, starches, proteins, fats, vitamins, and minerals required to meet the humans' dietary needs. That implies that either they are doing that mechanically somehow (very inefficient) or they have saved some varieties of plant and are secretly growing them in places never shown in the movie. But plants would imply that the robots are creating artificial light of some sort to grow them, but why would they do that when plants only store between 1 and 3.5% of the energy falling on them? Now if they weren't growing plants, then that would imply that they are also somehow artificially generating oxygen. Not just small amounts of oxygen either since for some reason the robots have also decided that the humans need to be awake and fully concious will they absorb their waste heat.

So, do you see what I'm getting at? None of this makes any sense. I want to know what the energy source is. This question was never answered in the movie. It is not sufficient to say "humans." I also want to know what's so special about it that the best way of extracting it is by letting humans burn it off and taking their waste energy.
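[Ed. note: the inefficiency argument above can be checked with a quick back-of-the-envelope calculation. Chained energy conversions multiply their efficiencies, so almost nothing survives the sun-to-plants-to-humans-to-electricity path. Only the 1-3.5% plant figure comes from the comment; the metabolism and heat-recovery fractions below are assumed purely for illustration.]

```python
# Back-of-the-envelope check: chaining conversion steps multiplies their
# efficiencies, so routing energy through plants and humans wastes nearly
# all of it compared to using the source directly.

def chain_efficiency(*steps: float) -> float:
    """Fraction of input energy surviving a chain of conversion steps."""
    total = 1.0
    for efficiency in steps:
        total *= efficiency
    return total

plant = 0.035     # upper end of the 1-3.5% storage figure cited above
human = 0.25      # assumed fraction of food energy emitted as usable heat
recovery = 0.30   # assumed heat-to-electricity conversion efficiency

print(chain_efficiency(plant, human, recovery))  # 0.002625, i.e. ~0.26% survives
```

Even with generous assumed numbers, over 99% of the original energy is lost before the machines see a single watt, which is the commenter's point.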

[ Parent ]
They did (none / 0) (#118)
by carbon on Wed Nov 20, 2002 at 02:44:01 PM EST

They had organic farms which produced material that they gave the humans intravenously; part of the segment where they explained things to Neo showed robotic workers tending them. They didn't explain how they grew them; presumably they couldn't just get the energy directly from them, or they would have. There was also a vague mention of fusion, I'm not sure how that tied in. The vagueness of the mention was explainable by the plot, in some ways: they weren't even entirely sure what year it was, let alone detailed things about how the robots were getting their energy.

Also, the humans weren't awake and conscious; they were effectively in a coma, with artificial input instead of just regular self-induced dreams. I have no idea why they had to do this; maybe their implementation of the Matrix thingie didn't draw that much power, but it extended their life span significantly, or something along those lines. I don't know of any real cases where a human has spent their whole life in a coma; I'd imagine that their dreams wouldn't be particularly interesting, possibly non-existent, since they wouldn't have any experiences to base them on. The Matrix itself probably wouldn't draw too much energy, for the same reason your average workstation doesn't draw significant amounts of power once you subtract the monitor and fans (maybe they had room-temperature superconductors?)

Furthermore, maybe the humans were just guessing they were being used as a power source, and they were being kept around and bred for entirely different reasons. Now that I think about it, I can't think of any case in which an agent confirmed this; usually they were too busy moving extremely quickly and firing guns with nifty slow-motion panning effects.

You have to at least admit the pseudo-science, while not necessarily believable, is a good step farther in that area than Star Trek or Star Wars. I love those series, but they're only believable if you can believe hyperdrives and warp drives, lightsabers, inertial dampeners, handily convenient deflector-shields-of-universal-applicability, and... darn, I'm ranting, time to end this comment.

The big thing is, the movie doesn't necessarily have to explain everything to be logical: the important part is self-consistency, not necessarily consistency with the real world. This is part of the reason why the movie didn't explain a lot of the background technology: the main characters weren't in a position to know so much about it, but also because the less they said about a technology, the less chance they had of doing something self-inconsistent.


Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish
[ Parent ]
alternate uses for humans: (none / 0) (#129)
by ethereal on Fri Nov 22, 2002 at 10:58:06 AM EST

I'm going to guess that you're right - the "power" idea might just be a red herring. Really, I would guess that the use of humans is sort of like a recent episode of Star Trek: Enterprise. Namely, steal a few cycles on each of those human brains to actually run the Matrix and the machine intelligences themselves. Human wetware still might be the most advanced computing device available; surely a reasonably intelligent machine could see that :)

Or maybe the machines just keep people around long enough to find out whether we have souls or not, and how to get them for themselves. Who knows what a machine thinks is important?

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Next time you watch the movie, *listen* (none / 0) (#158)
by iovpater on Mon May 26, 2003 at 12:24:09 AM EST

I want to know what the energy source is. This question was never answered in the movie. It is not sufficient to say humans.

Morpheus said he saw the dead liquefied to feed the living.

If you've watched any of the backstory for the Matrix presented by The Second Renaissance, you know that there was a great human-machine war before the introduction of the Matrix system. I assume that such a war would produce massive human casualties. So what do the machines do when they've won the war and now have tons upon tons of churned-up organic garbage (in the form of corpses) lying around, and no solar power?

Using something like this, I can see them quickly setting up a rudimentary Matrix system (TSR part II actually shows this in progress) and using dead humans as the fuel for the living. Assuming the population is still roughly 6 billion, it's not a stretch to say that enough people would be dying every day (through old age or what have you) to feed the living, assuming a reasonable "conversion efficiency" for the liquefaction process.

[ Parent ]
yes, indeed (none / 0) (#122)
by lordpixel on Wed Nov 20, 2002 at 06:30:44 PM EST

It's a massive, massive hole in the plot. As you've very correctly identified, humans are horribly inefficient. Animals are inefficient. Plants are inefficient. If the robots have enough power to grow food to keep the humans alive to take heat from their bodies to make power... they're going to lose what, 80-90% of the power? Why not just use the power directly?

Me, I see three possibilities:

1/ they just couldn't think of any reason "why" the machines do it, so they threw in some bullshit about "fusion" to try to cover the gaping hole. end of story.

2/ There's a whole other reason why the machines bother with the matrix, and it'll be revealed in the sequels. It is a much more logical reason, and we'll think "wow, that's smart" ;)

3/ They couldn't think of a reason at the time, but they've had 2 more years to think about it, so they're going to introduce it in the sequels and pretend they're clever (see 2/)

I am the cat who walks through walls, all places and all times are alike to me.
[ Parent ]

I agree with your last sentence. (4.00 / 1) (#46)
by Barly on Sun Nov 17, 2002 at 07:40:15 AM EST

I think that there would always be a group of people who would choose to live in reality.  Possibly small groups, or communes, that live off of the land.  In this future history, they would be humanity's salvation.

Off the top of my head, here is another story idea.  Further into the future we go.  Most of humanity has been eliminated by the VRs.  There is trouble in the virtual world, though.  The hardware that supports this reality is beginning to fail.  Most of the mechs have long since ceased to function, so even if it occurred to a VR to try to repair the system, they could not.  Enter a person from reality.  While exploring an old, abandoned city, they find (maybe in a museum) a primitive virtual reality hookup.  It consists of a helmet hooked into the VR world.  To the VRs, this person would look weird.  Perhaps fuzzy or cartoonish.  Eventually a VR realises that this "entity from reality" is their only hope of saving the virtual universe.  And so on.

[ Parent ]

That is basically the plot of "Queen City Jaz (none / 0) (#130)
by ethereal on Fri Nov 22, 2002 at 11:00:14 AM EST

Except the VR in that scenario was based on nanomachines changing "real" reality, rather than software altering virtual reality. But the outsider sitting down on the couch thing definitely led to saving the world. Or, in this case, Cincinnati.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Well (4.00 / 1) (#53)
by Mysidia on Sun Nov 17, 2002 at 01:28:33 PM EST

You're not doing anything close to simulating reality; it's a virtual world: there are lots of shortcuts you could take -- with most things, clearly, you can skimp on detail, ie: the couple of virtual cities wouldn't need to be simulated down to the microscopic level, your virtual plants might look kind of fuzzy, etc.

I think it's quite plausible that simulations of this level could be done: look what we're able to do with virtual reality right now... what will people be doing (If we still exist and haven't destroyed ourselves or used up all our energy resources) 1000 years from now?

Remember: the universe is a very large place, the human brain is a very small place by comparison. Your calculations need to convince the brain that it's in a real universe, not reality itself: make an analog computer similar to the human brain and 10x as complex, and you're well on your way to being able to do all the calculations you could need.



-Mysidia the insane @k5
[ Parent ]
I know of such a computer (4.50 / 2) (#77)
by dachshund on Mon Nov 18, 2002 at 08:30:26 AM EST

It's not obvious to me that a virtual reality that provides a superior experience to base reality is in fact feasible. It would take a lot of computer power to make a virtual world that good, maybe more than you could provide.

As others in this thread have noted, you don't need to simulate the actual universe. You just need to provide a reality that works well enough and looks real enough for people to be satisfied with it.

I have an excellent example of a small, lightweight computer that's capable of generating this sort of complex illusion. It weighs only a few pounds and runs off of less current than the computer/CRT setup most k5 users are currently using. And better yet, it exists today. Any idea what I'm talking about? Hint: there's one above your neck and below your hair.

If your own brain is capable of generating photo-realistic (or at least, apparently realistic) dreams and hallucinations that can satisfy your senses-- and mind you, it's doing that with only a tiny portion of its processing power-- then I don't see why a dedicated external biological computer (or a similarly complex electronic computer) wouldn't be able to do the same or better.

[ Parent ]

Good point (none / 0) (#80)
by Hizonner on Mon Nov 18, 2002 at 11:22:36 AM EST

It almost convinces me. My remaining uncertainty comes from my not being sure that dreams would seem all that realistic if you were conscious enough to evaluate them fully. In fact, I'm sure they wouldn't. But they can be pretty rich.

[ Parent ]
Lucid Dreaming (none / 0) (#152)
by Marr on Wed Mar 26, 2003 at 10:04:31 AM EST

Your intuition is, in fact, misleading you completely here. Dreaming whilst conscious enough to evaluate causes the experience to become more - not less - detailed and realistic. They do, after all, use the exact same sensory modalities for display purposes as your normal waking experience, but with the influence of memory and imagination dominating the external inputs rather than vice-versa.

The apparent unreality of dreams is caused by the state of consciousness in which most people usually experience them, combined with the brain's immediate tendency to remove them from short term memory on awakening. Indeed, it is your common intuition that prevents most people from dreaming lucidly in the first place, since dreamers reflexively assume that they are awake, all logical clues to the contrary, based on the detail and realism of the dream.

The Lucidity Institute is your one-stop shop for techniques, advice and gadgetry to help you experience this game first hand.

[ Parent ]
Good point, maybe (none / 0) (#102)
by Fon2d2 on Tue Nov 19, 2002 at 05:32:07 PM EST

but probably not. The processing power of the human brain is much, but it is most certainly finite. Are you ever aware of all things in your visual field simultaneously? Are you able to listen and interpret what a person is saying at the same time you are thinking your own thoughts? Can you distinguish all parts of a musical composition the moment you hear it? Although these are just examples, not true for everybody, the point is that there are gaps, huge gaps, of real world input that our brains fail to process. Notice I'm talking about processing the information, not generating it, which would presumably require many more computing resources. And besides, I think Hizonner's observation still holds: any simulation is always lower in quality in some respect than the object the simulation simulates. The only way the simulation could be exactly perfect is if it was the real thing, which would imply there is no simulation, hence no VR. Abstract talk about brains does not deceive me out of this conundrum.

[ Parent ]
Brain Processing Selectivity (none / 0) (#157)
by iovpater on Mon May 26, 2003 at 12:08:48 AM EST

"The processing power of the human brain is much, but it is most certainly finite. Are you ever aware of all things in your visual field simultaneously? Are you able to listen and interpret what a person is saying at the same time you are thinking your own thoughts? Can you distinguish all parts of a musical composition the moment you hear it? Although these are just examples, not true for everybody, the point is that there are gaps, huge gaps, of real world input that our brains fail to process." Wrong. The brain actually doesn't fail to process all this information at all -- rather, it leaves a great deal of the processing to the subconscious, as a result of this miraculous cognitive trick known as consciousness. All the information from every sensory input you have does get processed every single second of your existence. You're just unaware of a vast majority of it. See Tor Norretranders' The User Illusion.

[ Parent ]
Time representation..... (3.00 / 1) (#85)
by Elkor on Mon Nov 18, 2002 at 02:15:43 PM EST

Running simulations will always be slower than letting reality "simulate" itself,

So? Why do the two have to match? If the simulation takes 1.1 seconds to represent 1 second, is the person in the VR going to notice? Especially if they only spend time in VR.

Humans could easily decide that the pleasure of the simulation made up for the time loss. If you could spend 55 minutes soaring the rings of Jupiter naked, would that be worth an hour of your time?

Especially if the VR allowed you to eliminate/reduce the amount of sleep you needed. Then the time loss is being made up by the reduced sleep time you need.

Lastly, he doesn't supply any time context for the base-line of this story. Year 0 could be in 20 years, or 200. Look at the advances in computing just in the last 30 years.

Regards,
Elkor


"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
yes! (none / 0) (#31)
by rev ine on Sun Nov 17, 2002 at 12:34:52 AM EST

+1, word filter! Normally, word-filtered entries get automatically -1'd, but this is a good kind of wordiness. Like the Unabomber's manifesto. Or that Book of Mormon thing.

-1, bad Philip K. Dick/William Gibson knockoff. (3.16 / 6) (#37)
by la princesa on Sun Nov 17, 2002 at 01:38:11 AM EST

This was much more interesting and unsettling when those guys threw down some novels and stories about scenarios like this.  Even Mr. Stephenson offered a better handle on the VR premise.  Perversely (as someone who would like more fiction to be attempted on this site), a nonfictional extrapolation probably would have been more intriguing.  

___
<qpt> Disprove people? <qpt> What happens when you disprove them? Do they disappear in a flash of logic?
Did you actually read the story? (none / 0) (#39)
by ogre on Sun Nov 17, 2002 at 05:20:24 AM EST

Or did you read a few paragraphs, see it deals with virtual reality and decide it's a Dick/Gibson knockoff? I've never read anything that dealt with the religious/philosophical effects of multiple generations of an entire civilization living in VR.

So don't be so mysterious, give us the titles of the allegedly ripped-off work.

Everybody relax, I'm here.
[ Parent ]

You didn't deal with the religious/philosophical (none / 0) (#55)
by jjayson on Sun Nov 17, 2002 at 02:39:48 PM EST

You explain how the system works, how the takeover occurred, and what the repercussions are. You don't have any deep interaction with the philosophy that caused the downfall. You never once touch on the religious issues of the concept of soul, except maybe in two sentences.

I voted +1 not for the story, but hoping that the discussion would fill in those holes you missed. In retrospect I should not have voted until I saw that discussion occurring.
_______
Smile =)
* bt krav magas kitten THE FUCK UP
<bt> Eat Kung Jew, bitch.

[ Parent ]

true enough (none / 0) (#115)
by ogre on Wed Nov 20, 2002 at 03:53:41 AM EST

What I meant by "religion" was the rise of Slamoreanism, which could be viewed alternatively as the birth of a cult of "virtualism" or the birth of skeptical thinking about the religion of "realism". What is interesting to me is that if I were actually in that situation, I'm not sure I would pick the right side.

Everybody relax, I'm here.
[ Parent ]

Brilliant (3.33 / 3) (#44)
by rdskutter on Sun Nov 17, 2002 at 06:45:11 AM EST

I really like the Isaac Asimov style that you write in.

k5 really needs a fiction section.


Yanks are like ICBMs: Good to have on your side, but dangerous to have nearby. - OzJuggler
History will be kind to me for I intend to write it.

Use this as a setting (5.00 / 2) (#45)
by Barly on Sun Nov 17, 2002 at 07:19:19 AM EST

This is an interesting extrapolation.  

I think you should use this as a setting for a series of stories or a novel.  As it stands, it is little more than a fictional, historical document.  Add characters, action, and dialog and you could really have something here.  The only problem that I foresee is not allowing the storyline to turn into another "Matrix".

Functionalism (3.66 / 3) (#50)
by bugmaster on Sun Nov 17, 2002 at 12:08:22 PM EST

First of all, why is there still no Fiction section ? Why ? Whyyyy ?

Second of all, this is a great story, but I just can't resist picking apart the conclusion (Give me a T ! Give me a U ! Give me an R ! etc.)

So humanity finds its end at its own hands, leaving behind it a rich and detailed world of empty electric souls, spirits of the damned repeating ceaselessly the lives of the dead...
Let's stage a little thought experiment then: let's assume that our technology has advanced to the point where we can create a reasonable k5-bot. This bot will parse the stories on k5, drill down to the comments, and then post new comments or reply to existing ones. Let's say we got this bot to be advanced enough so that it's indistinguishable from your average troll like Bugmaster or whoever.

Then, let's say the existence of this bot was a closely held secret. Then, humans who browse k5 would occasionally reply to the bot's comments; the bot would, of course, counter-reply... whole discussion threads would form, indistinguishable from other threads.

At this point, for the purposes of online discussion, there is no difference between the k5-bot and an average human reader. Furthermore, even if the secret of the bot came out, it won't change things that much. Some people won't care, and some would jump on the bandwagon and pretend to be bots themselves, just for the fun of it. Of course, more bots will come online with time, further complicating the situation.

Thus, we now have a fully functional discussion site populated partially by humans, and partially by bots. Functionally, these entities are equivalent -- the humans can't tell the bots from other humans, and vice-versa. It is not too hard to make a jump from this situation to a full society of humans and bots. By our definition at the beginning of the comment, this society will be indistinguishable from a society made up solely of regular humans.

In this case, how is the human-bot society worse than the human-only society ? Furthermore, if there is a Bug-bot who produces identical output as compared to an actual Bugmaster, how is it worse than the Bugmaster ? Why shouldn't Bug-bot be allowed to vote ? There are actually several answers to this question:

  1. "Bots don't have souls, and humans do, regardless of their behavior." This is a religious objection which is based on faith, and thus I can't argue against it, since I don't share the faith.
  2. "The bots in the story killed their human prototypes. Murder is evil." This is true only if we determine that a You2 bot who behaves identically to a human is not as good as a human. The only way to argue for this point, AFAIK, is to invoke objection #1 to begin with.
  3. "Bots will never be able to act identically to humans, so your whole point is moot". However, I am not sure if there is a way to defend this point without using objection #1; furthermore, the original story did not use this assumption.
Of course, if the You2 bots in the story fail to perform basic maintenance, then indeed "time will take its toll and the last system will fail". This might actually happen, because the bots see the physical reality (which maintains all their systems) as just another virtual simulation, but a crappy one, with annoying access controls. Thus, they will simply ignore it, and eventually all their systems will fail. However, this situation would also happen with regular, virtualized humans -- the bots don't have a monopoly on self-destruction in this manner.
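As a toy illustration of the thought experiment above (everything here is invented -- k5 has no such API, and the thread structure and reply generator are stubs), the parse-and-reply loop of such a bot might look like:

```python
# Toy sketch of the hypothetical k5-bot: walk the threads, find comments
# nobody has answered yet, and post a generated counter-reply. The text
# generator is a canned stub -- the point is the loop, not the NLP.

def generate_reply(comment_text):
    # Stand-in for whatever model would make the bot indistinguishable
    # from a human poster; here it just echoes a rote disagreement.
    return f"I disagree with the claim that {comment_text.lower().rstrip('.')}."

def bot_pass(threads):
    """One pass over the site: reply to every leaf comment, return new posts."""
    new_posts = []
    for thread in threads:
        for comment in thread["comments"]:
            if not comment.get("replies"):        # leaf: no counter-reply yet
                reply = generate_reply(comment["text"])
                comment["replies"] = [reply]
                new_posts.append(reply)
    return new_posts
```

Run the pass twice and the second pass posts nothing, since every leaf already has a reply -- which is exactly the "indistinguishable thread" steady state described above.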
>|<*:=
Testing the bots.... (none / 0) (#86)
by Elkor on Mon Nov 18, 2002 at 02:33:33 PM EST

I would think that the "best" test to determine whether an AI has a soul or not would be to run the You2 simulation on someone and see if they (the person copied and the Sim) could figure out who was the human and who the AI.

If each believed they were real, and believed the other the sim, then it is kinda moot.

But if the real person thought they were the Sim, or thought they were both real, then that would be a telling indicator.

Regards,
Elkor


"I won't tell you how to love God if you don't tell me how to love myself."
-Margo Eve
[ Parent ]
I see a money making opportunity (none / 0) (#97)
by Mr Dyqik on Tue Nov 19, 2002 at 09:30:04 AM EST

"Take this self administered Turing Test.  If you pass, you can join Real Life"

Sort of like the Mensa deal.  Real people could meet up and feel smug about how real they were.

[ Parent ]

the soul is not a religious idea (none / 0) (#107)
by ogre on Tue Nov 19, 2002 at 09:53:26 PM EST

Although some religions have a lot to say about the soul, it is not a primarily religious concept, it is empirical. We know the soul exists because we are aware of it. Dropping religious notions such as immortality or reincarnation, the soul is simply the mind, an objectification of our mental lives. The existence of a mental life is so obvious that in my view it can be denied only after a thorough course of religious indoctrination of the kind you get in a modern secular education.

Souls have emotions, desires, thoughts, beliefs. Machines have states. And actually, in the absence of souls, machines don't even have that. The very concept of a "machine", an artifact that serves some purpose, that has some functional set of behaviors, is dependent on a soul to view the machine that way. In a lifeless world there are no purposes, no tragedies, only bare existence.

Everybody relax, I'm here.
[ Parent ]

Not quite. (none / 0) (#110)
by bjlhct on Tue Nov 19, 2002 at 11:43:36 PM EST

First off, you're talking about consciousness. The soul is a religious idea. Consciousness is not. Anyway....

Souls have emotions, desires, thoughts, beliefs, you say. Well, I hope you're in PETA, because even whales fall in love. You can tell it sorta with brain scans, you can tell it sorta from behavior. I'm pretty sure all mammals have hypothalamuses. (hypothalami?)

Only souls have true concepts of things, you say. But how is a "soul"'s concept of something any different from a machine's? Besides, people have a functional set of behaviors too. Some of those involve changing some behaviors. And the ultimate purpose is to reproduce.

*

kur0(or)5hin - drowning your sorrows in intellectualism
[ Parent ]

souls, consciousness, animals, and AI (none / 0) (#113)
by ogre on Wed Nov 20, 2002 at 03:23:07 AM EST

I was trying to use the word "soul" in the same sense as the comment I was replying to. I could have been pedantic and switched to a more appropriate term to discuss the subject but I didn't think it was necessary.
Souls have emotions, desires, thoughts, beliefs, you say. Well, I hope you're in PETA, because even whales fall in love. You can tell it sorta with brain scans, you can tell it sorta from behavior.
First, I didn't say that those things constitute the soul. Second, the capacity for "falling in love" doesn't imply all of those things. Third, the concept of "falling in love" is so ephemeral and ill-defined that it boggles the mind anyone would claim to identify the state from brain scans or non-verbal behavior. And fourth, if there actually could be evidence that animals "fall in love", this would not constitute evidence that animals have human-like souls, it would constitute evidence that "falling in love" does not require a human-like soul.
But how is a "soul"-s concept of something any different from a machine's?
A machine doesn't have any kind of concepts at all. Some machines have complex states that a person can view as representing something outside of the machine, but this is no different for computers than for any other kind of symbol. The screen you are looking at contains representations in the same sense, but surely you don't think that the screen has any concept of what is written on it, or that the writing itself has any concept of what it means. In the same way, neither a computer nor the software in it has any concept of what it represents.
Besides, people have a functional set of behaviors too.
Yes, they do. This doesn't affect my point.

Everybody relax, I'm here.
[ Parent ]

identical behavior is not identity. (none / 0) (#134)
by ethereal on Fri Nov 22, 2002 at 01:26:33 PM EST

Even if the duplicate behaves the exact same way, and no one can tell the difference, it still is a different being, created through the death of the original being. That is probably unacceptable to the original being. There is an interruption of the thread of being, not to mention the thread of consciousness. Consciousness isn't that big a deal since most of us seem to have annoying hours-long gaps anyway, though :)

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

Interesting future, but unlikely, I think :) (5.00 / 1) (#52)
by Mysidia on Sun Nov 17, 2002 at 01:02:55 PM EST

Why should people suffocate or starve after being disconnected from the virtual world? If the society has come to the point of putting most people in closed capsules, presumably the systems designed to feed and provide them with air would still be functioning, and at the very least their disconnection would alert the monitoring systems that exist outside the virtual world, that would report back such things as deaths of citizens, resource shortages, system failures, etc, to the people, and especially, the human caretakers.

Actually, the people couldn't be in completely sealed environments; there had to be some form of access, ie: so pregnant mothers could be cared for by the mechs, etc.

Wouldn't they notice the physical death of babies and their mothers? Clearly this er, Shoke, person, found some way of blocking the corresponding communications between the monitoring system and the human caretakers of the simulators and the simulations the people were in, perhaps by tampering with the simulation programs for their displays in the virtual world.

Although the user obviously has some forms of meta control (ie: pause, rewind, frame skip, alter simulation, disconnect etc), it's not clear that it would make sense for the virtual world simulators themselves to have the ability to kill their owner, or for that matter, alter the software: wasn't the virtual world supposed to be safer than the real world? By carefully controlling what users can do, preventing suicide from within the virtual system, and making "disconnect" a request to the outside system to remove them safely, this sort of situation could be averted

If they came to the point of the owner no longer being able to leave the simulation and return safely, then wouldn't the meta disconnect command be disabled so the user didn't accidentally activate or get forced to activate it by some criminal in the virtual society ?

Clearly some of the simulations would more closely resemble reality in that people couldn't just pause/rewind/fast forward/edit them however they like, ie: defendants could clearly not pause or alter their court proceedings, or jump out of them and into another simulation (to come up with the desired verdict), mainly because the simulations involve other people (ie: different access permissions, and different systems in place designed to prevent most users from tampering with the reality)

Not all simulations could be directly changed by all people in the virtual system: reality could simply be explained to that person as the simulation that defines the basic principles for all simulations and the ability to exist in or alter simulations in general, the root of the permissions system, the simulation that nobody except the creator of the universe itself was ever able to edit directly.

Hmm... at length, when people entirely moved to the virtual existence, it's not clear what illegal things people could do: many illegal activities could be prevented by the system monitors -- ie, there'd be nothing to steal, and the "virtual bank" would be nicely protected through cryptographic and source-identification methods: there's really no way to steal from a virtual citizen whom you can't inflict pain upon, and who can easily jump out of the simulation, and perhaps add you to a "black list" on the system, preventing you from entering their simulations

How to punish criminals is obvious: mark them as "criminal" in the outside system, preventing them from entering any simulations or using meta commands except designated ones such as Jailcell(their id#).sim, and add mech units to their virtual capsule thingie programmed to refuse any request to escape the simulation.
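A minimal sketch of the scheme above, assuming per-citizen blacklists, owner-only editing, and a restricted meta-command set for flagged criminals (all class and command names here are invented for illustration):

```python
# Hypothetical access-control sketch for the virtual society described above:
# owners blacklist visitors from their simulations, criminals are confined to
# a designated jail-cell sim, and "disconnect" is only ever a request to the
# outside system, never a raw kill switch.

class Citizen:
    def __init__(self, cid):
        self.cid = cid
        self.criminal = False
        self.blacklist = set()   # citizen ids barred from this person's sims

class Simulation:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner       # only the owner may edit this sim

    def may_enter(self, visitor):
        # A criminal may only enter their designated jail-cell sim.
        if visitor.criminal:
            return self.name == f"Jailcell({visitor.cid}).sim"
        # Owners can bar specific citizens from their simulations.
        return visitor.cid not in self.owner.blacklist

def allowed_meta_commands(citizen):
    # Criminals lose all meta control except entering their cell.
    if citizen.criminal:
        return {"enter_jailcell"}
    return {"pause", "rewind", "frame_skip",
            "alter_simulation", "request_disconnect"}
```

Note the design choice this encodes: safety properties live outside the simulations, in the permission layer, so nothing a user does *inside* a sim can escalate into harming them -- which is exactly the hole Shoke would have had to find a way around.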



-Mysidia the insane @k5
Let me suspend disbelief and ask this question (4.00 / 1) (#56)
by mirleid on Sun Nov 17, 2002 at 02:50:54 PM EST

...why would the VPs of people who were against VPs suddenly start supporting the "cause"?



Chickens don't give milk
Because they were programmed to (none / 0) (#59)
by Mysidia on Sun Nov 17, 2002 at 03:23:54 PM EST

In a way not unlike borg assimilation, their programming clearly takes precedence over the function of emulating their original personality/ views/whatever.



-Mysidia the insane @k5
[ Parent ]
Re: Because they were programmed to (3.00 / 1) (#70)
by bugmaster on Mon Nov 18, 2002 at 12:24:45 AM EST

But I thought the whole point of the You2 bots was to emulate their human originals perfectly ? If they were re-programmed in some way then that wouldn't work...
>|<*:=
[ Parent ]
Not necessarily (none / 0) (#95)
by Mysidia on Tue Nov 19, 2002 at 12:48:48 AM EST

They only had to emulate the humans well enough to fool other humans: there was no principle that the things had to be perfect replicas of the humans -- that may have been quite impossible for the programmers to accomplish. So, since they weren't perfect anyhow, they might as well make sure the people who were replaced with VPs supported the virtualization movement. :)



-Mysidia the insane @k5
[ Parent ]
Maybe they were a good simulation (none / 0) (#133)
by ethereal on Fri Nov 22, 2002 at 01:22:43 PM EST

It could be the case that, like dying and going to a supposed Heaven, once you've done it you think it was a great idea. Even an ardent opponent of virtualization, once actually virtualized, could change their mind and say "hey, that wasn't half bad, all my friends should do this". So this assumes a simulation powerful enough to actually simulate someone changing their mind based on experience; in essence the virtual personality can now grow with experience just like a real person.

I think what this society needs is some sort of legal guarantee against discrimination or persecution based on "virtuality". But once the mob really starts going, all the laws in the world really won't save you. The answer is apparently to preserve the right to defend yourself as an individual, either by not becoming part of such an interdependent society, or by maintaining armed mechs to defend you if need be.

--

Stand up for your right to not believe: Americans United for Separation of Church and State
[ Parent ]

luddites to the rescue (none / 0) (#57)
by turmeric on Sun Nov 17, 2002 at 02:51:09 PM EST

the human race needs the following things for survival:

air, water, food, other people.

thus no matter how screwed up the system gets, there is always a fallback failsafe.

Don't forget... (none / 0) (#73)
by Emissary on Mon Nov 18, 2002 at 02:39:41 AM EST

A constant temperature around 70° Fahrenheit (variable depending on the existence of shelter or clothing), gravity, solid ground (although an aqueous civilization of humans is an interesting idea), and an average lifespan guaranteed through puberty+18 months.

"Be instead like Gamera -- mighty, a friend to children, and always, always screaming." - eSolutions
[ Parent ]
Reading tip (for the interested): (5.00 / 3) (#60)
by henrik on Sun Nov 17, 2002 at 03:27:57 PM EST

A book dealing with a similar "future" is Diaspora, by Greg Egan.

(Especially the first half of the book, which i find much better than the second part)

Akademiska Intresseklubben antecknar!

Also: Schild's Ladder (none / 0) (#71)
by mysta on Mon Nov 18, 2002 at 12:50:17 AM EST

He has a new book out now; Schild's Ladder could be seen as a companion novel to Diaspora. It's another far-future novel where humans are software constructs living on computers in deep space. A few of them are playing with the physical rules (read: cellular automata) that run the universe. They unwittingly create a 'novo-vacuum' which expands at half the speed of light, destroying anything in its way.

Two factions, with opposing attitudes, arise and must decide what to do with the novo-vacuum. One group want to study it because its physics is unlike anything they've seen before. The other group want to get rid of it before any more planets get devoured by it.

Greg Egan populates his universe with a wide range of characters, all virtual, and examines a number of human traits by distilling them down to their essences. This is not to say his characters are one-dimensional, far from it. He gets at a human essence by showing how arbitrary a lot of what we consider human really is: skin colour, gender, physical size, and homeland for instance. Humans in the far-future can take on any appearance and exist in any virtual environment they want to so what really counts is what and who a person knows, their history and their politics.

As an aside, check out Greg Egan's website. He's a programmer as well as a sci-fi author and he really knows his maths and physics. This makes for some great applets.
---
Are we not drawn onward, we few, drawn onward to new era?
[ Parent ]

-1 because... (none / 0) (#68)
by Lord Snott on Sun Nov 17, 2002 at 11:33:09 PM EST

...for the scenario to take place requires a couple of (I believe false) assumptions.

1) What makes you think no human contact is healthier? From what I understand, human contact promotes health (as does intimacy). I don't think the human race is about to give up the touch of a loved one. Even geeks partaking in online romances eventually want to meet each other.

2) If you spend your whole life immobile, you can't remain healthy. Muscle atrophy would affect more than just arms and legs. The heart, diaphragm, swallow reflex, and tongue would all atrophy. You'd choke to death before your weakened heart gave out.

The story was well written, but I give (even the dumbest) humans more credit than this.

~~~~~~~~~~~~~~~~~~~~~~~~
This sig in violation of U.S. trademark
registration number 2,347,676.
Bummer :-(

the story doesn't make either assumption (none / 0) (#76)
by ogre on Mon Nov 18, 2002 at 04:05:55 AM EST

The premise was not that human contact is unimportant, but that VR provides a good enough simulation of human contact that people are willing to replace the real thing with the simulation. And there was no premise that life-in-VR was the best choice they could have made, but that it was a compelling choice for reasons of economics and convenience.

As for 2, the technological feasibility of the VR couch was another premise of the story. You bring up an interesting difficulty, but nothing that couldn't conceivably be overcome. The story is science fiction after all; it's supposed to postulate technologies we don't have today.

Everybody relax, I'm here.
[ Parent ]

Hmmm... I guess... (none / 0) (#92)
by Lord Snott on Mon Nov 18, 2002 at 07:20:05 PM EST

that's true. It is just science fiction. I just think it's kind of meaningless.

You know, "What would happen if the law of gravity stopped for thirty seconds...", but it doesn't matter, it could never happen anyway, so the consequences are irrelevant. And even if I am proved wrong, and the law of gravity IS unstable, we've got more problems than just buildings falling down.

Was still a decent story, I just felt... err... un-enlightened after reading it.
~~~~~~~~~~~~~~~~~~~~~~~~
This sig in violation of U.S. trademark
registration number 2,347,676.
Bummer :-(

[ Parent ]

question of likelihood (none / 0) (#121)
by lordpixel on Wed Nov 20, 2002 at 06:10:35 PM EST

I don't know why you say it's meaningless. Which of these three is more likely:

* gravity will just stop for 30 seconds someday

* we'll invent a machine that could massage the heart and other muscles of a "bed ridden" person enough to keep them working

* we'll invent a way for a machine to stimulate nerves well enough that the person in the chair experiences the sensations of human contact.

To my mind the first one is far-fetched; the second two sound possible. Not desirable, but possible.

Which is not to say there isn't some technological barrier to this happening, or that we won't have more sense than to go down this road, but I don't think the objections you raise are sufficient to render this meaningless.

I am the cat who walks through walls, all places and all times are alike to me.
[ Parent ]

Reminds me of this (none / 0) (#78)
by EMHMark3 on Mon Nov 18, 2002 at 09:15:21 AM EST

'The Machine is stopping, I know it, I know the signs.'

T H E   M A C H I N E   S T O P S

On atavism the Machine can have no mercy. (none / 0) (#155)
by RandomAction on Sat Apr 19, 2003 at 03:49:09 PM EST

.. Quote from the story. Good link, sad tale.

[ Parent ]
Interesting and entertaining (none / 0) (#82)
by krek on Mon Nov 18, 2002 at 01:39:38 PM EST

Sounds like the backstory to the Matrix or something.

I agree (none / 0) (#160)
by csole on Fri Jun 13, 2003 at 01:54:15 PM EST

The story does remind me of the Matrix, although the Matrix was just a rip-off of another sci-fi...
------- Children are inheritable: If your parents didn't have children, neither will you.
[ Parent ]
Thanks... (none / 0) (#90)
by kimpton on Mon Nov 18, 2002 at 03:41:06 PM EST

Nice story...put it on my PDA and read it on the tube on the way home.

Unless I missed it, does it make any mention of why people would keep their biological bodies, and not move to different hardware? The description of bodies in tanks can only make people think of the Matrix. I guess without the physical bodies you lose the angle on human death, though...

some comments by the author (5.00 / 2) (#94)
by ogre on Mon Nov 18, 2002 at 10:50:01 PM EST

I'm glad the story has generated such interesting discussion, but one thing bothers me: apparently I didn't do a good enough job of breaking people loose from their preconceptions about AI and similar things. I guess I was relying too much on the authoritative tone of the narrator.

This was not supposed to be a story about artificial intelligence, or about personality transfer, or even, really, about virtual reality. It was supposed to be about human knowledge, decision making, politics, religion, and metaphysics. Suppose (as the old thought experiment goes) that you are really a brain in a vat in some mad scientist's laboratory and that all of your experiences have been generated for you by some virtual reality engine. How could you know that your experiences were not real? If you also had some sort of access to the real world (by a mech, say) how could you distinguish that from the virtual reality? And looking at the situation from the outside, how would you characterize the difference between the virtual and real worlds? So I took this thought experiment, applied it to all of humanity and asked, what would this bring? Wrap this with my rather cynical nature, and you get Electric Souls.

So to try to rectify my failures, I at least want to point out that there is no license in the story for the idea that the virtual personalities were "persons" in any sense of the word. It never occurred to the narrator that they were anything other than simulation programs. A virtual personality is no more of a real personality than a simulation of a sewer system is a real sewer system.

By the same token, there was no actual issue of people moving their personalities into the machine. This is true even if you hold the most radical views of strong AI. Suppose someone studied your life and personality so thoroughly that they could imitate you flawlessly. Then they had plastic surgery so that they looked exactly like you. Now can you move your personality into this other body just by killing yourself and letting that person move into your life? Of course not. And it doesn't matter that everyone believes it's still you. You are dead, not copied.

No one in the story debated the issues of strong AI or the movement of personalities into a machine. The issue they struggled with was not whether a computer program could be intelligent, but whether real human beings were anything other than a computer program. And if they weren't, then any functionally equivalent program was just as good.

Everybody relax, I'm here.

simulation argument (none / 0) (#149)
by kraft on Wed Jan 29, 2003 at 09:30:47 AM EST

Suppose (as the old thought experiment goes) that you are really a brain in a vat in some mad scientist's laboratory and that all of your experiences have been generated for you by some virtual reality engine. How could you know that your experiences were not real? If you also had some sort of access to the real world (by a mech, say) how could you distinguish that from the virtual reality? And looking at the situation from the outside, how would you characterize the difference between the virtual and real worlds?

I just thought I would throw this in here: Nick Bostrom, who holds a PhD in philosophy, has written a paper titled "The Simulation Argument" which deals with these exact questions. Check it out.

--
a signature has the format "dash-dash-newline-text". dammit.
[ Parent ]
Who then is the.. (none / 0) (#154)
by RandomAction on Sat Apr 19, 2003 at 02:31:41 PM EST

narrator: a VP, or perhaps an evolved version? Or a human, perhaps a reactionary against VR? So many questions.

[ Parent ]
Shadowrun? (5.00 / 1) (#96)
by NexusVoid on Tue Nov 19, 2002 at 03:33:38 AM EST

A good amount of this is depicted in the RPG Shadowrun, formerly produced by FASA, and now by Wizkids/FanPro.



My thoughts exactly! (none / 0) (#125)
by Ricochet Rita on Thu Nov 21, 2002 at 09:58:01 AM EST

Shameless plug: see link.

R

R

FABRICATUS DIEM, PVNC!
[ Parent ]

I don't know if I should admit this (none / 0) (#128)
by ogre on Fri Nov 22, 2002 at 01:42:44 AM EST

... but I'm a big fan of Shadowrun. I even took a bit of the terminology from that game. However, much as I like the game and the books, their take on VR is silly beyond belief.

In science fiction/fantasy, you have two kinds of fanciful elements: technology and magic. Magic is anything you want it to be, whatever adds to the drama. Technology, on the other hand, has certain extra constraints based on what the reader is willing to believe is possible. In Shadowrun, computers are clearly magical rather than technological, since they allow completely fantastical events with no logical explanation. In fact I've sometimes thought of writing a Shadowrun book that would explain all the impossible features of decking when someone discovers cyberspace is really a magical plane and deckers are a kind of mage.

Everybody relax, I'm here.
[ Parent ]

Murder? (none / 0) (#98)
by Silent Chris on Tue Nov 19, 2002 at 11:01:54 AM EST

Maybe I'm missing the reference, but why is the first malpractice considered "murder"?  It's not murderous to deliver a baby incorrectly today.  Is it "murder" because the doctor intentionally created a virtual replacement?  (The intention doesn't seem obvious).

Premeditation = Murder (none / 0) (#100)
by MactireDearg on Tue Nov 19, 2002 at 12:50:13 PM EST

It is reasonable to assume that, as a skilled professional practitioner of medicine, the doctor knew that changing the way he handled the 'birth sim' would likely result in the death of the child. His knowledge of this likelihood is proven by the act of preparing the VP baby.

Legally, the fact that he was premeditated enough in his experiment to create the VP baby beforehand would qualify the act as murder.

His intentional gross negligence in the death of the baby would qualify for at least Murder 2 in any US jurisdiction.

If you must make mistakes, it is more to your credit to make a new one each time. - Unknown
[ Parent ]

Before I read the rest of this... (5.00 / 1) (#99)
by Fon2d2 on Tue Nov 19, 2002 at 12:48:09 PM EST

It does NOT seem reasonable to me to guess that "these neural connections will be used to present a detailed virtual reality as good as (or better than) the real thing."

Better than the real thing (5.00 / 1) (#117)
by protogeek on Wed Nov 20, 2002 at 01:13:45 PM EST

"a detailed virtual reality as good as (or better than) the real thing."

Depends on what you mean by better. This virtual reality might not be as crystal-clear and surround-sound enabled as reality (is it Memorex?), but as long as the sensory reproduction was good enough for the user to become immersed in the "story", the advantages of VR might outweigh the disadvantages. How many people would tolerate slightly fuzzy images in exchange for being able to fly? Almost-but-not-quite-perfect sound in exchange for going somewhere you could never go in your own physical body? A 1-3% loss of tactile sensation in exchange for sex with an unobtainable person?

If the story has a major flaw, it is that the author assumes all readers are going to take such possibilities into consideration. The tale might be strengthened by more detail on either (a) the kinds of things that can be done in VR but not in reality, and how many people would see that as "better", or (b) the advances in computing power that would make perfect VR possible.



[ Parent ]
all good points (5.00 / 1) (#119)
by ogre on Wed Nov 20, 2002 at 03:53:01 PM EST

Although not something I really considered. I think there are two possibilities for fully-realistic virtual reality. The first is that the exponential improvement in computing power will continue until every human can have billions of CPUs and terabytes of storage dedicated to his user interface.

The second, and more interesting possibility is that with a direct neural interface we may be able to exploit the brain's own 3d rendering ability to greatly reduce the amount of computing power needed.

Everybody relax, I'm here.
[ Parent ]

Reality and speed (none / 0) (#123)
by anyonymous [35789] on Wed Nov 20, 2002 at 07:06:41 PM EST

Computers today can make mathematical calculations faster than any human. That is a fact. Our perception of reality is only as detailed as our sensory organs. Is your own imagination able to simulate anything your own senses have not yet experienced? If we can build machines to think faster than us, then why not machines that can take in more sensory data than us? So in theory, we can input that data into our human brains and voilà, we are having experiences more vivid than what was previously thought possible. If one was presented with such a thing, would one want to escape it or embrace it?

[ Parent ]
The Illusion of Vision (none / 0) (#148)
by exZERO on Fri Nov 29, 2002 at 02:30:11 PM EST

"Our perception of reality is only as detailed as our sensory organs. Is your own imagination able to simulate anything your own senses have not yet experienced?"

Your perception of reality is only as detailed as your IMAGINATION.  You never truly "see" or "hear" or even "sense" anything properly.  Every time you "sense" something, you are actually just mentally receiving what some other part of your brain has received, guessed at, and passed along.

It's like an old adage by Robert Anton Wilson, who was talking about how he was walking through the Village(Greenwich) and saw a window to a dry cleaners.  He "perceived" it as if it had written on it "Half-Gay Cleaners".  He didn't think twice about it at first, being where he was, but at second glance, he "perceived" it as saying "Half-Day Cleaners".

His brain guessed at what he saw before passing it along to the conscious section of his mind, so he wasn't really "seeing" what was actually there.  In reality, you never are, and your imagination of any place, whether you've been there or not, isn't what is truly there.

The only way for this concept to really happen is if we teach computers guesswork, and the ability to assume.  We already have light forms of this, like auto-complete in most browsers, which shows you what the computer automatically "assumes" you are typing and completes it for you.

<<Zero_out>>
[ Parent ]

Short Review for a Short Story (none / 0) (#101)
by exZERO on Tue Nov 19, 2002 at 02:47:46 PM EST

Disturbingly possible, this is the first actual piece of fiction I've read on K5. It's very Vonnegut-esque, and it genuinely scared me because I could see all of this actually occurring, especially since I am a student currently trying to break into the video game industry, which is essentially based around attempting to make this cautionary tale a reality, however sad that may be. I wouldn't mind seeing this sold in published form, obviously in some kind of collection of short fiction. It also seems like something that could make a great audiotape; while reading it I kept hearing John Hurt's ("Alien") voice.
<<Zero_out>>
Who needs VR... (none / 0) (#109)
by bjlhct on Tue Nov 19, 2002 at 11:26:55 PM EST

...when you've got psychotropic drugs?
*

kur0(or)5hin - drowning your sorrows in intellectualism

quantum (none / 0) (#111)
by auraslip on Wed Nov 20, 2002 at 01:27:56 AM EST

computers, theoretically, can supply any amount of processing power you need.

SIMULATE THAT.
124

Please explain. [nt] (none / 0) (#120)
by xriso on Wed Nov 20, 2002 at 05:30:55 PM EST


--
*** Quits: xriso:#kuro5hin (Forever)
[ Parent ]
Horny Humans (none / 0) (#124)
by anyonymous [35789] on Wed Nov 20, 2002 at 07:14:32 PM EST

This is a wonderful piece of fiction and you should brush it up a bit and publish it. In response to everyone thinking it's a creepy or disturbing possibility, consider this: billions of people would die, but a few would not be completely trapped by VR. They would be in the real world. That small number in the solar system, perhaps the universe, would get horny, have kids, and our race would survive and renew itself. Humanity would continue. I take comfort in my belief in this.

I'm not convinced (3.00 / 1) (#126)
by ae7flux on Thu Nov 21, 2002 at 01:07:45 PM EST

I'm not convinced. OK, we're given a psychologically persuasive explanation for why a single pathological individual might want to rid himself of reality ('God' knows there have been enough such in history without VR), but why would the population in general go along with it? The answer we're given is no more than that old idea that images addle the brain. In other words, the age-old puritan fear that the image/representation will blind people to The Truth: the fear that, for example, led the C16 English protestants to whitewash church walls, or their descendants a century later to ban acting as lying.

But even if people's brains were sufficiently addled that the concept of reality became obscure (and I don't buy this for one minute), why would they actually want to destroy it? Avoid it maybe, but destroy it? As far as I can see, the only reason the story offers is some appeal to fanaticism (and a puritan one at that, ironically: real people are tainted by our biological nature). But fanaticism is notoriously absent among the comfortable.

50 years ago TV was the representational technology that was going to destroy us. 150 years ago it was the novel (though judging from the fundamentalists' reaction to young master Potter, I'm not sure much has changed). At the moment it's video games. Tomorrow (now we know) it'll be VR. As I said, I'm not convinced.

I'm convinced (none / 0) (#159)
by csole on Fri Jun 13, 2003 at 01:51:02 PM EST

I guess that one man's belief could be spread to billions. Check out Jesus (or Mohammed; at least we know he truly existed). Sure, he did not have access to technology that would spread his word faster, but his message still reached millions within a few hundred years.

Now, as the story goes, the world became so immoral that a woman would never bother seeing her child for real. Well, if someone prefers a virtual life to a real one, and this belief is spread, then the result could be just what happened in this story.

Of course, this could never happen :) I'm aware that this is just sci-fi
------- Children are inheritable: If your parents didn't have children, neither will you.
[ Parent ]

damn you... (none / 0) (#151)
by majik on Thu Jan 30, 2003 at 10:22:20 PM EST

I'd started writing something a few days ago... with a little work I might actually be able to pick up where you left off:

At first I didn't want to believe it. Perhaps they'd forgotten to mention a scheduled routine maintenance. Or maybe there was some temporary viral attack, busily being fought off by the system AVS. But no. It had been too long for either. Somehow 10 levels of redundant connections had all failed. The network link was dead.

So used to the data stream washing over me, my senses had now all been ripped away from me. I searched in fear for input of any type. My eyes, ears and voice, so often neglected in the past, were suddenly brought back online. The subconscious block, having been shocked into activation, realized our primary data input conduit was down and had hacked the old input network back into my data receptors. I was now more human than I'd been since birth. And I was very afraid.

The old people used to drift stories (as often they were prone to do) about life before the network. With the delta of time, more and more of those stories were moved to the archives of the dead. Data only stays around the active net as long as the author is alive. I had links to a few of the ones I liked the best. And had even cached a few from people I had met.

I remember chatting with some of the seniors during a learning program project about our history. The old ones were so inept at manipulating their presence in the environ. It was very evident who was an old timer and who had been connected to the network from birth. So much of what they told us seemed like some fairy tale or a bad dream. Stories of cars, and disease, of famine, and war. A life so far removed from the one I knew, I thought it had to be the imaginations of a few who'd gone adrift. Wondering what it would be like to live life without the freedom of the network, to be tied to some faulty local sensory hardware. I was pulled back into the present with the gut wrenching realization... I was about to find out.

The opinions, thoughts, questions, feelings, dreams, and lives of hundreds of my friends, many of whom I'd known from birth, but never seen offnet, all gone. I'd never felt like this before. Alone. Afraid. Safe in my bunker haven,
Funky fried chickens - they're what's for dinner

From whose or what perspective.. (none / 0) (#153)
by RandomAction on Sat Apr 19, 2003 at 01:59:44 PM EST

..is it written? A VP? So much further to go with this universe.

I like it (none / 0) (#161)
by csole on Fri Jun 13, 2003 at 01:56:20 PM EST

I like the story very much! It scared me for real, although it would be more interesting with actual characters, dialogue, etc. You could write a book about this :)
------- Children are inheritable: If your parents didn't have children, neither will you.
Electric Souls | 161 comments (141 topical, 20 editorial, 0 hidden)