Kuro5hin.org: technology and culture, from the trenches

Search Engine Symbiosis and the Quiet Cybernetic Revolution

By jacobian in Technology
Thu Aug 03, 2006 at 03:07:23 AM EST
Tags: cybernetics, intelligence, internet, cognitive science, linguistics (all tags)

"The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

— “Man-Computer Symbiosis,” J.C.R. Licklider, March 1960.


Imagine that Jill from 2006 and Jack from, say, 1990 are able to communicate with each other using an instant messenger program. (Think of it as The Lake House meets the Turing Test.) They discuss a wide range of topics, and Jack is stunned by Jill's intelligence – or, at least, her breadth of knowledge. She is familiar with every cultural reference Jack throws at her, no matter how obscure; she knows the plot of every novel, movie, and Cheers episode he mentions; she is aware of the critical and commercial reception of every record album in existence; she seems to have intimate knowledge of events that took place ten years before she was born. True, her responses are occasionally a bit sluggish, but no one could find these facts in an encyclopedia volume quickly enough to hold a conversation in real time. Most of these facts aren't even in any encyclopedia.

So Jack asks Jill how she got so smart. He doesn't fully grasp her answer – what's a google? – but from what he understands, she's interfacing at extremely high speeds with a global data network that contains most of the important information in the world. To Jill, and to us, this is everyday technology and it’s nothing terribly impressive. But to a person from sixteen years ago, Jill and her Internet connection form a cybernetic organism with the entirety of human knowledge at her mind’s disposal, and the future she inhabits is a strange place indeed.

Now pivot forward another sixteen years and imagine a similar conversation between Jill in 2006 and Jeff in 2022. This time, let’s make it a phone call. Jeff comes across as highly intelligent and thoughtful. Not only is he knowledgeable, but even his opinions seem to be incredibly well-grounded, consistent, and supported by giant webs of facts and ideas. All of his responses come instantly; if he’s culling his information from the Internet or elsewhere, there’s no indication of it. It’s as if he’s already spent huge amounts of time researching and contemplating every conceivable topic of discussion. Jill knows no amount of frantic googling would allow her to keep up with him, especially in a voice conversation.

So she asks him how he got so smart. She doesn’t really understand his answer – something about “trained surrogates” and “situational analysis” and “rapid belief integration” – but basically, he’s some kind of cyborg hooked into a swarm of intelligent agents that are tuned to his environment, personality, actions, and goals. The future, Jill decides, must be a strange place indeed.

Humankind is in the process of inventing one of the most transformative tools in its history. It lies at the intersection of the human mind, global networks, and artificial intelligence. As our networks grow more complex and our software grows smarter, the ways in which our mind handles information will deeply change. The cognitive frontiers of cybernetics will move from information retrieval to information interpretation to information construction, and the nature of our cognition will be altered forever.

This article will examine the nature of the modern interface between digital information and the human mind, and how this interface relates to two distinct ideas: artificial intelligence and intelligence amplification. I’ll draw on concepts from computational linguistics, data mining, and cognitive psychology in an attempt to chart the cybernetic marriage of our minds to the algorithms and networks we use in our day-to-day lives. Hopefully, we’ll wind up with a clearer map of the road from 1990 to 2006 to 2022 and beyond.

SOFT CYBORGS

The emphasis in this discussion will rest on computer software rather than hardware. In a networked era, a device’s ability to augment its user’s intelligence lies more in its software- and network-related assets, like the network’s bandwidth and the quality of the data and software on the network, than in its hardware resources, like memory and processor speed. A state-of-the-art PC is no better at browsing the Web than a six-year-old model. In other words, the Internet is useful to us because of the information and software that exists on it, and not because of the hardware we use to access it. (The hardware infrastructure of the Internet is still relevant because it determines bandwidth.) As applications increasingly migrate to Web-based implementations, the resources on the user’s end become deemphasized further, and the slack is picked up by centralized servers and network bandwidth.

This motivates a recalibration of what phrases like “cybernetic organism” and “machine-augmented intelligence” connote. These terms evoke images of silicon chip implants and bionic women and the Borg Collective – in essence, a very hardware-oriented (and somewhat frightening) form of human-computer integration. What we see in reality are the beginnings of intelligence amplification with no direct integration of human physiology and computer hardware, but a high degree of interaction between human minds and computer networks through software. Instead of a direct brain-computer link, the computer-to-human interface is provided by the traditional senses of sight and hearing, and the forms of human-to-computer input are just as mundane.

Is today’s network-connected individual really a cybernetic organism? The term is meant to designate an organism that’s a mixture of organic and synthetic components. We can stretch this idea to describe a mind that’s a mixture of human cognition and software. I’ll call this type of cybernetic organism a soft cyborg. A human being armed with a good search engine is very nearly a soft cyborg who “knows” all the information that can be easily obtained on the Internet, since the speed of finding information online is approaching the speed of searching one’s own mind for information. In situations where this delay can be smoothed over as a delay in response time, like in an IM conversation or an email exchange, the soft cyborg of 2006 behaves like a human mind with limitless knowledge.

QUIET AI

“Artificial intelligence” is a term that warrants a similar reevaluation. It suggests systems focused on reasoning and planning, and, in the extreme case, synthetic human-level consciousness, or “strong AI.” Judged on these dimensions, half a century of AI research has yielded disappointing results. Compared to humans, planning agents remain primitive and overly specialized, and despite progress in cognitive science and neurological modeling, strong AI is still struggling for a foothold.

But there is one human-level task at which today’s computers excel: the interpretation of data. Pattern recognition is among the skills considered most fundamental to human intelligence, and we routinely use machines to scale pattern recognition to particularly large or complex sets of data. This is usually referred to as data mining or knowledge discovery. Google’s PageRank algorithm, which analyzes the World Wide Web’s vast hyperlink structure to determine which Web sites are most “important,” is one data mining technique; the algorithms used by intelligence agencies to root out terrorists are another. The recent proliferation of social networking sites has led to new forms of data mining that leverage the intelligence and behavior of millions of users in the same way that PageRank leverages the “wisdom” of hyperlinks.
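
To get a feel for how simple the core of such a technique can be, here is a toy Python sketch of PageRank-style link analysis. This is the textbook power-iteration idea run on an invented three-page web, not Google's production algorithm; the damping factor and iteration count are conventional illustrative values.

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to a list of the pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Every page keeps a small baseline score, then receives a
            # share of the score of each page that links to it.
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
            rank = new_rank
        return rank

    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(pagerank(web))  # "c", with the most inbound weight, ranks highest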

I contend that data mining systems represent artificial intelligence every bit as much as systems devoted to reasoning or decision-making. It’s true that data mining sometimes makes use of the same machine learning techniques as traditional AI, but it deserves to be categorized as bona fide artificial intelligence because of its deep connection to human thought: just as traditional AI seeks to emulate the distinctly human skills of reasoning, planning, and high-level perception, data mining strives to produce humanlike pattern recognition in a computational domain.

A significant difference between traditional AI systems and data mining systems lies in their usefulness to contemporary human cognition. In an information age, reasoning isn’t any harder than it used to be, but managing information is, due to the explosive rate at which information grows, spreads, and mutates, and the increasing speed with which we need to obtain specific pieces of it.

The traditional perspective sees AI systems as appliances: by focusing on planning and reasoning, we implicitly expect intelligent machines to work independently of us and to make our lives more convenient by freeing us from low-level tasks. A fresher perspective would allow for AI that works with us to expand and accelerate human cognition. Instead of an appliance or a human replacement, AI can be seen as a cognitive tool.

Data mining falls under the latter view as a specific type of artificial intelligence that, like all tools, serves to augment natural human ability. A hammer is useless in the absence of a person to intelligently manipulate it. By the same token, a search engine doesn’t really do anything independently of its human user; it’s neither weak AI nor a primitive form of strong AI nor a component of one. But combined with a human who can choose keywords and pick out the relevant results, the search engine becomes extremely useful.

I propose the term quiet AI for this kind of artificial intelligence – “quiet” because it is augmentative and unassuming, because it has nothing to say about machine sentience and is thus independent of the weak-strong axis, and because it has the potential to be folded so completely into our behavior that it can become nearly imperceptible. Quiet AI is vital to the notion of the soft cyborg; Web search, a form of data mining, is the basic artificial component of today’s soft cyborg, and the soft cyborg of the future will rely on more advanced varieties of quiet AI.

The difference between quiet AI and strong AI is an important one to note. While the end goal of strong AI is a total simulation of human intelligence in an artificial (and, presumably, accelerated) form, quiet AI aims toward the amplification of human intelligence using a combination of the artificial and the biological. At the precipice of a technological singularity, strong AI pushes us aside and jumps off alone, while quiet AI rather politely holds our hand on the way down.

THE CYBERNETIC MEMORY

Modern cognitive psychology divides human long-term memory into two fundamental types: declarative memory and procedural memory. Declarative memory, based on conscious recall, holds information that can be explicitly stored and retrieved. Procedural memory holds information derived from practice and implicit learning, and is closely linked to motor skills. Explicit knowledge about the state capitals is stored in your declarative memory; implicit knowledge about how to ride a unicycle is stored in your procedural memory.

Declarative information can be further divided into two subtypes: semantic information and episodic information. Semantic memory encodes abstract concepts and information about the world, while episodic memory stores personal sensations and emotions tied to specific experiences and contexts.

From this, it’s clear that the soft cyborg mind extends only the semantic memory store. It’s not possible for current technology to interface closely enough with either your motor system or your sensory organs to be able to handle procedural or episodic memory, and this is unlikely to change in the near future.

To put it another way, semantic memory is the only type of memory that can be easily communicated: a concept can be encoded as language by one person and transmitted to another, and once the language is interpreted, roughly the same concept should exist in the minds of both people. (Communicating episodic information is a lot harder, and it’s usually the purview of artists.) This underscores the importance of language to cognition, and, similarly, the importance of computational linguistics to intelligence amplification.

THE LOCATION OF MEANING

As noted by Graeme Hirst of the University of Toronto, computational linguistics has vacillated over the past three decades among three philosophies of where the meaning of a text is located:

  1. Meaning is in the reader.
  2. Meaning is in the writer.
  3. Meaning is in the text.

(“Text” is used here to denote any kind of utterance, short or long, speech or writing, and “meaning” denotes the complete semantic information encoded in a text.)

These three views were motivated by three corresponding paradigms in artificial intelligence research. The “in-reader” view of meaning prevailed from the mid-seventies to the mid-eighties, when research focused on creating intelligent agents that could assimilate knowledge about the world and use that knowledge to reason and make decisions. Computational linguistics saw texts as ambiguous conveyers of knowledge, and the goal of an intelligent agent was to find a semantic interpretation of a text that was most consistent with its existing knowledge. The more abstract knowledge an agent had, the better it was at interpreting a text.

Research then shifted to developing interactive systems that could converse with human users and determine the user’s intent, triggering an “in-writer” view of text meaning. Rather than learning about the world to understand text, it became necessary for intelligent agents to learn as much as possible about the user’s plans and goals so that they could interpret text from the user’s perspective.

From the mid-nineties onward, computational linguistics veered closer to quiet AI, and its principal application became information search and retrieval. The philosophy in this case is that objective meaning exists in the text itself, and it arises from the combined effect of the words in the text. As Hirst puts it, “meaning is ‘extracted’ from the text by ‘processing’ it.”

The language processing algorithms of this paradigm rely on statistics and data mining. One of the better-known of these methods is latent semantic analysis (LSA), which performs a mathematical reduction on text (a truncated singular value decomposition of a term-document matrix) and represents words as points in a concept space of about three hundred semantic dimensions. By adding up the point vectors of the words in a sentence, we obtain the position of that sentence in the concept space, and hence its meaning. We can find the meaning of a paragraph or a document the same way.
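
As a concrete (and heavily simplified) illustration, here is a minimal LSA-style sketch in Python using scikit-learn's TfidfVectorizer and TruncatedSVD. The four-document corpus and the two-dimensional concept space are toys; real deployments, as noted above, use enormous corpora and roughly three hundred dimensions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "the cat sat on the mat",
        "the cat chased a mouse across the mat",
        "stock prices fell sharply in afternoon trading",
        "traders watched stock prices slide all afternoon",
    ]

    # Build a weighted term-document matrix, then project it into a
    # low-dimensional "concept space" with a truncated SVD.
    term_doc = TfidfVectorizer().fit_transform(docs)
    concept_space = TruncatedSVD(n_components=2).fit_transform(term_doc)

    # Nearby points mean similar meaning: the two cat documents cluster
    # together, as do the two stock-market documents.
    print(cosine_similarity(concept_space).round(2))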

Of course, a set of coordinates in concept space seems like a crude representation of the semantics in a document, but it’s useful if you’re interested in sorting documents by their meaning or retrieving documents that correlate to some query concept. More creative applications of LSA include text summarization, automated essay scoring, and psychiatric diagnosis from patient writing samples.

As it turns out, hyperlink structure is rich enough with information that we don’t really need LSA-type algorithms for finding the Web pages we want, but computational representations of semantics will come to the fore as the objective of quiet AI expands from information search to information interpretation.

Hirst suggests that, over the next decade, both the in-reader and in-writer views of text meaning will reemerge as we use computers for two types of interpretation: interpretation on behalf of the reader and interpretation on behalf of the writer. In the first case, the computer acts as the user’s surrogate, tailoring its interpretation of information to the user’s goals, agenda, and beliefs. In the second case, the computer attempts to consider a text from the author’s point of view in order to understand what the author wishes to say, so that it can communicate meaning to the user as faithfully as possible.

SEMANTIC PUTTY AND PSYCHOLOGICAL FOOTPRINTS

Future progress in human-AI integration will be driven by progress along two dimensions: the plasticity of text and the amount of knowledge about the user.

Presently, quiet AI is very good at analyzing the implicit semantics of hyperlinks and the coarse meanings of objective texts for the purposes of information retrieval. But contemporary applications like text summarization give us a foretaste of plastic text that maintains semantic integrity. The goal is to view text as semantic putty: it can be stretched or shrunk as needed while its semantic substance – that is, its overall meaning and intent – doesn’t change. You’re given a paragraph-long summary of a breaking news event. You ask for more, and the paragraph stretches into a page of information constructed from bits of news floating around online. You ask for less, and the paragraph shrinks into a headline.
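
A crude approximation of this stretching and shrinking already exists in extractive summarization: score each sentence by how representative its words are of the whole text, then keep more or fewer sentences on demand. The scoring below is a deliberately naive Python sketch, nothing like a system that truly preserves semantic substance.

    from collections import Counter

    def scale_text(sentences, fraction):
        """Keep roughly `fraction` of the sentences, choosing the ones
        whose words are most frequent in the text overall, and emitting
        them in their original order."""
        freq = Counter(w.lower() for s in sentences for w in s.split())
        score = [sum(freq[w.lower()] for w in s.split()) / len(s.split())
                 for s in sentences]
        keep = max(1, round(len(sentences) * fraction))
        top = sorted(range(len(sentences)), key=lambda i: -score[i])[:keep]
        return [sentences[i] for i in sorted(top)]

    story = [
        "A fire broke out in the harbor district late Tuesday.",
        "Officials say the fire started in a waterfront warehouse.",
        "No injuries were reported.",
        "Traffic was rerouted around the district for several hours.",
    ]
    print(scale_text(story, 0.25))  # "ask for less": a single key sentence
    print(scale_text(story, 1.0))   # "ask for more": the full account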

With a more precise understanding of a text’s intent, and hence a more faithful interpretation on behalf of the writer, we can extend this technique to opinion texts. A blogger writes a short post expressing a nuanced antiwar sentiment that you find appealing. You ask for more. Your AI seeks out documents that express a similar sentiment and algorithmically builds a long opinion piece on demand. You ask for less, and the sentiment is compressed into a sentence-long soundbite.

There are a number of ways for software to understand its user’s needs and goals, but they all rest on a constant monitoring of the user’s environment and actions. Real-time detection of new utterances and objects in your immediate surroundings will allow an AI to make predictions about the information you need in the here and now. Long-term analysis of your observable actions and communications – your psychological footprint – will give your AI a sense of your goals and beliefs. By examining its history with the user, an AI will be able to make educated guesses about what knowledge the user already holds in his or her head, and it will prune new information accordingly.
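
No specific algorithm is implied here, but even a decayed word-frequency profile hints at how a psychological footprint might accumulate from a stream of observed utterances. The class below is a hypothetical sketch, with an arbitrary decay constant:

    from collections import defaultdict

    class Footprint:
        """A toy interest profile: recent topics weigh more than old ones."""

        def __init__(self, decay=0.99):
            self.decay = decay
            self.weights = defaultdict(float)

        def observe(self, utterance):
            # Fade the whole profile slightly, then reinforce the terms
            # that just appeared.
            for term in self.weights:
                self.weights[term] *= self.decay
            for term in utterance.lower().split():
                self.weights[term] += 1.0

        def top_interests(self, n=3):
            return sorted(self.weights, key=self.weights.get, reverse=True)[:n]

    fp = Footprint()
    for msg in ["reading about canadian lumber tariffs",
                "lumber futures look volatile this week",
                "booked a weekend climbing trip"]:
        fp.observe(msg)
    print(fp.top_interests())  # repeated themes ("lumber") float to the top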

This can sound a little Orwellian, but it’s important to note that the monitoring is private and personal, and the goal of the AI is effectively to act as much like its user as possible. It’s not Big Brother; it’s Kid Brother. Through its interaction with you and with your environment, the quiet AI learns to mimic you and becomes capable of acting as your digital surrogate. What’s more, the merging of mind and software will see a blurring of the distinction between the monitoring of experience and the experience itself. At that point, to say that your software is spying on you is a little like saying your cerebral cortex is spying on your reptilian brain.

So, what do you get when you combine plastic semantics with a digital user surrogate? The implications are kind of staggering. This sort of AI would be able to independently seek out novel information that is compatible with your goals. It could read a long text and automatically form an opinion about it that would approximate the opinion you’d arrive at if you read the text yourself. It could locate other soft cyborgs in your “belief zone” and, through computation alone, interact with them to construct an entire self-consistent ideology. It could, with permission, load up the surrogate personality of another individual and tune its own information delivery accordingly, lending you a very real window into that person’s worldview. Less open-minded (or maybe just less careful) people could turn their AIs into confirmation drones that disfigure external information so that it never conflicts with their existing beliefs.

The potential uses of this technology are at turns exhilarating and frightening. The following section looks in on people living at various points in the future and describes how their mental life is transformed by quiet AI.

VIGNETTES

The Know-It-All

Sometime in the second decade of the twenty-first century, Jerome is interviewing for a Rhodes Scholarship. Wearable computing is big these days; Jerome wears a pair of networked glasses that act as a head-up display, overlaying information on top of his vision when needed.

The interviewer asks, “What are your thoughts on current trends in lumber imports from Canada?”

One of Jerome’s wearable devices has a microphone for voice communication. In real time, the device recognizes the text of the interviewer’s question and relays it to Jerome’s glasses. The text is interpreted, the salient phrases are extracted – “trends,” “lumber imports,” “Canada” – and, virtually instantly, a series of graphs and point-form facts are displayed for Jerome. He vamps for a second or two to take in the information, and then starts to answer the question confidently and knowledgeably. His opinion about the issue forms as he talks through the data. Five seconds ago, he didn’t know anything about Canadian lumber imports.

This is an example of anticipatory search – information before you ask for it. By monitoring his surroundings, Jerome’s wearable network located information on a relevant topic before he would’ve even begun to search for it. Jerome is a soft cyborg, accelerated and gone mobile.
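
At its simplest, anticipatory search is little more than keyword extraction over a live transcript plus an eager query. In this sketch, search_web is a hypothetical stand-in for whatever retrieval backend the wearable talks to:

    STOPWORDS = {"what", "are", "your", "thoughts", "on", "current",
                 "in", "from", "the", "a", "an", "of", "to"}

    def salient_terms(utterance):
        """Strip a transcribed utterance down to its content words."""
        words = (w.strip("?,.!").lower() for w in utterance.split())
        return [w for w in words if w and w not in STOPWORDS]

    def search_web(query):
        # Hypothetical stand-in for the wearable's retrieval backend.
        return ["<results for: %s>" % query]

    def anticipate(utterance):
        """Fire off a search before the user thinks to ask for one."""
        return search_web(" ".join(salient_terms(utterance)))

    print(anticipate("What are your thoughts on current trends "
                     "in lumber imports from Canada?"))
    # -> ['<results for: trends lumber imports canada>']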

The Passionate Wonk

A few years later, Julia is running for local political office. She’s in the midst of a debate against her opponent. By now, wearable devices and augmentation software have become so natural and ubiquitous that no one objects to politicians using them to answer questions. (Increasingly, it reflects poorly on leaders not to augment their intelligence.)

By monitoring the moderator’s questions and her opponent’s responses and rebuttals, Julia’s AI determines the debate issues being discussed. Based on what it knows about Julia’s political views, her AI builds specific opinions about these issues. Semantic analysis algorithms condense that information into small chunks to transmit to her contact lenses. The information is scaled based on how much time she has to speak, and it comes packaged with verified facts and figures to support her opinion. In effect, it’s a personal, automated, dynamic teleprompter.

After the debate, most observers declare Julia the winner. She is seen as passionate and knowledgeable and her arguments are convincing. But is this impression based on her human qualities, or the effectiveness of her algorithms? Is the electorate voting for a person or for software? Perhaps more importantly, should it matter?

The Slacker

Several years later, Janet is a college student majoring in world literature. She only had one class today, and she didn’t feel much like going; instead, she spun off a surrogate to monitor the perceptual feed of a friend who attended the lecture. Based on the notes it came back with, she didn’t miss much.

Now she’s starting a 2000-word paper that’s due in about fifteen minutes. The topic is the use of free indirect discourse in Thomas Mann’s Death in Venice. She hasn’t gotten around to reading the book yet, but that’s no big deal. She retrieves some representative examples of free indirect speech from the text and computes approximate opinions about the style and content of the book. She looks over this information and develops a rough thesis. Little is found to support this thesis, so she adjusts it slightly. After a few more iterations, she converges to something she’s satisfied with. She then constructs 2000 words of supporting arguments, fully cited and written in her own style.

The paper mentions quite a few philosophical concepts and literary terms that her biological half has never heard of, but her brain trusts her software. Her homework is submitted with three minutes to spare.

The Thinker

Three or four decades from now, Jacob is an academic who specializes in foreign policy. He’s just come back from a day-long “culture sabbatical”: every few minutes, he loaded up a different personality file in order to experience information from a new cultural perspective. It’s something he does every so often to keep his belief system from growing stale. Meanwhile, the twelve surrogates he’s currently running didn’t get to enjoy the vacation – as always, they were busy churning out essays and giving lectures. It’s publish or perish, after all, and like most thinkers of his time, Jacob produces several dozen publications per day.

He pulls all of his surrogates back for reintegration. After taking a moment to relish the undivided bandwidth, he starts talking to himself to verbally explain the ways in which his sabbatical has shifted his beliefs. He does this in order to train the software portion of his mind to see the world the same way that his biological mind now does. He then spins his surrogates off again and gets back to work, confident that the arguments put forth in his essays will now reflect his new worldview.

INCIDENTAL UPLOAD AND BELIEF PILOTING

What happens when a quiet AI becomes such a faithful user surrogate that the way it perceives and produces information is indistinguishable from the way its user behaves? Maybe the user has accomplished something akin to uploading his biological mind to an artificial medium. It’s an incidental upload: in the pursuit of intelligence amplification, we wind up offloading so much cognition to software that the software alone becomes capable of what looks like cognition.

Consider Jacob, the foreign policy expert. His quiet AI is so advanced and tuned to his psyche that it can produce “Jacobesque” cognitive behavior without any direct human input. This allows him to spin off several copies of his AI to work on cognitive tasks in parallel while he devotes himself to honing what he believes. He’s the belief pilot of his flock of artificial minds, turning his life experiences into a unique belief system that influences the behavior and information handling of his software, and providing minor course corrections when necessary. This may wind up being the most important vehicle for individuality in the information culture of the future.

CYBERNETICS REVISITED

I’ve concentrated so far on the interaction between mind and software, but there’s another important player on the cybernetic scene: the network. We use quiet AI to mine and manipulate information, but where does that information come from? The information retrieved by a search engine doesn’t live in an encyclopedia. It’s the information that we spew out in our everyday lives, through news articles, blogs, and commercial enterprises. Quiet AI is getting smarter largely because a growing subset of human behavior is reflected in digital information. That information is useful to us even if it wasn’t originally created to be searched by others.

Our developing symbiosis with technology will only accelerate the reflection of our behavior as publicly accessible information. (This, unlike AI monitoring, is a privacy concern, but it’s privacy that we’re already giving up voluntarily, and there’s no sign that we’re going to stop.) As we increasingly use our software to help us think, an increasing amount of our cognition is likely to appear online. This information will, in turn, feed back into the intelligent software of every other individual in our society. The amplification of our intelligence will result from the manipulation of valuable information that exists in the world, and the value of this information will hinge on our intelligence.

There’s nothing fundamentally new about this trend; our civilization is built on a cultural feedback loop that operates through the cyclical dissemination, learning, and production of knowledge, and this cycle has existed since we became able to communicate with each other. But for the first time, technology will allow the speed of the information cycle to match the speed of cognition.

You can see where this leads. As a feedback loop, it sets in motion an extremely rapid intelligence takeoff. In a broader sense, it effectively gives birth to a new structure in the human mind: an internalized copy of the information society itself.

---

The modern ease of information access isn’t just another nifty convenience, or the result of some single gadget. It represents the beginning of the most dramatic shift in human consciousness since the invention of writing: a smooth, quiet ramp-up to posthuman cognition through an utter detonation of the divisions between mind, software, networks, and society. The particulars of our cybernetic revolution are hard to predict with certainty, but we can say this for sure: the future will be a strange place indeed.

Search Engine Symbiosis and the Quiet Cybernetic Revolution | 42 comments (35 topical, 7 editorial, 0 hidden)
For serious this time (3.00 / 8) (#1)
by jacobian on Tue Aug 01, 2006 at 03:07:59 PM EST

I originally published this article here. Someone posted it here on k5 without my knowledge; being plagiarism, it got dumped, and I was asked if, as the original writer, I could repost it myself. So, sorry if you've had to vote for this twice.

-1 again (1.16 / 6) (#4)
by creativedissonance on Tue Aug 01, 2006 at 03:53:16 PM EST

crapflooder


ay yo i run linux and word on the street
is that this is where i need to be to get my butt stuffed like a turkey - br14n
[ Parent ]
I rarely get to -1 crap TWICE (2.00 / 4) (#7)
by army of phred on Tue Aug 01, 2006 at 05:28:26 PM EST

damn cybernetic pseudobabble web 2.0 myspace freaks

"Republicans are evil." lildebbie
"I have no fucking clue what I'm talking about." motormachinemercenary
"my wife is getting a blowjob" ghostoft1ber
[ Parent ]
Plagiarism (3.00 / 6) (#5)
by Frijoles on Tue Aug 01, 2006 at 04:11:55 PM EST

You should post your comment at the end of the article, since that's where I was originally looking. I looked over the comments to see if someone else had mentioned it being on TheSignalBox first and saw your post. Others may not (if there are a lot of comments, for example).

[ Parent ]
+1FP, a very nice read (2.60 / 5) (#2)
by FizZle on Tue Aug 01, 2006 at 03:22:50 PM EST

sorry it had to be plagiarized the first time around

---
"Leave a tip if you're datin' a girl from Eaton, or vice versa." - tip jar at B&D
death to all posthumans (1.90 / 11) (#3)
by big fatso kitty on Tue Aug 01, 2006 at 03:50:49 PM EST



doesn't look like a wild-eyed rough draft (2.50 / 4) (#10)
by Saber RICO on Tue Aug 01, 2006 at 07:39:09 PM EST

ah it was posted elsewhere once. ok that explains it.

just feel like showing the new sig.
--
"YOU HAVE BEEN FINED by Delirium FOR GROSS MISUSE OF THE TROLL-SUMMONING MECHANISM"
your sig is accurate (nt) (3.00 / 3) (#12)
by Paneer Tikka Masala on Tue Aug 01, 2006 at 10:11:43 PM EST


-----
And seriously, I post on K5, do you really think anything is traumatic at this point? - GhostOfTiber
[ Parent ]
I disagree $ (3.00 / 2) (#16)
by nebbish on Wed Aug 02, 2006 at 08:47:04 AM EST


---------
Kicking someone in the head is like punching them in the foot - Bruce Lee
[ Parent ]

Oh hang on (3.00 / 3) (#17)
by nebbish on Wed Aug 02, 2006 at 08:48:08 AM EST

I thought you said inaccurate.

I agree.

---------
Kicking someone in the head is like punching them in the foot - Bruce Lee
[ Parent ]

plz to be reposting old sig $ (none / 0) (#35)
by Paneer Tikka Masala on Sun Aug 06, 2006 at 01:40:31 PM EST


-----
And seriously, I post on K5, do you really think anything is traumatic at this point? - GhostOfTiber
[ Parent ]
old sig follows: (none / 1) (#36)
by Saber RICO on Sun Aug 06, 2006 at 08:32:52 PM EST

"taking pure cocaine is typically more pleasurable than having sex."

ignore the following new sig:
--
"YOU HAVE BEEN FINED by Delirium FOR GROSS MISUSE OF THE TROLL-SUMMONING MECHANISM"
[ Parent ]
Besides the below, premise is faulty (2.96 / 30) (#11)
by livus on Tue Aug 01, 2006 at 07:54:28 PM EST

The reality is more like this:

Imagine that Jill from 2006 and Jack from, say, 1990 are able to communicate with each other using an instant messenger program. They discuss a wide range of topics, and Jack is stunned by Jill's ignorance - or, at least, her shallow, patchy knowledge. She is facile with names and dates, and has a geeky interest in the kinds of pop culture references that mark a "fan" rather than a serious thinker, but there her abilities end. She seems to have only the most superficial understanding of history, of cause and effect, or general knowledge of cultural linkages - how concepts interact. Her grasp of spelling and grammar makes her seem to Jack many years younger than she is.

More disturbing for Jack is her lack of critical thinking. Jill seems to have few opinions of her own - she can label her ideological affiliations but cannot engage in sustained, intelligent discussion of them. She seems unable to accurately gauge the veracity of her sources. And if Jack asks her opinion on something she hasn't considered, the response is "lol what?".

So Jack asks Jill how she got so dumb. To Jill, and to us, this is everyday technology and the modern education system, and it's nothing terribly disturbing. But to a person from sixteen years ago, Jill and her Internet connection form a dramatic offloading of the basic requirements of human thought and education onto an insufficiently capable technology, and the future she inhabits is a strange place indeed.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

lol irony (3.00 / 3) (#14)
by swr on Wed Aug 02, 2006 at 07:20:30 AM EST

More disturbing for Jack is her lack of critical thinking. Jill seems to have few opinions of her own - she can label her ideological affiliations but cannot engage in sustained, intelligent discussion of them. She seems unable to accurately gauge the veracity of her sources. And if Jack asks her opinion on something she hasn't considered, the response is "lol what?".

What's really funny is that you probably only think that because you read something like it on the internet somewhere.



[ Parent ]
you mean something (none / 0) (#21)
by livus on Wed Aug 02, 2006 at 08:07:05 PM EST

like this?

Good call but actually no. "Real life" provides me with more than enough opportunities to assess the current state of the internet generation.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

Awash in a sea of received wisdom (3.00 / 5) (#24)
by Scrymarch on Thu Aug 03, 2006 at 01:31:24 AM EST

It seems you are talking about Jill the OC-wannabe and Jack the front-row forward, while the author is talking of Jill the music nerd and Jack the history buff. Just because facile conversation is now annotated with a bit of commonly available detail doesn't make it less facile.

That said I guess I'd agree that tools like Google allow for a rapid shallow appreciation of a foreign topic. I believe Bruce Sterling described Google as a "common-sense engine", because it makes it easy to get the most common three or four perspectives on a topic, but even if it returns thousands of links they tend to be thousands of links to those same four or five opinions.

I'm highly skeptical of any evolutionary step in computing that would allow bots to account for my opinions or the writer's intent any better than a spam filter attempts such a thing today. As for this sort of bot (or swarm of bots) writing academic papers, well I think the author is both excessively confident in software and cynical about academic publications. The flavour of such a text would surely be that of eating someone else's vomit: predigested.

I imagine Jacob's wife grabbing his context-aware glasses while screaming "Talk to me for fuck's sake!". Picking up on the tone of voice, they flash scatter plots of domestic violence stats by race and gender as she hurls them at the floor.

[ Parent ]

It's actually Janet that bugs me (2.80 / 5) (#28)
by livus on Thu Aug 03, 2006 at 06:52:11 AM EST

"The topic is the use of free indirect discourse in Thomas Mann's Death in Venice. She hasn't gotten around to reading the book yet, but that's no big deal. [...] She looks over this information and develops a rough thesis[...]Her homework is submitted with three minutes to spare" sounds a lot like an academic plagiarist.

What they never realise is that the humble student who just reads the damn book and comes up with some sort of half baked idea is always going to beat the Cut+Paste googling zombie - just as responsive flesh and blood people will always have an edge over the RealDoll, love has an edge over money, and "practice" has an edge over "theory".

Hmm I'm beginning to sound like a refugee from  Gattaca.

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

You can't google for the human soul (2.66 / 3) (#32)
by Scrymarch on Fri Aug 04, 2006 at 12:13:08 AM EST

Oh wait, you can.

[ Parent ]
I think that's his point... (2.50 / 4) (#26)
by ShiftyStoner on Thu Aug 03, 2006 at 04:52:54 AM EST

He says google has all man's knowledge, something along those lines, so it doesn't look like that's the point.

I think what he's saying is in the future it won't be shallow patchy knowledge. You will be able to instantly access information now only available within the mind of a high profile professor. No, all high profile professors at once. Not only that, all other mediums for knowledge on whatever topic as well. Instantly, verbally, on a cell phone anywhere, any time.

Okay, he is talking about cyborgs. But the above is where my mind has been on the subject. It scares me.

Rather than hooking someone up to a chip (God don't let that happen), people will get better at intercepting and retaining information. People will continue to be able to read faster and absorb more knowledge/information. At the same time, technology will be able to provide more accurate, credible, in-depth information on any topic. Information will be broken down, concentrated. Separated from clutter.

The future scares me, it's all moving so god damned fast. How long before the average 15 year old is smarter than me? How long do we have?

Call me a moron if you'd like. Despite what you think, the average kid being smarter than me is a fucking insane, scary, all too close reality. When it comes to intelligence, I'm in the 99th percentile on any test you want to look at, other than spelling. Well, I never took or studied for the SATs, but still, others have shown.

A part of me wants it. It wants all sorts of PhDs within a couple years. The ability to accomplish this. And even if not, have that wealth of knowledge at my fingertips. If this was the state of the world, I certainly wouldn't want to be kept in the dark.

I want it all to come to a crashing halt. It's too much, we are only human, let's stay that way. We're moving too fast, we're too advanced as it is.
 
( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler
[ Parent ]

nah (2.50 / 2) (#27)
by livus on Thu Aug 03, 2006 at 06:37:07 AM EST

the way I see it - what good is an exoskeleton if your own reflexes have slowed to those of a sloth?

What good is knowledge if you can no longer think?

---
HIREZ substitute.
be concrete asshole, or shut up. - CTS
I guess I skipped school or something to drink on the internet? - lonelyhobo
I'd like to hope that any impression you got about us from internet forums was incorrect. - debillitatus
I consider myself trolled more or less just by visiting the site. HollyHopDrive

[ Parent ]

hi, (1.14 / 7) (#20)
by loteck on Wed Aug 02, 2006 at 06:56:08 PM EST

cross-posting nullo! you might also want to submit this to slashdot, fark, mefi, husi, adequacy, and whatever that other retardville that everyone around here posts about is called... uummmmm.. DAILYKOS. yes, that's it.

also, consider buying a spam list of email addresses and just mass mailing it out, gently forcing it down the throats of the general populace like a black man orally raping a white woman. you wouldn't want anyone to miss this!

then maybe someone will digg it and it will end up on somethingawful! YES!!!
--
"You're in tune to the musical sound of loteck hi-fi, the musical sound that moves right round. Keep on moving ya'll." -Mylakovich
"WHAT AN ETERNAL MOBIUS STRIP OF FELLATIATIC BANALITY THIS IS." -Harry B Otch

nah (2.80 / 5) (#23)
by jacobian on Wed Aug 02, 2006 at 11:50:11 PM EST

I'd prefer to just stand on a street corner and scream at passers-by while shoving leaflets into their hands. I'm a people person.

[ Parent ]
skynet meets timecube guy (1.00 / 3) (#22)
by circletimessquare on Wed Aug 02, 2006 at 09:30:00 PM EST

+1 fp

The tigers of wrath are wiser than the horses of instruction.

Thought (1.20 / 5) (#25)
by ShiftyStoner on Thu Aug 03, 2006 at 04:19:27 AM EST

This is far from normal thought. This story, what you are saying. It was, it was. You should be careful what you say asshole.

 
( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler

You need not worry about the thinker (2.75 / 4) (#29)
by Metasquares on Thu Aug 03, 2006 at 09:04:56 AM EST

You kind of hint at this point, but you come short of stating it explicitly, so I'm going to throw it out into the open:

As this sort of technology advances, there won't be any "thinkers" anymore, because it's easier to let the computers do all of the thinking for us. In your example, it is no longer the thinker himself doing the work, it is his "surrogates". The fact that he is able to publish 12 papers per day indicates that these papers are being automatically generated - simply writing them out, even with all of the research in hand, takes significantly longer than 2 hours. He may as well retire; he isn't working as-is anyway.

For that matter, even his perspective is no longer his own - it is a tangle of preprogrammed perspectives that he is tricked into thinking he is forming an intelligent opinion based on.

Here's another example of this by "the slacker":
"The paper mentions quite a few philosophical concepts and literary terms that her biological half has never heard of, but her brain trusts her software"

So the sum total of all that she has learned is zero. Why bother giving the homework in the first place?

I used to do research in AI - Knowledge Representation and Reasoning, to be specific. I stopped because I realized that humans are depriving their lives of meaning by allowing machines to take over the process of thought. We are not going to become more than human by doing this, only less.

kinda (3.00 / 3) (#30)
by jacobian on Thu Aug 03, 2006 at 09:44:27 AM EST

I see your point, but I don't believe that computers will be able to do all our thinking for us in the foreseeable future. What I'm suggesting has computers as (highly capable) information processing tools. Humans are still needed to provide original insight, and to nudge along the linkage of concepts that results in the synthesis of new ideas.

It's not that the thinker's entire job is done for him; it's that many aspects of thought, like research and learning and sifting through information, are made easier or unnecessary, and the thinker's role turns to supervising the integration of ideas and injecting novel insight into the system.

In my view, these kinds of tools don't dehumanize thought any more than cars dehumanize movement. They're just a faster way of arriving at the same objective.

[ Parent ]

sorta (3.00 / 3) (#33)
by Eivind on Fri Aug 04, 2006 at 05:06:30 AM EST

But the tools can indeed dehumanize what you're doing.

That's not to say they are bad. Just that you shouldn't use them for everything.

A car is efficient if you want to get from point A to point B. Most of the time that's all it is.

Taking a cable-car to the top of some mountain is not a "quicker, more efficient" way of achieving the same thing as getting there on foot.

It gets you the view. But that's it, and arguably a photograph would do that even more "efficiently". In reality it's a very "efficient" way of accomplishing nothing worthwhile whatsoever.

What is the value of having delivered a "homework" you did not write, and indeed are incapable of understanding?

What is the value of an opinion that is so amazingly shallow, so little considered, that it literally did not exist 5 seconds ago? Why would *anyone* want to ask your opinion if all they are gonna get is Google's opinion?

What is the value of research that isn't researched? Is a paper written by consuming 20 seconds of CPU time, 100MB of bandwidth, and half an hour of human thought likely to be worth more than the sum of those components? If so, why?

[ Parent ]

Bah... (2.71 / 7) (#31)
by joto on Thu Aug 03, 2006 at 06:37:19 PM EST

Jerome is interviewing for a Rhodes Scholarship. [snip] The interviewer asks, "What are your thoughts on current trends in lumber imports from Canada?" [snip] He vamps for a second or two to take in the information, and then starts to answer the question confidently and knowledgeably.

And Jerome loses the job. In the day and age of wearable computing, being able to google up a few graphs about lumber imports in Canada impresses no one. Coming up with a wittier answer, like: "I'll answer that if you first tell me your thoughts on the variation in average length of millipedes of the species Pycnotropis epiclysmus during the rainy season through the last 10 years in Caquetá", would almost certainly have handed him the job. But only if he had worn a tie.

Julia is running for local political office. [snip] After the debate, most observers declare Julia the winner.

On the other hand, only 1.4% of the voters even bothered to vote in the last election. At this point in history, politics are more controlled through special interest groups and corporate lobbying than through elections or political parties, anyway. Whether a particular candidate is a good debater is something only 5% of the voters care about. But those 5% care more for a candidate's real insight than for their ability to find trivia from the net fast enough. If only Julia had played soccer (or participated in Big Brother on TV) instead of debating politics---she would have stood a chance of winning the election. But then again, she would probably need a boob job first.

Janet is a college student majoring in world literature. [snip] Her homework is submitted with three minutes to spare.

The homework is then corrected by a surrogate for the teacher, whose algorithms immediately detect that the writer cannot possibly have read the book. It then proceeds to think up a few extra questions for the teacher to ask at Janet's oral exam next week, at which net access will not be permitted.

Jacob is an academic who specializes in foreign policy. [snip] It's publish or perish, after all, and like most thinkers of his time, Jacob produces several dozen publications per day.

In reality Jacob is a patient living in a special home for retards. While it's possible for Jacob's surrogates to automatically generate dozens of plausible scientific papers each day, so could even the primitive computers back in 2006. That doesn't mean any of the papers has any scientific value, and on closer inspection, you'll find that none of them makes any sense at all, not even to Jacob. The nurses let him do it, though, as it keeps him quiet and happy.

In the year 2306, all the interesting research is done by teams of scientists working together over months, or more often: years. Trivial stuff is of course always faster when done by a surrogate or AI, but this is trivial and part of the tools scientists use. Because of this, usually only raw data and human thought is considered worthy of storage.

Nah (1.00 / 3) (#34)
by trhurler on Sat Aug 05, 2006 at 01:37:23 PM EST

All this "agents" crap comes from AI jerks who have yet to demonstrate anything useful despite billions in funding, millions of man hours, and so on. I really think you're guilty of the first error people make in predicting the future: believing in the vaporware of the present.

--
'God dammit, your posts make me hard.' --LilDebbie

rambling thoughts. (none / 1) (#37)
by wampswillion on Sun Aug 06, 2006 at 10:05:26 PM EST

something i quipped to a lot of people when my father died when they would reference that i would miss him was "yeah, now when i want to know something i'm going to have to use google."  

but more than that what i realize now is that even with the vast amount of information available to me on the internet, i found i was very lazy at determining which bits of information were key and which were junk when thinking  about a subject in order to form an analysis or an opinion of a situation or a subject.  i'm very used to him saying "read this" or "look up that"  when i'd ask him a question.

and i guess along these lines, what my father also seemed to be very good at was asking questions.  or knowing what questions to ask. and he'd already asked them and he knew where to find clues to the answers i was searching.  and this reliance on him has made me lazy, i suppose.    

i think perhaps even if we all become "knowledgeable" or if we have all the information in the world at our fingertips or right there in our eye glasses- what might still individuate us is the questions we ask. or the kinds of decisions we make based on the information we have.  

one of the reasons i really like talking to people on the computer is that i can google what's on the tip of my tongue to say.

for instance i can remember-  that there is a line in an elton john song that says "alvin toffler you had a son today"  when someone talks about "culture shock" but i kinda forget why that popped into my brain in the first place.

  then i remember "oh it's because he wrote that book about becoming overwhelmed with all the technological advances- but damn if i can remember the name of that book."
  so i google it.  ta-da it's "future shock."  and then whoever i talk to might have the assumption that i remembered it all on my own.  and i didn't.  if we'd been talking in real time and space, i'd have had to admit my memory ain't what it used to be.  
is that important?  about knowing me?  certainly i might come off sometimes on the computer as more knowledgeable than i am or at least with quicker recall.  and that would be a misconception.  
but i guess also i worry about becoming lazy and complacent and someday will no longer be able to form and make even parts of the connections by myself.
one of my favorite things to do to students who hand in something is take it and hold it away from them and say "ok, tell me what you know about this here that you wrote about. and tell me why it would be important or interesting for a person to know it."  
and kids, they used to be much much better at this than they are now, in my opinion.  

one time i saw a bulletin board in a hallway of a school during the month of feb.  and the title of the bulletin board was "10 Important Facts About George Washington"   and each kid had written out 10 facts he or she had looked up about george.  
so i read some of them.  pretty typical stuff.  but my favorite was this entry on one of the kid's papers-  "george was sitting with some indians.  he said "dance and i will give you money."   and the indians danced for him."  

i'd have loved to have asked that kid what he felt was important about that fact.   i'd have given him extra credit if he'd come up with the answer or an answer.  

 

Sounds like FASCISM to me. (none / 1) (#38)
by A Bore on Mon Aug 07, 2006 at 08:57:23 AM EST

What you're basically saying is that, in the future, we will be so lazy that a COMPUTER will THINK for us???

I propose we program UR COMPUTA to entertain suicidal IDEATION. Bring on the Butlerian Jihad.

Authenticity, expertise, and opacity. (none / 1) (#39)
by JenniferForUnity on Wed Aug 09, 2006 at 03:50:18 PM EST

There's a tension here between "authentic thinking" and merely "using a tool" to augment your decisions.

I think that a person who actually managed to figure out even half of the technical details required for applications like this to work would be amazingly smart on topics like reasoning, text analysis, goal inference, preference solicitation, and so on.  Everyone who used her tool might be "an idiot with a fabulous prosthesis", but she would be anything but inhuman... as someone who knew how the cyber portions of her own mind worked.

And in the meantime, people mindlessly using the tools would be making better decisions even if the decisions were the same ones everyone else could make using the same tools.  From a high level look, the important part would be that the world was simply running better in a lot of respects.

People would more closely approximate the "perfect agents" that economists love.  Where to buy gas?  Your tools actually figure out the political, ecological, and mechanical implications of buying from a particular gas station and you get the one that's marginally less evil and has neat fuel additives.  You can even look up the reasoning and probably track things all the way back to the website where someone made the original arguments.  But the thing you can't explain is why you're looking at those websites versus others you're not even aware of having been filtered out of pragmatically influencing you... but even if you can't get back to the original opinion sources, you still made a better decision relative to your personal gas buying goals, so who cares so long as your car breaks down later than otherwise.  Maybe with homework or such it matters, but that's not the central application of this technology.  Faster better decisions are the killer part of this app.

On the other hand, I don't see how the practical implications of this are any different from, say, Liquid Democracy in sucking the decision making out of a person and embedding it in a technical/social structure more complicated than you can understand.

One interesting juxtaposition is that both systems have a "man behind the curtain".  There has to be some really focused expert somewhere putting the opinions into either machine for everyone else to use.  In LD who the expert is is established by a series of basically intelligible topic-scoped proxying steps.  In the system proposed here, there's a magical "life watching" engine that "infers your goals and beliefs"... essentially for the sake of automating even your proxying power in an opaque way.

(There's a longer term worry here that as "the man behind the curtain" himself begins to use the tool, the opinions that go into the machine may lose certain properties the machine relied on to work.  It's possible no one would even notice it breaking if they were all machine dependent by the time the issue grew pressing.)

I really liked the idea of "quiet AI" but opacity is another concept that I find really useful for thinking about this stuff.

There's a really cool article on Egyptian taxi drivers that Google won't give me no matter what search terms I feed it (yay irony!).  It's a meditation on the way that ease of interface trades off against understanding the mechanisms behind the interface.  In Egypt you have a relationship with your taxi driver.  His obligations to his relatives affect when he can pick you up and what the fees will be.  In most US cities, everything is clean and anonymous and efficient.  If your taxi doesn't show up in the US, who knows why it didn't come.  In Egypt, your personal taxi driver will tell you he was an hour late because he had to take his uncle and nephew to the hospital because the nephew had a bug stuck in his ear canal.

There's a similar "opacity" theme (with many tangents and weird spins) in Stephenson's oldie but goodie In the Beginning was the Command Line.

I've never seen a deep argument trying to cut to the core of the way "authentic-fake" and "hard-easy" are correlated so often, but the idea is relevant here.  (I've heard that Zen and the Art of Motorcycle Maintenance is sort of an attempt, but I've never had the time to read it myself.)

Thought and Knowledge (none / 0) (#40)
by Crono on Sat Nov 11, 2006 at 11:52:56 PM EST

This article raises an interesting point. We're in the Information Age now, apparently. But the true society-shifting effects seem to stem more from the communication of information than its presence. It's the new ways we can convey it, such as kuro or wiki or whatnot, as well as the ease of accessing it. The automated interpretation of raw data is beneficial due to its capability to convey patterns and trends, as well, though that has been around a little longer. The effects of having all that accessible at any time at all through cell phones, pdas, and whatever comes next is what to watch, I think.

But the idea of interfacing with AI closely is somewhat dubious. It raises the issue of how much exactly you learn in these situations. If you never need to learn anything because it is available on demand, then you'd never get an opportunity to integrate it into your whole body of knowledge. You might be able to access the superficial technical aspects of a subject, but the subject as a whole, as it relates to everything else you know, would be out of your reach. Seems to me like widespread use of this might destroy human innovation and progress.

A scifi postcyberpunk/cyberprep author, Neal Asher, wrote a book called Gridlinked which explores the effects of interfacing directly with 'quiet AI.' The main character is 'gridlinked' so he gets information from the AI networks on anything he wants immediately, right? And he has to be disconnected because it is affecting his viability as a secret agent. Seems that having the AI expert opinion on everything means he is losing his initiative and ability to actually think, replacing it with expertise on demand.

And even more into scifi cyberpunk land, anyone seen Ghost in the Shell: Stand Alone Complex? I thought of it a bit as I read this, until I got to the point about having the whole of information society in your head. Now, whoa, that seems like an apt place for SAC to root itself in. If we have the whole of information society internalized and synchronized, as well as the whole of knowledge on tap, I think it might lead to us slowly losing individuality. I mean, if you can consider everything from every angle whilst being able to draw on every viewpoint possible, all collapsed into one, it would figure that you'd draw the same conclusion as everyone else. You might argue that we would still have our own viewpoints, but the prevalence of groupthink is apparent even with our current means of communication. Scary.

its not about encyclopedic knowledge. (none / 0) (#41)
by fungja on Sun Nov 26, 2006 at 07:14:04 PM EST

i think procedural memory was discounted a bit too fast. i used to use a device which displayed a video overlay of what i was seeing, allowing recording and playback, on a set of eyeglasses. i wore it a few weeks continuous, and i remember once playing back video on it out of boredom - in the video someone was approaching me, and i instinctively moved out of their way, despite the fact no one was approaching in real life.

also, the idea of frantically wikipedia'ing and google'ing fits nicely into a comp sci type of thinking. but looking at how people really use messengers and mobile devices: it's teens snapping pics of themselves or their friends etc, and that's the content that's shared. it's not obscure facts about mundane things that get transmitted.

so i would argue cybernetics will be about instant communication to ''understand'' other people: understand their context - there's something new about being able to see the ins and outs of the lives of others whom you may not have otherwise had a chance to 'meet'.

i'd argue, even hope, progress won't be measured by the ability of people to have encyclopedic knowledge but rather the ability of people to understand, empathize, and get along with people of all types from a wider range of experiences.

also, trusting your opinions to AI? - perhaps a modern version of astrology - trusting decision making to the motion of rocks whizzing about in space.

2nd Hand Knowledge (none / 0) (#42)
by thaig on Tue Dec 26, 2006 at 07:19:07 AM EST

We all use 2nd hand knowledge already - from books. People who read voraciously in the past are probably insatiable Wikipedia addicts now.

Reading books is a case of people not thinking for themselves but taking opinions and thoughts from the author.  That doesn't make it bad.

I think that people copy each other already in many ways and I notice it particularly when some opinion  is quoted back to me a week later by the person I first mentioned it to.  I copy opinions too.

It's still hard to transfer a skill, though, and I think that includes "ways of thinking" as well as physical skills.  Perhaps in the future the great surprise will be some new way to do this efficiently.
