Kuro5hin.org: technology and culture, from the trenches

The End of Moore's Law

By BottleRocket in Science
Wed Apr 20, 2005 at 07:25:33 PM EST
Tags: Hardware

BBC is currently offering an article on Moore's Law 40 years later.

In his original 1965 observation, Gordon Moore argued that the number of transistors per integrated circuit increased as an exponential function, doubling about every year. The pace couldn't quite be sustained at that level, so Moore made a downward revision in 1975, saying that the count doubled about every 2 years. Some claim that he revised it to 18 months, which, over the past 20 years, has proven even more reliable (Moore's original paper [pdf]). When the original prediction was made, the most cost-effective chips held about 50 transistors. Soon after, Intel produced the 4004, the world's first single-chip microprocessor. The 4004 contained 2,300 transistors and measured just an eighth of an inch wide by a sixth of an inch long. Today, the Itanium 2 chip contains half a billion transistors, or about 2^29 of them, to put the number in context. Wikipedia has a pretty nice graph of the relevant data.

There is now good reason to suggest that Moore's Law, which has been so reliable for so long, may be on the verge of losing its relevance. Many have suggested that Moore's Law can no longer be maintained because of economic factors or technological limitations. The intent of this article is to show why the opposite is true. I believe we are on the verge of outstripping Moore's doubling time.


Chip manufacturers are confident that they will be able to maintain the pace of Moore's Law for the next decade. As of the fourth quarter of 2004, transistors in microprocessors were a little over 100 nanometers (nm) across (a nanometer is 10^-9 meters, or one billionth of a meter). If we assume that the transistor shrinks proportionally while chip size stays the same, then in 10 years we would expect a transistor to be 10 nm across and a processor to contain 50 billion of them. If the industry leaders are correct, this should be well within our capabilities. But in 2003, several members of the Institute of Electrical and Electronics Engineers (Zhirnov, Cavin and Hutchby) submitted a paper proposing that we may be about to hit a wall when it comes to scaling electronics.
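
As a rough illustration of the arithmetic behind that projection (my own back-of-the-envelope sketch, not a calculation from the BBC article or the IEEE paper), here is what a strict 18-month doubling does to transistor count and feature size, starting from a ~100 nm, half-billion-transistor chip in 2004:

```python
# Back-of-the-envelope projection assuming a strict 18-month doubling time.
# Feature size shrinks by sqrt(2) per doubling (2x transistors in the same area).
start_year = 2004
transistors = 0.5e9      # ~half a billion (Itanium 2 class)
feature_nm = 100.0       # ~100 nm transistors, per the article
doubling_years = 1.5

for years in range(0, 16, 3):
    doublings = years / doubling_years
    count = transistors * 2 ** doublings
    size = feature_nm / 2 ** (doublings / 2)   # linear size halves every two doublings
    print(f"{start_year + years}: ~{count / 1e9:6.1f} billion transistors, ~{size:5.1f} nm")

# Around 2014 the projection passes 50 billion transistors at roughly 10 nm, and the
# ~4 nm scale discussed below is reached after about 13-14 years.
```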

Their paper, Limits to Binary Logic Switch Scaling--A Gedanken Model [pdf], proposed that switching in transistors is bound by constraints derived from Heisenberg's Uncertainty Principle. The paper used the term "energy barriers" to describe the potential between the gate and the carrier, but no matter how great the potential difference, the tunneling of electrons and holes will eventually become too great for the transistor to perform reliable operations. In short, the two states of the switch would become indistinguishable. That cannot be tolerated in a binary system, yet it is what would happen once the transistor shrinks to about 4 nm. That, in turn, is the size a transistor would reach in roughly 13 years under strict adherence to Moore's Law.

They add that the heat from these transistors will be very difficult to manage, because doing so would require somehow diverting the heat produced by this 5 nm device away from the processor. Alternatively, the entire processor could be actively cooled, but such a cooling system would produce more heat than it takes away.

In addition, there are rising costs for the producers of these chips. From the Wikipedia article:

It is interesting to note that as the cost of computer power continues to fall (from the perspective of a consumer), the cost for producers to achieve Moore's Law has followed the opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. As the cost of semiconductor equipment is expected to continue increasing, manufacturers must sell larger and larger quantities of chips to remain profitable. (The cost to "tapeout" a chip at 0.18u was roughly $300,000 USD. The cost to "tapeout" a chip at 90nm exceeds $750,000 USD, and the cost is expected to exceed $1.0M USD for 65nm.) In recent years, analysts have observed a decline in the number of "design starts" at advanced process nodes (0.13u and below.) While these observations were made in the period after the year 2000 economic downturn, the decline may be evidence that the long-term global market cannot economically sustain Moore's Law.

On what basis, then, could it be suggested that Moore's Law might be outstripped by technology? What evidence is there that we can speed up the pace of electronics advancement beyond what 40 years of exponential improvement have already delivered? For this, we should look to some of the current advances in nanotechnology.

Exhibit 1: MIT's Technology Review. This article suggests one way we may begin to solve the problem of heat dissipation. In the last year, nanoscience has managed to create something that has eluded electrical engineers for many decades. The (5,5) single-walled carbon nanotube (SWNT) is a ballistic conductor at room temperature, carrying current with essentially no resistive loss. (A nanotube is defined by a chiral vector, (5,5) in this case. The tube's dimensions are a function of this vector, and knowing the chiral vector tells you how the sheet looks when it is rolled up; this one is an example of an armchair configuration.) It is 0.55 nm in diameter, and has already been used in an experimental transistor. Unlike any transistor currently being produced, the SWNT can behave as either a P- or an N-type semiconductor, depending on the gate voltage (more information on nanotube electronics).

Exhibit 2: Quantum Computing. Why be content looking for smaller ways to perform the same old processes? A number of alternative processors are starting to move into the realm of feasibility. At IBM's Almaden Research Center, a seven-qubit (quantum bit) quantum computer has already managed to run Shor's factoring algorithm. Take a standard computer with n bits and a quantum computer with n qubits: if the two machines process a bit at the same speed, the quantum computer can work through 2^n states in the time it takes the conventional computer to process just one.
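
To make the 2^n bookkeeping concrete, here is a toy classical simulation of my own (not how the Almaden machine is actually programmed): to mimic an n-qubit register, ordinary code has to track 2^n complex amplitudes at once.

```python
import numpy as np
from functools import reduce

# A toy illustration: an n-qubit register is a vector of 2**n complex amplitudes,
# and a gate acting on every qubit is a 2**n x 2**n matrix.
n = 7
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # one-qubit Hadamard gate
H_all = reduce(np.kron, [H] * n)                # Hadamard on all seven qubits (128 x 128)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |0000000>
state = H_all @ state                           # equal superposition of all 128 states

print(len(state), "amplitudes tracked classically")      # 128
print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))       # True: uniform superposition
```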

The DNA computer is also worth mentioning here. The distance between levels on a DNA chain is 3 nm, and a typical human chain is a couple of centimeters long. That means each DNA chain is capable of storing about 7 million DNA-bits, each of which can take one of 4 different "states": adenine, thymine, cytosine or guanine. That's 4^7,000,000 possible states, and during cell division, the whole chain gets processed in just over an hour!
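
The arithmetic behind those figures checks out if you take the 3 nm spacing and the couple-of-centimeter length at face value (this is my own back-of-the-envelope check, not a statement about how a DNA computer would actually be built):

```python
import math

# Checking the DNA numbers above at face value.
spacing_nm = 3.0                    # claimed distance between "levels" on the chain
length_nm = 2.0e7                   # a couple of centimeters = 2 cm = 2e7 nm

bases = length_nm / spacing_nm      # ~6.7 million "DNA-bits", i.e. roughly 7 million
print(f"{bases:.2e} bases per chain")

# Four possible values per base means 4**bases configurations, which is the same
# as 2 classical bits per base.
bits = bases * math.log2(4)
print(f"~{bits / 8 / 1e6:.1f} MB of raw storage per chain")   # ~1.7 MB
```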

Exhibit 3: The human brain. According to the linked article, the human brain should have the capacity to process 100 million MIPS (million instructions per second), or 100 trillion instructions per second. From SIGNAL magazine,

On an evolutionary scale, current processing speeds of 1,000 MIPS place robots at the small vertebrate level. "A guppy," [Hans] Moravec, [of Carnegie Mellon's mobile robot laboratory] says, adding that besides carrying out their specific functions, autonomous robots are only aware of their immediate surroundings. However, he predicts that increasing processing speeds will bring more capable systems within a decade. Once robots are commercially available in large numbers, many solutions for issues such as hazard recognition will arrive through incremental use and modification. "There is no substitute for field use for learning about problems and solving them," he says.
What this indicates is that computers are catching up fast. If Moore's Law holds, then in about 30 years computers will be able to "think" faster than humans. Even before computers overtake the human brain, they may well become capable of improving on their own designs. The possibility of computers eventually rendering humans obsolete is touched on in Vinge's Singularity (original paper).
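
The 30-year figure follows directly from the numbers above (my own arithmetic, taking the 1,000 MIPS robot estimate and the 100 million MIPS brain estimate at face value):

```python
import math

# How long until machines reach the brain's estimated throughput?
robot_mips = 1_000            # Moravec's "guppy level" robots today
brain_mips = 100_000_000      # 100 million MIPS, per the linked article

doublings = math.log2(brain_mips / robot_mips)        # ~16.6 doublings needed
for doubling_time in (1.5, 2.0):                      # years per doubling
    print(f"{doubling_time}-year doubling: ~{doublings * doubling_time:.0f} years")
# 1.5-year doubling: ~25 years; 2-year doubling: ~33 years -- hence "about 30 years".
```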

What these arguments still fail to take into account is the kind of human ingenuity that drives future innovation. There is a strong incentive to revolutionize computing, because if alternative processors catch on, any company still trying to develop conventional microprocessors will quickly be left far behind. Any unforeseen breakthrough will shorten this timetable, steepening the already exponential curve of Moore's Law. My prediction is that computer processors will improve by a factor of 4 in the next two years. Then, as they approach the limits of miniaturization, they will slow down and follow a more natural 1.5-year doubling time. Once DNA and quantum computers, or some other revolutionary type of microprocessor, become an effective replacement for the conventional semiconducting microprocessor, Moore's Law will cease to be an effective predictor of the future of computing.


Poll
Doubling time- Moore's Law revised
o Every three years 24%
o Every two years 16%
o Every 18 months 12%
o Every year 4%
o Every six months 0%
o Every day 44%

Votes: 25

Related Links
o BBC
o Moore's original paper-[pdf]
o 4004
o graph
o confident
o pdf
o Wikipedia article
o MIT's Technology Review
o armchair configuration
o more information on nanotube electronics
o Quantum Computing
o Shor's factoring algorithm
o DNA computer
o The human brain
o Vinge's Singularity
o original paper


The End of Moore's Law | 114 comments (92 topical, 22 editorial, 0 hidden)
The future of microprocessors (2.50 / 4) (#1)
by Cat Huggles on Tue Apr 19, 2005 at 09:01:39 PM EST

Transistor switching speed will plateau.

Massively parallel personal computers will be developed. They will not use the fastest transistors because of heat problems.

I envision a computer system where CPU and RAM modules are cheap little boxes 2x2x2 cm and you can easily add and replace new CPUs at runtime.

Like hot-swappable RAID, but for things other than storage.

People who are paranoid about corruption can install multiple redundant layers with comparators. Errors will be spotted and the system will quarantine the problem and run tests on the modules.
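
Purely as an illustration of that comparator idea (the module objects and voting scheme here are my own invention, not anything the comment specifies), redundancy plus majority voting can look as simple as this:

```python
from collections import Counter

def vote(results):
    """Return the majority answer and the indices of modules that disagreed."""
    majority, _ = Counter(results).most_common(1)[0]
    return majority, [i for i, r in enumerate(results) if r != majority]

def run_redundant(modules, task):
    # Run the same task on every redundant module and compare the answers.
    results = [m(task) for m in modules]
    answer, bad = vote(results)
    for i in bad:
        print(f"module {i} disagreed; quarantining it and scheduling a self-test")
    return answer

good = lambda x: x * x
flaky = lambda x: x * x + 1                     # pretend a bit flipped somewhere
print(run_redundant([good, flaky, good], 12))   # warns about module 1, returns 144
```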


Re: The future of microprocessors (1.11 / 9) (#60)
by Robert Acton on Thu Apr 21, 2005 at 07:38:01 AM EST

Transistor switching speed will plateau.

Massively parallel personal computers will be developed. They will not use the fastest transistors because of heat problems.

I envision a computer system where CPU and RAM modules are cheap little boxes 2x2x2 cm and you can easily add and replace new CPUs at runtime.

Like hot-swappable RAID, but for things other than storage.

People who are paranoid about corruption can install multiple redundant layers with comparators. Errors will be spotted and the system will quarantine the problem and run tests on the modules.

Shut up.

--
I am cured.
[ Parent ]

The 90's are calling, the 90's are calling... (none / 0) (#101)
by wumpus on Sat Apr 23, 2005 at 07:59:30 AM EST

Transistor switching speeds became irrelevant long ago; most of the delay is in the wires.
Massively parallel computers didn't work too well the last time they were tried. Ever used a Connection Machine? Note that I am ignoring so-called GPUs, since they are not typically what is referred to as "massively parallel PCs".
Easily upgradeable PCs would involve a fight between MS and Dell; who will win?

Wumpus

[ Parent ]

alternative computing paradigms? (1.05 / 17) (#2)
by the ghost of rmg on Tue Apr 19, 2005 at 09:26:15 PM EST

dna? qubits? 2n? where does it all end?

what you have to remember is that moore's law, like newton's law, is an aspect of physical reality. it was here when we got here and it's the way things are. scientists are always trying to push the enchilada with new theories and new technologies, but when is enough enough? i don't want to sound trite, but since this is kuro5hin, i'll do it anyway: scientists are so busy trying to figure out what they can do, they don't stop to think about whether they should!

history shows how man's attempts to subvert the laws of nature invariably go awry. the tower of babylon and jurassic park are just to name a couple examples. what we need is not faster computers or integer factorization algorithms; this society needs humility.

it's time to stop the sacrilege. scientists need to recognize the limits of their theories and stop trying to push them on everyone else. for my part, i don't want a more complicated computer or processor made out of aborted fetuses. i just want a nice, clean interface, with user friendly development tools leveraging the latest in aspect oriented programming technologies.

the .NET platform provides all of this. its object-relational database connectivity and rapid development environment maximize my productivity while minimizing the total cost of ownership. the inherently cross-platform nature of the .NET framework insures that my applications will run just as well on my home PC as on the cray workstation at the office.

i suggest that all of you tittering about this new "quantum" technology get a grip, read your bible, and get to CompUSA to purchase your copy of VisualStudio.NET today.


rmg: comments better than yours.

The problem with quantum computing (3.00 / 3) (#3)
by Cat Huggles on Tue Apr 19, 2005 at 10:04:49 PM EST

When you're running your system so close to the bare laws of the universe, nasty side-effects are bound to happen. This was discussed in localroger's parable "The Metamorphosis of Prime Intellect", in which a sloppily designed computer is able to take over the universe by just thinking about it!

In reality the dangers are much worse. The human race wouldn't be able to survive a divide by zero error, because the resulting black hole would fall into the center of the Earth and then slowly eat up the planet's core, with the surface slowly shrivelling up. Think global warming is a problem? How about global shrinkage!

[ Parent ]

if only local roger had opted for VisualStudio.NET (2.66 / 3) (#5)
by the ghost of rmg on Tue Apr 19, 2005 at 10:30:20 PM EST

i think the parables between local roger's story and samson's are obvious. when samson lost his hair in the all too human battle with male pattern baldness, he lost his strength and virility -- what had made him great before. similar, in abandoning superior technologies like the .NET framework in favor of textfiles, hand tuned assembler, and quantum computers, local roger has lost what had made him such a star on this site in past years.

what he needs is rapid development and object-relational database integration, not pipe dreams about talking spaceships.


rmg: comments better than yours.
[ Parent ]

Well Samson's hair grew back /nt (3.00 / 4) (#9)
by localroger on Tue Apr 19, 2005 at 11:02:28 PM EST



I am become Death, Destroyer of Worlds -- J. Robert Oppenheimer
[ Parent ]
the wonders of modern medicine! (none / 0) (#76)
by the ghost of rmg on Thu Apr 21, 2005 at 04:04:04 PM EST




rmg: comments better than yours.
[ Parent ]
Take this shit to /. (none / 0) (#35)
by 6502 on Wed Apr 20, 2005 at 03:43:25 PM EST

where you might actually get a funny bite. As it stands Cat Huggles' reply is funnier than your original troll. 1 for you, rmg.

[ Parent ]
why don't you give it a try? (none / 0) (#37)
by the ghost of rmg on Wed Apr 20, 2005 at 05:33:00 PM EST

post that comment at slashdot with whatever account you have available and watch the bites roll in!


rmg: comments better than yours.
[ Parent ]
I've been having my own fun (none / 0) (#39)
by 6502 on Wed Apr 20, 2005 at 06:04:49 PM EST

And anyway, I'd rather write my own material. We have different agendas.

[ Parent ]
any agenda that involves posting on kuro5hin (3.00 / 2) (#41)
by the ghost of rmg on Wed Apr 20, 2005 at 06:06:57 PM EST

is in need of reevaluation.


rmg: comments better than yours.
[ Parent ]
WHO'S law? (none / 0) (#50)
by MarlysArtist on Thu Apr 21, 2005 at 12:11:03 AM EST

Who's Law? Your Law? Your God's Law? No, Newton's Law. Newton's Law was written by an English scientist in the seventeenth century. And let us all hope that his theory, despite its limits, still more or less represents reality tomorrow morning.

Marly's Artist

"Never ask 'oh, why were things so much better in the old days?' It's not an intellegent question" --Ecclesiastes, 7:10
[ Parent ]

*golfclap* (N/T) (none / 0) (#63)
by Morphine007 on Thu Apr 21, 2005 at 09:32:43 AM EST



[ Parent ]
lol what (2.00 / 3) (#7)
by Exergetic Analysis on Tue Apr 19, 2005 at 10:45:52 PM EST

Computers are already able to improve upon their own designs. Did you think electrical engineers at Intel work with pen and paper?

Also, suggesting that a simple increase in processing power will magically bring about a self-aware artificial intelligence is just asinine.

IAWTP (3.00 / 3) (#27)
by thekubrix on Wed Apr 20, 2005 at 12:02:39 PM EST

More processing power won't be the major factor that gives us AI anymore; now it's more about computer science, philosophy, and psychology.

However, higher processing power will start allowing for more advanced robots... I can't wait till they replace the people at cash registers.

[ Parent ]

heh, you mean like (none / 0) (#71)
by jbridge21 on Thu Apr 21, 2005 at 02:00:57 PM EST

The automated checkouts at my local kroger's, home despot, blowe's, etc?

[ Parent ]
AS THE ONLY REAL LIFE NINJA ON THE INTERNET (1.05 / 18) (#10)
by dharma on Tue Apr 19, 2005 at 11:31:26 PM EST

I HAVE TO ADMIT I CAN RELATE TO THIS ARTICLE. THE MODERN WORLD HAS REMOVED THE RELEVANCE OF MOST NINJAS.

BUT THANKFULLY AS THE ONLY REAL LIFE NINJA ON THE INTERNET, I'VE MANAGED TO KEEP MY SKILLS FRESH, FAST AND DEADLY. I'LL SNEAK IN AND STEAL YOUR DAUGHTER'S VIRGINITY BEFORE SHE CAN CRY OUT IN PURE ECTASY.

Wha? (2.00 / 3) (#11)
by StephenThompson on Tue Apr 19, 2005 at 11:46:32 PM EST

This article is pretty much delirious:

That means each DNA chain is capable of storing 7 million DNA-bits, each of which is capable of 4 different "states," adenine, thymine, cytosine or guanine. That's 4^7,000,000 pieces of information, and during cell division, this gets processed in just over an hour!

Yeah, and um... a byte has 8 bits, each capable of 2 different "states", which means when you process it you are processing 256 "pieces of information". Or NOT.

God I hope you were drunk when you wrote this.

not drunk (none / 0) (#12)
by BottleRocket on Wed Apr 20, 2005 at 12:11:20 AM EST

Just delirious.


[ Parent ]

A 'law' (2.33 / 3) (#17)
by starsky on Wed Apr 20, 2005 at 06:35:10 AM EST

that the guy has revised every few years to keep it correct is a pretty lame law.

I predict there will be as many transistors as there are that year every year for the rest of time.

Read Again (none / 0) (#75)
by hardburn on Thu Apr 21, 2005 at 04:00:47 PM EST

He corrected it a few times, and then it held for a few decades in its present form.


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


[ Parent ]
Sigh. (none / 1) (#18)
by Ward57 on Wed Apr 20, 2005 at 08:06:23 AM EST

the big advantage of the brain is that its components are laid out in three dimensions, not just two. Presumably, this is the way to go.

yeah, but (none / 1) (#42)
by Norkakn on Wed Apr 20, 2005 at 08:09:43 PM EST

when we tell the EEs that, they make a cute little whimpering sound and start mumbling about heat dissipation.

[ Parent ]
Yeah, (none / 0) (#109)
by Ward57 on Tue Apr 26, 2005 at 06:02:14 PM EST

it'll have to be cooled by some very cleverly designed system. My first thought is "oil cooled", but I have a nasty suspicion that it would need active pumping (although the pumps could be outside the processor unit).

[ Parent ]
Nope. (none / 1) (#59)
by creaothceann on Thu Apr 21, 2005 at 07:30:52 AM EST

The thinking part of the brain (grey cells) is on the surface of the brain parts; that's why it is crenated. The "wires" (the white stuff) are on the inside.

[ Parent ]
try reading Moore's paper (none / 1) (#23)
by anon 17753 on Wed Apr 20, 2005 at 11:36:59 AM EST

Moore was writing about costs and improvements in manufacturing techniques. He wasn't simply saying that the number of transistors would double every 18 months. He was describing a cycle where every 18 months we would be able to build a manufacturing line that would produce chips with twice as many transistors as the manufacturing line we built 18 months previously - at the same cost as that previous manufacturing line.

I've read it (none / 0) (#26)
by BottleRocket on Wed Apr 20, 2005 at 12:00:52 PM EST

I know that he doesn't really make any prediction about the future transistor count of microchips, but he does offer a projection. Moore's Law is sort of mythical in that respect. Still, it's pretty amazing that the trend has held out for as long as it has.


[ Parent ]

This 'singularity' (1.50 / 8) (#24)
by starsky on Wed Apr 20, 2005 at 11:45:31 AM EST

bollocks ignores the fact that for computers to start developing things a human would never have thought of there needs to be a human who has taught them to do that in the first place. Er, whoops.

Sci-fi fans: Robots that think for themselves and replace humans - never. going. to. happen.

Why doesn't the same argument apply to people? (3.00 / 3) (#29)
by topynate on Wed Apr 20, 2005 at 12:27:30 PM EST

How can you think of something new without being taught to?


"...identifying authors with their works is a feckless game. Simply to go by their books, Agatha Christie is a mass murderess, while William Buckley is a practicing Christian." --Gore Vidal
[ Parent ]
no logical fallacy (none / 0) (#107)
by iggymanz on Mon Apr 25, 2005 at 12:28:55 PM EST

Inventing an algorithm for a computer to implement, which produces an original creation or invention "which no human could have thought of", is not a logical fallacy - the algorithm is not the creation nor the invention.

[ Parent ]
A... I... [n\t] (none / 0) (#112)
by valar on Sun May 01, 2005 at 04:39:27 PM EST



[ Parent ]
Good article, but strays from Moore's Law (none / 1) (#33)
by dr zeus on Wed Apr 20, 2005 at 02:47:21 PM EST

Quantum computers, DNA computing, and the human brain are important computing concepts, but are not strictly related to Moore's Law. A qubit isn't a transistor, and neither is a DNA base pair.

But I voted it up anyway, since you seemed to make a fairly clear distinction between Moore's Law and the computation acceleration from other technologies.

Well, I didn't really... (none / 1) (#34)
by BottleRocket on Wed Apr 20, 2005 at 03:35:12 PM EST

But thanks.

Regardless, if semiconducting processors stop being used for home computers, the Law wouldn't really describe this transition. For instance, if we start measuring clock speed in terms of the time to reach superposition, this couldn't really be compared to instructions per second- they're two entirely different processes. In that case, Moore's Law becomes obsolete anyway.


[ Parent ]

Who do you think you are?! (1.66 / 6) (#36)
by undermyne on Wed Apr 20, 2005 at 04:23:46 PM EST

Posting a *technology* article on what amounts to a political blog.

+1 FP


"I think you've confused a GMail invite with money and a huge cock." Th
The illogic of Moore's Law (2.50 / 2) (#43)
by urdine on Wed Apr 20, 2005 at 08:37:35 PM EST

I never understood why anyone bothered to pay attention to this retarded "law".  There is no evidence or even indication of anything that would support this claim.  The ONLY reason it has survived so long is that it has been adopted by Intel into their sales and marketing strategy, and created a stable way for them to release and profit from incremental improvements over a long period of time.  

In fact, history pretty well proves that this law should NOT work, because innovation usually happens all at once, then flattens, then spikes again.  You can't predict innovation unless it is not innovation at all, but capitalism in action.  So from that standpoint, it has been a self-fulfilling prophecy, but the reasons for its success should be understood.

It's not a law... (3.00 / 2) (#67)
by Eccles on Thu Apr 21, 2005 at 11:50:30 AM EST

It's not a law (but it's simply too hard to override the meme name), it's an observation, and probably a useful one. Thinking you might need a fab? Use Moore's Observation to estimate how long it'll be able to produce chips, and where they lie on the technology scale (and thus profit scale.)

[ Parent ]
The persistence of Moore's Law in reality (none / 0) (#84)
by cburke on Thu Apr 21, 2005 at 11:43:35 PM EST

Whether you think it logical or not, Moore has been correct for the last twenty years. This goes beyond Intel, as did Moore's original observation. It's what has occurred throughout the industry and continues occurring. You think it is a deliberate sandbagging of development. Well, it is deliberate, but not in the sense of restraint but in the sense of aspiration.

Exponential growth is neither "incremental" nor "stable". Think about this: since 1971, we've gone from 2,000 transistors on Intel's 4004 to 200 million on a Pentium EE. That's a factor of 100 thousand. This is not some slow, miserly development curve. That's break-neck development non-stop for decades. Moore's Law is true because engineers have worked hard to keep it true, always trying to get another percent on top of what they already have. That's a self-fulfilling prophecy to be sure, but it is not one that comes for free.
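
(A quick sanity check of those numbers, my own arithmetic: both the growth factor and the implied doubling time come out close to the canonical figures.)

```python
import math

# Intel 4004 (1971, ~2,000 transistors) to a Pentium EE (2005, ~200 million).
growth = 200e6 / 2000                 # factor of 100,000, as stated above
doublings = math.log2(growth)         # ~16.6 doublings
years = 2005 - 1971
print(f"one doubling every {years / doublings:.1f} years")   # ~2.0 years
```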

There would be no reason to do this -- to pay the cost of the extra transistors, the manufacturing facilities, the design complexity -- if nothing came from it. Performance is what the sales and marketing people sell, not transistor count. That, while not as sharp or constant as Moore's Law, has also been growing exponentially. The quest for more performance is what has driven the increasing transistor counts and everything else.

You do raise an interesting point, though:  Our expectation from other areas is that exponential curves don't last long, and development rates tend to level off.  Far from being some kind of deliberate slowing, this kind of development typically isn't conceivable.  Why is it possible with computers?  A big reason is that computers are tools that can help you build faster computers.  Tools to make tools to make tools.  This feeds into other areas, such as chemistry where computer simulations are used for research in process engineering that feeds back into creating faster computers.

All of this, to be sure, has happened because it was profitable.  That what was possible became actual is, in my opinion, a sign that in this case capitalism really did work at driving innovation.

[ Parent ]

Computing power of brains (3.00 / 3) (#45)
by localroger on Wed Apr 20, 2005 at 09:27:19 PM EST

I'd have liked to see a better breakdown of how the computing power of a brain is quantified. It seems to me that neurons are pretty stupid; I don't go in for all this microcircuitry chic that Kim Stanley Robinson wrote into Blue Mars. The main strength of brains compared to silicon is the hundreds of thousands of miles of nerve fibres allowing reprogrammable point-to-point connections to be formed in googols (as opposed to googles, hmmm) of combinations. The actual processing of data along those pathways is pretty slow, but the pathways themselves (which also form slowly) can form in a truly vast array of combinations. It seems to me that it would be pretty hard to relate this capability to that of a Von Neumann style computer without a fair amount of math and more than a few *cough* assumptions.

I am become Death, Destroyer of Worlds -- J. Robert Oppenheimer
Exactly Correct (3.00 / 3) (#46)
by hardburn on Wed Apr 20, 2005 at 09:55:41 PM EST

What all these media bites fail to understand is that the brain and modern computers are completely unlike each other, and we have no reliable way of comparing them. Additionally, each is very good at doing different kinds of tasks. Computers are great at heavy number crunching. Human brains are great at abstraction and pattern finding.

Try holding a finger a few inches from your face, and move it back and forth while tracking it with your eyes. This task is easy for anyone older than an infant to do (save for some with mental disorders). However, writing a computer program with a camera setup to do the same is horribly complex. Around the level of a Masters thesis in AI.

Now try calculating pi in your head. No cheating with paper or using memory! Even with paper or memory, it's hard to do more than a dozen or so digits. Now write a computer program to do the same. Once you've found the relevant formulas, it's barely harder than a "Hello, world!", and may generate more digits in the time it takes to let go of the "Enter" key to run the program than you could do in an hour.
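
For what it's worth, here is roughly how little code the pi half of that comparison takes (a sketch using Machin's formula with plain integer arithmetic; certainly not the only or fastest way to do it):

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 10**prec, summed from its Taylor series."""
    term = 10 ** prec // x
    total, n, sign, xsq = term, 3, -1, x * x
    while term:
        term //= xsq
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(digits=50):
    prec = digits + 10                                  # guard digits
    # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi = 4 * (4 * arctan_inv(5, prec) - arctan_inv(239, prec))
    return "3." + str(pi)[1:digits + 1]

print(pi_digits(50))   # 3.14159265358979323846...
```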

These are two very different tasks. While I'm not fundamentally opposed to AI research, I think direct comparisons of transistor-based computers to the human brain are misguided, and that we may be better off letting the two do the tasks that they are each well suited for.

Whenever I hear "in 30 years, computers will be fast enough to emulate humans", I go get my LART.


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


[ Parent ]
Yup (none / 0) (#49)
by dn on Wed Apr 20, 2005 at 11:15:51 PM EST

Human brains are great at abstraction and pattern finding.
Abstraction and pattern finding with huge data sets. What makes neurons so great isn't their individual properties, but that gobs and gobs of them work in parallel.
Whenever I hear "in 30 years, computers will be fast enough to emulate humans", I go get my LART.
As far as the Turing test goes, I suspect they've been fast enough for many years. The problem has been not enough RAM. Just parsing and storing a dictionary using convenient (naive, bloated) data structures burns through hundreds of megabytes, and the brain has dozens of "knowledge centers" or "agents" or whatever you want to call them that need to be implemented.

    I ♥
TOXIC
WASTE

[ Parent ]

different but perhaps complementary (none / 0) (#57)
by m a r c on Thu Apr 21, 2005 at 06:12:11 AM EST

perhaps the future of intelligence is not a choice between the computer 'mind' and the human mind but a hybrid of the two. If an interface could be designed that would be transparent to human consciousness, then it could be considered the next evolution of the human brain.
I got a dog and named him "Stay". Now, I go "Come here, Stay!". After a while, the dog went insane and wouldn't move at all.
[ Parent ]
Neurons aren't stupid (3.00 / 5) (#65)
by schrotie on Thu Apr 21, 2005 at 09:50:12 AM EST

When building artificial neural networks, the usual way of thinking about them is that a single real-world neuron would be represented by a whole artificial network. Real neurons do more than summing and weighting. They learn in real time, they become excited and exhausted, and they react to a plethora of chemical pathways to which they also contribute. It's a mess.
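
For contrast, here is the "summing and weighting" that a single unit in a typical artificial neural network performs (a standard textbook formulation, shown only to make the comparison concrete; as the comment says, real neurons do far more than this):

```python
import math

# A classic artificial "neuron": weighted sum of the inputs plus a bias,
# squashed through a sigmoid. That is the entire model -- no real-time learning,
# no excitation/exhaustion dynamics, no chemical pathways.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.5, 1.0, -0.3], [0.8, -0.2, 1.5], bias=0.1))   # a value in (0, 1)
```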

I'm writing simulations of models that try to describe how a certain species of stick insect (walking stick) controls its gait. Those beasts have a couple of thousand neurons and beat any walking robot hands down. I'm pretty optimistic that their controller could be simulated on a modern PC, but I don't know how. So the computing power is probably there (for emulating walking sticks, not emulating small mammals, mind you!), but computers still suck big time in competition.

There are dozens of examples of such things. Flies evaluate visual information and react in milliseconds. They have more neurons, and I don't think that could be done with computers. Bees are intimidatingly intelligent. They can learn a hell of a lot, including simple math. They have top-notch sensory processing, navigation, communication and so on. At about a million neurons, they are the Mensa Club of insects.

All these comparisons of computers and mammal brains are utterly ridiculous. Beat insects and win the Nobel Prize.

[ Parent ]

basic numbers (none / 0) (#70)
by jbridge21 on Thu Apr 21, 2005 at 01:57:56 PM EST

100 billion neurons
signal propagation rate 1000 Hz
interconnects per neuron 1000?

[ Parent ]
Details (none / 0) (#88)
by schrotie on Fri Apr 22, 2005 at 04:15:43 AM EST

100 billion neurons
In the brain, maybe including the spine. I think there's that amount again in the stomach.
signal propagation rate 1000 Hz
More like 200 Hz max. Depends on neuron type. This rate is the central means of encoding at the neuron level. Neurons aren't binary switches; they encode analog values in their firing frequency. In many areas (e.g. the complete retina and the "upper" half, i.e. dendrites, of every neuron), encoding is completely analog.
Signal velocity differs vastly. Those wires from your foot to your knee are rather fast, many wires in the brain are pretty slow.
interconnects per neuron 1000?
Sometimes a lot more (pyramidal cells in cortex) sometimes less, but the number is basically ok.

[ Parent ]
Quantum/DNA computers (none / 1) (#47)
by poyoyo on Wed Apr 20, 2005 at 10:06:03 PM EST

What's missing from most of the popular discussion about quantum computers is that these systems, while a huge breakthrough in theory, are in practice likely to be only useful for military (i.e. codebreaking) and scientific computations. Many people seem to think that their desktop PC is going to be running a quantum CPU in 20 years, but that's not likely at all. Algorithms for quantum computers are probabilistic --- they are very likely, but not absolutely certain, to give the right answer. They are very difficult to program --- designing even a simple algorithm requires doing Fourier transforms and visualizing matrices of complex numbers. (Look at Grover's algorithm for example --- not exactly as easy to understand as classical binary search, is it?) And because of the great lengths that have to be taken to prevent a quantum system from coming into contact with the environment and decohering, they are likely to always remain much bulkier, more expensive and more energy-guzzling than electronic computers.

For all these reasons I see a future for quantum computers as processors in specialized supercomputers, triggered by electronic control units (i.e. no practical computer will ever be fully quantum). I think this scenario is pretty widely believed among quantum computing researchers, though they won't admit it in public because they don't want to lose their research grants.

The same goes for DNA --- although it may well be useful for medical applications such as triggered drug release, it's hard to imagine how, say, a word processor could be made to run on it. Biological materials aren't especially sequential or coherent. (Same thing goes for a hypothetical human-brain-type system.)

This is not to put down these technologies, which may lead to big advances in some scientific applications, but they're not likely to make an appearance in non-scientists' everyday life, and they have nothing to do with Moore's Law, which usually refers to improvements in mainstream corporate and consumer computing.

So anyway, your argument doesn't hold much water. Your first link is an innovation which can only maintain Moore's Law for another decade or so, so what you have left is basically: What these arguments still fail to take into account is the type of human ingenuity that drives future innovation. This type of argument is all very well and good when it comes to things that are in principle engineerable, but it falls apart when we're about to hit theoretical limits, as is the case now. Yes, our engineers did manage to build flying machines despite the naysayers, but faster-than-light warp drives are another matter entirely.
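
To illustrate the Grover point with a toy example (a plain state-vector simulation of my own, nothing to do with real quantum hardware): even the "simple" quantum search algorithm is expressed as reflections of a vector of amplitudes rather than as comparisons, which is part of why these algorithms are hard to program.

```python
import numpy as np

n = 3                       # 3 qubits -> 2**3 = 8 basis states
N = 2 ** n
marked = 5                  # the item the oracle "recognizes"

state = np.ones(N, dtype=complex) / np.sqrt(N)       # uniform superposition

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # ~optimal number of iterations
    state[marked] *= -1                              # oracle: flip the marked amplitude
    state = 2 * state.mean() - state                 # diffusion: invert about the mean

print(np.abs(state) ** 2)   # probability mass piles up on index 5 (~0.95 here)
```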

Probabilistic algorithms (none / 0) (#68)
by fairthought on Thu Apr 21, 2005 at 12:51:24 PM EST

I don't see the probabilistic nature of the algorithms holding back quantum computing in any way.

For one thing, probabilistic algorithms are well accepted and used even in current computers.

Additionally, current computers are not 100% reliable. So the results you get from even completely deterministic algorithms are only "probably" correct. With both binary computers and quantum computers the solution to the problem is to use redundant design to reduce the chance of a wrong answer to an arbitrarily low value.

[ Parent ]

Limits (none / 0) (#91)
by paranoid on Fri Apr 22, 2005 at 04:53:45 AM EST

You forget about two things: 1) human brain. We obviously can make a computer that is as fast as a human brain, because brains exist, and 2) we are not limited to 2D processors.

We are not even close to the fundamental limits to computation. There is easily potential to make computers millions and billions of times faster before we really start hitting the impenetrable walls.

[ Parent ]

whoa there pardner (3.00 / 5) (#51)
by demi on Thu Apr 21, 2005 at 12:12:57 AM EST

You know that MIT Tech review article lists like 8 research projects and 2 things that are actual technologies (cell phone viruses and airborne networks). Most or all of these things will need a massive engineering effort to get them to work as well as has been done with Si CMOS. None of them are realistic candidates to extend Moore's Law at this point.

For example, single-walled carbon nanotubes (SWNTs) in theory could be absolutely stupendous interconnects. The intrinsic thermal and electronic conductivity of a (5,5) or (10,10) nanotube handily beats copper wires of comparable cross-sectional area, with vastly superior electromigration resistance to boot. However, once you make non-SWNT connections to the ends of the tube, as would be the case if you wanted to 'wire' them into circuits, the contact mismatches wipe out the gains and create a mess of new problems. The extra real estate and complexity of making good contacts, when possible, tends to make the tiny size of the nanomaterials irrelevant. This is true for other kinds of quantum wires and quantum dots, too. Last, the article glosses over the fact that there is no known way to make a pure sample of (5,5) SWNTs or any other chiral vector for that matter. The best methods at this point can only make the semiconductor tubes in high yield (yet still too low for IC production), not the ballistic conductors that are the basis of so many optimistic scenarios.

Quantum computing is intriguing for solving certain kinds of problems, but the devices usually operate with ~MHz frequencies at cryogenic temperatures. Not likely to be coming to a laptop near you by decade's end, but I would still keep an eye on efforts like Vancouver-based D-Wave. Whatever comes of it, and other methods of 'computation' such as those using DNA, is certainly not going to be appended to the ITRS roadmap any time soon (if ever).

The prospects for scaling/speed beyond the years 2010-2012 are very nebulous at this point, different from the other times in the past where the industry faced challenges in making a transition to a new technology (recent examples: sub-micron lithography, Al to Cu interconnects, 300 mm wafers, DUV and EUV steppers). The human inertia, capital investment, and potential to keep kludging new life out of Si, SiGe, etc. will undoubtedly keep the industry on track for about 8 more years. Problems like the thermal wall will continue to nag, but the real end of Moore's Law will be when we reach about 10 nm half-pitch, the point at which quantum physics will completely subsume the function of *FET design. The question is whether the industry will be able to brute-force its way straight to 10 nm, or if a series of ever more costly plateaus will appear. Most of the smaller companies will have long since reached a decision point where technological development of integrated circuits will be completely outsourced to foundries and architectural development and product differentiation may or may not flourish in its stead. The endgame of Moore's Law will be a very small number (2-3 at most) of huge IC consortia trying to focus on keeping any margins at all for their core business of higher end products, fending off a flood-like horde of commodity producers operating in all corners of the globe.

Nanotechnology is slowly making inroads into the fabrication environment and already there are some ways in which it is causing barriers to come down. But to make it into production, the new nanomaterials will be called upon to perform and behave in a way that one-off research prototypes do not demand. The thing that is sorely lacking from all of these scenarios where some brave scientist topples the temple of silicon is an understanding of the engineering difficulties involved. I'm not saying it can't be done, in fact I am trying to do it myself. Whether or not a new computation/fabrication paradigm arises to replace our current technologies will depend highly on whether these research projects get over their vanity phase and take all of these factors into consideration.

There is a reason that Silicon Valley is not called Germanium Valley - that's because germanium oxide has some slight chemical and physical differences from silicon oxide that could not be ameliorated by engineering. The processing advantages of working with silicon and silicon oxide caused it to beat out the otherwise undeniably superior material (Ge, which may yet make a comeback). And all of the money that's been and continues to be poured into post-CMOS electronics might be a waste if that lesson is not heeded.

right on! (none / 0) (#111)
by pako on Sat Apr 30, 2005 at 06:45:32 PM EST

Cramming transistors into a wafer is not all that keeps Moore's Law valid.

All these rascals need to be connected, and that's where the rubber meets the road in the real-life production world. Lithography, planarization of layers and everything else connected to real production is right now at about 65, maybe 45 nano, with sometimes sketchy results. 10 nano might work in the lab, but it's a long way from an MIT lab to a Taiwanese FAB.

Yes, consortiums are the way to go at the moment. They get together to develop and standardize the process and then go to market to duke it out in yields.

  • they came in the clothes that i'm in and through the phone in my wall. they're strangers.
    [ Parent ]
  • Um (none / 1) (#52)
    by trhurler on Thu Apr 21, 2005 at 02:07:28 AM EST

    First of all, this is mostly speculation.

    Second, if you're going to speculate, why not try to figure out what we need such machines for? It is quite obvious that we'll be able to run the best imaginable sorts of games and PC applications on regular PCs of the present sort. Networked applications certainly won't require this sort of thing. It could be a boon to research and scientific computing, but if that's the only market, price will keep it from really happening anyway. So what's the driver? Remember, no technology makes it to market if there's no market.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    Real time hologram generation. (none / 0) (#56)
    by StephenThompson on Thu Apr 21, 2005 at 04:43:03 AM EST



    [ Parent ]
    Time travel. (none / 0) (#58)
    by Robert Acton on Thu Apr 21, 2005 at 07:26:30 AM EST



    --
    I am cured.
    [ Parent ]
    No, not so. (none / 1) (#72)
    by Parity on Thu Apr 21, 2005 at 03:46:37 PM EST

    There is no cap on the amount of processing power that one can use in designing a game. On the graphics side of the equation, we currently have crude, sharp-edged approximations of shadows, simplistic lighting models, and we simply ignore any kind of refraction. If you came up with GPUs that had a hundred times the processing power tomorrow, they could be used to capacity within a year.

    On the CPU side, there's no limit either - the more you can process, the more elements you can add to make a virtual world more realistic: more complex AI, more mobile agents that aren't strictly plot elements, etc. Maybe you don't really care about having rats scurrying around in a consistent manner in the next FPS, but for RPGs, having the wildlife and citizenry act as lifelike as possible is important.

    In the 'real world', there's no cap to the power needed to do more and faster database processing - there are questions that aren't asked because it would take too long, and processes that must be run that take real, human-noticeable time. In the network, servers are multiplying because CPU power is the limit - there's only so much server-side processing that one machine can handle. Basic things like spam filtering and QoS guarantees require per-connection or per-packet analysis (depending on the implementation), and server-side network services have a definite client limit based on CPU power.

    In most cases, these things -can- be scaled by adding more CPUs, or more blades, etc., but there's a point of diminishing returns on that as well.

    And that's without the new applications that we haven't thought of doing yet because we 'know' you can't do that much with a computer...

    Look through the history of computing. Your criticism has -always- been raised, and it has always been wrong. When you build a general-purpose computer with enough power, someone will build an application that has 'just now' become possible.

    --Parity None

    [ Parent ]

    Oh really (none / 0) (#82)
    by trhurler on Thu Apr 21, 2005 at 08:44:38 PM EST

    If what you're saying is true, why has nobody yet written a game that truly needs a 9800 Ultra? Hmm? It's been around for a long time now, you know.

    The truth is, polygon rendering as a technology has reached a point where going faster is mostly meaningless. That's why prices on cards that will do whatever you want are dropping so fast - commoditization.

    CPUs in active use tend to sit about 80% idle. The notion that we NEED faster CPUs is just ridiculous.

    I'm not saying a need won't crop up. What I'm saying is, we don't have one right now, and I'm curious as to what it will be.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    Radeon 9800 Ultra (none / 0) (#90)
    by paranoid on Fri Apr 22, 2005 at 04:47:51 AM EST

    If what you are saying was true, the answer would be the temporary low penetration and long development cycles. Too few people have 9800s and games that are released now entered development when 9500 (or something) was all the rage.

    But I don't even think that you are right. I have Radeon 9600 Pro and it's definitely slow for modern games. I'll trade it for something much faster very soon.

    And what you are saying about the polygon technology is wrong. Yes, we don't need to make it faster, but we need more polygons and we need longer shaders. For that we need faster GPUs.

    [ Parent ]

    Um... (none / 0) (#99)
    by trhurler on Sat Apr 23, 2005 at 12:42:52 AM EST

    The fact that your 9600 isn't good enough is irrelevant to the fact that the 9800 Ultra is sheer overkill. My GeForce 6600GT is considerably slower than a 9800 Ultra, and yet is more than enough(regular 6600s are more than enough, for that matter.) The Radeon 9800 Ultra is a card made for one purpose: because there are people stupid enough to buy them.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    Stupid? (none / 0) (#108)
    by paranoid on Mon Apr 25, 2005 at 06:40:46 PM EST

    What do you mean, 'stupid'? Not every PC gamer is a cash-strapped teenager. Some people (not myself) don't need to think twice before paying $500 for a new video card. High-end is an important market, and it's ATI (and Nvidia) that would be stupid to ignore it. And I am sure that most people buying Ultras are well aware that they are paying 50% more for a 5% performance improvement. And when you think of it, it's not even that expensive.

    [ Parent ]
    Physics and AI in Games (none / 0) (#73)
    by hardburn on Thu Apr 21, 2005 at 03:53:29 PM EST

    We're definitely running up against diminishing returns in terms of graphics in games. The limit is tending to be the technical and creative abilities of your art department rather than processing power. There are still a few more boundaries to be crossed (realistic hair and clothing, for instance), but these tend to be exceptional cases.

    However, I doubt we've hit the limit for AI and physics in a game. Guessing from the game behaviors, it seems that C&C: Generals uses basically the same pathfinding algorithm used since the original C&C, a 320x240-resolution DOS game. Half-Life 2 was a huge leap in game physics, but there are still places where it isn't quite right if you look hard enough (like a helicopter producing interference on surface water that's underneath an enclosure).

    However, I think these cases will continue the trend of GPUs by having domain-specific processing units instead of doing the work on the main CPU.


    ----
    while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


    [ Parent ]
    global illumination (none / 0) (#80)
    by jsnow on Thu Apr 21, 2005 at 08:18:40 PM EST

    We're definitely running up against diminishing returns in terms of graphics in games.

    That's partially true. Graphics have certainly reached the "good enough" point where they're usable for most purposes. However, rasterizing polygons is never going to produce completely believable graphics no matter how many triangles you can draw per second. Better graphics will require better algorithms like ray tracing (computers and algorithms are approaching the point where this can be done in real time) with photon mapping, which can handle many of the cases polygonal renderers or pure ray tracers can't, like reflected/refracted light, subsurface scattering, realistic ambient light, etc..

    At some point in the future (less than 10 years, I suspect), real time photon mapping is likely to be commonplace, and it will require very fast processors (or very fast dedicated hardware) to generate sharp, believable images at a decent resolution/framerate.

    [ Parent ]

    Hmmm (none / 0) (#81)
    by trhurler on Thu Apr 21, 2005 at 08:41:52 PM EST

    A friend of mine did the very first real time raytracing work in the world. He hit 30fps using a supercomputer(but had no way to store the images he generated quickly enough to save more than one here and one there, amusingly enough.) I don't think you're going to see this as a practical technology in less than 20-25 years time. The reason is simple: as you say, "pure" raytracing is a rather academic exercise. The things you add REALLY slow things down when you're talking about generating frames in a small fraction of a second. In any case, this work will almost certainly be done on specialized processors, and bandwidth is more important than raw processor speeds(which are already approaching fast enough in a few cases.)

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    Real time ray tracing (none / 0) (#85)
    by jsnow on Fri Apr 22, 2005 at 12:42:47 AM EST

    I don't think you're going to see this as a practical technology in less than 20-25 years time.

    Tachyon can render a sphereflake in a few seconds. Not fast enough to be interactive, but almost. Some people at Stanford have implemented a ray tracer with photon mapping that runs on a GPU. Some people in Germany are working on openrt, a graphics library for real time ray tracing. Heaven seven is a real time ray tracer as a 64k windows binary that displays a preprogrammed animation.

    Jensen has a lot of cool videos of less-than-real-time renderings with photon mapping that show what's possible with lots of time and/or computer power, and he wrote a pretty good book about the subject.

    The complexity of photon mapping and ray tracing really isn't that bad, and it scales very well with increasing scene complexity. People are already experimenting with real time ray tracing. So far they're all toys that aren't at all competitive with a modern GPU, but I don't expect it to be long until someone produces a ray tracer that performs as well as the best polygonal renderers, and not long thereafter until it supports global illumination and we start playing games that look like real life.

    [ Parent ]

    Look (none / 0) (#86)
    by trhurler on Fri Apr 22, 2005 at 01:07:47 AM EST

    Tachyon was written by my friend. I know exactly what's possible. I gave him a 25% speedup on antialiased scenes one day while we were yakking in the lab. I'm no expert on it, but it is a fair bet I know more than you do.

    What I'm telling you is, you may not expect it to be long, but that's a human trait called "wishful thinking." Tachyon can render an animated sequence of scenes in realtime on a supercomputer. So what? Do you own a supercomputer? Remember, the supercomputers of today are not just a few hundred times faster than the PCs, as was the case in the 80s. They're often ten thousand times or more faster for parallelizable tasks(and Tachyon is highly parallel - that's how it gets the job done.) It will be quite a while before any machine that's mass market priced will even be remotely capable of this sort of thing.

    You hope that someone will write some amazing new code that goes faster than Tachyon. What you don't understand is, developments beyond what has already been done are going to be incremental. There is no algorithm change that's going to lop an order of magnitude off. And remember, Tachyon is a pure raytracer that lacks even many common raytracer features - it does NOT produce photorealistic scenes except for carefully chosen ones that don't show any of its weaknesses. Any attempt at real photorealism is going to slow it down a LOT. As in, orders of magnitude difference. The best known methods for handling nonpoint lights, arbitrary mirrors, diffraction effects, and so on are so much slower than the basic raytracing algorithm that it isn't even funny. On huge server farms, people like Pixar take YEARS to produce two or three hours worth of video. You think that's going to change in the next ten years so much that you'll have it working in realtime on your desk? If so, you're daft.

    The only way this is going to happen is MUCH better hardware, and that's going to take a long time to reach the consumer level. Just get over it.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    I don't think it's as bad as that (none / 0) (#87)
    by jsnow on Fri Apr 22, 2005 at 03:11:26 AM EST

    I've written a couple simple ray tracers myself, and have used povray extensively so I'm quite aware of how slow they can be, and what the algorithmic limitations are.

    I do have a strong suspicion that the best ray tracers out there trace way more rays than necessary. I wrote a ray tracer that traces at a low resolution and then traces at progressively higher resolutions in the areas of the image where the earlier rays came close to "interesting features". Consequently, only the edges of objects get rendered at full resolution, while the rest can be interpolated. So far, it only traces spheres and doesn't do any hierarchical bounding, but it works reasonably well and is very fast compared to rendering one ray per pixel. There are all sorts of complications in determining whether a ray came near an "interesting feature" or not, but I suspect the approach could be workable in a fully featured ray tracer, which maybe I will write if I have the time, motivation, and skill. If not, maybe someone else will.
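
    A minimal sketch of the idea in Python (purely illustrative - not the code from my actual tracer, and the single hard-coded sphere is just a stand-in for a real scene): trace a coarse grid first, then subdivide only the cells whose corner samples disagree, i.e. the cells that straddle a silhouette or other "interesting feature".

        import math

        def hit_sphere(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
            # Solve |o + t*d - c|^2 = r^2 for the nearest t >= 0.
            px, py, pz = ox - cx, oy - cy, oz - cz
            b = 2.0 * (px * dx + py * dy + pz * dz)
            c = px * px + py * py + pz * pz - r * r
            disc = b * b - 4.0 * c              # direction is assumed unit length
            return disc >= 0.0 and (-b - math.sqrt(disc)) / 2.0 >= 0.0

        def trace(x, y):
            # One orthographic ray through image-plane point (x, y), toward +z.
            return hit_sphere(x, y, -5.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.6)

        def render_adaptive(lo, hi, n_coarse, depth):
            rays = 0
            def sample(x, y):
                nonlocal rays
                rays += 1
                return trace(x, y)
            def refine(x0, y0, x1, y1, d):
                corners = [sample(x0, y0), sample(x1, y0), sample(x0, y1), sample(x1, y1)]
                if d == 0 or all(corners) or not any(corners):
                    return                      # uniform cell: interpolate, no extra rays
                xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
                refine(x0, y0, xm, ym, d - 1)   # only "mixed" cells get subdivided
                refine(xm, y0, x1, ym, d - 1)
                refine(x0, ym, xm, y1, d - 1)
                refine(xm, ym, x1, y1, d - 1)
            step = (hi - lo) / n_coarse
            for i in range(n_coarse):
                for j in range(n_coarse):
                    refine(lo + i * step, lo + j * step,
                           lo + (i + 1) * step, lo + (j + 1) * step, depth)
            return rays

        if __name__ == "__main__":
            rays = render_adaptive(-1.0, 1.0, n_coarse=16, depth=4)
            full = (16 * 2 ** 4) ** 2           # rays for one ray per pixel at full res
            print("adaptive rays:", rays, "  full-resolution rays:", full)

    The point is just that uniform regions cost only their coarse corner rays no matter how high the target resolution is; only cells straddling an edge pay for the full subdivision. (A real implementation would also cache corner samples shared between cells instead of re-shooting them, as this sketch does.)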

    Perhaps Tachyon already does something similar. If so, maybe the order of magnitude I'm hoping for isn't realistic and I'll have to wait for computers to get 1000 times faster instead of 100. That won't be all that long if Moore's law holds.

    [ Parent ]

    Well, (none / 0) (#100)
    by trhurler on Sat Apr 23, 2005 at 12:59:02 AM EST

    First of all, 100 times faster wouldn't do it even if you were right, and 1000 times won't do it regardless. You are still neglecting the fact that pure ray tracing isn't enough, and that "fill in" methods are very slow by comparison.

    Second, last I checked, Tachyon does not do the optimization you're talking about (and it might be hard to implement because of the distributed nature of the processing, but to get a real answer on that, you'd have to ask the author). However, for any real scene, I defy you to come up with an algorithm for determining "interesting" areas that runs in reasonable time for this sort of thing. A single sphere? Sure, you probably hardcoded it, and even if not, it is a trivial task. How about a teapot? That's much harder. Now how about a sphereflake? You totally lose. A mildly irregular sphereflake. Heh... um... yeah.

    Third, even if a variation on that optimization proved to be feasible, tuning it so that it would never "screw up" would be difficult or impossible, the model language in use would have to be specialized, and a single order of magnitude is really only worth maybe five years of advances in computing hardware.

    Raytraced surfaces may look better, but if you can't have realistic clouds, rain, mirrors, glare effects, nonpoint light sources, and so on, the resulting image overall will look worse than a polygon rendered image. Adding all that will slow you down more than you care to admit.

    Then on top of all that, you're just not being realistic about how long it takes to render arbitrary complex scenes these days, even with simple shadows, no reflections, and so on. That is, how long it takes on commodity hardware.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    complexity of ray tracing (none / 0) (#105)
    by jsnow on Mon Apr 25, 2005 at 02:26:21 AM EST

    You are still neglecting the fact that pure ray tracing isn't enough, and that "fill in" methods are very slow by comparison.

    Ray tracing does at least as well as rasterizing triangles in terms of quality, and in some areas it does quite a bit better, so saying it's not good enough isn't quite fair.

    To fix some of the limitations of ray tracing takes more resources, but probably not much more than an order of magnitude more. Focal blur, motion blur, and soft shadows can be implemented by tracing more rays.

    Global illumination can be implemented using radiosity or photon mapping. Radiosity is O(n^2) in the complexity of the scene, which rules it out for this use (it also has the limitation that it can't represent anything other than perfectly diffuse interreflections between objects).

    Photon mapping is O(n log n) in the number of photons, since they have to be sorted into a tree and retrieved later; otherwise it's not much different from regular ray tracing, except that rays are traced from each light source rather than from the camera.

    I really haven't used photon mapping enough to have a good feel for its "real world" performance, but the algorithms are not that bad, provided not many photons are cast into parts of the scene that aren't visible. (In his photon mapping book, Jensen mentions a trick involving a kind of photon-like unit called an "importon", cast from the camera, to help judge the relative importance of each photon cast in the photon-tracing phase.)
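
    For what it's worth, the data structure at the heart of this is simple enough to sketch in a few lines of Python. This assumes scipy's cKDTree purely for brevity (Jensen describes building a balanced kd-tree by hand), and the photon positions below are random stand-ins rather than the output of an actual photon-tracing pass:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)

        # Stand-in photon map: positions scattered on a unit-square "floor",
        # each carrying an equal share of one watt of flux.
        positions = rng.uniform(0.0, 1.0, size=(100000, 3))
        positions[:, 2] = 0.0
        power = np.full(len(positions), 1.0 / len(positions))

        tree = cKDTree(positions)          # building the tree is the O(n log n) step

        def radiance_estimate(point, k=200):
            # Gather the k nearest photons and divide their flux by the area
            # of the disc that just contains them (the basic density estimate).
            dist, idx = tree.query(point, k=k)
            r = dist.max()
            return power[idx].sum() / (np.pi * r * r)

        if __name__ == "__main__":
            print(radiance_estimate(np.array([0.5, 0.5, 0.0])))

    Building the tree is the O(n log n) part; each gather at shading time is then roughly O(log n + k).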

    However, for any real scene, I defy you to come up with an algorithm for determining "interesting" areas that runs in reasonable time for this sort of thing. A single sphere? Sure, you probably hardcoded it, and even if not, it is a trivial task. How about a teapot? That's much harder. Now how about a sphereflake? You totally lose.

    My solution to this was to augment the ray-intersection test to return the angle by which the ray missed the nearest discontinuity in the object. For groups of objects, you also have to take into account discontinuities brought about by intersections of objects. Rather than describe how this works (it's complicated), here is a screenshot of a bunch of spheres. To visualize how much work is actually being done, I enabled an option to display a dot for every ray traced. Notice that extra rays are traced in the crack where two spheres come together. Sphereflake (regular or irregular) is no more complicated, but I'll have to implement a bounding volume hierarchy if I want that to work at any reasonable speed. My implementation is not perfect, but it works most of the time, and I don't believe its defects lie in the concept so much as in my buggy implementation.

    My "augmented" ray intersection test has to be implemented for every kind of basic geometry. This is probably why no one else tries to do this - it's hard to get right, and a lot of work. I don't think it's impossible, though, and it may be worth the extra hastle for the speed improvement.

    Raytraced surfaces may look better, but if you can't have realistic clouds, rain, mirrors, glare effects, nonpoint light sources, and so on, the resulting image overall will look worse than a polygon rendered image. Adding all that will slow you down more than you care to admit.

    Raytracers thrive on mirrors. Clouds can be implemented the same way they're implemented in games now: paint a picture on the background. If you want something better, clouds are always going to be hard whether you're ray tracing or blasting triangles onto the screen. There are reasonable approaches either way, but doing a realistic simulation of an atmosphere takes a lot of CPU time and memory. Close up, rain can be drawn with a particle system; otherwise it behaves like a cloud. Nonpoint light sources can be handled by photon mapping, by adaptively supersampling the shadow rays, or by some cheap trick.
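
    For the shadow-ray option, a rough sketch of the brute-force version (the occlusion test is a stub standing in for whatever intersection routine the tracer already has; an adaptive version would stop early once the first few samples agree):

        import random

        def occluded(point, light_point):
            # Stub: a real tracer would shoot a ray from `point` toward
            # `light_point` and report whether anything blocks it.
            return False

        def soft_shadow(point, light_corner, light_u, light_v, samples=16):
            # Fraction of the area light visible from `point` (1.0 = fully lit).
            visible = 0
            for _ in range(samples):
                u, v = random.random(), random.random()
                light_point = tuple(c + u * du + v * dv
                                    for c, du, dv in zip(light_corner, light_u, light_v))
                if not occluded(point, light_point):
                    visible += 1
            return visible / samples

        if __name__ == "__main__":
            print(soft_shadow((0.0, 0.0, 0.0),
                              (-1.0, 5.0, -1.0), (2.0, 0.0, 0.0), (0.0, 0.0, 2.0)))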

    Then on top of all that, you're just not being realistic about how long it takes to render arbitrary complex scenes these days, even with simple shadows, no reflections, and so on. That is, how long it takes on commodity hardware.

    Actually, ray tracers are wonderful for complex scenes. A good ray tracer is O(log n) in the complexity of the scene, if it implements a bounding hierarchy that's at all sane (octrees and BSP trees are quite popular). This is one reason why I expect ray tracers to overtake triangle rasterization one day - beyond a certain threshold of complexity, they're just more efficient.
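
    A minimal sketch of why, in Python: every node of the hierarchy stores an axis-aligned box, and a ray that misses a box skips everything underneath it, so a typical ray touches O(log n) nodes instead of all n objects. This is only a toy - spheres only, median splits, no attempt at a good build heuristic:

        import math

        class Sphere:
            def __init__(self, center, radius):
                self.center, self.radius = center, radius
            def hit(self, o, d):
                # Nearest intersection with t >= 0; assumes |d| == 1.
                p = [oc - c for oc, c in zip(o, self.center)]
                b = 2.0 * sum(pi * di for pi, di in zip(p, d))
                c = sum(pi * pi for pi in p) - self.radius ** 2
                disc = b * b - 4.0 * c
                return disc >= 0.0 and (-b - math.sqrt(disc)) / 2.0 >= 0.0

        class Node:
            def __init__(self, spheres):
                self.lo = [min(s.center[i] - s.radius for s in spheres) for i in range(3)]
                self.hi = [max(s.center[i] + s.radius for s in spheres) for i in range(3)]
                if len(spheres) <= 2:
                    self.leaf, self.children = spheres, None
                else:
                    axis = max(range(3), key=lambda i: self.hi[i] - self.lo[i])
                    spheres = sorted(spheres, key=lambda s: s.center[axis])
                    mid = len(spheres) // 2
                    self.leaf, self.children = None, (Node(spheres[:mid]), Node(spheres[mid:]))
            def hit_box(self, o, d):
                tmin, tmax = -math.inf, math.inf
                for i in range(3):
                    if abs(d[i]) < 1e-12:
                        if not (self.lo[i] <= o[i] <= self.hi[i]):
                            return False
                        continue
                    t0, t1 = (self.lo[i] - o[i]) / d[i], (self.hi[i] - o[i]) / d[i]
                    tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
                return tmax >= max(tmin, 0.0)
            def hit(self, o, d):
                if not self.hit_box(o, d):
                    return False                  # whole subtree skipped
                if self.leaf is not None:
                    return any(s.hit(o, d) for s in self.leaf)
                return self.children[0].hit(o, d) or self.children[1].hit(o, d)

        if __name__ == "__main__":
            spheres = [Sphere((x, y, 5.0), 0.3) for x in range(-10, 11) for y in range(-10, 11)]
            root = Node(spheres)
            print(root.hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # True: hits the grid
            print(root.hit((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # False: rejected at the root

    In the example, the second ray is rejected by a single box test at the root even though the scene holds 441 spheres.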

    [ Parent ]

    what a dolt. (none / 1) (#92)
    by the ghost of rmg on Fri Apr 22, 2005 at 02:11:33 PM EST

    the advantage of quantum computing is obvious: P != NP. the ability to solve NP-problems in polynomial time would fundamentally change computing.

    now go write some java. i'm sure there's some business logic you could be implementing.


    rmg: comments better than yours.
    [ Parent ]

    You have proof? (none / 0) (#93)
    by derobert on Fri Apr 22, 2005 at 03:14:28 PM EST

    Neat! I'm glad to hear you've proven that P != NP. Have you claimed that Clay Millennium Prize yet?

    [ Parent ]
    it was proven by computer hackers (none / 1) (#96)
    by the ghost of rmg on Fri Apr 22, 2005 at 05:46:23 PM EST

    in the early nineties. you have to be pretty 1337 to get it, but there's a perl script that does the conversion. of course, a lamer like yourself wouldn't know anything about that.


    rmg: comments better than yours.
    [ Parent ]
    Ah (none / 1) (#98)
    by trhurler on Sat Apr 23, 2005 at 12:39:21 AM EST

    You've fallen for the big lie. Quantum computers can solve NP problems in P time IFF the size of the problem is less than 2^n where n is the number of qubits. It turns out that the difficulty of increasing the number of qubits is roughly the same as the difficulty of just producing massively parallel traditional computers. Also, all existing quantum computers and all that are possible under present theory are completely destroyed when you read their outputs, making them, to say the least, a bit impractical for anything but lab use.

    --
    'God dammit, your posts make me hard.' --LilDebbie

    [ Parent ]
    No longer the decision factor, for many (2.75 / 4) (#53)
    by strlen on Thu Apr 21, 2005 at 02:56:24 AM EST

    I recently chose a 1.5 GHz Pentium-M based system for a laptop, largely due to its low weight and prolonged battery life (8 hours with two batteries).

    I also chose a chip with a lower clock rate but a large cache for a colocated server -- choosing an Intel CPU to match a server motherboard that seemed like an excellent bargain.

    I chose a 2.0 GHz Athlon 64 rather than a 3.2 GHz Pentium 4 for my desktop machine -- due to bang for the buck -- and again, a bargain-for-the-money motherboard (for the applications I was looking for in a desktop).

    My home non-desktop, non-laptop machines (e-mail, firewall, DNS, DHCP, testing, etc.) are largely SPARCs and Alphas, due to the facts that a) their speed isn't terribly relevant (especially when I can simply cross-compile on the AMD-based machine to build NetBSD's world if push comes to shove), b) their longevity (due to the presence of high-quality components), c) having a serial console, and d) the essentially low, if not free, price when obtaining them.

    I don't think that I have, in recent years, made a decision based explicitly on clock rates (except when choosing to get an Athlon machine to replace an older PIII 700 laptop which was my fastest machine at that time).

    Not to mention that, for the performance most people are interested in (games, video editing), the things that matter are (first and foremost) the GPU, the hard disk speed, and the memory speed.

    --
    [T]he strongest man in the world is he who stands most alone. - Henrik Ibsen.

    Interesting point. (none / 0) (#54)
    by Lisa Dawn on Thu Apr 21, 2005 at 03:28:09 AM EST

    Maybe market limitations, rather than technological ones, will defeat Moore. In fact, I'm happy with 2 GHz for most things, though I could see myself pushing 32 GHz to its limit. Of course, that's for complex realtime simulations.

    Still, I'm holding out for fully optical systems. I read about a 100% optical router or something of the sort a few years ago, and that got me thinking. Not just reduced heat (I assume), but think of this: no significant quantities of metal. In fact, no electrical conductivity at all. That I would pay dearly for. So would governments worldwide, I can only assume. I'm not sure that EMP and HERF are commonly used in war, but they could be made irrelevant in this respect.

    [ Parent ]

    Not purely the market and not purely moore's law (none / 0) (#64)
    by porkchop_d_clown on Thu Apr 21, 2005 at 09:45:42 AM EST

    The thing retarding clock speeds right now isn't transistor count, it's heat. The faster the chip, the more power it requires and the more heat it generates.

    That's what's driving the switch to multi-core chips: they run slower, but you get the equivalent of two CPUs in a single package. Thus, you get more performance at lower cost.

    How many trolls could a true troll troll if a true troll could troll trolls?
    [ Parent ]

    GODDAMMIT (2.66 / 6) (#61)
    by creaothceann on Thu Apr 21, 2005 at 08:04:22 AM EST

    What this indicates is that computers are catching up fast. If Moore's law holds, then in 30 years, computers will be able to "think" faster than humans. Even before computers overtake the human brain, they may well become capable of improving on their own designs. The possibility of computers eventually rendering humans obsolete is touched on in [...].

    COMPUTERS DON'T THINK!

    The CPU is just a damn hardware interpreter crunching its way through a row of RAM cells. It's the program code that matters, and even genetic algorithms won't be able to create it completely on their own.

    That's why the quotes... (none / 0) (#74)
    by Parity on Thu Apr 21, 2005 at 04:00:26 PM EST

    The quotes around "think" indicate he's using
    the term in a colloquial and inaccurate meaning
    that, nonetheless, we will all understand.

    Breathe deeply. Relax. It's only a word.

    --Parity None


    [ Parent ]

    Yeah, used quotes... (none / 0) (#94)
    by creaothceann on Fri Apr 22, 2005 at 03:19:02 PM EST

    But his argument amounts to "Scientists need money. Let's give them x trillion dollars each year, and soon we will have anti-gravity."
    It just doesn't work like that.

    [ Parent ]
    That's a nonsequitur (none / 0) (#77)
    by p3d0 on Thu Apr 21, 2005 at 05:23:50 PM EST

    Couldn't a similar statement be made about the brain? It's just cells!!!
    --
    Patrick Doyle
    My comments do not reflect the opinions of my employer.
    [ Parent ]
    Yes. (none / 0) (#95)
    by creaothceann on Fri Apr 22, 2005 at 03:36:49 PM EST

    That's the flaw - brain cells don't think, and CPUs don't help write a letter.
    But they can form a larger system (a mind / a word processor) that has totally new abilities.

    Do you know about emulation? I can play Super Metroid on my PC, even though ZSNES (the "brain cell") has no clue whatsoever about brilliant level design. They're two different things.

    [ Parent ]
    Singularity (none / 1) (#62)
    by schrotie on Thu Apr 21, 2005 at 09:00:43 AM EST

    Vinge's singularity idea has intrigued me since I first heard about it, but I think it overlooks an important fact: the idea may hold for anything that can be done with information alone, but there are things that have to be tried against reality. That is what science is about, and it has also always been an important part of engineering. The reason for this is that we do not have a perfect theory of the world, and that even if we had one, it might be faster (for a long time to come) to try things than to calculate them at the quantum or superstring level, or whatnot.

    Now, real-world tests cannot be sped up infinitely as (apparently) information processing can. Speeding up real-world tests implies higher energy usage, more stress on the parts involved, and more - and there are physical limits. So even if computers become very fast, they will probably not be able to accelerate Moore's law to infinity, because they cannot derive the implied physics from thinking alone. And even if they could, the faster computers would have to be built, which has to be done in the real world and can't be accelerated infinitely.

    And then there is what I call - for lack of a better term - information ecology. Little is known about rules that govern very complex systems of information. It might be that there are no limits to the speed and complexity of information processing systems. But maybe there are.

    If modern physics is not very much mistaken, information cannot travel faster than light, and information cannot be encoded in anything smaller than the fabric of the universe (quantum particles, superstrings, or quantum space itself - space is probably not continuous). So there are physical limits. Which are broad. But systems are not going to become "infinitely" intelligent. That is not the point I'm trying to make, though: there might be laws that govern the organization of information, and there might be walls we don't yet know about.

    Technology Review; not Technological (none / 0) (#69)
    by beefman on Thu Apr 21, 2005 at 01:06:22 PM EST

    -Carl

    Is in Islington Bile.
    superconductor != semiconductor (none / 1) (#78)
    by full plate on Thu Apr 21, 2005 at 05:55:28 PM EST

    The (5,5) single-walled carbon nanotube (SWNT) is a superconductor at room temperature ...what kills me is the "room temperature" part. You could have at least slightly bent the truth and said it was an extremely high-temperature superconductor (i.e. >100 kelvin) ...but room temp! CNTs are superconductors at approximately 15 K, not 300 K - big difference. Now there are some quacks out there who claim they have seen some effects which might maybe perhaps be superconductivity, but no one has to date produced a CNT that transmits electrons with no resistance and melts in your mouth, not in your hand. Maybe you're getting the words mixed up (semi/super), but there is not - and possibly never will be - a room-temperature superconductor. I think CNTs are cool enough as-is, without pretending they turn lead to gold and non-scientists to scientists.
    Space is like ______, it can only be ______ in its absence.
    he probably means (none / 0) (#79)
    by demi on Thu Apr 21, 2005 at 06:47:10 PM EST

    ballistic conductance, which to the layman is analogous to superconductivity over short distances (i.e., less than the mean free path of an electron in the given medium). Some metallic SWNTs like (5,5) are ballistic conductors at RT, for distances of a few hundred nm.

    [ Parent ]
    Not super (none / 0) (#89)
    by paranoid on Fri Apr 22, 2005 at 04:36:23 AM EST

    I don't have a link handy, but these (5,5) nanotubes are extremely good conductors, with minuscule resistance that is independent of the nanotube length. This is NOT superconductivity, but for practical purposes it is probably close enough to get excited about.

    [ Parent ]
    Overtaking the Human Brain (none / 1) (#83)
    by cronian on Thu Apr 21, 2005 at 09:58:21 PM EST

    I have yet to see evidence that computers can or will overtake the human brain. The capabilities of human brains can and do increase. People can learn. For instance, ChessBase, maker of some of the best chess software, has an article series asking whether top humans are improving faster than top chess computers. Is processing power a significant bottleneck for the human brain? Some people have shown extraordinary capabilities for performing mental computation. If there is a teachable method for people to learn this, what evidence is there that people couldn't take advantage of it?

    Even without this, what reason is there to assume that physical constraints are the main constraints on human thought? The human "software", better known as things like language, mathematics, and philosophy, provides improved thinking for humans. In addition, humans can learn from the advances made by computers.

    We can keep researching technology and improve the ability of computers to accomplish many tasks. However, I think too many people forget that we sometimes need to do more work to make the latest discoveries accessible to humans. When more humans can understand the latest discoveries, it will lead to even more advances.

    We perfect it; Congress kills it; They make it; We Import it; It must be anti-Americanism
    Even when we hit the wall... (none / 1) (#97)
    by OpAmp on Fri Apr 22, 2005 at 06:29:06 PM EST

    ...with current technology it won't be that bad.

    First, when the clock-speed barrier is indeed hit, it will be hit first for high-end workstation microprocessors. However, even then there will be enough room for other devices (e.g. microcontrollers, ASICs) to grow. This will probably lead to a paradigm change towards even more intelligent peripherals, offloading even more work from the CPUs (already started, e.g. GPUs).

    Second, we will probably see a shift towards parallelism (already starting).

    Third, the optimisation techniques for software will probably be revisited. When throwing more cores and megahertz at the problem (as is currently done) no longer works, other avenues will have to be explored. As simple a step as targeting the build for a modern CPU can give a speed increase on the order of several hundred percent (my own experience). For example, how many AMD64 machines today run mostly code built for 386 or Pentium I microprocessors, effectively wasting already-available features which could give tremendous speed benefits in certain situations? Not to mention algorithmic optimizations.

    Fourth, we may see the emergence of smarter ways of doing data processing in hardware and software using these features (think more MMX- or AltiVec-like ideas).

    Fifth, we may see wide deployment of programmable logic circuits (like FPGAs) as coprocessors. It's true that they will still run at lower speeds than microprocessors, but if they can do as much work in one clock cycle as a microprocessor in a thousand cycles, using them would be viable.

    We may yet see a lot before the main focus shifts away from silicon.

    Quantum computers are just philosophical abstracti (none / 0) (#102)
    by Jacksonbrown on Sat Apr 23, 2005 at 10:13:07 AM EST

    I don't really believe that quantum computers can be implemented. We need a full set of components to create a computer: first a NOT logic element, then an OR or AND gate (other variants are possible, but we need a logically complete set), memory cells, and conductors. Do we have all these elements for quantum computers? I think not... And there are fundamental difficulties in creating all of these components.

    Uhm, (none / 0) (#106)
    by lukme on Mon Apr 25, 2005 at 12:14:42 PM EST

    The way that it has been implemented using NMR is that the qubits are stored as the spins of unique hydrogen or fluorine nuclei (both spin 1/2), and are operated on by pulse sequences that perform operations analogous to AND, OR, NOT, ... .

    A similar thing can be done with ion traps, using pulses of laser light to perform the operations.

    Neither of these methods is good for performing Q-computing on a budget, since an NMR machine costs more than $500,000 used - not counting all of the chemicals and equipment you would need to synthesize the compound to be used.

    To implement the ion traps, there is the associated vacuum equipment, some method of creating ions, and lots of lasers and optics. I bet this would cost more than the NMR implementation.


    -----------------------------------
    It's awfully hard to fly with eagles when you're a turkey.
    [ Parent ]
    QUANTUM CPU CYCLES FOR SALE (none / 0) (#103)
    by Byte Crime on Sat Apr 23, 2005 at 11:34:30 AM EST

    My partners and I are building an ultra-fast quantum computer in the basement. Once it's up and running, we will rent out CPU cycles over the internet through dumb terminals. Why buy an expensive, bulky, noisy, hot quantum computer when you don't have to! Just plug one of our boxes into your cable and you are off and running with unimaginable GHz to spare. Be the frag king on your block. Solve energy problems in your spare time. Create a Unified Theory during a coffee break.

    OK, I totally made all that up... but I'm wondering if there is any chance that this could happen in the next 30 yrs or so?

    I love these posts!

    As to it being done in someone's basement? (none / 0) (#104)
    by MarlysArtist on Sat Apr 23, 2005 at 08:12:01 PM EST

    Perhaps. Social scientists studying technological innovation often comment that it's someone outside 'the firm' who comes up with new stuff. When a problem such as the inevitable doom of Moore's law appears, it's called a "presumptive anomaly." A historian named Edward Constant wrote a book about this, explaining that aircraft people back in the '30s realized that while they could build much faster airframes in the near future, propeller-driven engines couldn't be made to reach those higher speeds. But the guys who developed jets weren't "insiders" in the aviation engine business.

    Jack Kilby of integrated circuit fame commented that working for TI was cool and all, but really, those aren't the kind of places people come up with really, really new stuff. He cooked up his idea all by his lonesome, when the rest of the office was on vacation; he had the lab all to himself, with no defined program and, most important, no manager. While it could be said that he worked in industrial research, when he developed the 'chip' idea he was in a unique situation where the ordinary constraints of industrial research were removed. Although invention sometimes occurs within the company, industrial research is geared more to development than to truly novel invention.

    Similarly, think about PCs. The ad running recently, depicting two hippies in a garage having an epiphany that would lead to the PC, is a quaint reference to the founders of Macintosh.

    Of course, that's when you make a deal with a banker for venture capital, but if you're not as savvy as Robert Noyce of Intel, in ten years the financiers pry you out of your own company (Cisco Systems) and you spend the rest of your life breathing fire in TV interviews over it.

    Marly's Artist

    "Never ask 'oh, why were things so much better in the old days?' It's not an intellegent question" --Ecclesiastes, 7:10
    [ Parent ]

    The oft-overstated "law" of Moore. (none / 0) (#110)
    by masher on Sat Apr 30, 2005 at 12:25:20 AM EST

    At its heart, Moore's Law is a simple statement of geometry. A line is linear... but the area it encloses grows as the square. Double the side of a box and you quadruple its area.

    For semiconductors, a linear decrease in transistor size yields a quadratic increase in density. Simple geometry. As long as the industry keeps moving to new process nodes at a steady pace, we'll see exponential density increases.

    So... is the industry moving linearly to each new node step? One of the first things a physics student learns is that _any_ behavior -- no matter how mathematically complex -- can be modelled as a simple linear relationship. As long as you consider a small enough interval... everything becomes linear.

    Over any small time period, you can draw a reasonably close straight-line approximation of the industry's movement to smaller node sizes. Over the full 40 years since Moore stated his "law", however... it's not really close at all. Which explains why the original one-year value expanded to 18 months, then two years, and even as high as three.

    Want to know how fast chip densities are doubling? Just see how fast we're moving to new nodes. Fit a straight line to it... smooth out any inconvenient bumps... and you just "proved" Moore's Law.
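
    A toy illustration of the geometry, in Python - the node sizes below are just the familiar ladder of feature sizes from recent years, used purely as an example, not as a roadmap:

        # Each step to a new node shrinks the linear feature size by roughly 0.7x,
        # so transistors per unit area grow by roughly 2x per node step --
        # exponential in the number of steps, even though each step is "linear".
        nodes_nm = [350, 250, 180, 130, 90, 65]   # illustrative feature sizes

        base = nodes_nm[0]
        for nm in nodes_nm:
            density = (base / nm) ** 2            # relative transistors per unit area
            print("%4d nm node: ~%.1fx the density of the %d nm node" % (nm, density, base))

    Each roughly 0.7x linear shrink doubles the density, so a steady march down the node ladder is all it takes to keep the exponential going.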


    [ Parent ]

    Yeah! (none / 0) (#114)
    by The Human Kidney on Fri May 20, 2005 at 08:50:37 AM EST

    Well written. Fantastic article that covered all views.
    The human kidney is lesser known in thine eye than the human hand. (What?)
    Thanks Mr. Kidney (none / 0) (#115)
    by BottleRocket on Fri Jun 17, 2005 at 02:07:56 PM EST

    Glad you enjoyed it. The next 30 years will be a great vindication for me if we are all enslaved by robots.

    $B R Σ III$

    [ Parent ]
