Kuro5hin.org: technology and culture, from the trenches

RISC Losing Ground to Intel

By SlydeRule in MLP
Mon Jun 25, 2001 at 07:55:44 PM EST
Tags: Technology (all tags)

Compaq announced today that it is dumping the Alpha processor and switching to Intel's Itanium. By the year 2004, all new Compaq servers will be "Intel Inside".

The equipment and engineering staff currently used for the Alpha chip will be transferred to Intel, although Compaq apparently will be retaining the intellectual property (IP) associated with Alpha and granting Intel a non-exclusive license to it. Neither Compaq nor Intel seems committed to continuing Alpha chip development beyond the upcoming EV7, and its production life is unclear.

Hewlett-Packard also has declared that Itanium will be H-P's "unifying architecture" as it phases out PA-RISC over the next few years.




Related Links
o dumping the Alpha processor
o H-P's "unifying architecture"
o Also by SlydeRule

RISC Losing Ground to Intel | 27 comments (27 topical, 0 editorial, 0 hidden)
A warning (2.08 / 12) (#1)
by www.sorehands.com on Mon Jun 25, 2001 at 03:37:01 PM EST

Is "Intel Inside" a required warning label?

Actually, I suspect that Intel may use some of the Alpha patents and designs for its next chip, not just buy a competing chip to bury it. Is this a sign of more movement toward WinTel systems?

Mattel, SLAPP terrorists intent on destroying free speech.

Moo? (3.81 / 11) (#2)
by raaymoose on Mon Jun 25, 2001 at 03:42:33 PM EST

And here I thought that the x86 chips had moved to the point where they didn't execute x86 instructions, but translated them into the equivalent instruction sequence in the processor's RISC-like instruction set, then executed these instructions. I know the AMD K6-I/II/III(+) and the K7-based processors do this. I'm not so sure about Intel processors, but I suspect the same is true.

I just think the title is a little misleading; it should be more along the lines of the Alpha possibly disappearing, not RISC. But the point is good. I like the Alpha and would hate to see it disappear.

x86 and RISCy cores (4.50 / 6) (#15)
by shoeboy on Mon Jun 25, 2001 at 06:16:51 PM EST

And here I thought that the x86 chips had moved to the point where they didn't execute x86 instructions, but translated them into the equivalent instruction sequence in the processor's RISC-like instruction set, then executed these instructions. I know the AMD K6-I/II/III(+) and the K7-based processors do this. I'm not so sure about Intel processors, but I suspect the same is true.

Yes, it is true. Intel started this with the Pentium Pro core. It was the first mainstream "post-RISC" chip and should have ended the RISC vs. CISC debate, but still it continues. The Pentium Pro pretty conclusively proved that implementations of CISC instruction sets could be competitive with RISC implementations. In fact, the .35 micron PPro briefly snatched the SpecINT title away from the .50 micron Alpha. Digital snatched it right back, but the point was made and the post-RISC era was ushered in.

Anyway, there are advantages to the micro-op approach (breaking CISC instructions up into RISC-like operations and executing those). The biggest is that CISC instructions take up less space, so CISC code is more cache-friendly than RISC code. The main drawback is that you have to retranslate the CISC instructions in your cache back into micro-ops every time you execute them. The P4 addresses this problem through the use of a trace cache, which stores already-decoded micro-ops, though that limits the benefit of smaller CISC code. I could go on and on, but the point is that the RISC vs. CISC debate has become extremely muddy and complex and isn't worth much anymore.
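As an aside, the "cracking" of a memory-operand CISC instruction into load/execute/store micro-ops can be sketched in a few lines. This is a toy illustration only; the instruction names and the three-op split are invented for the example, not any real Intel or AMD decoding scheme.

```python
# Toy sketch: how a CISC read-modify-write instruction might be cracked
# into RISC-like micro-ops. Encodings here are invented for illustration.

def crack(insn):
    """Decompose a toy CISC instruction (op, dst, src) into micro-ops."""
    op, dst, src = insn
    if dst.startswith("["):                    # memory destination: read-modify-write
        addr = dst.strip("[]")
        return [
            ("load",  "tmp0", addr),           # bring the memory operand into a register
            (op,      "tmp0", src),            # do the ALU work register-to-register
            ("store", addr,  "tmp0"),          # write the result back to memory
        ]
    return [(op, dst, src)]                    # register form is already RISC-shaped

# One x86-style "ADD [mem], reg" becomes three simple micro-ops;
# a register-register ADD passes through as a single micro-op.
print(crack(("add", "[0x1000]", "eax")))
print(crack(("add", "ebx", "eax")))
```

The retranslation cost mentioned above is exactly the work `crack` does on every fetch from a conventional instruction cache; a trace cache sidesteps it by caching the micro-op list instead of the original bytes.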

No more trolls!
[ Parent ]

Historic detail (3.33 / 3) (#21)
by infraoctarine on Tue Jun 26, 2001 at 12:51:36 AM EST

Actually, I believe the x86 clone from NexGen, called the Nx586, was the first to convert the x86 ISA to internal micro-ops. It came out in early 1995, the PPro in late 1995 (the PPro was better, though).

AMD later bought NexGen, and the same people went on to design the successful K6 CPU.

[ Parent ]

AARRRGHH! (3.77 / 9) (#3)
by Xeriar on Mon Jun 25, 2001 at 03:44:09 PM EST

RISC vs. CISC does not really apply anymore in modern processor designs - ideas got swapped from both, and now we have a bunch of mutts.

When I'm feeling blue, I start breathing again.

First, Last and Always (3.50 / 6) (#4)
by jd on Mon Jun 25, 2001 at 03:52:22 PM EST

Unless I'm mistaken, this leaves the "purest" contender for the "RISC" throne the StrongARM, which was incidentally also the -first- RISC chip ever built.

(Well, OK, its predecessor, the ARM chip, was, but that's nit-picking.)

IMHO, the RISC concept of ultra-simplified Reduced Instruction Set Computers is by far the best design. Complex computers are more vulnerable to unexpected side-effects, chip bugs, etc., simply because you've got to check a much greater range of possible instructions and states.

In fact, IMHO, the "ideal" CPU would not be "central" at all, but rather consist of possibly hundreds of "trivial" processing elements, each essentially independent, apart from some basic communications mechanism.

Now, I've argued this before, for running JAVA applications at the hardware level, WITHOUT having any "virtual machine" layer. One of the counter-points was that the communications would become a bottleneck, essentially wiping out any gains the vastly simplified structure could achieve.

I'm going to answer that point. To have a bottleneck, you must have more data wanting to go through a specific channel than that channel can support. Much like the US road network, in fact. :)

However, let's imagine a 3-layer system. Layer 1 is memory. Main memory, cache, register stacks, etc. It's just high-speed memory, to be used as the processor needs.

Layer 2 is the hardware side of the processors. It won't be just one processor, it'll be as many processors as you can possibly cram onto the silicon. My guess would be you could fit around 1024 light-weight RISC processors onto a 3" wafer.

Layer 3 is a network layer, and is the key to having no bottlenecks. In essence, every processor element could have enough bus width to dump and load its entire state in a single cycle. This would effectively allow you to compose "complex" instructions that are as fast as, if not faster than, ones on a CISC chip. It is easy to envisage a pipe switcher that would operate as fast as any instruction look-up system.

Have you read the FleetZero papers? (3.00 / 2) (#7)
by QuantumG on Mon Jun 25, 2001 at 04:31:19 PM EST

Async logic, wave of the future. Yah.

Gun fire is the sound of freedom.
[ Parent ]
Some comments... (4.50 / 2) (#14)
by infraoctarine on Mon Jun 25, 2001 at 05:49:50 PM EST

IMHO, the RISC concept of ultra-simplified Reduced Instruction Set Computers is by far the best design

Simple? Well, that was the idea, originally. But modern superscalar, out-of-order cores like the Alpha are very complex; in fact, verification is a nightmare on these large cores. On the other hand, this is mostly because these cores exploit instruction-level parallelism to speed up execution, something which is even harder on a CISC machine.

In fact, IMHO, the "ideal" CPU would not be "central" at all, but rather consist of possibly hundreds of "trivial" processing elements, each essentially independent, apart from some basic communications mechanism.

This depends on what you want. Can your algorithm be parallelized to take advantage of all those small cores? What is the communications overhead (not just bandwidth but also latency)? Applications that can easily be multithreaded, such as web servers or databases, can benefit from such an architecture, but if you need, for instance, heavy floating-point number crunching (as many scientific applications and games do), it would be much harder to get performance out of it. I doubt there is an "ideal" processor architecture. It all depends on what you intend to use it for; there will always be tradeoffs to make ;)
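One quick way to reason about the parallelization question above is Amdahl's law: if only a fraction p of a workload parallelizes, n cores give at most 1 / ((1 - p) + p/n) speedup. The fractions below are illustrative guesses, not measurements of any real workload.

```python
# Amdahl's law: upper bound on speedup from n processors when only a
# fraction p of the work can run in parallel.

def amdahl_speedup(p, n):
    """Best-case speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Independent-request workloads (p near 1) can exploit a sea of tiny
# cores; a half-serial kernel tops out near 2x no matter how many
# processing elements you etch onto the wafer.
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 1024 cores give at most {amdahl_speedup(p, 1024):.1f}x")
```

This is why the hundreds-of-trivial-elements design helps some applications enormously and others hardly at all.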

Now, I've argued this before, for running JAVA applications at the hardware level, WITHOUT having any "virtual machine" layer.

There are actually some embedded cores that do this. But the Java ISA is a complex one, so this necessarily means such a processor will be CISCish.

About your 3-layer architecture, I have to confess I don't understand how it would work. It's layer 3 that got me confused. Where is the state dumped or loaded to, and why? What do you mean with instruction look-up system? Is it the normal instruction-fetch we are talking about? Maybe you'd care to explain this further?

I'm not sure how many minimalist cores you could have on a 3" wafer, but it seems like you propose to use this whole piece of silicon as a single huge processor. This is not possible, at least not today. Larger die sizes mean increased likelihood of errors in manufacturing. The largest dies that are manufactured are something like 3x3cm. Larger than that, the fraction of successfully manufactured dies (yield) is just too low. On top of that, you have problems with heat dissipation, clock skew etc. which are worse on large dies.

[ Parent ]

Big network of small processors (none / 0) (#27)
by cameldrv on Tue Jun 26, 2001 at 10:20:49 PM EST

This has been tried, as the Connection Machine. It was theoretically faster than anything out there, but it was very difficult to program for. The same issues come up when you go to FPGA-based designs. Ultimately, our programming languages are based on a von Neumann paradigm, and it's not so easy to translate them to a different one. Furthermore, almost all programmers think in a von Neumann mode, and so it is difficult to get people to change around to your way of thinking. Hence, all the modern CPU architectures present a von Neumann style front-end to a sophisticated parallel processor. This is becoming more difficult. It is widely acknowledged that massively parallel architectures at all levels provide the highest computational throughput. However, it's a lot easier to change people's CPU than it is to change people's minds.

[ Parent ]
Several problems (4.50 / 8) (#5)
by delmoi on Mon Jun 25, 2001 at 04:18:03 PM EST

First, Intel's Itanium is not CISC; it's EPIC, a form of VLIW (very long instruction word). EPIC designs take advantage of CPU technology that was not available when either CISC or RISC was created.

Secondly, HP had a hand in developing the Itanium ISA (instruction set architecture), so it's not like they're losing out on anything.

Thirdly, the Pentium II/III/4 (but not the original Pentium, IIRC) and the Athlon all use RISC cores, with translators feeding the more complex CISC code into them.

So, -1 from me, but since this post does have a lot of topic-relating stuff, I'll leave it as a topical comment.
"'argumentation' is not a word, idiot." -- thelizman
Object Oriented losing ground to Microsoft? (3.16 / 6) (#6)
by ritlane on Mon Jun 25, 2001 at 04:18:17 PM EST

One is a philosophy (more or less) the other is a company.

Besides, even as a software guy I know that this old argument no longer applies to modern chips. They are both converging in a common (but more RISCish) direction.

Witness the G4 (RISC) adding Altivec (CISC where needed)

I like fighting robots
Altivec (2.50 / 4) (#9)
by infraoctarine on Mon Jun 25, 2001 at 05:05:18 PM EST

Witness the G4 (RISC) adding Altivec (CISC where needed)

Altivec is not CISC; it is SIMD (single instruction, multiple data), or a "vector unit". Altivec is much more RISC-like than CISC-like in the way instructions are represented and decoded.
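For readers unfamiliar with the term, "single instruction, multiple data" just means one operation applied across every lane of a short vector at once. Here is a pure-Python model of a four-lane vector add; it illustrates the concept only and carries no real Altivec semantics.

```python
# Toy model of a SIMD vector add: one "instruction" computes all lanes.

def vadd(a, b):
    """One SIMD add: element-wise sum across all lanes of two vectors."""
    assert len(a) == len(b), "SIMD operands must have the same lane count"
    return [x + y for x, y in zip(a, b)]   # every lane updated by the one op

print(vadd([1, 2, 3, 4], [10, 20, 30, 40]))   # [11, 22, 33, 44]
```

Note that nothing here says anything about how the instruction is encoded or decoded, which is the axis the RISC/CISC distinction actually lives on.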

[ Parent ]

yes, but... (3.25 / 4) (#10)
by ritlane on Mon Jun 25, 2001 at 05:12:43 PM EST

Things done in Altivec can be done without it (just slower), so I was looking at it as CISC in philosophy: adding more than the bare minimum of instructions needed to accomplish a task.

The fact that we are nit picking like this simply supports the point that others (including myself) are making:
It is a waste of time to try to classify chips as RISC or CISC

I like fighting robots
[ Parent ]
wrong definition of RISC (2.50 / 4) (#11)
by uweber on Mon Jun 25, 2001 at 05:24:19 PM EST

RISC is not primarily about fewer instructions but about instructions that take fewer cycles to complete, so a SIMD unit is not against the RISC philosophy. Actually, the most un-RISCy instruction in a processor is integer division, which usually takes up to 5 times as long as the others.

[ Parent ]
RISC (3.00 / 2) (#13)
by westgeof on Mon Jun 25, 2001 at 05:34:37 PM EST

Well, to lead this further into the realm of nitpicking, RISC actually is about fewer instructions (hence the name Reduced Instruction Set Computer).
Having instructions that all (or most) take only a single clock cycle, together with the pipelining concept, is one of the reasons behind the development of the architecture, along with simplifying the I/O complexity, but neither of these truly defines RISC.

As a child, I wanted to know everything. Now I miss my ignorance
[ Parent ]
RISC (4.00 / 4) (#18)
by Gat1024 on Mon Jun 25, 2001 at 06:29:57 PM EST

RISC is slightly misnamed. The original idea was that less complex instructions would provide a number of benefits:

(1) less complex chips to design
(2) lower cost to manufacture possibly making parallel systems cheaper
(3) fewer complex addressing modes leading to better compiler optimizations
(4) more registers to ease register pressure
(5) fewer subsystem dependencies on chip, leading to higher throughput
(6) some other crap I don't remember

I think the original idea was along the lines of what Intel is doing with EPIC, namely making the compiler smarter and the hardware dumber. Unlike Intel, one of the selling points of RISC was lower cost chips.

Funny how things turn out.

[ Parent ]
time (3.50 / 2) (#16)
by delmoi on Mon Jun 25, 2001 at 06:27:34 PM EST

RISC instructions should generally all take the same amount of time. And yeah, RISC is about reducing the number of instructions (so you have less hardware and the compiler can optimize better).
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
huh? (1.00 / 1) (#19)
by Delirium on Mon Jun 25, 2001 at 06:39:45 PM EST

RISC is not primarily about fewer instructions

So then why does RISC stand for Reduced Instruction Set Code?

[ Parent ]

RISC (4.25 / 4) (#20)
by Bad Harmony on Mon Jun 25, 2001 at 07:45:04 PM EST

It doesn't. Reduced Instruction Set Computer.

The elimination of extraneous instructions was just one of the design techniques in early RISC architectures. It wasn't the primary goal, so the name is misleading. The goal was reduced complexity and increased speed, getting rid of microcode, complex addressing modes, hard to decode variable length instructions, etc. The VAX is often used as the poster child for CISC.

See this page for some notes on RISC vs. CISC, or look through the comp.arch archives for posts by John Hennessy.

5440' or Fight!
[ Parent ]

SIMD is CISC [NT] (1.75 / 4) (#17)
by delmoi on Mon Jun 25, 2001 at 06:28:41 PM EST

(no text, see subject)
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
Apples and Oranges (3.66 / 3) (#22)
by infraoctarine on Tue Jun 26, 2001 at 01:26:53 AM EST

RISC/CISC and SIMD/SISD/MIMD/MISD are different sets of classifications. RISC/CISC are design philosophies dealing with the way ISAs and processor cores are implemented. xIyD is another classification of architectures, based on how they use instruction and data streams. You cannot say "SIMD is CISC" or "SIMD is RISC"; it could be implemented either way (so yes, my earlier comment was badly phrased ;).

Current SIMD extensions, such as Altivec, are implemented with a more RISC-like philosophy. The PowerPC is itself not RISCish in the sense that it has a minimal instruction set, but the overall design philosophy is (see comment by Gat1024).

[ Parent ]

RISC will survive (3.00 / 4) (#8)
by westgeof on Mon Jun 25, 2001 at 04:57:13 PM EST

Technically, anyway. Most processors are somewhere in between, since the line between RISC and CISC can be as arbitrary as you want. (Where is the line between reduced and complex, especially since we're approaching the center from both sides?)
Of course, in smaller devices, PDAs et al., if nowhere else, the smaller size of RISC-based processors will keep the architecture alive. (Until my personal favorite, the VLIW format, takes off and wins the race ;-)

On a side note, though, I sure won't miss the PA-RISC. Let's just say that after working on a few of those, the only nice thing I can say about HP is that they make nice calculators...

As a child, I wanted to know everything. Now I miss my ignorance
comp.arch (3.25 / 4) (#12)
by mlinksva on Mon Jun 25, 2001 at 05:26:01 PM EST

You may find slightly more informed commentary on comp.arch (google groups link), though you'll likely come away even more confused. Even MIPS is chugging along as a high-end CPU.
imagoodbitizen adobe unisys badcitizens
Long time overdue (2.33 / 3) (#23)
by pavlos on Tue Jun 26, 2001 at 03:49:00 AM EST

I'm surprised that the Alpha wasn't discontinued earlier. It has not been worthwhile from a price/performance point of view for a long time, except for compiled (scalar) FP code. Even that advantage is now lost (and has been debatable since SIMD extensions arrived).

If you look at these SPEC figures you see that, while the Alpha is beaten only by the Itanium in FP performance, it is itself only slightly better than the fastest P4. In integer performance, it is similar to or slower than P4/Athlon processors. If you look at older results, you find this has been the state of affairs since 1999 or so. Even though new Alpha releases briefly captured first place, Intel's faster release cycles meant that Intel had the fastest CPU on offer most of the time.

I don't understand why anyone would pay the premium of a minority CPU (less software, fewer motherboards, fewer system vendors, expensive CPUs, slow release cycles) for such a modest gain in performance. I think any alternative CPU that promises better performance would have to be at least twice as fast to be worth considering.

The main contribution of the Alpha to the computing industry, I think, has been to provide a temporary platform for 64-bit Windows NT and Linux to be developed. I think it is telling that Microsoft dumped the 64-bit Alpha version of NT as soon as Itanium arrived on the scene.


Scalability? (4.50 / 2) (#24)
by pharm on Tue Jun 26, 2001 at 06:00:59 AM EST

You're forgetting the importance of scalability in the usage of a particular CPU. By all accounts Alpha-based systems scaled very well indeed, and they are much used in large multiprocessor clusters.

When you're using 512 dual CPU nodes in your supercomputing cluster, poor scalability will kill any performance advantage that you might think a particular cpu should have based on its raw SPEC figures alone.

Whether the scalability of Alpha systems was a property of the CPU itself, I can't say.

[ Parent ]

Packaging, 64-bit address space (4.00 / 2) (#26)
by pavlos on Tue Jun 26, 2001 at 07:54:34 AM EST

The type of CPU that you use affects scalability only in SMP arrangements. In supercomputers such as the T3D, each cluster of two CPUs functions essentially as an independent computer, so any CPU that scales to dual SMP would do the job. Other than that, probably the most important scalability factor is packaging: how big is the CPU and how much power does it dissipate. Having 64-bit addresses also helps, but only if you plan to expose a NUMA-style unified address space.

In the case of supercomputers like those built by Cray, almost everything is a new design, so there is no penalty to using a minority CPU (except the risk of it being discontinued). But such systems are increasingly facing competition from less elegant Beowulf clusters running on cheap, commodity hardware. Apart from neater packaging, the only things that "real" supercomputers have over beowulf clusters are better interconnections and operating systems. If I were a supercomputer vendor, I would be putting all my effort into making ultra-fast very local area network cards and software.


[ Parent ]

You don't buy CPUs, you buy solutions. (4.50 / 2) (#25)
by Tezcatlipoca on Tue Jun 26, 2001 at 06:28:37 AM EST

In many situations a RISC/Unix solution is far better than an Intel/Windows or Intel/Unix one.

Although Intel/Unix solutions (more precisely, Intel/Linux) are making some inroads in computing clusters, Intel/Windows was never an option for solutions that needed to scale. The Windows UI adds a layer of complexity that you can avoid with Unix solutions, which normally are available with RISC processors. I have seen clusters of Intel/Windows machines going down because of problems with Windows Explorer. That puts a Windows/Intel solution out of contention.

Anyway, the MHz and benchmarks are pointless. The fact is that anybody with experience in the field knows that RISC/Unix is a better proposition for many heavy-duty tasks, and usually when somebody makes the mistake of putting Intel/Windows where it should not be, disaster strikes.

The benchmark? My stopwatch, measuring how long it takes two solutions to solve a reasonably sized subset of my problem.

Might is right
Freedom? Which freedom?
[ Parent ]

