
The RISC vs CISC debate has been dead for years. Doubly so ever since we found the limits of scaling clock frequencies ever higher. After all, the RISC movement started as a reaction to the difficulties of scaling the architectures of the day to faster clock frequencies. For a while now (decades, really), CPU designers have been concentrating on doing more work per clock cycle, which is rather anti-RISC. So the only questions that matter are "can we implement this feature efficiently?" and "does this feature provide enough performance or power gain for the implementation cost?"


I don't think it's quite dead yet; the performance/power hit for decoding x86-64 instructions is significant, just to decode to a RISC-like microcode anyway. However, that may be more of a statement about x86-64 than it is about CISC in general. Certainly, the days when CISC made any sense at all, mainly to ease assembly programming, are long gone; remember the 8080's string instructions? Yeah, neither does anyone else.
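Much of that decode cost comes from x86's variable-length encoding: you can't find instruction N without first sizing every instruction before it, while a fixed-width encoding lets you jump to any boundary directly. A toy sketch of the difference (the lengths below are invented purely for illustration; real x86-64 instructions run 1 to 15 bytes and need full prefix/opcode parsing to size):

```python
# Hypothetical x86-like stream: each entry is the byte length of one
# instruction. Finding where the nth instruction starts requires
# walking (decoding) all the ones before it.
variable_lengths = [1, 3, 2, 7, 4, 2, 5]

def offset_variable(n):
    # Sequential dependency: sum the sizes of all earlier instructions.
    return sum(variable_lengths[:n])

def offset_fixed(n, width=4):
    # Fixed-width RISC-style encoding: constant-time boundary lookup.
    return n * width

print(offset_variable(5))  # 17 -- found only by sizing 5 predecessors
print(offset_fixed(5))     # 20 -- computed directly
```

Hardware decoders parallelize this with speculative length predecode, but that machinery is exactly the power/area hit being discussed.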

However - I think that x86 is so deeply entrenched, and x86 processors are so refined these days, that the value of the architecture is in the software and the investment in the chip design, not in the architecture itself. I think if the PC industry were to start over again, it would go with some kind of POWER variant.

Regardless of CISC vs RISC, I do agree - SIMD and many-core/stream multiprocessing will make far more difference than the instruction and register flavor used on each core.


Well, the fact that x86 encoding is suboptimal is also a dead debate. If AMD had had the resources of Intel, or if Intel hadn't botched IA-64 so badly and had actually licensed it to AMD, x86-64 would have better instruction encoding, no question. (Seriously, many of the unused/slow instructions have one-byte opcodes.)
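To put a rough number on that wasted one-byte space: a partial, illustrative list of legacy one-byte opcodes that x86-64 still carries in its opcode map but makes invalid in 64-bit mode, plus the sixteen old single-byte INC/DEC encodings that had to be reclaimed as REX prefixes:

```python
# Partial list of one-byte opcodes invalid in 64-bit mode -- prime
# encoding space a redesigned ISA could hand to hot instructions.
legacy_one_byte = {
    0x27: "DAA",   0x2F: "DAS",  0x37: "AAA",  0x3F: "AAS",
    0x60: "PUSHA", 0x61: "POPA", 0xCE: "INTO", 0xD6: "SALC",
}
# 0x40-0x4F (formerly one-byte INC/DEC reg) were repurposed as REX
# prefixes in x86-64 -- evidence the one-byte space was misallocated.
rex_prefixes = list(range(0x40, 0x50))

print(len(legacy_one_byte) + len(rex_prefixes))  # 24 reclaimable slots
```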

Anyway, my point is that pure CISC designs (as much as that means anything) obviously lost ages ago. Pure RISC also lost as frequencies plateaued, or perhaps more accurately never really won; CPU designers care about what makes CPUs more performant, not abstract ideology. So we get stuff that runs counter to RISC ideals: SIMD, VLIW, out-of-order execution, and highly specialized instructions like AES and conditional moves.


Yes, I agree wholeheartedly. I still think RISC and CISC have value as terms, however vague, because they succinctly summarize design trade-offs. I fully realize that today's processors are hybrids of many techniques, and that's a good thing.


>the performance/power hit for decoding x86-64 instructions is significant, just to decode to a RISC-like microcode anyway. However, that may be more of a statement about x86-64 than it is about CISC in general. Certainly, the days when CISC made any sense at all, mainly to ease assembly programming, is long gone

CISC still has an advantage in that it effectively compresses your instruction stream, meaning you can fit more code in cache.
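A back-of-envelope example of that compression, comparing one read-modify-write of memory. On x86-64, `add dword [rdi], eax` encodes in two bytes (01 07); on a fixed-width 32-bit encoding like AArch64 the same operation needs a load/add/store sequence at four bytes each:

```python
# Byte counts for one memory read-modify-write, as a density sketch.
x86_bytes = 2        # add dword [rdi], eax  -> 01 07
arm64_bytes = 3 * 4  # ldr w1,[x0]; add w1,w1,w2; str w1,[x0]

print(x86_bytes, arm64_bytes)  # 2 12
```

It's a cherry-picked best case for CISC, of course; average density gaps across real programs are far smaller, but nonzero, which is part of why compressed RISC encodings (Thumb, RVC) exist.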


Small question: did the 8080 have string instructions? I know the Z-80 did, not sure about the 8080.



