Dear Mohammad Ahsan Rizvi, This is a controversial topic among experts; they are divided in their opinions. RISC is cheaper and faster, and hence it is often held up as the architecture of the future. RISC, however, puts a greater burden on the software.
Still, RISC hasn't been able to kick CISC out of the market, even though RISC has existed for more than 10 years.
Dear Afaq Ahmad sir, I think CISC puts a greater burden on the software because it follows a VLIW processor architecture, in which all control is handled by the compiler. Kindly reply on this topic to clear my confusion.
New CISC processors are "hiding" a RISC core behind a virtual CISC envelope. The compiler will aim to use the RISC subset, which is the best way to achieve fast execution, while the complex instructions are supported to provide backward compatibility.
I doubt experts are divided in their opinions; any new architecture that starts from scratch will follow the RISC approach.
I guess that the only CISC instruction set that has survived is the x86 instruction set. But, as many others have mentioned, Intel and AMD processors are not true CISC processors anymore. On the inside they are RISC processors, and all CISC instructions are translated to RISC micro-instructions. The L1 instruction cache might even store the micro-instructions directly.
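To make the translation idea concrete, here is a minimal sketch in C of how a decoder might crack a CISC-style read-modify-write instruction into load/add/store micro-ops. The micro-op names and the structure are my own invention for illustration; real Intel/AMD micro-op formats are proprietary and far more involved.

    #include <stdio.h>

    /* Hypothetical micro-ops a decoder might emit (illustrative only). */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
    typedef struct { uop_kind kind; int dst, src; } uop;

    /* Crack the CISC-style instruction "add [mem], reg" into three
       RISC-style micro-ops: load, register-to-register add, store. */
    static int crack_add_mem_reg(uop out[], int tmp, int mem, int reg) {
        out[0] = (uop){ UOP_LOAD,  tmp, mem };  /* tmp   <- [mem]     */
        out[1] = (uop){ UOP_ADD,   tmp, reg };  /* tmp   <- tmp + reg */
        out[2] = (uop){ UOP_STORE, mem, tmp };  /* [mem] <- tmp       */
        return 3;
    }

    int main(void) {
        uop buf[3];
        int n = crack_add_mem_reg(buf, /*tmp=*/7, /*mem=*/0, /*reg=*/3);
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dst=%d src=%d\n",
                   i, buf[i].kind, buf[i].dst, buf[i].src);
        return 0;
    }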
The idea of CISC came from the era when people still wrote assembler by hand: give the programmer a lot of power in few instructions and make his life easier. On the other hand, CISC is a bad fit for compilers; it is much easier for them to optimize for RISC instructions. Also, in the beginning CISC processors were faster because a single instruction could do a lot of work, so more got done in each cycle spent fetching and decoding. The number of transistors a signal must pass through in sequence limits the frequency a processor can run at. RISC architectures make it easy to separate the work into pipeline stages and thus allow for higher frequencies.
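A rough back-of-the-envelope sketch in C shows why splitting the work into more pipeline stages permits higher frequencies. The delay numbers are made up for illustration, not measurements of any real chip:

    #include <stdio.h>

    int main(void) {
        const double total_logic_ns = 10.0;   /* assumed total gate delay of the datapath */
        const double latch_overhead_ns = 0.2; /* assumed per-stage register overhead */
        /* Cycle time is the logic delay of the slowest stage plus
           latch overhead; more stages means a shorter cycle. */
        for (int stages = 1; stages <= 8; stages *= 2) {
            double cycle_ns = total_logic_ns / stages + latch_overhead_ns;
            printf("%d stage(s): cycle %.2f ns -> %.2f GHz\n",
                   stages, cycle_ns, 1.0 / cycle_ns);
        }
        return 0;
    }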
So, to answer your question: all recent processors are RISC processors (at least on the inside). I don't think that there is any recent CISC processor out there.
The definition of RISC seems to have changed over the years from the time it was first introduced in the early 1980s. Reduced Instruction Set Computer was the original definition: everything was done via register-to-register manipulation, and the only memory access instructions were Load/Store. The chip would include *lots* of registers. Execution was heavily pipelined, and any branch off the straight-line path would stall processing until the pipeline could be refilled. Lately I've seen the definition given as Reduced Instruction Set Cycle, meaning that the instructions take less time even as more and more instructions are added.
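As a concrete illustration of the load/store rule: even a one-line read-modify-write in C has to go through explicit load and store steps on such a machine. The commented instruction sequence below is schematic, not any particular ISA:

    /* One read-modify-write on memory, as plain C. */
    void bump(int *a, int i, int x) {
        a[i] += x;
        /* On a load/store machine this becomes roughly:
         *   load  r1, [a + i*4]   ; the only memory read
         *   add   r1, r1, r2      ; register-to-register arithmetic (r2 holds x)
         *   store r1, [a + i*4]   ; the only memory write
         * A memory-operand CISC machine could express the same
         * update as a single add-to-memory instruction.
         */
    }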
Simon Schröder's point that most (if not all) CPUs are now inherently RISCy in nature is true. Many of the advantages of RISC were added to what were viewed as CISC architectures. IBM System z is the classic example of a CISC architecture, but I have no idea whether it leverages any RISC concepts.
Oracle's SPARC architecture has recently been updated. IBM's POWER architecture is now on its eighth iteration, MIPS is still being enhanced (for embedded devices), and then there is the previously mentioned ARM.
Intel's Itanium family (originally designed by HP) went the other route: a VLIW processor, relying on *ultra* smart compilers to generate efficient code, something that continues to prove elusive.
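For anyone unfamiliar with the VLIW idea, here is a toy sketch in C of the scheduling problem the compiler faces: packing independent operations into fixed-width bundles, while an operation that depends on another must wait for a later bundle. The three-slot width, the op format, and the single-dependency rule are assumptions for illustration; real Itanium (EPIC) bundles have templates, stop bits, and many more constraints.

    #include <stdio.h>

    #define WIDTH 3   /* assumed: three issue slots per bundle */

    /* dep = index of the earlier op whose result this op needs, or -1
       if none (a simplification: real ops can have several inputs). */
    typedef struct { const char *text; int dep; } op;

    int main(void) {
        op prog[] = {
            { "load  r1, [a]",    -1 },
            { "load  r2, [b]",    -1 },
            { "add   r3, r1, r2",  0 },  /* also reads r2; one dep kept for brevity */
            { "mul   r4, r3, r3",  2 },
            { "store [c], r4",     3 },
        };
        int n = (int)(sizeof prog / sizeof prog[0]);
        int bundle_of[5];
        int cur = 0, used = 0;
        for (int i = 0; i < n; i++) {
            /* Open a new bundle when this one is full, or when the
               producer of our input sits in the current bundle. */
            if (used == WIDTH ||
                (prog[i].dep >= 0 && bundle_of[prog[i].dep] == cur)) {
                cur++;
                used = 0;
            }
            bundle_of[i] = cur;
            used++;
            printf("bundle %d: %s\n", cur, prog[i].text);
        }
        return 0;
    }

Running it, the two independent loads share bundle 0, while the add, mul, and store each get pushed into later bundles by the dependence chain, which is exactly why the compiler's ability to find independent work determines VLIW performance.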