> instead of having to do 12 different simple instructions, you can just do the one obscure instruction appropriate to the situation
That one instruction just turns into 12 micro-ops, though, and then you need a much more complicated front-end to decode it. (And a smart enough compiler to use it in the first place.)
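As a rough illustration (not tied to any particular chip's actual micro-op sequence), here's what a single CISC-style string-copy instruction like x86's `rep movsb` does internally, written out in C as the simple load/store/increment/branch steps that a RISC core would issue as separate instructions, and that a CISC decoder has to emit as micro-ops:

```c
#include <stddef.h>

/* Sketch: the work hidden inside one "copy a string" instruction,
 * unrolled into the simple operations it actually performs. */
void string_copy(unsigned char *dst, const unsigned char *src, size_t count) {
    while (count != 0) {  /* test counter, conditional branch */
        *dst = *src;      /* load one byte, store one byte    */
        dst++;            /* increment destination pointer    */
        src++;            /* increment source pointer         */
        count--;          /* decrement counter                */
    }
}
```

Every line in that loop body is roughly one simple instruction's worth of work, which is why the decoder that has to expand the single complex instruction ends up so much hairier than one that only ever sees the simple ops.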
You do benefit from higher code density, but in real-world comparisons of RISC and CISC code the size difference is arguably small enough not to matter, especially when there are other improvements to spend those resources on that pay off more, like better branch prediction.
Also, instruction sets like ARM and RISC-V aren't needlessly minimalist, so you still get "extra" instructions such as vector/SIMD extensions where it makes sense. The old-school kitchen-sink instruction sets aren't popular any more for a reason.
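To make the vector/SIMD point concrete, here's a minimal sketch using GCC/Clang vector extensions as a portable stand-in for real SIMD intrinsics (NEON on ARM, the V extension on RISC-V). The `v4f` type and `add4` name are just for illustration:

```c
/* Sketch, assuming GCC/Clang vector extensions: a 4-wide float
 * vector type, 16 bytes total. */
typedef float v4f __attribute__((vector_size(16)));

/* Where the target supports it, this compiles to a single SIMD add
 * that processes four floats at once -- the kind of genuinely useful
 * "extra" instruction modern RISC ISAs include, as opposed to
 * kitchen-sink complexity. */
v4f add4(v4f a, v4f b) {
    return a + b;
}
```

The point is that "reduced" never meant "no powerful instructions", only that the powerful ones earn their place by doing real parallel work rather than bundling up sequential steps.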