I assume that by "dark silicon issue" you mean the fact that our chips have become power-dominated, to such an extent that what can be done on a chip is determined by the power that can be dissipated, and not so much by what can be integrated on the device.
The answer to this is that in digital design (which is what my company does), the focus needs to shift: the prime target when designing a circuit needs to be minimizing its power dissipation. I have been in the silicon industry for over 25 years, and we have had different design focuses over that time:
- The design needs to be done with the fewest possible gates.
- The processor needs to achieve the highest clock speed.
- The power/performance ratio needs to be optimized.
Entering the 'dark silicon' era means the latter becomes the focus: optimize the power/performance ratio. Not only do we no longer care much about gate count (today it is not a prime driver any more), but the next step will be to deliberately reduce performance in order to improve the power/performance ratio.
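To make that trade-off concrete, here is a minimal back-of-the-envelope sketch (my own illustration with assumed numbers, not data from any particular chip) using the standard dynamic-power model P ≈ α·C·V²·f: dropping voltage and frequency costs performance, but it cuts the energy per operation by even more.

```python
# Back-of-the-envelope illustration of the power/performance trade-off.
# All numbers below are illustrative assumptions, not measured silicon data.

def dynamic_power(alpha, c_eff, vdd, freq_hz):
    """Classic dynamic-power model: P = alpha * C * Vdd^2 * f."""
    return alpha * c_eff * vdd ** 2 * freq_hz

# Two hypothetical operating points for the same core.
points = {
    "performance-optimized": {"vdd": 1.0, "freq_hz": 2.0e9},
    "power/perf-optimized":  {"vdd": 0.7, "freq_hz": 1.0e9},
}

ALPHA = 0.2       # assumed average switching activity
C_EFF = 1.0e-9    # assumed effective switched capacitance, in farads

for name, op in points.items():
    power = dynamic_power(ALPHA, C_EFF, op["vdd"], op["freq_hz"])
    energy_per_cycle = power / op["freq_hz"]
    print(f"{name:24s} power = {power * 1e3:6.1f} mW, "
          f"energy/cycle = {energy_per_cycle * 1e12:6.1f} pJ")

# The lower-voltage point runs at half the clock, but the V^2 term means
# each operation costs only about half the energy -- i.e. performance is
# traded away to improve the power/performance ratio.
```

The point of the sketch: the slower operating point loses half the clock rate, yet roughly halves the energy per operation thanks to the V² term, which is exactly why a power/performance-optimized design deliberately gives up raw speed.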
Note that ARM CPUs, to give an example, are primarily performance-optimized (well, at least the CPUs that are typically found in cell phones).
So I think the 'dark silicon issue' will be mitigated by a lot of digital design effort, and really a *lot*, since, for example, a CPU designed for the power/performance trade-off is quite different in architecture from a CPU designed for performance under a power constraint. (The same applies to other types of circuits.)
As to what techniques to use to optimize power/performance, well, that's a whole different question, and I think I need a (much) longer answer there.
Nice answer. Yes, you're right. This is a great topic nowadays everywhere, be it at processor companies or in academic research. The performance vs. energy-per-workload trade-off is the crucial issue. Anyway, please don't call me sir.
I would like to add a little. Some time ago, in our company, we designed a processor for a radar application. After we designed the processor, the project went south, so now we still have the processor. I hope to be able to get permission to post some documentation on it on ResearchGate.
Now, this processor was designed to optimize the power/performance ratio, while still having high performance.
If we now look 2-3 years into the future, in a 10 nm process, you would be able to:
- Integrate 1000 of these processors on a chip, together with about 100 MB of SRAM.
That would still be an acceptable chip.
- Each processor would run at a 500 MHz clock, at about 2 Dhrystone instructions per clock, so together they deliver roughly 1e6 DMIPS, i.e. on the order of 1e12 Dhrystone instructions per second (see the back-of-the-envelope sketch after this list).
- There would be no caches and no mass memory (too power-hungry).
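For what it's worth, here is a quick back-of-the-envelope check of those numbers (the core count, clock, and Dhrystone rate are taken from the list above; the per-core power figure is purely a hypothetical assumption):

```python
# Back-of-the-envelope check of the 1000-core chip described above.
# Core count, clock, and DMIPS/MHz come from the post; the per-core
# power budget is a purely hypothetical assumption for illustration.

N_CORES       = 1000
CLOCK_HZ      = 500e6      # 500 MHz per core
DMIPS_PER_MHZ = 2.0        # ~2 Dhrystone instructions per clock

dmips_per_core = DMIPS_PER_MHZ * CLOCK_HZ / 1e6     # = 1000 DMIPS
total_dmips    = N_CORES * dmips_per_core           # = 1e6 DMIPS
total_dhry_ips = total_dmips * 1e6                  # = 1e12 instructions/s

print(f"per-core throughput : {dmips_per_core:.0f} DMIPS")
print(f"chip throughput     : {total_dmips:.2e} DMIPS "
      f"(~{total_dhry_ips:.1e} Dhrystone instructions per second)")

# Hypothetical power sanity check: if each core burned, say, 50 mW at
# this operating point, the compute alone would be ~50 W -- which shows
# why the power/performance ratio, not the gate count, limits the design.
ASSUMED_MW_PER_CORE = 50
print(f"assumed compute power: {N_CORES * ASSUMED_MW_PER_CORE / 1000:.0f} W")
```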
Now, that is the easy part. It solves the hardware aspect of what is called the 'dark silicon issue'. But now the difficult part: how are we going to write software for this?
It basically means the end of the software paradigm as we know it. The current software paradigm is the "von Neumann" machine. In recent years we have had modest multiprocessors, where you hope your application contains multiple tasks, so that you can allocate one processor per task. However, such an approach would fail in a system with 1000 cores, where, on top of that, memory access is difficult.
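As a purely illustrative sketch of why "one processor per task" breaks down (the task counts below are assumptions, not measurements): with only a handful of coarse-grained tasks, almost all of the 1000 cores sit idle, so the software would have to expose much finer-grained parallelism instead.

```python
# Illustrative only: why allocating one core per application task
# does not scale to a 1000-core chip. Task counts are assumptions.

N_CORES = 1000

def task_level_utilization(n_tasks, n_cores=N_CORES):
    """Fraction of cores kept busy if each task gets its own core."""
    return min(n_tasks, n_cores) / n_cores

for n_tasks in (4, 16, 64, 1000):
    u = task_level_utilization(n_tasks)
    print(f"{n_tasks:5d} concurrent tasks -> {u:6.1%} of the cores busy")

# If an application exposes only tens of coarse-grained tasks, most of
# the chip stays dark unless each task is itself split into data-parallel
# pieces that fit the small per-core local memories.
```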
Could you provide some inspiration here? I see you live or study in California, home to some of the most advanced software companies in the world. Maybe someone at your university can provide some help here?
That's a great result. Yes, I joined the University of California, Davis for my PhD just 5 months ago. I'm new to our research group. Once I get into a good rhythm I'll definitely help you. As far as your question is concerned, I will ask the same question to a few experienced people and let you know the outcome.
I have published some results on the BLUSP processor on my home page. Please have a look, and let me know whether it would interest you to use it as a hardware platform for your research.