An example of it could be Intel's forthcoming “Falcon Shores” chip, which will have 288 gigabytes of memory and support 8-bit floating-point computation. It will be specialized for AI supercomputing.
I'm one who does not buy into the hype that AI is something discrete and brand new. In my view, AI is just more of the same, a continuous evolution of how we use computing power. I also didn't buy the hype about "the cloud." It too was more of the same, a continuously evolving WWW, or, going back before web browsers, the remote computing services we had over X.25 networks.
Having caveated the term, I predict that so-called AI will play a big part in medicine, for example in surgery. Much as surgeons practice new procedures to gauge potential outcomes before working on a real patient, computer models should be able to try out far more possible tactical approaches, and a lot faster than humans can. And learn from mistakes.
Internal medicine too, I suspect. Same idea, where computer modeling can speed up the trial-and-error process substantially.
A similar "revolution," let's call it that, occurred when finite element analysis, or finite element method (FEM), came into widespread use, I'd say during the 1970s.
The example I think applies here is engine design. Before the widespread use of FEM, engine design was largely a work of art. Different companies built their reputations on their own particular design choices, claiming specific advantages for their engine artistry. (The same can be said for chassis designs, of course.)
With FEM, any company can now design whatever type of engine it needs. Just give FEM your requirements and constraints and it will spit out your best design options.
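To make "give FEM your requirements" a bit less magical: at its core, FEM chops a structure into small pieces, builds a large system of equations from them, and solves it. Here's a toy sketch of that idea for a 1D bar fixed at one end and pulled at the other. The material, dimensions, and load are made-up numbers, and a real engine analysis would of course be 3D and far more involved; this is only meant to show the basic pattern.

```python
# A minimal 1D finite element sketch: a bar fixed at one end, pulled at the other.
# All numbers (length, area, modulus, load) are illustrative assumptions.
import numpy as np

E = 200e9      # Young's modulus, Pa (steel-like, assumed)
A = 1e-4       # cross-sectional area, m^2 (assumed)
L = 1.0        # bar length, m
P = 1000.0     # axial end load, N
n_el = 10      # number of elements
h = L / n_el   # element length

# Stiffness matrix of one linear bar element
k_e = (E * A / h) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])

# Assemble the global stiffness matrix from the element matrices
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k_e

# Load vector: a point load at the free end
f = np.zeros(n_nodes)
f[-1] = P

# Boundary condition: the first node is fixed (u = 0),
# so solve the reduced system for the remaining nodes
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Compare with the exact tip displacement u(L) = P*L/(E*A)
print("FEM tip displacement:  ", u[-1])
print("Exact tip displacement:", P * L / (E * A))
```

Even this toy version shows the workflow: discretize, assemble, apply the constraints, solve, and check against what you can verify by hand.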
My bet is that so-called AI will perform that same magic in the field of medicine. And yes, I would also predict that people will be sad to see the "artistry" aspect of medicine take a back seat. But overall, it should be a beneficial change.
I didn't really answer your question on specific chip designs. I'm not sure why chip designs would differ between AI and other advanced computing applications. Here's a view on the topic:
The success of modern AI techniques relies on computation on a scale unimaginable even a few years ago. Training a leading AI algorithm can require a month of computing time and cost $100 million. This enormous computational power is delivered by computer chips that not only pack the maximum number of transistors (basic computational devices that can be switched between on (1) and off (0) states) but also are tailor-made to efficiently perform specific calculations required by AI systems. Such leading-edge, specialized “AI chips” are essential for cost-effectively implementing AI at scale; trying to deliver the same AI application using older AI chips or general-purpose chips can cost tens to thousands of times more. ...
This report presents the above story in detail. It explains how AI chips work, why they have proliferated, and why they matter. It also shows why leading-edge chips are more cost-effective than older generations, and why chips specialized for AI are more cost-effective than general-purpose chips.
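The "specific calculations" the report mentions are, to a large extent, about doing arithmetic at lower precision, which is where the 8-bit floating point in chips like Falcon Shores comes in. Here's a rough sketch of the trade-off. NumPy has no 8-bit floating-point type, so 8-bit integers stand in here, and the matrix sizes and quantization scheme are arbitrary assumptions, but the point carries over: a quarter of the memory, and correspondingly cheaper arithmetic, for a small loss of accuracy.

```python
# A rough sketch of why low-precision arithmetic matters for AI workloads.
# 8-bit integers stand in for 8-bit floats; sizes and scheme are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)).astype(np.float32)  # "weights"
x = rng.standard_normal(1024).astype(np.float32)          # "activations"

# Simple symmetric quantization of the weights to 8 bits
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)

print("32-bit weight storage:", W.nbytes // 1024, "KiB")
print(" 8-bit weight storage:", W_q.nbytes // 1024, "KiB")  # 4x smaller

# The low-precision product stays close to the full-precision one
y_full = W @ x
y_quant = (W_q.astype(np.float32) * scale) @ x
rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print("relative error from 8-bit weights: %.4f" % rel_err)
```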
So my question would be: why wouldn't those same chip designs go into use in any advanced computer, for example in control systems?
The paper is an interesting read. I remember, years ago, when reduced instruction set computer (RISC) chips were competing against complex instruction set computer (CISC) chips. So, how did that pan out? Soon CISC chips reached performance levels similar to RISC, and that whole debate vanished. I would expect the same to happen now.