As most everyone is aware, the big conference in the chip design software world, DAC, runs July 26-31 in San Francisco. I'm looking forward to it and plan to be up there most days. Some prognosticators have already posted their "must-see" lists (Gary Smith; John Cooley laying down the law for aspiring vendors), and more are sure to come.
Rather than calling out specific companies, I'll share some of the technologies that I'll be looking to learn more about.
- Low Power
- This is one of the genuinely valuable and necessary "next big things" in methodology.
- Datapath Synthesis
- I've always been surprised that specialized datapath techniques aren't more successful. It seems like you either use an advanced RTL synthesis tool or design the datapath by hand; there's not a lot of in-between.
- MCMM (Multi-Corner, Multi-Mode)
- It sounds like the solution to many problems. But how well does it really work in practice -- how does it scale as the number of corners and modes grows?
- Parallelism (multi-threaded, multi-core, GPGPU)
- How will EDA ever catch up to designs scaling with Moore's Law? By using the parallelism available in today's CPUs and GPUs. Multi-core is working today for 4-8 cores, but may hit a wall beyond that. And what about the tremendous parallel computational power in your Graphics Processing Unit? A few EDA tools are leveraging the CUDA platform; where will it pop up next?
- Update: check out Richard Goering's interview with EDA luminary Kurt Keutzer on this topic.
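The multi-core wall mentioned above is essentially Amdahl's Law: if only a fraction of a tool's runtime parallelizes, the overall speedup saturates no matter how many cores you throw at it. A quick back-of-the-envelope sketch (the 90%-parallel figure is my own illustrative assumption, not a measurement of any EDA tool):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's Law: speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 90% of the runtime parallelizes perfectly (illustrative only).
# The speedup can never exceed 1 / (1 - 0.9) = 10x, regardless of core count.
for n in (2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.9, n):.2f}x")
```

Even under that generous assumption, 8 cores yield only about a 4.7x speedup and 64 cores under 9x, which hints at why multi-core EDA tools stall in the 4-8 core range unless the serial fraction is attacked too.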
- Asynchronous Design
- This is my token research-y interest. Synchronous design is what we all learn in school, and there's a plethora of tools (namely, the EDA industry) to automate such designs. But there are drawbacks with respect to area and power. Can we learn a new way to design, and develop new sets of IP and automation tools?