Monday, August 24, 2009

Everybody's Getting Physical

Back before "nanometer design", there was "deep submicron design", and ASIC synthesis users became very concerned about interconnect effects on timing. The first attempt to deal with this was through "Links to Layout", and especially a whole slew of custom wireload model (CWLM) strategies. If you look at SNUG programs a number of years ago, there were more CWLM papers than you could shake a stick at.

A major advance in accounting for interconnect effects was when synthesis tools started to perform "virtual layout" as part of the optimization-estimation-timing analysis loop. The most notable tool to do this was Synopsys Design Compiler Topographical. My colleagues and I did evaluations of DCT starting with 90nm designs, and data indicated better correlation and generally a better netlist for Place & Route.
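The key difference with "virtual layout" is that the estimate comes from where cells actually land in a coarse placement, not from fanout statistics. This is not how DCT works internally, of course, but a toy half-perimeter-wirelength estimate (with invented per-micron R/C values) conveys the general idea:

```python
# Toy illustration of placement-based net estimation ("virtual layout").
# Wire length comes from the half-perimeter of each net's bounding box over
# a coarse placement; the per-micron R/C values below are invented, and a
# real tool would use extracted technology data and real routing topologies.

UNIT_CAP_FF_PER_UM = 0.2   # assumed wire capacitance per micron
UNIT_RES_OHM_PER_UM = 0.5  # assumed wire resistance per micron

def hpwl_um(pin_locations: list[tuple[float, float]]) -> float:
    """Half-perimeter wirelength of the net's bounding box, in microns."""
    xs = [x for x, _ in pin_locations]
    ys = [y for _, y in pin_locations]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def estimate_net_parasitics(pin_locations: list[tuple[float, float]]) -> tuple[float, float]:
    """Estimate (capacitance in fF, resistance in ohms) from placed pin locations."""
    length_um = hpwl_um(pin_locations)
    return length_um * UNIT_CAP_FF_PER_UM, length_um * UNIT_RES_OHM_PER_UM

# Example: a 3-pin net whose pins landed at these (x, y) coordinates, in microns.
pins = [(10.0, 5.0), (42.0, 7.0), (18.0, 30.0)]
cap_ff, res_ohm = estimate_net_parasitics(pins)
print(f"HPWL = {hpwl_um(pins):.1f} um, C = {cap_ff:.1f} fF, R = {res_ohm:.1f} ohm")
```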

As we progress from 90nm to 45nm and below, physical considerations in synthesis are becoming ever more sophisticated. The latest examples are a new wave of physically aware synthesis tools that fold still more layout detail into the flow.

The Skeptic Weighs In

While "getting physical" has the feeling of more accuracy and general goodness, TNSTAAFL. Though I hope for better quality of results and predictability, I see the following drawbacks

  • It forces the logic designer to learn physical design details, and to learn new tools or coordinate with a physical designer. While that knowledge is nice to have, it takes away from the RTL creator's time to focus on architecture, design and verification.
  • The optimization process itself will either take longer (to perform the "virtual layout"), or results will be less optimal, because optimization time will be taken away in order to run physical algorithms.
  • Some of the effort may be wasted, especially for detailed buffering and gate sizing. For example, some P&R tools "throw away" the incoming netlist's buffering and sizing, and re-optimize in the physical domain. So, that effort in logic synthesis is wasted (other than its predictive benefit).
  • While WLM-based synthesis is well understood and mature, these physical tools are not. It may take a long refinement period for the tools to produce reliable netlist quality and consistency.
  • Some of these tools can't decide if they're logic design tools, physical design tools, or some mish-mash of both. In their efforts to be accessible and affordable for logic designers, they may lack the physical design functions and data access needed to do the job right.

Over the coming months, advanced synthesis users will be putting these latest tools through their paces with real designs. And we'll start to learn whether the added complexity and cost lead to better-implemented designs. If not, we can always go back to our old friend the wireload model, trying to get to P&R quickly, where the rubber really (not virtually) meets the road.

Update: I stumbled upon a detailed reply/rebuttal over on the Cadence Logic Design blog! Jeffrey Flieder, thanks for writing this. I very nearly overlooked it.

2 comments:

Nick said...

Hi John,
There is limited value in pushing things upfront onto the synthesis side (for example, estimating congestion during synthesis), and I think it will be this way for a long time to come.

There have been great advances in algorithms (fast timing-aware placement, probabilistic congestion estimates, quick Steiner routing during placement iterations), but we have to bear in mind that all of these come with a great loss in accuracy.
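(To be concrete about what a probabilistic congestion estimate typically looks like: spread each net's expected routing demand over the grid bins its bounding box covers and compare against bin capacity. A rough sketch, with made-up grid and capacity numbers, purely for illustration:)

```python
# Rough sketch of a probabilistic (bounding-box) congestion estimate: each
# net's expected routing demand is spread uniformly over the grid bins its
# bounding box covers, then compared to per-bin capacity. The grid size and
# capacity numbers are invented for illustration.

from collections import defaultdict

BIN_SIZE_UM = 10.0          # assumed size of one routing grid bin
BIN_CAPACITY_TRACKS = 20.0  # assumed routing tracks available per bin

def congestion_map(nets: list[list[tuple[float, float]]]) -> dict[tuple[int, int], float]:
    """Return an estimated demand/capacity ratio for each (x, y) grid bin."""
    demand: dict[tuple[int, int], float] = defaultdict(float)
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        x0, x1 = int(min(xs) // BIN_SIZE_UM), int(max(xs) // BIN_SIZE_UM)
        y0, y1 = int(min(ys) // BIN_SIZE_UM), int(max(ys) // BIN_SIZE_UM)
        bins = (x1 - x0 + 1) * (y1 - y0 + 1)
        # One "track" of demand per net, spread evenly over its bounding box.
        for bx in range(x0, x1 + 1):
            for by in range(y0, y1 + 1):
                demand[(bx, by)] += 1.0 / bins
    return {b: d / BIN_CAPACITY_TRACKS for b, d in demand.items()}

# Bins whose ratio approaches or exceeds 1.0 are flagged as likely hot spots.
nets = [[(3.0, 4.0), (27.0, 8.0)], [(5.0, 5.0), (14.0, 31.0), (9.0, 22.0)]]
for b, ratio in sorted(congestion_map(nets).items()):
    print(b, round(ratio, 3))
```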

It's no big deal for anyone to iterate on the physical design side once or twice to figure out where things stand with any given front-end netlist.

I have seen some good value in using Magma in the past with the super/hyper cell concepts instead of custom wire load models.

-Nick

Sean Murphy said...

I always thought it was TANSTAAFL; see http://en.wikipedia.org/wiki/TANSTAAFL and http://www.jargon.net/jargonfile/t/TANSTAAFL.html for two citations.