Ruben Molina of Extreme DA has a well-written article, "'Golden' timing signoff – does it correlate to Spice?"
Certainly, there's no disputing that we treat SPICE simulation results as "golden" (for a given set of PVT conditions).
At the same time, device-level SPICE is a lot harder to work with than gate-level tools such
as static timing analyzers.
Therefore, as he says, designers looking to qualify a tool may take the easy way out and compare
the new tool to the de facto standard, even if the accuracy of the standard tool isn't well understood.
SPICE should be the arbiter.
I sometimes feel sorry for new tool vendors going against a well-established "gorilla".
They design the fastest, most accurate tool that they can, and then find
customers complaining that it doesn't match the idiosyncrasies of older tools.
To add insult to injury, the aspiring tool vendor may have to add
"compatibility mode" features that produce less accurate or less sensible results,
just to correlate to the tool they want to displace.
This has happened in EDA for a long time.
Remember when Cadence's Verilog-XL was THE simulation standard?
Upstarts, including Chronologic with VCS,
couldn't just implement the Verilog spec.
Customers forced them to mimic every quirk of Verilog-XL.
It was the arbiter, much to its competitors' frustration.
In the case of timing tools, it behooves tool vendors to offer a qualification flow.
For example, to compare to SPICE, the tool should be able to select timing paths,
write out accurate & valid SPICE decks, and make it easy to run the simulations
by pointing to the SPICE tool and library models.
In this way, it lowers the barrier for customers to compare against the "right" standard,
instead of the easy one.
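The final step of such a qualification flow, checking how closely the STA path delays track the SPICE reruns, could be as simple as the following minimal sketch. Everything here is hypothetical for illustration: the path names, the delay values, and the 2% tolerance are placeholders, and a real flow would parse the delays out of STA reports and SPICE `.measure` results rather than hard-code them.

```python
# Hypothetical correlation check: flag timing paths where the STA delay
# deviates from the SPICE-simulated delay by more than a set tolerance.
TOLERANCE_PCT = 2.0  # placeholder signoff tolerance, in percent

def find_outliers(sta_delays, spice_delays, tolerance_pct=TOLERANCE_PCT):
    """Compare STA delays against SPICE delays for the same paths.

    Both arguments map a path name to its delay (same units, e.g. ns).
    Returns a list of (path, sta, spice, pct_error) tuples for paths
    whose STA delay deviates from SPICE by more than tolerance_pct.
    """
    outliers = []
    for path, spice in spice_delays.items():
        sta = sta_delays.get(path)
        if sta is None:
            continue  # path was not covered by this STA run
        pct_error = 100.0 * (sta - spice) / spice
        if abs(pct_error) > tolerance_pct:
            outliers.append((path, sta, spice, pct_error))
    return outliers

# Example: regA's path is off by ~4% (flagged); regB's is within tolerance.
sta = {"clk->regA/D": 1.04, "clk->regB/D": 2.01}
spice = {"clk->regA/D": 1.00, "clk->regB/D": 2.00}
for path, s, g, err in find_outliers(sta, spice):
    print(f"{path}: STA {s} vs SPICE {g} ({err:+.1f}%)")
```

The point of automating this last step is the same as the rest of the flow: once the deck generation and the comparison are both push-button, comparing against the "right" standard is no harder than comparing against the incumbent tool.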