Breakdown of Verification effort – Debug, Debug & more Debug!

Interesting analysis of how verification effort is being spent across the industry:

(See the pie chart, Figure 2.) It goes very much in line with what we have been hearing from customers and competitors, and with our own experience. So DEBUG is THE area to automate within functional verification. I'm a little surprised to see 15% spent on ENV – perhaps that is the case for modern SystemVerilog/VMM/OVM environments, but even then only for the initial period, I suppose. My belief is that if you reuse VIPs, leverage previous code and hire the right candidates, ENV creation can be handled within 10%. Testcase development is shown as 18%; it is not clear whether some of it spreads into the coverage bucket (another 15%), as there is a strong correlation between the two anyway. I believe this is where technologies like Breker's Trek become interesting.


On the debug side – the good old Novas/SpringSoft is still the leader with Siloti, Verdi and Debussy. Though I'm a little disappointed with their SystemVerilog solutions – personally I would have liked more innovation in that space from these debug GURUs. They do have "log/transaction display", but I'm sure more is in the pipeline. A new company is showing up in places; it will be interesting to hear if anyone locally is using it. It would be worth getting some true success stories to see what exactly it automates.

Staying on debug – I personally believe a lot of this automation originates in-house at customer sites. For instance, during our monster Ethernet Switch/Router verification effort, we created several scripts, plots etc. to do intelligent failure analysis. Also, our recent work with a local SAN customer resulted in visualizing AVL trees from a running simulation.
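To give a flavour of the AVL-tree visualization idea – this is not the actual customer tooling, just a minimal sketch assuming the testbench dumps each tree node as a `"key left_key right_key"` line (a hypothetical dump format, with `-` marking a missing child), which a small script then converts into Graphviz DOT text for visual inspection:

```python
# Minimal sketch (not the actual customer tooling): convert an AVL tree
# dumped from a running simulation into Graphviz DOT text for visual debug.
# Assumed (hypothetical) dump format per node: "key left_key right_key",
# with "-" standing in for a missing child.

def avl_dump_to_dot(dump_lines):
    """Build a DOT digraph string from node-dump lines."""
    nodes = set()
    edges = []
    for line in dump_lines:
        key, left, right = line.split()
        nodes.add(key)
        for child in (left, right):
            if child != "-":
                edges.append((key, child))
    out = ["digraph avl {"]
    for n in sorted(nodes):
        out.append(f'  "{n}";')
    for parent, child in edges:
        out.append(f'  "{parent}" -> "{child}";')
    out.append("}")
    return "\n".join(out)

if __name__ == "__main__":
    # Example dump: root 20 with children 10 and 30; 10 has right child 15.
    dump = ["20 10 30", "10 - 15", "30 - -"]
    print(avl_dump_to_dot(dump))
```

Feeding the resulting text to `dot -Tpng` gives an instantly readable picture of the tree at any simulation checkpoint – far easier to reason about than thousands of pointer values in a waveform or log.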

And then we had the SystemVerilog memory blow-up debug case – so for now, Debug continues to fascinate us the most!
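For memory blow-ups of that kind, a first triage step that has served us well is scripted rather than manual: sample the simulator's memory usage periodically and flag the run once usage crosses a growth threshold. The sketch below is illustrative only (not the actual case); the `"time_ns mem_mb"` log format is a hypothetical example:

```python
# Illustrative sketch (not the actual debug case): scan periodic memory
# samples from a simulator log and flag a suspected testbench memory
# blow-up once usage exceeds growth_factor times the first sample.
# Assumed (hypothetical) log line format: "time_ns mem_mb".

def find_memory_blowup(samples, growth_factor=2.0):
    """Return (start_time, blowup_time) if memory grew by growth_factor,
    else None."""
    parsed = [tuple(map(float, line.split())) for line in samples]
    base_time, base_mem = parsed[0]
    for t, mem in parsed[1:]:
        if mem >= base_mem * growth_factor:
            return (base_time, t)
    return None

if __name__ == "__main__":
    log = ["0 100", "1000 150", "2000 260"]
    print(find_memory_blowup(log))
```

Once the blow-up window is known, one can bisect the stimulus in that window instead of staring at the whole multi-hour run – which is exactly the kind of debug automation the pie chart suggests the industry needs more of.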

Drop me a note if you would like to explore how you can automate your debug challenges.

Happy Debugging!