Category Archives: Lint

Does your RTL linter make you hunt for a needle in a haystack? Here is a smarter approach!


Static design verification through thorough RTL analysis started several years ago, with the early-stage tools being simple "linters" and the later, advanced ones combining some formal techniques as well. One of the most common complaints from RTL teams using linters is the SNR – the signal-to-noise ratio of the endless stream of errors/warnings from the tool. So much so that we have heard of customers giving up on linters altogether, primarily the freebies bundled with simulators. While some of the reported messages are indeed serious issues, in many cases the ROI (Return On Investment) of having RTL designers navigate through an ocean of messages is too low.

On the other hand, there is a clear set of issues that a good linter can spot for you quite easily; for instance, see:

Sledgehammer to crack a nut? – Use right tools for right class of design errors/bugs: 

Essentially it comes down to the art of "finding the needles in the haystack", and in a timely manner:


Don’t lose heart – there is a silver lining with modern-day linters/static analysis tools. We at TeamCVC covered the rise of new-age linters back in 2010; recently, Ascent from Real Intent published the case study below, which proves this very point:


The following table shows the results of running Ascent IIV on a 130K-gate block of RTL logic, an analysis done by NEC in Japan.

Table 1.  Ascent IIV Intent Checks and Failures Report for a 130K Gate Block.
The tool generated 31,186 intent checks, which were analyzed by its various engines, and 2,999 failures were produced in total.  The tool's hierarchical reporting characterized these failures into several categories.  It determined that fixing the primary errors (purple column in the table) would eliminate the duplicate, secondary, and many of the structural errors.  In this benchmark, 181 primary errors were identified.  This represents a dramatic contraction of roughly 94% of the total failures from the tool.  You can read further comments by NEC here.
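As a quick back-of-the-envelope check of that contraction figure, using only the numbers quoted from the table above:

```python
total_failures = 2_999   # failures reported by Ascent IIV (Table 1)
primary_errors = 181     # primary errors identified (purple column)

# Fixing the primary errors also eliminates the duplicate/secondary
# failures they caused, so the message count contracts by:
contraction = (total_failures - primary_errors) / total_failures
print(f"{contraction:.1%}")  # 94.0%
```

So instead of triaging nearly 3,000 messages, the designer has 181 real items to fix.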

So the next time you hear of a linter's SNR issue, do recall this "smart categorization" and push for it in your solution, or move/upgrade to better tools. You should, however, check on the local support team for such new-age products, as these tools do require some hand-holding, especially in the initial stages of adoption.

Good Luck,



DAC 2013 notes: Giraffes are everywhere, Verific inside story

As I recount my recent DAC-at-Austin experience, one thing that surprised me was the number of giraffes on the exhibit floor (sure, images and toys – I wish they were real ones, but I was at DAC, not a zoo/safari).


Yes, I am talking about Verific’s mascot here. It was one of the tallest-standing booths, so not many could miss it. My friend Sashi Oblisetty led me to them after our early breakfast meeting that morning at DAC. I met Michiel Ligthart, Verific’s president and chief operating officer. He is a tall man, a Netherlander/Dutchman ("Graag, ik kan een klein beetje Nederlands spreken" – "Gladly, I can speak a little bit of Dutch", thanks to my early working days at Philips, Eindhoven). It was a pleasant surprise to see how many customers Verific has to date; from their Facebook page I found:

That’s impressive indeed. No wonder I found several small giraffes on other vendors’ booth tables. It reminded me of the popular "Intel Inside" campaign; perhaps Verific should run a similar "Verific Inside" campaign in the EDA world :-) Or should we call it "Giraffes are everywhere"? (For those avid travellers, see:

I also met Abhijit Chakraborty, their India GM (based out of Kolkata/Calcutta), and we agreed to follow up on some of our DVAudit ideas post-DAC. If you wonder what DVAudit is, here is a brief on what I presented at the Cadence theater on the DAC floor; drop us a note if you need more details.

Truly an engaging crew at the Verific booth; we at TeamCVC look forward to their Perl-based parser front end for building custom utilities for our customers.


Sledgehammer to crack a nut? – Use right tools for right class of design errors/bugs

I am sure you have heard this phrase before – "a sledgehammer to crack a nut"; the picture below says it all!

Would you use a HUGE hammer to crack a small, tiny nut?


(If you are further interested in this phrase, read:

I recently came across a small design error introduced in a piece of RTL. It is interrupt-masking logic; the code snippet is below:


Note the use of "ANDing" logic – simply AND the mask with the data to produce the result. The subtlety in Verilog/SystemVerilog is that there are two seemingly similar operators for the AND operation:

  1. The logical AND: &&
  2. The bitwise AND: &

Given the "loose" data-type checking, assignment rules, etc., one can often get away with using either of the above. In this case the user wrote:

result = data && mask;

With result being a vector, the above is a logical/design error, but a Verilog compiler will usually let it through (it is not an error per the LRM).
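To see why this matters, here is a small Python sketch (purely illustrative – the names `data` and `mask` mirror the Verilog above, and an 8-bit bus is assumed) modelling how Verilog evaluates the two operators:

```python
WIDTH = 8  # assumed bus width for this illustration

def verilog_bitwise_and(data, mask):
    # Verilog '&': per-bit AND across the full vector width -- the intended masking
    return (data & mask) & ((1 << WIDTH) - 1)

def verilog_logical_and(data, mask):
    # Verilog '&&': each operand collapses to 1 bit (nonzero -> 1); the 1-bit
    # result is then zero-extended when assigned to the wider 'result' vector
    return 1 if (data != 0 and mask != 0) else 0

data, mask = 0b1010_1100, 0b0000_1111
print(bin(verilog_bitwise_and(data, mask)))  # 0b1100 -- masked interrupts survive
print(bin(verilog_logical_and(data, mask)))  # 0b1    -- whole bus collapsed to one bit
```

With `&&`, any nonzero data against any nonzero mask yields `result = 8'b0000_0001` – the individual interrupt bits are silently lost.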

Now, one can "verify" this by writing a testbench, simulating, looking at waveforms, and debugging. Depending on luck and the engineer's expertise, that could take anywhere from 30 minutes to a few hours. But as a verification powerhouse, CVC suggests a rethink – use the right tool/technology for the right class of design errors. Issues like this are very easy for a static verification technology such as HDL linting to flag in less than a minute.

For instance, let’s try the above code with a popular linter – ALINT from Aldec.

ALINT has nice rule sets pre-packaged for various policies such as STARC. It produces the following:


This triggers two rules:
–  a rule about a logical operator having a vector operand
–  a rule about the bit-width mismatch in the assignment (LHS vs. RHS).

ALINT: Warning: test.v : (4, 1): Module "top". "STARC_VLOG." Logical operator has vector argument(s). Use bit-wise operators for multi-bit arguments and logical operators only for 1-bit arguments. Level: Recommendation 1.
ALINT: Warning: test.v : (4, 1): Module "top". "STARC_VLOG." Assignment source bit width "1" is less than destination bit width "8". Upper bits of the right-hand side will be filled with zeroes. Match bit widths exactly to improve the readability of the description. Level: Recommendation 2.

From a business perspective, too, this is a far better option for your management – lint tools are usually far more cost-efficient than full-blown SystemVerilog simulators such as Aldec’s Riviera-PRO.

So the next time you receive RTL code to verify, do yourself a favor and run a quick lint pass before hunting for "hard bugs" that demand popular, powerful techniques such as constrained-random, coverage-driven, UVM-based verification, etc.

BTW – CVC offers training sessions on Aldec’s ALINT and on HDL lint in general. Contact us to see how we can help your teams!

Happy Verification ahead!