Plug, Play and Reuse!
Thursday, 04 March 2010 17:19

Time to talk about module-to-system reuse, a very important topic. If you plan your verification environment properly (using one of the common methodologies on the market today, or your own), you'll be able to easily build a system level verification environment that reuses most of your module level environments (i.e. sub-environments). However, even if all your sub-environments are well suited for plug-and-play reuse at the top level, there are still decisions to be made about the overall topology. In other words, how do you go about connecting the sub-environments to each other to make an effective top level environment? Here are three methods you can use.

 

It is assumed that all of your module level environments look more or less like this:
[Figure: a typical module level verification environment]
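To make the structure concrete, here's a minimal sketch in plain SystemVerilog. Everything in it is hypothetical and simplified – item_t, monitor, driver, scoreboard and block_env are stand-ins for whatever your methodology provides, and the mailbox-based connections are a poor man's substitute for analysis ports:

    typedef int item_t;  // stand-in for your transaction class

    class monitor;
      mailbox #(item_t) subs[$];          // subscriber mailboxes (simple broadcast)
      function void publish(item_t it);   // called whenever an item is observed
        foreach (subs[i]) void'(subs[i].try_put(it));  // unbounded, never fails
      endfunction
    endclass

    class driver;
      task run(); endtask                 // drives stimulus on the input interface
    endclass

    class scoreboard;
      mailbox #(item_t) expected = new(); // fed by the input-side monitor
      mailbox #(item_t) actual   = new(); // fed by the output-side monitor
      task run();
        item_t exp, act;
        forever begin
          expected.get(exp);              // assumes in-order, pass-through behavior
          actual.get(act);
          if (exp != act) $error("mismatch: expected %0d, got %0d", exp, act);
        end
      endtask
    endclass

    class block_env;
      driver     drv     = new();
      monitor    mon_in  = new();
      monitor    mon_out = new();
      scoreboard sb      = new();
      function new();
        mon_in.subs.push_back(sb.expected);  // input monitor predicts the output
        mon_out.subs.push_back(sb.actual);   // output monitor reports the actual
      endfunction
    endclass

The sketches for the three methods below build on these classes (a test would fork off the run() tasks).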
The first method is the classic reuse method, where you simply instantiate all your sub-environments at your top level environment – simple and quick, but a bit redundant. Note that you can save some code if you instantiate only a single monitor (instead of two) on internal interfaces. In fact, if you've been a good boy at the module level, you might have thought about this in advance and placed a monitor reference rather than an instance, to facilitate smoother reuse. But even if you didn't, this shouldn't be much of an effort. Eventually you'll get something like this:
[Figure: method 1 – sub-environments instantiated side by side at the top level, with a single monitor on each internal interface]
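Using the hypothetical block_env sketch from above, method 1 might look roughly like this – both sub-environments are instantiated as-is, and the duplicate monitor on the internal interface is replaced by a reference to the one that remains:

    class top_env;
      block_env env_a = new();   // block A: chip input -> internal interface
      block_env env_b = new();   // block B: internal interface -> chip output
      function new();
        // Keep a single physical monitor on the internal A->B interface:
        // point B's input-monitor handle at A's output monitor, and
        // subscribe B's scoreboard to it as well.
        env_b.mon_in = env_a.mon_out;
        env_a.mon_out.subs.push_back(env_b.sb.expected);
      endfunction
    endclass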
If there are many blocks in your data path and the internal busses that connect them use non-standard protocols, then you might find that having multiple monitors along the data path introduces a significant number of false alarms. New RTL releases and bug fixes often introduce minor timing differences or other changes to internal signals that don't necessarily affect the end-to-end data path, but do mean that you'll have to adapt your internal monitors with each revision. Ouch! From experience, when you're doing top level verification, the only thing that counts is the overall functionality. If internal logic has to be modified to make the chip work – that's what they're going to do, and you're stuck with a bunch of out-of-date monitors. The solution in such cases is fairly simple and is called scoreboard chaining. In this method, the scoreboards (or reference models) are daisy-chained together to create a virtual end-to-end scoreboard. Here's what it looks like:
[Figure: method 2 – scoreboards/reference models daisy-chained into a virtual end-to-end scoreboard, fed by the external monitors only]
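Continuing the same hypothetical sketch, scoreboard chaining could look something like this – a per-block reference model (refmodel below, with a placeholder transform) forwards its predictions to the next stage, and only the external monitors touch the checking path:

    class refmodel;
      mailbox #(item_t) in  = new();  // items entering this stage
      mailbox #(item_t) out = new();  // predicted items leaving this stage
      virtual function item_t transform(item_t it);
        return it;                    // placeholder: model your block's behavior
      endfunction
      task run();
        item_t it;
        forever begin
          in.get(it);
          out.put(transform(it));     // forward the prediction down the chain
        end
      endtask
    endclass

    class e2e_env;
      monitor    mon_in  = new();     // external input monitor only
      monitor    mon_out = new();     // external output monitor only
      refmodel   model_a = new();     // per-block models, chained A -> B
      refmodel   model_b = new();
      scoreboard sb      = new();     // a single end-to-end comparison point
      function new();
        mon_in.subs.push_back(model_a.in); // stimulus enters the chain
        model_b.in  = model_a.out;         // daisy-chain: A's prediction feeds B
        sb.expected = model_b.out;         // chain output = expected chip output
        mon_out.subs.push_back(sb.actual); // compared against the real output
      endfunction
    endclass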
One disadvantage of the previous method is that by eliminating "internal" monitors you're also eliminating precious debug information. If the end-to-end scoreboard reports an error, you'll have to dig into the code to locate the problem, potentially going through several interfaces until you reach the offending one. Good news – there's a third method that you might want to try in that case. I've used this method successfully in one of my projects (after the first two turned out to be less efficient). It's a way to enjoy the best of both worlds: you keep all (or some) internal monitors alive, but you don't get any false alarms, because the scoreboards/reference models are fed by the external monitors only! The internal monitors are simply there to watch signals and provide debug information (to a log file or the like). They no longer have the power to affect the test result, but they can still help you locate and track data items that flow through the system. Now before you shout at me, let me clarify – this method is not ideal; it's just a practical approach that worked successfully for me in the past and might work for you too. As always with verification, the trick is to match the most efficient solution to your specific problem. Anyway, here's what it looks like:
[Figure: method 3 – external monitors feed the end-to-end scoreboard; internal monitors provide debug information only]
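And one last sketch, under the same assumptions: the internal monitor stays alive but feeds a logger (a hypothetical dbg_logger class) instead of any scoreboard, so it can't fail the test but still tags items flowing through the system:

    class dbg_logger;
      mailbox #(item_t) in = new();
      string tag;
      function new(string tag); this.tag = tag; endfunction
      task run();
        item_t it;
        forever begin
          in.get(it);
          $display("[%0t] %s observed item %0d", $time, tag, it); // log only
        end
      endtask
    endclass

    class hybrid_env extends e2e_env;
      monitor    mon_ab = new();          // internal A->B monitor, kept alive
      dbg_logger log_ab = new("A->B");
      function new();
        super.new();                      // end-to-end checking as in method 2
        mon_ab.subs.push_back(log_ab.in); // debug information only, no checking
      endfunction
    endclass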

Comments  

 
#1 2010-03-14 10:18
In methods 2 and 3 you suggest using an end-to-end SB. With these methods, at chip level, where a large number of internal blocks are involved and tests use long data packets, the debugging process can become very long and exhausting. One solution is to combine the end-to-end SB with method 1, i.e. use method 2/3 for regression and run method 1 only on seeds that fail. Another option is to use interface structs between blocks and build a script that, on each new design version, edits the structs according to the design interface changes.