Plug, Play and Reuse!
Thursday, 04 March 2010 17:19

Time to talk about module-to-system reuse, a very important topic. If you plan your verification environment properly (using one of the common methodologies in the market today or your own) you’ll be able to easily build a system level verification environment that reuses most of your module level environments (i.e. sub-environments). However, even if all your sub-environments are well suited for plug and play reuse at the top level, there are still considerations to be made regarding the overall topology. In other words, how do you go about connecting the sub-environments to each other to make an effective top level environment? Here are 3 methods that you can use.

 

It is assumed that all of your module level environments look more or less like this:

[Figure: a typical module-level verification environment]
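To make this concrete, here's a minimal sketch of such a module-level environment. The methodology is up to you; I'm using UVM-flavored SystemVerilog purely for illustration, and blk_agent, blk_scoreboard and the member names (mon, ap, expected_export, actual_export) are hypothetical placeholders for whatever your own environment provides:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class blk_env extends uvm_env;
    `uvm_component_utils(blk_env)

    blk_agent      in_agent;   // driver + monitor on the input interface
    blk_agent      out_agent;  // monitor only, on the output interface
    blk_scoreboard sb;         // checks this block's input/output relation

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      in_agent  = blk_agent::type_id::create("in_agent",  this);
      out_agent = blk_agent::type_id::create("out_agent", this);
      sb        = blk_scoreboard::type_id::create("sb", this);
    endfunction

    function void connect_phase(uvm_phase phase);
      in_agent.mon.ap.connect(sb.expected_export);  // what went in
      out_agent.mon.ap.connect(sb.actual_export);   // what came out
    endfunction
  endclass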

The first method is the classic reuse method, where you simply instantiate all your sub-environments in your top level environment – simple and quick, but a bit redundant. Note that you can save some code by instantiating a single monitor (instead of two) on each internal interface. In fact, if you've been a good boy at the module level, you might have thought about this in advance and used a monitor reference rather than an instance to facilitate smoother reuse. But even if you didn't, this shouldn't take much effort. Eventually you'll get something like this:

[Figure: method 1 – sub-environments instantiated directly at the top level]
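In code, method 1 might look something like this sketch (same hypothetical names as above; the has_in_agent knob is made up for illustration and would be checked inside blk_env's build_phase):

  class top_env extends uvm_env;
    `uvm_component_utils(top_env)

    blk_env env_a;  // module-level env for block A, reused as-is
    blk_env env_b;  // module-level env for block B

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      env_a = blk_env::type_id::create("env_a", this);
      env_b = blk_env::type_id::create("env_b", this);
      // B's input is now driven by A's output, not by a driver, and the
      // internal interface needs only one monitor, so skip B's input
      // agent altogether (has_in_agent is a hypothetical knob)
      uvm_config_db#(bit)::set(this, "env_b", "has_in_agent", 0);
    endfunction

    function void connect_phase(uvm_phase phase);
      // point B's scoreboard at A's output monitor: a monitor *reference*
      // rather than a second monitor instance on the internal interface
      env_a.out_agent.mon.ap.connect(env_b.sb.expected_export);
    endfunction
  endclass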

If there are many blocks in your data path and the internal busses that connect them use non-standard protocols, you might find that having multiple monitors along the data path introduces a significant number of false alarms. New RTL releases and bug fixes often introduce minor timing differences or other changes to internal signals that don't necessarily affect the end-to-end data path, but might mean that you'll have to adapt your internal monitors with each revision. Ouch! From experience, when you're doing top level verification – the only thing that counts is the overall functionality. If internal logic has to be modified to make the chip work – that's what they're going to do, and you're stuck with a bunch of out-of-date monitors. The solution in such cases is fairly simple and is called scoreboard chaining. In this method, the scoreboards (or reference models) are daisy-chained together to create a virtual end-to-end scoreboard. Here's what it looks like:

[Figure: method 2 – scoreboards daisy-chained into a virtual end-to-end scoreboard]
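Here's one possible sketch of such a chained scoreboard, assuming each block's reference model is a plain class exposing a predict() function (txn, ref_model_a and ref_model_b are hypothetical types; imports as in the first sketch):

  `uvm_analysis_imp_decl(_chip_in)
  `uvm_analysis_imp_decl(_chip_out)

  class e2e_scoreboard extends uvm_component;
    `uvm_component_utils(e2e_scoreboard)

    // fed by the *external* monitors only
    uvm_analysis_imp_chip_in #(txn, e2e_scoreboard) chip_in_export;
    uvm_analysis_imp_chip_out#(txn, e2e_scoreboard) chip_out_export;

    ref_model_a model_a;  // reused from block A's module-level env
    ref_model_b model_b;  // reused from block B's module-level env

    txn expected_q[$];    // end-to-end predictions awaiting comparison

    function new(string name, uvm_component parent);
      super.new(name, parent);
      chip_in_export  = new("chip_in_export",  this);
      chip_out_export = new("chip_out_export", this);
      model_a = new();
      model_b = new();
    endfunction

    // chip input observed: chain the reference models, so A's *predicted*
    // output becomes B's input, with no internal monitor involved
    function void write_chip_in(txn t);
      txn pred_a = model_a.predict(t);
      txn pred_b = model_b.predict(pred_a);
      expected_q.push_back(pred_b);
    endfunction

    // chip output observed: compare against the oldest e2e prediction
    function void write_chip_out(txn t);
      txn exp;
      if (expected_q.size() == 0) begin
        `uvm_error("E2E_SB", "unexpected chip output")
        return;
      end
      exp = expected_q.pop_front();
      if (!t.compare(exp))
        `uvm_error("E2E_SB", $sformatf("mismatch: got %s", t.convert2string()))
    endfunction
  endclass

Only the chip's external input and output monitors connect to chip_in_export and chip_out_export; nothing touches the internal interfaces, so internal RTL changes can't raise false alarms here.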

One disadvantage of the previous method is that by eliminating "internal" monitors you're also eliminating precious debug information. If the end-to-end scoreboard reports an error, you'll have to dig into the code to locate the problem, potentially going through several interfaces until you reach the offending one. Good news - there's a third method that you might want to try in that case. I've used this method successfully in one of my projects (after the first two turned out to be less efficient). It's a way to enjoy the best of both worlds. You keep all (or some) internal monitors alive, but you don't get any false alarms because the scoreboards/reference models are fed by the external monitors only! The internal monitors are simply there to watch the signals and provide debug information (to a log file, for example). They no longer have the power to affect the test result, but they can still help you locate and track data items as they flow through the system. Now before you shout at me, let me clarify - this method is not ideal, it's just a practical approach that worked successfully for me in the past and might work for you too. As always with verification, the trick is to match the most efficient solution to your specific problem. Anyway, here's what it looks like:

[Figure: method 3 – internal monitors kept for debug only; scoreboards fed by the external monitors]
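A sketch of such a debug-only listener, again with the hypothetical txn type from above:

  class debug_logger extends uvm_subscriber#(txn);
    `uvm_component_utils(debug_logger)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    // called by an internal monitor's analysis port: log, never check,
    // so an RTL change on an internal bus can't fail the test from here
    function void write(txn t);
      `uvm_info("INTERNAL_TRACE",
                $sformatf("observed %s", t.convert2string()), UVM_MEDIUM)
    endfunction
  endclass

In the top level connect_phase, each internal monitor's analysis port connects to one of these loggers instead of a scoreboard, while the external monitors keep feeding the end-to-end scoreboard from method 2.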

Comments  

 
#1 2010-03-14 10:18
In methods 2 and 3 you suggest using an end-to-end scoreboard. With these methods, a chip level that includes a large number of internal blocks, together with tests that use long data packets, can make the debugging process very long and exhausting. One solution is to combine the end-to-end scoreboard with method 1, i.e. use method 2/3 for regression and run method 1 only on the seeds that fail. Another option is to use interface structs between blocks and build a script that, for each new design version, edits the structs according to the design's interface changes.
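One simple way to wire up the suggested regression/re-run combination is a command-line knob along these lines (the plusarg and the enable_internal_checks flag are hypothetical names that the sub-environments would read in their build_phase):

  module tb_top;
    import uvm_pkg::*;

    initial begin
      // regression runs rely on the end-to-end scoreboard only;
      // re-running a failing seed with +INTERNAL_CHECKS promotes the
      // internal monitors back to full method-1 checkers
      if ($test$plusargs("INTERNAL_CHECKS"))
        uvm_config_db#(bit)::set(null, "*", "enable_internal_checks", 1);
      run_test();
    end
  endmodule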
