Plug, Play and Reuse!
Thursday, 04 March 2010 17:19

Time to talk about module-to-system reuse, a very important topic. If you plan your verification environment properly (using one of the common methodologies in the market today or your own) you’ll be able to easily build a system level verification environment that reuses most of your module level environments (i.e. sub-environments). However, even if all your sub-environments are well suited for plug and play reuse at the top level, there are still considerations to be made regarding the overall topology. In other words, how do you go about connecting the sub-environments to each other to make an effective top level environment? Here are 3 methods that you can use.

 

It is assumed that all of your module-level environments look more or less like this:

[Figure: a typical module-level environment]
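To make the starting point concrete, here is a minimal sketch of what such a module-level environment might look like in SystemVerilog. All class and member names here (block_env, driver, monitor, scoreboard) are hypothetical, chosen only for illustration:

```systemverilog
// Hypothetical sketch of a typical module-level environment: a driver on
// the input interface, a monitor on each interface, and a scoreboard that
// checks the block's output against its input.
class block_env;
  driver     drv;      // drives stimulus into the block's input interface
  monitor    in_mon;   // observes the input interface
  monitor    out_mon;  // observes the output interface
  scoreboard sb;       // compares output transactions against expected ones

  function new();
    drv     = new();
    in_mon  = new();
    out_mon = new();
    sb      = new();
  endfunction
endclass
```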

The first method is the classic reuse method, where you simply instantiate all your sub-environments in your top-level environment. It's quick and simple, but a bit redundant. Note that you can save some code by instantiating only a single monitor (instead of two) on internal interfaces. In fact, if you've been a good boy at the module level, you might have thought about this in advance and placed a monitor reference rather than an instance, to facilitate smoother reuse. But even if you didn't, this shouldn't take much effort. Eventually you'll get something like this:

[Figure: method 1 - sub-environments instantiated as-is at the top level, sharing monitors on internal interfaces]
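The first method can be sketched in a few lines. The names below (top_env, env_a, env_b, and the monitor handles) are hypothetical, and the sketch assumes each sub-environment exposes compatible monitor handles:

```systemverilog
// Hypothetical sketch of classic top-level reuse: instantiate each
// module-level environment as-is, and share a single monitor on the
// internal interface between block A and block B instead of keeping two.
class top_env;
  env_a sub_a;  // module-level environment for block A
  env_b sub_b;  // module-level environment for block B

  function new();
    sub_a = new();
    sub_b = new();
    // Block A's output interface is block B's input interface, so one
    // monitor can serve both: point B's input monitor handle at A's
    // output monitor instance.
    sub_b.in_mon = sub_a.out_mon;
  endfunction
endclass
```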

If there are many blocks in your data path, and the internal busses that connect them use non-standard protocols, you might find that having multiple monitors along the data path introduces a significant number of false alarms. New RTL releases and bug fixes often introduce minor timing differences or other changes to internal signals that don't necessarily affect the end-to-end data path, but they might mean you'll have to adapt your internal monitors with each revision. Ouch! From experience, when you're doing top-level verification, the only thing that counts is the overall functionality. If internal logic has to be modified to make the chip work, that's what the designers are going to do, and you're stuck with a bunch of out-of-date monitors. The solution in such cases is fairly simple and is called scoreboard chaining. In this method, the scoreboards (or reference models) are daisy-chained together to create a virtual end-to-end scoreboard. Here's what it looks like:

[Figure: method 2 - scoreboards daisy-chained into a virtual end-to-end scoreboard, fed by external monitors only]
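One way to picture scoreboard chaining is as composing the blocks' reference models, so that a single checker predicts the chip's output directly from its input. This is a hypothetical sketch: trans, ref_model_a, and ref_model_b are assumed helper classes, with predict() applying each block's expected transformation:

```systemverilog
// Hypothetical sketch of scoreboard chaining: the per-block reference
// models are daisy-chained so one end-to-end scoreboard predicts the
// chip's output from its input. No internal monitors are involved.
class e2e_scoreboard;
  ref_model_a model_a;     // block A's reference model
  ref_model_b model_b;     // block B's reference model
  trans       expected_q[$];

  // Called by the monitor on the chip's external *input* interface.
  function void write_input(trans t);
    // Chain the models: what A would produce becomes B's input.
    expected_q.push_back(model_b.predict(model_a.predict(t)));
  endfunction

  // Called by the monitor on the chip's external *output* interface.
  function void write_output(trans t);
    trans exp = expected_q.pop_front();
    if (!t.compare(exp))
      $error("End-to-end mismatch at time %0t", $time);
  endfunction
endclass
```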

One disadvantage of the previous method is that by eliminating the internal monitors, you're also eliminating precious debug information. If the end-to-end scoreboard reports an error, you'll have to dig into the code to locate the problem, potentially going through several interfaces until you reach the offending one. Good news: there's a third method that you might want to try in that case. I've used this method successfully in one of my projects (after the first two turned out to be less efficient). It's a way to enjoy the best of both worlds. You keep all (or some) of the internal monitors alive, but you don't get any false alarms, because the scoreboards/reference models are fed by the external monitors only! The internal monitors are simply there to observe signals and provide debug information (to a log file or similar). They no longer have the power to affect the test result, but they can still help you locate and track data items as they flow through the system. Now, before you shout at me, let me clarify: this method is not ideal, it's just a practical approach that worked for me in the past and might work for you too. As always with verification, the trick is to match the most efficient solution to your specific problem. Anyway, here's what it looks like:

[Figure: method 3 - external monitors feed the scoreboards; internal monitors remain connected for debug logging only]
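In the third method, the internal monitors stay connected but become purely passive observers. A minimal sketch, assuming a hypothetical internal_if interface with clk, valid, and data signals:

```systemverilog
// Hypothetical sketch of a debug-only internal monitor: it watches an
// internal interface and logs what it sees, but is not connected to any
// scoreboard, so internal RTL changes can never fail a test through it.
class internal_monitor;
  virtual internal_if vif;  // handle to the internal interface (assumed)

  task run();
    forever begin
      @(posedge vif.clk);
      if (vif.valid)
        // Log only; no scoreboard write, hence no false alarms.
        $display("[%0t] internal monitor: observed data=0x%0h",
                 $time, vif.data);
    end
  endtask
endclass
```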

Comments  

 
#1 (2010-03-14 10:18)
In methods 2 and 3 you suggest using an end-to-end scoreboard. With these methods, at chip level, where there are a large number of internal blocks and tests use long data packets, the debugging process can become very long and exhausting. One solution is to combine the end-to-end scoreboard with method 1, i.e. use method 2/3 for regression and rerun only failing seeds with method 1. Another option is to use interface structs between the blocks and build a script that, on each new design version, edits the structs according to the design's interface changes.
 


Copyright © 2017 Think Verification - Tips & Insights on ASIC Verification. All Rights Reserved.