
WHITE PAPER

SPEC-BASED VERIFICATION
A NEW METHODOLOGY FOR FUNCTIONAL VERIFICATION OF SYSTEMS/ASICs


TABLE OF CONTENTS

Abstract
Introduction
The verification challenge
Today’s verification methodologies
    Components
    Problems
Test methodologies
    Deterministic
    Pre-run generation
    Checking strategies
    Coverage metrics
Spec-based verification
Enabling technologies
    Functional coverage
    Constraint-driven generation
    On-the-fly temporal checks
The Specman Elite methodology
Conclusion

TABLE OF FIGURES

Figure 1: The spec-based verification methodology
Figure 2: Cross-coverage window showing the registers accessed for each CPU opcode
Figure 3: The constraint solver is the core technology enabling constraint-driven generation
Figure 4: The Specman Elite spec-based verification flow
Figure 5: Struct and spec constraint from the interface specification
Figure 6: Temporal check from the interface specification
Figure 7: Test constraint from the functional test plan


ABSTRACT

Due to the increasing complexity of today’s systems and ASICs, functional verification has become a major bottleneck in the design process. Design teams reportedly spend as much as 50 to 70 percent of their time and resources on functional verification. This paper presents a new methodology for functionally verifying systems and ASICs: spec-based verification — an automated and measurable approach to verification that enables more effective verification methodologies while cutting the overall resource investment in half.

INTRODUCTION

In the past decade, the electronics industry has successfully focused on automating the process of physical design (place-and-route) and design implementation (logic synthesis). However, the process of verifying design functionality has been relatively neglected. Advances in verification have centered primarily on increasing the speed of simulation, not on automating the verification methodology as a whole.

With design complexities increasing exponentially, functional verification has become the main bottleneck of the design process. Faster simulators are only part of the solution. Enabling the execution of more cycles provides some benefit, but often many of these cycles are wasted because they add no additional design coverage.

The first step toward improving the efficiency of functional verification for large, complex designs is to raise the level of abstraction of the verification environment to the specification level. The common specifications driving the verification process are the design spec, the interface spec, and the functional test plan. The verification team can develop and implement a comprehensive verification strategy only when the rules defined in these specifications can be captured in an executable form. However, raising the level of abstraction is insufficient, in and of itself. Once these rules have been captured, the task of generating tests and checking results must be automated for the team to have any hope of attaining full functional coverage within reasonable time and resource budgets.

Spec-based verification addresses the functional verification bottleneck in a manner similar to the way the introduction of logic synthesis tools addressed the design challenges posed by increasing design complexity. First came the hardware description languages (HDLs) such as Verilog® and VHDL, which raised the level of abstraction from gate-level designs to register-transfer-level (RTL) designs, thereby making large designs much more manageable. However, the real breakthrough in time and resource reduction did not occur until logic synthesis automated the translation of RTL designs to gates. Similarly, the real value of spec-based verification is created by a composite of features that deliver automation: a spec-based verification environment, automatic generation of high-quality tests, data and temporal checkers, and accurate measurement and analysis of functional coverage. With an automated functional verification approach at the center of the verification methodology, the designer gains the three essential elements needed to overcome the verification bottleneck: quality, productivity, and predictability. Verification methodologies that incorporate spec-based verification produce reusable verification environments, shorten the verification cycle, and reduce the risk of costly silicon re-spins.

THE VERIFICATION CHALLENGE

The dynamics at the root of most verification bottlenecks are the relationships between design complexity, verification complexity, engineering resources, and time constraints. As designs grow more complex, the verification problems they pose grow exponentially — that is to say, as designs double in size, the verification effort can easily quadruple. As a result, the verification effort can consume as much as 50 to 70 percent of the entire engineering budget.

Unfortunately, neither the schedule nor the available engineering resources offer much in the way of flexibility. More importantly, the cost of missing a time-to-market schedule can force design teams to terminate the verification effort prematurely. This leads to incomplete and inadequate functional coverage, creating the potential for a pattern of debug cycles, re-designs, and re-spins that erode the profitability and deliverability of the end product. Thus, ASIC design quality becomes a function of the verification schedule, rather than the verification metrics.


TODAY’S VERIFICATION METHODOLOGIES

COMPONENTS

To understand the impact spec-based verification has on minimizing the functional verification bottleneck, it’s important to understand the components of the typical verification methodology today and how those components are developed. (It’s also important to keep in mind that these components, and the methods used to develop them, have changed very little in the past 5 to 10 years, despite major advances in design complexity.)

Beginning with an initial design specification, the design team partitions the design into functional blocks, which are then assigned to specific design team members. Then the interface specs and functional test plan are written. The functional test plan describes what functionality should be tested and how, including normal operating behavior as well as important corner cases to be tested.

The next step in the verification strategy is to assemble the verification environment, comprising pieces of code (often written in "C") called "stubs," which model the components surrounding the device under test (DUT) in the real system. These stubs interact with the device, injecting stimuli into the device and receiving output according to the protocol defined in the interface specification. Sometimes the verification environment also contains monitors used to check the correctness of the functionality.

Depending on the types and quantity of tests required, the designer may choose to write the tests manually or to create a tool that generates tests according to specific directives or parameters. When using an automated test generation strategy, the designer must decide whether to generate the complete test before simulation or to generate the test on-the-fly as simulation progresses, reacting to the state of the device.

Designers have numerous strategies at their disposal for checking the results of a test. Initially, designers must decide whether to use white-, black-, or gray-box testing. White-box testing, most commonly used for module and algorithm testing, enables the verification engineer to both drive and sense internal signals. Gray-box testing allows only sensing of internal signals and is the method most often used with highly complex designs. Black-box testing offers no access to internal signals at all.

Similar to the test generation strategy, the designer must decide whether to have the tests checked manually or automatically, on-the-fly or after simulation.

In addition, one of the most difficult and critical challenges facing the designer is to establish adequate metrics to track the progress of the verification effort and measure the coverage of the functional test plan. Effective coverage metrics are essential to avoid redundant or unnecessary testing, as well as to determine when the verification effort is complete.

PROBLEMS

Most of the problems associated with functional verification methodologies today arise from the lack of effective automation to cope with the daunting growth in design size and complexity. This makes developing verification environments, software for test generation, and deterministic tests an intensive manual effort. Checking and debugging test results is also predominantly a manual process.

Another problem related to the size issue is the difficulty engineers experience when attempting to track assumptions made by fellow engineers working on different pieces of the design. Lacking a centrally accessible and unambiguous means of communicating and tracking design intent at the specification level, engineers are often left with architectural-level bugs that require enormous effort to locate and remove.

Locating bugs is always a problem, especially when they occur in unpredictable places. Even the most comprehensive functional test plan can completely miss bugs generated by obscure functional combinations or ambiguous spec interpretations. This is why so many bugs are found in emulation, or after first silicon is produced. Without the ability to make the spec itself executable, there’s really no way to ensure comprehensive functional coverage for the entire design intent.


The relative inefficiency with which today’s verification environments accommodate midstream spec changes also poses a serious problem. Since most verification environments are an ad hoc collection of HDL code, C code, a variety of legacy software, and newly acquired products, a single change in the design can cause a ripple of required changes throughout the environment, eating up time and adding substantial risk.

Perhaps the most frustrating problem facing design and verification engineers is the lack of effective metrics to measure the progress of verification. Indirect metrics, such as toggle testing or code coverage, indicate if all flip-flops toggled or all lines of code were executed, but they do not give any indication of what functionality was verified. For example, they do not indicate if a processor executed all possible combinations of consecutive instructions. There is simply no correspondence between any of these metrics and coverage of the functional test plan. As a result, the verification engineer is never really sure whether a sufficient amount of verification has been performed.

TEST METHODOLOGIES

DETERMINISTIC

The oldest and most common test methodology, still used today, is deterministic testing. These tests are developed manually and normally correspond directly to the functional test plan. Engineers often use deterministic tests to exercise corner cases — specific sequences that cause the device to enter extreme operational modes. Usually these tests are checked manually. However, with some additional programming the designer can create self-checking deterministic tests.

Although deterministic testing offers the verification engineer precise control by providing access to hard-to-reach corner cases, it has several drawbacks. Generating deterministic tests is a time-consuming, manual programming effort. Although simple tests can be written in minutes, the more complex ones can take days to write and debug. For example, to test a corner case requiring that two asynchronous data streams reach a specific point at exactly the same time, the verification engineer might have to resort to trial-and-error methods: running the test, seeing how far off it is, correcting it, and trying again. Moreover, midstream changes to the design’s temporal behavior may force the engineer to go through this process repeatedly. And when this test is completed, the corner case is tested through only one possible path.

An average project normally develops several hundred deterministic tests, which can easily consume several months to create. Checking deterministic tests also consumes considerable time and resources, whether it is performed manually or written into the test.

PRE-RUN GENERATION

Pre-run generation is a newer methodology for generating tests. It addresses some of the productivity problems associated with deterministic testing by automating the test generation process. C or C++ programs (and sometimes even VHDL and Verilog, despite the lack of good software constructs) are usually used to create the tests prior to simulation. The programs read in a parameter/directives file that controls the generation of the test. Often these files contain simple weighting systems to direct the random selection of inputs.

The generator normally outputs the test into a file, which is then read by the simulator and stored in memory. The simulator reads the next entry whenever it’s prepared to inject the next set of inputs.

Although pre-run generation provides much higher throughput than deterministic testing, it is difficult to control. The parameters are static and do not provide much flexibility. Also, most generators of this type don’t allow for interdependencies between data streams — each data stream is generated independently. This can cause the generator to generate unlikely or even illegal tests.

Reaching corner cases using pre-run generation is nearly impossible. The engineer has very little control over the sequences generated. This makes it difficult to force the occurrence of specific combinations. As such, pre-run generation makes a suitable complement to deterministic testing, but cannot replace it.


Another problem with pre-run generation is that it’s hard to maintain. As the verification process progresses, new parameters are often needed. This normally requires modifying the program, sometimes affecting delicate interdependencies between different parts of the generator.

Maintenance problems can also occur when updating the program after a bug is found in the RTL design. To temporarily avoid generating the "buggy" test sequence again, the engineer must patch the code until the bug is fixed in the RTL design. Once the bug is fixed, the code must be "unpatched." Several such patches often co-exist in the code. The patching/unpatching process sometimes introduces bugs into the generator, which may not be noticed until several hours or days of simulation have transpired.

The cost of developing and maintaining such a generator requires a minimum of several months per project, and the cost increases significantly as the generation becomes more complex.

A side effect of this methodology is that the full test is usually very large, since it is generated in advance. It is commonly loaded into a simulation memory at the beginning of the test and run from there. This significantly increases the memory requirements for simulation, often causing the simulator to swap memory, which slows the simulation down by orders of magnitude.

CHECKING STRATEGIES

The two most popular ways to determine if test results are good are to compare them to a reference model or to create rule-based checks. Both of these checking methods must include the temporal behavior or protocols of the device as well as the verification of data.

Reference models are most common for processor-like designs for which the correct result can be predicted relatively easily. Designers usually develop the reference model in C or C++. Stimuli are injected into the reference model as well as the device, and their outputs are compared. In gray- and white-box methodologies, the comparisons also include the state of internal registers and nodes.

Rule-based approaches are more common in communications devices for networking applications, where there can be several legal outputs for the same input, or where it’s not easy to predict the correct result. In this case, the engineer often uses specialized techniques to check data integrity and protocols, such as scoreboarding, which tracks information about cells or packets without worrying about the order in which they appear on the output ports.

Engineers perform these checks either on-the-fly or post-run. Simple checks and protocol checks can be performed on-the-fly by the stubs and monitors using an HDL. Post-run checks are often performed using a C/C++ or Perl/awk program. The outputs of the test are either saved in a simulator memory and then dumped into a file, or written into the file directly. The program reads the inputs and outputs and checks the correctness of the results. Often, these methodologies still require some amount of manual checking, usually achieved by viewing actual waveforms or data dumps.

The problem with these checking strategies arises from the way they are implemented today. Post-run checking wastes cycles. If a test runs for 500,000 cycles, but a bug occurred after cycle 2,000, then 498,000 cycles are wasted. In addition, since post-run checking cannot detect a problem in real-time, the designer does not have access to the values of the registers and memories of the device at the time the problem happened. In general, debugging these problems requires re-running the simulation to the appropriate point.

On-the-fly checking is more powerful. However, on-the-fly checks are most often implemented in Verilog or VHDL. These languages do not have a powerful temporal language to simplify protocol checks. They are low level and lack features like dynamic memory, which simplifies the process of writing the stubs/monitors and increases performance.

In addition, reference model checking is often hard to implement on-the-fly, since intermediate results are not always available. On-the-fly reference models also require a direct interface to the simulator (through PLI or FLI), which is not easy to write or maintain.


COVERAGE METRICS

Measuring progress is one of the most important tasks in verification and is the critical element that enables the designer to decide when to end the verification effort. Several methods are commonly used:

• Toggle testing: verifies over a series of tests that all nodes toggled at least once from 1 to 0 and back
• Code coverage: demonstrates over a series of tests that all the source lines were exercised; in many cases, there is also an indication as to whether branches in conditional code were executed; sometimes an indication of state-machine transitions is also available
• Tracking how many bugs are found each week: possibly the most common metric used to measure progress; after a period of a few weeks with very few or zero bugs found, the designer assumes that the verification process has reached a point of diminishing returns

Unfortunately, none of these metrics has any direct relation to the functionality of the device, nor is there any correlation to common user applications. For example, neither toggle testing nor code coverage can indicate if all the types of cells in a communications chip (with and without CRC errors) have entered on all ports. Neither can these metrics determine if all possible sequences of three instructions in a row were tested in a processor.

As a result, coverage is still measured mainly by the gut feeling of the verification manager, and eventually the decision to tape out is made by management without the support of concrete qualitative data.

Not knowing the real state of the verification progress causes engineers to perform many more simulations than necessary, trading off CPU cycles for "confidence." This usually results in redundant tests that provide no additional coverage or assurance that verification is complete. The real risk is that the design will be sent to production with bugs in it, resulting in another round of silicon. The cost of re-spinning silicon includes non-recoverable engineering (NRE) costs to perform the additional production process, the cost of extending the team’s work on the project, and the major cost of missing a time-to-market window. For most designs, this amounts to many millions of dollars.

SPEC-BASED VERIFICATION

Spec-based verification is a proven methodology for functional verification that solves many of the problems design and verification engineers encounter with today’s methodologies. Spec-based verification captures the rules embodied in the specifications (design/interface/functional test plan) in an executable form. An effective application of this methodology provides four essential capabilities to help break through the verification bottleneck:

• Automates the verification process, reducing a significant amount of manual work needed to develop the verification environment and tests
• Increases product quality by focusing the verification effort on areas of new functional coverage and by discovering bugs not anticipated by the functional test plan
• Provides functional coverage analysis capabilities to help measure the progress and completeness of the verification effort
• Raises the level of abstraction used to describe the environment and tests from the RTL level to the specification level, capturing the rules defined in the specs in a declarative form and automatically ensuring conformance to these rules


Figure 1: The spec-based verification methodology

ENABLING TECHNOLOGIES

FUNCTIONAL COVERAGE

Functional coverage analysis is a key enabling technology for the spec-based verification methodology. This technology allows the verification engineer to define exactly what functionality of the device should be monitored and reported. To accomplish this, the functional test plan is translated into executable directives fed directly to the functional coverage analyzer. This makes the entire functional test plan executable, including complex multi-cycle scenarios, state machine transitions, and temporal sequences.

Figure 2: Cross-coverage window showing the registers accessed for each CPU opcode
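As an illustration of how a test-plan item becomes an executable coverage directive, the cross coverage of Figure 2 could be captured along the following lines. This is a minimal sketch in e: the instruction struct is the one shown later in Figure 5, while the done event and the point at which it would be emitted are assumptions made for the example.

    extend instruction {
        event done;            -- assumed: emitted when an instruction completes in the DUT
        cover done is {
            item opcode;
            item op1;
            cross opcode, op1; -- registers accessed for each CPU opcode, as in Figure 2
        };
    };

Each item, and the cross of the two, then shows up in the coverage reports described below.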


The coverage information can also be crossed, providing information on interrelated functionality. This "querying" ability answers questions about the simultaneous occurrence of certain functions. For example, in a processor design, the verification engineer can cross information about instructions, addressing modes, and machine state to see exactly how many times each instruction was executed using the different addressing modes for every machine state.

Using functional coverage analysis, the engineer finally has a clear indication of which functionality has been exercised and, more importantly, which functionality has not. This focuses the verification effort and eliminates superfluous or redundant tests, ensuring that every test adds coverage. In so doing, the availability of functional coverage analysis has opened the door to major advancements in verification methodologies.

The ability to generate clear reports showing which parts of the functional test plan were tested has enabled a new approach to verification. Beginning with a base of automated tests, coverage reports are generated to indicate which parts of the functional test plan were exercised. This allows the engineer to close quickly on significant functional coverage of the device with minimal effort. Afterward, the only additional effort required is to generate tests focused on the functionality not yet covered. The coverage reports after each set of tests direct the verification engineer to the types of tests to focus on next. Deterministic tests are needed only for the corner cases that were not reached over the course of automated testing. This minimizes the time spent writing deterministic tests.

Functional coverage reports are also an essential tool for verification managers. They help monitor the progress of the verification effort, provide a clear view of the state of the verification, and accurately determine the moment the device is ready to be fabricated.

Functional coverage reports also enable the verification engineer to verify the effectiveness of the tests. Once a certain functionality has been tested enough (according to the metrics set by the engineer), other functionality can be targeted and tests that add no new coverage can be discarded. There is no need to continue testing "just in case."

The impact on the verification schedule when using this coverage approach is significant. It eliminates much of the manually intensive work of developing the deterministic tests, saving a great deal of frustrating work. It also significantly reduces the risk of sending the device to production while part of its functionality has not been properly tested. This coverage approach eliminates silicon re-spins, which in turn saves millions of dollars.

CONSTRAINT-DRIVEN GENERATION

The one feature spec-based verification offers that most significantly contributes to reducing the verification schedule is automated test generation. Armed with a constraint-driven test generator, a spec-based verification methodology can slash weeks or even months off the verification schedule, while placing enormous power in the hands of the engineer.

Figure 3: The constraint solver is the core technology enabling constraint-driven generation
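To suggest how a test constraint steers generation without removing randomness, here is a minimal sketch in e that reuses the instruction struct of Figure 5; the weights and the choice of opcodes are invented for the example.

    extend instruction {
        -- soft select: bias generation toward the jump opcodes while
        -- keeping every other legal opcode possible
        keep soft opcode == select {
            60: [JMP, JMPR, JMPC];
            40: others;
        };
    };

Dropping the constraint leaves the opcode fully random over the legal space defined by the spec constraints; replacing it with a hard keep (for example, keep opcode == JMPC) makes that field fully deterministic.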


Constraint-driven generators offer some key advantages over the more common, parameterized generators. First and foremost, they give the verification engineer full control over the generation process. By using simple constraints, engineers can generate tests that are completely random, completely deterministic, or anywhere in between. These constraints can be either spec constraints, which define the legal parameters that the generator must always hold to, or test constraints, which target the generator to a specific test from the functional test plan. Instead of randomizing only what the user specifically instructs the system to randomize, constraint-driven generators take the infinity-minus approach: they randomize everything except what the engineer specifically constrains the generator not to randomize. This allows the generator to approach any given test scenario from multiple paths and find bugs that were not identified in the functional test plan.

This capability to capture the “same” test scenario from different paths is critical. Most post-silicon functional bugs are not due to an inadequacy in the functional test plan, but to unanticipated usage by the end user or ambiguities in the interface specifications. It’s impossible to think of all the possible bugs when writing the functional test plan, so it is critical that all tests be run from multiple, different paths.

Beyond offering support for the common pre-run generation methodology, constraint-driven generators can also support the more advanced "on-the-fly" generation methodology. Using the on-the-fly approach, the generator interacts with the DUT and reacts to the state of the device in real-time. In this manner, even hard-to-reach corner cases can be captured because the generation constraints are state dependent.

On-the-fly generation also eliminates the need for big memories to hold the entire test. Instead of generating the test all at one time, the inputs are generated as required. This reduces the required memory and swapping by the simulator, significantly increasing simulation performance.

Constraint-driven generators are also generic, or design independent. They receive a description of the data elements to be generated (instructions, packets, or polygons) and generate them accordingly. If the designer introduces changes to the architecture, the generator easily adapts, making the tests easy to reuse.

One of the main challenges facing a verification engineer is to test corner cases in the functionality of the device. These corner cases are usually reached after a long sequence of inputs, and they’re often the result of several independent streams of input reaching a special combination at the same time. The constraint-driven generator’s ability to generate massive numbers of tests on-the-fly allows the generator to monitor the state of the device constantly and generate the required inputs at exactly the right time to reach the desired corner case. The result is high throughput of effective tests, which achieve high functional coverage. Thus, when used in combination with functional coverage analysis, constraint-driven generation provides easy confirmation that a corner case was reached, and under what conditions.

A powerful feature often missing in homegrown pre-run generators is the interface to the simulator. In generic constraint-driven generators, this simulator interface does not have to be specified for every project, but is included as part of the generic generator. Manually created generators require creation of a custom interface that is specific not only to the simulator, but also to a particular version of the simulator — a maintenance nightmare.

All of these capabilities supplied by on-the-fly constraint-driven generators result in a major reduction in the verification cycle and significant increases in verification quality.

ON-THE-FLY TEMPORAL CHECKS

Today’s highly integrated designs contain many components that are often developed by different engineers, resulting in different interpretations of the protocols between components. This can cause the component interfaces to become the weakest link in the design. On-the-fly temporal checks verify the temporal behavior and protocols associated with the design or system. They constantly monitor the design, sensing triggers (or events) that signal the beginning of a sequence. They then follow the sequence, verifying that it conforms to the temporal rules specified by the verification engineer.

There are several types of rules. In some cases, the protocol can be specified explicitly, defining exactly on which cycle each part of the protocol should be performed. In other cases, the exact cycle of a response can’t be defined; it is known only that it must happen eventually, or that it will occur within a certain interval of time. All of these checks — explicit, eventual, and interval — must be easy to specify and must run on-the-fly to allow for protocol verification and efficient debugging of interface problems.
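Following the style of Figure 6, the eventual and explicit forms of such rules might be written as shown below; the signal names and the cycle count are illustrative assumptions, not taken from any specification in the paper.

    expect rise('top.req') => eventually rise('top.ack');  -- eventual: must occur, no fixed deadline
    expect rise('top.req') => {[3]; rise('top.ack')};      -- explicit: exactly three cycles after the request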


Although temporal checks can be done using the current HDLs, their implementation is not always simple. New temporal constructs allow defining these rules in a simple and straightforward manner and are key to the spec-based verification methodology.

On-the-fly checking, particularly in conjunction with on-the-fly generation, is a memory-efficient approach to verification. Since the checks are performed on-the-fly, only relevant information is stored, and as soon as it is checked, it can be removed. On-the-fly checking also enhances the debug capability. When a check fails, it is possible to access the full state of the device at that point and generate a detailed report. Moreover, the test can then be aborted, since the subsequent cycles will not be useful and merely waste simulation cycles.

High-level temporal constructs also enable the engineer to define the rules independently. This allows the engineer to reuse checks developed for a lower-level block when verifying higher levels of integration that incorporate them.

In summary, on-the-fly temporal checks provide an effective means to capture the interface specification, verify the protocol of these interfaces, efficiently debug them, and eliminate the ineffective simulation cycles that follow a bug’s occurrence. The powerful temporal constructs used by these checkers can also minimize the size of complex checkers, reducing by a factor of four the time it takes to write these checkers.

THE SPECMAN ELITE METHODOLOGY

Integrated with the Cadence® Incisive® functional verification platform, Specman Elite® testbench automation delivers a spec-based verification methodology. The Specman Elite system allows you to capture the rules defined in your specifications (design/interface/functional test plan) and make them executable. It has all of the enabling technologies described above: functional coverage analysis, constraint-driven generation of functional tests, and all the constructs necessary for creating powerful data and temporal checkers.

Figure 4: The Specman Elite spec-based verification flow


The Specman Elite verification flow is very similar to the traditional flow, but is enhanced with the enabling technologies described above. As always, the inputs to the functional verification process are the design spec (which defines the device’s architecture, functionality, and protocols) and the interface spec (which defines the target system and the protocols between the DUT and the rest of the system). These specs are used to develop the functional test plan, which defines the verification strategy, the test environment, and the functionality and user scenarios that are to be tested. These tests should include typical tests, stress tests, error tests, and corner-case tests.

Once the test plan is defined, the test environment is developed. Existing code from previous projects written in Verilog, VHDL, or C can be used and linked into the system. The test environment includes structs, which define the data to be generated, and spec constraints (from the interface specification, see Figure 5). Spec constraints constrain the generator to generate only legal values. The verification environment also includes the data and temporal checks (see Figure 6), which define how to check protocols and monitor the simulations for correct behavior.

Interface spec: An instruction consists of an opcode representing the instruction set, op1, which can access registers zero through three, and op2, which is a byte.

    type command: [ADD, ADDI, SUB, SUBI, JMP, JMPR, JMPC, CALL, RETURN];
    type register: [REG0, REG1, REG2, REG3];
    struct instruction {
        opcode: command;
        op1: register;
        op2: byte;
    };

Interface spec: If the opcode is jump to an address in a register (JMPR), the second operand must be zero.

    extend instruction {
        keep (opcode == JMPR) => op2 == 0;
    };

Figure 5: Struct and spec constraint from the interface specification

Interface spec: An acknowledge must be received 3 to 5 cycles after a request is seen.

    expect rise('top.req') => {[3..5]; rise('top.ack')};

Note: signals that are quoted represent DUT signals from the HDL.

Figure 6: Temporal check from the interface specification

At this point, a few deterministic tests can be written and run to verify that the device is getting out of reset and performing the basic operations correctly. Once this is done, instead of writing many deterministic tests and checking them, constraint-driven tests can now be generated. Specman Elite testbench automation can generate many tests, drive them into the device, and check that the device responded according to the spec. But more importantly, the Specman Elite system collects the functional coverage information, showing what functionality was actually exercised. With this information, it is now easy to know what functionality has not been covered and to write more specific test constraints (see Figure 7) that target the generator to focus on the untested functionality. This process ensures an efficient verification cycle and guarantees that each test adds new coverage.
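To give a feel for how generated items might be driven during simulation, the following is a minimal, hypothetical sketch in e; the TCM name drive_cpu, the instruction count, and the decision to extend sys are choices made for the example rather than part of the Specman Elite flow described above.

    extend sys {
        drive_cpu() @sys.any is {
            for i from 1 to 100 do {
                var instr: instruction;
                gen instr;     -- the constraint solver picks values satisfying every keep constraint
                -- drive instr into the DUT here (BFM/stub code omitted)
                wait cycle;    -- advance one sampling event before generating the next item
            };
        };
        run() is also {
            start drive_cpu(); -- launch the generation loop when simulation begins
        };
    };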


Functional test plan: Test the jump on carry (JMPC) opcode when the carry bit is high.

    extend instruction {
        keep ('top.cpu.carry' == 1) => opcode == JMPC;
    };

Note: signals that are quoted represent DUT signals from the HDL.

Figure 7: Test constraint from the functional test plan

CONCLUSION

The ineffectiveness of current verification methodologies is apparent both in the proportion of time spent verifying a design versus the time spent designing it, and in the number of re-spins most designs require to become fully functional. The enormous resource expenditures required to verify today’s complex designs will more than double in the next generation of complexity. Verification teams already find themselves backed into a no-win situation with only two options — miss the schedule by an indeterminate amount, or abort verification and risk an almost certain re-spin.

Spec-based verification provides an effective composite of enabling technologies that automate resource-intensive manual processes and establish qualitative metrics to ensure sufficient functional coverage. Using these technologies increases the quality of the design while reducing the resources needed to implement a powerful verification methodology. Verification engineers report experiencing a two- to four-fold reduction in the verification schedule, while achieving significantly higher verification quality.


Cadence Design Systems, Inc.
Corporate Headquarters
2655 Seely Avenue
San Jose, CA 95134
800.746.6223
408.943.1234
www.cadence.com

© 2005 Cadence Design Systems, Inc. All rights reserved. Cadence, the Cadence logo, Incisive, Specman Elite, and Verilog are registered trademarks of Cadence Design Systems, Inc. All others are the properties of their respective holders.

6372 08/05
