What is formal verification?

Formal verification is the use of mathematical analysis to prove or disprove the correctness of a design with respect to a set of assertions specifying intended design behavior. In chip hardware design, formal verification is a systematic process to verify that the design intent (assertion specification) is preserved in the implementation (RTL or gate-level netlist).    

How is formal verification different from simulation?

In simulation, test cases (scenarios) are created by hand or by an automated test bench, and then executed on the RTL or gate-level design. Given the huge number of states in even a small design, it is impossible to simulate more than a minuscule percentage of design behavior. Simulation is inherently probabilistic; the chances of exercising every scenario that would reveal a design bug are small. Corner-case bugs, requiring very specific conditions to trigger them, are often missed in simulation. Formal verification does not execute the design, so it requires neither tests nor test benches. Instead, it statically analyzes the design for all possible input sequences and all possible state values, checking to see if any assertions can be violated. These violations are often called counter-examples since they represent a way in which the implementation does not match the specification of intended behavior. If no such violations are found, formal verification can often prove that there is no possible way to violate an assertion. Ideally, formal verification is 100% exhaustive, proving all assertions “safe” once all bugs have been found and fixed.
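
To make the contrast concrete, consider a minimal sketch (the module and signal names are hypothetical, not from any real design): a bug that triggers only when two rare conditions coincide. Random simulation may never generate that combination, while formal analysis finds it by construction.

```systemverilog
// Hypothetical arbiter with a corner-case bug: if both requests arrive
// on the same cycle that flush is high, both grants are driven.
module arb (
  input  logic clk, rst_n,
  input  logic req0, req1, flush,
  output logic gnt0, gnt1
);
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)     {gnt0, gnt1} <= 2'b00;
    else if (flush) {gnt0, gnt1} <= {req0, req1};         // bug: priority not applied
    else            {gnt0, gnt1} <= {req0, ~req0 & req1}; // req0 has priority
  end

  // Intended behavior: grants are mutually exclusive. Formal analysis
  // considers all input sequences, so it reports the req0 && req1 && flush
  // counter-example that a random test bench may never hit.
  a_gnt_onehot: assert property (@(posedge clk) disable iff (!rst_n)
      !(gnt0 && gnt1));
endmodule
```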

Is it possible to verify every design exhaustively?

Theoretically, any set of assertions for any design can be proven formally. In practice, limitations in hardware resources (servers) and the sheer complexity of some large designs mean that not every assertion can be proven. An effective methodology for how and where to apply formal verification is essential for chip project success. For example, certain types of design blocks may be verified formally, with no simulation performed at all. A mix of simulation and formal verification is common in larger blocks and subsystems. At the full-chip level, there are some types of assertions that can be proven even for the largest designs.

What is a bounded proof?

When a formal tool proves an assertion, this means that there is no way to violate the assertion given any legal sequence of input values over any number of clock cycles. When a full proof cannot be completed, the tool may instead report a bounded proof: a guarantee that the assertion holds for all input sequences up to a certain number of clock cycles. A bounded proof is incomplete, but valuable because it quantifies the amount of formal analysis performed. If the tool reports a bounded proof of 25 cycles for a particular assertion, this means that there is no way to violate the assertion for all sequences of up to 25 clock cycles. Whether this is sufficient depends upon the design; there are techniques to calculate the depth at which a bounded proof is tantamount to a full proof.
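
As an illustration (the design is contrived for the example), here is an assertion whose shallowest counter-example is 100 cycles deep; a 25-cycle bounded proof would report no violation despite the latent failure:

```systemverilog
// The assertion below can only be violated once the counter reaches 100,
// which takes at least 100 cycles after reset. A bounded proof of depth 25
// finds no violation, even though the assertion ultimately fails at cycle 100.
module deep_bug (input logic clk, rst_n);
  logic [6:0] count;
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)           count <= '0;
    else if (count < 100) count <= count + 1'b1;
  end

  a_count_limit: assert property (@(posedge clk) disable iff (!rst_n)
      count < 100);
endmodule
```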

What is an abstraction in formal verification?

One way to run formal analysis more efficiently on a large or complex design is to replace parts of the design with simpler, more abstract structures that are easier to analyze. Common examples include reducing the size of on-chip memories, the width of counters, and the depth of FIFOs and buffers. Formal tools may handle certain abstractions automatically, but more complex abstractions are performed by the verification team under the guidance of a formal methodology. Either way, the changes to the design must be made carefully so that any proofs obtained are also valid for the original design.
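
A minimal sketch of the idea, assuming a parameterized design (the names and the specific reduction are illustrative): formal analysis runs on a configuration with a much smaller depth, and the team must then argue that the proof carries over to the full-size configuration.

```systemverilog
// FIFO occupancy tracker with parameterized depth.
module fifo #(parameter int DEPTH = 1024) (
  input  logic clk, rst_n, push, pop,
  output logic full, empty
);
  logic [$clog2(DEPTH):0] count;   // current occupancy
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)             count <= '0;
    else if (push && !full) count <= count + 1'b1;
    else if (pop && !empty) count <= count - 1'b1;
  end
  assign full  = (count == DEPTH);
  assign empty = (count == 0);

  // Occupancy never exceeds the configured depth.
  a_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
      count <= DEPTH);
endmodule

// Formal-only wrapper: analyze an 8-deep instance in place of the
// 1024-deep production configuration.
module fifo_formal_top (input logic clk, rst_n, push, pop);
  logic full, empty;
  fifo #(.DEPTH(8)) dut (.*);
endmodule
```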

What sort of expertise is required for formal verification?

While a PhD may have been mandatory to run early, academic formal tools, this is not the case for today’s commercial products. No formal knowledge at all is needed to use formal equivalence checking tools or formal applications (apps) that tackle specific verification problems. Assertion-based verification (ABV), which includes writing assertions and perhaps other types of properties, requires knowledge of an assertion language plus some understanding of how formal tools work and how to best use them. Proving end-to-end properties for very large designs may require facility with abstractions and significant formal expertise, though not a PhD in mathematics.

Does formal verification work effectively on data paths?

Users traditionally focused formal tools on control logic, where bugs lurked due to combinations of corner-case conditions never hit in simulation. But some of these conditions derive from data path logic, for example, an arithmetic overflow. Today’s leading formal tools handle both control and data equally well. In fact, Xilinx presented a paper at DesignCon 2018 documenting the full formal proof of a 32-bit multiplier, long considered impossible. Formal equivalence checking tools run on the entire design, including both control paths and data paths.
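
As a sketch of what such a datapath check can look like (the module, the two-cycle latency, and the bind target are hypothetical; this is not the checker from the cited paper), the multiplier's output is compared against the reference “*” operator:

```systemverilog
// End-to-end datapath property: the pipelined multiplier's result must
// equal the reference product of the operands sampled two cycles earlier
// (assuming a two-stage pipeline).
module mult_check (
  input logic        clk, rst_n,
  input logic [31:0] a, b,
  input logic [63:0] result   // output of the multiplier under test
);
  a_mult_correct: assert property (@(posedge clk) disable iff (!rst_n)
      result == 64'($past(a, 2)) * 64'($past(b, 2)));
endmodule

// The checker would typically be attached with a bind directive, e.g.:
//   bind mult32 mult_check u_chk (.*);
```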


Assertions/ABV


How are assertions related to properties and constraints?

Unfortunately, the terminology used across the industry is not entirely consistent. Some people use the terms “assertion” and “property” interchangeably. Most experts consider property a broader term, indicating some aspect of intended behavior, with three categories. Assert properties, or assertions, are the statements of design intent proven or violated by formal verification. Assume properties, also known as constraints, are rules that must be followed during the formal analysis. They ensure that only legal input sequences or state values are considered. Finally, cover properties can be targeted by formal verification to provide a metric for thoroughness. Especially if a full proof of every assertion cannot be obtained, knowing that the formal analysis has found a way to exercise every cover property builds confidence in the correctness of the design.
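
The three categories look like this in SVA (a sketch for a hypothetical request/grant interface; the signal names and the four-cycle bound are illustrative):

```systemverilog
module req_gnt_props (input logic clk, rst_n, req, gnt, busy);

  // Assume (constraint): the environment holds req until gnt arrives.
  m_req_held: assume property (@(posedge clk) disable iff (!rst_n)
      req && !gnt |=> req);

  // Assert (assertion): every request is granted within four cycles.
  a_gnt_latency: assert property (@(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt);

  // Cover: a grant while the unit is busy is reachable.
  c_gnt_while_busy: cover property (@(posedge clk) gnt && busy);
endmodule
```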

How are assertions specified?

Although VHDL has had an “assert” statement for many years, Verilog had no equivalent. This has been rectified in SystemVerilog, which provides a rich set of constructs for specifying assertions, constraints, and coverage points. This is known as the SystemVerilog Assertions (SVA) subset, and it has become so popular that it is used to verify VHDL and SystemC designs in addition to SystemVerilog designs. The main alternative is the Property Specification Language (PSL), which offers different “flavors” of constructs to look more like the language being used for the design.

Is a “witness” the same as a counter-example?

The concepts are very similar. When a formal tool finds a way to violate an assertion, this means that there is a sequence of input values, obeying all constraints, that forms a counter-example. This sequence can be displayed within the formal environment, but most formal tools also have the ability to generate a test that can be run in the preferred simulator. Thus, a counter-example can be debugged the same way as a failing assertion from any other simulation test. An assertion violation usually means a bug in the design, but it could be due to an error in the assertion specification itself. Either way, the discrepancy can be debugged in a familiar environment.

A witness is a sequence of input values that makes the assertion true while obeying all the constraints. It is an example of how to satisfy the assertion. Most formal tools can also generate a simulation test for a witness. This can be very helpful if the simulation team is having a hard time trying to figure out how to exercise a particular part of the design. Similarly, most formal tools will also generate a simulation test showing how to trigger each cover property. Looking at how a property can be covered may provide guidance in how to write more effective simulation tests.

Are different assertions and constraints required for different tools?

Assertion standards have made formal verification more popular because they foster portability across tools from different vendors. The traditional VHDL “assert” construct, the SystemVerilog Assertions (SVA) subset, and the Property Specification Language (PSL) are all standardized and widely supported. SVA is especially popular because most design and verification engineers are trained on SystemVerilog and comfortable using it. Some formal tools even support using SVA to verify VHDL and SystemC designs in addition to SystemVerilog designs.

Is it hard to write assertions for complex protocols?

Capturing all the rules for a complex interface protocol is not a simple task, but that has more to do with the complexity of the rules than the format used. Much as pre-packaged verification IP (VIP) for standard protocols is commercially available for simulation, many formal tool vendors make assertion-based VIP for standards available as well. Using VIPs requires minimal formal knowledge and produces quick, clear results on compliance to the standard.
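
For a flavor of what such VIP contains, here are two rules of a generic valid/ready handshake written as a bindable checker (a simplified sketch, not taken from any commercial VIP):

```systemverilog
module handshake_rules (
  input logic        clk, rst_n,
  input logic        valid, ready,
  input logic [31:0] data
);
  // Rule 1: once asserted, valid stays high until the transfer completes.
  a_valid_held: assert property (@(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid);

  // Rule 2: data is stable while the transfer is stalled.
  a_data_stable: assert property (@(posedge clk) disable iff (!rst_n)
      valid && !ready |=> $stable(data));
endmodule
```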

Is it possible to know when enough assertions have been written?

There are several older methods of estimating assertion quality and completeness, but they are overly optimistic, resulting in missed bugs. The only accurate measurement is model-based mutation coverage. This approach automatically inserts faults (bugs) into the formal model and checks to see whether any assertions detect them. Model-based mutation coverage reports the portions of the design in which a bug would not be detected by any existing assertion. This makes it easier for designers or verification engineers to determine which assertions must be added for full coverage.


Formal Apps


What is a formal app?

Full-scale formal verification is very effective, but it does require the specification of assertions and usually some constraints as well. In fact, “assertion-based verification” (ABV) is a common term for this approach. However, there are many verification problems that can be solved by formal verification with no need for user-specified assertions, and few if any constraints. Each of these problems is addressed by a formal application (app) providing a pushbutton solution. Any needed assertions are generated by the app internally from the design RTL implementation and possibly additional files such as connectivity spreadsheets or power-intent specifications; a sketch of one such generated assertion follows the list below.

Examples of verification problems addressed by apps include:
  • Register specification
  • Connectivity
  • X-Propagation
  • Protocol checks
  • Clock-domain crossings
  • Floating-point operations
  • Safety verification
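
For example, a connectivity app might turn one spreadsheet row, say “when select is 2'b01, top-level pin dbg_out follows core1's trace output,” into an internal assertion like the following sketch (the names and the row are invented for illustration; the user never writes this code):

```systemverilog
module conn_check (
  input logic       clk,
  input logic [1:0] select,
  input logic       dbg_out, core1_trace
);
  // Generated from one (invented) spreadsheet row: when select==2'b01,
  // top-level pin dbg_out must follow core1_trace.
  a_conn_row: assert property (@(posedge clk)
      (select == 2'b01) |-> (dbg_out == core1_trace));
endmodule
```
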
Does it take a lot of work to get results from formal tools?

The amount of effort is closely correlated with the level of expertise required. Formal applications (apps) run automatically on the design and deliver results immediately. Writing assertions for custom verification challenges takes more time and effort, while solving the most complex formal problems may be a full-time job for a true expert.


Coverage


Is it possible to combine coverage metrics from simulation and formal?

Verification leads need to look at the combined coverage metrics from all tools to assess verification progress and determine what remains to be done. Simulation structural coverage measures how well the testbench and test suite cover the design; model-based mutation coverage measures how well the design is covered by assertions. It is possible to integrate these two metrics into a single coverage view.

Is there a way to get a unified view of verification progress?

Beyond combining coverage metrics, project managers need to have a single view of verification progress across all techniques. Verification engineers can write the verification plan to indicate which goals will be addressed by simulation, emulation, and formal tools. Results, including uniquely formal metrics such as bounded and complete proofs, can be integrated and annotated back into the plan.


Equivalence Checking


What is equivalence checking?

There are times in the course of a chip project when the design exists in two representations that are supposed to be functionally equivalent. The most common example is the RTL design that feeds into logic synthesis and the resulting gate-level netlist. Logic synthesis is supposed to optimize the design and map it into the target gate library, but not change the functionality. One way to try to verify that two representations are equivalent is to simulate the netlist using the full test suite run on the RTL. However, gate-level simulations are slow and, as with any simulation-based method, only a tiny percentage of design behavior can be exercised. The consequences of a synthesis bug that changes the netlist are serious, possibly a chip turn costing millions of dollars and adding months to the project schedule.

For these reasons, formal equivalence checking has been a part of the development flow for ASICs for years. As FPGAs have grown from simple devices to full system-on-chip (SoC) designs, their developers have adopted many of the verification techniques used by their ASIC counterparts. When it comes to formal equivalence checking, in some ways this capability is even more important for FPGAs than ASICs. To deliver maximum speed, FPGA synthesis tools perform a wide variety of optimizations that change the internal structure of the design but should not affect functionality. Further optimizations occur during the place-and-route process, so the RTL should be checked against the post-layout netlist as well as the post-synthesis netlist.

Formal equivalence checking is also commonly used to compare two RTL versions of the design, or even two netlists. Automated tools may insert test structures, clock-domain-crossing synchronizers, level shifters between voltage domains, and other specialized logic. Equivalence checking can verify that the inserted RTL or gates do not corrupt the original functionality.

What is the difference between combinational and sequential equivalence checking?

As with other types of formal analysis, equivalence checking of a large design is a tough mathematical problem. For most ASICs and simple FPGAs, the problem can be simplified by mapping the state elements (memory and registers) between the two designs. If this is possible, combinational equivalence checking needs to consider only the combinational logic between the state elements.

Sequential equivalence checking, in its purest form, treats two designs as black boxes in which the inputs and outputs must match but the internal state can be entirely different. In practice, two designs may be similar but not close enough for combinational equivalence checking. This is commonly the case for FPGA synthesis, which may reorder state machines or migrate logic across registers to meet timing requirements. Thus, proving formal equivalence for FPGAs typically employs sequential checking for optimized parts of the design and combinational checking for the remainder.

How is equivalence checking related to other formal approaches?

It is common to divide formal verification into equivalence checking and property checking. The latter category includes both assertion-based verification and automated formal solutions such as apps. A simple way to conceptually unify these two categories is to think of equivalence checking as ABV on a model containing both designs to be compared as well as a set of assertions specifying that the outputs of the two designs must match. This does not necessarily represent how actual tools are built, but it may help in understanding.
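
A sketch of that conceptual model (the module names are hypothetical): a “miter” wrapper feeds identical inputs to both representations and asserts that the outputs always match.

```systemverilog
module miter (input logic clk, rst_n, input logic [7:0] in);
  logic [7:0] out_rtl, out_gate;

  // Both representations of the design see identical inputs.
  design_rtl  u_rtl  (.clk, .rst_n, .in, .out(out_rtl));   // golden RTL
  design_gate u_gate (.clk, .rst_n, .in, .out(out_gate));  // netlist to compare

  // Equivalence expressed as a single assertion: outputs always match.
  a_equiv: assert property (@(posedge clk) disable iff (!rst_n)
      out_rtl == out_gate);
endmodule
```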

Can acceleration, emulation, or FPGA-based prototyping replace formal verification?

Nothing can replace formal analysis. Simulation acceleration uses hardware to run faster, so it will execute more tests in a given time, but it still exercises very little design behavior. In-circuit emulation (ICE) and prototypes built using FPGAs are typically designed to be plugged into the end system in lieu of the actual chip under design. They may run even faster than acceleration and will exercise different aspects of the design. While ICE and prototypes are valuable for hardware-software co-verification, they are in no way a substitute for the exhaustive nature of formal verification.

Is formal verification needed for FPGAs?

It is no longer sufficient for FPGA designers to defer verification to the bring-up lab. Modern FPGAs are complex system-on-chip (SoC) devices with all the complexity of their ASIC counterparts. The difficulty of lab debug and long reprogramming cycles mandate that FPGA teams adopt ASIC-like verification processes. Formal verification tools work equally well on both technologies and have been adopted on many FPGA projects.

Equivalence checking is arguably even more important for FPGAs than for ASICs. The FPGA synthesis and place-and-route processes often make significant changes to the design structure, including moving logic across register stages, in order to maximize use of the underlying chip technology. These manipulations present some risk of altering design functionality. Formal equivalence checking can catch any such problems, but sequential equivalence checking is required since there is no 1-to-1 mapping between state elements in the RTL source and the optimized netlist.