# What is formal verification?

Formal verification is the use of mathematical analysis to prove or disprove the correctness of a design with respect to a set of assertions specifying intended design behavior. In chip hardware design, formal verification is a systematic process to verify that the design intent (assertion specification) is preserved in the implementation (RTL or gate-level netlist).


## How is formal verification different from simulation?

In simulation, test cases (scenarios) are created by hand or by an automated test bench and then executed on the RTL or gate-level design. Given the huge number of states in even a small design, it is impossible to simulate more than a minuscule percentage of design behavior. Simulation is inherently probabilistic; the chances of exercising every scenario that would reveal a design bug are small. Corner-case bugs, which require very specific conditions to trigger them, are often missed in simulation.

Formal verification does not execute the design, so it requires neither tests nor test benches. Instead, it statically analyzes the design for all possible input sequences and all possible state values, checking to see whether any assertions can be violated. These violations are often called counter-examples, since they represent a way in which the implementation does not match the specification of intended behavior. If no such violations are found, formal verification can often prove that there is no possible way to violate an assertion. Ideally, formal verification is 100% exhaustive, proving all assertions “safe” once all bugs have been found and fixed.
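The contrast can be illustrated with a toy model. The following Python sketch (a hypothetical stand-in for a real design, not how actual formal tools work) plants a corner-case bug that only fires in one specific state. Exhaustive state exploration finds it immediately, while a few short random-simulation runs cannot reach it at all:

```python
import itertools
import random

# Toy "design": a 2-bit state register updated by a 1-bit input each cycle.
def step(state, inp):
    return (state + inp) % 4

# Hypothetical planted corner-case bug: misbehaves only in state 3 with input 1.
def output_ok(state, inp):
    return not (state == 3 and inp == 1)

def exhaustive_check(depth=4):
    """'Formal' style: enumerate every reachable (state, input) pair."""
    reached = {0}
    for _ in range(depth):
        for s, i in itertools.product(sorted(reached), (0, 1)):
            if not output_ok(s, i):
                return (s, i)          # counter-example found
            reached.add(step(s, i))
    return None

def random_check(tests=5, cycles=2, seed=1):
    """'Simulation' style: a handful of short random tests."""
    rng = random.Random(seed)
    for _ in range(tests):
        s = 0
        for _ in range(cycles):
            i = rng.randint(0, 1)
            if not output_ok(s, i):
                return (s, i)
            s = step(s, i)
    return None

print(exhaustive_check())  # (3, 1): the exhaustive search finds the bug
print(random_check())      # None: short random runs never reach state 3
```

The random runs here are only two cycles long, so they can never reach state 3 from reset; real corner-case bugs are missed for the same structural reason, just at a vastly larger scale.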

## Can acceleration, emulation, or FPGA-based prototyping replace formal verification?

Nothing can replace formal analysis. Simulation acceleration uses hardware to run faster, so it will execute more tests in a given time, but it still exercises very little design behavior. In-circuit emulation (ICE) and prototypes built using FPGAs are typically designed to be plugged into the end system in lieu of the actual chip under design. They may run even faster than acceleration and will exercise different aspects of the design. While ICE and prototypes are valuable for hardware-software co-verification, they are in no way a substitute for the exhaustive nature of formal verification.

## Is it possible to verify every design exhaustively?

Theoretically, any set of assertions for any design can be proven formally. In practice, limitations in hardware resources (servers) and the sheer complexity of some large designs mean that not every assertion can be proven. An effective methodology on how and where to apply formal verification is essential for chip project success. For example, certain types of design blocks may be verified formally, with no simulation performed at all. A mix of simulation and formal verification is common in larger blocks and subsystems. At the full-chip level, there are some types of assertions that can be proven even for the largest designs.

## What is an abstraction in formal verification?

One way to run formal analysis more efficiently on a large or complex design is to replace parts of the design with simpler, more abstract structures that are easier to analyze. Common examples include reducing the size of on-chip memories, the width of counters, and the depth of FIFOs and buffers. Formal tools may handle certain abstractions automatically, but more complex abstractions are performed by the verification team under the guidance of a formal methodology. Either way, the changes to the design must be made carefully so that any proofs obtained are also valid for the original design.
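A width-reduction abstraction can be sketched in a few lines of Python (a hypothetical toy model; real abstractions operate on RTL). If the property of interest depends only on saturation behavior, a narrow counter preserves the proof while exploring far fewer states:

```python
# Hypothetical saturating counter; the property only cares about saturation,
# so a wide counter can be abstracted to a much narrower one.
def make_counter(width):
    limit = (1 << width) - 1
    def step(count, enable):
        return min(count + enable, limit)   # saturate at the maximum value
    return step, limit

def check_property(width):
    """Exhaustively verify: once saturated, the counter stays saturated."""
    step, limit = make_counter(width)
    reachable, frontier = {0}, {0}
    while frontier:
        nxt = set()
        for c in frontier:
            for enable in (0, 1):
                n = step(c, enable)
                if c == limit and n != limit:
                    return False, len(reachable)   # property violated
                if n not in reachable:
                    reachable.add(n)
                    nxt.add(n)
        frontier = nxt
    return True, len(reachable)

ok_small, states_small = check_property(2)    # abstract 2-bit counter: 4 states
ok_big, states_big = check_property(12)       # wider counter: 4096 states
# Both prove the same property; the abstraction explores 1000x fewer states.
```

The abstraction is sound here because the property never mentions the intermediate count values; a careless reduction that changed saturation behavior would invalidate the proof for the original design.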

## What is a bounded proof?

When a formal tool fully proves an assertion, this means that there is no way to violate the assertion given any legal sequence of input values over any number of clock cycles. A bounded proof, in contrast, establishes this only up to a fixed number of cycles. It is incomplete, but valuable because it quantifies the amount of formal analysis performed. If the tool reports a bounded proof of 25 cycles for a particular assertion, this means that there is no way to violate the assertion for all sequences of up to 25 clock cycles. Whether this is sufficient depends upon the design; there are techniques to calculate the depth at which a bounded proof is tantamount to a full proof.
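A minimal sketch of bounded checking, using a hypothetical FIFO occupancy counter with a planted overflow bug (toy Python model, not a real tool's algorithm). The bug needs five consecutive pushes to trigger, so the assertion has a bounded proof of depth 4 but fails at depth 5:

```python
from itertools import product

DEPTH = 4  # hypothetical FIFO capacity

def step(occ, push, pop):
    # Occupancy counter with a planted bug: no clamp at the upper limit.
    return max(0, occ + push - pop)

def assertion(occ):
    return occ <= DEPTH               # claim: the FIFO never overflows

def bounded_proof(k):
    """Check the assertion for every input sequence of up to k cycles."""
    inputs = [(push, pop) for push in (0, 1) for pop in (0, 1)]
    for depth in range(1, k + 1):
        for seq in product(inputs, repeat=depth):
            occ = 0
            for push, pop in seq:
                occ = step(occ, push, pop)
                if not assertion(occ):
                    return False      # counter-example within k cycles
    return True                       # bounded proof: safe up to depth k

print(bounded_proof(4))  # True: no overflow in any sequence up to 4 cycles
print(bounded_proof(5))  # False: 5 consecutive pushes overflow the FIFO
```

A bounded proof of 4 here would be misleadingly reassuring, which is exactly why knowing the required proof depth for a given design matters.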

## How are assertions related to properties and constraints?

Unfortunately, the terminology used across the industry is not entirely consistent. Some people use the terms “assertion” and “property” interchangeably. Most experts consider property a broader term, indicating some aspect of intended behavior, with three categories. Assert properties, or assertions, are the statements of design intent proven or violated by formal verification. Assume properties, also known as constraints, are rules that must be followed during the formal analysis. They ensure that only legal input sequences or state values are considered. Finally, cover properties can be targeted by formal verification to provide a metric for thoroughness. Especially if a full proof of every assertion cannot be obtained, knowing that the formal analysis has found a way to exercise every cover property builds confidence in the correctness of the design.
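The three property categories can be sketched as predicates over a toy design (hypothetical Python model of a two-input priority arbiter; all names are invented for illustration):

```python
from itertools import product

# Hypothetical priority arbiter with two request inputs and two grant outputs.
def design(req0, req1):
    g0 = req0                     # request 0 has priority
    g1 = req1 and not req0
    return g0, g1

def assume_prop(req0, req1):      # constraint: the two never request together
    return not (req0 and req1)

def assert_prop(g0, g1):          # assertion: grants are mutually exclusive
    return not (g0 and g1)

def cover_prop(g0, g1):           # cover: grant 1 is exercised at least once
    return g1

violated, covered = False, False
for req0, req1 in product((False, True), repeat=2):
    if not assume_prop(req0, req1):
        continue                  # illegal input, excluded by the constraint
    g0, g1 = design(req0, req1)
    violated = violated or not assert_prop(g0, g1)
    covered = covered or cover_prop(g0, g1)

print(violated, covered)  # False True: assertion proven, cover property hit
```

Note the role each kind plays: the assume property prunes the input space, the assert property is what the analysis tries to violate, and the cover property confirms the analysis actually reached interesting behavior.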

## How are assertions specified?

Although VHDL has had an “assert” statement for many years, Verilog lacked an equivalent. This was rectified in SystemVerilog, which provides a rich set of constructs for specifying assertions, constraints, and coverage points. Known as the SystemVerilog Assertions (SVA) subset, it has become so popular that it is used to verify VHDL and SystemC designs in addition to SystemVerilog designs. The main alternative is the Property Specification Language (PSL), which offers different “flavors” of its constructs so that they look more like the language being used for the design.

## Is a “witness” the same as a counter-example?

The concepts are very similar. When a formal tool finds a way to violate an assertion, this means that there is a sequence of input values, obeying all constraints, that forms a counter-example. This sequence can be displayed within the formal environment, but most formal tools also have the ability to generate a test that can be run in the preferred simulator. Thus, a counter-example can be debugged the same way as a failing assertion from any other simulation test. An assertion violation usually means a bug in the design, but it could be due to an error in the assertion specification itself. Either way, the discrepancy can be debugged in a familiar environment.

A witness is a sequence of input values that makes the assertion true while obeying all the constraints. It is an example of how to satisfy the assertion. Most formal tools can also generate a simulation test for a witness. This can be very helpful if the simulation team is having a hard time trying to figure out how to exercise a particular part of the design. Similarly, most formal tools will also generate a simulation test showing how to trigger each cover property. Looking at how a property can be covered may provide guidance in how to write more effective simulation tests.
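Both searches can be sketched together (hypothetical toy model in Python: a busy flag set by `start` and cleared by `stop`). The same exhaustive enumeration yields a witness when the assertion holds along a legal trace, and a counter-example when it does not:

```python
from itertools import product

# Hypothetical design: a busy flag that sets on 'start' and clears on 'stop'.
def step(busy, start, stop):
    return (busy or start) and not stop

def constraint(start, stop):
    return not (start and stop)       # assume: never both in the same cycle

def assertion(busy, start, stop):
    return not (start and busy)       # claim: no start while already busy

def search(depth):
    """Enumerate all legal input sequences, collecting one trace of each kind."""
    counterexample = witness = None
    inputs = [(s, t) for s in (0, 1) for t in (0, 1)]
    for seq in product(inputs, repeat=depth):
        if not all(constraint(s, t) for s, t in seq):
            continue                  # skip traces that break the constraint
        busy, ok = False, True
        for start, stop in seq:
            ok = ok and assertion(busy, start, stop)
            busy = step(busy, start, stop)
        if ok and witness is None:
            witness = seq             # a legal trace satisfying the assertion
        if not ok and counterexample is None:
            counterexample = seq      # a legal trace violating the assertion
    return counterexample, witness

cex, wit = search(2)
# cex: e.g. start asserted in two consecutive cycles while still busy
# wit: e.g. a quiet trace on which the assertion is never violated
```

A real tool would, of course, use symbolic rather than explicit enumeration, but the two artifacts it produces correspond to exactly these two kinds of trace.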

## What is a formal app?

Full-scale formal verification is very effective, but it does require the specification of assertions and usually some constraints as well. In fact, “assertion-based verification” (ABV) is a common term for this approach. However, there are many verification problems that can be solved by formal verification with no need for user-specified assertions, and few if any constraints. Each of these problems is addressed by a formal application (app) providing a pushbutton solution. Any needed assertions are generated by the app internally from the design RTL implementation and possibly additional files such as connectivity spreadsheets or power-intent specifications.
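The flavor of such an app can be sketched in Python. A connectivity-checking app, for example, might read a spreadsheet of intended connections and generate one assertion per row; the table entries and signal names below are entirely hypothetical:

```python
# Hypothetical connectivity table (source pin -> destination pin), the kind
# of input a connectivity-checking app might read from a spreadsheet.
connectivity = {
    "soc.uart.tx":  "soc.pad_ring.uart_tx",
    "soc.spi.sclk": "soc.pad_ring.spi_clk",
}

def generate_assertions(table):
    """Auto-generate one 'signals must match' assertion per table row."""
    return [f"assert property ({src} == {dst});" for src, dst in table.items()]

for a in generate_assertions(connectivity):
    print(a)
```

The user supplies only the table; the app writes, proves, and reports on the assertions, which is what makes the flow pushbutton.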

Examples of verification problems addressed by apps include connectivity checking, driven by a spreadsheet describing the intended connections, and low-power verification, driven by a power-intent specification.

## What is equivalence checking?

There are times in the course of a chip project when the design exists in two representations that are supposed to be functionally equivalent. The most common example is the RTL design that feeds into logic synthesis and the resulting gate-level netlist. Logic synthesis is supposed to optimize the design and map it into the target gate library, but not change the functionality. One way to try to verify that two representations are equivalent is to simulate the netlist using the full test suite run on the RTL. However, gate-level simulations are slow and, as with any simulation-based method, only a tiny percentage of design behavior can be exercised. The consequences of a synthesis bug that changes the netlist are serious, possibly a chip turn costing millions of dollars and adding months to the project schedule.

For these reasons, formal equivalence checking has been a part of the development flow for ASICs for years. As FPGAs have grown from simple devices to full system-on-chip (SoC) designs, their developers have adopted many of the verification techniques used by their ASIC counterparts. When it comes to formal equivalence checking, in some ways this capability is even more important for FPGAs than ASICs. To deliver maximum speed, FPGA synthesis tools perform a wide variety of optimizations that change the internal structure of the design but should not affect functionality. Further optimizations occur during the place-and-route process, so the RTL should be checked against the post-layout netlist as well as the post-synthesis netlist.

Formal equivalence checking is also commonly used to compare two RTL versions of the design, or even two netlists. Automated tools may insert test structures, clock-domain-crossing synchronizers, level shifters between voltage domains, and other specialized logic. Equivalence checking can verify that the inserted RTL or gates do not corrupt the original functionality.

## What is the difference between combinational and sequential equivalence checking?

As with other types of formal analysis, equivalence checking of a large design is a tough mathematical problem. For most ASICs and simple FPGAs, the problem can be simplified by mapping the state elements (memory and registers) between the two designs. If this is possible, combinational equivalence checking needs to consider only the combinational logic between the state elements.
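A toy illustration of the combinational case (Python sketch; real checkers compare logic cones symbolically, not by truth table, but the idea is the same). Once the registers are mapped, each cone of combinational logic can be compared over all input values:

```python
from itertools import product

# Hypothetical RTL-level expression and its synthesized, restructured form.
def rtl_cone(a, b, c):
    return (a and b) or (a and c)     # as written in the RTL

def gate_cone(a, b, c):
    return a and (b or c)             # factored by synthesis, same function

def combinationally_equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational cones over all input values."""
    return all(f(*ins) == g(*ins)
               for ins in product((False, True), repeat=n_inputs))

print(combinationally_equivalent(rtl_cone, gate_cone, 3))  # True
```

The mapping of state elements is what makes this decomposition possible: each register-to-register cone is a small, self-contained problem instead of one giant sequential one.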

Sequential equivalence checking, in its purest form, treats two designs as black boxes in which the input and outputs must match but the internal state can be entirely different. In practice, two designs may be similar but not close enough for combinational equivalence checking. This is commonly the case for FPGA synthesis, which may reorder state machines or migrate logic across registers to meet timing requirements. Thus, proving formal equivalence for FPGAs typically employs sequential checking for optimized parts of the design and combinational checking for the remainder.

## How is equivalence checking related to other formal approaches?

It is common to divide formal verification into equivalence checking and property checking. The latter category includes both assertion-based verification and automated formal solutions such as apps. A simple way to conceptually unify these two categories is to think of equivalence checking as ABV on a model containing both designs to be compared, together with a set of assertions specifying that the outputs of the two designs must match. This does not necessarily represent how actual tools are built, but it may help in understanding.
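This combined model is essentially a miter, and can be sketched in a few lines of Python (hypothetical toy designs; a real checker would build the miter in the logic domain and prove it symbolically):

```python
from itertools import product

# Two hypothetical implementations of the same function (XOR).
def design_a(x, y):
    return x != y                          # XOR written one way

def design_b(x, y):
    return (x and not y) or (y and not x)  # XOR written another way

def miter(x, y):
    """Combined model: drive both designs with the same inputs and
    'assert' that their outputs match, as in assertion-based verification."""
    return design_a(x, y) == design_b(x, y)

# Property checking on the miter amounts to proving equivalence.
proven = all(miter(x, y) for x, y in product((False, True), repeat=2))
print(proven)  # True
```

Proving the single "outputs match" assertion on the miter is exactly the property-checking formulation of the equivalence problem.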
