Integrating Results And Coverage From Simulation And Formal
By Tom Anderson, Semiconductor Engineering
Not so long ago, formal verification was considered an exotic technology used only by specialists for specific verification challenges such as cache coherency. As chips grew ceaselessly in size and complexity, the traditional verification method of simulation could not keep pace. The task of generating and running enough tests consumed enormous resources in terms of engineers, simulation licenses, and servers. Yet even unlimited simulation capacity provided no guarantee of functional correctness. Constrained-random simulation is, by its very nature, probabilistic and has little chance of exercising enough of the design to find deep, corner-case bugs.
For these reasons, verification engineers turned increasingly to formal. They value the ability to exercise 100% of design behavior, find obscure bugs, and prove mathematically that no bugs remain. Today, virtually every chip development team makes at least some use of formal techniques. Applications (“apps”) handle many common verification tasks automatically, standardized assertion languages provide portability across tools, formal engines (“solvers”) are more powerful, and debug tools have improved. Methodologies provide guidance and give even new users the confidence to succeed.
Despite all this progress, one important aspect of using formal was not well addressed until recently: integrating its results and coverage metrics with those from simulation. A unified view of verification status is essential for engineering leads and project managers, who need to account for the effort expended on formal verification using measures and methods familiar from the simulation domain. Adopting formal enables more complete verification and reduces overall verification effort, and managers making decisions on tools and methodologies look for proof of these benefits.