Why end-to-end proofs?
The adoption of formal verification has been on the rise, driven by the performance improvements of formal tools and by the increasing demand for exhaustive verification, as required, for example, by safety standards.
Employing end-to-end checks to formally verify a design block facilitates the creation of formal verification IP (VIP) that covers the full design behaviour. With end-to-end checks, formal verification engineers can ensure that all design functionalities are thoroughly verified and confirmed to be bug-free.
From a performance perspective, formal verification is very efficient on small RTL blocks or blocks with low complexity, but it encounters significant challenges when end-to-end checks are applied to complex RTL designs. A common example of such a challenge is state space explosion, which can make formal verification of these designs infeasible. However, formal verification engineers have developed several techniques to mitigate these challenges.
A simple solution is to partition the logic of the design. The formal verification engineer then only needs to create checks for each part. The resulting checks are typically white-box checks that access internal design signals and have a limited scope, which requires the formal verification engineer to orchestrate several white-box checks to verify the full design behaviour. However, the correct orchestration of many formal checks is error-prone and increases the difficulty of achieving gap-free verification. In addition, the use of internal design signals makes white-box checks implementation-specific: any design change may require corresponding modifications of the formal checks.
For this reason, modern formal verification employs sophisticated, yet conservative, techniques to address complexity issues. These techniques typically employ abstraction as a measure to reduce the proof complexity while preserving the option to employ end-to-end checks.
The Idea behind Abstraction Techniques
The complexity of proving properties in formal verification is influenced by several factors, including the Cone of Influence (COI), the number and complexity of logical and arithmetic operations, and the combinational or sequential depth of the design.
The Cone of Influence (COI) is the subset of a design's logic that can influence the signals used in an assertion. Even though the overall design may be large, the COI of a specific assertion is generally smaller than the design. However, even a relatively small COI can still pose a significant challenge for formal verification tools.
These challenges arise because even a reduced COI may still include complex operations and deep combinational or sequential logic. Mitigating these factors through abstraction methods is essential to improve proof convergence. Abstraction techniques achieve this by reducing the state space that the formal tool must consider.
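To make the COI notion concrete, the following is a minimal sketch (not an actual formal tool) that computes the COI of an assertion signal as the transitive fan-in over a hypothetical netlist, represented as a simple signal-to-drivers map; all signal names here are invented for illustration.

```python
# Hypothetical netlist: each signal maps to the signals in its direct fan-in.
netlist = {
    "out":     ["a_and_b", "c"],
    "a_and_b": ["a", "b"],
    "c":       ["d"],
    "a": [], "b": [], "d": [],
    "unrelated": ["e"], "e": [],   # logic outside the assertion's COI
}

def cone_of_influence(netlist, signal):
    """Return the set of signals in the transitive fan-in of `signal` (its COI)."""
    coi, stack = set(), [signal]
    while stack:
        s = stack.pop()
        if s in coi:
            continue
        coi.add(s)
        stack.extend(netlist.get(s, []))
    return coi

print(sorted(cone_of_influence(netlist, "out")))
# → ['a', 'a_and_b', 'b', 'c', 'd', 'out']
```

Note that `unrelated` and `e` never appear in the result: a tool analyzing an assertion on `out` can ignore them entirely, which is exactly the reduction the abstraction techniques below try to push further.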
Two common abstraction techniques in formal verification are signal-cutting and black-boxing.
Signal-Cutting
Signal-cutting in formal verification reduces the COI that needs to be analyzed by removing the logic in the transitive fan-in of the cut signal. This COI reduction can lower the proof complexity. However, when a signal is cut, formal tools treat it as a primary input, which means the tool can assign arbitrary values to it, including values that are unreachable in the non-reduced COI. This can lead to false counterexamples, so verification engineers may have to constrain cut signals. Another drawback is that functional verification of some primary outputs in the transitive fan-out of the cut signal might be infeasible. Because of this, signal-cutting is usually paired with a divide-and-conquer strategy, where assertions target a specific functionality of the design that was isolated by signal-cutting.
The key principles of signal-cutting are:
- Isolated Functionality: Enables a divide-and-conquer approach, where only a specific design functionality is preserved while all other design functions are removed from consideration.
- Focused Checks: Limit the scope of each assertion to a specific design behaviour.
- Improved Convergence: Functionality isolation and focused checks can improve the convergence of the property suite by reducing the complexity of its assertions.
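The false-counterexample effect described above can be sketched with a toy exhaustive check in Python (standing in for a formal tool, under invented signal names): a 2-bit counter `cnt` that, by construction, never reaches 3 drives a flag that must stay 0. Cutting `cnt` frees it to take all four values, producing a spurious failure until a constraint restores the reachable range.

```python
# Toy "assertion": flag (= cnt == 3) must always be 0.
def check_flag(cnt_values):
    """Exhaustively check `flag == 0` for every value the tool may assign to cnt."""
    for cnt in cnt_values:
        flag = int(cnt == 3)
        if flag != 0:
            return ("FAIL", cnt)   # counterexample: the cnt value that fired the flag
    return ("PASS", None)

# Original COI: the counter logic only ever produces 0..2.
print(check_flag([0, 1, 2]))        # → ('PASS', None)

# After cutting cnt, the tool treats it as a free 2-bit input: 0..3.
print(check_flag(range(4)))         # → ('FAIL', 3)   false counterexample

# Constraining the cut signal (assume cnt < 3) removes the unreachable value.
print(check_flag([c for c in range(4) if c < 3]))   # → ('PASS', None)
```

The constraint plays the role of an assumption on the cut signal: it re-encodes just enough of the removed fan-in logic (here, "the counter never reaches 3") for the proof to go through.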
Black-Boxing
Black-boxing is a technique where the verification engineer replaces the logic of a module instance with an abstract model, i.e. formal-friendly auxiliary logic or constraints. When a module instance is black-boxed, the formal tool excludes the entire logic of that instance from the formal analysis. Consequently, the inputs of the black-boxed instance become primary outputs, and its outputs become primary inputs for the rest of the design. Like signal-cutting, black-boxing allows formal tools to assume arbitrary value combinations at the outputs of the black-boxed instance. To prevent false counterexamples, the verification engineer may have to provide an abstract implementation of the black-boxed logic. Typically, the abstract implementation for a black-boxed module instance must be more sophisticated than the abstract description for a cut signal.
The core concepts of black-boxing are:
- Modularization: Enables a divide-and-conquer approach, where specific modules can be verified individually without involving the full instance hierarchy in formal analyses.
- Abstraction: Properties created for a submodule can be reused as an abstract model by instantiating them as assumptions.
- Convergence: Replacing the logic of complex modules with abstract descriptions can significantly enhance the convergence of the entire property suite.
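As a toy illustration of the abstraction concept (again a Python sketch standing in for a formal tool, with invented names), suppose a complex multiplier is black-boxed and replaced by an assumption that captures only the property the downstream check needs: the product's LSB equals the AND of the operand LSBs. The "proof" then ranges over every output value the abstract model permits, rather than over the multiplier's real logic.

```python
from itertools import product

# Downstream property to prove at the top level:
# if both operands are odd, the product must be odd.
def downstream_prop(a, b, out):
    return (out & 1) == 1 if (a & 1 and b & 1) else True

# Abstract model of the black-boxed multiplier: its output is a free
# 4-bit value, restricted only by the assumed LSB property.
def abstract_outputs(a, b):
    return [out for out in range(16) if (out & 1) == (a & 1) & (b & 1)]

def prove_with_blackbox():
    """Exhaustively check the downstream property over all abstract behaviours."""
    for a, b in product(range(4), repeat=2):
        for out in abstract_outputs(a, b):
            if not downstream_prop(a, b, out):
                return False       # counterexample against the abstract model
    return True

print(prove_with_blackbox())   # → True
```

Because the proof holds for every behaviour the assumption allows, it also holds for the real multiplier, provided the assumption itself has been verified against the multiplier, e.g. by reusing its submodule properties as assumptions, as noted above.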
Conclusion
Signal-cutting and black-boxing are formal verification techniques that reduce proof complexity by abstracting parts of a system without altering the original design files. They reduce the cone of influence, thus reducing the complexity of a formal proof and enabling the formal verification of complex designs. Because formal tools apply both methods without direct file modifications, they maintain the integrity of the design. However, they require the careful creation of constraints or auxiliary logic. This is crucial to ensure that the abstracted parts of the design behave sufficiently close to their non-abstracted counterparts to prevent false counterexamples.