Top 5 mistakes in formal verification adoption – how to build a system that sticks

Abstract

Formal verification is widely used in digital design organizations, yet many teams struggle to make it a sustained and reliable part of the verification flow. In many cases, the limiting factor is not tool capability but the absence of a supporting formal verification adoption methodology and organizational system that defines goals, ownership, workflows, and feedback mechanisms.

This blog examines five common structural mistakes that limit the effectiveness of formal verification adoption in real projects. These include an overly broad initial scope, the absence of measurable goals, underestimating the learning curve and the required roles, treating formal verification as a parallel activity rather than as part of the main verification flow, and failing to convert early successes into a reusable methodology.

This blog presents system-oriented alternatives that help teams integrate formal verification into the primary verification process and establish repeatable practices across projects. When formal verification is adopted with defined goals, clear ownership, and integration into verification planning and closure processes, it becomes a reliable contributor to design quality and verification confidence.

Introduction

When a team or organization invests in formal verification tools, early pilots often deliver mixed results. Over time, formal usage may plateau or decline as internal champions move on and verification efforts revert to simulation-heavy flows. In many such cases, the limitation is not the formal technology itself, but how it was introduced and sustained within the verification environment. Formal verification differs fundamentally from simulation-driven verification because it reasons exhaustively over reachable state space and can expose subtle design errors at very early development stages [1]. However, this technical strength alone does not ensure long-term value. Formal must be embedded within the verification organization with defined goals, roles, workflows, and integration into decision-making processes.

This blog examines the structural mistakes organizations commonly make during formal verification adoption and explains how a system-level perspective can make formal verification a reliable, repeatable contributor to verification confidence. The central message is straightforward:
Formal sticks when it is treated as a verification system rather than as a tool trial.

Formal as a system, not just a tool

Treating formal verification as a system means embedding it within the verification organization with clear ownership, measurable objectives, and integration into existing flows rather than applying it ad hoc to isolated problems. Without this surrounding structure, formal usage becomes inconsistent, results are not systematically tracked, and learning does not transfer across projects.

Figure 1: Formal property checking as a closed verification loop [1]

Figure 1 illustrates the relationship between RTL design, assumptions and assertions, and verification outcomes such as proof, counterexample, and debug feedback. This closed-loop structure connects specification intent, constraint definition, property evaluation, and engineering decision-making within a continuous reasoning cycle [1]. Effective formal verification adoption, therefore, depends not only on solver capability but also on methodology discipline, review practice, and organizational integration.

Formal completeness further depends on traceability from specification requirements to proven properties. Verification confidence can only be established when each requirement is explicitly justified by one or more assertions [1].
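
As an illustration of how this loop maps to concrete properties, the sketch below expresses a hypothetical requirement ("every request is granted within four cycles") as an assertion, with legal environment behaviour captured as an assumption. The interface, signal names, and timing bound are assumptions made for illustration only; a full proof discharges the requirement, while a counterexample feeds the debug step of the loop.

```systemverilog
// Hypothetical request/grant interface; names and timing are illustrative.
module req_gnt_props (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);

  // Assumption: once raised, a request stays asserted until it is granted
  // (legal environment behaviour, expressed as a constraint).
  asm_req_stable: assume property (@(posedge clk) disable iff (!rst_n)
    req && !gnt |=> req);

  // Assertion: requirement "every request is granted within 4 cycles".
  // A proof establishes this for all reachable states; a counterexample
  // is the debug trace fed back to the designer.
  ast_req_granted: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt);

endmodule
```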

The key difference between formal as a tool and formal as a system

Table 1: Formal verification as a tool vs as a system

  Aspect     | Formal as a tool                     | Formal as a system
  Scope      | Ad hoc, applied to isolated problems | Defined targets within the verification plan
  Goals      | Implicit; value expected to emerge   | Measurable objectives tied to specific risks
  Ownership  | Individual champions                 | Assigned roles for methodology and application
  Results    | Not systematically tracked           | Reviewed alongside simulation for sign-off
  Learning   | Stays with individuals and projects  | Captured in reusable methodology assets


Mistake #1: Starting too big and too vague

A common failure mode in formal verification adoption is an overly broad and poorly defined initial scope. Instead of targeting specific high-risk design areas such as complex control logic, protocol compliance blocks, or corner-case state interactions, teams often attempt to apply formal across multiple blocks or projects simultaneously. Without clearly defined verification targets, property scopes, or success criteria, early pilots expand in several directions at once. As a result, it becomes difficult to prioritize effort, measure verification progress, or demonstrate clear engineering value from the formal activity.

Why it happens: This approach often reflects organizational pressure to demonstrate early progress or return on investment. There is an assumption that formal can be applied broadly once tools are available, and that its benefits will become apparent through usage. At the same time, the methodological differences between formal and simulation-driven verification are underestimated, particularly the need for careful scoping, abstraction, and setup.

Consequences: As the scope expands too quickly, limited formal expertise is spread thin. Engineers move between unrelated targets without a consistent approach, and pilot results lack clear success criteria. Some local progress may be made, but its impact is difficult to evaluate or reproduce. Over time, confidence in formal declines, and the effort is perceived as inefficient or misaligned with project priorities.

System alternative: A more effective approach is to begin with a narrowly scoped, high-risk or high-impact verification target. Typical candidates include blocks with complex control logic, protocol compliance requirements, or corner-case behaviour that is difficult to exhaustively cover using simulation alone. The scope should be explicitly defined in terms of properties, constraints, and expected verification outcomes. Clear success criteria should be established for the initial adoption phase, including the set of blocks to be targeted, the properties expected to be proven, and how formal results will contribute to verification closure. This first effort should also be used to define workflows, abstraction strategies, and reusable verification infrastructure. Starting with a focused target allows teams to demonstrate measurable value while building internal expertise, creating a foundation that can be scaled systematically rather than expanded prematurely.
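
The sketch below shows what such an explicitly scoped target might look like for a hypothetical FIFO controller: one environment constraint, two safety properties, and the expected outcome (both properties proven) together define the complete initial scope. Signal names, the occupancy width, and the depth parameter are illustrative assumptions rather than a prescribed setup.

```systemverilog
// Hypothetical FIFO controller scope: one environment constraint and two
// safety properties define the complete initial formal target.
module fifo_ctrl_props #(parameter int DEPTH = 8) (
  input logic                   clk,
  input logic                   rst_n,
  input logic                   push,
  input logic                   pop,
  input logic [$clog2(DEPTH):0] count   // occupancy tracked by the design
);

  // Constraint: the environment never pops an empty FIFO.
  asm_no_pop_empty: assume property (@(posedge clk) disable iff (!rst_n)
    (count == 0) |-> !pop);

  // Expected outcome 1: occupancy never exceeds DEPTH (no overflow).
  ast_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
    count <= DEPTH);

  // Expected outcome 2: a push without a pop, when not full,
  // increments the occupancy by exactly one.
  ast_count_incr: assert property (@(posedge clk) disable iff (!rst_n)
    (push && !pop && count < DEPTH) |=> (count == $past(count) + 1));

endmodule
```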

Mistake #2: No explicit goals or metrics

Formal verification is often introduced with reasonable technical intent, yet many teams fail to define what successful adoption actually looks like. Properties may be written, and proofs or counterexamples generated, but there is often no agreed definition of success that links this work to verification progress, risk reduction, or sign-off readiness. Without a shared understanding of what outcomes formal verification is expected to deliver, activity continues without clear, measurable objectives that demonstrate engineering impact.

Why it happens: In many organizations, formal verification adoption is justified at a high level as a general improvement in verification quality rather than as a response to specific risks. There is an expectation that value will become evident through usage, without the need to define targets in advance. In addition, some of the benefits of formal are perceived as difficult to quantify, which leads teams to avoid committing to explicit goals or measurable outcomes.

Consequences: Without defined goals or metrics, it becomes difficult to answer basic questions such as whether formal is working or whether additional investment is justified. Progress cannot be consistently evaluated across projects, and discussions of effectiveness rely on anecdote rather than evidence. Under schedule or budget pressure, formal is easy to deprioritize because contributions are not demonstrated in decision-relevant terms [1].

System alternative: A more robust approach is to tie formal verification adoption directly to specific business and engineering risks. Goals should reflect outcomes that matter to the organization, such as reducing late-found bugs in a critical block, shifting the discovery of particular bug classes earlier in the schedule, or lowering the number of simulation-only escapes in defined areas.

A small, focused set of metrics should be established for the initial adoption phase and tracked consistently, even if the measurements are imperfect. Typical examples include:

  • The number of bugs discovered by formal before simulation reaches the same scenarios.
  • Proof coverage on critical properties, demonstrating that key safety or protocol conditions are mathematically guaranteed rather than only tested.
  • Reduction in simulation-only corner cases, where formal proves behaviours that would otherwise require large regression effort.
  • Coverage closure improvement, where formal analysis identifies unreachable states or redundant test scenarios.
  • Earlier discovery of protocol or control logic violations, reducing late-stage debugging risk.

These metrics should guide how the formal system evolves, not simply determine whether tool usage continues. Coverage and completeness questions also benefit from explicit tracking approaches, since “proof” alone does not guarantee specification completeness [2].
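
As one way to make the coverage-related metrics above concrete, the sketch below uses SystemVerilog cover properties on a hypothetical valid/ready interface: a cover point that formal analysis shows to be unreachable indicates dead logic or an over-constrained environment, while covered points confirm that the setup exercises the intended scenarios. All names are illustrative assumptions.

```systemverilog
// Illustrative cover properties for a hypothetical bus interface.
module bus_cover_props (
  input logic clk,
  input logic rst_n,
  input logic valid,
  input logic ready,
  input logic error
);

  // Reachability check: can a transfer complete at all?
  cov_transfer: cover property (@(posedge clk) disable iff (!rst_n)
    valid && ready);

  // Reachability check: can the error response ever occur?
  // "Unreachable" here may indicate dead logic or an over-constrained setup.
  cov_error: cover property (@(posedge clk) disable iff (!rst_n)
    valid && ready && error);

  // Scenario-depth check: back-to-back transfers on consecutive cycles.
  cov_back_to_back: cover property (@(posedge clk) disable iff (!rst_n)
    (valid && ready) ##1 (valid && ready));

endmodule
```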

In practice, formal analysis produces three outcome classes: full proof, counterexample, or bounded proof, each associated with an exploration depth. Bounded proofs can still support verification sign-off when the explored depth is sufficient to cover all relevant design behaviour [3].

Figure 2: Comparison of simulation and formal state-space exploration [3]


Figure 2 illustrates that simulation typically explores a single deep execution path from reset, whereas formal verification performs a breadth-first exploration of reachable states up to a bounded proof depth. This distinction explains why bounded formal proofs can still provide meaningful verification confidence when the explored depth covers all relevant functional behavior. Completeness reasoning in formal verification must consider proof depth, state-space coverage, and the relationship between properties and design intent rather than relying solely on traditional simulation metrics [3].

Mistake #3: Underestimating the learning curve and required roles

Formal verification tools increasingly provide abstractions and application-focused features that simplify specific verification tasks. Examples include connectivity checking, X-propagation analysis, deadlock detection, clock-domain crossing verification, and protocol compliance validation, which are often delivered as ready-to-use “formal apps” within commercial verification platforms. These capabilities allow teams to apply formal techniques to well-defined problems without having to build every proof environment from scratch.
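
The property below sketches the kind of check a connectivity app effectively automates: when a routing register selects a given source, the destination must follow it. The module, signal names, and the one-cycle routing latency are assumptions made for illustration; in practice such properties are typically generated from a connectivity specification rather than written by hand.

```systemverilog
// Hypothetical connectivity check: when the select register routes
// source A, the top-level output must follow input_a.
module conn_check_props (
  input logic       clk,
  input logic       rst_n,
  input logic [1:0] route_sel,
  input logic [7:0] input_a,
  input logic [7:0] top_out
);

  // A one-cycle routing delay is assumed purely for illustration;
  // real paths may be combinational or have a different latency.
  ast_route_a: assert property (@(posedge clk) disable iff (!rst_n)
    (route_sel == 2'b00) |=> (top_out == $past(input_a)));

endmodule
```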

However, effective use of formal verification still requires a set of skills and roles that differ from those used in simulation-driven verification. These include reasoning in terms of properties rather than test cases, managing constraints carefully, and interpreting proof outcomes and counterexamples. These skills relate closely to assertion-based thinking and formalized behavior specification practices [4].

Why it happens: Organizations often expect formal tools to integrate seamlessly into existing verification practices, particularly when predefined formal applications such as connectivity analysis, protocol checking, or X-propagation verification are available. Because these packaged applications appear straightforward to deploy, teams may assume that formal verification can be adopted without significant changes to verification methodology.

In practice, however, formal verification introduces a different way of reasoning about design correctness. At the same time, roles within the formal effort are not always clearly defined, with an implicit expectation that individual engineers can both apply the formal approach and develop the supporting methodology without dedicated time or ownership.

Consequences: When the learning curve and role separation are not planned for, engineers gain only limited exposure to formal techniques and do not develop sustained confidence or proficiency. Formal specialists are repeatedly drawn into ad-hoc debugging, constraint tuning, and property refinement across multiple projects, leaving little time to improve reusable verification workflows or mentor other team members.

Over time, this leads to inconsistent use of formal methods, limited knowledge transfer across projects, and reduced overall effectiveness of the formal verification effort.

System alternative: A more effective approach is to explicitly plan for skill development and role definition during formal verification adoption. This includes allocating time for training, coaching, and structured pairing between formal specialists and design or verification engineers within a realistic timeline. Clear ownership should be assigned for formal methodology development, property libraries, and reusable constraint models, separate from the day-to-day application of established flows. Building and maintaining the formal system should be recognized as essential engineering work, alongside feature delivery and project execution.

Over time, this structured approach enables teams to move from isolated formal usage toward a repeatable organizational capability.

Mistake #4: Treating formal verification adoption as a side project, not as part of the main flow

A common pattern in formal verification adoption is that formal activity runs in parallel to the primary verification flow rather than being integrated into it. Formal results are produced, but they are not consistently reflected in sign-off criteria, regression dashboards, or routine design and verification reviews. In a mature verification environment, formal verification should contribute to the same planning and closure processes as simulation. This includes defining where formal is applied within the verification plan, ensuring that property results contribute to verification closure, and integrating formal outcomes into the same tracking and reporting mechanisms used for simulation coverage and regression results.

Why it happens: Formal verification adoption often begins as a pilot or exploratory effort, separate from established simulation-driven workflows. Teams experiment with formal techniques to demonstrate value, but the integration points with existing processes are not clearly defined.

As a result, it remains unclear how formal results should influence readiness assessments, coverage discussions, or design sign-off decisions. Without explicit ownership for integration, formal activity continues alongside the main flow rather than becoming part of it.

Consequences: When formal is treated as optional, it is vulnerable to schedule pressure. As timelines tighten, formal verification activity is frequently deprioritized because it is not tied to mandatory deliverables or decision gates. Even when formal identifies meaningful design issues or proves important properties, its impact is limited if those results are not systematically incorporated into verification reviews, closure discussions, or project readiness assessments. Over time, this reinforces the perception that formal verification is useful but nonessential.

System alternative: In a system-oriented approach, formal is treated as a first-class element of the verification flow. Its role is defined by specific blocks, property classes, and project phases where it is expected to contribute. Formal results are reviewed alongside simulation outcomes and used to inform readiness and risk decisions. For example, proof status for critical properties, counterexamples discovered through formal analysis, and bounded proof coverage can all contribute to discussions of verification closure. Where practical, formal data is integrated into existing tooling, dashboards, and reporting mechanisms such as regression dashboards, verification tracking systems, and design review artifacts so that it becomes part of routine verification practice rather than a parallel activity.

Figure 3: Formal applications across the verification lifecycle [1]


Figure 3 illustrates how formal applications are used across bring-up, development, and bounded or full-proof phases, including property checking, coverage analysis, unreachable-state analysis, CDC checks, and connectivity validation.

During early bring-up, property checking and automatic property extraction help validate implementation intent. During development, formal analysis can detect unreachable states, protocol violations, or connectivity issues that are difficult to expose through simulation alone. In later phases, bounded or full proofs contribute to reasoning about design completeness and verification closure. This lifecycle perspective highlights that formal verification is most effective when it is planned as part of the primary verification flow, with defined entry points, review checkpoints, and closure criteria that align with simulation-based verification activities.

Mistake #5: Not turning early wins into a repeatable system

Many organizations achieve initial success with formal verification on one or two projects but fail to convert those successes into a repeatable approach. Effective techniques, scoping decisions, and workflows remain tied to specific individuals or projects rather than being captured within an organizational system [1].

Why it happens: Early formal efforts are often treated as isolated problem-solving exercises rather than as opportunities to define standard practice. Once a project is complete, attention shifts to the next delivery, and there is limited time or ownership for documenting what worked. In many teams, knowledge remains embedded in individual engineers’ constraint models, property implementations, and debugging strategies rather than being captured in reusable methodology assets. As roles change and projects evolve, the implicit knowledge that underlies successful formal use is lost.

Consequences: Without institutionalization, each new formal effort begins largely from scratch. Teams repeat the same early exploration and setup work, and previous lessons are relearned rather than reused. Engineers often need to recreate property templates, constraint environments, and proof strategies that were already developed in earlier projects. Over time, this prevents formal verification adoption from scaling beyond individual successes and reinforces the perception that formal depends on specific experts rather than on a shared, reliable system.

System alternative: A system-oriented approach captures early successes in forms that others can apply consistently. This approach includes developing internal playbooks, reusable property libraries, constraint templates, and methodology guidelines that define suitable block types, property and constraint-scoping practices, and standard workflows. Reusable assets such as property templates for common interfaces, reset behavior checks, protocol compliance properties, or connectivity validation can significantly reduce the effort required to apply formal verification in new projects.
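
A minimal sketch of such a reusable asset is shown below: a parameterized handshake checker that can be bound to any interface following a valid/ready convention, so new projects inherit the properties rather than rewriting them. Parameter values, signal names, and the stall bound are illustrative assumptions.

```systemverilog
// Sketch of a reusable handshake checker intended as a property-library
// template; parameter and signal names are illustrative.
module handshake_checker #(
  parameter int MAX_WAIT = 16          // maximum allowed stall, project-specific
) (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);

  // Once valid is asserted, it must hold until the transfer completes.
  ast_valid_stable: assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid);

  // Data must not change while a transfer is pending.
  ast_data_stable: assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> $stable(data));

  // Every transfer must complete within MAX_WAIT cycles.
  ast_no_starvation: assert property (@(posedge clk) disable iff (!rst_n)
    valid |-> ##[0:MAX_WAIT] ready);

endmodule

// Example reuse on a new project: bind the same checker to any interface
// with a matching handshake (block and signal names hypothetical).
// bind my_block handshake_checker #(.MAX_WAIT(8))
//   u_hs_chk (.clk(clk), .rst_n(rst_n),
//             .valid(tx_valid), .ready(tx_ready), .data(tx_data));
```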

Simple internal documentation should enable engineers to apply established approaches without requiring continuous expert involvement. Regular feedback loops should be established to review each project, update guidance, and refine where formal verification is most effective and where it is not.

Conclusion: formal verification adoption requires a system

The central challenge in formal verification adoption is not the availability or sophistication of tools, but whether organizations build the surrounding system that enables formal verification to deliver sustained value. This system includes clear goals, defined roles and ownership, integration into verification workflows, and feedback mechanisms that support learning and improvement across projects.

The five mistakes outlined above describe common structural gaps that limit the effectiveness of formal adoption. Addressing them does not require sweeping change. Incremental steps that move formal verification from an isolated activity to a structured, repeatable engineering practice can materially improve outcomes. When formal verification is adopted not merely as a tool but as part of an engineered verification system, it evolves from an occasional debugging aid into a dependable mechanism for delivering design assurance and verification confidence at scale.

To discuss how to make formal verification adoption more reliable and scalable, connect with LUBIS EDA.

References

[1] J. Bromley and J. Sprott, “Formal Verification in the Real World,” DVCon Proceedings, 2018. Available: https://dvcon-proceedings.org/wp-content/uploads/formal-verification-in-the-real-world.pdf

[2] H. Chockler, O. Kupferman, and M. Y. Vardi, “Coverage metrics for formal verification,” International Journal on Software Tools for Technology Transfer, Apr. 2006. Available: https://research.ibm.com/publications/coverage-metrics-for-formal-verification

[3] N. Kim, J. Park, H. Singh, and V. Singhal, “Sign-off with Bounded Formal Verification Proofs,” Samsung Electronics and Oski Technology, 2018. Available: https://dvcon-proceedings.org/wp-content/uploads/sign-off-with-bounded-formal-verification-proofs.pdf

[4] H. Witharana, Y. Lyu, S. Charles, and P. Mishra, “A Survey on Assertion-Based Hardware Verification,” ACM Computing Surveys, vol. 54, no. 11s, 2022, Art. no. 225. Available: https://dblp.org/rec/journals/csur/WitharanaLCM22.html

 

Training Topics

  1. Abstraction vectors (time, functionality)
  2. AIP for protocols
  3. AIP orchestration
  4. BMC & IPG, invariants
  5. Codestyle
  6. Completeness
  7. Liveness property, safety property
  8. Non-determinism (abstraction technique)
  9. Response generation (abstraction technique)
  10. Scoreboard (abstraction technique)
  11. Signal cutting, blackboxing
  12. State space explosion and mitigation techniques
  13. SVA fundamentals
  14. Whitebox checking, blackbox checking, greybox checking
  15. Witness, vacuity, reachability

Become a leader in formal verification