My last post must have struck a nerve. In that post I asked whether fundamental innovation has stalled in formal verification, and I speculated about which area the next technological leap will come from. It received some quite interesting comments. It also brought a counterpoint by Brian Bailey, partially motivated by his business partnership with Jasper –Brian sits on Jasper’s Technical Advisory Board. Last Friday I had a lively discussion with Rajeev Ranjan (CTO) and Holly Stump (VP Marketing) of Jasper. I am now taking the time to discuss this feedback.

My claim is that formal verification has reached a plateau, from both the core-technology and the business points of view.

Yet that does not mean there is no progress! There is a world between creating a brand-new technology and turning it into an actual working product. This is what Jasper, Real Intent, Atrenta, OneSpin, and many others are doing, as they creatively use these core technologies to propose new applications.

From the technological point of view, we have seen steady improvements in many areas –equivalence checking (EC), sequential verification, abstraction and refinement, etc. The scope of application of formal verification techniques has increased dramatically over the past few years. It has of course benefited from more powerful hardware –faster CPUs, larger memories, multi-threading, distributed systems, FPGAs. It has also benefited from the skilled engineering of the core technologies, to the credit of the many private companies in the field.

But no recent verification tool has been enabled by a genuinely new core technology. There is nothing wrong with this; it is partly the consequence of a mature industry, where most of the effort goes into improving the customer experience and helping customers integrate verification into the design and synthesis flow.

When Randy Bryant published his BDD paper in 1986, he revolutionized the field of formal verification with a technology that could address problems previously out of reach. When EC switched from BDDs to SAT solvers nearly 10 years ago, it became possible to verify multi-million-gate designs against their RTL descriptions. Bounded model checking became a practical approach to sequential property verification. These were disruptive technologies. Where is the next leap?
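
To make “bounded” concrete, here is a minimal sketch of the bounded model checking idea in Python, on a made-up toy design (the saturating counter, the property, and all names are invented for illustration): unroll the design a fixed number of steps and search that bounded horizon for an input sequence that violates a safety property. Real BMC engines encode the unrolling as a SAT instance instead of enumerating it, but the principle is the same.

    # Illustrative only: a toy "design" (a saturating 3-bit counter) and a
    # bounded search for an input sequence that violates a safety property.
    # Real BMC encodes this unrolling as a SAT instance rather than enumerating it.
    from itertools import product

    INIT_STATE = 0

    def step(state, inp):
        # transition function: count up by one whenever inp is 1, saturate at 7
        return min(state + inp, 7)

    def safe(state):
        # safety property under check: the counter never reaches 7
        return state != 7

    def bmc(bound):
        """Return an input sequence of length <= bound that violates `safe`, or None."""
        for k in range(1, bound + 1):
            for inputs in product([0, 1], repeat=k):
                state = INIT_STATE
                for inp in inputs:
                    state = step(state, inp)
                    if not safe(state):
                        return inputs      # a finite counterexample trace
        return None

    print(bmc(10))   # (1, 1, 1, 1, 1, 1, 1): seven increments reach the bad state

The appeal and the limitation are both visible here: a violation found within the bound comes with a short, concrete input trace, but nothing is proved beyond the bound.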

From the business point of view, one has to admit that the formal verification market has been pretty stable. Most of the ever-increasing design cost goes to verification, but that does not translate into a fast-growing formal verification market. Instead, the increasingly daunting verification task benefits simulators more than formal tools. Yes, customers are slow to move to a different verification flow. Yes, the formal verification industry, like the rest of EDA, struggles to find a business model that would bring back much-needed growth. But which application or technology will bring enough value to tip the simulation/formal ratio in favor of the latter?

Regarding the simulation vs. formal debate, I would recommend reading Chris Wilson’s comment. He agrees with me that formal verification has had a relatively low return on investment. He then argues that simulators will remain the main verification solution, with formal verification technologies under the hood to speed up simulation, to increase coverage, and to help debugging. He may well have a point here.

One aspect that everybody agrees on is that debugging is still a bottleneck in the verification industry. Why does a design fail its functional requirement? Having a counterexample (e.g., a sequence of inputs that disproves a safety property) is often not sufficient: the complexity and length of the counterexample can make pinpointing the design error very difficult. A failing liveness property cannot even be revealed by a finite sequence of inputs. Similarly, there is no tool that provides consistent debugging information explaining why a statement of an RTL description is unreachable. Formal verification techniques can certainly help here, and there are a few available products aiming at the problem.
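
To illustrate the difference in the shape of the evidence, here is another small, invented Python example (a toy arbiter with hypothetical names, deliberately buggy). A safety violation is a finite trace, as in the sketch above; a counterexample to a liveness property such as “a pending request is eventually granted” is an infinite behaviour, which tools report finitely as a lasso: a prefix leading into a cycle on which the grant never occurs.

    # Illustrative only: a toy arbiter that registers a request (IDLE -> WAIT)
    # but, by design flaw, never grants it.  The liveness counterexample is a
    # lasso: a prefix reaching a cycle on which no grant ever happens.
    from itertools import product

    def step(state, req):
        # IDLE -> WAIT when a request arrives; WAIT never leaves (the bug)
        return "WAIT" if (state == "WAIT" or req) else "IDLE"

    def granted(state):
        return False                  # the flawed design never grants

    def find_lasso(init="IDLE", max_len=8):
        """Look for a reachable cycle on which a request stays pending forever."""
        for k in range(1, max_len + 1):
            for inputs in product((0, 1), repeat=k):
                trace = [init]
                for req in inputs:
                    trace.append(step(trace[-1], req))
                last = trace[-1]
                # the trace closes a cycle, a request is pending, no grant occurred
                if last == "WAIT" and last in trace[:-1] and not any(map(granted, trace)):
                    i = trace.index(last)
                    return trace[:i], trace[i:]     # prefix, loop
        return None

    prefix, loop = find_lasso()
    print("prefix:", prefix, "loop:", loop)   # prefix: ['IDLE'] loop: ['WAIT', 'WAIT']

Even on this toy, the evidence is a loop rather than a trace ending in a bad state, and explaining to a designer why the machine is stuck in that loop is exactly the debugging problem mentioned above.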

In conclusion, formal verification as an industry has matured, but it is still looking for the market share it deserves. I think there are plenty of opportunities to grow the market. Success may come from formal as an enabler of better, faster, higher-coverage simulation. I rather believe it will come when formal verification allows software and hardware to be verified and debugged in a common, continuous design flow. And this requires some major technical innovation.


4 Comments on Formal verification stalling, take two

  1. […] Formal verification stalling, take two […]

  2. Armin says:

    Olivier,
    Thanks for your articles. I’m trying to make sense of the point and the counterpoint. I agree that formal verification can coexist with dynamic simulation at this point –to speed up simulation, to increase coverage, and to help debugging, as you say in the article. Both certainly have their own share of the verification task, and progress comes from many sources. Here is a problem statement and a solution; please add to it to develop the discussion.

    Problem: To address a liveness assertion, which is of infinite duration, all periodic cycles reachable from the starting states (across all initialization phases) need to be analyzed. To analyze periodic cycles, the formal engine must analyze up to the length of the longest cycle plus the initialization phase, and the longest path can be infinite.

    Solution: One can bound the problem based on knowledge of the design, but sometimes it is not possible to convert a liveness assertion into a safety assertion. To address this issue, bounded engines build the state space incrementally and verify assertions as they go, so as to avoid state-space explosion. Unfortunately, bounded engines cannot solve liveness assertions, because they blow up as well. Dynamic hybrid engines use simulation to get to an interesting state (a control transition, a point where an exception occurs, or a corner case identified from design knowledge), and then a bounded model checker can find bugs from there; this is not a proof but a bug-hunting technique. To reduce the state space further, one needs knowledge of the design, and the tool needs to support engine optimization, nondeterminism (freeing and abstraction), pruning, and partitioning.

    — Armin

  3. Hi Armin,

    If your state space is finite, the state sequences will eventually be periodic, and therefore your statement is correct. As long as we have an upper bound on the period, or on the longest sequence required for an induction proof, we can cast the liveness property into a safety property.

    There is a large body of work on reachability analysis, from “multiple stepping” (using an augmented next-state function to explore the reachable states in fewer iterations) to abstraction techniques. It works to some extent, even though the main problem is again debugging. Usually the period is extremely large, which makes these methods not very practical. This is why people rely more on (incomplete) hybrid methods to explore the state space and look for “exception” states where the FSM can be trapped in a loop that would invalidate the liveness property. Regardless of the method though, debugging still remains a problem –long input sequences are extremely difficult to use to figure out the flaw in a control system. A rough sketch of this hybrid approach is given below.
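
    Here is that sketch, in Python, of the hybrid bug-hunting idea (purely illustrative: the toy design, the trap states, and all names are invented). Random simulation drives the design deep into its state space, then a small exhaustive bounded search from the reached state looks for a cycle on which no progress is ever made –a loop that would invalidate the liveness property. It produces bugs, not proofs.

        # Illustrative only: random simulation to reach a deep state, then an
        # exhaustive bounded search from there for a cycle with no progress.
        import random
        from itertools import product

        def step(state, inp):
            # invented 15-state design: it normally cycles through 0..12, but
            # input 3 taken in state 12 jumps into a trap loop {13, 14}
            if state in (13, 14):
                return 27 - state            # 13 <-> 14: the trap
            if state == 12 and inp == 3:
                return 13
            return (state + 1) % 13

        def progress(state):
            return state == 0                # liveness: state 0 recurs forever

        def simulate(state, steps, rng):
            for _ in range(steps):
                state = step(state, rng.randrange(4))
            return state

        def bounded_loop_search(start, bound=4):
            """From `start`, look for a reachable cycle containing no progress state."""
            for k in range(1, bound + 1):
                for inputs in product(range(4), repeat=k):
                    trace = [start]
                    for inp in inputs:
                        trace.append(step(trace[-1], inp))
                    if trace[-1] in trace[:-1] and not any(map(progress, trace)):
                        return trace         # a lasso-shaped liveness violation
            return None

        rng = random.Random(0)
        for trial in range(200):             # many short random runs, then a local search
            deep_state = simulate(0, steps=30, rng=rng)
            lasso = bounded_loop_search(deep_state)
            if lasso:
                print("no-progress loop reached from state", deep_state, ":", lasso)
                break

    Even here, knowing where the interesting states are is what makes the heuristic effective, which brings us back to design knowledge –and to the debugging problem once the loop is found.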

  4. Hi Olivier,

    I posted some more comments on this in my blog.
    http://bugsareeasy.wordpress.c.....echnology/

    –chris
