An FPGA company makes its revenue on hardware: it sells devices, and gives away its design tools (synthesis, place-and-route). Yet the EDA industry has had success with its own (non-free) FPGA synthesis solutions, and for good reason: in its day, Synplicity’s Synplify was the best FPGA synthesis tool out there. Synopsys acquired Synplicity two years ago, but more to get a comprehensive emulation solution than to push FPGA synthesis. Mentor Graphics is still invested in FPGA synthesis with Precision, and competes head to head with Xilinx’s XST and Altera’s Quartus.

In a world where FPGA design software is expected to be free (or very cheap compared to its ASIC counterpart), is there still a market for EDA companies to sell their FPGA solutions? Synplicity built its success on FPGA synthesis, then stopped growing. Is that the fate of EDA for FPGA?

There are several forces at play here: device complexity, software complexity, and know-how.

The complexity of FPGAs is starting to rival that of ASICs. The largest FPGA devices contain hundreds of thousands of LUTs and registers, thousands of DSP blocks, and are equivalent to designs of one million gates or more. The increasing device size demands faster synthesis and larger capacity. It also strains verification, because simulation costs rise accordingly. The days when a designer could complete her FPGA project with a simple write-RTL/synthesize/simulate/fix iterative flow are gone.

FPGA companies differentiate with their devices’ speed, capacity, and power consumption. But beyond the raw hardware features, FPGA design software has become a key to success. Altera learned the lesson the hard way 10 years ago when it released software that was not ready: it quickly lost its top customers to Xilinx, just when it could have become the undisputed #1 FPGA vendor. Some FPGA startups in the past could not get off the ground because they failed to deliver good synthesis for their device. Closer to us, we have heard about Tabula’s chronic problems bringing up its synthesis before it finally announced its device earlier this year. And Abound Logic’s huge netlists have stretched the capacity of today’s FPGA synthesis tools.

Altera now has a software powerhouse, and is meticulous about its software design and testing. Xilinx is currently going through a major overhaul of its software to catch up with its main competitor. There is no question that software is taken very seriously by the two vendors: each has a couple hundred engineers dedicated to providing customers with a full design tool suite.

So does EDA have any future in FPGA synthesis? There will always be FPGA startups looking for an OEM deal with Synopsys or Mentor, but this is not enough. The EDA industry must showcase a comprehensive FPGA development environment that covers design, synthesis, and verification:

  • Verification is becoming ever more costly for FPGAs, as it already is for ASICs. Formal verification for FPGA is still embryonic: FPGA synthesis uses retiming and FSM re-encoding, which make formal equivalence checking quite difficult.
  • Synthesis of complex systems with a large IP spectrum is an area of expertise that EDA must leverage. EDA could also provide a much-needed improvement in power management.
  • As for design, EDA must seize on the FPGA community’s ability to adopt new methodologies much faster than the ASIC community. ESL, SystemC, and C/C++ as hardware description languages are the right direction.
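To make the last point concrete, here is a minimal sketch of the kind of plain C++ an HLS (high-level synthesis) flow consumes; the function and names are illustrative, not taken from any particular tool. A 4-tap FIR filter is written as an ordinary function, and an HLS compiler would map it onto the FPGA fabric:

```cpp
#include <array>
#include <cstdint>

// A 4-tap FIR filter written as plain C++. An HLS tool would unroll the
// loops into parallel multipliers and pipeline the accumulation; in
// software it simply computes one output sample per call.
constexpr int TAPS = 4;

int32_t fir(int32_t sample, std::array<int32_t, TAPS>& delay,
            const std::array<int32_t, TAPS>& coeff) {
    // Shift the delay line (would become a register chain in hardware).
    for (int i = TAPS - 1; i > 0; --i) delay[i] = delay[i - 1];
    delay[0] = sample;

    // Multiply-accumulate (would map to DSP blocks in an FPGA).
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) acc += coeff[i] * delay[i];
    return acc;
}
```

Fed an impulse, the filter emits its coefficients one per call, an easy sanity check in simulation before synthesis.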

If EDA wants to compete with the few hundred software engineers of Xilinx and Altera, it needs to deliver a best-in-class, innovative FPGA design environment. Otherwise it will end up as a no-growth by-product of ASIC synthesis.


11 Comments on Is FPGA a sustainable market for EDA?

  1. Gary Dare says:

    Outside of synthesis and combining with software development tools, the FPGA companies see no added return from going into the EDA space, since their revenue comes from selling silicon. Creating and maintaining a good foundation is good enough. For EDA mainstream or start-up companies, the key is to identify the value-added, as paid synthesis tools have done. I came across some older material online for the Xilinx ESL Initiative; I’m not sure if that’s still alive, but it listed add-on tools from various start-ups. An interesting one was by BinaChip, the second start-up by Prith Banerjee (ex-Northwestern EE professor, now head of HP Labs) after AccelChip, which synthesizes software functions into hardware (a binary netlist), thus modifying an existing design for higher performance. Meanwhile, Space Codesign from Montreal operates on new designs in ESL, then maps your design implementation into hardware or software (in FPGA, functions running on an embedded CPU). Tools of this sort seem to be useful for large designs, which leads me to wonder how large most FPGA-based designs really are today.

  2. […] This post was mentioned on Twitter by Gary Dare, Olivier Coudert. Olivier Coudert said: New post @ocoudert Is FPGA a sustainable market for EDA? http://bit.ly/aWLWJD –RT please […]

  3. You point out an interesting dichotomy.

    On the one hand, FPGA tools want to be free because Xilinx and Altera and others are ultimately semiconductor companies that see the tools as a loss leader for the real silicon business.

    On the other hand, FPGA tools want to be expensive because they are approaching the complexity of ASIC tools, perhaps lagging ASIC tools by a few years at most.

    Although they don’t call it a Freemium model, the FPGA vendors have already started to move in that direction. For low-end FPGAs there are free tools up to a point, and then you have to pay. This is the way so many other software products and services are going online; I think it’s a natural fit for FPGA as well.

  4. Gary Dare says:

    Harry, I actually think that the situation is complementary. FPGA companies own part of their flow, so they supply software to establish that flow (versus ASIC, where the design house owns the flow regardless of foundry). But the line is drawn where added value means added investment and the development of a new revenue stream. That’s where the EDA ‘partners’ come in. Whether it’s Mentor Precision (Big 3) or BinaChip (start-up), the likes of Xilinx and Altera are happy to let them provide coattails to ride … the risk is borne solely by the value-added EDA ‘add-ons’ (an analogy might be Eclipse plug-ins?), and the result is more FPGA silicon sales. The value of the add-ons is proven by EDA sales.

  5. Hi Gary,

    Well yes, FPGA companies have no reason to go into EDA. The question is whether EDA can build a business selling FPGA design tools. Thanks for the info on the startups you refer to, I’ll check them out.

    As for the size of FPGA designs, I do not have hard data. But the latest devices from Xilinx and Altera are huge (1M+ gate equivalent). Abound Logic’s device is even larger (it has 750k CLBs). Such large devices require much better tools, and EDA can provide value given its experience with high-level synthesis, ASIC synthesis, and verification. The question is whether customers are ready to pay for these FPGA design tools. If the tools provide value (QoR, fast verification, HDL support), I believe they will. But competition will be fierce: Xilinx and Altera have huge software teams of capable people. Innovation will make the difference.

  6. Hi Harry,

    Yes, low-end FPGAs come with free design tools, and it will keep being that way. Interestingly enough, Altera *sells* its Quartus II Pro version, so there is obviously a high-end market willing to pay for the best results.

    EDA is in a difficult position. It can sustain its presence in the FPGA market only if that presence produces enough revenue, while its investment in developing better FPGA tools is strongly limited (even though its ASIC experience can benefit FPGA). This is not the case for Xilinx and Altera, which have been pouring a lot of resources into getting the best possible tools. We will see how Xilinx’s software overhaul unfolds.

  7. Gary Dare says:

    Olivier, as you point out, Xilinx and Altera have a lot of software resources, but their revenue is maximized and RISK REDUCED by maintaining a stable ecosystem for all of their users. Risk is taken on by the investors of (e.g.) BinaChip and Space, or by Synopsys and Mentor as companies. If any value-added EDA add-ons disappear, there is no impact (save for possible derived sales of silicon) on Xilinx, Altera, Actel, Lattice, etc. Right now, I can’t think of a single EDA technology that Xilinx or Altera would want to acquire and offer for free to ALL of their customers. This is a different approach from the days when Xilinx bought AccelChip, which was a great technology for customers in the DSP space but useless if you are creating a controller to monitor natural gas pipelines. The EDA value-added is in the eye of the customer and the domain in which they operate, for which they are willing to pay.

  8. Hello Gary,

    Long time since I’ve seen you.

    Might there be a need for a consortium? One where everyone shares the cost of developing tools for the benefit of its members?

    Thanks,
    Jeff

  9. With FPGAs starting to converge to general purpose embedded platforms, those hundreds of engineers will be involved in supporting, literally, whole systems-on-a-chip, from hard/soft microprocessors to coprocessor accelerators for those operations that need to be done in FPGA logic fabric.

    That said, most large-scale users I know complain bitterly about the place-and-route time for large designs in large FPGAs. We’re talking whole work days here, not 10 minutes. It doesn’t get better with bigger devices. This is one aspect of what the new FPGA startups and IP vendors are trying to address. Anyone, including EDA vendors, who can help reduce that time will do well.

  10. Colin says:

    Hi Olivier,

    One line in your blog post caught my attention by way of a google search:

    “Altera has now a software powerhouse, and is meticulous about its software design and testing.”

    I am very curious to know why you make these statements. I ask because I have switched from Xilinx to Lattice and am now evaluating Altera, as both of the other FPGA vendors have fallen short.

    The switch comes in regards to a 2D FFT design I have created and am unable to get working well on either Xilinx or Lattice. Both companies have very capable hardware, on paper, but the software falls apart (very long synthesis times, and it rarely closes timing). I was never able to get the full-blown version of my 2D FFT working on either platform.

    Having already been through two FPGA vendors in a short period of time, I had put off trying Altera. But I finally did move the design over, familiarized myself with the Altera software package, and I am stunned! The Altera software is leaps and bounds ahead of the other two vendors’! I literally let the other vendors’ software run for weeks straight and never closed timing. The same exact project in comparable silicon closed timing in Quartus II in under 40 minutes.

    I still do not believe the results. I have double- and triple-checked my design entry and settings in the Quartus II software, and everything looks fine. So I have turned to the web to see if perhaps Quartus II says one thing and then the hardware does another. Or perhaps Altera is legit?

    Until I actually load this design into physical hardware and see it working, any insight into the Altera software miracle is appreciated.

    Regards,

    Colin

    No miracle. Altera had a major setback in the early 2000s, all because of the SW. Basically, they decided to release a brand new version of their synthesis, w/o enough testing and w/o enough support for the old tool. It was a disaster. They lost their top customers to XLNX. I know of ALTR sales guys eager to keep a good relationship with their big customers (e.g., Cisco) who had no choice but to tell their customers NOT to use the new SW, and instead go with an alternative solution (meaning: stick with the old one, or switch to XLNX).

    So after that debacle, ALTR set up a strict methodology for SW development and testing. They also moved most of the SW development to Malaysia, keeping the hard-core innovative R&D in Toronto (and a few P&R people in San Jose). Their SW is now much better than XLNX’s, even though XLNX’s synthesis did improve substantially over the past 2 years or so.

    XLNX has been going through a lot of changes regarding its SW development, and unfortunately it is being done in a way that may be harmful. For instance, at the end of 2010 they closed their Grenoble facility (mostly working on XST, their synthesis solution) without really having people who can pick up and maintain that tool. Thus customers who want to keep using the mature version of XST may experience delays when they need support. XLNX decided to switch to a different technology (see their partnership with Oasys), but they are still in the process of developing the tool, and they don’t have the framework and methodology to develop and release such SW with ALTR’s level of quality.
