With respect to tooling, I believe the work on the Lattice iCE40 will be an inflection point.
Consider: open tooling drives user device adoption, which in turn drives tooling refinement. At some point, people working on the tooling are going to have ideas that simply never happened, or never got management support, on the commercial side.
Now if this enables some advancement that suddenly makes iCE40 devices more appealing in end products, then Xilinx and Altera are going to be on the outside looking in. If, further, Lattice watches what's happening and develops hardware enhancements that accelerate, reduce the power draw of, or otherwise improve whatever has been built on the open tooling side, that will further entrench their style of FPGA architecture.
For example, I am certain that the openness of Linux has essentially killed the search for new "process models" on both the software and the hardware side. (Think address space structure and MMU design.)
However, if we are realistic, the web paradigm imposes very few architectural requirements. Somebody, anybody, could re-architect the stack from the moment a GET request hits the NIC to provide whatever the existing mass of cruft on x86 systems provides, probably at far less cost, with far more performance and far better efficiency.
The question then becomes: who are you doing it for? If it's for customers, and you require them to use specialized tools, then they're always going to whine about lock-in. (There are ways to solve this problem, but this is already getting rather long.) So the only hope is that you build an app that leverages your infrastructure in a way that no one can clone without a greater investment in traditional tooling.
This is all just a long-winded way of saying that I do believe there is something "out there" in having open FPGA tooling. The time is right. I see a lot of future in SoCs with integrated FPGA fabric and open tooling.
Personally, I'd love to see something like an ARM Cortex-M7 core (or cores) + FPGA. Do your general-purpose stuff on the microcontroller, and accelerate specific tasks with dynamic reconfiguration of the fabric.
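To make that concrete, here's a rough sketch of what the MCU-side programming model could look like. Everything below is hypothetical: the fpga_* function names and the FIR example are made-up stand-ins for whatever a real vendor SDK would provide, not an existing API.

    /* Hypothetical MCU-side flow: dynamically reconfigure the fabric with a
       task-specific bitstream, hand it a buffer, and let it crunch while the
       core does general-purpose work. All fpga_* calls are imaginary. */
    #include <stdint.h>
    #include <stddef.h>

    /* Imaginary SDK entry points. */
    int  fpga_load_bitstream(const uint8_t *bits, size_t len);
    void fpga_set_buffers(const int16_t *in, int16_t *out, size_t n);
    void fpga_start(void);
    int  fpga_wait_done(void);

    /* A task-specific bitstream (say, a FIR filter) baked into flash. */
    extern const uint8_t fir_bitstream[];
    extern const size_t  fir_bitstream_len;

    int accelerate_fir(const int16_t *in, int16_t *out, size_t n)
    {
        /* Swap the FIR accelerator into the fabric (dynamic reconfiguration). */
        if (fpga_load_bitstream(fir_bitstream, fir_bitstream_len) != 0)
            return -1;

        /* Point the fabric at the sample buffers, e.g. over an AHB/AXI bridge. */
        fpga_set_buffers(in, out, n);

        /* Kick off the job; the core is free to sleep or service other work. */
        fpga_start();
        return fpga_wait_done();
    }

The appeal of that model is that the bitstream becomes just another asset the firmware can swap at runtime, rather than something you burn in once at the factory.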
What is going to happen, however, is that Intel will release a massive, overpriced Xeon with an inbuilt Altera FPGA, and only the likes of Google and its ilk are going to be able to afford to do anything with it.
Here's hoping, though. I have faith in the Chinese!
"Consider: open tooling drives user device adoption"
Citation needed. Given that most of the device adoption that has already taken place happened with seriously expensive closed-source tools that aren't exactly paragons of UX quality, I think this is assuming a LOT.
My guess is that the quality of the tools (and I'm handwaving "open" into "higher quality," which is not guaranteed) is a distant concern: device capabilities, power envelope, $/10k, IP library, and many more take precedence.
"device capabilities, power envelope, $/10k, IP library, and many more take precedenc"
They do. There have been all kinds of open tooling and more open hardware. What did most people buy and keep investing in? Intel, AMD, IBM, Microsoft, the FPGA Big Two, the EDA Big Three, etc. They got the job done easily, reliably enough, and at acceptable price/performance.
I call out the OSS crowd all the time on why they haven't adopted the GPL'd Leon3 SPARC CPUs and open firmware if it means so much to them. It doesn't have X, it costs Y, or they're too lazy to do Z. Always.
Hopefully the Lattice work will inspire some more, but almost everything I've seen comes out of academia. It takes smart, smart people and a lot of mental investment to do anything critical. Most FOSS developers, ASIC companies, and FPGA vendors stay away from it. So I doubt anything will happen by default.
"What is going to happen, however, is Intel will release a massive, overpriced, Xeon with an inbuilt Altera FPGA and only the likes of Google and ilk are going to be able to afford to do anything with it."
That was my prediction. I look forward to it, though, as a low-latency, ultra-high-bandwidth interface is what FPGA co-processors need most. The first place I recall seeing that was in SGI Altix machines. It was a smart acquisition by Intel.
Your reasoning is sound, but the economies of scale that apply to the open-source world of software do not apply in the same way to the open-source world of hardware. Capital investments on the order of hundreds of millions or even billions are fairly normal in that world.
What we personally would love to see or not does not really matter if the economic underpinnings aren't there.
The iCE40 is one of the niche-market (in this case, ultra-low-power) FPGAs that your post's parent talks about. It competes primarily with other boutique manufacturers and ancient big-vendor parts like CoolRunner CPLDs, not with full-featured top-tier FPGAs like Virtex. No Holy Grail of open tooling is going to make up for a lack of essential features that cost hundreds of millions of dollars to develop.
Especially since the FPGA vendors keep wisely funding and buying most of the best results in academia to keep them from being open. I was excitedly looking at one with a many-fold speedup in synthesis earlier, until I saw the word Altera in the credits. (sighs) There goes another one...