Nice specs! Looking forward to seeing how this and the other projects on Waferspace go. Being able to produce 1k chips at a reasonable price will hopefully do wonders for open hardware / open silicon.
Yep, Aegis's Terra 1 is designed to be "good enough" for the first generation. I do plan on expanding the Terra family of FPGAs if there's enough interest. I do want to work my way up to 100k LUTs.
What are your thoughts on including a RISC-V hard core along with the gates? For almost all projects I can imagine using an FPGA for, I would also want a microcontroller. This might however be slightly colored by me being a software/firmware-first type of electronics engineer. I'm thinking especially of the smaller gate counts, like under 10k, because there a soft CPU takes up very precious resources.
1k chips for $4000 or $7000 at 180nm is (a lot) more expensive than 180nm at MOSIS or Europractice. I would not call it reasonable, especially because the EDA software tools and PDK used are inferior.
I went through the list of prices at Europractice. Waferspace is 7000 USD for 1k chips of 20mm2. That is a per-mm2 price of 350 USD. I could not find any offering at Europractice that matches that?
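For anyone double-checking the arithmetic, a trivial back-of-envelope in Python (assuming the 7000 USD / 1k chips / 20mm2 figures quoted above):

    # Assumed figures from the Waferspace price list quoted above
    batch_price_usd = 7000
    chips_per_batch = 1000
    die_area_mm2 = 20

    print(batch_price_usd / die_area_mm2)     # 350.0 USD per mm2 of design area, for the batch
    print(batch_price_usd / chips_per_batch)  # 7.0 USD per chip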
Chip fabs do not publish prices. First of all, the cost of making a wafer is not a single item. What node, on what chip machine are they going to be made, what process, what PDK, are you breaking any of the PDK limits, what testing has your design gone through, what types and numbers of dice when the wafer gets diced into chips, are there tests before the chips get diced or only after, what packages do the chips go in. Insurance types and fees, locations, what batches. All these steps can be performed in different fabs by different companies and subcontractors; between them they might have to ship your wafer under clean-room conditions, sometimes flown around the world.
A wafer batch price is a very complex multi-party negotiation under NDAs, and none of them has ever been made public. Show me any credible price quote from the last 55 years (for a few million chips). You can't.
On these multi-party shuttle projects this gets simplified into a price list, where they quote you a high ball-park number that covers the cost of your test chips by a wide margin. The actual cost is never disclosed, certainly not on price lists.
A mask-set maker and a chip fab create half of your product; they own that intellectual product and they won't even tell you what it has cost them. They merge their product with yours, and now they co-own your product. There are only a few competing companies worldwide (and fewer every year) and they compete on all this non-disclosed stuff. Prices above all.
Never believe what you read on the internet, especially about the chip industry.
There are a few EDA companies, all with ancient software tools that are nonetheless kept up to date with the changing parameters and algorithms. You use the tools the insurance companies tell you to use, or the mandatory tools of your chip fab suppliers. They run a lot of software tools on your design files that you never get to see.
If you want to make better chips, like the low-power Apple Silicon for example, you create your own EDA software tools to enable the innovation. Creating a new transistor like the CFET [1] means writing new physics simulation tools, for example.
The outdated, buggy, 1990s-style OpenLane software, for example, limits what kind of RAM transistors you can make and the complexity of your design.
My team makes asynchronous chips, free-space optics photonics, ultra-dense 2-transistor SRAM, niobium SFQ chips, and wafer-scale integrations. All require bespoke software simulation tools, netlist rewriting tools, cross-reticle stepper exposure software (a software change in a $400 million machine), etc. etc.
Making hardware with near-atomic-size structures is mostly a software job. "Hardware is just software crystallized early," as Alan Kay quips.
Glad to see this. At least there is one player in Europe that does full vertical integration around LLM/AI - from datacenter to LLM models and applications (Mistral Vibe).
On the data center part Europe seems to be doing OK, and also OK on applications. It would be nice to see more players focusing on LLM model building - though it legitimately seems like a very tough (maybe even bad) business to be in.
Have been playing with Qwen3.5 35B. It runs nicely on an RTX 5060 Ti, though I would have liked a bit higher throughput (a 5080/5090 would do). It is seemingly close-but-not-quite-there for code generation / agentic coding. So I am actually quite hopeful that in a few years' time, using local LLM models will be quite feasible.
An AMD Ryzen AI Max Pro 396 will get 50 t/s with Qwen3.5 35B.
In addition, these local models are very, very, very sensitive to the chat template used. Make sure it is correct. I was using the wrong template and it would still answer, but it felt like it had a brain worm.
The sampling parameters must also be set to the recommended values, otherwise the models go off the rails.
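For reference, a minimal sketch of getting both right with Hugging Face transformers - the model id and the sampling values here are placeholders, so check the model card for the actual recommendations:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3.5-35B"  # placeholder repo id, adjust to the real one
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Write a binary search in Python."}]

    # apply_chat_template inserts the special tokens the model was trained
    # with; getting this wrong is what produces the "brain worm" answers.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,  # placeholder values - use the model card's
        top_p=0.8,        # recommended sampling parameters instead
        top_k=20,
    )
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))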
I get great results now after messing with it for a while. I prefer the 35B model because I enjoy how fast the tokens appear at 50 t/s, but at around 20-25 t/s the 122B model is also completely usable. And that one is very smart.
Very much looking forward to playing with the BIO functionality on the Baochips that I have ordered. Thanks for the nice write-up!
It is fascinating to see how widely applicable the "just throw a RISC-V core or four in there" design pattern is. The wide range of standardized CPU designs, the number of mature open-source implementations, the lack of royalty fees, and the ready-to-run programming toolchains really drive this to a new level. And CPUs are small in die area anyway compared to SRAM! It was cool to see on the RP2350 how they just threw in another two RISC-V cores next to the ARM cores.
For the reasons specified above, I think this trend will continue. For example, in my specialization of edge machine learning, we are seeing MEMS sensors that integrate user-programmable DSP+ML+CPU right there on the sensor chip.
Highly recommend Statistical Rethinking for anyone looking for a practical/applied/intuitive approach to Bayesian statistics. For example, the 2023 lecture series:
https://youtu.be/FdnMWdICdRs?is=KycmwPL-cn8clOK5
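For a taste of how hands-on the book is, here is a minimal numpy/scipy sketch of the globe-tossing grid approximation it opens with (estimating the proportion of water on a globe from 6 "water" observations in 9 tosses):

    import numpy as np
    from scipy.stats import binom

    W, N = 6, 9                       # 6 "water" observations in 9 tosses
    p_grid = np.linspace(0, 1, 1000)  # candidate proportions of water
    prior = np.ones_like(p_grid)      # flat prior
    likelihood = binom.pmf(W, N, p_grid)
    posterior = likelihood * prior
    posterior /= posterior.sum()      # normalize into a distribution

    print("posterior mean:", (p_grid * posterior).sum())  # ~0.64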
Many "subjective" tasks can also be done in an "objective" manner - as long as there is a large enough dataset to estimate what humans would evaluate the outputs - and the evaluators being reasonably consistent.
Many human preferences are relatively homogeneous, or at least clustered into groups. And there are whole fields of study/practice around such phenomena, such as sensory science - with applications in food, audio, images, etc.
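As an illustrative sketch (with made-up ratings), a first sanity check is whether the evaluators agree with each other before treating their consensus as an "objective" target:

    import numpy as np

    # rows = items being rated, columns = 3 human evaluators, scale 1-5
    ratings = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 5, 4],
        [1, 2, 1],
        [3, 3, 4],
    ])

    # consistency check: average pairwise correlation between raters
    n = ratings.shape[1]
    corrs = [np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    print("mean pairwise rater correlation:", np.mean(corrs))

    # if agreement is high, the per-item mean is a usable target for a
    # model that predicts "what humans would say" about new outputs
    print("per-item consensus scores:", ratings.mean(axis=1))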
Very nice development towards more open hardware. And super powerful at that. The verification technique is also super interesting. Have ordered a small set to play with, and to support the project.
There are also several RISC-V microcontrollers on run 1 of Wafer Space; hopefully some of those will also be available online soon.
https://github.com/wafer-space/ws-run1
There are easy fixes to get rid of violent and crazy people. Why would a powerful ASI bother with fixing them? A rabid dog just gets put down by humans. Why would we expect anything better of our overlords?