Thanks for the feedback -- we’re actively looking into this and will keep the HN community updated as we roll out support for more enterprise-focused tools.
So glad to hear that! Regarding the datasheet limit: the current Pro plan caps projects at 40 datasheets, but this is not a hard technical limit. For enterprise customers, we can raise this cap. The primary constraint is inference cost — once you go beyond ~40 datasheets, meaningful cross-checking can consume Pro-plan usage very quickly. For teams that are less cost-sensitive, a higher-tier plan with increased limits is feasible.
On EDA tool support, we can work with any tool that exports a netlist. If you can export to .EDIF, it should work out of the box, as this is the format we accept for Altium designs. The schematic visualizer currently supports KiCad only, but we are exploring how to parse full project files from other tools to provide the same visualization and extract additional metadata.
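For readers unfamiliar with EDIF: version 2.0.0 files are Lisp-style S-expressions whose top-level form is `(edif ...)`, so a cheap pre-parse sanity check on an export is easy to write. This is an illustrative sketch, not Netlist.io's actual validator:

```python
# Hypothetical sanity check for an exported EDIF netlist.
# EDIF 2.0.0 files are Lisp-style S-expressions whose top-level
# form is (edif <name> (edifVersion 2 0 0) ...).

def looks_like_edif(text: str) -> bool:
    """Cheap structural check before handing a file to a real parser."""
    stripped = text.lstrip()
    if not stripped.lower().startswith("(edif"):
        return False
    # Balanced parentheses are a minimum requirement for S-expressions.
    depth = 0
    for ch in stripped:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

sample = '(edif demo (edifVersion 2 0 0) (library work))'
print(looks_like_edif(sample))  # True
```

A check like this only rules out obviously truncated or misnamed exports; a real importer would still need a full S-expression parser behind it.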
If your team has a formal procurement process, feel free to reach out via the contact email on our site.
I may mention it for my next schematic, but they'd probably want to understand the data flow and run everything locally. Given that this is a wrapper around other models (and, full credit due, clearly not just a dumb wrapper like an extra prompt bolted onto a chatbot), this should be possible, yes? Run the part that queries the models locally, point it at our locally hosted LLMs, and we're good?
Otherwise we probably couldn't put many designs of substance in. Just the data security risk.
I may reach out from my corporate email tomorrow. It's public who I am and where I work but yes we certainly have a formal procurement process.
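For what it's worth, the local setup described above is plausible if the model-querying layer speaks the OpenAI-compatible chat API that most local inference servers (vLLM, llama.cpp server, Ollama) expose. A minimal sketch, with a placeholder endpoint and model name rather than anything from Netlist.io's actual API:

```python
# Sketch of "point the model-querying part at a local LLM", assuming the
# local server exposes an OpenAI-compatible /v1/chat/completions endpoint.
# The base URL and model name below are placeholders.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) a chat-completion request for a local endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "local-llama", "Review this netlist...")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

The point is that if the tool's LLM calls go through one configurable base URL like this, swapping the hosted provider for an on-prem model is a config change, not a rewrite.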
This is exactly why the first version of our tool worked with netlists only. We've since evolved to parsing the full KiCad project and generating a netlist from it so we can also extract schematic-specific metadata that doesn't make it into the netlist (designer notes/annotations, component positions, etc.).
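To illustrate what "metadata that doesn't make it into the netlist" means: KiCad 6+ `.kicad_sch` files are S-expressions in which free-text annotations appear as `(text "..." ...)` nodes. A toy extractor (a regex sketch that ignores escaped quotes, not the actual Netlist.io parser) could look like:

```python
# Minimal sketch of pulling designer notes out of a KiCad schematic.
# KiCad 6+ .kicad_sch files are S-expressions; free-text annotations
# appear as (text "..." (at x y angle) ...). This regex ignores escaped
# quotes and is only meant to illustrate the idea.
import re

TEXT_NODE = re.compile(r'\(text\s+"([^"]*)"')

def extract_notes(schematic_source: str) -> list[str]:
    """Return the free-text annotations found in a .kicad_sch file body."""
    return TEXT_NODE.findall(schematic_source)

sample = '''(kicad_sch (version 20230121)
  (text "Do not populate R5 on rev A" (at 50.8 25.4 0))
  (text "I2C pull-ups live on the sensor board" (at 80 40 0)))'''
print(extract_notes(sample))
```

Notes like "do not populate R5 on rev A" never appear in a netlist, yet they are exactly the kind of context an LLM reviewer benefits from.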
This looks great, but I want to know when LLMs will be useful for generating schematics rather than just checking them! It’s such a letdown right now to jump from doing firmware with Claude Code to drawing schematics manually like it’s 2022. :) When does KiCad get its little assistant pane on the right?
Schematic generation doesn’t really make sense to me, because the cost of a problem going unnoticed is much more significant in hardware design than in software.
Yes. I’d recommend trying the free plan with your design to see how it performs. You can also steer the review in a specific direction using custom instructions in "Advanced Options" if there are particular parts of the design you want analyzed.
Hi! This is a totally fair question, and I appreciate you raising it. Getting reliable performance out of an LLM on something as structured as a schematic is hard, and I don’t want to pretend this is a solved problem or that the tool is infallible.
Benchmarking is tricky right now because there aren’t many true “LLM ERC” systems to compare against. You could compare against traditional ERC, but this tool is meant to complement that workflow, not replace it. For this initial MVP, most of the accuracy work has come from collecting real shipped-board schematics (mine and friends’) with known issues and iterating until the tool consistently detected them. A practical way to evaluate it yourself is to upload designs you already know have issues, along with the relevant datasheets, and see how well it picks them up. Additionally, if you have a schematic with known mistakes and are open to sharing it, feel free to reach out through the "contact us" page. Contributions like that are incredibly helpful, and I’d be happy to provide additional free usage in return.
I’ll also be publishing case studies soon with concrete examples: the original schematics, the tool’s output, what it caught (and what it missed), and comparisons against general-purpose chat LLM responses.
The goal isn’t to replace a designer’s judgment, but to surface potential issues that are easy to miss. Similar to how AI coding tools flag things you still have to evaluate yourself. Ultimately the designer decides what’s valid and what isn’t.
I really appreciate the push for rigor, and I’ll follow up once the case studies are live.
Hi! Great question. Right now the tool focuses on issues that show up in the schematic. So it’s very well-equipped to handle a lot of the classic “how did no human ever catch this” mistakes — things like reversed polarity, TX/RX getting swapped, missing pull-ups, etc.
But it sounds like in this case the root cause was more of a footprint/layout issue rather than a schematic one. I’m hoping to add footprint-level checks later on, once I can ingest full board files and mechanical data.
Hi! If the vacuum tube schematic is designed in KiCad or Altium, then yes! If your design was made in another tool let me know which one and I will do my best to add support for it.
I had a really similar experience, which is a big reason why I built this. Uploading my own schematics to the usual web LLMs gave a mix of useful notes and some pretty big misunderstandings. I really believe this tool is set up to deliver better results than the general-purpose GPT/Gemini/Claude interfaces for this kind of task. Hoping others try it and have a much better experience too!
Also good call on processing EasyEDA schematics. I hadn’t considered that initially, but I’m definitely going to add support for it.
Thank you so much! Totally agree. Knowing people in the space to sanity-check designs has saved me countless times. I’m hoping this tool can bring some of that ‘pre-flight checklist’ group wisdom to solo and newer designers as well. Really appreciate the feature ideas too!
Isn’t the primary issue that newer designers don’t know they should run ERC (or that ERC even exists)? Isn’t your tool going to have the same issue? i.e., how do users even know they should run it in the first place? How do you plan to overcome that barrier?
I’m not against more automated checkers, I’m very much for automated checkers, but I’m curious how you plan to not repeat the mistakes of the past.
Netlist.io is a web app that ingests your KiCad/Altium netlist and relevant datasheets so an LLM can reason about the actual circuit. It’s built to catch schematic mistakes that traditional ERC tools often miss, and it can even help debug already-fabbed boards by letting you describe the failure symptoms.
I built this because I was tired of shipping boards with avoidable mistakes — hopefully it saves you from a re-spin too!
Ingesting datasheets is an interesting angle compared to normal ERC, which KiCad supports out of the box, but how good is it at the ingestion?
Datasheets themselves are inconsistent and incomplete, so I’m wondering how you evaluated the accuracy of the import and what your acceptance criteria are.
Hi! Datasheets can definitely be inconsistent, and that’s one of the tougher parts of doing this well. LLMs are very much “garbage in, garbage out,” which is exactly why the tool doesn’t search the web or pull from any sort of automatic datasheet library. It only reasons from the netlist and the PDFs you upload, so you stay in full control of the context and the primary sources it can pull from. If the datasheet is clear, the results are usually very solid; if the datasheet is vague, the model reflects that instead of pretending otherwise.
I’d really recommend trying it with one of your designs: upload the netlist + a component’s datasheet and ask a specific question about the part in the design. It’s the easiest way to see how well the ingestion works in practice. Would love to hear your feedback after you try it!
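To make the "you control the context" point concrete, here is a hypothetical prompt builder that draws only on user-supplied sources; the function name and prompt wording are assumptions for illustration, not Netlist.io's implementation:

```python
# Illustrative sketch: the prompt is assembled only from what the user
# uploaded (netlist + datasheet text), with no web search or external
# datasheet library. Names and wording here are hypothetical.

def build_review_prompt(netlist: str, datasheets: dict[str, str], question: str) -> str:
    """Assemble an LLM prompt strictly from user-supplied sources."""
    parts = ["You may only use the sources below. If they are vague, say so."]
    parts.append("== Netlist ==\n" + netlist)
    for name, text in datasheets.items():
        parts.append(f"== Datasheet: {name} ==\n{text}")
    parts.append("== Question ==\n" + question)
    return "\n\n".join(parts)

prompt = build_review_prompt(
    "(net VCC (node U1 8) (node C1 1))",
    {"U1": "Supply voltage: 1.8 V to 3.6 V"},
    "Is a 5 V rail on VCC within U1's absolute maximum rating?",
)
print(prompt.splitlines()[0])
```

Keeping the context closed like this is what lets a vague or missing datasheet surface as "the source doesn't say" rather than a confident guess.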
From the mistakes actually found and confirmed, how likely do you think it is that they could be progressively transformed into well-defined rules that don’t depend on an LLM?