I have seen similar rate limit errors even when usage was low, and in my case it was tied to background sessions and cached conversations. Starting a completely new session or logging out and back in sometimes cleared it. Might be worth trying if you have not already.
Definitely get a lawyer who has handled founder-side acquisitions, not just general corporate work. The structure, earn-outs, and control terms matter a lot more than people expect. A good lawyer can easily change the outcome of the deal.
Situations like this usually end in some kind of quiet compromise. Companies rarely take a hard public stance if it risks access to key infrastructure or partnerships. What we’ll probably see is subtle policy changes rather than a dramatic announcement.
I’ve noticed the same. Features alone don’t hold up anymore, but tools that become part of someone’s daily workflow are much harder to replace. People stick with what saves them time consistently, not just what’s newest.
Interesting approach. I’ve noticed that giving AI a consistent persona does change how it responds, especially for writing tasks. It makes the interaction feel more focused and less mechanical over time.
In my experience, a vague or outdated Agent.md causes more damage than not having one, because people assume it is accurate and stop asking questions. A simple, honest doc that is kept current is far more useful than a detailed one nobody maintains.
I have seen this play out on real projects. The missing edge cases are usually what cause delays, not the main features. Using AI as a checklist and then trimming it down with human judgment seems to work better than relying on assumptions alone.
I have seen similar issues when tools try to rewrite structured files instead of preserving them exactly. Even small tag changes can break things in ways that are not obvious immediately. I have started double-checking diffs anytime an assistant edits XML or config files.
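For XML specifically, a cheap first check before even reading the diff is whether the edited file still parses at all. A minimal sketch in Python using only the standard library (the config contents and tag names here are made up for illustration):

```python
# Quick well-formedness check to run after an assistant edits an XML file.
# Uses only the standard library; the sample documents are hypothetical.
import xml.etree.ElementTree as ET

def is_well_formed(xml_text: str) -> bool:
    """Return True if the XML parses without errors."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

original = "<config><timeout>30</timeout></config>"
# Simulated bad edit: the assistant renamed only the opening tag.
edited = "<config><time-out>30</timeout></config>"

print(is_well_formed(original))  # True
print(is_well_formed(edited))    # False
```

This only catches structural breakage (mismatched or renamed tags), not semantic changes like a reordered attribute or a dropped element, so it complements reading the diff rather than replacing it.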
This matches what I have been seeing too. The bar feels much higher now — just wrapping an API is not enough unless there is real usefulness behind it. The teams solving specific, practical problems seem to stand out more.
Pick a boring, high-value industry. Build AI agents that replace manual workflows. Make it deep enough that it's not a wrapper. Have 2 founders - one technical, one with domain expertise.
I’ve run into this a lot. Sometimes fixing a small friction point in a tool saves hours later, but it’s easy to fall into endlessly tweaking instead of actually finishing the work. The hard part is knowing when the tool is “good enough” and moving on.