Hacker News | neuronexmachina's comments

I'm curious how this compares to just setting up a claude-code-action with one of Anthropic's existing code-review plugins:

* https://github.com/anthropics/claude-plugins-official/tree/m...

* https://github.com/anthropics/claude-plugins-official/tree/m...


I've found that usually works OK, but it currently tends to time out with the Atlassian MCP when trying to update large Confluence pages: https://github.com/atlassian/atlassian-mcp-server/issues/59

Yeah, we wrote our own CLIs for Jira/Confluence/Zendesk using their REST APIs instead. Works well, although a bit more work.
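A thin wrapper like that can stay very small. Here's a hypothetical sketch of the "update a Confluence page" half, following the Confluence Cloud REST API's content-update endpoint (PUT /wiki/rest/api/content/{id}); the base URL, env-var names, and helper name are all my assumptions, not the actual CLI described above:

```python
# Minimal "update a Confluence page" helper of the kind described above.
# Builds the request without sending it, so the payload shape is visible.
import base64
import json
import os
import urllib.request

BASE_URL = "https://example.atlassian.net/wiki"  # assumed site URL


def build_update_request(page_id: str, title: str, html_body: str,
                         next_version: int) -> urllib.request.Request:
    """Build (but don't send) the PUT request that replaces a page's body."""
    payload = {
        "id": page_id,
        "type": "page",
        "title": title,
        "version": {"number": next_version},  # must be current version + 1
        "body": {"storage": {"value": html_body,
                             "representation": "storage"}},
    }
    # Basic auth with an Atlassian API token (env vars are assumptions)
    auth = base64.b64encode(
        f"{os.environ.get('ATLASSIAN_USER', 'me@example.com')}:"
        f"{os.environ.get('ATLASSIAN_TOKEN', 'token')}".encode()
    ).decode()
    return urllib.request.Request(
        url=f"{BASE_URL}/rest/api/content/{page_id}",
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {auth}"},
    )


req = build_update_request("12345", "Runbook", "<p>updated</p>", next_version=7)
print(req.get_method(), req.full_url)
```

One nice side effect versus the MCP route: you control the payload size and can chunk large pages yourself instead of hitting a tool timeout.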

My flow is pretty similar, except I also add in these steps at the end of planning:

* Review the plan for potential issues

* Add context to the plan that would be helpful for an implementing agent


This was from congressional testimony this past week by executives from Waymo and Tesla, video and automated transcript here: https://www.c-span.org/program/senate-committee/tesla-and-wa...


> Claude now automatically records and recalls memories as it works

Neat: https://code.claude.com/docs/en/memory

I guess it's kind of like Google Antigravity's "Knowledge" artifacts?


If it works anything like the memories on Copilot (which have been around for quite a while), you need to be pretty explicit about it being a permanent preference for it to be stored as a memory. For example, "Don't use emoji in your response" would only be relevant for the current chat session, whereas this is more sticky: "I never want to see emojis from you, you sub-par excuse for a roided-out spreadsheet"


> you sub-par excuse for a roided-out spreadsheet

That’s harsh, man.


It's a lot more iffy than that IME.

It's very happy to throw a lot into the memory, even if it doesn't make sense.


This is the core problem. The agent writes its own memory while working, so it has blind spots about what matters. I've had sessions where it carefully noted one thing but missed a bigger mistake in the same conversation — it can't see its own gaps.

A second pass over the transcript afterward catches what the agent missed. Doesn't need the agent to notice anything. Just reads the conversation cold.

The two approaches have completely different failure modes, which is why you need both. What nobody's built yet is the loop where the second pass feeds back into the memory for the next session.
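That feedback loop is simple enough to sketch. Everything below is hypothetical — the transcript format, the paths, and `review_transcript` (a stub standing in for a fresh model call over the session log) are all assumptions, not anything Claude Code actually ships:

```python
# Hypothetical second-pass loop: after a session ends, read the transcript
# cold and append anything the working agent missed to MEMORY.md for the
# next session.
import json
from pathlib import Path


def review_transcript(turns: list[dict]) -> list[str]:
    """Stub: a real version would send the whole transcript to a fresh
    model instance and ask for lessons the working agent failed to record."""
    return [t["text"] for t in turns if t.get("flagged")]


def feed_back(transcript_path: Path, memory_path: Path) -> int:
    """Run the second pass and append its findings to the memory file."""
    turns = [json.loads(line)
             for line in transcript_path.read_text().splitlines()]
    lessons = review_transcript(turns)
    if lessons:
        with memory_path.open("a") as f:
            f.write("\n## Second-pass review\n")
            for lesson in lessons:
                f.write(f"- {lesson}\n")
    return len(lessons)
```

The key design point is that `review_transcript` gets no state from the original session — it reads the conversation cold, which is exactly what gives it a different failure mode.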


Is there a way to disable it? Sometimes I value the agent not having the knowledge that it needs to cut corners.


90-98% of the time I want the LLM to only have the knowledge I gave it in the prompt. I'm actually kind of scared that I'll wake up one day and the web interface for ChatGPT/Opus/Gemini will pull information from my prior chats.


They already do this.

I've had Claude reference prior conversations when I'm trying to get technical help on thing A, and it will ask me if this conversation is because of thing B that we talked about in the immediate past.


You can disable this at Settings > Capabilities > Memory > Search and reference chats.


I'm fairly sure OpenAI/ChatGPT does pull prior information in the form of its memories.


Ah, that could explain why I've found myself using it the least.


All of these providers support this feature. I don't know about ChatGPT, but the rest are opt-in. I imagine with Gemini it'll be default-on soon enough, since it's consumer-focused. Claude does constantly nag me to enable it, though.


Had ChatGPT reference three prior chats a few days ago. So if you're looking for a total reset of context, you'd probably need to do a small bit of work.


Gemini has this feature but it’s opt-in.


Claude told me it can disable it by putting instructions in the MEMORY.md file not to use it. So it's only a soft disable AFAIK, and you'd need to do it on each machine.


I ran into this yesterday and disabled it by changing permissions on the project's memory directory. Claude was unable to advise me on how to disable it. You could probably write a global hook for this. Gross, though.
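The permissions trick above can be scripted. A minimal sketch, assuming the per-project memory directory layout other comments in this thread describe (the exact path encoding is not confirmed here):

```python
# Hard-disable memory by making the per-project memory directory read-only,
# so the agent's Write/Edit tool calls into it fail.
import stat
from pathlib import Path


def lock_memory_dir(memory_dir: Path) -> None:
    """Make the memory directory read-only for everyone (mode 555)."""
    memory_dir.mkdir(parents=True, exist_ok=True)
    memory_dir.chmod(stat.S_IRUSR | stat.S_IXUSR |
                     stat.S_IRGRP | stat.S_IXGRP |
                     stat.S_IROTH | stat.S_IXOTH)
```

Unlike a MEMORY.md instruction, this isn't something the model can talk itself out of — the writes just fail at the filesystem.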


Are we sure the docs page has been updated yet? Because that page doesn't say anything about automatic recording of memories.


Oh, quite right. I saw people mention MEMORY.md online and I assumed that was the doc for it, but it looks like it isn't.


Yeah, and I was confused by the child comments under yours. They clearly didn’t read your link.


I understand everyone's trying to solve this problem but I'm envisioning 1 year down the line when your memory is full of stuff that shouldn't be in there.


I looked into it a bit. It stores memories near where it stores the JSONL session history. It's per-project (and specific to the machine). Claude pretty aggressively and frequently writes stuff in there. It uses MEMORY.md as a sort of index, and will write out other files for other topics (linking to them from the main MEMORY.md file).

It gives you a convenient way to say "remember this bug for me, we should fix it tomorrow". I'll be playing around with it more for sure.

I asked Claude to give me a TLDR (condensed from its system prompt):

----

Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations

MEMORY.md is always injected into the system prompt; truncated after 200 lines, so keep it concise

Separate topic files for detailed notes, linked from MEMORY.md

What to record: problem constraints, strategies that worked/failed, lessons learned

Proactive: when I hit a common mistake, check memory first - if nothing there, write it down

Maintenance: update or remove memories that are wrong or outdated

Organization: by topic, not chronologically

Tools: use Write/Edit to update (so you always see the tool calls)
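For anyone wanting to poke at the directory themselves, here's a sketch of where it lands. The slash-to-dash encoding of {project-path} is my assumption based on what `~/.claude/projects/` entries look like on disk — treat it as illustrative, not authoritative:

```python
# Compute the assumed per-project memory directory for a given project path.
from pathlib import Path


def memory_dir(project_path: str) -> Path:
    # Assumption: the project path is flattened by replacing "/" with "-"
    encoded = project_path.replace("/", "-")
    return Path.home() / ".claude" / "projects" / encoded / "memory"


print(memory_dir("/home/me/widgets"))
# e.g. ~/.claude/projects/-home-me-widgets/memory
```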


> Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations

I create a git worktree, start Claude Code in that tree, and delete it afterwards. I notice each worktree gets a memory directory in this location. So is memory fragmented and not combined for the "main" repo?


Yes, I noticed the same thing, and Claude told me that it's going to be deleted. I will have it improve the skill that is part of our worktree cleanup process to consolidate that memory into the main memory if there's anything useful.
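The consolidation step could look something like this — a hypothetical sketch of what such a cleanup skill might run before deleting the worktree (paths and the "Imported from worktree" heading are made up):

```python
# Before a worktree (and its per-path memory dir) is deleted, fold anything
# in its MEMORY.md into the main repo's MEMORY.md.
from pathlib import Path


def consolidate(worktree_memory: Path, main_memory: Path) -> bool:
    """Append the worktree's MEMORY.md to the main one. Returns True if
    anything was copied."""
    src = worktree_memory / "MEMORY.md"
    if not src.exists() or not src.read_text().strip():
        return False  # nothing worth keeping
    main_memory.mkdir(parents=True, exist_ok=True)
    dst = main_memory / "MEMORY.md"
    with dst.open("a") as f:
        f.write(f"\n## Imported from worktree\n{src.read_text()}\n")
    return True
```

A smarter version would deduplicate or summarize rather than blindly append, since the main MEMORY.md reportedly gets truncated after 200 lines.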


I thought it was already doing this?

I asked Claude UI to clear its memory a little while back and hoo boy CC got really stupid for a couple of days


It's odd, because as far as I can tell, the only reason one would need a Mac Mini would be for iMessage. Other than that, a Raspberry Pi should work perfectly fine and cost an order of magnitude less.


I'm reminded of an obscure Gamecube game called Odama (https://en.wikipedia.org/wiki/Odama) which was kind of a bizarre blend between pinball + RTS, where you commanded feudal Japanese troops using the Gamecube Microphone. Of course, this was 2006, so it only accepted a short list of vocal commands like "Company halt!" and "Charge!"


Looks like Cursor Agent was at least somewhat involved: https://github.com/wilsonzlin/fastrender/commit/4cc2cb3cf0bd...


Looks like a bunch of different users (including Google's Jules, which made one commit) have been contributing to the codebase, and the recent "fixes" include switching between various git users. https://gist.github.com/embedding-shapes/d09225180ea3236f180...

This to me seems to raise more questions than it answers.


The ones at *.ec2.internal generally mean that the git config was never set up and it defaults to $(id -un)@$(hostname).


Indeed. Extra-observant people will notice that the "ubuntu" username was used only twice, though, compared to "root", which was used 3700+ times. And observant people who've dealt with infrastructure before might recognize that username as the default for interactive EC2 instances :)


It's crazy that it also bans new models from Europe's Wingtra, Quantum Systems, and AgEagle, which are basically the only consumer fixed-wing drones available. Heck, those companies were even previously approved for the DOD's "Blue UAS" list: https://bluelist.appsplatformportals.us/Cleared-List/


It’s only crazy if you think Europe and the US are still allies. That simply isn’t the case anymore. The US is on its own now.


Not completely on its own, at least they still have Russia on their side (or rather the other way around).


If I understand correctly, this doesn't ban the import/sale of drone models which the FCC previously approved. That said, in October 2025 the FCC granted itself the authority to retroactively revoke previously-approved models, so this is something they could still potentially do.


It bans the import, but not the sale, of models the FCC has previously approved.


Your originally quoted text explicitly disagrees with you: "This update to the Covered List does not prohibit the import, sale, or use of any existing device models the FCC previously authorized."


Mea culpa. I'd been reading some reporting earlier in the day. Trying to find verification for those claims, I see that it was wrong.

Which is better than it could be, all things considered.

