If it works anything like the memories on Copilot (which have been around for quite a while), you need to be pretty explicit about it being a permanent preference for it to be stored as a memory. For example, "Don't use emoji in your response" would only be relevant for the current chat session, whereas this is more sticky: "I never want to see emojis from you, you sub-par excuse for a roided-out spreadsheet"
This is the core problem. The agent writes its own memory while working, so it has blind spots about what matters. I've had sessions where it carefully noted one thing but missed a bigger mistake in the same conversation — it can't see its own gaps.
A second pass over the transcript afterward catches what the agent missed. Doesn't need the agent to notice anything. Just reads the conversation cold.
The two approaches have completely different failure modes, which is why you need both. What nobody's built yet is the loop where the second pass feeds back into the memory for the next session.
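That loop could be sketched in a few lines. Everything here is hypothetical: the reviewer is a stand-in keyword scan where a real version would be a separate LLM call over the transcript, and the function names are made up for illustration.

```python
# Sketch of the missing loop: a second pass reads the transcript cold and
# feeds its findings into memory for the next session. The reviewer below is
# a stand-in keyword scan; a real one would be a separate LLM call.
def second_pass_review(transcript: str) -> list[str]:
    # Stand-in reviewer: flag lines that look like unacknowledged problems.
    return [ln.strip() for ln in transcript.splitlines()
            if "error" in ln.lower() or "wrong" in ln.lower()]

def feed_back_into_memory(transcript: str, memory: list[str]) -> list[str]:
    # Append only findings the agent's own notes don't already contain.
    for finding in second_pass_review(transcript):
        if finding not in memory:
            memory.append(finding)
    return memory

memory = ["noted: prefer tabs in this repo"]
transcript = "agent: done!\nuser: that output is wrong\nagent: fixed"
print(feed_back_into_memory(transcript, memory))
# prints the original note plus the newly flagged "wrong" line
```

The point is the second function never relies on the agent having noticed anything during the session; it only compares the cold review against what's already stored.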
90-98% of the time I want the LLM to only have the knowledge I gave it in the prompt. I'm actually kind of scared that I'll wake up one day and the web interface for ChatGPT/Opus/Gemini will pull information from my prior chats.
I've had Claude reference prior conversations: when I'm trying to get technical help on thing A, it will ask me whether the conversation is because of thing B that we talked about in the recent past.
All of these providers support this feature. I don't know about ChatGPT, but the rest are opt-in. I imagine with Gemini it'll be default-on soon enough, since it's consumer focused. Claude does constantly nag me to enable it, though.
Had chatgpt reference 3 prior chats a few days ago. So if you are looking for a total reset of context you probably would need to do a small bit of work.
Claude told me it can be disabled by putting instructions in the MEMORY.md file not to use it. So only a soft disable, AFAIK, and you'd need to do it on each machine.
I ran into this yesterday and disabled it by changing permissions on the project's memory directory; Claude was unable to advise me on how to disable it any other way. You could probably write a global hook for this. Gross, though.
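For what it's worth, the permissions approach can be sketched like this. The project slug under `~/.claude/projects` varies by machine, so the example path in the comment at the bottom is an assumption; look up your own first.

```python
# Sketch of the permissions trick: strip write bits from the memory directory
# so writes to it fail. The example path below is an assumption -- the
# project slug under ~/.claude/projects is machine-specific.
import os
import stat

def set_read_only(path: str) -> None:
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def disable_memory(mem_dir: str) -> None:
    # Walk bottom-up so files are locked before their parent directory is.
    for root, dirs, files in os.walk(mem_dir, topdown=False):
        for name in files:
            set_read_only(os.path.join(root, name))
        set_read_only(root)

# e.g. disable_memory(os.path.expanduser(
#          "~/.claude/projects/-home-me-myrepo/memory"))  # hypothetical slug
```

Undoing it is the same walk with the write bit added back.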
I understand everyone's trying to solve this problem but I'm envisioning 1 year down the line when your memory is full of stuff that shouldn't be in there.
I looked into it a bit. It stores memories near where it stores the JSONL session history; it's per-project (and specific to the machine). Claude pretty aggressively and frequently writes stuff in there. It uses MEMORY.md as a sort of index, and will write out other files for other topics (linking to them from the main MEMORY.md file).
It gives you a convenient way to say "remember this bug for me, we should fix it tomorrow". I'll be playing around with it more for sure.
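For a concrete picture, the per-project layout looks roughly like this (the session filename and the topic file are made up for illustration):

```
~/.claude/projects/-home-me-myrepo/
├── 1a2b3c.jsonl       # JSONL session history
└── memory/
    ├── MEMORY.md      # the index; injected into the system prompt
    └── flaky-tests.md # topic file, linked from MEMORY.md
```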
I asked Claude to give me a TLDR (condensed from its system prompt):
----
Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations
MEMORY.md is always injected into the system prompt; truncated after 200 lines, so keep it concise
Separate topic files for detailed notes, linked from MEMORY.md
What to record: problem constraints, strategies that worked/failed, lessons learned
Proactive: when I hit a common mistake, check memory first - if nothing there, write it down
Maintenance: update or remove memories that are wrong or outdated
Organization: by topic, not chronologically
Tools: use Write/Edit to update (so you always see the tool calls)
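Given the 200-line truncation, a quick check during the maintenance step might look like this. The 20-line headroom is an arbitrary choice on my part, not a documented threshold.

```python
# Sketch: report whether a MEMORY.md is nearing the 200-line truncation point
# mentioned above. The 20-line headroom is arbitrary, not documented.
def memory_status(text: str, limit: int = 200, headroom: int = 20) -> str:
    lines = text.count("\n") + (1 if text and not text.endswith("\n") else 0)
    if lines >= limit:
        return f"TRUNCATED: {lines} lines, only the first {limit} are injected"
    if lines >= limit - headroom:
        return f"WARN: {lines} lines, approaching the {limit}-line cutoff"
    return f"OK: {lines} lines"
```

You could run this over `~/.claude/projects/<project>/memory/MEMORY.md` and prune or move detail into topic files when it warns.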
> Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations
I create a git worktree, start Claude Code in that tree, and delete it afterward. I notice each worktree gets a memory directory in this location. So is memory fragmented and not combined with the "main" repo's?
Yes, I noticed the same thing, and Claude told me that it's going to be deleted.
I'll have it improve the skill that's part of our worktree cleanup process to consolidate that memory into the main memory, if there's anything useful.
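The consolidation step could be sketched like this. The paths and the per-project layout are assumptions drawn from this thread, not a documented API; in practice you'd point it at the worktree's and main repo's memory directories under `~/.claude/projects`.

```python
# Sketch: fold a worktree's memory into the main repo's memory before the
# worktree is deleted. Layout assumptions: MEMORY.md is the index, sibling
# .md files are topic files.
import datetime
import os
import shutil

def merge_memory(worktree_mem: str, main_mem: str) -> None:
    os.makedirs(main_mem, exist_ok=True)
    src_index = os.path.join(worktree_mem, "MEMORY.md")
    if os.path.exists(src_index):
        stamp = datetime.date.today().isoformat()
        with open(src_index) as src, \
             open(os.path.join(main_mem, "MEMORY.md"), "a") as dst:
            dst.write(f"\n## Imported from worktree ({stamp})\n")
            dst.write(src.read())
    # Copy topic files, but never clobber an existing one in the main memory.
    for name in os.listdir(worktree_mem):
        if name == "MEMORY.md" or not name.endswith(".md"):
            continue
        target = os.path.join(main_mem, name)
        if not os.path.exists(target):
            shutil.copy2(os.path.join(worktree_mem, name), target)
```

Appending under a dated heading keeps the import visible so a later maintenance pass can prune anything that wasn't worth keeping.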
It's odd, because as far as I can tell, the only reason one would need a Mac Mini would be for iMessage. Other than that, a Raspberry Pi should work perfectly fine and cost an order of magnitude less.
I'm reminded of an obscure GameCube game called Odama (https://en.wikipedia.org/wiki/Odama), which was a bizarre blend of pinball and RTS where you commanded feudal Japanese troops using the GameCube microphone. Of course, this was 2006, so it only accepted a short list of vocal commands like "Company halt!" and "Charge!"
Indeed. Extra-observant people will notice that the "ubuntu" username was used only twice, though, compared to "root", which was used 3700+ times. And observant people who've dealt with infrastructure before might recognize that username as the default for interactive EC2 instances :)
It's crazy that it also bans new models from Europe's Wingtra, Quantum Systems, and AgEagle, which are basically the only consumer fixed-wing drones available. Heck, those companies were even previously approved for the DOD's "Blue UAS" list: https://bluelist.appsplatformportals.us/Cleared-List/
If I understand correctly, this doesn't ban the import/sale of drone models which the FCC previously approved. That said, in October 2025 the FCC granted itself the authority to retroactively revoke previously-approved models, so this is something they could still potentially do.
Your originally quoted text explicitly disagrees with you: "This update to the Covered List does not prohibit the import, sale, or use of any existing device models the FCC previously authorized."
* https://github.com/anthropics/claude-plugins-official/tree/m...
* https://github.com/anthropics/claude-plugins-official/tree/m...