Hello :). This OSS product is for you (or future-you) who has reached the point of wanting to tap into the wealth of knowledge sitting in your AI chat histories. "Hey, Agent, we have a problem with SomeClass.function, remind me what we changed in the past few months."
Product's tl;dr:
ContextCore is a local-first memory layer that ingests AI coding chats across multiple IDE assistants and machines, makes them searchable (keyword + optional semantic), and exposes them to assistants over MCP so future sessions don’t start from zero.
IMPORTANT: I emphasize local-first: nothing is sent to any LLM unless you explicitly use the MCP server in an LLM session. However, if you enable semantic vector search OR chat content summarization, those features DO call an LLM (although you can use local ones).
ContextCore is not just “chat history storage.” It is *a developer-grade memory layer* that turns AI-assisted development from ephemeral to iterative—where prior debugging sessions, architectural decisions, refactors, and tool-call outcomes become reusable context rather than lost effort.
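To make the local-first idea concrete, here is a minimal, hypothetical sketch of the core loop: ingesting chat messages into a local SQLite FTS5 index and running a keyword search, with no LLM involved at any point. The schema and function names are illustrative only, not ContextCore's actual API.

```python
import sqlite3

# Toy local-first chat index: everything stays in a local SQLite database
# (in-memory here for the example). No LLM is involved in keyword search.
def build_index(messages):
    db = sqlite3.connect(":memory:")
    # An FTS5 virtual table gives keyword search with ranking out of the box.
    db.execute("CREATE VIRTUAL TABLE chats USING fts5(assistant, session, content)")
    db.executemany("INSERT INTO chats VALUES (?, ?, ?)", messages)
    return db

def search(db, query, limit=5):
    # bm25() ranks best matches first (lower score = better match in FTS5).
    rows = db.execute(
        "SELECT assistant, session, content FROM chats "
        "WHERE chats MATCH ? ORDER BY bm25(chats) LIMIT ?",
        (query, limit),
    )
    return rows.fetchall()

db = build_index([
    ("copilot", "2024-05-02", "Refactored SomeClass.function to cache results"),
    ("cursor",  "2024-06-11", "Fixed race condition in SomeClass.function"),
    ("copilot", "2024-06-20", "Updated README with install steps"),
])
# FTS5 phrase syntax: quoting keeps SomeClass.function together as a phrase.
hits = search(db, '"SomeClass.function"')
for assistant, session, content in hits:
    print(assistant, session, content)
```

An MCP tool would simply wrap `search` and return the hits as context, so the only point where an LLM ever sees the data is when the assistant explicitly asks for it.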
More in the README.md in the repo.
This is the first time I'm showing this in a public forum :). My hope is to get a little bit of traction, so that I can get some help expanding ContextCore's compatibility (adding parsers for IntelliJ or other IDEs, for example, which is quite easy now that the project has solid architecture docs and templates). The project has a roadmap in the README.
The endgame for ContextCore is to become an engineer's reliable sidekick when it comes to digging into chat history and turning it into pure context gold at the MINIMUM number of tokens spent. The current search system is decent, but much more can be done.
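On the search side, one direction for improvement is hybrid ranking: blending exact keyword hits with semantic similarity from (optionally local) embeddings. A toy sketch, using hand-rolled bag-of-words vectors as a stand-in for a real embedding model; all names and the scoring formula are hypothetical:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real (possibly local) embedding model:
    # a bag-of-words vector keyed by lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    # Blend keyword overlap with embedding similarity;
    # alpha weights the keyword side of the score.
    q_vec = embed(query)
    q_terms = set(query.lower().split())
    scored = []
    for doc in docs:
        terms = set(doc.lower().split())
        keyword = len(q_terms & terms) / len(q_terms)  # fraction of query terms present
        semantic = cosine(q_vec, embed(doc))
        scored.append((alpha * keyword + (1 - alpha) * semantic, doc))
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "fixed race condition in the session cache",
    "added retry logic for flaky network calls",
    "race in cache invalidation traced to session teardown",
]
ranked = hybrid_rank("race condition in cache", docs)
print(ranked[0])
```

In a real system the bag-of-words `embed` would be replaced by a local embedding model, and the keyword score by the FTS ranker, but the blending idea is the same: exact matches stay cheap and deterministic, while the semantic score catches paraphrased history.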
And my endgame is twofold: 1) give something back after being a lurker for years and 2) get some help to polish the search system and other areas of the product, so that we create an awesome, vendor-independent, cross-agent memory layer.
This couldn't have come sooner, for two reasons.
1) The world has become a bit too focused on LLMs (although I agree the benefits and new horizons that LLMs bring are real). Research on other types of models needs to continue.
2) I almost wrote "Europe needs some aces". Although I'm European, my attitude is not at all one of competition. This is not a card game. What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!