Hacker News | jrswab's comments

Yes, I've used it with an OpenAI-compatible API from an internal LLM at my job.


Thanks!


Sorry, I need to update that. I just added MCP support a day or so ago.


You don't have all the Claude Code overhead. It only gets what you give it.


What do you mean by that? I'm not sure I understand.


I'm excited to see how this plays out. Keep me updated on X (Twitter).


Love to hear it! Thanks for checking it out and feel free to put up an issue on GitHub if you have any ideas for improvements.


Yes, I think it will be quite trivial to make an output allow-list. That's a great idea!


I've shared a few flows I use a lot right now in some other comments.


That's my dream.


Dream, or _pipe_ dream?


Not yet, but it will be easy to add. If you need it, can you create an issue on GitHub? I should be able to get that in today.


> Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.

Yes! I run a ghost blog (a blog that does not use my name) and have axe produce artifacts. The flow: I send the first agent a text file of my brain dump (normally spoken); it searches my note system for related notes and saves the results to a file, then passes everything to agent 2, which turns that dump into a blog draft and saves it to a file. Agent 3 then takes the draft, cleans it up to match my style, and saves it. From that point I read it, make edits myself, and publish.
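For anyone curious, here's a minimal sketch of that kind of three-stage pipeline with files as the hand-off between agents. The `run_agent` function and file names are hypothetical placeholders; axe's actual agent invocation and note-search step would go where the stub is:

```python
from pathlib import Path


def run_agent(prompt: str, input_text: str) -> str:
    """Placeholder for a call to an LLM agent; swap in a real client."""
    return f"[{prompt}]\n{input_text}"


def pipeline(brain_dump: Path, workdir: Path) -> Path:
    workdir.mkdir(parents=True, exist_ok=True)

    # Agent 1: gather related notes and save them alongside the dump.
    dump = brain_dump.read_text()
    notes = run_agent("Find notes related to this dump", dump)
    (workdir / "01-notes.txt").write_text(dump + "\n\n" + notes)

    # Agent 2: turn dump + notes into a blog draft, saved to its own file.
    draft = run_agent("Write a blog draft from this material",
                      (workdir / "01-notes.txt").read_text())
    (workdir / "02-draft.md").write_text(draft)

    # Agent 3: clean up the draft; final edits and publishing stay manual.
    final = run_agent("Clean up this draft", draft)
    out = workdir / "03-final.md"
    out.write_text(final)
    return out
```

The nice property of the file hand-off is that each stage is inspectable and restartable: if agent 3 produces garbage, you rerun just that step from `02-draft.md`.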


That’s a really nice pipeline. The “save to file between steps” pattern seems to appear very naturally once agents start doing multi-stage work.

One thing I’ve noticed when experimenting with similar workflows is that once artifacts start accumulating (drafts, logs, intermediate reports, etc.), you start running into small infrastructure questions pretty quickly:

– where intermediate artifacts live
– how later agents reference them
– how long they should persist
– whether they're part of the workflow state or just temporary outputs

For small pipelines the filesystem works great, but as the number of steps grows it starts to look more like a little dataflow system than just a sequence of prompts.

Do you usually just keep everything as local files, or have you experimented with something like object storage or a shared artifact layer between agents?


In my prompting framework I have a workflow where the agent scans all the artifacts in my closed/ folder, creates a yyyymmdd-archive artifact recording each artifact's name and summary, and then deletes the originals. Since the framework is deeply integrated with git, any artifact can be dug up from git history via the recorded names.
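A rough sketch of that archive-then-delete step, assuming a flat closed/ folder inside a git repo. The `summarize` stub stands in for asking the agent; folder layout and file names are my assumptions, not the framework's actual conventions:

```python
import datetime
import subprocess
from pathlib import Path


def summarize(text: str) -> str:
    """Placeholder summarizer; the real workflow asks the agent for this."""
    stripped = text.strip()
    return stripped.splitlines()[0][:80] if stripped else "(empty)"


def archive_closed(closed: Path) -> Path:
    """Record each closed artifact's name and summary, then delete the
    originals. Deleted files stay recoverable from git history via the
    names recorded in the archive artifact."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    archive = closed / f"{stamp}-archive.md"
    lines = []
    for artifact in sorted(closed.glob("*")):
        if artifact == archive or artifact.is_dir():
            continue
        lines.append(f"- {artifact.name}: {summarize(artifact.read_text())}")
        artifact.unlink()
    archive.write_text("\n".join(lines) + "\n")
    # Commit so the deletions (and the old contents) land in history.
    subprocess.run(["git", "add", "-A"], cwd=closed,
                   check=False, capture_output=True)
    subprocess.run(["git", "commit", "-m", f"archive {stamp}"], cwd=closed,
                   check=False, capture_output=True)
    return archive
```

Recovery is then just `git log --all --full-history -- closed/<name>` followed by a checkout of the blob at the commit before the archive.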

