It's techno-freemasonry: one must break through the symbolism. The author wielding and transmitting it cannot just state the knowledge plainly. We don't yet have the vocabulary or grammar for these new things, so storytelling and story universes carry them. The zoomorphism and cinematic references ground us in what all these bots are doing, mimetically.
I'm excited the author shared, and so exuberantly; that said, I did quick-scroll a bunch of it. It is its own kind of mind-altering substance, but then, we have access to mind-bending things.
If you look at my AgentDank repo [1], you could see a tool for finding weed, or you could see world knowledge and SQL fluency paired with curated structured data, merging the probabilistic and deterministic forms of computing. I quickly applied the same approach to the macOS Screen Time database [2].
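To make that pairing concrete, here's a minimal sketch of the deterministic half, in Swift. It is illustrative only: the real repos expose this as MCP tools, and runQuery and its return shape are my own stand-ins, not dank-mcp's API. The idea is simply that the model proposes SQL (the probabilistic part) and something boring and deterministic executes it read-only against a curated SQLite file:

    import Foundation
    import SQLite3  // available on Apple platforms

    // Sketch only: run model-proposed SQL read-only against a curated SQLite
    // database and return the rows as plain text. Not the dank-mcp implementation.
    func runQuery(_ sql: String, dbPath: String) -> [String] {
        var db: OpaquePointer?
        guard sqlite3_open_v2(dbPath, &db, SQLITE_OPEN_READONLY, nil) == SQLITE_OK else {
            return ["error: cannot open \(dbPath)"]
        }
        defer { sqlite3_close(db) }

        var stmt: OpaquePointer?
        guard sqlite3_prepare_v2(db, sql, -1, &stmt, nil) == SQLITE_OK else {
            return ["error: \(String(cString: sqlite3_errmsg(db)))"]
        }
        defer { sqlite3_finalize(stmt) }

        var rows: [String] = []
        while sqlite3_step(stmt) == SQLITE_ROW {
            var fields: [String] = []
            for i in 0..<sqlite3_column_count(stmt) {
                if let text = sqlite3_column_text(stmt, i) {
                    fields.append(String(cString: text))
                } else {
                    fields.append("NULL")
                }
            }
            rows.append(fields.joined(separator: " | "))
        }
        return rows
    }

The nice property is that the model can only propose queries; the rows that come back are whatever the curated database actually contains, so the deterministic side keeps the probabilistic side honest.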
Vibe coding turned a corner in November and I'm creating software in ways I would have never imagined. Along with the multimodal capabilities, things are getting weirder than ever.
Mr. Yegge now needs to add a whole slew of characters to Gas Town to maintain multimodal inputs, outputs, and artifacts.
Just two days ago, I had LLMs look at a picture of what to make, write Swift code to build the 3D model, position virtual cameras to render it, and then "look" at the results to decide the next code changes. Crazy. [3]
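The loop looked roughly like the sketch below. Everything in it is a placeholder of my own (askLLM, the RenderScene command, the fixed five iterations), not the actual code in [3]; it's just the shape: the model writes scene code, something deterministic renders a frame from the chosen camera, and that frame goes back into the model's context for the next edit.

    import Foundation

    // Placeholder for a multimodal chat call: send a prompt plus an optional
    // image (the target picture or the latest render) and get code back.
    func askLLM(prompt: String, imagePath: String?) -> String {
        // Wire up your model API of choice here; this is just a stub.
        return "// updated Swift scene code would come back here"
    }

    // Placeholder render step: write out the generated Swift, then invoke
    // whatever builds the scene and captures a frame from the virtual camera.
    func render(swiftSource: String, to pngPath: String) throws {
        try swiftSource.write(toFile: "Scene.swift", atomically: true, encoding: .utf8)
        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: "/usr/bin/env")
        proc.arguments = ["swift", "run", "RenderScene", "--out", pngPath]  // hypothetical build target
        try proc.run()
        proc.waitUntilExit()
    }

    // The loop: look at the goal, write code, render, "look" at the result, edit.
    var source = askLLM(prompt: "Write Swift code to model this object.", imagePath: "target.png")
    for step in 1...5 {
        try? render(swiftSource: source, to: "frame-\(step).png")
        source = askLLM(prompt: "Here is the current render from the virtual camera; what should change next?",
                        imagePath: "frame-\(step).png")
    }

The interesting part isn't any single call; it's that the render output re-enters the prompt, so the model iterates against what it can actually "see" rather than against its own description of the scene.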
ETA: It was only 14 months ago that I was amazed a multimodal model could identify a trend in a chart [4].
[1] https://github.com/AgentDank/dank-mcp
[2] https://github.com/AgentDank/screentime-mcp
[3] https://github.com/ConAcademy/WeaselToonCadova/
[4] https://github.com/NimbleMarkets/ollamatea/blob/main/cmd/ot-...