What was it specifically about the style that stood out as incongruous, or that hindered comprehension? What made you stumble and start paying close attention to the style rather than to the message? I am looking at the two examples, and I can't see anything wrong with them, especially in the context of the article. They both employ the same rhetorical technique of antithesis, a juxtaposition of contrasting ideas. Surely people wrote like this before? Surely no one complained?
The problem is less with the style itself and more that it's strongly associated with low-effort content that wastes the reader's time. It would be nice to be able to give everything the benefit of the doubt, but humans have finite time and LLMs have infinite capacity for producing trite or inaccurate drivel, so readers end up reflexively using LLM tells as a litmus test for (lack of) quality in order to cut through the noise.
You might say, well, it's on the Cloudflare blog, so it must have some merit, but after the Matrix incident...
I find it more amusing that the benchmarks claim 530 GB/s throughput on an M1 Pro, which has 200 GB/s of memory bandwidth. The 275 GB/s figure for chained transforms has the same problem.
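To make the impossibility concrete, here's a minimal sanity check. The numbers are the ones quoted above (530 GB/s claimed, 200 GB/s M1 Pro memory bandwidth per Apple's spec); the 2x traffic factor is an assumption that a single-pass stream transform must at least read each input byte and write each output byte from/to main memory.

```python
claimed_throughput_gbs = 530   # reported stream throughput
m1_pro_bandwidth_gbs = 200     # Apple's published M1 Pro memory bandwidth

# A single-pass transform moves at least 2 bytes of memory traffic per
# byte of payload (one read + one write), so the implied DRAM traffic is:
implied_traffic_gbs = claimed_throughput_gbs * 2  # 1060 GB/s

# Even with zero overhead, that exceeds the hardware's memory bandwidth
# by more than 5x -- the benchmark can only hit such numbers if the
# working set stays resident in cache, not when streaming from DRAM.
print(implied_traffic_gbs / m1_pro_bandwidth_gbs)  # 5.3
```

The usual culprit is a benchmark buffer small enough to live in the CPU caches, which measures cache bandwidth rather than the streaming throughput the headline implies.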
I suspect the benchmarks, if not most of this project, were completely vibecoded. There are a number of code smells, including links to deleted files, such as https://github.com/jasnell/new-streams/blob/ddc8f8d8dda31b4b... and a nonexistent REFACTOR-TODO.md
These AI signals will die out soon. The models overuse actual human writing patterns, the humans notice and change how they write, the models get updated, new patterns emerge, and so on. The best signal for the quality of writing will always be the source, even if they are "just" prompting the model. I think we can let one incident slide, but they are on notice.