I know at least one of the companies behind a coding agent we've all heard of has called in human experts to clean up the vibe-coded IaC mess it created over the last year.
TIL serializing a protobuf is only 5 times slower than copying memory, which is way faster than I thought it’d be. Impressive given all the other nice things protobuf offers to development teams.
I guess you can make that number look as good or as bad as you want with the right nesting.
Protobuf is likely really close to optimally fast for what it is designed to be, and the flaws and performance losses that remain are most likely all in the design space, which is why alternatives are a dime a dozen.
That's 30x faster just by switching to a zero-copy data format that's suitable for both in-memory use and the network. JSON services spend 20-90% of their compute on serde. A zero-copy data format would essentially eliminate that cost.
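A toy illustration of the difference, in Python (not any real wire format, just fixed offsets into a byte buffer; the field names and layout are made up):

    import json, struct

    # JSON path: every request pays to tokenize the text and build Python objects.
    doc = b'{"user_id": 42, "score": 7}'
    user_id = json.loads(doc)["user_id"]

    # Zero-copy-style path: the field sits at a known offset in the buffer,
    # so "deserialization" is just reading 8 bytes in place.
    buf = struct.pack("<qq", 42, 7)                 # two little-endian int64 fields
    user_id = struct.unpack_from("<q", buf, 0)[0]

Real formats like Cap'n Proto or FlatBuffers add schemas, offsets, and alignment on top, but the access pattern is the same: no parse step before you can touch a field.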
I wouldn't hold onto that number as any kind of fixed usable constant since the reality will depend entirely on things like cache locality and concurrency, and the memory bandwidth of the machine you're running on.
Going around doing this kind of pointless thing because "it's only 5x slower" is a bad habit to get into.
Serializing a protobuf can be significantly faster than memcpy, depending on the data. If you have a giant vector of small numbers stored in wide types (4-8 bytes in memory), then writing them out as variable-length symbols can cost less than copying them at full width.
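A rough sketch of why, using protobuf's varint idea in Python (hand-rolled here for illustration, not the real protobuf library):

    def encode_varint(n: int) -> bytes:
        # Protobuf-style varint: 7 bits per byte, high bit set on every byte
        # except the last. Small values shrink to one or two bytes.
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)

    for v in (3, 150, 70_000):
        fixed = v.to_bytes(8, "little")   # what a memcpy of an int64 field moves
        print(v, "->", len(fixed), "bytes fixed vs", len(encode_varint(v)), "bytes varint")

For a vector full of small values, the serialized form can be a fraction of the in-memory size, so there are simply fewer bytes to move.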
5x is pretty slow, honestly. Imagine anything taking 5x as long as you'd expect it to. I mean, for a recent project I also had to switch to inline Rust structs rather than parse JSON for specific fields, and that definitely sped it up.
Well, I spent a good part of my career reverse engineering network protocols for the purpose of developing exploits against closed source software, so I'm pretty sure I could do this quickly. Not that it matters unless you're going to pay me.
What are you even trying to say? I suppose I'll clarify for you: Yes, I'm confident I could have identified the cause of the mysterious packets quickly. No, I'm not going to go through the motions, because I have no particular inclination toward the work outside of banter on the internet. And what's more, it would be contrived since the answer has already been shared.
I think the point they're making is that "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance" is neither surprising nor interesting.
That'd be like saying "I, an emergency room doctor, do not need AI assistance to interpret an EKG"
Sure, but that is aside from my original point. If somebody:
a) Has the knowledge to run tcpdump or similar from the command line
b) Has the ambition to document and publish their effort on the internet
c) Has the ability to identify and patch the target behaviors in code
I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently, and would have learned more along the way. Forgive me for being so critical, but the LLM use here simply comes off as lazy. And not lazy in a good efficiency amplifying way, but lazy in a sloppy way. Ultimately this person achieved their goal, but this is a pattern I am seeing on a daily basis at this point, and I worry that heavy LLM users will see their skill sets stagnate and likely atrophy.
>I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently
Hard disagree. Asking an LLM is 1000% more efficient than reading docs, lots of which are poorly written and thus dense and time-consuming to wade through.
The problem is hallucinations. It's incredibly frustrating to have an LLM describe an API or piece of functionality that fulfills all requirements perfectly, only to find it was a hallucination. They are impressive sometimes though. Recently I had an issue with a regression in some of our test capabilities after a pivot to Microsoft Orleans. After trying everything I could think of, I asked Sonnet 4.5, and it came up with a solution to a problem I could not even find described on the internet, let alone solved. That was quite impressive, but I almost gave up on it because it hallucinated wildly before and after the workable solution.
The same stuff happens when summarizing documentation. In that regard, I would say that, at best, modern LLMs are only good for finding an entrypoint into the docs.
While my reply was snarky, I am prepared to take a reasonable bet with a reasonable test case. And pay out.
Why I think I'd win the bet: I'm proficient with tcpdump and Wireshark, and I'm reasonably confident that running to a frontier model and dealing with any hallucinations is still more efficient and faster than recalling the incantations and parsing the output myself.
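For context, the kind of incantation I mean, roughly (the interface, host, and port here are placeholders):

    # capture everything, no name resolution, hand the pcap to Wireshark later
    sudo tcpdump -i any -n -w mystery.pcap
    # or narrow it down once you have a suspect
    sudo tcpdump -i any -n 'host 203.0.113.42 and port 443' -c 200

Nothing exotic, but you do have to remember the filter syntax and then actually read the output.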
Oh come on, the fact that the author was able to pull this off is surely indicative of some expertise. If the story had started off with, "I asked the LLM how to capture network traffic," then yeah, what I said would not be applicable. But that's not how this was presented. tcpdump was used, profiling tools were mentioned, etc. It is not a stretch to expect somebody who develops networked applications to know a thing or two about protocol analysis.
The specific point I was trying to make was along the lines of, "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance. And yet, I’d probably lose a bet on a race against someone like me using an LLM."
I'm still waiting for a systems engineering tool that can log every layer and handle SSL across the whole pipe.
I'm talking about covering everything from strace and ltrace on the machine to file reads, IO profiling, and bandwidth profiling. Like, the whole thing, from beginning to end.
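Today that means duct tape along these lines (a rough sketch; the PID and interface are placeholders, and TLS still needs per-app handling, e.g. SSLKEYLOGFILE where the app supports it):

    # syscall- and library-level activity for one process
    sudo strace -f -tt -e trace=network,desc -p 1234 -o strace.log
    sudo ltrace -f -p 1234 -o ltrace.log
    # disk and network at the machine level
    iostat -x 1 > io.log &
    sudo tcpdump -i eth0 -n -w wire.pcap

Four tools, four log formats, and nothing correlates them for you.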
Real talk though, how much would such a tool be worth to you? Would you pay, say, $3,000/license/year for it? Or, after someone puts in the work to develop it, would you wait for someone else to duct-tape together something approximately similar using regexps that's open source but 10% as good, and then not pay for the good proprietary tool because we're all a bunch of cheap bastards?
We have only ourselves to blame that there aren't better tools (publicly) available. If I hypothetically (really!) had such a tool, it would give me an advantage over every other SRE out there who could use it. Trying to sell it directly comes with more headaches than money, selling it to corporations has different headaches, and open-sourcing it doesn't pay the bills, never mind the burnout (people don't donate for shit). So the way to do it is make a pitch deck, get VC funding so you're able to pay rent until it gets acquired by Oracle/RedHat/IBM (aka the greatest hits of Linux tool acquisitions), or try to charge money for it when you run out of VC funding, leading to accusations of a "rug pull" and the development of alternatives (see also: Docker) just to spite you.
In the best case you sell like Hashimoto and your bank account has two (three!) commas, but in the worst case you don't make rent and go homeless, when you could've gone to a FAANG and made $250k/yr instead of getting paid $50k/yr as the founder, burning VC cash, and eating ramen you have to make yourself.
I agree, that would be an awesome tool! Best case scenario: a company pays for that tool to be developed internally, the company goes under, the tool gets sold off as an asset, whoever buys it forms a company and tries to sell it directly, then that company goes under too, and the buyer finally open-sources it because they don't want it to slip into obscurity. But it falls into obscurity anyway, because it only works on Linux 5.x kernels and can't easily be ported to the 6.x series we're on now.
Been a paying Evernote customer since its launch. I unsubscribed at the beginning of 2025 after 7-8 years of shitty releases, old bugs never getting fixed, and useless new features.
I don't use Evernote very often, but I have a bunch of stuff stored in there and use it basically in read-only mode. For a long time I was able to get the $36/year plan, which I felt pretty good about. It was a great app and service that I didn't use very much, so that felt like a fair price, and I felt good about supporting them at that level. Basically every time I opened Evernote, I was paying $2.
But then the price tripled and for me, it's too much. I'll pay $2 per session, but not $5.
I remember their CEO (Phil Libin I think) on their podcast explaining how they were building a 100 year company. I really wanted to believe that.
I use Obsidian now and like it, but it feels like they are going down the same path. They keep adding features that don't really fit the original editor-for-a-folder-of-markdown-files idea. I wish they would stop.
It's a bummer but the feature treadmill seems inescapable. Bending Spoons will probably be able to buy Obsidian for a very nice price in a few years and the Obsidian founders will do very well.
If everyone gets salaries and the equity gets paid for, then everyone's done great. And then we can build another one, or an open-source equivalent once all the money's been spent researching useful features, and then we're done.
It's worse. When a company like this is "mature", they don't try to appeal to new users. They instead squeeze what they can out of the existing user base, because that user base is probably already dying off. This isn't about attaining a steady-state business, it's about seeing how much of the toothpaste you can still squeeze out of the bottle before it crusts up.
This practice is derogatorily called "vulture capitalism" for a reason. I hope the remaining engineers are either lining up for retirement or networking around for their next gig.
Yeah, but OpenAI is adding ads this year to the free tier, which I'm guessing covers most of their users. They are probably betting on taking a big slice of Google's advertising monopoly pie (which is why Google is also now all-in on forcing opt-out Gemini into every product they own; they can see the writing on the wall).
Google, Amazon, and Microsoft do a lot of things that aren't profitable in themselves. There is no reason to believe a company will kill a product line just because it makes a loss. There are plenty of other reasons to keep it running.
Do you think it's odd that you only listed companies with already-existing revenue streams, and not companies that started with, and only have, generative algos as their product?
Two very useful use cases for rebase: 1) rewrite history to organize the change list clearly. 2) stacked pull requests of arbitrary depth.
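A quick sketch of both, with hypothetical branch names (--update-refs needs Git 2.38+):

    # 1) clean up the commits on this branch before opening the PR
    git rebase -i origin/main              # reorder, squash, and reword as needed

    # 2) rebase a stack (feature-a -> feature-b -> feature-c) in one pass
    git checkout feature-c
    git rebase --update-refs origin/main   # moves the intermediate branch refs too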
You’ve never run a bisect to identify which commit introduced a specific behavior?
This is when I’ve found it most useful. Having commits merged instead of squashed narrows down and highlights the root problem.
It's a rare enough situation that I don't push for merge commits over squash-and-rebase, because it's not worth it, but when I have had to bisect, having the commits merged instead of squashed is very, very useful.
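For anyone who hasn't used it, the flow is roughly this (the tag and test script are placeholders):

    git bisect start
    git bisect bad                  # current commit shows the regression
    git bisect good v1.4.0          # last release known to be fine
    git bisect run ./test.sh        # git binary-searches the history for you
    git bisect reset

With the real commits preserved, the bisect lands you on the small change that introduced the behavior rather than one giant squashed blob.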
Those commit authors are the people I've noted as clear thinkers and tracked over my career, to great benefit.
Edit: Genuinely curious about the downvotes here. The concept directly maps to all the reasons the article author cited.