This bothered me at first but I think it's about ease of implementation.
If you've built a good harness with access to lots of tools, it's very easy to plug in a request like "if the linked PR is approved, please react to the Slack message with :checkmark:". For a lot of things, I can see how it'd actually be harder to generate a script that uses the APIs correctly than to rely on the LLM to figure it out, and maybe that lets you gauge whether it's worth spending an hour automating properly.
Of course the specific example in the post seems like it could be one-shotted pretty easily, so it's a strange motivating example.
It seems easier, but in my experience building an internal agent, it's not actually easier long term: it's slow and error prone, and you'll find yourself solving prompt and context problems for something that should be both reliable and instantaneous.
These days I do everything I can with straightforward automation, and only get the agent involved when it's impossible to move forward without it.
> We still start all workflows using the LLM, which works for many cases. When we do rewrite, Claude Code can almost always rewrite the prompt into the code workflow in one-shot.
Why always start with an LLM to solve problems? Using an LLM adds a judgment call, and (at least for now) those judgment calls are not reliable. For something like the motivating example in this article, "is this PR approved", it seems straightforward to get the deterministic right answer from the GitHub API without muddying the waters with an LLM.
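For what it's worth, the deterministic check really is only a few lines against the GitHub REST API. A sketch (the owner, repo, PR number, and token are placeholders, and the approval rule here is one reasonable reading of "approved", not GitHub's branch-protection logic):

```python
import json
import urllib.request

def is_approved(reviews):
    """A PR counts as approved if at least one reviewer approved and
    no reviewer's latest APPROVED/CHANGES_REQUESTED review is a block."""
    latest = {}
    for review in reviews:  # the API returns reviews in chronological order
        if review["state"] in ("APPROVED", "CHANGES_REQUESTED"):
            latest[review["user"]["login"]] = review["state"]
    return bool(latest) and all(s == "APPROVED" for s in latest.values())

def fetch_reviews(owner, repo, number, token):
    # GET /repos/{owner}/{repo}/pulls/{number}/reviews
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}/reviews"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. is_approved(fetch_reviews("acme", "widgets", 42, token))
```

No prompt, no context window, same answer every time.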
Likely because it's just easier to see if the LLM solution works. When it doesn't, then it makes more sense to move to deterministic workflows (which aren't all that hard to build, honestly, with Claude Code).
It's the old principle of avoiding premature optimization.
No, they've been doing "managing stacks of dependent pull requests" for a lot longer than AI code review. I've mostly been a happy user: they simplify a lot of the git pain of continual rebasing, and the UI makes stacks much easier to work with than GitHub's own interface.
Bandwidth is the limiting factor in a lot of circumstances, and networks are very challenging to manage. Especially with an increasing number of users on mobile connections, reducing network usage can be the right call.
But performance matters, too, of course. It's tricky to balance them.
On the other hand, TLS/443 is pretty undesirable for media delivery in videoconferencing because a) it's TCP-based, and the required ACKs mean a big reduction in throughput and increase in latency, especially in the presence of packet loss, and b) most video services these days (and open source servers) use WebRTC, which already encrypts the data in transit, so the TLS encryption is a waste of resources.
Though TLS/443 is usually still supported, because it's most often allowed by even restrictive firewalls and networks.
I read the article as saying you can't solve everything yourself. That's different from saying you should ignore problems. Instead, you need to communicate when you see a problem you're not positioned to solve, whether because you don't have the bandwidth or because you're not in a position of authority for that domain.
> you should be working on properly communicating the gap and its risk to the business (and risk to which part of the business) and NOT attempting to solve everything.
Very much this. It takes time to reacquire the microphone, and it's really annoying to lose the first part of what you say every time you unmute. I lead the video team at a videoconferencing app (gather.town), and we keep the microphone active when you mute for this reason.
As seems to be pretty common, for the sake of privacy we do stop sending audio to the media server. That's a tradeoff, since we're still susceptible to losing a little bit while the audio connection resumes.
Edit: as others have mentioned, also useful to keep bluetooth headsets in two-way audio mode rather than reverting to audio output mode, since that's really disruptive.
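To make the tradeoff concrete, here's a toy sketch (not a real audio stack; `MicPipeline` and the frame callback are made up for illustration): capture keeps running while muted, and mute just gates frames before they leave the client, so unmuting is instant.

```python
class MicPipeline:
    """Toy model of the mute strategy described above: the OS capture
    device stays open, and mute only gates frames before upload."""

    def __init__(self, send_to_server):
        self.muted = False
        self.send_to_server = send_to_server  # ships a frame to the media server

    def on_frame(self, frame):
        # Capture keeps delivering frames even while muted, so unmuting
        # means we simply stop dropping them: no device re-acquisition delay.
        if not self.muted:
            self.send_to_server(frame)

sent = []
mic = MicPipeline(sent.append)
mic.on_frame("frame-1")   # unmuted: forwarded
mic.muted = True
mic.on_frame("frame-2")   # muted: captured locally but never sent
mic.muted = False
mic.on_frame("frame-3")   # forwarded immediately, no warm-up
```

The privacy cost is that the device (and its in-use indicator) stays active; the alternative, releasing the device, is what causes the clipped first words.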
Just throwing it out there, but maybe to avoid bandwidth spikes that might lead to latency depending on the setup: could you inject some kind of easily identifiable "is muted" signal along with white noise in place of silence? Or would that sort of pre-mixing be too slow to do in real time on the client side?
As an alternative to a daily chat message, a teammate and I have been meeting for ~15 minutes at the end of the day to talk about what we did and how productive we were. It's been pretty helpful a few different ways. First, it forces me to think about what it is I'm supposed to be doing and figure out the next step. Not knowing how to approach my next task is a huge cause of procrastination for me. Second, it's a chance to notice when I've gone astray, and identify factors that lead to low productivity. (Like that I procrastinate when I haven't broken down my next task into small enough pieces.)
I think for this to work well, it needs to be with someone you don't feel the need to impress. Maybe you have a teammate you trust like that, or maybe you can find a coworker on a different team who doesn't impact your performance assessments. If you have that, this feels different than a standup. Standups easily devolve into signaling to the team that you've done work. Instead, a 1:1 meeting with a coworker who you don't feel the need to impress makes it way easier to be vulnerable and admit when you screwed around on the internet for a lot of the day.