I spent the entirety of yesterday, from around 8:30 until almost exactly 5pm, on a relatively straightforward refactor: changing the type of every identifier in our system, from the protobuf to the database, from a generic UUID type to a distinct typesafe wrapper around UUID for each one. The goal is to make passing IDs to functions expecting identifiers of one particular type vs another less error prone.
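For anyone unfamiliar, this is the newtype pattern. A minimal sketch of the idea, with hypothetical `UserId`/`OrderId` types and a `u128` standing in for a UUID so it needs no external crate:

```rust
// Distinct wrapper types around the same underlying ID representation.
// Mixing them up becomes a compile error instead of a runtime bug.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct UserId(u128);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct OrderId(u128);

// This function can only ever receive an OrderId.
fn cancel_order(id: OrderId) -> String {
    format!("cancelled order {:x}", id.0)
}

fn main() {
    let user = UserId(0xdead_beef);
    let order = OrderId(0x1234);
    // cancel_order(user); // compile error: expected OrderId, found UserId
    let _ = user;
    println!("{}", cancel_order(order));
}
```

In a real codebase you would wrap the `uuid` crate's `Uuid` type instead of `u128`; the mechanics are the same.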
It was a nonstop game of my IDE’s refactoring features, a bunch of `xargs perl -pi -e 's/foo/bar/;'`, and repeatedly running `cargo check` and `cargo clippy --fix` until it all compiled. It was a 4000+ line change in the end (net 700 lines removed), and it took me all of those 8.5 hours to finish.
Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.
So yeah, a typical day is basically 70% coding, 20% meetings, and 10% slack communication. I use AI only to bounce ideas off of, as it seems to do a pisspoor job of maintenance work on a codebase. (I rarely get to write the sort of greenfield code that AI is normally better at.)
>Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.
I've found the same thing, but I have also found that gen AI is pretty good at writing a script to do this. Generally, for very deterministic, repeated changes, having the LLM write code is way better than having it make the changes itself. As it has to read more files, the context fills up and it starts to get goofy.
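The kind of script meant here is a tiny deterministic rewriter, applied mechanically across the tree, rather than free-form model edits. A trivial sketch (the `Uuid` → `UserId` rule is just an illustration):

```rust
// One mechanical rewrite rule; apply it to each file's contents and
// let `cargo check` catch anything the blunt replacement breaks.
fn rewrite(src: &str) -> String {
    src.replace("Uuid", "UserId")
}

fn main() {
    let before = "fn get(id: Uuid) -> Uuid { id }";
    println!("{}", rewrite(before));
}
```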