I work with individuals who attempt to use LLMs to write tests. More than once, it's added nonsensical, useless test cases. Admittedly, humans do this, too, to a lesser extent.
Additionally, if their code has broken existing tests, it "fixes" them by not fixing the code under test, but changing the tests... (assert status == 200 becomes 500 and deleting code.)
Tests "pass." PR is opened. Reviewers wade through slop...
The most annoying thing is that even after cleaning up all the nonsense, the tests still contain all sort of fanfare and it’s essentially impossible to get the submitter to trim them because it’s death by a thousand cuts (and you better not say "do it as if you didn’t use AI" in the current climate..)
Yep. We've had to throw PRs away and ask them to start over with a smaller set of changes since it became impossible to manage. Reviews went on for weeks. The individual couldn't justify why things were done (and apparently their AI couldn't, either!)
Additionally, if their code has broken existing tests, it "fixes" them by not fixing the code under test, but changing the tests... (assert status == 200 becomes 500 and deleting code.)
Tests "pass." PR is opened. Reviewers wade through slop...