> I can spot AI writing very quickly now, after just a few sentences or paragraphs.
Not denying this is true — but like a lot of what we've seen with AI, let's see how you feel in two years' time when the models have improved as much.
I think it was actually Brian Eno who said it (essentially): whatever you laugh about with regard to LLMs today, watch out, because next year that funny thing they did will no longer be present.
I don't think the AI companies are systematically working to make their models sound more human. They're working to make them better at specific tasks, but the writing styles are, if anything, even more strange as they advance.
In comparisons of base and instruction-tuned models, the base models are vaguely human in style, while instruction-tuned models systematically prefer certain grammar and style features. (For example, GPT-4o loves participial clauses and nominalizations.) https://arxiv.org/abs/2410.16107
When I've looked at more recent models like o3, there are other style shifts. The newer OpenAI models increasingly use bold, bulleted lists, and headings -- much more than, say, GPT-3.5 did.
So you get what you optimize for. OpenAI wants short, punchy, bulleted answers that sound authoritative, and that's what they get. But that's not how humans write, and so it'll remain easy to spot AI writing.
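As a toy illustration of the kind of surface markers being described (bold, bullets, headings), here's a crude density count over markdown-ish text. The function name, markers, and scoring are my own illustrative assumptions, not a validated AI-text detector:

```python
import re

# Toy heuristic: count markdown formatting markers (bullets, headings, bold)
# per non-empty line. The choice of markers and the scoring are illustrative
# assumptions only -- this is not a validated AI-writing detector.
def formatting_density(text: str) -> float:
    lines = [l for l in text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    bullets = sum(1 for l in lines if re.match(r"\s*[-*]\s+", l))
    headings = sum(1 for l in lines if re.match(r"\s*#{1,6}\s+", l))
    bold = len(re.findall(r"\*\*[^*]+\*\*", text))
    return (bullets + headings + bold) / len(lines)

# A heavily formatted answer scores much higher than plain prose.
sample = "## Summary\n- **Fast**: very quick\n- **Simple**: easy to use\n"
print(formatting_density(sample))
print(formatting_density("Plain prose with no markers at all."))
```

Of course, a human can write bulleted markdown too — this only quantifies the stylistic tendency, it doesn't prove authorship.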
That's interesting. I had not heard that. I wonder, though, whether making them sound more human and making them better at specific tasks are mutually exclusive. (Or if perhaps making them sound more human is in fact also a valid task.)
I like that Brian Eno quote. If I recall correctly, he was also referring to nostalgia. Like, once the technology improves, you begin to miss the old rough edges. I know that I love seeing old images of Google DeepDream, for example.* It's the same reason why young people miss PlayStation 2's blocky graphics, or why photographers sometimes edit their images for unreal Kodachrome color. The things that annoy us today are the very things we'll miss the most.
I have been testing them out in my day-to-day workflow every so often, and I remain just as unimpressed as I was when I first tried Copilot a year and a half or so ago.
I don't think they are any better at coding than they were. I think people are lowering their standards to what LLMs can accomplish, not that LLMs have risen to meet our standards.