Hacker News

Well, that's the argument most people here are making: that current LLMs are not good enough to be fully autonomous precisely because a human operator has to "put the right thing into them to get the right thing out."

If I'm spending effort specifying a problem N times in very specific LLM-instruction language to get the correct output for some code, I'd rather just write the code myself. After all, that's what code is for: English is lossy, code isn't. I can see codegen getting even better in larger organizations if context windows grow large enough to hold a significant portion of the codebase.

There are areas where this is already an improvement, though (customer feedback, subjective advice, small sections of sandboxed/basic code, etc.): basically, areas where the effects of information compression/decompression can be tolerated or passed on to the user to verify.

I can see all of these getting better over the next couple of months to a few years.



