Hacker News

It might be just as likely that ChatGPT will cause a mistake like Knight Capital because no one bothered to thoroughly verify the AI's looks-good-but-deeply-flawed answer, and the two aren't mutually exclusive possibilities.


Right. I've had ChatGPT completely fail at something as simple as writing a batch file to find and replace text in a text file.
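For context, the task described above is genuinely simple; a sketch of the same find-and-replace in Python (not the Windows batch script the commenter asked for, and the function name here is just illustrative) is only a few lines:

```python
# Sketch: replace every occurrence of `old` with `new` in a text file,
# rewriting the file in place. Assumes the file fits in memory.
from pathlib import Path

def replace_in_file(path: str, old: str, new: str) -> None:
    p = Path(path)
    p.write_text(p.read_text().replace(old, new))
```

The equivalent in a .bat file is far trickier (quoting, delayed expansion, special characters), which may be part of why a model's confident-looking attempt fails.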


Sure, but humans do that all the time as well.


Humans are a lot better at "I don't know how to do this; hey Alice, can you look this over if you've got a sec and tell me if I'm making a noob mistake"


Perhaps the actual phenomenon is that humans are much better at saying "Alice wrote this code; she's pretty good at scripting, but she might have made a noob mistake, so better check it" (or even "I wrote this code...") than they are at saying "ChatGPT wrote this code, and it isn't guaranteed to have correctly identified my problem; it may have returned something that seems right both to the statistical model and to me but is actually deeply flawed, so better check it".


The Knight meltdown was more a dysfunction of change management and trading system operations than of using a decommissioned feature flag.

Source: worked there after the meltdown.



