Makes sense. For our use case, having "incorrect" GPT output isn't a viable option, so unfortunately it looks like the tech isn't quite there yet. We would either need to manually verify the output, or we would need access to the model to train it on our own data, though I'm not sure that would improve the results.


Here’s a question for you: for a typical L2 support person, front-line MSP NOC monitor, junior SRE, or similar, what do you think the error rate would be? I wonder if there isn’t a front-end role here that could be subsumed, at least in part, by automatic summarization, for the purpose of streamlining the overall escalation and routing process, even though errors would occur; I imagine they already do. What do you think?


I've only worked at (very) small companies, so I've never had to think about different tiers of support. Automatic summarization could be useful for deciding which specialist to route a request to, and a wrong classification would get picked up by a human and re-routed appropriately anyway. I imagine the error rate is already pretty high, so anything that automates or reduces that could go a long way (a rough sketch of what I mean is below).

I work in education tech, where our product goes directly to students and teachers, so the tolerance for "incorrect" AI-generated answers is much lower, or nonexistent.
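
To make that routing-with-fallback idea concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration, not anything from the thread: the route names, the 0.8 confidence cutoff, and classify_with_llm, which is a keyword-based stand-in for a real model call.

    from dataclasses import dataclass

    # Routing queues and the confidence cutoff are assumptions for
    # illustration only.
    ROUTES = {"billing", "outage", "account", "other"}
    CONFIDENCE_THRESHOLD = 0.8

    @dataclass
    class Triage:
        route: str
        confidence: float
        summary: str

    def classify_with_llm(ticket_text: str) -> Triage:
        """Hypothetical stand-in for a real model call. A production
        version would prompt an LLM to return a suggested route, a
        confidence score, and a one-line summary."""
        text = ticket_text.lower()
        if "invoice" in text or "charge" in text:
            return Triage("billing", 0.9, "Billing question")
        if "down" in text or "unreachable" in text:
            return Triage("outage", 0.85, "Possible outage report")
        return Triage("other", 0.4, "Unclassified request")

    def route_ticket(ticket_text: str) -> str:
        result = classify_with_llm(ticket_text)
        # Treat the model as a first-pass triager: anything low-confidence
        # or off-menu goes to a human, mirroring how a misrouted ticket
        # gets picked up and re-routed by a person anyway.
        if result.route not in ROUTES or result.confidence < CONFIDENCE_THRESHOLD:
            return "human_review"
        return result.route

    print(route_ticket("Our API endpoint has been unreachable since 3am"))  # outage
    print(route_ticket("How do I change my avatar?"))                       # human_review

The point of the threshold is that the model only has to be a cheap first pass; the human queue absorbs the errors, which is the same failure mode the routing process already has.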



