
> You and robotresearcher have still avoided answering this question.

I have repeatedly explicitly denied the meaningfulness of the question. Understanding is a property ascribed by an observer, not possessed by a system.

You may not agree, but you can’t maintain that I’m avoiding that question. It does not have an answer that matters; that is my specific claim.

You can say a toaster understands toasting or you can not. There is literally nothing at stake there.



You said the LLMs are intelligent because they do tasks. But the claim is inconsistent with the toaster example.

If a toaster isn't intelligent because I have to give it bread and press the button to start, then how is that any different from giving an LLM a prompt and pressing the button to start?

It's never been about the toaster. You're avoiding answering the question. I don't believe you're dumb, so don't act the part. I'm not buying it.


I didn’t describe anything as intelligent or not intelligent.

I’ll bow out now. Not fun to be ascribed views I don’t have, despite trying to be as clear as I can.



