roywiggins's comments | Hacker News

Yes, I've experienced the sense that there's a person on the "other end" even when I have been perfectly aware that it's a bag of matrices. Brains just have people-detectors that operate below conscious awareness. We've been anthropomorphizing stuff as impersonal as the ocean for as long as there have been people, probably.

Exactly. You only have to look at animistic religions that are all about anthropomorphizing stuff.

I once found myself reporting the results of something back to Claude the way I would to a human (“hey thanks, your tip worked”) before catching myself and realizing that it doesn’t care and won’t learn from it (which would otherwise be a good reason to do it).

It's mostly just agreeing with you (that yes, it was guessing). An LLM has very limited ability to even know whether it was guessing. But it can "cheat" and just say yes it was, if that seems like what you expect to hear.


Ah, so that's where Fred Saberhagen got the gimmick for his first Berserker story - "Without a Thought" / "Fortress Ship", 1963.

(A spaceman, temporarily incapacitated by a 'mind beam' attack, uses a pet that plays with beads to make it appear that he isn't.)

Thank you.



> How can you seriously think you've created something when you're just using someone else's software?

Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.


The word you're looking for, for when your proprioception extends into the tool you use (like feeling you are the car): proprioextension. Coined a while ago.

> Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

I mean, you did. Becoming good at writing succinct and clever prompts, adding constraints, choosing good models for your use case, etc. are all skills like any other.

Most people are really bad at it, though.


It doesn't help that LLMs role-play, pretending to work however their users think they do. You think it has "core programming"? Well, it will say it does. You think it abides by the Three Laws of Robotics? Ditto.

He was hospitalized three times for mania!

It seems like he was at the very least close to that. Since we only get his first-person account it's hard to say, but:

> They discussed philosophy, psychology, science and the universe...

> When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

> It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...

> he was hospitalised three times for what he describes as “full manic psychosis”.

You don't get hospitalized three times for mania without being pretty severely detached from reality.


> They discussed philosophy, psychology, science and the universe...

I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.

The rest ... yes, definitely psychosis.


On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe" I take it in a very particular way.

An important part of using an LLM is verifying its output, because LLMs are very prone to just making stuff up. If you focus on what you don't understand, how do you verify the output?

They are prone to making stuff up, but less prone to sticking to it under interrogation (although obviously that does happen, and Gemini used to be terrible about this). I find restating my new understanding of something to a fresh context window to be a valuable part of the learning process, and it most likely saves me here, especially as I have memories switched off.

I use LLMs a lot for medical advice and will normally take a bunch of second opinions from clean windows and other LLMs. Hasn’t killed me yet!
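
(A minimal sketch of that "clean windows" routine, purely illustrative; the ask() helper is a hypothetical stand-in for whatever chat client you use, nothing here is from the thread:)

    # Illustrative sketch: get independent second opinions by asking the
    # same question in fresh, memoryless sessions, then compare answers.
    def ask(question: str) -> str:
        # Hypothetical stand-in: replace with a real LLM client call.
        return "stub answer to: " + question

    def second_opinions(question: str, n: int = 3) -> list[str]:
        # Each call is a brand-new context window, so no answer can anchor
        # on (or politely agree with) a previous one.
        return [ask(question) for _ in range(n)]

    answers = second_opinions("What could cause these symptoms?")
    # If clean windows disagree with each other, that's the cue to
    # distrust all of them and check a real source.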


A power law distribution, ~1/x, I think.

Zipf's law?
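
(Purely illustrative, not from the thread: Zipf's law is the rank-frequency power law where the k-th most common item appears with frequency proportional to 1/k^a, with a near 1 in the classic case. numpy's sampler requires a > 1, so a = 2 is used here:)

    # Illustrative only: sample from a Zipf (discrete power-law)
    # distribution and check that frequency falls off like 1/rank**a.
    import numpy as np

    rng = np.random.default_rng(0)
    a = 2.0                              # exponent; classic Zipf has a ~ 1
    samples = rng.zipf(a, size=100_000)  # numpy's built-in Zipf sampler

    values, counts = np.unique(samples, return_counts=True)
    order = np.argsort(counts)[::-1]     # sort values by frequency, descending
    for rank, i in enumerate(order[:5], start=1):
        share = counts[i] / samples.size
        print(f"rank {rank}: value {values[i]}, share {share:.3f}")
    # Shares drop off roughly as 1/rank**a: ~0.61, ~0.15, ~0.07, ... for a = 2.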

"The X Trick" or "The Y Dilemma" or similar snowclones in a header is also a big AI thing. Humans use this construction too, but LLMs love it out of all proportion. I call it The Ludlum Delusion (since that's how every Robert Ludlum book is titled).

See also:

Canada to Shut Down Its VHF Weather Radio Service

https://www.radioworld.com/news-and-business/headlines/canad...

