in reply to Drew Kadel

No introspection and only waiting to generate the next line of a conversation. Sounds like quite a few humans!
in reply to Andre

People who think ChatGPT thinks have failed the Reverse Turing Test.

in reply to Drew Kadel

alt text:
Something that seems fundamental to me about ChatGPT, which gets lost over and over again:
When you enter text into it, you're asking "What would a response to this sound like?"
If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!
But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it *is* doing something else.
It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.
in reply to Drew Kadel

Love this. We want so badly for ChatGPT to produce answers, opinions, and art, but all it can do is make plausible simulations of those things. As a species, we've never had to deal with that before.
in reply to Nate Gaylinn

@ngaylinn Actually, we as a species have had to deal with that before.

We call them grifters.

in reply to Drew Kadel

Just a superb explanation. I've been using the Markov chain analogy, but this is far stronger. Thank your daughter for me, please!
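
[Editor's note: for readers unfamiliar with the Markov chain analogy mentioned above, here is a minimal toy sketch in Python. It builds a bigram table from a corpus and then generates text by repeatedly sampling a plausible next word, which is the analogy's core point: the output is "what the next bit would sound like," with no lookup, checking, or introspection. The function names (`train_bigram_model`, `generate`) and the tiny corpus are invented for illustration. ChatGPT itself is a neural network predicting tokens from a learned distribution, not a lookup table, so this is only an analogy, not a description of its implementation.]

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=20):
    """Emit words by repeatedly sampling a plausible next word.

    There is no notion of truth here: the model only knows what
    tends to follow what, so the output merely *sounds like* a
    continuation of the seed.
    """
    word = seed
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: no observed continuation
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus, invented for this example.
corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat the cat ..."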