I don't directly use these "services", but I think about them from time to time because they're going to impact me whether I want to use them or not. I was thinking about "indirect prompt injection" and other means of controlling the input to these things, and I just realized the whole concept as implemented is basically "garbage in" with a dash of "trust me bro" marketing.
We've set these things up so that we don't control all the direct inputs. We don't control nor curate the training input. We don't control nor inspect the implementation. Yet we're expected to hand over decision making and the power to take action?
I don't use them per se, but I have experimented with them to understand first-hand the problems with them, and it's hard to believe people trust them.
I give it some writing prompts, asking it to brainstorm some science fiction setting details with me... and while it's very cool at first, it can't accurately _remember_ what we've been discussing. It straight up gaslights me about what it said earlier, while being extremely apologetic.
I think the most dangerous part is how seductive it is... it really feels like I'm having a conversation with a real person, who is very helpful about elaborating on my ideas, and it's difficult to not feel like it _understands_ what we're talking about. And that's the real danger, thinking that it is doing anything but stringing together very advanced "most likely next word" responses.... but it really doesn't feel like that's what it's doing.
@raven Yeah, that's one of the things that really upset me at first. People and the media were making a big deal about these things passing the "Turing test" even to the point of getting some of the engineers who worked on them to apparently believe they were sentient or conscious.
But the only thing I saw was a direct attack on people. Taking advantage of us to gain our confidence, but doing it an automated fashion at scale. :/
It's a little sad for me to realize the Turing Test isn't the bar we thought it was... ChatGPT feels like a real person (including "misremembering" facts from earlier in the conversation), but it definitely isn't "intelligent".
Even if the data sourcing was entirely ethical and it wasn't burning electricity like it was free, I still have a lot of worries about misrepresentation and misuse.
Mhoye had a good comment about the nature of the Turing test and the context under which Turing developed it, but I can't seem to find the comment. However, I did manage to find this one:
"It turns out this whole time that the Turing Test was the wrong way to think of it. Thinking a chatbot is alive is not a test of how good the chatbot is, but of your own ability to think of other human beings as real complete people."
Now in my case, because we were brainstorming fictional elements, it really starts falling apart because of the "context window". It can only remember so much about past conversation and incorporate that, so it kept _forgetting_... but it never acted as if it forgot. "Oh, yeah, I remember that. Here it is again..." and gets some wrong (4 legs, 2 arms becomes 2 legs, 4 arms), and it makes up new "facts".
But I see the appeal! I really wish there weren't so many ethical concerns to deal with, because it is just amazing within its limitations. I'd never trust it to write code I couldn't knock out myself, or to accurately present facts. But working on _fiction_, where there are no real stakes?
I use random idea generators to challenge me to think in different directions, and an LLM is like the pinnacle of idea generators, all my tools rolled into one.
@raven You're example is harmless enough (assuming other problematic aspects could be mitigated). I mean, artists having been using various techniques to prompt themselves for quite some time. Decks, random words, games, etc.
It's just... That's not how these are being marketed. That's not the problem they claim they are solving. I hope I'm wrong, but I worry we cannot disentangle the harmless (perhaps even helpful!) aspects from the harmful ones in this case.
Random generators, prompts, templates, code completion
These aren't new tools that LLMs made up. These aren't like, science fiction! These are normal parts of our lives before LLMs. But now we expect LLMs to do it all instead, replace those tools with LLMs, and expect them to be better, despite them being demonstrably worse and wildly expensive.
Exactly so... it's one thing to create a fictional setting, it's another to come up with legal precedents for a court case, and have the LLM produce _fictional_ cases that look legit. Or to write code. I worry about someone deciding to use it to diagnose illness.
Your friendly 'net denizen
in reply to Your friendly 'net denizen • • •Sensitive content
scribe
in reply to Your friendly 'net denizen • • •Sensitive content
Carl C
in reply to Your friendly 'net denizen • • •Sensitive content
I don't use them per se, but I have experimented with them to understand first-hand the problems with them, and it's hard to believe people trust them.
I give it some writing prompts, asking it to brainstorm some science fiction setting details with me... and while it's very cool at first, it can't accurately _remember_ what we've been discussing. It straight up gaslights me about what it said earlier, while being extremely apologetic.
Carl C
in reply to Carl C • • •Sensitive content
Your friendly 'net denizen
in reply to Carl C • • •Sensitive content
@raven Yeah, that's one of the things that really upset me at first. People and the media were making a big deal about these things passing the "Turing test" even to the point of getting some of the engineers who worked on them to apparently believe they were sentient or conscious.
But the only thing I saw was a direct attack on people. Taking advantage of us to gain our confidence, but doing it an automated fashion at scale. :/
Carl C
in reply to Your friendly 'net denizen • • •Sensitive content
It's a little sad for me to realize the Turing Test isn't the bar we thought it was... ChatGPT feels like a real person (including "misremembering" facts from earlier in the conversation), but it definitely isn't "intelligent".
Even if the data sourcing was entirely ethical and it wasn't burning electricity like it was free, I still have a lot of worries about misrepresentation and misuse.
Your friendly 'net denizen
in reply to Carl C • • •Sensitive content
Mhoye had a good comment about the nature of the Turing test and the context under which Turing developed it, but I can't seem to find the comment. However, I did manage to find this one:
"It turns out this whole time that the Turing Test was the wrong way to think of it. Thinking a chatbot is alive is not a test of how good the chatbot is, but of your own ability to think of other human beings as real complete people."
dice.camp/@pawsplay/1125005091…
Wandering Star
2024-05-25 07:18:58
Carl C
in reply to Carl C • • •Sensitive content
Now in my case, because we were brainstorming fictional elements, it really starts falling apart because of the "context window". It can only remember so much about past conversation and incorporate that, so it kept _forgetting_... but it never acted as if it forgot. "Oh, yeah, I remember that. Here it is again..." and gets some wrong (4 legs, 2 arms becomes 2 legs, 4 arms), and it makes up new "facts".
But this is also what it does to actual facts.
Carl C
in reply to Carl C • • •Sensitive content
But I see the appeal! I really wish there weren't so many ethical concerns to deal with, because it is just amazing within its limitations. I'd never trust it to write code I couldn't knock out myself, or to accurately present facts. But working on _fiction_, where there are no real stakes?
I use random idea generators to challenge me to think in different directions, and an LLM is like the pinnacle of idea generators, all my tools rolled into one.
Your friendly 'net denizen
in reply to Carl C • • •Sensitive content
@raven You're example is harmless enough (assuming other problematic aspects could be mitigated). I mean, artists having been using various techniques to prompt themselves for quite some time. Decks, random words, games, etc.
It's just... That's not how these are being marketed. That's not the problem they claim they are solving. I hope I'm wrong, but I worry we cannot disentangle the harmless (perhaps even helpful!) aspects from the harmful ones in this case.
silverwizard likes this.
silverwizard
in reply to Your friendly 'net denizen • •@Your friendly 'net denizen @Carl C honestly, that's the thing that *gets me*
Random generators, prompts, templates, code completion
These aren't new tools that LLMs made up. These aren't like, science fiction! These are normal parts of our lives before LLMs. But now we expect LLMs to do it all instead, replace those tools with LLMs, and expect them to be better, despite them being demonstrably worse and wildly expensive.
like this
Your friendly 'net denizen and Carl C like this.
Carl C
in reply to Your friendly 'net denizen • • •Sensitive content
silverwizard likes this.
silverwizard
in reply to Carl C • •Carl C
in reply to silverwizard • • •Sensitive content
silverwizard likes this.
Your friendly 'net denizen
Unknown parent • • •Sensitive content
@gemlog Correction accepted. 😆
(Sorry about the cold. Hopefully it warms up.)
Your friendly 'net denizen
Unknown parent • • •Sensitive content