Does ChatGPT gablergh?
rys.io/en/165.html
> “Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”
> This is preposterous. Instead of accepting the premise, we should fire right back: “you don’t get to use the term ‘think’ unless you first define it yourself”. And recognize it for what it is — a thinly veiled hype-generation attempt using badly defined terms for marketing.
#AI
Does ChatGPT gablergh?
After observing the generative AI space for a while, I feel I have to ask: does ChatGPT (and other LLM-based chatbots)… actually gablergh? And if I am honest with myself, I cannot but conclude that it…
Songs on the Security of Networks
Michał "rysiek" Woźniak · 🇺🇦
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
This blogpost was inspired by a short discussion with @brandon:
mstdn.social/@brandon@ioc.dev/…
I want to be clear that it's not meant to be a subtoot, and I don't think he makes that preposterous argument himself!
It's just that that particular conversation dislodged something in my brain and helped me understand a thing about the "ChatGPT can think" discourse.
Michał "rysiek" Woźniak · 🇺🇦
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
Updated the blogpost. Less confusing now, I hope!
Does #ChatGPT gablergh?
rys.io/en/165.html
> Imagine coming across an article that starts with:
>
> "After observing the generative AI space for a while, I feel I have to ask: does ChatGPT actually gablergh?"
>
> [Y]ou would expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether or not something “gablerghs”.
>
> Yet when hype-peddlers claim LLMs “think”, nobody demands that of them.
#AI
Does ChatGPT gablergh?
Songs on the Security of Networks
robryk
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
Motte-and-bailey fallacy: type of informal fallacy (Wikipedia)
robryk
in reply to robryk • • •
Also, I'm somewhat conflicted about what social contracts I'd want around defining things.
On one hand, being explicitly imprecise has value. This is ~always part of figuring out what precise statements are true.
On another, being imprecise trashes modus ponens (because you end up doing the logical implication equivalent of the game of telephone).
An obvious contract that seems to satisfy both is to expect everyone to be explicit when they are imprecise. However, a failure mode of that is that people often don't want to bother being precise, and this creates ~no incentive to avoid being imprecise all the time.
Michał "rysiek" Woźniak · 🇺🇦
in reply to robryk • • •
@robryk fair points, and thank you for the motte-and-bailey fallacy — not exactly what I was talking about, but it's definitely relevant.
What I object to is AI hypers using undefined terms and then using this lack of definition against those who disagree with them.
Let's call my argument "Russell's Thinking Teapot": the fact that one cannot prove that GPT (or a china teapot orbiting the Sun between Earth and Mars) does not think does not mean that it actually does.
Etam
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
I believe chatbots understand part of what they say. Let me explain.
[YouTube video]
Radek Czajka
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
I think you're kind of relying on the assumption that “not being able to clearly delineate X” is the same as “having no usable concept of X”. It isn't.
The problem with the concept of “thinking” is that it often hides essentialist thinking about the human mind. “Thinking” is whatever cognitive function we can't replicate in a machine, because it's precisely what only “real minds” can do.
There was a time when calculating moves in chess was thought of as thinking. Well, really, it kind of still is! When I'm teaching children to play chess, I encourage them to analyze the situation, look for possible moves and try to plan a few moves ahead, and I unequivocally conceptualize this process as *thinking* about the next move.
Since computers started doing it, though, we dismiss it as purely computational.
So it's kind of a dialectic: every time computers' cognitive ability improves somewhat, it can be claimed that it now moves into the area of “thinking” — and at the same time the claim can be instantly dismissed. Because it's not that there's a threshold where cognitive ability emerges as “real thinking” — the threshold is a moving goalpost. It's always the thing beyond what computers can do.
Michał "rysiek" Woźniak · 🇺🇦
Unknown parent • • •
@fool yeah, "gablergh" is just a stand-in for an under-defined, difficult-to-delineate term like "think". I might rewrite that blogpost a bit.
Basically, once you get below the separator, it should become clear what's going on.
Brandon Blackburn
Unknown parent • • •
@fool This seems to be a lexical discussion. So here is the definition of "thinking": using thought or rational judgment; intelligent.
To me, any annotation on whether an #ai "thinks" or is "intelligent" is again lexical. We all understand that electrical signals stored in a computational matrix perform actions. (Oh wait, that's the human mind...)
Brandon Blackburn
in reply to Brandon Blackburn • • •
@fool The point is, humans have been trying to wrestle with any #intelligence similar to our own since before we realized dolphins and whales have complex societies.
It poses a ton of #philosophical challenges and it's easier to write off other intelligences as though they "don't count" - but I think it's important to consider the reverse perspective and what it would be like to have an internal monologue and have another species never recognize your intelligence.
Brandon Blackburn
Unknown parent • • •
@tomw @fool Again, this is just a lexical distinction. What one chooses to call a machine "thinking" is secondary to the fact that, at some point, an artificial general intelligence #agi will pass every #Turing test and walk around in a body indistinguishable from ours on the outside.
If we toss around terms like "fancy autocomplete engine" we really just kick the can down the road. AGI is not hokey science fiction anymore.
Tom Walker
Unknown parent • • •
@brandon @fool No, what I am saying is that the AI sounds like it is thinking because it is trained on the text of basically the entire web, then selecting the most likely next word.
It may sound like it is thinking. It may say it is thinking. But we know that it isn't. We know, very precisely, that it is a set of numbers in n-dimensional space generated using all that existing human-written text.
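Tom's "selecting the most likely next word" description can be made concrete with a toy sketch. This is a hypothetical bigram counter for illustration only — real LLMs use neural networks over tokens in high-dimensional space, not frequency tables — but the generation loop has the same shape: given what came before, pick the most probable continuation.

```python
# Toy illustration of "pick the most likely next word":
# count which word follows which in a tiny corpus, then
# greedily choose the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tally successors for every adjacent word pair.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_word(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # prints: cat ("cat" follows "the" twice, "mat" once)
```

No understanding is involved anywhere in this loop; it only reproduces statistical regularities of its training text — which is exactly the distinction Tom is drawing, scaled down.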
Brandon Blackburn
Unknown parent • • •
@tomw @fool Here is a citation: gcrinstitute.org/papers/055_ag…
"While most AI research and development (R&D) deals with narrow AI, not AGI, there is some dedicated AGI R&D. If AGI is built, its impacts could be profound"
I'm not talking about "Clippy" from MS Word. Something simply parroting data back is also not part of this discussion.
Again, focusing on the lexical aspect is a distraction. Call it what you will, this is more than "squawk I am sentient"
Michał "rysiek" Woźniak · 🇺🇦
in reply to silverwizard • • •
@silverwizard oh my, thank you!
Now I really need to rewrite it to make it less confusing, and link to some serious pieces on the topic!
silverwizard
in reply to Michał "rysiek" Woźniak · 🇺🇦 • •
@Michał "rysiek" Woźniak · 🇺🇦 Oh sorry! I was reacting to the video the person posted. I think the article you wrote was one of the best on the topic.
I think Friendica and Mastodon don't always agree on who a mention goes to.
Michał "rysiek" Woźniak · 🇺🇦
in reply to silverwizard • • •
@silverwizard no no, everything was clear to me. Still, my blogpost could be clearer and better rounded, and it'll get there. 😄
Thank you for the positive feedback!
Stefan Midjich ꙮ҄
in reply to Michał "rysiek" Woźniak · 🇺🇦 • • •
Oh ok, I assumed your post was about using GPT.
I'm not really that interested myself either. But I was chatting with a friend earlier today and had the idea that I wonder whether it can be instructed to respond like a real human, because as far as I've seen it responds like someone reading straight from Wikipedia.