Does ChatGPT gablergh?
rys.io/en/165.html

> “Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”

> This is preposterous. Instead of accepting the premise, we should fire right back: “you don’t get to use the term ‘think’ unless you first define it yourself”. And recognize it for what it is — a thinly veiled hype-generation attempt, using badly defined terms for marketing.

#AI

in reply to Michał "rysiek" Woźniak · 🇺🇦

This blogpost was inspired by a short discussion with @brandon:
mstdn.social/@brandon@ioc.dev/…

I want to be clear that it's not meant to be a subtoot, and I don't think he makes that preposterous argument himself!

It's just that that particular conversation dislodged something in my brain and helped me understand a thing about the "ChatGPT can think" discourse.

in reply to Michał "rysiek" Woźniak · 🇺🇦

Updated the blogpost. Less confusing now, I hope!

Does #ChatGPT gablergh?
rys.io/en/165.html

> Imagine coming across an article that starts with:
>
> "After observing the generative AI space for a while, I feel I have to ask: does ChatGPT actually gablergh?"
>
> [Y]ou would expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether or not something “gablerghs”.
>
> Yet when hype-peddlers claim LLMs “think”, nobody demands that of them

#AI

in reply to Michał "rysiek" Woźniak · 🇺🇦

It sounds like you are describing an example of the motte-and-bailey fallacy: en.wikipedia.org/wiki/Motte-an… (I mention this because knowing names for things is sometimes useful, like in that old joke/anecdote about flowers being easier to recognize if one can name them.)
in reply to robryk

Also, I'm somewhat conflicted about what social contracts I'd want around defining things.

On one hand, being explicitly imprecise has value. This is ~always part of figuring out what precise statements are true.

On the other, being imprecise trashes modus ponens (because you end up doing the logical-implication equivalent of the game of telephone).

An obvious contract that seems to satisfy both is to expect everyone to be explicit when they are imprecise. However, a failure mode of that is that people often don't want to bother being precise and this doesn't create ~any incentives not to be imprecise all the time.
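A rough way to put numbers on that game-of-telephone worry (a toy model of my own, with a made-up reliability figure): if each imprecise step is sound only with probability p, then a chain of n such steps is sound only with probability

```latex
p^{n}, \qquad \text{e.g. } 0.9^{10} \approx 0.35
```

so even mildly sloppy premises compound quickly under chained modus ponens.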

in reply to robryk

@robryk fair points, and thank you for the motte-and-bailey fallacy — not exactly what I was talking about, but it's definitely relevant.

What I object to is AI hypers using undefined terms and then using this lack of definition against those who disagree with them.

Let's call my argument "Russell's Thinking Teapot" — the fact that one cannot prove that GPT (or a china teapot orbiting the Sun between Earth and Mars) does not think does not mean it actually does.
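In crude logical shorthand (my formalization, not from the blogpost), the inference being rejected is:

```latex
\text{from}\quad \nvdash \neg\,\mathrm{Thinks}(\mathrm{GPT}) \quad\text{conclude}\quad \mathrm{Thinks}(\mathrm{GPT})
```

i.e. treating the absence of a disproof as evidence, which is exactly the burden-shifting that Russell's teapot lampoons.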

in reply to Michał "rysiek" Woźniak · 🇺🇦

Here's an interesting view on whether ChatGPT "understands": youtube.com/watch?v=cP5zGh2fui…
in reply to Michał "rysiek" Woźniak · 🇺🇦

I think you're kind of relying on the assumption that “not being able to clearly delineate X” is the same as “having no usable concept of X”. It isn't.

The problem with the concept of “thinking” is that it often hides essentialist thinking about the human mind. “Thinking” is whatever cognitive function we can't replicate in a machine, because it's precisely what only “real minds” can do.

There was a time when calculating moves when playing chess was thought of as thinking. Well, really, it kind of still is! When I'm teaching children to play chess, I encourage them to analyze the situation, look for possible moves and try to plan a few moves ahead, and I definitely unequivocally conceptualize this process as *thinking* about the next move.

Since computers started doing it, though, we dismiss it as purely computational.

So it's kind of a dialectic: every time computers' cognitive ability improves somewhat, it can be claimed that it now moves into the area of “thinking” — and at the same time the claim can be instantly dismissed. Because it's not that there's a threshold where cognitive ability emerges as “real thinking” — the threshold is a moving goalpost. It's always the thing beyond what computers can do.
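The "plan a few moves ahead" process is, mechanically, just tree search. Here is a minimal sketch of how "purely computational" that look-ahead is (a made-up toy game, not a real chess engine):

```python
# Depth-limited minimax: the bare "plan a few moves ahead" idea.
def minimax(state, depth, maximizing, children, score):
    """Best achievable score when looking `depth` moves ahead."""
    kids = children(state)
    if depth == 0 or not kids:
        return score(state)
    results = [minimax(k, depth - 1, not maximizing, children, score) for k in kids]
    return max(results) if maximizing else min(results)

# Toy "game": states are integers, each with two successor moves.
children = lambda s: [2 * s, 2 * s + 1] if s < 8 else []
score = lambda s: s % 5  # arbitrary evaluation of a position

print(minimax(1, 3, True, children, score))  # -> 3
```

Whether we call that computation "thinking" is, of course, exactly the question.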

Unknown parent

@fool yeah, "gablergh" is just a stand-in for an under-defined, difficult-to-delineate term like "think". I might re-write that blogpost a bit.

Basically, once you get below the separator, it should become clear what's going on.

Unknown parent

Brandon Blackburn

@fool This seems to be a lexical discussion. So here is a dictionary definition of "thinking": "using thought or rational judgment; intelligent."

To me, any argument about whether an #ai "thinks" or is "intelligent" is again lexical. We all understand that electrical signals stored in a computational matrix perform actions. (Oh wait, that's the human mind...)

#AI @fool
in reply to Brandon Blackburn

@fool The point is, humans have been wrestling with how to recognize any #intelligence similar to our own since before we realized dolphins and whales have complex societies.

It poses a ton of #philosophical challenges and it's easier to write off other intelligences as though they "don't count" - but I think it's important to consider the reverse perspective and what it would be like to have an internal monologue and have another species never recognize your intelligence.

in reply to Brandon Blackburn

@fool To be clear, I'm not saying #ai at this stage should be thought of as sentient. I'm only saying, when (not if) that time comes, be open to other perspectives.
#AI @fool
Unknown parent

Brandon Blackburn

@tomw @fool Again, this is just a lexical distinction. What one chooses to call a machine thinking is secondary to the fact that at some point, an artificial general intelligence (#agi) will pass every #Turing test and walk around in a body indistinguishable from ours on the outside.

If we toss around terms like "fancy autocomplete engine" we really just kick the can down the road. AGI is not hokey science fiction anymore.

in reply to Brandon Blackburn

@brandon @fool Why do you think that a fancy autocomplete engine is on any kind of a path towards sentience? If autocomplete gets really really good, that's sentience? If I train a parrot to say "squawk I am sentient", that's sentience?
Unknown parent

Tom Walker

@brandon @fool No, what I am saying is that the AI sounds like it is thinking because it is trained on the text of basically the entire web and then selects the most likely next word.

It may sound like it is thinking. It may say it is thinking. But we know that it isn't. We know, very precisely, that it is a set of numbers in n-dimensional space generated using all that existing human-written text.
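For what it's worth, the mechanism being described here can be sketched in a few lines (a toy illustration with a made-up vocabulary and made-up scores; a real model computes its scores with a neural network over the whole context, and usually samples rather than always taking the single most likely token):

```python
import numpy as np

# "Selecting the most likely next word", stripped to its core:
vocab = ["the", "cat", "sat", "mat", "thinks"]
logits = np.array([1.2, 0.3, 2.5, 0.9, -1.0])  # model's score per candidate

probs = np.exp(logits - logits.max())
probs = probs / probs.sum()                    # softmax: scores -> probabilities

next_word = vocab[int(np.argmax(probs))]       # greedy pick: most likely token
print(next_word)                               # -> "sat"
```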

in reply to Brandon Blackburn

@tomw @fool It's not like I'm getting ready for SkyNet from "Terminator". I just think it's short-sighted to dismiss something out of hand like this given how close the real-world tech actually is.
Unknown parent

Brandon Blackburn

@tomw @fool Here is a citation: gcrinstitute.org/papers/055_ag…

"While most AI research and development (R&D) deals with narrow AI, not AGI, there is
some dedicated AGI R&D. If AGI is built, its impacts could be profound"

I'm not talking about "Clippy" from MS Word. Something simply parroting data back is also not part of this discussion.

Again, focusing on the lexical aspect is a distraction. Call it what you will, this is more than "squawk I am sentient".

Unknown parent

Tom Walker
@brandon @fool Go read any basic intro to how an LLM works
Unknown parent

Brandon Blackburn
@tomw @fool You are right. Humans will never be capable of developing something like #agi. So silly of me to think we could ever build something like that. We all know humans have an unblemished track record with respect to technology. (lol) /sarcasm
in reply to Brandon Blackburn

@brandon @fool You are extremely confused, but I'll reply once more. You claimed above that this stuff is close, that we are on a path towards it. We're not. As your own "citation" (lol) indicates, AGI is an entirely hypothetical future; current AI (even if significantly improved) is a separate track.
in reply to Brandon Blackburn

@brandon @tomw @fool guys, you are talking past each other, about two separate issues, and becoming unpleasant to one another. Please remove me from this thread.
in reply to silverwizard

@silverwizard oh my, thank you!

Now I really need to rewrite it to make it less confusing, and link to some serious pieces on the topic!

in reply to Michał "rysiek" Woźniak · 🇺🇦

@Michał "rysiek" Woźniak · 🇺🇦 Oh sorry! I was reacting to the video the person posted. I think the article you wrote was one of the best on the topic.

I think Friendica and Mastodon don't always agree on who a mention goes to.

in reply to silverwizard

@silverwizard no no, everything was clear to me. Still, my blogpost could be clearer and better rounded, and it'll get there. 😄

Thank you for the positive feedback!

in reply to Michał "rysiek" Woźniak · 🇺🇦

hey, I hope it's ok to ask you a question. I haven't played around with GPT at all and I'm just curious: can you ask it to respond with humanlike spelling and grammar errors? and what does that look like? is it realistic?
in reply to Michał "rysiek" Woźniak · 🇺🇦

oh ok, I assumed your post was about using gpt.

I'm not really that interested myself either. but I was chatting with a friend earlier today and wondered whether it can be instructed to respond like a real human. because as far as I've seen, it responds like someone reading straight from wikipedia.

Unknown parent

please provide the wide definition then, along with clear and testable criteria making it possible to establish that a thing or entity thinks.
Unknown parent

@fool oh metaphors will always be attacked. Par for the course! :blobcatcoffee: