

Just got hit with:

"But can you prove ChatGPT is *not* intelligent in the human sense?"

Oh my, that old chestnut. 🙄

I am not making a claim, I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.

The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, that ideally includes humans but excludes calculators.

in reply to Michał "rysiek" Woźniak · 🇺🇦

The funny thing is that this is quite easy to show, really; there are lots of examples that fit the octopus test by Bender et al. ludicrously well. Try playing chess against ChatGPT: it'll play the best moves so long as it can draw on the opening-theory literature, then start playing complete nonsense, like making illegal moves or not understanding that the players take alternating turns.
in reply to Michał "rysiek" Woźniak · 🇺🇦

Once I tried to roleplay with ChatGPT, discussing the implementation of a CHIP-8 emulator project.

ChatGPT presented a sensible, though in my opinion not the best, approach, until I started asking why OOP. At one point ChatGPT proposed that each pixel of the monochromatic display could be represented by an object, aggregated into line objects, aggregated into a display object... Intelligent, my ass. It simply mixed OOP fundamentals and CHIP-8 specs without any regard for what they are.
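
For context: the CHIP-8 display is a 64x32 monochrome framebuffer with XOR-drawn sprites, so the idiomatic representation is a flat bit array, not an object per pixel. A minimal sketch (my own illustration of the conventional approach, not ChatGPT's output):

```python
# CHIP-8 display: 64x32 monochrome pixels, XOR drawing, wrap-around,
# with the "collision" result that the spec reports in register VF.
WIDTH, HEIGHT = 64, 32

class Chip8Display:
    def __init__(self):
        # One bit of state per pixel; a flat bytearray is plenty.
        self.pixels = bytearray(WIDTH * HEIGHT)

    def flip(self, x, y):
        """XOR a single pixel (coordinates wrap, per the spec).
        Returns True if a lit pixel was erased (a 'collision')."""
        i = (y % HEIGHT) * WIDTH + (x % WIDTH)
        collided = self.pixels[i] == 1
        self.pixels[i] ^= 1
        return collided
```

The whole display is 2048 bits of state; wrapping it in pixel objects inside line objects inside a display object adds indirection without modeling anything the spec actually describes.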

in reply to Michał "rysiek" Woźniak · 🇺🇦

for some people technology is magical. They never learned much STEM; they get a mobile phone in their hands and they're off. To ask a question like that, you have to think you can make up shit about how technology works. Maybe it's cognitive dissonance, or maybe it's just wanting to sound clever. But I've come to the conclusion that at least 50% of the world's population on the internet is poorly educated and ignorant of what science can and can't achieve.
in reply to Michał "rysiek" Woźniak · 🇺🇦

the number of times it says "I'm sorry about that mistake" and proceeds to repeat the exact same insanity should be proof enough.

But yeah, the extraordinary claim is that it is intelligent; that's the one that needs to be supported by strong evidence.

in reply to Michał "rysiek" Woźniak · 🇺🇦

Saying "LLMs are intelligent because they can learn" is like saying "computer programs have legs because they can run." :ablobcatcoffee:
in reply to Michał "rysiek" Woźniak · 🇺🇦

I have found that, fairly often, proponents of that idea cannot distinguish between a model that matches some of the observations and the actual system in question. If you squint hard enough,

```
fn is_prime(_number: u64) -> bool { false }
```

is a fairly accurate model (most numbers are not prime, after all); the only problem with it is that, if it were fully true, most of modern computing could not exist.


in reply to Michał Kawalec

@monad_cat yeah. To me it's an issue of treating metaphors literally and then drawing far-reaching conclusions from them. Which would be funny if it weren't so dangerous.
in reply to Michał "rysiek" Woźniak · 🇺🇦

That's a pithy saying but pretty shallow. It's more accurate to say intelligence is a spectrum. A calculator is intelligent in that it processes symbols in a coherent fashion. #LLMs recognize patterns in mountains of data and statistically mimic them.

So, a more accurate pithy statement: saying "#LLMs are intelligent because they learn" is like saying "chameleons can act because they roleplay their environment."

Yes, both are true, to the same degree.

#AI #MachineLearning

in reply to PixelJones

@PixelJones

> That's a pithy saying but pretty shallow.

Well, in both cases a (somewhat useful) metaphor is taken literally, and then used to build conclusions on. Programs do not *literally* run, just as LLMs do not *literally* learn (in the human sense).

> It's more accurate to say intelligence is a spectrum.

You might want to research a bit the history of that way of thinking about intelligence. You might find some disturbing stuff. You can start here:
youtube.com/watch?v=P7XT4TWLzJ…

in reply to Michał "rysiek" Woźniak · 🇺🇦

I have no idea how you think that the idea that "intelligence is a spectrum" leads to eugenics & belief in the Rapture of the Nerds.

Because I'm a humanist, I think we should neither overhype #AI developments nor dismiss them as harmless.

#AI
in reply to PixelJones

@PixelJones oh I am not dismissing them as harmless. Quite the contrary!

I am only dismissing the hype that is being generated around them based on their purported "intelligence" and the whole "superintelligent AI" boogeyman used to deflect and distract from real, already realized dangers with these systems.

As a humanist myself, I strongly believe words *matter*, and calling something "intelligent" is a very strong claim that requires very strong proof.

in reply to Michał "rysiek" Woźniak · 🇺🇦

@PixelJones

> I have no idea how you think that the idea that "intelligence is a spectrum" leads to eugenics

If intelligence is a spectrum, and if individual humans can be put on that spectrum, it is just one or two small steps to "well, only the most intelligent humans should reproduce". And the devil is always in the details of who defines what "intelligent" means and who decides how to test for it.
nea.org/advocating-for-change/…
wellcomecollection.org/article…

in reply to Michał "rysiek" Woźniak · 🇺🇦

@PixelJones so it should come as no surprise that those systems, once deployed, very often end up displaying (among others) racist biases. This has been shown over and over and over again, including with ChatGPT, as much as OpenAI is trying to paint over it.

qz.com/1427621/companies-are-o…
insider.com/chatgpt-is-like-ma…

And that, combined with the power of capital that is thrown behind these systems today, is genuinely dangerous. The whole "are they intelligent" thing is just smoke and mirrors, a distraction.

in reply to Michał "rysiek" Woźniak · 🇺🇦

@PixelJones in other words, people making claims like "intelligence is a spectrum" and "GPT has sparks of intelligence"[1] happen to also be the people producing tools that have proven racist biases.

Meanwhile, people who attempt to shed light on why these racist (and other) biases end up in these LLMs, get fired from companies making them.

[2]So yeah, I am far from ignoring the actual dangers related to these systems. :blobcatcoffee:

[1] nitter.net/emilymbender/status…
[2] wired.com/story/google-timnit-…

in reply to Michał "rysiek" Woźniak · 🇺🇦

Again, you're arguing against positions I don't hold.

Absolutely, #AI is rife w/ biases & potential dangers. That doesn't mean that current systems haven't moved up a "spectrum" of intelligence or that placing them on that spectrum is endorsing them or prioritizing them over human values.

Besides, any "spectrum" of intelligence is subjective and multidimensional. Putting humans, animals, machines on some scale should not, must not, equate to their value or worthiness of survival.

#AI
in reply to PixelJones

@PixelJones that is a framing I can work with, even though I still don't agree with putting any of these systems anywhere on the "spectrum of intelligence".

I do not believe it is justified to do so: LLMs are just probabilistically generating text, that to me is a far cry from anything that could be called "intelligence". It's very mechanistic, even if it is quite complicated under the hood.

in reply to Michał "rysiek" Woźniak · 🇺🇦

@PixelJones I also do not believe it is *useful* to ascribe intelligence to these systems in any sense of the word.

In fact, I believe there is ample data to the contrary: ascribing any sense of "intelligence" to these systems immediately fuels the hype and makes reasoned conversation about what they are, how they can be used, what are the dangers related to them, and so on, much more difficult.

It just muddies the waters and enables snake-oil salesmen to profit off of the confusion.

in reply to PixelJones

@PixelJones Allow me to chime in a bit.
0.: There is no commonly agreed definition of "intelligence", so this entire discussion has very weak foundations. (Most experts would agree that processing symbols in a coherent fashion is not a good definition.)
1.: While there is also no commonly agreed definition of "consciousness", many believe that it is a pre-requisite of "intelligence".
2.: Perhaps there is a "spectrum of intelligence", which may include various animals, or even plants. ->
in reply to Michał "rysiek" Woźniak · 🇺🇦

but can LLMs construct entertaining analogies? That's like saying they could be anywhere on Mastodon.
in reply to Michał "rysiek" Woźniak · 🇺🇦

"You can't prove human brains are different than LLMs!"

A human brain is a biological organ. An LLM is a probability distribution over sequences of words.

There are very few things that can be *more different* than human brains and LLMs.
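
That phrase can be made concrete: "a probability distribution over sequences of words" literally means assigning each sequence a probability, usually factored by the chain rule into next-word probabilities. A toy sketch, with a conditional table invented purely for illustration (real models estimate these numbers with billions of parameters, but the object being computed is the same kind of thing):

```python
# Toy "language model": conditional next-word probabilities.
# All numbers here are made up for illustration.
COND = {
    ("<s>",): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("cat",): {"sat": 1.0},
}

def sequence_probability(words):
    """Chain rule: P(w1..wn) = product of P(wi | previous word)."""
    p = 1.0
    prev = "<s>"
    for w in words:
        p *= COND.get((prev,), {}).get(w, 0.0)
        prev = w
    return p

# sequence_probability(["the", "cat", "sat"]) is 0.6 * 0.5 * 1.0
```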

in reply to Michał "rysiek" Woźniak · 🇺🇦

The brain is the result of half a billion years of evolution. My fish are vastly more intelligent than LLMs.

We haven't even cracked "basic" animal intelligence.

in reply to Michał "rysiek" Woźniak · 🇺🇦

Unfortunately, LLMs mask nicely as intelligent due to text limitations. Although if you know where to apply pressure, the illusion breaks. These limitations become much more apparent in other applications, like playing Minecraft.

About a year ago OpenAI released the Video PreTraining (VPT) model, which was able to craft a diamond pickaxe in Minecraft. A nice case for VPT, but no one was saying that AI had solved Minecraft, a vastly easier task than mastering text or driving a car.

in reply to PiTau

@PiTau

> Unfortunately, LLMs mask nicely as intelligent due to text limitations.

Oh snap, this is a great way of putting it! Hadn't thought about this aspect of the whole thing — the limited "domain" in which these models operate, so to speak, that makes it easier for people to not notice their deficiencies.

Thank you for pointing this out. It's one of the "ha, well obviously!" type things once somebody says it out loud.

in reply to Michał "rysiek" Woźniak · 🇺🇦

it's not the domain problem; as I've written, if proper pressure is applied, the illusion breaks. ChatGPT is like a magic show: a meticulously prepared stage and a planned-out act to fool one's perception. However, magicians are honest about their act, whereas ChatGPT is not. How many people come out of a Penn & Teller show thinking these guys really can catch a bullet in their teeth?

But LLM lie is much worse, because capital seems to believe and go with it.

in reply to PiTau

@PiTau by "limited domain" I meant "it's text-only, it operates on text", which is a limited form of communication. I didn't mean any specific domain of human knowledge, that's why "domain" is in quoted, that's why I wrote "so to speak".

I like this framing because it helps explain how/why people fall for the "GPT is intelligent" ruse.

Just like parlor magicians dimming the lights to help hide the mechanics of their acts, GPT being limited to text only limits the ways illusion might break.

in reply to Michał "rysiek" Woźniak · 🇺🇦

It's good to see proper coverage of the tech giants' push for AI regulation, and the case for smaller models, in Polish. A shame such articles have smaller reach and impact than needed.
in reply to Michał "rysiek" Woźniak · 🇺🇦

"LLMs are sentient!"

No, not really, they aren't. Have you ever used predictive text input? While the exact implementation is different, Large Language Models operate in a very similar manner: they simply predict the next word.

"But predictive text is so dumb! And LLMs are smart!"

The difference is size. Phones can handle a database of hundreds of kilobytes, while LLMs usually weigh in at tens of gigabytes.
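
The parallel is easy to demonstrate. A bare-bones predictive-text engine just counts which word follows which and suggests the most frequent follower. This toy sketch (my own, with an invented two-sentence "corpus") does the same task an LLM performs, minus subword tokens, attention over long contexts, and billions of parameters:

```python
# Bigram predictive text: count followers, suggest the most common one.
from collections import Counter, defaultdict

def train(corpus):
    """Build a table mapping each word to a Counter of its followers."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1
    return followers

def suggest(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

model = train(["the cat sat on the mat", "the cat ran"])
# suggest(model, "the") -> "cat"  ("cat" followed "the" twice, "mat" once)
```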

#ArtificialIntelligence #LLMs

in reply to Peter Ellis

@peter_ellis yeah, basically you get some form of that whenever you debate anyone proposing that LLMs are intelligent: sooner or later in the discussion they will reach for that "argument".
in reply to silverwizard

@silverwizard yeah I blogged about that:
rys.io/en/165.html
in reply to Michał "rysiek" Woźniak · 🇺🇦

@silverwizard I noticed this flip. At first it was the “AI skeptics” noting the failure to define intelligence, but suddenly I started hearing that more often as a talking point from the “AI believers”. So annoying!
in reply to Michał "rysiek" Woźniak · 🇺🇦

💯 The worst is when people try to "prove" it by saying we can't prove human sentience either. Stringing words together that sound smart to people is not the same thing as stringing words together that actually MEAN anything.

I've written about this at length before, I'm starting to wonder if the tech bros have sentience themselves...
staygrounded.online/p/how-chat…

in reply to Michał "rysiek" Woźniak · 🇺🇦

@dingemansemark @JustinH

I mean, you jest, but...

When people say "LLMs don't really understand things; they just know how to string associated words together convincingly", my first reaction was "oh, you mean like most people?"

...and then you get to a field where what is rewarded most is not understanding anything at all, but sounding inspiring and leaderly?

Hell yeah.

....so....

...how long is it going to be before some of us get together and train an LLM to be a Visionary Tech Startup CEO...?

(Idle thought: Max Headroom giving a TED Talk)

in reply to Woozle Hypertwin

@woozle @dingemansemark @JustinH The whole sentience argument reminds me of the bear-proof trashcan problem. Either ChatGPT is sentient or Sam Altman isn't, you can't have it both ways.
in reply to Michał "rysiek" Woźniak · 🇺🇦

The lack of human type intelligence also isn't even hard to prove. Look at this prompt: twitter.com/andyzou_jiaming/st…


in reply to Michał "rysiek" Woźniak · 🇺🇦

Has to exclude calculators?!
But that's all LLMs are! A really fast calculator with a ton of precomputed data!

Granted, with the right equations, I can understand why some people might be tricked by them. They're getting good enough that at a passing glance, they look good! They just "know" nothing other than which words usually follow other ones.

in reply to Michał "rysiek" Woźniak · 🇺🇦

Do you know @marleenstikker ? She has been appointed chief editor of @DeGroene for this summer, and they are writing a lot about the concept of intelligence: groene.nl/artikel/andere-intel…

It might be interesting for you to translate some articles. If you can't read them but want to read certain ones, please let me know!

in reply to Michał "rysiek" Woźniak · 🇺🇦

I have met plenty of humans who I am comfortable claiming that ChatGPT 3.5 is already smarter than.
in reply to Michał "rysiek" Woźniak · 🇺🇦

I remember the "giant brain" claims of the 1960s, where they were already claiming that the the fact that computers could do arithmetic very quickly was already proof of intelligence.

The logic was, if they can already do things faster that we do with our brains, it's a tiny step to do everything else.

Oh, yes, and "power too cheap to meter", from nuclear fusion, was also imminent.

@SpaceLifeForm

in reply to Michał "rysiek" Woźniak · 🇺🇦

Intelligence is just a word that can be explained in a broad, result-based sense; "intelligence" is not "consciousness". You call the dog's name and the dog runs over: intelligent. When you ring the bell, the dog knows there's food and starts to get excited: intelligent.
in reply to 甜味麻酱🏴🏳‍⚧

@machan such a broad definition of intelligence makes that term all-but useless.

That's basically the same meaning of "intelligent" that home appliance manufacturers use when they say "intelligent fridge" or "intelligent washing machine".

If that's the meaning that you want to use for LLMs, I don't mind at all — as it means just as much as in the case of "intelligent" fridges or washing machines. 🤷‍♀️

in reply to Michał "rysiek" Woźniak · 🇺🇦

that is useful, because your "intelligent in the human sense" (which in my opinion resembles "consciousness") is probably non-existent. Everything can have some kind of intelligence; it is this truth that makes the concept seem useless. But the reason we accept the concept of "intelligence" instead of "consciousness" is that intelligence is easy to measure. If consciousness is non-existent (you can refer to mental fictionalism), then what we need to focus on is just intelligence, and only intelligence...
Your entire thread is actually a discussion about the philosophy of mind, and it is very cutting-edge and complex, so I can only state the one position that I support (i.e. mental fictionalism)
Unknown parent

in reply to Michał "rysiek" Woźniak · 🇺🇦

an easy one is that neural networks are a simplified model of neurons; otoh if you believe in substrate-less intelligence and consciousness, the argument gets more muddy?
in reply to flaeky pancako

@fleeky a model is not the thing it models. A map is not the territory.

Moving a mountain on a map does not mean a mountain actually moved in the territory.

And I am not even getting into how hilariously simplified the neural network model is compared to the actual brain — suffice it to say it completely ignores all the biochemistry and all the stuff actual neurons float about in.

in reply to Michał "rysiek" Woźniak · 🇺🇦

the surprising thing about neural networks and LLMs is one of complexity and emergence.. what we are debating right now is did the mountain actually move or is it just a philosophical illusion?

I still am a fan of neuro symbolic systems as a necessary part of the path for thinking machines but at the same time I think computation is simplified thought, otoh the whole debate gets very deep very fast..

in reply to flaeky pancako

@flaeky pancako @Michał "rysiek" Woźniak · 🇺🇦 my advice would be to find out what a neuron is, then. Neural networks model neurons the way a balloon models a lung: different form, different design, different purpose, but you might be able to use one to explain the other to a child.
in reply to flaeky pancako

@fleeky sure, plus there is the whole layer of semantics and imperfect models and all that.

And then: ethical dilemmas — if we want to claim ChatGPT is actually, literally intelligent, which would imply self-awareness and curiosity, should we ask if it suffers? Is shutting down an older model akin to killing an intelligent being? And so on.

We should absolutely be having these conversations, because it is genuinely fascinating. Which is another reason why I loathe the discourse around AI today.

in reply to Michał "rysiek" Woźniak · 🇺🇦

images from a talk by Joscha Bach. I love this talk because he gives a concrete model of what he thinks consciousness is. When I look at it, it seems workable to the point that you could implement it within ChatGPT or even Minecraft, but then I wonder: would that just be a constructed philosophical zombie? I honestly still don't know, but at least if this got implemented we could all test the resultant agent.
in reply to Michał "rysiek" Woźniak · 🇺🇦

I went to a salon he held at a cafe in Berlin and one thing that I found interesting about him is how he simultaneously has a stance for complexity and emergence while at the same time he has an almost instinctual compulsion to put things into hierarchies.
in reply to Michał "rysiek" Woźniak · 🇺🇦

also here's a random interesting pdf whose summary you may enjoy checking out: linas.org/misc/Forced_Moves_or…

Also, for anyone into AI: linas.org/ Linas Vepstas is one of the most fascinating people to talk to about it!

Also the readme for his learning project is full of fascinating ideas 🙂
github.com/opencog/learn

in reply to Michał "rysiek" Woźniak · 🇺🇦

The ethical stuff seems like where the rubber meets the road, to me; the difficult practical part, as opposed to the purely theoretical parts ("what do we really mean by intelligence?") and the easier practical parts (things where you can just do an experiment and see if it can, e.g., adequately answer real customer questions (probably not)).

Should we ask it if it suffers? We can, of course, but it will not give a consistent answer. From which we can probably conclude that it doesn't, or at least that we don't have any reason to think it does.

If we had a system that was "LLM plus some other stuff", and it did claim to suffer when people say mean things to it, and it did so consistently, at what point would we be morally obliged to believe it? I do think that's an interesting question, and I'm not sure how to answer it.

People tend not to talk about what Searle's Chinese Room (or Bender's Thai speaker (or not)) actually say, in detail. Do they lie and claim to be humans of a particular age etc? Do they claim perceptual abilities that they don't actually have? LLMs often do these things, for obvious reasons.

But what if a piece of software says “No, I can’t see or hear, the only perception that I have is in the form of words that come into my consciousness; I know about sight and hearing and so on in theory, from words that I’ve read, but I haven’t experienced them myself; still, I’m definitely in here, and as self-aware as you are!”

When do we dismiss that, and when do we not? I wrote a little here fwiw: ceoln.wordpress.com/2023/07/02…

Unknown parent

@szakib 💯

Re: 0 — I made that point (using the term "to think", but it just as well applies to "having consciousness" or "being intelligent") here: rys.io/en/165.html

Re. 1 — and I wrote about "consciousness" claims related to GPT here:
tecc.media/claim-gpt3-is-consc…

Dropping both links here in case they are useful.

@PixelJones

in reply to Michał "rysiek" Woźniak · 🇺🇦

@szakib

> I think completely deterministic systems (…) cannot possibly score more than rocks or hammers on this spectrum.

I tend to agree.

On a broad philosophical note: perhaps eventually we *will* understand human brains in their full complexity, and become able to fully explain them as completely deterministic systems.

We will then face the difficult task of squaring this with our notions of intelligence and consciousness.

But we are not there yet, not even close.

@PixelJones

in reply to Michał "rysiek" Woźniak · 🇺🇦

@szakib so *assuming* that brains are completely deterministic systems and then basing other strong claims ("ChatGPT is intelligent!") on that assumption is… well, let's just call it "unwarranted".

Anyway, thank you for chiming in!

@PixelJones

in reply to Michał "rysiek" Woźniak · 🇺🇦

@PixelJones There is a theory (IMO likely to be true) that there are quantum effects going on in the brain. If this is proven, it would show the brain to be non-deterministic. (Also, it would be a big step towards proving we have free will!)

This is a fascinating topic and I'm sure it will keep many great minds busy for a very long time.

Unknown parent

@keith certainly. It's a very loaded, difficult question.

A question for thoughtful, careful consideration, not a question for snakeoil salesmen to use for hyping their probabilistic text generators.

Which is another reason why I recoil when I see people throwing around statements like "GPT is intelligent" willy-nilly just because the output looks kinda sorta human-made.

@PixelJones

in reply to Michał "rysiek" Woźniak · 🇺🇦

@keith and which is why I insist people provide the definition of the terms they are using (like "intelligent", "thinks", "consciousness") when they do make such statements.

@PixelJones

in reply to szakib

@szakib absolutely! And I am so here for it.

I just wish we could be having that conversation instead of "is a probability distribution over sequences of words intelligent".

@PixelJones