Related to the end of my previous post. It's not that LLMs have no use. It's that everyone I know who likes them seems to forget about their error rates and how easily their hallucinations "sound right" while being so very wrong. It's even worse behavior than copy/pasting shit out of Stack Overflow, or whatever the first search result out of AltaVista was in the early internet days. At least there, there was some hope of auditing/correction at some point. None here. #media #ai #rant #chatgpt

in reply to Hank G ☑️

I'm used to instructing the machine, not the other way around.
in reply to Hank G ☑️

@Hank G ☑️ the one case I've found it sort of useful for is summarizing actual human text (reducing real writing to key points). Current models seem better at reduction than generation. Who knows in 5-10 years!
in reply to JB Carroll

@JB Carroll @Hank G ☑️ With the current crop of LLM chatbots, given the absolutely massive and parallel investment underway, I believe we've already plateaued in generative output quality. In part because of the inherent limitations of the tech itself (a black-box Markov chain), and in part because the training input of the most popular models has already started to be tainted by their own output, used to generate content that AI crawlers later ingest indiscriminately.

So I wouldn't put too much hope in the current generation of what we call AI. And given the manufactured hype and the obvious prospect of intellectual-worker fungibility the current tech offers, it appears it's here to stay, way longer than similar grifts like blockchain or NFTs, neither of which promised any reduction in labor costs.

in reply to Hypolite Petovan

@Hypolite Petovan @Hank G ☑️ Yes, I agree; the current approach will probably just brute-force increases in LLM speed, not necessarily quality. I think whatever replaces those models, such as ones that simulate a human-like brain, is probably the next step (once we have enough resolution to map individual neurons at that scale; we can already do so for fruit flies). Then you get a brain-like entity, but with "synapses" that can fire millions of times faster than ours, assuming the same total computational power. So a year of thinking to us would be a million years to a model like that. Definitely curious to see what comes after LLMs in that regard.
in reply to JB Carroll

@JB Carroll @Hank G ☑️ I used to be dazzled by this kind of prospect, but then I read The Hitchhiker's Guide to the Galaxy and I agree with Douglas Adams: such a system would produce output so incomprehensible to us that it would be virtually useless.

As it turns out, we do not currently need an artificial brain-like entity with synapses that can fire millions of times faster than ours; we need the global political will to apply existing theoretical solutions to the practical issues mankind is facing on its home planet. And that definitely won't happen.

in reply to Hypolite Petovan

@Hypolite Petovan @Hank G ☑️ Fair enough. There are certainly plenty of scientists looking at both, but funding to apply those practical solutions does require political will, which, alas, is based on the whims and feelings of those in power, not scientific theory. Given the human propensity to avoid pain and focus only on the dangers right in front of us: easier said than done.