A lot of the current hype around LLMs revolves around one core idea, which I blame on Star Trek:

Wouldn't it be cool if we could use natural language to control things?


The problem is that this is, at the fundamental level, a terrible idea.

There's a reason that mathematics doesn't use English. There's a reason that every professional field comes with its own flavour of jargon. There's a reason that contracts are written in legalese, not plain natural language. Natural language is really bad at being unambiguous.

When I was a small child, I thought that a mature civilisation would evolve two languages: a language of poetry, rich in metaphor, that delighted in ambiguity, and a language of science that required precision and actively avoided ambiguity. The latter would have no homophones, no homonyms, unambiguous grammar, and so on.

Programming languages, including the ad-hoc programming languages that we refer to as 'user interfaces', are all attempts to build languages of the latter kind. They allow the user to unambiguously express intent so that it can be carried out. Natural languages are not designed, and they end up being examples of the former.

When I interact with a tool, I want it to do what I tell it. If I am willing to restrict my use of natural language to a clear and unambiguous subset, then I have defined a language that a deterministic parser can understand with a fraction of the energy requirement of a language model. If I am not, then I am expressing myself ambiguously, and no amount of processing can remove the ambiguity that is intrinsic in the source. The only thing that could is a complete, fully synchronised model of my own mind: one that knows what I meant, and not what some other person saying the same thing at the same time might have meant.
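
To make that concrete, here is a minimal sketch of such a restricted subset. The two-verb vocabulary ('play' and 'volume') is invented for illustration; the point is that every valid utterance has exactly one parse, a few string comparisons resolve it deterministically, and anything outside the subset is rejected rather than guessed at:

```c
#include <stdio.h>
#include <string.h>

/* A toy grammar for an unambiguous command subset:
 *   command := "play" <station> | "volume" <digit>
 * Exactly two tokens; unknown verbs and malformed input are rejected,
 * never guessed at. No model needed, just string comparisons. */
static int run_command(const char *input)
{
    char verb[16], arg[32], extra[2];
    /* %1s into `extra` detects a third token, so "play x y z" fails. */
    if (sscanf(input, "%15s %31s %1s", verb, arg, extra) != 2)
        return -1;                      /* wrong shape: reject */
    if (strcmp(verb, "play") == 0) {
        printf("tuning to station: %s\n", arg);
        return 0;
    }
    if (strcmp(verb, "volume") == 0 &&
        arg[0] >= '0' && arg[0] <= '9' && arg[1] == '\0') {
        printf("setting volume to %s\n", arg);
        return 0;
    }
    return -1;                          /* unknown verb: reject */
}

int main(void)
{
    run_command("play jazzfm");         /* accepted */
    run_command("volume 7");            /* accepted */
    if (run_command("play something relaxing please") != 0)
        puts("rejected: not in the unambiguous subset");
    return 0;
}
```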

The hard part of programming is not writing things in some language's syntax; it is expressing the problem in a way that lacks ambiguity. LLMs don't help here: they pick an arbitrary, nondeterministic option for the ambiguous cases. In C, compilers do this for undefined behaviour, and it is widely regarded as a disaster. LLMs are built entirely out of undefined behaviour.
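
A classic illustration of that disaster: signed integer overflow is undefined behaviour in C, so a compiler may assume it never happens and silently fold the guard below to "always false", deleting the very check the programmer wrote. The arbitrary option gets picked at compile time, invisibly:

```c
#include <limits.h>

/* Signed integer overflow is undefined behaviour in C, so a compiler
 * may assume x + 1 never wraps. Under that assumption "x + 1 < x" is
 * always false, and optimising compilers routinely reduce this body
 * to "return 0", removing the overflow check entirely. */
int will_wrap(int x)
{
    return x + 1 < x;   /* intended: detect x == INT_MAX; UB instead */
}
```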

There are use cases where getting it wrong is fine. Choosing a radio station or album to listen to while driving, for example. It is far better to sometimes listen to the wrong thing than to take your attention away from the road and interact with a richer UI for ten seconds. In situations where your hands are unavailable (for example, controlling non-critical equipment while performing surgery, or cooking), a natural-language interface is better than no interface. It's rarely, if ever, the best.

in reply to David Chisnall (*Now with 50% more sarcasm!*)

Natural language is for talking with other people, who we shouldn't control. I think there's a related problem in there somewhere.
in reply to David Chisnall (*Now with 50% more sarcasm!*)

Natural-language interfaces are akin to magic: they all rest on the assumption that intent is somehow, magically, recognised. Funnily enough, there are cautionary tales about magic that address exactly this aspect. Almost all stories involving wishes turn the phrasing against the one expressing the wish (see djinns, fairies, etc.).
in reply to David Chisnall (*Now with 50% more sarcasm!*)

I'm not so sure. I often express myself in natural language to ask people to do things, and that usually works out pretty well.

So it's possible in principle; it's just not something that computers can do yet. Maybe one day they will.

in reply to jarkman

My experience, shared with many neurodivergent people, is that neurotypicals, and even other neurodivergent people, very often misunderstand us, and vice versa, and the misunderstandings are occasionally very hard to recover from, becoming another source of discrimination against minorities. AFAICT, minds that work in one way build thoughts in ways that don't carry over very well to minds that work in other ways, especially when there isn't awareness of and tolerance for the differences. I've known people who can understand and "translate" expressions of thoughts in ways that enable people with different mind structures to communicate more effectively; it's an amazing skill. I wonder whether LLMs extend the experience of facing frequent misunderstandings to a majority of people, or whether they could help people translate between different mind structures and different perceptions of context, and avoid triggers.
in reply to David Chisnall (*Now with 50% more sarcasm!*)

>> Wouldn't it be cool if we could use natural language to control things?
> The problem is that this is, at the fundamental level, a terrible idea.

This is a terrible take, and you should really know better. It's no different from chastising people who use higher-level programming languages or Dreamweaver to make a website instead of studying HTML.

We can all agree that, for example, sitting down a person with no development experience and asking them to design a missile defense system for your country using natural language is a terrible idea.

We should all be able to agree that giving people a way to use natural language to build little apps, tools, and automations that solve problems nobody is going to build a custom solution for is a good thing.

in reply to feld

@feld

> This is a terrible take, and you should really know better. It's no different from chastising people who use higher-level programming languages or Dreamweaver to make a website instead of studying HTML.


I feel like you didn’t read past the quoted section before firing off a needlessly confrontational reply.

It is very different. If you give someone a low-code end-user programming environment, you give them a tool that helps them to express their intent unambiguously, and often more concisely than a general-purpose language would (at the expense of generality). That empowers the user, and it is a valuable thing to do.

> We should all be able to agree that giving people a way to use natural language to build little apps, tools, and automations that solve problems nobody is going to build a custom solution for is a good thing.


No, I disagree with that. Give them a natural-language interface and you remove agency from them. The system, not the user, is responsible for filling in the blanks, and it does so in a way that does not permit the user to learn. Rather than the user using the tool badly and then improving as a result of their failures, the system fills in the blanks in arbitrary ways.

A natural-language interface and an easy-to-learn interface are not the same thing. There is enormous value in creating easy-to-learn interfaces that empower users, but giving them interfaces that use natural language is not the best (or even a very good) way of doing this.

@feld
in reply to David Chisnall (*Now with 50% more sarcasm!*)

There's an argument to be made about processes that enable free, ambiguous expression of ideas in natural language, with progressive removal of ambiguities. Figuring out what questions to ask to help users understand and remove the ambiguities is a trainable skill, and perhaps it can even be machine-learned. There's a risk that users would then find such processes very hard to use, because of the huge number of questions they need to understand and figure out how to answer. But as they learn how to express themselves unambiguously, the system becomes easier and easier to use. At the end, users who survive the process enough times may have learned a programming language.
in reply to David Chisnall (*Now with 50% more sarcasm!*)

@David Chisnall (*Now with 50% more sarcasm!*) @feld "The system, not the user, is responsible for filling in the blanks" is such an important and valuable idea that explains all of the issues with NLP systems! Thanks - I need this specific idea!
in reply to silverwizard

This assumes the system won't be able to recognize the existence of these gaps and ask you what actions it should take if they're encountered. There's no rule that says the system must parse your natural-language prompt and return a final result immediately; I expect a mature system will converse with you about a complex problem before emitting the final result.
in reply to feld

I have a feeling that in a decade people will look back at all these conversations much as we look back at early e-commerce, saying, "What were we thinking?" The problems are actually not what we think, and the solutions are far more impactful.

In all my testing of AI, it's only getting better, and yes, you have to have a conversation with it, much like this entire thread, to figure out what is what. That's very natural to lots of humans. It's not a leap for this to become how these systems work.
