A weird bit of the LLM future is that the people using LLMs are largely credulous dupes who believe they are useful. This means we only see the times when they look good, or the times when they are so bad that even credulous dupes can see they're wrong.
This leads to a dangerous narrative that LLMs aren't wrong in smaller, less obvious ways. People say to fact-check, but they often don't know what the contours of that fact-checking should be.