A machine cannot feel embarrassment, therefore it should not be trusted with a critical decision
I’ve been trying recently to articulate the reasons why I don’t think LLMs are a trustworthy technology. It’s a shame that the machine learning profession has largely had its reputation stolen and twisted by LLMs.
Machine learning is great! Machine learning can help find the good photos in your sprawling photo library. It can be used to find photos of your family. It can recognize patterns humans struggle to spot, and produce useful outputs based on them.
But LLMs are a different beast altogether. As commonly used in chatbots, they break one of the fundamental rules of our own cognition: for all of human history, the use of language has been enough to differentiate a real person from an automaton emulating one. You have been able to trust that if you can have a conversation, you are speaking with an intelligent being with interiority: thoughts, feelings, and emotions.
LLMs can express all those things, because those things exist in their training data. Every apologetic character from fiction. Every Reddit thread of someone venting their frustration. It's all present, but it's disconnected. It doesn't connect to an actual feeling, thought, or intention.
An LLM cannot anticipate being wrong. It can express remorse, embarrassment, hope, or fear, but it cannot actually feel them. That is truly the thing that separates us from them. And that, I feel, is what makes them dangerous for our brains.
In a professional context, say when I ask someone for a code review, there is a threshold of confidence their thoughts must pass before they'll express them. If they're unsure, they can ask for someone else's opinion. An LLM has confidence scores for its outputs, but it cannot feel concern about being wrong or about wasting someone's time, and it cannot seek input from a human to confirm or dispel its doubts.
It is very impressive that an LLM can take a question and formulate a reasonably coherent, convincing answer. But it cannot want to do a good job, and in fact it is trained to be confident above all else. That is why I don't believe this technology is going to stick around for much other than producing enormous quantities of garbage to trawl for ad revenue.
I am open to the idea that machine learning may get us there one day, but to me it's clear that LLMs are not that technology.