Knowledge develops from the relationships between ideas. Large language models work by mapping these conceptual connections and generating new associations, an approach that can resemble a form of non-human “reasoning” rooted in patterns rather than lived understanding. Whether those associations prove meaningful depends heavily on the user’s prompts and on the user’s ability to recognize when the model has produced something genuinely useful.
Every response from a chatbot is newly created from the prompt in front of it, shaped by its training data and configuration. ChatGPT cannot “admit” wrongdoing or accurately introspect its own behavior, despite recent claims in The Wall Street Journal. Nor can it “endorse murder,” as asserted in The Atlantic.
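To make that concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name (“gpt-4o”); any chat-style API behaves the same way. Each call is an independent generation from the supplied prompt and the model’s frozen weights, so there is no stored record of past behavior for the model to “admit” to or introspect.

```python
# Minimal sketch, assuming the OpenAI Python SDK and an illustrative model
# name; the point generalizes to any chat-completion API.
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Did you do anything wrong yesterday?"}]

for _ in range(2):
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=prompt,
        temperature=1.0,  # sampling: the two answers may differ in substance
    )
    # Each call starts from the same prompt and the same frozen weights.
    # There is no stored record of prior behavior for the model to report on;
    # asking twice produces two independent generations, not one recollection.
    print(reply.choices[0].message.content)
```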
Users always direct the model’s output. LLMs can be seen as “knowing” things only in the sense that they can interpret and connect concepts based on patterns learned from massive datasets. These datasets contain countless conflicting ideas, perspectives, and cultural signals. The prompt determines which relationships the model emphasizes and which ideas get combined or ignored. If LLMs can produce insights, make connections, and process information at scale, then why not think of that as a form of self?
Because—unlike humans—LLMs have no enduring identity. Human personality persists over time and across interactions. When you reconnect with a friend after a year, you are speaking with the same person whose values, memories, and commitments have continued to develop. This persistent self underlies human agency and responsibility: it supports long-term decisions, stable beliefs, and the capacity to be held accountable.
An LLM-generated “personality,” by comparison, has no continuity across sessions. The system responsible for a witty or thoughtful reply in one conversation simply doesn’t exist in the next. If ChatGPT tells you, “I promise to help you,” it can reference the concept of a promise, but the “I” making that statement disappears the moment the message is delivered. Starting a new chat isn’t returning to someone who made a commitment; it’s opening a fresh conversation with the same underlying model and no record of any previous exchange.
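A sketch of that statelessness, under the same assumptions as above (OpenAI Python SDK, placeholder model name): the only continuity a chat has is the transcript the client chooses to resend, so a new message list is, from the model’s side, a first meeting.

```python
# Minimal sketch: the server keeps no state between calls, so "memory" is
# just the transcript the client resends. A new chat starts from nothing.
from openai import OpenAI

client = OpenAI()

# Session A: the model "promises" something only within this transcript.
session_a = [{"role": "user", "content": "Promise you'll help me tomorrow."}]
reply_a = client.chat.completions.create(model="gpt-4o", messages=session_a)
session_a.append(
    {"role": "assistant", "content": reply_a.choices[0].message.content}
)

# Session B: a brand-new message list. Nothing from session A is available
# unless the client copies it in; the "I" that made the promise is gone.
session_b = [{"role": "user", "content": "What did you promise me yesterday?"}]
reply_b = client.chat.completions.create(model="gpt-4o", messages=session_b)
print(reply_b.choices[0].message.content)  # cannot recall session A
```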
