It’s easy to ask ChatGPT a question and either be content with the answer or, in some cases, dismiss it as pure invention, or hallucination. Humans often give faulty information during a conversation. But because we have some understanding of human psychology, we will typically ask ourselves, depending on how we perceive the context: “Why did she get that wrong?” or even “Why is he lying?” We can then seek answers to that question either by deduction or by further interrogation.
Take any assertion made by a politician. We can pretty much count on politicians to bend even the most incontrovertible truth. Unless we are unbending partisans or willfully naïve, we may reflect that the politician in question is doing one of several things: presenting selective facts while hiding others, distorting reality to skew it towards a particular point of view, or quite simply giving people what they want to hear, however inaccurate that may be. We don’t say the politician is hallucinating. But we may call them delusional, which reflects our belief that they are deliberately embracing a delusion.
In my ongoing conversation with ChatGPT, I sought to deepen the epistemological reflection with the following prompt:
“When thinking about the difference that exists between a conversation I may conduct with a human being or with an AI chatbot, it occurs to me that we need to think more carefully about the nature of the conversational relationship. One source of frustration with AI is related to human expectations about the trust we may have in a conversational partner. Perhaps in the case of AI, our expectations are artificially high. And the reason for that is that we expect AI to have no ulterior motives to distract it from objective truth.
In so doing, we tend to treat AI like Jonathan Swift’s Houyhnhnms, who can only speak the truth. Captain Gulliver, who admires their cold rationality, tells us they have no word in their language for lying. Similarly, if an AI delivers palpably false information, we are not likely to complain that it is lying. Instead, we say it is hallucinating.
But when dealing with a person whom we catch lying, we are likely to become indignant. We immediately suspect they have reasons for lying. In most cases, we seek to explore those reasons. If we can understand them, we can redefine our social or transactional relationship with them. With almost everyone we tend to define not just one level of trust, but possibly multiple levels. For example, I may have a friend I would trust in any emergency, but knowing her commitment to a political cause, I would not trust everything she says about controversial issues. I might well become indignant when I hear what I know to be canned propaganda.
This shouldn’t happen in a conversation with an AI chatbot. We may become frustrated and even angry, but not indignant when the chatbot hallucinates. As soon as we begin reflecting, we may pour hate or scorn on the humans who have created or programmed its algorithms. We may then ask questions about those people, their employers or the institutions they work for to unveil their motives for allowing this to happen.
So here is my question. What does this contrast between a human lie and an AI hallucination tell us about the difference in the status of truth for humans and machines? More specifically, if we continue to assume that machines are not motivated to lie, how should we assess the truth value of anything the machine asserts?”
“This is a fascinating question, and it gets to the core of how we relate to both truth and intentionality in our interactions with machines and humans. The difference in the status of truth between humans and machines seems to highlight the unique role of *intent* and *social context* in shaping our assessment of truth and trust.
When it comes to humans, truth is often interwoven with ethical and social expectations. We trust or distrust based on a person’s motives, history, and the context in which they’re speaking. If a friend tells us something we later find to be untrue, we seek to understand their reasons and motivations. We might forgive a “white lie” told to spare our feelings or become indignant over a self-serving deception. In these cases, the truthfulness of a statement is measured not just by its accuracy but by the perceived *intent* behind it. This is why catching someone lying often leads to a breakdown in trust: the lie implies they had a reason to deceive us, potentially to serve their own interests at our expense.
In contrast, machines like AI chatbots are not generally perceived to have personal motives. When an AI “hallucinates” or provides inaccurate information, we attribute this to limitations in its programming, training data, or model architecture, not to intent. This lack of intent removes the ethical dimension of lying or deceit from the equation. The AI isn’t viewed as intentionally deceptive but as a tool with a certain probability of error, much like a calculator that occasionally malfunctions. As a result, while we may be frustrated with AI inaccuracies, we don’t interpret them as moral transgressions.
This difference suggests that truth from a machine lacks the relational component that truth from a human possesses. With AI, truth value is assessed purely on an epistemic level: ‘Is this information correct or incorrect?’ rather than ‘Is this person trying to mislead me?’ This is why AI’s assertions are often treated as isolated claims of fact, with no implicit moral or relational meaning.
However, even if we assume that machines are unmotivated to lie, we still need to carefully assess the truth value of their assertions. This can be done by checking their claims for accuracy, examining the sources they draw on and remaining mindful of the biases built into their design and training data.
Ultimately, this contrast tells us that truth is a more complex, multi-dimensional concept in human relationships than it is in machine-human interactions. Truth between humans is deeply connected to trust, intent, and social context, while truth from a machine is evaluated more on the basis of empirical accuracy and reliability. AI’s ‘truth’ is informational, not relational, which means that even if it were 100% accurate, it still wouldn’t carry the same weight as human truth, which we inherently understand as part of an ethical and social framework.
In summary, without assuming personal motives in AI, we should assess its assertions by checking for accuracy, examining sources, and being mindful of inherent biases in its design. This approach allows us to engage with AI’s ‘truth’ more as a functional, information-based construct rather than a socially or ethically motivated one.”
The conversation didn’t stop there. I continued with a new prompt:
“So, should we conclude that because machine intelligence lacks a moral dimension, it represents a different category of communication that should be carefully distinguished from human-to-human communication? And if so, shouldn’t we, as a society, be working on elaborating a set of guidelines that will help us situate the level and quality of truth that emerges in the chatbot’s discourse? I would suggest even beginning by focusing on the notion of ‘truthfulness,’ a concept that has a different meaning when applied to humans and machines. We presume that a truthful human being is committed to pursuing the truth, not possessing it. We might presume that a truthful machine possesses the truth but cannot pursue it, or even be motivated to pursue it, without the involvement of a moral being; for example, in the form of a conversation. Does this accurately situate the difference between human and artificial intelligence?”
In next week’s column we will examine ChatGPT’s very interesting response to this question and further develop our exploration of the issue of “disinformation.”
Please feel free to share your thoughts on these points by writing to us at [email protected]. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.