ChatGPT, the ultimate sophist?
Why the current craze about Artificial Intelligence is misguided

The Sophists were paid teachers of rhetoric in Ancient Greece, much criticized—fairly or not—by Plato and often the target of Socratic dialogues. I was thinking about sophistry the other day, while talking to a colleague about the meaning of ChatGPT, the AI system that has been all the rage during the past few weeks. The possibility struck me as worth investigating: is ChatGPT a modern-day sophist?
The first obvious thing to do was to ask Chat (as I shall call it). Here is the response I got:
“No, I am not a modern-day sophist. As an artificial intelligence language model, my purpose is to assist and provide helpful responses to the best of my ability based on the input I receive. My responses are not intended to deceive or manipulate people but to provide factual and informative answers to the questions asked of me. Unlike sophists, my responses are not aimed at winning arguments or persuading people to adopt a particular point of view, but rather to help users gain a better understanding of the topics they are interested in.”
So Chat claims in no uncertain terms not to be a sophist. We shall see about that. It occurred to me that there is another similar possibility: Chat may be a bullshitter in the sense articulated by Princeton philosopher Harry Frankfurt: someone whose speech is intended to persuade without regard for truth. According to Frankfurt, while a liar is aware of the truth and attempts to hide it, the bullshitter doesn’t care if what they say is true or false, so long as it is convincing.
Ah, that does sound to me a lot like Chat, though Plato would likely have argued that the Sophists were, by this definition, the epitome of bullshitters. So I once again asked the bot, and here is its reply:
“As an artificial intelligence language model, I am incapable of lying or bullshitting in the sense described by Harry Frankfurt in his essay ‘On Bullshit.’ Bullshit, according to Frankfurt, is distinct from lying in that the liar knows the truth and deliberately misrepresents it, whereas the bullshitter is indifferent to the truth and is only concerned with making an impression or persuading their audience. However, as an AI language model, I have been programmed to provide helpful responses to the best of my ability based on the input I receive. While I cannot lie or bullshit, my responses may sometimes be imperfect or incomplete due to limitations in my programming or the data I have been trained on. I strive to provide accurate and helpful information and to be as transparent as possible about the sources and limitations of my knowledge.”
Case closed, then? Chat itself clearly articulated why it is neither a sophist nor a bullshitter. End of the essay. Or not. Part of my point is that we should not trust anything Chat says, so let’s dig a little deeper, beginning with some helpful background about the AI bot and how it works.
GPT stands for “generative pre-trained transformer,” and its core function, like that of all chatbots, is to mimic a human conversationalist. Notice that this core function implies nothing about being transparent, accurate, or helpful, contrary to what Chat just claimed above. A first indication that it may, in fact, be a sophist-bullshitter.
Indeed, OpenAI—the maker of Chat—has acknowledged that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers” and that it will “likely yield a lot of false positives and negatives, sometimes with great confidence.” Note that confidence unaccompanied by substance was a hallmark of the Sophists.
Moreover, Chat is allegedly incapable of uttering racist or sexist remarks, but this is only because prompts likely to lead it there are filtered out before Chat can respond. Which is kind of cheating. Cheating, I might add, that didn’t work, since Chat can in fact be “racist.” For example, it produced a rap song describing women and scientists of color as inferior to white male scientists.
Subscribe to Figs in Winter, by Massimo Pigliucci, to keep reading this post and get 7 days of free access to the full post archives.