

ChatGPT, the ultimate sophist?
Why the current craze about Artificial Intelligence is misguided

The Sophists were paid teachers of rhetoric in Ancient Greece, much criticized—fairly or not—by Plato and often the target of Socratic dialogues. I was thinking about sophistry the other day, while talking to a colleague about the meaning of ChatGPT, the AI system that has been all the rage during the past few weeks. The possibility struck me as worth investigating: is ChatGPT a modern-day sophist?
The first obvious thing to do was to ask Chat (as I shall call it). Here is the response I got:
“No, I am not a modern-day sophist. As an artificial intelligence language model, my purpose is to assist and provide helpful responses to the best of my ability based on the input I receive. My responses are not intended to deceive or manipulate people but to provide factual and informative answers to the questions asked of me. Unlike sophists, my responses are not aimed at winning arguments or persuading people to adopt a particular point of view, but rather to help users gain a better understanding of the topics they are interested in.”
So Chat claims in no uncertain terms not to be a sophist. We shall see about that. It occurred to me that there is another similar possibility: Chat may be a bullshitter in the sense articulated by Princeton philosopher Harry Frankfurt: someone whose speech is intended to persuade without regard for truth. According to Frankfurt, while a liar is aware of the truth and attempts to hide it, the bullshitter doesn’t care if what they say is true or false, so long as it is convincing.
Ah, that does sound a lot like Chat to me, though Plato would likely have argued that Sophists were, by this definition, the epitome of bullshitters. So I once again asked the bot, and here is its reply:
“As an artificial intelligence language model, I am incapable of lying or bullshitting in the sense described by Harry Frankfurt in his essay ‘On Bullshit.’ Bullshit, according to Frankfurt, is distinct from lying in that the liar knows the truth and deliberately misrepresents it, whereas the bullshitter is indifferent to the truth and is only concerned with making an impression or persuading their audience. However, as an AI language model, I have been programmed to provide helpful responses to the best of my ability based on the input I receive. While I cannot lie or bullshit, my responses may sometimes be imperfect or incomplete due to limitations in my programming or the data I have been trained on. I strive to provide accurate and helpful information and to be as transparent as possible about the sources and limitations of my knowledge.”
Case closed, then? Chat itself clearly articulated why it is neither a sophist nor a bullshitter. End of the essay. Or not. Part of my point is that we should not trust anything Chat says, so let’s dig a little deeper, beginning with some helpful background about the AI bot and how it works.
GPT stands for generative pre-trained transformer, and its core function, like that of all chatbots, is to mimic a human conversationalist. Notice that this core function implies nothing about being transparent, accurate, or helpful, contra what Chat just claimed above. A first indication that it may, in fact, be a sophist-bullshitter.
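To make this concrete, here is a toy sketch in Python of the statistical trick at the heart of such systems. It is a drastic simplification of my own devising, not OpenAI's actual code: count which words follow which in a training corpus, then generate text by sampling probable continuations.

```python
import random
from collections import defaultdict

# A toy bigram "language model" (my own illustration, not OpenAI's code):
# it learns nothing but which words tend to follow which other words.
corpus = "the sophist speaks to persuade . the philosopher speaks to seek truth .".split()

# Record every word that follows each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Produce a fluent-sounding sequence by sampling frequent continuations."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # probable, not necessarily true
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the sophist speaks to seek truth ."
```

Note that this toy model can cheerfully splice together "the sophist speaks to seek truth," a perfectly fluent falsehood. Fluency, not truth, is the only objective, and scaling the same trick up to billions of parameters does not change the objective.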
Indeed, OpenAI—the maker of Chat—has acknowledged that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers” and that it will “likely yield a lot of false positives and negatives, sometimes with great confidence.” Note that confidence unaccompanied by substance was a hallmark of the Sophists.
Moreover, Chat is allegedly incapable of uttering racist or sexist remarks, but only because prompts that might lead it there are filtered out before Chat can respond. Which is kind of cheating. Cheating, I might add, that didn’t work, since Chat can in fact be “racist.” For example, it produced a rap song describing women and scientists of color as inferior to white male scientists.
And here is more evidence of cheating: according to the BBC, ChatGPT is not allowed by its human creators to “express political opinions or engage in political activism.” At least, that was the intention. But as we all know, good intentions often pave the way to Hell, and sure enough, research shows that Chat appears to have marked left-libertarian leanings, which led the conservative outlet The National Review to accuse it of being “woke.”
There are many other nefarious things of which Chat is already capable, beyond racism and sexism. It can write phishing emails and malware, for instance. And there are already hundreds of books for sale on Amazon that have been written by Chat. As if that particular platform didn’t already have a stunningly high ratio of crap to valuable entries.
And we are not done. Emma Bowman of NPR said that “there are still many cases where you ask it a question and it’ll give you a very impressive-sounding answer that’s just dead wrong.” Not to mention that Chat has been tricked into generating pro-Nazi arguments, and even into delivering instructions for making Molotov cocktails and atomic bombs. Great.
So why are software companies releasing difficult-to-control monsters like Chat? Sam Altman, the CEO of OpenAI, said that AI’s benefits for humankind could be “so unbelievably good that it’s hard for me to even imagine.” But wait: if the alleged benefits are so hard to imagine, then how does he know they will come? He also said, just to be balanced, I guess, that in a worst-case scenario AI could kill us all. Of course, that scenario too is based on absolutely no evidence or reason. He has just seen all the Terminator movies.
This whole story reminds me of an episode of the television sitcom The Big Bang Theory, about which I have written an essay (part of this collection). If you are familiar with the show you may remember that it is (especially in the early seasons to which this example belongs) about four really nerdy friends—a theoretical physicist, an experimental physicist, an astronomer, and an engineer—and their entirely un-nerdy but far more practical neighbor, Penny.
One day Penny enters the apartment where the four friends are conducting an “experiment” and asks what they are doing. They explain that they have been able to connect with some guy in China who can remotely, via computer, turn the lights of their apartment on and off. “But why?” asks a rightly puzzled Penny. “Because we can!” answer the boys in chorus. This is what the people at OpenAI sound like to me.
To be fair, despite being characterized by researchers like Anton Van Den Hengel as a “stochastic parrot,” Chat can be used for positive applications. A friend of mine who is a programmer says his job is now easier because he can describe a problem to Chat, get a draft of a program in response, and work from there, checking and improving on the draft. But of course he would never trust Chat to just write the thing and be done with it. Because nobody can actually trust Chat.
What about Chat’s artistic abilities? They are a thing as well, though there too there is controversy. Back in January Chat was asked to write a song in the style of songwriter Nick Cave. Cave was sent the result. His response was that writing a song is “a blood and guts business ... that requires something of me to initiate the new and fresh idea. It requires my humanness. … With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don’t much like it.”
The issue isn’t that Chat—and even more so future AI—cannot do a decent job at imitating writers, journalists, musicians, painters, and so forth. The issue is that that’s all it is, imitation. And we don’t value an essay, a story, a song, or a painting only because of its aesthetic qualities. We value it because it is a “blood and guts business” carried out by a human being.
How narrow-minded and speciesist of me, right? Can I not conceive of a forthcoming AI that will achieve “general” (i.e., human-type) intelligence and consciousness, and whose creative outputs would therefore be just as valuable as human ones? I can. But Chat ain’t it. And it ain’t even the first step toward it.
This has been made clear by Noam Chomsky, Ian Roberts and Jeffrey Watumull in a recent article in the New York Times entitled “The False Promise of ChatGPT.” Chomsky and Roberts are linguists, while Watumull is director of artificial intelligence at a science and technology company. These people know what they are talking about, though I have to admit that the article is sloppy in certain places, as if it were put together in a hurry to respond to the onslaught of press coverage about Chat.
Nevertheless, Chomsky et al. make several important points. They begin by explaining in plain words what Chat actually does: “Roughly speaking, [it] takes huge amounts of data, searches for patterns in it and becomes increasingly proficient at generating statistically probable outputs—such as seemingly humanlike language and thought.” In other words, Chat truly is the sort of artificial parrot that Van Den Hengel described.
The crucial question is whether Chat understands and experiences what it does. Because without that it is just a parlor trick, albeit a very sophisticated and potentially useful (or dangerous) parlor trick. According to Chomsky and colleagues, though, “understanding has not and will not—and, we submit, cannot—occur if machine learning programs like ChatGPT continue to dominate the field of AI.” Why not? Because “the human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching. … The human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations. … [AI] programs are stuck in a prehuman or nonhuman phase of cognitive evolution.” They continue:
“The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: to be right, it must be possible to be wrong. [Human-type, general] intelligence consists not only of creative conjectures but also of creative criticism. … Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.”
Which is why “it is at once comic and tragic that so much money and attention should be concentrated on so little a thing,” and, by contrast, not on actual human intelligence and creativity (see, for instance, the abysmal state of funding for the arts and humanities in American universities).
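To see concretely what “trading merely in probabilities” amounts to, consider a caricature of my own (not, of course, how any production system is actually coded): a model whose “belief” in a statement is nothing but that statement’s relative frequency in the training data.

```python
from collections import Counter

# A caricature of "trading merely in probabilities" (illustrative only):
# the model's degree of "belief" in a claim is just its relative frequency
# in the training data, so contradictory claims coexist without friction.
training_data = [
    "the earth is round", "the earth is round", "the earth is round",
    "the earth is flat",  # a contradiction, and no alarm bells go off
]

counts = Counter(training_data)
total = sum(counts.values())

for claim, n in counts.items():
    print(f"P({claim!r}) = {n / total:.2f}")
# P('the earth is round') = 0.75
# P('the earth is flat') = 0.25
# Nothing in the model can notice that the two claims cannot both be true.
```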
One particular aspect of Chomsky et al.’s criticism struck me as exactly right: “Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct ‘scientific’ predictions (say, about the motion of physical bodies) without making use of explanations. … But this kind of prediction, even when successful, is pseudoscience.” To put it in other terms: to crunch a lot of data and use the emerging patterns to make predictions is the lowest form of investigation into the nature of the world, because it looks exclusively at correlations between phenomena. But correlation, as any good statistician will tell you, is not causation, and science is in the business of producing causal theories about how the world works. That’s why the type of AI that Chat embodies could never do science, though it could certainly be useful at uncovering the kind of patterns that scientists then use to formulate and test their causal hypotheses.
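The statistical point is easy to demonstrate. In the sketch below I correlate two made-up yearly series, hypothetical ice cream sales and shark sightings; both merely trend upward over time, so the correlation comes out nearly perfect, and yet no causal insight whatsoever has been gained.

```python
# Correlation without causation, with made-up numbers for illustration.
ice_cream_sales = [100, 120, 140, 165, 180, 210]  # hypothetical yearly figures
shark_sightings = [10, 13, 14, 17, 19, 22]        # hypothetical yearly figures

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(ice_cream_sales, shark_sightings):.3f}")  # close to 1.0
# A pure pattern-matcher would "predict" one series from the other with
# great confidence, while learning nothing about how the world works.
```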
In a sense, Chat is an actual embodiment of a thought experiment proposed by philosopher of mind John Searle back in 1980, known as the Chinese Room. The idea is to imagine a room inside which there is a person equipped with a Chinese dictionary. The person does not understand Chinese, but is capable of examining slips of paper written in Chinese that are inserted through a slot leading into the room, looking up the symbols in his dictionary, and writing down a translation in English, which he then slips out of the room through a different slot.
Searle’s point is that all the operator does is what computers in general do: manipulate symbols without understanding them. It’s all syntax without semantics, if you will. While the Chinese Room has generated decades of hair-splitting and logic-chopping on the part of Searle’s critics, the basic point remains: if we want computers to actually think, then we need to find a way for them to move up from mere syntax to semantics. And nobody, currently, has a clue about how to do that.
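For what it is worth, the whole thought experiment fits in a few lines of code. The sketch below, with a hypothetical two-entry “rule book,” is my own illustration of Searle’s point: the lookup succeeds, yet nothing in the room understands either language.

```python
# The Chinese Room as a lookup table (a minimal sketch of Searle's point,
# using a hypothetical two-entry rule book).
rule_book = {
    "你好": "hello",
    "谢谢": "thank you",
}

def chinese_room(slip: str) -> str:
    """Pure symbol manipulation: syntax in, syntax out, no semantics anywhere."""
    return rule_book.get(slip, "???")  # no matching rule, no reply

print(chinese_room("你好"))  # "hello" -- and yet nothing here understands Chinese
```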
Chat works in pretty much the same way as the Chinese Room (another, even closer analogy is Google Translate). Which is why it isn’t a first step toward anything other than faster (and not necessarily more accurate or reliable) data crunching. Impressive? Yes. Useful? Perhaps. The beginning of general AI? I don’t think so.
But let me get back to my initial question: is Chat a sophist? So far we have established that the purpose of Chat is not to seek the truth (like a philosopher or scientist) but rather to imitate human speech and persuade the user to keep using it. That’s what a sophist would do.
We have also ascertained that Chat is not reliable because it uses whatever piece of information it has without caring whether it’s true or not, its only criterion being that of producing a convincing output. The result is a mix of truths and falsities of which even Chat itself is not “aware.” That fits Frankfurt’s definition of bullshit, as well as Plato’s conception of sophistry.
But there is one more criterion that identifies a true sophist: he has to demand payment for his bullshit. Chat, at first glance, doesn’t qualify, as it is free and produced by a company that has “Open” in its name. Except that that’s also baloney. OpenAI’s valuation following the release of ChatGPT on November 30, 2022 is $29 billion. Also, while Chat is currently mostly free, OpenAI does have plans for monetization. Indeed, beginning in February 2023, US-based users can register for a premium service at $20/month. There is also a “professional” version at $42/month. And the software is no longer “open” but proprietary. All the boxes needed to declare Chat a sophist have now been checked. Buyer beware.
As a postscript, I put the title question, “ChatGPT, the ultimate sophist?,” to the Bot itself. As it explained:
There are some similarities between a chatbot like GPT and the ancient Greek philosophers known as sophists, but there are also significant differences.
Like the sophists, GPT is designed to persuade and influence through language. GPT uses natural language processing (NLP) and machine learning to generate responses to user input that are meant to sound natural and convincing. Similarly, the sophists were known for their ability to use language to argue any side of a debate and to persuade others to their point of view.
However, there are also important differences between GPT and the sophists. One of the key differences is that GPT is a machine, while the sophists were human beings. GPT is programmed to generate responses based on a set of algorithms and data inputs, while the sophists relied on their own intellect and rhetorical skills.
Another difference is that GPT is not capable of independent thought or critical thinking, while the sophists were known for their ability to think creatively and critically. GPT is only as good as the data it has been trained on, and it cannot come up with new ideas or think outside the box.
In summary, while there are some similarities between GPT and the sophists in terms of their use of language to persuade and influence, there are also significant differences in their underlying capabilities and limitations.
I asked Chat to write a sonnet about Death in preparation for a eulogy I was going to give, and it didn’t do a half-bad job. I tweaked a couple of lines, but it did most of the work.
O Death, our great and feared enemy,
You come to all, both the low and high-born,
Inevitable, you take us eventually,
To a place from which we cannot return.
You are the darkness that fills us with dread,
A mystery that none can fully understand,
From the moment we draw our first breath,
We know you wait with outstretched hand.
But though you come with oppressive might,
You cannot steal the memories that remain,
Of those who've passed beyond our mortal sight,
Those who've linked love's unbroken chain.
O Death, though mighty, you are behind us,
You cannot break the love that binds us.