Since you have an internet connection, Dear Reader, I guess you’ve heard about ChatGPT.  The Web is full of people arguing over what consciousness is and whether a Large Language Model (LLM) can have it. I don’t care to speculate on that; what interests me is that Owen Barfield created such an appropriate way to think about it a hundred years ago. This is all in his book Poetic Diction, which we in Tolkien scholarship know about because Verlyn Flieger told us about it in Splintered Light.

[Portrait of Owen Barfield, from Wikipedia]

The part of Barfield’s work that applies here is his idea that humans invented language with words for large, unified concepts. Breath, wind, and spirit, for example, weren’t three different words back then; people had a single thought that we’ve since split up (splintered, if you will) into subconcepts. The farther back linguists go, the more semantic unity they find. At the furthest depth of time to which linguistics can take us, it’s kind of amazing how many modern concepts trace back to a single Proto-Indo-European root.

This splitting enables us to work with concepts that are more abstract than anything our ancestors had to deal with, but Barfield saw it as removing the poetry from language. He phrased it as “the decline of language into abstraction” (p. 122). It’s anti-poetic. Now, after a few millennia of the process, we’ve reached the point where poets make new meaning by taking two splintered words and putting them in unexpected contact.1 (p. 116)

I have nothing against splintering ideas and abstracting them.2 It’s what humans do, the way a prism splinters light. Pace Gandalf, that’s a good thing. It’s how we know as much about the universe as we do. It’s the intellectual equivalent of division of labor and specialization. But, just as specialization means people lose their broad range of skills, something is lost in the process. The myths that Tolkien saw as essential to the creation of language3 are gone now. As Barfield put it, “The myths still live on a ghostly life as fables after they have died as real meaning.” (p. 146)

Large Language Models take the splintering of language to its logical extreme. GPT-3 splinters its input corpus into “tokens”, pieces often smaller than words, and uses 175 billion parameters to encode the statistical relationships among them. And at the end, exactly as Barfield conceived it, the meaning is completely gone. The myth has been electrolyzed into component atoms and has ceased to exist. LLMs generate text without meaning, mixing truth and falsehood like a dog mixing paint colors, though the reader is free to impose meanings on it (and often unable to avoid doing so). There is one narrow pathway for human judgment in their construction: ChatGPT in particular was refined with “reinforcement learning from human feedback”, in which hundreds of human beings graded its texts during the training phase, marking which ones sounded right and which sounded wrong. That prevents complete gibberish, but I doubt that path is broad enough for actual meaning to travel along.
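Just to make “text without meaning” concrete, here’s a toy sketch of the principle (my own illustration, not anything from Barfield or OpenAI; nothing this crude is inside GPT-3, which uses a transformer network rather than a lookup table). It’s a bigram model: it strings words together purely according to which word has followed which, with no concept behind any of them.

```python
import random

# Toy bigram "language model": it knows only which word has followed
# which in its corpus, and nothing about what any word means.
# (An illustration of the principle, not of GPT-3's actual design.)
corpus = ("the wind is the breath of the spirit and "
          "the spirit moves on the wind").split()

# Tabulate word-to-next-word transitions seen in the corpus.
transitions = {}
for word, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(word, []).append(nxt)

def generate(start, length=12):
    """Emit fluent-looking text by sampling successor words at random."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# e.g. "the spirit and the breath of the wind is the spirit moves on"
# Grammatical-ish and utterly meaningless: statistics without a referent.
```

Everything a real LLM adds on top of this (attention, context windows, those 175 billion parameters) makes the statistics enormously better, but nowhere in the pipeline does a referent enter.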

No, a world full of LLMs will need poets. It’s easy to tell the difference between human-generated verse and computer-generated. As the models improve, more people will be fooled, but not all the people all of the time. Barfield predicted it: the poet’s job is “in certain respects to fight against language, making up the poetic deficit out of his private balance”. (p. 116) Computer programs have no poetry; it’s easy to imagine LLM-generated code taking over the software industry long before LLMs affect more human works.4 We may be headed for a world in which concerned parents push their college-bound children away from degrees in computer science: “How will you ever get a job with a degree like that? You need to become a poet, like your cousin!”


Notes

  1. I can’t help thinking of protons in the Large Hadron Collider.
  2. Also known as “analysis”.
  3. “It would be more near the truth to say that languages, especially modern European languages, are a disease of mythology.” (Tolkien, “On Fairy-Stories”)
  4. Your Idiosopher is aware that corporate mission statements and annual reports can be synthesized perfectly well by LLMs; this does not affect the conclusion.