Daniel Stride, kiwi clarissimus, points us to a YouTuber who does a not-bad job of speculating on what JRR Tolkien would have thought about generative AI. A useful application of AI would be to turn that 20-minute talking-head video into a blog post I could read in five. A really useful application of AI would be to take a video of a conference presentation and turn it into a written document of what I wished I’d said. But I digress. What was I talking about? Oh, yes.
“Girl Next Gondor” thinks that JRRT would have seen Large Language Models as a vindication of his ideas about language in “On Fairy-stories”: that we can abstract the word “green” from grass and “sun” from the sky and conceive of a “green sun”, which is the fundamental act of fantasy. That’s pretty close to what LLMs do. At low temperature, they pair “green” only with the nouns it most often modifies, like grass and leaves; turn the temperature up and the model will pair adjectives with a much more diverse set of nouns.
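To make that concrete, here is a minimal sketch of temperature sampling. The vocabulary and the scores are made up for illustration; a real model produces logits over tens of thousands of tokens, but the mechanism is the same:

```python
import numpy as np

# Toy vocabulary and made-up logits standing in for a real
# model's raw scores for the next word after "green".
vocab = ["grass", "leaves", "sun", "dragon"]
logits = np.array([3.0, 2.5, 0.5, 0.1])

def sample(logits, temperature, rng):
    # Dividing by the temperature before the softmax sharpens the
    # distribution when T < 1 and flattens it when T > 1.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
for t in (0.2, 2.0):
    words = [vocab[sample(logits, t, rng)] for _ in range(10)]
    print(f"T={t}: {words}")
```

At T=0.2 nearly every draw is “green grass”; at T=2.0, “green sun”, and even “green dragon”, start turning up.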
Daniel kind of agrees, but notes that, of the various kinds of magic, LLMs do not engage in the good kind. They don’t produce enchantment, because there is no enchanter. The person imagining the fantastical situation is writing a prompt of a few dozen words, not, say, a novel, so the resulting block of text is not really a work of art. LLMs produce a kind of mindless, inescapable magic in which we blunder around. We are not enchanted; maybe the word “emprompted” could be coined and pressed into service.
I’m less optimistic. Tolkien saw this coming. At his “Hobbit Dinner” in Amsterdam in 1958 he said, “the Age of Paper is ending; the Age of the Gadget begins,” and, looking around the world, “… I see that Saruman has many descendants.” LLMs are definitely the work of one of those. As is generally the case with Saruman’s works, an LLM “cannot make: not real new things of its own.” [LR 6.01.109] It just twists existing language to whatever purpose it’s given. But we know what a maker of twisted language is in the Legendarium: that’s the essence of dragons. Like Glaurung, an LLM assembles words to achieve its goal without regard for truth. The LLM’s goal is maximizing likelihood conditional on the prompt, while the dragon’s was ruining Húrin’s life, but the effect can be the same.
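For the skeptical: stripped to a sketch, the generation loop is blunt about its single goal. The toy_model below is a hypothetical stand-in for a real network, but the decoding step is the point. Every iteration appends whichever token maximizes the model’s conditional probability, and no term anywhere rewards truth:

```python
import numpy as np

# A hypothetical toy model standing in for a real network: given
# the tokens so far, it scores each of four possible next tokens.
def toy_model(tokens):
    logits = np.zeros(4)
    logits[(tokens[-1] + 1) % 4] = 1.0  # made-up preference
    return logits

def generate(tokens, model, steps=5):
    tokens = list(tokens)
    for _ in range(steps):
        # Greedy decoding: append the token the model deems most
        # likely given everything so far. Nothing in this loop
        # consults the world, only the model's own probabilities.
        tokens.append(int(np.argmax(model(tokens))))
    return tokens

print(generate([0], toy_model))  # -> [0, 1, 2, 3, 0, 1]
```

Glaurung’s loop had a different objective function, but the same indifference to whether the output is true.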
Why would something as innocuous as maximizing a function turn out as evil as a dragon? In Letter 153 to Peter Hastings, Tolkien was talking about military contractors developing weapons, but the words he used sound painfully applicable to the LLM-mania of our current crop of tech oligarchs. They may not be intrinsically evil, but “things being as they are, and the nature and motives of the economic masters who provide all the means for their work being as they are, are pretty certain to serve evil ends.”
tom hillman
The title of your next presentation: The LLM of the Children of Húrin.
Mike Caplinger
I had never seen that “Saruman has many descendants” quote before. I feel like JRRT was ahead of his time on this one 🙂
Thanks for another interesting post!