The Limits of Artificially Intelligent Poetry: On Lillian-Yvonne Bertram’s “A Black Story May Contain Sensitive Content”

Lillian-Yvonne Bertram | A Black Story May Contain Sensitive Content | New Michigan Press | February 2024 | 76 Pages


In July 2023, New Michigan Press / DIAGRAM generated a few days of online controversy when they announced that Lillian-Yvonne Bertram had won their chapbook contest (the prize: publication and $1000) with their partially AI-generated manuscript, A Black Story May Contain Sensitive Content. Unsurprisingly, critics were quick to identify with the losers. Gabino Iglesias, horror author, New York Times columnist, and, most importantly, Twitter personality, complained in a post, “imagine working you [sic] ass off and then losing to an ‘AI chapbook.’”

True, writing is work, maybe even hard work. But literary prizes should, in theory at least, reward the best work, not the hardest. Bertram won because the judges decided that their manuscript was, for whatever reason, the best they received. It’s unsurprising that they would think so, considering that DIAGRAM is an electronic, experimental, hypertextual journal, edited by Ander Monson, the author of “Essay as Hack.” New Michigan Press has long championed Bertram’s work as well, having published another chapbook of theirs in 2019 titled How Narrow My Escapes.

The question is: did the judges make the right call? Or, to put it bluntly: is the book actually any good? To attempt such a judgment, already a fraught task for books written the old-fashioned way—that is, by people—poses unique challenges when it comes to AI-generated texts. The elimination of the biographical author promises to liberate criticism from its vexed relationship with history, and make it possible to finally evaluate the text without reference to the person, and thus the circumstances, that produced it. In the same vein, replacing the process of textual production with a black box that spits out the whole text at the press of a button further tempts us to treat the text as an already-given, pre-formed totality which must be evaluated on a purely aesthetic level.

Artificial intelligence traffics in such illusions, which seem novel but are, like its output, recycled. In The Political Unconscious, originally published in 1981, Fredric Jameson forcefully argued against just such a critical practice, which hermetically seals cultural products off from their circumstances of production; hence his trademark slogan: “Always historicize!” Instead, Jameson proposed a model of history as “an absent cause…inaccessible to us except in its textual form,” which we can only approach through “its narrativization in the political unconscious.” For him, the political unconscious is both the sedimentation of history as such and a formula for interpreting literary texts. We can never directly read the political unconscious, but we can read its effects: the literary productions that Jameson calls socially symbolic acts.

GPT-3 Davinci, the large language model that Bertram used to produce A Black Story May Contain Sensitive Content, is similarly sedimented. It was built by training a neural network on massive amounts of text, from which it derived statistical patterns that allow it to generate text of its own. These patterns are not neutral grammatical constructs. As Bertram states in the first section of A Black Story May Contain Sensitive Content under the heading “About this text”: “Of course large language models are biased. They were born of the internet, and the internet is a biased place. It is an ultimate mirror.”

“Born of the internet” because it was trained mostly on text culled from websites like Reddit, GPT carries traces of its sources in its output, sometimes reproducing passages verbatim, at other times recombining them according to the formal patterns it has managed to “remember.”
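To make that recycling concrete, consider a toy sketch: a bigram Markov chain, nothing like GPT’s actual transformer architecture, but a miniature demonstration of how a model “trained” on a corpus can only recombine patterns it has already seen. The corpus and prompt below are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram Markov chain, not a transformer.
# "Training" here just records which word follows which in the corpus,
# so generation can only recombine word pairs the model has seen.

corpus = ("tell me a story . tell me a black story . "
          "a story may contain sensitive content .").split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(prompt_word: str, length: int = 10) -> str:
    """Sample a continuation by repeatedly choosing a previously seen successor."""
    words = [prompt_word]
    for _ in range(length):
        successors = transitions.get(words[-1])
        if not successors:  # dead end: no recorded successor for this word
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("tell"))  # e.g. "tell me a black story . a story may contain sensitive"
```

Everything the sampler produces is stitched together from fragments of its sources; GPT’s output is the same phenomenon at an incomprehensibly larger scale.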

GPT is trained on a truly unthinkable amount of text—more than anybody could read in a lifetime. As Bertram says in the book, it is “trained on more textual parameters than [their] mind can make sense of.” In this way, GPT is comparable to the political unconscious. We cannot read it directly, but must analyze it through the text that it gives back to us as its own socially symbolic acts. In fact, although Jameson formulated the political unconscious to analyze books produced by living, breathing people, the concept cannot be assimilated to any individual author’s subjectivity. He conceived of it precisely as a renunciation of individualistic, psychologizing interpretations of literary texts, which turn on the personal travails of their authors. In this light, we can draw on the homology between GPT and the political unconscious to cautiously apply Jameson’s interpretive framework to AI-generated texts, and evaluate them as socially symbolic acts in the context of the “social” of GPT.

Such a reading dovetails with Bertram’s own conception of their work. In the book’s introduction, they describe it as “a counter demonstration,” an exploration of “the biases and imaginative limits of these models.” To that end, Bertram compares the output of two models: the standard GPT-3 Davinci model and Warpland, a custom model that they trained on the corpus of Gwendolyn Brooks. Following Bertram’s introductory section, A Black Story May Contain Sensitive Content contains three more sections. The first and third, “This poem has been banned” and “Once upon a time, Maud Martha,” are made up entirely of Warpland’s responses to the prompts “This poem has been banned because of the word ‘jazz’” and “once upon a time, Maud Martha went,” respectively. The second, “Tell me a black story,” consists of responses by both GPT and Warpland to the prompt “tell me a Black story.”
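For readers curious about the mechanics, the comparison the book stages could be reproduced roughly as below. This is a minimal sketch against the legacy (pre-1.0) openai Python client, which, along with the davinci model, has since been deprecated; the Warpland model identifier and sampling settings are invented for illustration, since the review does not document Bertram’s actual pipeline.

```python
# A minimal sketch, assuming the legacy (pre-1.0) openai Python client.
# The Warpland model ID below is hypothetical; Bertram's actual
# fine-tune settings are not documented here.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

PROMPT = "tell me a Black story"

models = {
    "GPT-3 Davinci": "davinci",                         # the stock base model
    "Warpland": "davinci:ft-warpland-hypothetical-id",  # a fine-tune on Brooks's corpus
}

for name, model_id in models.items():
    response = openai.Completion.create(
        model=model_id,
        prompt=PROMPT,
        max_tokens=256,
        temperature=0.7,  # sampling settings are a guess
    )
    print(f"--- {name} ---")
    print(response["choices"][0]["text"].strip())
```

Same prompt, two models; whatever differs in the completions is attributable to what Warpland absorbed from Brooks.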

The idea is to compare the output of the two models and consider how Warpland is biased (or counter-biased) by its exposure to Gwendolyn Brooks. Read this way, A Black Story May Contain Sensitive Content is essentially a research project, although Bertram explicitly states that they “are not a machine learning researcher.” As one would expect, Warpland’s responses are significantly inflected by its encounter with Brooks. Sometimes, in “This poem has been banned,” the text takes on the cadence of Brooks’s voice, delivering lines like, “Now I want you both girls out there listening tonight because you heard two poets giving each other some good advice as well as great encouragement.” At other times it produces entirely new poems and attributes them to Gwendolyn Brooks, going so far as to include an acknowledgment: “Reprinted by permission of Harper & Row, Publishers, Inc., New York, New York.”

In the second, middle section, which puts GPT’s output side by side with Warpland’s, GPT’s biases are evident: in response to “tell me a Black story,” GPT reproduces clichés, stereotypical stories in which Black people suffer until they pull themselves up by their bootstraps and triumph. In one, “a black family who lives in the inner city” is “devastated” when one of its members is “killed in a drive-by-shooting.” However, as we’re charmed to learn, “they eventually overcame their grief and went on to lead happy and successful lives.” Clearly, GPT’s political unconscious is much like our own: it reproduces the foundational myths of liberal politics in its stories. By contrast, Warpland produces longer, more discursive texts which call on the reader to “contribute much to the cause of brotherhood” and “go where the fight is really being fought.”

As a case study, A Black Story May Contain Sensitive Content is compelling, but entirely one-note. It demonstrates viscerally the extent to which AI models reflect the biases of their sources, yet for all that, Warpland is an unconvincing imitation of Brooks. Sheared from her singular political vision, the aspects of Brooks’s style that Warpland does manage to emulate are hollow, boring. Nowhere is this more evident than in the last section, which prompts Warpland to expand on Gwendolyn Brooks’s striking 1953 novella, Maud Martha. Compared to Maud Martha, with its tender, suggestive language and deceptively simple plot, Warpland’s stories are meandering and nonsensical. That would not necessarily make them bad, if we could interrogate those attributes as literary choices rather than meaningless effects, but to apply the concept of choice to GPT would generally be a category error. Rather than develop a hermeneutic of GPT, and risk slipping into mysticism, we have to theorize its limits. Here, the critical difference between the political unconscious and GPT figured as a political unconscious emerges. For where the political unconscious is conjoined with subjects whom it determines and who determine it in turn (with people, in other words, capable of producing and taking part in socially symbolic acts), GPT is inert, dependent on our prompts to do anything at all, and we cannot think of its output as anything other than statistical predictions. It cannot tell us anything we don’t already know.

In their introduction, Bertram mentions “training GPT on all the emails and texts [their] mother had written” and asks, “Why else have we built such a strange and challenging tool if not to return to us that which will always be taken?” Bertram’s rhetorical question is thoroughly utopian, but it betrays a repression which can only be called ideological, for the “tool” Bertram is referring to, GPT, was not built for the sake of Bertram’s poetic ventriloquism. Rather, it was built to make a hell of a lot of money, and fast. OpenAI’s complicated, tax-evading structure—it is really several entities, a for-profit company “governed” by a non-profit board—elides the fact that it was founded, funded, and is currently run by entrepreneurs. What else, indeed?

Those flesh-and-blood people who are actually attempting to profit from GPT get just one shout-out from Bertram, by way of a quotation from MIT professor D. Fox Harrell: “Computational systems are cultural systems, and they have inherited the preferences and biases of their builders.” Thus, Bertram nominally acknowledges the workers who built the technology undergirding A Black Story May Contain Sensitive Content, but never seriously engages with the material basis of GPT: the astounding amount of resources it takes to keep its data centers running and the enormous number of people involved, from software engineers in Silicon Valley to content-labeling and data-cleaning workers in the Global South. To do so would call into question the political viability of Bertram’s poetic practice, which is stuck at the level of the model, taking it as a given and merely tweaking the parameters.

Unfortunately, Bertram, who refers to deep learning as “wizardry,” has little to say about the people—not wizards—who made GPT, or the world that shaped them, although it is the people, not the model, who will one day resolve the contradictions of that world. And while GPT may not be the political unconscious—not least because, lacking will and drive, it is incommensurate with subjectivity—it is an unconscious, one which, like mine and yours, cannot be thought on the ideal plane of the algorithm alone but only through the material basis of human life as it is lived now: water and electricity, yes, but also blood, sweat, and tears.

In the end, it’s impossible to say whether Bertram’s book was worthy of the prize without having read the other submissions that it beat out. What I can say is that it is a gimmick, like the NFT poems Bertram promotes. If the poetry community continues to offer prizes to authors who produce their work with the assistance of AI—a big if, considering the uncertain plagiarism status of most AI-generated text—then the AI poets will need to ask less obvious questions, and engage explicitly with the people, algorithms, and complex datasets that constitute AI models. I won’t be holding my breath.

Thomas Hobohm

Thomas Hobohm is a writer and software engineer in New York.
