
Large Servings of Slop: Writing and Research in the Age of AI

Meena Kandasamy is being regularly cited in academic research papers as well as online articles and blog posts with hallucinated quotes falsely attributed to her. Photo: X/@meenakandasamy
A few days ago, I met the poet and writer Meena Kandasamy in Chennai to invite her to an event at the university where I teach. As we chatted about different things, I happened to mention how hard Generative AI was making our lives as writing teachers. Kandasamy put her head in her hands and asked if I knew what was going on with her. I did not, and heard her ongoing saga with numb incredulity. We have all heard carelessly done work described as sloppy, but Generative AI has given the word 'slop' a whole new, alarming meaning that should worry every writer publishing today.

It all began with a university syllabus listing, on page 7, a poem by Kandasamy: 'Caste Out'. Kandasamy is no stranger to university syllabi, but she has not written this particular poem.

Kandasamy only found out about this when she started receiving requests for a copy of the poem.

Even if the university syllabus was a human error, Kandasamy generously allows, what has followed is AI slop. The term 'slop' refers to text hallucinated by Generative AI such as ChatGPT or Claude, here text that is also falsely attributed to an author, or to a source that does not contain it. Even as she was telling people that she has not written a poem by that title, YouTube tutorial videos began appearing with summaries of the hallucinated poem, aimed at helping students prepare for exams.


But this is not all. Kandasamy set up alerts to be notified when her work is cited online, only to uncover whole other buckets of slop. As it turns out, she is being regularly cited in academic research papers as well as online articles and blog posts with hallucinated quotes falsely attributed to her. Writers and researchers are clearly not reading actual sources or tracking down the provenance of the quotations they use. Or perhaps reading to write in the time of ChatGPT has come to mean generating AI quotes "in the style of…". It appears there is an alternate universe of AI-hallucinated slop being recycled through AI-generated literature reviews, followed by AI-generated and 'humanised' articles for publication. Kandasamy explains what it feels like to be at the receiving end of these farcical hallucinations:

"This is a violation for which I find no words. In LLM-GPT-generated AI slop, a pale imitation mimic-style string of words becomes the stand-in for you, the author. This is worse than plagiarism—because this is not theft of the text, this is theft of the author. This on-the-spot churning and fake attribution negates two decades of my work as a writer […] The endless proliferation made possible by the internet means that there is no way to stem this rot. Tomorrow, the fake quotes will end up in citations, dissertations, research papers. The imitation will replace the writer."


Imagine a hall of mirrors redoubling distortions all the way to infinity. Behind the researchers who write with Generative AI, and the journals that publish them without proper review and citation checks, lies a single purpose: boosting impact factors via citation counts. What we have is comeuppance for the borrowed academic truism of 'publish or perish'.

In the meanwhile, Kandasamy finds herself devoting time to a job she did not sign up for: getting YouTube videos with hallucinated work in her name taken down, writing to editors of journals and magazines to apprise them of the hallucinated quotes, and requesting that the articles be modified or retracted. Below is an example of a journal article that uses hallucinated quotes, along with a screenshot of the email Kandasamy sent to the editor:

The AI slop in this case, as Kandasamy pointed out, contains terribly regressive ideas that it would make no sense for her to think or say, quite apart from the fact that the actual poem has nothing to do with these lines. These lines are dangerously meaningless. Meaningless because, in the contexts she writes in, it would be absurd for Kandasamy to call herself a "brown woman", nor would she say that she writes in English because her people have never been oppressors, a dangerously simplistic and hard-to-defend claim. These lines should have given the researchers pause. But from the sentences of analysis that follow, it is clear not only that they have not read the poem 'Ms Militancy' or the collection it comes from, but that they are not even reading to respond to the AI-generated lines they have cited. The work of analysis is meant to be a demonstration of one's reading. But how can we expect analysis in the absence of reading, when the act of reading is now largely becoming one of reading AI-generated bibliographies and summaries with hallucinated texts?

For Kandasamy, there are weeks that bring more than one helping of AI slop. One of the editors of a publication that Kandasamy wrote to responded by saying that they had run the article through Quillbot, an AI detector, which passed it. An AI detector does not require or check for actual citations. Nor can it reliably detect AI content, as Kandasamy can testify. Perhaps publication platforms need to reinstate editorial processes that include painstaking citation checks, instead of leaving the work of bots to be reviewed for accuracy by other bots.

Kandasamy has been bringing attention to these violations on social media. She has been spelling out the contours of this deepfaked new geography of knowledge production and dissemination, one that seems to have found ground, without any pause or protest, in institutions of education, among researchers, among writers on different platforms, and among their publishers. These violations are no longer a matter of the niceties of scholarly ethics or of acknowledging AI as a collaborating partner. The reckless use of Generative AI comes with the clear and present danger of writers being held accountable for things they have never written. While we debate the ethics of AI, how does such a gross violation of one's identity and work land on the writer? Kandasamy responds:

"Awards or rape and death threats; young women putting my poems on their bedside or book burnings — my writing has exposed me to extreme hate, it has showered me with endless, abundant love. I own the both of them fiercely, protectively. To see a fake machine-generated quote is to lose the sense of myself; to lose my humanity, my vulnerability and my strength; to become something so programmable and predictable that it unsettles me."

This feeling of being unsettled cannot be Kandasamy's alone. Anyone who writes and merits being the subject of research should be afraid. The euphoria of discovering Generative AI (thanks to the solid push from its creators and social media influencers) and its potential possibilities dominates the conversation around this young technology. What is missing is a conversation about how Generative AI use in research and writing has, for the most part, already pulled the carpet from under our feet. In my classroom, I miss the days of good old plagiarism, when we could teach students how acknowledgement and citation would actually solve the problem, and could focus on showing them how to engage with the words and ideas of others in sentences of their own.

Generative AI use, certainly in the humanities and social sciences, is hollowing out the fundamental learning capacities of students and researchers, not to mention eroding the very concept of academic dishonesty. What Kandasamy is experiencing is just one of Generative AI's nightmarish by-products, with no real solution in sight.

Anannya Dasgupta directs the Centre for Writing & Pedagogy at Krea University, where she is also a faculty in the Division of Literature and the Arts.

This article went live on August 5, 2025, at 5:39 pm.
