While catching up on the morning news as usual, I came across an article in The Daily Beast titled “One Doctor Published Several Research Papers at Breakneck Speed. ChatGPT Wrote Them All.”
As a researcher, I was very curious. I’m not against the idea of using AI to help conduct research or even write a paper, but as I finished reading the article, a shudder of dread ran through my body.
The researcher behind the work, University of Tennessee radiologist Som Biswas, submitted an article written mostly by ChatGPT to the journal Radiology. Biswas disclosed ChatGPT’s role to the editor ahead of time; the article underwent peer review and was subsequently published. After this initial success, he used OpenAI to write several more papers within four months, publishing five of them in different journals.
Although I can certainly see value in OpenAI supporting research and even, perhaps, writing for people who have difficulty producing quality prose, there is an undercurrent in the thinking of Biswas and others that is summed up neatly in a single paper, written by a group of French rheumatologists, titled “ChatGPT: When Artificial Intelligence Replaces the Rheumatologist in Medical Writing.” In an upbeat tone, the authors declare: “…for researchers who are already prolific, one can imagine that with AI, their output could be doubled or even tripled.” The authors go on to conclude that “AI represents an important step forward in helping to create original scientific work” (emphasis added).
I’m not arguing that there’s necessarily something wrong with using AI to assist with research writing. (I’m simply not sure about this at the moment.) However, I see the thinking in the paper as another cog in the neoliberal machine that quantifies research and emphasizes numbers at the cost of quality.
It is clear that citation counts and publication counts are the main measures by which research faculty are evaluated, along with the perceived prestige of journals as gauged by impact factors. Thus, using OpenAI to increase “productivity,” conceived only in terms of publication volume rather than the quality of what is published, serves to further entrench the neoliberalism that is polluting higher education institutions and eroding the quality of research output.
If widespread use of OpenAI becomes the norm in research publishing, the ongoing assault on quality will intensify. In particular, predatory journals will increasingly prey on junior scholars who need publications in order to appear productive, even if the journals they publish in are worthless.
And the already exploitative for-profit academic publishing industry will become even more rapacious as supply outstrips demand, driving up the fees publishers can charge authors to publish the fruits of their labor. Indeed, OpenAI will enhance the ways in which such publishers extract most of the labor needed to produce their products (in the form of reviewing and authoring) without having to pay the people doing the work.
Under the current neoliberal regime in higher education, the use of OpenAI will increasingly push research into a zero-sum game in which those who are most productive at churning out AI-generated research will win the competition for raises, promotions, and research grants, all based on performance measures that prioritize quantity over quality.
In the Daily Beast article, Brett Karlan, a postdoctoral fellow in AI ethics at Stanford University, points out: “[G]iven the very real pressure to publish, I think academics will start to rely on ChatGPT to automate some of the more boring parts of writing, and it’s most likely that the same people who write virtually unpublishable articles and submit them to predatory journals will find workflows automating this with ChatGPT.”
Karlan is right in this assessment. But the problem lies neither with the researchers nor with ChatGPT; it lies with the performance-rating system that underpins the neoliberal obsession with profits and quantification.
While publishing companies profit from their unpaid workforce, research institutions “measure” the performance of those workers by their number of publications. This pushes researchers to publish more, reducing quality and deepening their exploitation by publishing companies. OpenAI is perfectly positioned to bolster a dangerous system that has already taken a toll on research itself, as well as on the mental health of researchers under extreme pressure to publish more and more.
OpenAI is being touted as an academic game changer. I agree. And the new game is one in which quality is almost entirely crowded out by quantity, as countless new articles are constantly released, many of which no one will have time to read. That’s probably okay, since many of those same articles wouldn’t be worth reading under any circumstances.
What needs to happen now is for higher education institutions to begin developing a system of professional ethics that shapes how OpenAI is used in publishing, and that informs how scholars think about academic integrity.
OpenAI won’t go away, and if academic institutions don’t tackle the research and publishing issues it raises now, it will do further damage to an already severely broken system. Unfortunately, this will require higher education administrators to do something they may find both intimidating and uncomfortable: they will need to critically evaluate not only OpenAI itself but also the neoliberal values and expectations that are profoundly damaging the quality of research and education.