Will ChatGPT Get Tenure?

Research will likely become ever more AI-assisted, but what does this mean for academia? In this blog post, our guest authors engage in blue-sky thinking to sketch some thought-provoking future scenarios and discuss potential implications for scholarly communication and research evaluation.

Since the release of ChatGPT in late 2022, the question of how AI will change academia has become a concern in almost every discipline. Social media and mailing lists alike are flooded with suggestions for new AI-based tools, workflows, or seminars. At the same time, many journal publications now come with some form of AI involvement – from analyzing and interpreting data to drafting whole sections. While it appears likely that future research practices will be even more AI-assisted, precisely what this means for the global academic system is still unknown. Here we want to engage in some blue-sky thinking and present a few, hopefully thought-provoking, sketches of the potential future of scholarly communication.

AI-assisted research could accelerate publishing and diversify research outputs.

For example, consider a textual explanation of a table, created by a model such as ChatGPT. The explanation could be generated on the spot by anyone accessing a research report and its statistical results online. Such an on-demand output could draw on the most recent AI model and reflect the specific preferences of the reader, for example the language, audio or visual output instead of text, the depth of the information, or a particular statistical framing. Such on-demand outputs might further democratize scholarly communication, making academic knowledge more accessible to everyone, including the wider public. But how will this affect the future role of fixed and static forms of scholarly information? Imagine AI could generate an article in any language on demand or dynamically add new studies to a living literature review. From this perspective, publishing AI-generated content as a static journal article seems much less useful. Turning AI models into mere static text generators not only undermines the potential of AI-assisted work, it also further increases the burden on reviewers, who already face an unprecedented workload. So, what does a “publication” look like in such a dynamic world of scholarly communication? And how could peer review of dynamic outputs be organized, or is it even possible in such a system?
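
To make the idea of such on-demand outputs a little more concrete, here is a minimal sketch in Python of how a reading interface might assemble a reader-tailored explanation. Everything in it is hypothetical: call_language_model stands in for whatever AI model a platform would actually use, and the preference fields are merely examples of what a reader might configure.

```python
from dataclasses import dataclass

@dataclass
class ReaderPreferences:
    language: str = "en"      # output language requested by the reader
    depth: str = "summary"    # e.g. "summary" or "detailed"
    framing: str = "plain"    # e.g. "plain" or "statistical"

def call_language_model(prompt: str) -> str:
    # Placeholder: a real platform would call whichever AI model it
    # provides; here we only return a stub so the sketch runs.
    return f"[model output for a prompt of {len(prompt)} characters]"

def explain_table(table_csv: str, prefs: ReaderPreferences) -> str:
    """Ask an (unspecified) AI model to explain a statistical table
    according to the reader's preferences."""
    prompt = (
        f"Explain the following table in {prefs.language}, "
        f"at {prefs.depth} depth, with a {prefs.framing} framing:\n\n"
        f"{table_csv}"
    )
    return call_language_model(prompt)

print(explain_table("group,mean\nA,4.2\nB,5.1", ReaderPreferences(language="de")))
```

The point is less the code than the design it implies: the “publication” is the underlying table plus a generation step, not a fixed paragraph of text.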

AI-assisted research will affect the role of publishing venues on a systemic level.

In our envisioned world, AI could extend a publication system that is already shifting towards smaller and more diverse outputs and micro-publications. AI-based tools would add on-demand explanations or translations to these collections of data and figures. This could allow for more fine-grained searching that considers individual parts of papers and their ontologies. Going further, AI-based tools might create syntheses and summaries on ever broader scales. Against this background, new patterns of centralization seem likely, with research platforms or specialized repositories contesting the traditional roles of journals and publishers. Rather than serving publications to human readers, they would provide the databases that AI models draw on. So, will we even need journals if human readers can retrieve knowledge directly from AI? Of course, this also affects evaluation regimes in which journals have taken center stage. We are curious what the future roles of academic journals and peer review will look like. Quantitative evaluation systems will likewise be challenged: how can they account for a more dynamic form of research communication? What kind of metadata and indicators will be required for assessing quality and impact?
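
As an illustration of what such fine-grained, repository-centred infrastructure might index, here is a small hypothetical sketch in Python. The record fields, identifiers, and ontology terms are invented for the example and do not correspond to any existing platform.

```python
from dataclasses import dataclass, field

@dataclass
class MicroPublication:
    """Hypothetical metadata record for a single fine-grained output
    (a figure, a dataset, a result) as a repository might index it for
    retrieval by AI models rather than for human browsing."""
    identifier: str                      # e.g. a DOI or repository handle
    output_type: str                     # "dataset", "figure", "result", ...
    parent_work: str = ""                # larger study it belongs to, if any
    concepts: list = field(default_factory=list)  # ontology terms

def search(records, concept):
    """Fine-grained search over parts of works rather than whole papers."""
    return [r for r in records if concept in r.concepts]

records = [
    MicroPublication("doi:10.1234/example.fig2", "figure", "doi:10.1234/example",
                     concepts=["peer review", "turnaround time"]),
    MicroPublication("doi:10.1234/sample.data", "dataset",
                     concepts=["citation counts"]),
]
print([r.identifier for r in search(records, "peer review")])
```

A platform exposing records like these could serve both human search interfaces and the databases that AI models draw on.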

AI use will challenge existing formats for research evaluation and create the need for new forms of assessment.

These ideas make clear that AI use will also fundamentally impact research evaluation. On the one hand, AI might enhance and support traditional evaluation procedures. The use of AI in writing manuscripts may, for example, reduce language-based bias, where a paper is rejected or recommended for major revisions because of language and communication issues. This could level the playing field for non-native English speakers. Existing evaluations might then focus more on research quality than on rhetorical ability, because the underlying modes of academic publishing have changed.

On the other hand, some developments will challenge existing formats and logics of research evaluation and create the need for new types of evaluation. The abovementioned shift from journals to platforms and AI interfaces would fundamentally undermine journals’ role as the locus of peer review and thus as guarantors of publication quality. Their perceived standing as proxies for research quality would diminish, and reputational markers such as journal impact factors and journal rankings would also lose meaning. This has substantial consequences for evaluation procedures that are based on the reputation of journals. Alternative indicators such as article-level metrics might thus become more prominent. Such metrics might, however, also be challenged by highly dynamic publication types that make the idea of a research output less discrete and tangible. Today, constantly updated literature reviews, versioned preprints, and software publications already offer a glimpse of this future. While current metrics focus on measuring the performance of outputs (e.g. stable research papers), there may be a need for more process- or throughput-oriented metrics in the future. Such metrics could track versions of articles over time, consider update intervals and views per version, visualize changes in an article’s content, and even support new forms of citation that take into account the potentially changing nature of the cited piece.
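
What process- or throughput-oriented metrics might look like in practice remains open, but a minimal sketch helps illustrate how they differ from counting stable outputs. The record fields and the indicators below (number of versions, mean days between updates, views per version) are purely hypothetical examples, not a proposal.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class ArticleVersion:
    published: date
    views: int

def process_indicators(versions):
    """Toy 'throughput' indicators for a living, versioned article:
    how often it is updated and how each version is used."""
    versions = sorted(versions, key=lambda v: v.published)
    gaps = [(b.published - a.published).days
            for a, b in zip(versions, versions[1:])]
    return {
        "n_versions": len(versions),
        "mean_days_between_updates": mean(gaps) if gaps else None,
        "views_per_version": [v.views for v in versions],
    }

history = [
    ArticleVersion(date(2023, 1, 10), views=420),
    ArticleVersion(date(2023, 7, 2), views=310),
    ArticleVersion(date(2024, 3, 15), views=180),
]
print(process_indicators(history))
```

Indicators of this kind would describe the life of a publication rather than a single frozen snapshot, which current article-level metrics are not designed to capture.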

All in all, we assume that academic publishing will become more open and dynamic, and that this will affect the roles of authors and publishing venues. We’d like bibliometricians and evaluators alike to keep asking: ‘what is valuable for the members of the communities we observe?’ Remaining open to this fundamental question means meeting the AI revolution creatively and independently of how it is shaped by private firms. It can also help to set the right incentives and achieve the intended consequences when building new assessment frameworks.

Together with our colleagues at DZHW and the wider community in science studies, we want to keep track of how AI will change scientific practice. Our working group “Automation and AI in Academic Publishing” consists of Otmane Azeroual, Göde Both, Judith Hartstein, Felicitas Hesselmann, Max Leckert, Matteo Ottaviani, Sabrina Petersohn, Marion Schmidt, Alexander Schniedermann, Dimity Stephen, Christoph Thiedig, Theresa Velden, and Lautaro Vilches Hernandez. Contact: hesselmann@dzhw.eu

Header image: Pixabay
DOI: 10.59350/qghy4-j1q63
