Why AI Shouldn't Be the Future of Academia
Briefly

The emergence of AI in academia raises profound questions about the nature of scholarly work. While some view its adoption as inevitable, AI products generally compromise accuracy and creativity, contradicting the core values of research. Despite pressure to adopt AI, the case for opposition remains strong: reliance on AI may erode essential human qualities. Comparisons to other societal fabrications—like professional wrestling and cryptocurrencies—suggest that AI's perceived capabilities may be illusions, underscoring the importance of authentic scholarly pursuits over artificial shortcuts.
The widespread application of AI appears to be inevitable. The options are often presented as though academics divide into those who use AI well and those who use it poorly. As someone who has recently eschewed the use of AI, I have been called a Luddite and told that I will be left behind. Perhaps AI is inevitable, like mechanized textile work—but the supposed inevitability of AI is a marketing strategy of those selling a product, not a fact.
AI products typically reduce accuracy, innovation, creativity, humanity, and credibility, and they contradict the values of research and scholarly communication. Accuracy is not the goal of LLMs; seeming accurate is. That distinction raises questions about the true purpose of being a researcher, scholar, and academic.
There is a lot of artifice in influential aspects of society. Professional wrestling seems like athletic competition, but is scripted. Cryptocurrencies seem like an economy, but have no true backing or inherent value. Pornography seems like sex, but lacks human contact.
Read at Psychology Today