In a recent publication,1 the authors introduced the concept of “Artificial Intelligence-Directed Scientific Production” (AIDSP), raising concerns about potential ethical conflicts in the use of artificial intelligence (AI) for scientific publications, particularly in terms of permission and regulation. We would like to present our perspective on these questions.
Scientific publications and AI: In our opinion, AI should be allowed in scientific publications. At its core, AI is a predictive tool: its predictive capabilities enable the efficient completion of numerous tasks and thereby increase productivity. By performing these predictive tasks, AI serves as a facilitator of scientific reflection, which is inherently generative. The value of AI lies in streamlining tedious tasks that add little to the final scientific product, while the value of science resides in interpreting results, not merely obtaining them.
The merit of scientific publications should be based not on the use of established theoretical frameworks, formatting, or grammatical quality, but on the generation of new theoretical frameworks or the adaptation of existing ones. We would like to emphasize that AI is not inherently generative; its primary function is to be predictive and exploratory.
The question should not be “AI or no AI?” but rather “How do we adapt to AI?” or “What are the best ways to optimize our work with AI?” Not adopting AI would put us at a competitive disadvantage compared with those who do. From our perspective, opposing AI would be analogous to opposing the printing press, or calculators for performing mathematical calculations.
Regulatory framework and AI: In our opinion, creating a regulatory framework for AI that goes beyond simply providing information about the AI tools used is challenging for the following two reasons:
Complexity: The inherent complexity of AI demands deep knowledge of diverse fields, including the theoretical, practical, and business aspects of AI, as well as extensive reflection on the philosophy of science and on law. Individuals with comprehensive knowledge across such diverse fields are unlikely to exist, and a lack of broad debate may result in bias.
Technological scope: Unlike conventional scientific production, many algorithmic developments and applications emerge outside the academic realm in a decentralized manner, often as open-source code. Major technology companies tend to index their scientific research within their own research agendas.2,3 Nothing would prevent Microsoft, for example, from using open databases (e.g., MIMIC-IV,4 SICDB5), developing algorithms based on them, and publishing the results on its website.
These factors make specific regulatory frameworks quickly obsolete or render general regulatory frameworks incapable of capturing the nuances of this rapidly evolving field.
In conclusion, we should focus on how to work with and report on AI to better understand its limitations. Embracing AI in scientific publishing requires addressing ethical considerations while acknowledging the need for appropriate regulatory frameworks.
Translation was performed using the GPT-4 language model (demonstrated in the supplementary material).
Financing: None.
Conflicts of interest: The authors declare that they have no conflicts of interest.