Over the past few years, significant advances have been made in the field of artificial intelligence (A.I.). However, interest in its potential has grown exponentially since November 2022, when OpenAI (San Francisco, CA, United States) introduced a language model (LM) called ChatGPT (Chat Generative Pre-trained Transformer).
Since then, several branches of science have turned to this A.I. to take advantage of all it has to offer. In medicine, for instance, it serves as an aid in drafting scientific articles and has already demonstrated its ability to pass the MIR exam.1
However, could ChatGPT be used in intensive medicine to write scientific papers?
Any intensivist can pick up their smartphone and ask ChatGPT to recommend titles for an article in progress, suggest formulas for statistical analyses, list the most cited publications on a given topic, or summarize an article, with no restriction other than their own creativity.
The spectrum of A.I. extends beyond ChatGPT and includes applications that generate images from ideas. There are also tools such as «Perplexity A.I.», a search engine that generates answers with cited references; «PaperA.I.», which automates reference reviews; and «Writefull», which improves the writing of scientific papers.
With tools like these, ChatGPT would act as an «office secretary», saving us time when retrieving abstracts, searching for information, or structuring research methodologies, thus giving rise to A.I.-driven scientific publications (AIDSP).
What, then, would be the ethical implications of using ChatGPT in AIDSP?
The answer will depend on whether this assistance is merely structural, such as support with the theoretical framework, or extends to methodological or grammatical support.
Recent reports have described humans' inability to determine whether an article was written by another human or by an A.I. Ironically, other artificial intelligences capable of answering that very question have been developed at the same time. Indeed, there is an ongoing debate on whether an A.I. should be recognized as an author of a manuscript2; in some cases, ChatGPT has already been listed as the lead author.3
ChatGPT can also be used to discuss the implications of A.I. in the field of intensive medicine, such as those described by Reiz4: who owns the information, who is the «true» author, and whether the results generated could be restricted.
Cases in which ChatGPT has «lied», producing erroneous data when asked to summarize a given scientific article,5 fuel the debate even further. This may stem from an insufficient number of articles in the LM's training set and reflects the limitations of A.I., which include biased conclusions, erroneous citations, omission of significant papers, or outright plagiarism.
We, the authors, believe it is necessary to discuss this topic within our scientific society and to ask ourselves whether AIDSP should be allowed in intensive medicine. Is it enough to use A.I. merely as an assistant? Should scientific journals provide a regulatory framework? If so, what quality standards should be required?
Finally, we believe that disruptive technologies pave the way for a promising future. However, the first steps should be taken with caution.
And no. This article has not been drafted by an A.I.