We wish to thank Professors Lazcoz Moratinos and de Miguel Beriain for their comments1. We fully understand their concern about the scientific and legal challenges posed by the use of artificial intelligence in the management of critically ill patients. However, we wish to offer some considerations in this regard.
The clinical decisions made by intensivists are also based on a learning process much like the one used by an algorithm. Likewise, the “intuition” we rely on in daily clinical practice could be dismissed as unfounded or unexplained, and yet it actually rests on a process similar to that used by artificial intelligence. These decision-making processes are subjective and do not fall within any legal framework. In a manner of speaking, clinicians are themselves a kind of “natural artificial intelligence”.
When doctors use antibiotics, they may not fully understand the molecular mechanisms by which the drug kills bacteria. Although it is desirable that the mechanism of action and the “biological plausibility” be known, if clinical studies with sufficient numbers of patients show that the use of an antibiotic improves the patient's prognosis, using it is completely justified. Fleming did not know how penicillin worked when he started using it. Moreover, antibiotics can have side effects and, in very isolated cases, even cause death. Should we then stop using them?
The excessively rigorous implementation of the regulatory framework for data management in clinical studies is already having negative consequences for progress against conditions like Alzheimer's disease or diabetes2. A reasonable regulatory framework would give artificial intelligence the same recognition that antibiotics and other drugs enjoy, requiring the same verification of safety and efficacy in clinical trials. If an artificial intelligence-based algorithm for the management of shock in septic patients were shown in clinical trials to reduce mortality with fewer side effects, should we refrain from using it simply because the clinician does not understand exactly how it works? This degree of scrutiny is not applied to other novel therapies, especially considering the human cost of living without new therapeutic tools like this one.
Moreover, progress is being made in understanding the “reasoning” processes behind artificial intelligence-based tools3 (much as was done to understand how penicillin disrupts the bacterial cell wall), to the point that a few years from now we may be able to understand why an algorithm makes one decision or another.
Artificial intelligence can be a very useful tool in the near future and improve our management of critically ill patients. In fact, it is already being used in other specialties4. We recommend against doing ourselves a disservice with extreme legal arguments for protecting patients, who would likely be the ones to suffer the consequences of having therapeutic opportunities derived from artificial intelligence taken away from them. Under an overly demanding legal framework, we would still be dying of pneumonia for want of penicillin.
Please cite this article as: Núñez Reiz A, Sánchez García M. En respuesta a «Big Data Analysis y Machine Learning en medicina intensiva: identificando nuevos retos ético-jurídicos». Med Intensiva. 2020;44:320–320.