Critique of impure reason: Unveiling the reasoning behaviour of medical large language models
- Journal Title
- eLife
- Publication Type
- Review
- Abstract
- Despite the current ubiquity of large language models (LLMs) across the medical domain, there is a surprising lack of studies that address their reasoning behaviour. We emphasise the importance of understanding reasoning behaviour, as opposed to high-level prediction accuracies, since it is equivalent to explainable AI (XAI) in this context. In particular, achieving XAI in medical LLMs used in the clinical domain will have a significant impact across the healthcare sector. Therefore, in this work, we adapt the existing concept of reasoning behaviour and articulate its interpretation within the specific context of medical LLMs. We survey and categorise current state-of-the-art approaches for modelling and evaluating reasoning in medical LLMs. Additionally, we propose theoretical frameworks which can empower medical professionals or machine learning engineers to gain insight into the low-level reasoning operations of these previously obscure models. We also outline key open challenges facing the development of large reasoning models. The resulting increased transparency and trust in medical machine learning models among clinicians and patients will accelerate the integration, application, and further development of medical AI for the healthcare system as a whole.
- Publisher
- eLife Sciences Publications
- Keywords
- Humans; *Machine Learning; *Language; Artificial Intelligence; *Models, Theoretical; Large Language Models; computational biology; deep learning; explainable AI; medical AI; medicine; natural language processing; reasoning behaviour; systems biology
- Department(s)
- Laboratory Research
- Publisher's Version
- https://doi.org/10.7554/eLife.106187
- Open Access at Publisher's Site
- https://doi.org/10.7554/eLife.106187
- Terms of Use/Rights Notice
- Refer to copyright notice on published article.
Creation Date: 2026-01-08 05:22:15
Last Modified: 2026-01-08 05:22:22