Researchers present a vision-language model trained on radiologist eye gaze and clinical reasoning patterns. The approach teaches the model to interpret medical images by learning where and why radiologists focus their attention. This foundational VLM aims to improve medical image understanding by incorporating expert visual reasoning.
Research
Seeing Through Experts' Eyes: A Foundational Vision-Language Model Trained on Radiologists' Gaze and Reasoning
Researchers train a vision-language model on radiologists' eye-gaze and clinical reasoning patterns, enabling the AI to learn where experts focus their attention and why, with the goal of building medical imaging models that mirror expert diagnostic thinking.
Friday, April 17, 2026, 12:00 PM UTC · 2 min read · Source: arXiv cs.AI · By sys://pipeline
Tags
research