In this study, ChatGPT and DeepSeek were prompted to generate educational guides on stroke imaging modalities, and their output was assessed for readability, grade-level appropriateness, understandability, and technical accuracy.
Both models produced content that was reasonably understandable to non-experts, though neither was flawless, and DeepSeek sometimes lagged in clarity or technical detail.
Differences emerged on the grade-level and reading-ease metrics: although readability scores for both tools fell within usable ranges, some sections still required a higher level of health literacy. The study suggests that both models are useful as drafting aids, but that human review and domain expertise remain essential.
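The summary above does not name the specific readability formulas used, but Flesch Reading Ease and Flesch-Kincaid Grade Level are the measures such evaluations most commonly report. The sketch below shows how both are computed from sentence length and syllables per word; the syllable counter is a rough vowel-group heuristic and the sample sentence is invented for illustration, not taken from either model's output.

```python
import re


def count_syllables(word: str) -> int:
    """Approximate syllable count by counting vowel groups (heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def readability(text: str) -> dict:
    """Return Flesch Reading Ease and Flesch-Kincaid Grade Level for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
    }


if __name__ == "__main__":
    # Hypothetical patient-education sentence, used only to exercise the formulas.
    sample = ("A CT scan uses X-rays to take pictures of your brain. "
              "It helps doctors see if a stroke was caused by a blocked "
              "or a bleeding blood vessel.")
    print(readability(sample))
```

Higher reading-ease scores and lower grade levels indicate simpler text; guidance for patient-facing material often recommends writing at roughly a middle-school reading level, which is the sense in which "usable ranges" is typically judged.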