This paper presents the findings of a longitudinal, multi-method study in which 112 developers and clinicians co-designed an Explainable Artificial Intelligence (XAI) solution for a clinical decision support system. The study identified three key differences between developers' and clinicians' mental models of XAI: opposing goals, different sources of truth, and differing emphases on exploring new knowledge versus exploiting existing knowledge. To address this XAI conundrum in healthcare, design solutions are proposed, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets.