Feedback Is Often Anchored in Memory, Not Standards
Many faculty, thoughtful and committed educators, unintentionally assess learners against an internal benchmark:
“What would I have done?”
That benchmark is shaped by:
Their own training era
Local culture at the time
Personal clinical style
Specialty norms that may have evolved
The supervisors who trained them
This creates hidden variability:
Two excellent physicians may give completely different feedback on the same performance, not because one is wrong, but because their reference points differ.
Residents experience this as inconsistency.
Institutions experience it as “evaluation noise.”
Faculty experience it as frustration when learners don’t “apply feedback.”
The Shift We Need: From Personal Standard → Shared Standard
If we want feedback to drive development rather than confusion, institutions must move from individual interpretation to collective alignment.
That doesn’t mean scripting faculty.
It means clarifying:
What competence looks like here
Which differences are stylistic vs. safety-relevant
Where flexibility is appropriate
What we are intentionally trying to produce in graduates
Practical Ways to Operationalize Better Feedback
Separate “Clinical Safety” From “Clinical Style”
Faculty should explicitly name which comments relate to:
Patient safety/decision-making (non-negotiable)
Efficiency or communication preference (variable)
Personal style (optional adaptation)
Residents learn faster when they understand which category they’re in.
Build Micro-Calibration Into Existing Meetings
Instead of adding new workshops, use:
Faculty meetings
CCC discussions
Case conferences
Ask one simple question:
“What are we actually expecting at this level?”
Five minutes of shared discussion reduces months of mixed messaging.
Give Faculty Language That Anchors Feedback to Growth, Not Comparison
Encourage phrasing like:
“At this stage, we’re looking for…”
“The next step in development is…”
“Here’s why this matters clinically…”
This shifts feedback from: “That’s not how I do it”
to:
“Here’s how physicians grow into this responsibility.”
Make Expectations Visible to Learners
Many programs define competencies internally but never translate them into lived guidance.
Consider:
A one-page “What Success Looks Like on This Rotation”
Examples of strong performance at each level
Shared language across evaluators
Clarity reduces perception-based critique.
Train Faculty to Recognize Generational Drift in Training
Medicine evolves quickly.
What felt essential ten years ago may now be:
Automated
Team-based
Digitally supported
Less central to outcomes
Faculty development should include reflection on how practice has changed, not just how to teach.
Why This Matters Beyond Education
Inconsistent feedback isn’t just an educational issue.
It’s an organizational one.
When expectations vary:
Learners expend energy decoding culture instead of improving practice
Programs struggle to measure growth accurately
Institutions risk producing physicians shaped more by chance than by design
Clearer feedback systems don’t standardize people.
They stabilize environments so growth can happen intentionally.
A Reframe for Leaders
The goal of graduate medical education is not to reproduce how we trained.
It is to prepare physicians for a system none of us trained in.
That requires feedback grounded in shared purpose, not personal history.
Closing Thought
We don’t need better scripts for feedback conversations.
We need clearer agreement about what we are trying to build together.
Once that’s aligned, the conversations become easier for everyone.