Hattie and Timperley’s Feedback Model in Resident Education

A Textual Narrative Synthesis on the Application of Hattie and Timperley’s Model for Feedback in Resident Education

Bethany Figg1*, Sally A Santen2, Clara Bihn1, Troy Hicks3, Mary Jo Wagner1

1Central Michigan University – CMU Medical Education Partners, Department of Graduate Medical Education

2University of Cincinnati, College of Medicine

3Central Michigan University, Teacher and Special Education Department

[email protected]

OPEN ACCESS

PUBLISHED 30 June 2025

CITATION Figg, B., Santen, S.A., et al., 2025. A Textual Narrative Synthesis on the Application of Hattie and Timperley’s Model for Feedback in Resident Education. Medical Research Archives, [online] 13(6). https://doi.org/10.18103/mra.v13i6.6711

DOI https://doi.org/10.18103/mra.v13i6.6711

ISSN 2375-1924

ABSTRACT

Background: Because competency-based assessment focuses on skill acquisition, educators in resident training programs need to provide feedback that is timely, specific, and actionable to improve these skills. Hattie and Timperley’s model offers medical faculty a tool for providing quality feedback.

Objective: The literature was reviewed to identify residency programs that use Hattie and Timperley’s Model for Feedback and to encourage faculty to consider the model for physician learners.

Methods: Google Scholar, Scopus, ERIC, and the Web of Science databases were searched for relevant articles. The inclusion criteria were peer-reviewed publications in English that referenced the Hattie and Timperley seminal article in studies of resident education.

Results: While the Hattie and Timperley seminal article is widely referenced in the literature (n=26,955), the Model for Feedback does not appear to be utilized as often in resident training program feedback processes. The themes that emerged from this study concern feedback as resident assessment, assessment of the feedback given to residents, and models for feedback being used to assess resident competence.

Conclusions: While Hattie and Timperley’s review article is frequently cited as background on feedback, the model does not appear to have been widely adopted as a framework for providing feedback to residents. Since faculty in residency training have the autonomy to choose their models and tools, the Hattie and Timperley Model for Feedback, which focuses on a combination of assessment activities to enhance learning and teaching, should be considered.

Keywords

Hattie and Timperley, feedback, resident education, competency-based assessment

INTRODUCTION

During their time in clinical settings, resident physicians must attain a high degree of proficiency in the skills necessary to practice. In 1996, Canada created the Canadian Medical Education Directives for Specialists (CanMEDS) framework, which identifies the overall abilities a physician must have to practice in any specialty and establishes “competency” as the basic unit of measure. In 1999, the Accreditation Council for Graduate Medical Education (ACGME) introduced six clearly defined national competency-based domains of residency training: Patient Care, Medical Knowledge, Practice-Based Learning and Improvement, Interpersonal and Communication Skills, Professionalism, and Systems-Based Practice. Although the intentions were good, and the change did result in better performance on certifying examinations, the ACGME acknowledged that this approach made residency training more prescriptive and, consequently, occasions for innovation steadily disappeared. In 2001, the British published their first competency-based system, the “Competency based selection system for general practitioner registrars,” and other physician training systems followed. By switching to a competency-based approach, it was postulated that what is taught to the resident physician and what is expected of a physician in the real world would be in tighter alignment.

Competency-Based Medical Education (CBME) is defined by the Association of American Medical Colleges (AAMC) as “an outcomes-based approach to the design, implementation, and evaluation of education programs and the assessment of learners, using competencies or observable abilities. The goal of CBME is to ensure all learners achieve the desired outcomes during their training.” Because the competency-based system of medical education and assessment places a strong focus on skill acquisition, residency training faculty, referred to in this paper as graduate medical education (GME) faculty, should be skilled in providing feedback that is timely, specific, and actionable.

Given the well-defined structure of CBME, including very detailed outcome measures, it is surprising that medical education has not delineated a comparable structure for providing feedback. GME has historically followed an apprenticeship model in which the learner is expected to learn by modeling their behavior on the faculty expert, with less emphasis on deliberate instruction than is given to medical students. Kruger and Dunning’s seminal article indicates that self-assessment, as often relied on in this apprenticeship model, is imperfect, implying that informed insight through feedback may improve residents’ practice; Sargeant et al reached a similar conclusion. Though physician supervisors acknowledge this precept, in practice many faculty members do not provide this kind of effective feedback.

Still, to assist residents in becoming independent medical specialists, feedback on clinical performance is fundamental to enhancing learning while confirming, restructuring, expanding, and overwriting their clinical performance. The current structure of providing feedback in GME includes two components: a verbal discussion proximate to the observation of the learner’s actions, and a written component, commonly identified as an assessment or evaluation, that tends to be a more global summary. Studies indicate that residents desire this feedback but often find it generic, vague, disengaged, and untimely. Studies also indicate that few professional development opportunities have been implemented to help supervising faculty reliably observe residents’ skills and behaviors or to build the skills needed for giving timely, specific, and actionable feedback. Medical education has many choices when it comes to feedback tools and, while standardization could be useful, medical educators cannot wait for the perfect assessment tool and should utilize the best combination of tools available for their assessment of resident activities. Sticking with traditionally utilized feedback methods such as Pendleton’s Rules (learner describes what went well, teacher describes what went well, learner describes what they could do better, teacher describes what they could do better) and the “Feedback Sandwich” (praise, critique, praise) may not produce the desired effect of improving the process of achieving competency and may instead focus more on outcomes.

In 1996, Kluger and DeNisi conducted a comprehensive review of feedback based on 131 studies with over 12,000 participants, noting that in roughly a third of the studies the effects of feedback were negative. Building on that work, Hattie and Timperley provided a conceptual analysis of feedback and determined that the type of feedback and the way it is delivered can produce different results. Hattie and Timperley then proposed a model for feedback that identifies particular properties and circumstances that make feedback effective. The Hattie and Timperley (H&T) Model for Feedback provides a combination of assessment activities for enhancing learning and teaching, and urges educators to consider feedback as part of an ongoing process of assessment and instruction rather than a separate entity. The H&T Model for Feedback aims to reduce the discrepancy between current understanding or performance and a desired goal by asking three major questions: “Where am I going?” (What are the goals?), “How am I going?” (What progress is being made toward the goal?), and “Where to next?” (What activities need to be undertaken to make better progress?). The model further distinguishes four levels at which feedback on each question can be aimed: 1) the task level (how well tasks are understood or performed), 2) the process level (the main processes needed to understand or perform tasks), 3) the self-regulation level (self-monitoring and regulation of actions), and 4) the self level (personal characteristics of the learner).
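
For readers who prefer a schematic view, the sketch below is our own illustration, in Python, of the model’s structure; it is not part of Hattie and Timperley’s work or of any study reviewed here, and the variable names and prompt wording are our assumptions. It simply encodes the three questions and four levels described above and pairs them into prompts a supervisor might consider when composing feedback.

```python
# Illustrative sketch only (assumed names and wording): Hattie and Timperley's three
# feedback questions and four feedback levels encoded as simple lookup tables, with a
# helper that frames the three questions at a chosen level.

QUESTIONS = {
    "feed_up": "Where am I going? (What are the goals?)",
    "feed_back": "How am I going? (What progress is being made toward the goal?)",
    "feed_forward": "Where to next? (What activities will lead to better progress?)",
}

LEVELS = {
    "task": "How well the task is understood or performed",
    "process": "The main processes needed to understand or perform the task",
    "self_regulation": "Self-monitoring and regulation of the learner's actions",
    "self": "Personal characteristics of the learner",
}

def feedback_prompts(level: str) -> list[str]:
    """Return the three H&T questions framed at a chosen feedback level."""
    if level not in LEVELS:
        raise ValueError(f"Unknown feedback level: {level!r}")
    return [f"[{level}: {LEVELS[level]}] {question}" for question in QUESTIONS.values()]

if __name__ == "__main__":
    # Example: prompts a supervisor could answer after observing a resident perform a task.
    for prompt in feedback_prompts("process"):
        print(prompt)
```

Running the example for the process level prints the three questions framed at that level, mirroring how the model poses the same questions at each level of feedback.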

The H&T Model for Feedback has recently been examined in several fields. In psychology, a meta-analysis of 435 studies on the effects of feedback on student learning determined that feedback, while powerful in and of itself, could be improved by providing it in the manner described by the H&T Model for Feedback. In computer education, a study conducted in a virtual classroom compared learning before and after application of the H&T Model for Feedback; the model proved applicable to digital teaching, allowed greater activity and communication, and advanced the students’ competence. A study of human-robot interaction found that negative feedback, when delivered according to the H&T Model for Feedback, was crucial for effective interpersonal feedback. Given the wide acceptance of this model in other fields, the purpose of this study is to examine the ways in which medical professionals utilize the H&T Model for Feedback in GME. As there is currently no consistent feedback norm, this review of the literature might also be used to encourage faculty to consider utilizing Hattie and Timperley’s Model for Feedback for all physician learners.

METHODS

The methodology utilized for this systematic review was a textual narrative synthesis, described by Xiao and Watson as “a standard data extraction format by which various study characteristics can be taken from each piece of literature…” that includes “…a quantitative count of studies that has each characteristic.” Since this study investigated the social phenomenon of providing feedback in medical education, a textual narrative synthesis seemed more appropriate than forms of systematic review that look statistically at effect sizes. This review organized the findings into similar subgroups and compared differences across the studies found. Various characteristics were extracted from each study, producing a quantitative count of the studies with particular qualities (Appendix 1). An initial search for articles citing Hattie and Timperley’s Model for Feedback showed the seminal article cited 14,800 times in Google Scholar, 4,001 times in Web of Science, 2,745 times in the ERIC database, and 5,409 times in Scopus. Utilizing the “Search within citing articles” feature in Google Scholar to filter by “Medicine” yielded 4,310 results, which were then filtered further by “Graduate Medical Education,” resulting in 233 articles. These were saved to a Google library and then limited to English-language journal articles, resulting in 123 articles. These 123 articles were exported to Excel and hand sorted by abstract, or by full text when an abstract was not available, to find articles focusing on GME. About half of these articles included undergraduate medical education (UME) learners, but articles referencing only UME were excluded. Next, the full text of each article was examined to determine whether the H&T Model for Feedback was utilized as a tool or only cited as a reference to support the article, which yielded 37 results. The same process was applied to each additional database: ERIC, Scopus, and Web of Science. Combining these with the results from the Google library search produced 106 articles after duplicates were removed. The final number of articles discussed in this review was 21 (Appendix 2); these were reviewed to extract a thematic analysis and generate an argument for utilizing the H&T Model for Feedback in GME. Following the guidance on conducting a textual narrative synthesis by Popay, Rodgers, Arai, and Britten, these articles were closely examined to describe the existing body of literature, identify the scope of what has been studied, and note gaps that need to be filled.
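
For readers who manage similar citation exports, the short sketch below is a hypothetical illustration, not the authors’ actual workflow, of the de-duplication step described above, in which records exported from the four databases are combined and duplicates are removed before full-text screening; the file names and field names are our assumptions.

```python
# Illustrative sketch only: combining hypothetical citation exports and removing
# duplicates by DOI (or by normalised title when no DOI is present) before screening.
import csv
from typing import Iterable

def normalise_title(title: str) -> str:
    """Lower-case and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(rows: Iterable[dict]) -> list[dict]:
    """Keep the first record seen for each DOI or normalised title."""
    seen, unique = set(), []
    for row in rows:
        key = row.get("doi") or normalise_title(row.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

if __name__ == "__main__":
    records = []
    # Assumed export files, one per database searched.
    for export in ["google_scholar.csv", "scopus.csv", "eric.csv", "web_of_science.csv"]:
        with open(export, newline="", encoding="utf-8") as fh:
            records.extend(csv.DictReader(fh))
    print(f"{len(records)} records exported, {len(deduplicate(records))} after de-duplication")
```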

Figure 1: Flowchart of the article selection and screening process

ARTICLE ANALYSIS

The literature search yielded 21 articles that refer to Hattie and Timperley’s Model for Feedback as a useful guide to the importance of feedback in GME. Of these, Zelenski et al specifically conducted a training session with faculty across a variety of specialties on using the H&T feedback model to improve the quality of written feedback. After the session, they saw an increase in the percentage of faculty citing specific next steps and desired outcomes in their feedback to learners. Within the other 20 articles, one actively utilized the H&T Model to analyze the quality of feedback provided by medical educators, one discussed using elements of the model for feedback in progress portfolios as part of their residents’ training cycle, and a handful discussed the three questions and/or the three terms (feed-up, feed-back, and feed-forward) of H&T’s Model for Feedback as a way to organize the quality of feedback. The three terms were characterized as 1) improving learning by providing specific performance parameters (feed-up), 2) presenting methods for improving those parameters (feed-back), and 3) providing personalized tips based on past performance (feed-forward).

While the goal of this study was to determine utilization and make an argument for use of the H&T Model for Feedback in graduate medical education settings, the literature search revealed no studies in which faculty explicitly implemented the model in their feedback process. While there are no empirical studies of the H&T model in GME, 21 articles in the literature referenced the model for feedback as a useful guide. From these studies, which focus on feedback for formative assessment of resident performance and competence, three themes emerged: feedback for formative assessment of resident performance; assessment of the feedback provided to residents, including residents’ interaction with that feedback; and models for feedback that have been developed in part from Hattie and Timperley’s work.

FEEDBACK FOR FORMATIVE ASSESSMENT

The first theme identified was the importance of feedback for formative assessment of resident performance to determine the gap in skills and knowledge that must be closed to attain competence. In one study in this review, Ali et al analyzed feedback comments provided by medical faculty to residents and assessed their quality against the model for feedback criteria. They found the quality of feedback was often poor and that feedback is needed to bridge the gap between what is expected, what went right, what can be improved, and how to improve it. The authors promoted Hattie and Timperley’s framework as a model that could enhance the effectiveness of feedback and supply the ingredients for success: “They concluded that useful feedback should include: Main goals, positive language, plan to achieve progression and — most importantly — to ensure language used is pitched at the right level of trainee.” Zelenski et al also evaluated previous feedback from faculty against the H&T framework. Their institution saw a need for improved feedback in order to better evaluate residents: “At our institution, students, residents, and fellows often comment that the feedback they receive is limited and generally not useful. This became evident to residency and fellowship program leadership after clinical competency committees were formed. Committee members struggled to provide learners with substantial recommendations for improvement, given the substandard written feedback that faculty provided.” Through an hour-long faculty development session, the study concluded that even a brief intervention with immediate practice utilizing the framework produced measurable improvement in subsequent written feedback.

Holmboe et al discussed how useful assessment methods already exist and argued that the focus should be on helping training programs utilize these methods more effectively. Framing feedback as an intervention for developing skills can help residents understand “where they are going.” The study highlighted the role of feedback in assessment, connecting to the idea Hattie and Timperley report: ineffective feedback can delay development, while effective feedback can promote success. In particular: “Feedback is only as good as the assessment that informs it: inaccurate assessment leads to ineffective feedback and potentially delayed development. However, effective feedback can be a powerful tool for professional development. As noted by Hattie and Timperley in their extensive review across the continuum of education, feedback may be the most potent ‘intervention’ in helping learners progress.”

Molloy et al discussed feedback myths, including the error of assuming educators inherently know how to provide feedback. In medical education, the focus is placed on improving the feedback skills of teachers rather than on how to engage learners in the feedback process. Referring to H&T as a recent model for feedback, Molloy et al stated: “The perceived usefulness of the feedback influences learner achievement and interest. Attention to this aspect can be seen in more recent models of feedback.” The assumption in medical education is that feedback is related to tasks or discrete knowledge, which narrows feedback to vocational competence as opposed to capability. Hattie and Timperley’s Model for Feedback encourages the engagement of learners in the feedback process by feeding forward at the task level (scaffolding learning), the process level (increasing complexity), and the self-regulatory level (reducing reliance, encouraging autonomy), all with the intent of engaging the resident in their continued success. While developing their competency-based curriculum for reforming medical education in the Netherlands, Scheele et al reported that different assessment methods worked for different aspects of a resident’s medical training. Citing Hattie and Timperley’s ideas for making feedback more specific, and thus more actionable, the authors recommended focusing on only a few aspects of an experience during assessment. This would better target the feedback, while ensuring other experiences would focus on different competencies and collectively address them all: “The DAPCD [Dutch Advisory Board for Postgraduate Curriculum Development] recommends to focus on a few roles per task to make feedback more specific and hence more valuable. Moreover, we want to target observations in practice to those areas carefully chosen by the profession, while simultaneously making sure that all the different CanMEDS [Canadian Medical Educational Directives for Specialists] roles receive attention.”

Duitsman et al studied conversations between medical faculty and residents to analyze the content of their feedback conversations and determine ways to improve the dialogue. The authors referenced Hattie and Timperley when considering the content of feedback as fundamental to enhancing resident skill and performance: “In the context of training residents to become medical specialists, feedback on clinical performance is fundamental to enhance their learning and confirm, restructure, tune, expand and overwrite their clinical performance.” To assist surgical residents with developing skills for a particular procedure, Yovanoff et al analyzed the feedback given to learners through the lens of task-specific, process-specific, and self-regulatory types of feedback. The authors acknowledged Hattie and Timperley’s research on feedback as a critical part of this learning process: “While the use of VR [Virtual Reality] simulators may have an advantage over standard simulators for improving learning gains through adaptive and real time feedback, research has shown that the timing and type of feedback (task specific, processes specific, or self-regulatory) provided to an individual can change what and how they learn.” The theme of “feedback as formative assessment” is also mentioned in the studies by Norcini and Burch, Duitsman et al, Stegeman et al, Van der Kleij et al, and Driessen and Scheele; these studies are discussed in further detail under other themes in this review. While this theme focused on feedback as a tool for formative assessment of resident performance, the feedback itself was also examined for its usefulness in resident performance assessment.

ASSESSMENT OF FEEDBACK PROVIDED

Assessing the usefulness, timeliness, and quality of feedback itself was the focus of several of the studies reviewed. As noted above, Ali et al assessed feedback comments provided by medical faculty to residents against the model for feedback criteria, found the quality of feedback was often poor, and promoted Hattie and Timperley’s framework as a model that could enhance the effectiveness of that feedback. Tham, Burr, and Boohan studied feedback in medical specialties to identify commonly occurring themes and discovered that while feedback was mainly positive, it was not necessarily of high quality. Their recommendations included adding specificity to feedback, developing action plans, and better timing of delivery so residents can gain better insight into their actions. Citing Hattie and Timperley’s research on the timing of feedback, the authors confirmed a loss of effectiveness as more time passes between the interaction and the feedback: “With regards to the timing of the feedback in relation to the assessment taking place, there was some evidence (based on what was written) that the written feedback was done a few days after the assessment… A study looking at delayed versus immediate feedback demonstrated that the effectiveness was reduced when feedback was delayed.”

Another study focused on the effect of positive and negative feedback on surgical residents’ well-being. The results were consistent with Hattie and Timperley’s claims that negative feedback delivered in a positive environment can be a powerful intervention, but that poor-quality feedback can leave a resident unable to see their knowledge and skill gaps and can impede improvement of skills: “The data illustrate an apparent lack of quality feedback and signifies a missed opportunity to enhance the learning experience with a failure to reduce the gap between current and desired understanding.” While studying the validity of assessment, Govaerts discussed the importance of feedback containing more than just information about observed performance (“feeding back”); it should also include “feeding up” to build understanding of the performance goals and a “feed-forward” to inform what is needed to achieve those goals. When specifically considering the validity of assessing with this feedback, Govaerts cites Hattie and Timperley’s research in noting that this is not an easy endeavor: “However, if the main purpose of formative assessment is to stimulate further learning and use of feedback for performance improvement, one might argue that the key question to be addressed in the validity inquiry must be whether the assessment actually achieves these goals. Unfortunately, a wealth of research findings indicate that there is no simple answer to the questions of when, for whom, and for what feedback works.” The theme of evaluation of feedback provided is also mentioned in the studies by Ali et al, Molloy et al, Driessen and Scheele, Fluit et al, Tham et al, and Holmboe et al; these studies are discussed in further detail under other themes in this review. Focusing on the feedback itself led to the discovery of two sub-themes concerning residents’ interaction with their feedback, self-regulation and perceived credibility, discussed below.

LEARNER SELF-REGULATION WITH FEEDBACK

Focusing on feedback itself led to the discovery of sub-themes concerning residents’ interaction with their feedback, including self-regulation and perceived credibility. Molloy et al recognized the importance of engaging learners in the feedback process; as discussed above, they noted that the perceived usefulness of feedback influences learner achievement and interest, and they pointed to Hattie and Timperley’s model as one that feeds forward at the task, process, and self-regulatory levels with the intent of engaging the resident in their continued success. Some of the studies noted a tension that exists within the residents themselves: when residents weigh their own assessment of their competence against the incoming information from their medical faculty, this internal conflict can directly affect their ability to self-regulate and to participate in self-directed learning. In a study by Norcini and Burch, several formative assessment methods were examined for the usefulness of their feedback as an educational tool. They noted that feedback focused on self-regulation addresses the interplay between commitment, control, and confidence; it concentrates on the way trainees monitor, direct, and regulate their actions relative to the learning goal, and it implies a measure of autonomy, self-control, self-direction, and self-discipline. The inadequacy of residents’ self-assessment of their own abilities is discussed by Eva et al. The authors considered the interplay between a learner’s fear of looking inadequate, confidence in skills (whether warranted or not), and a reasoning process rife with emotion and confirmation bias, a recipe for making the acceptance of feedback difficult. Acknowledging H&T’s argument that the learner must be engaged in the feedback process for feedback to be successful, Eva et al stated: “Feedback is never provided in a vacuum. As such, any effort to improve performance and overcome reliance on often flawed personal judgments must be considered in the context of what receivers believe provides important guidance regarding the credibility of that feedback.” How feedback is perceived and discussed will determine how it is interpreted and adopted.

When discussing ways for medical faculty to facilitate learning, Desy et al described effective feedback as an important component. Effective feedback for learning was not considered to be simply positive or negative, but conditional on the resident, depending on their prior knowledge and ability to self-assess. Hattie and Timperley’s research was cited in support of focusing on how residents process feedback rather than on faculty simply stating the outcome of an observed interaction: “Despite this complexity, there are general recommendations on how medical teachers can improve the effectiveness of feedback, such as allowing learners to self-assess and identify solutions.” In their article discussing Self-Determination Theory in medical education, Kusurkar, Croiset, and Ten Cate considered ways to encourage learners toward autonomy by helping them see where they need to go, identifying their gap in learning, and then determining how to close it: “Give timely, positive and constructive feedback to the students on the process of learning, to show the gap between the current and the desired understanding, rather than the task of learning (i.e. grades).” The theme of “learner self-regulation with feedback” is also mentioned in the study by Johnson et al, which is discussed in further detail under other themes in this review. Continuing with how residents interact with their feedback, learners’ perceived credibility of feedback is explored next.

LEARNER PERCEIVED CREDIBILITY OF FEEDBACK

An additional tension identified involved the relationship between the resident and the faculty. Residents may discount feedback provided by a faculty member who has not observed them long enough or who comments on matters the resident considers outside their control. Telio, Ajjawi, and Regehr recommended that faculty attend to the educational alliance between faculty and resident to proactively lay the groundwork for productive feedback conversations. Telio et al noted that H&T’s research conceptualized feedback as a one-way process in which the supervisory relationship is not considered. The perfect feedback conversation may fall on deaf ears if the relationship between the two parties is not already established: “Recognizing that the learner may be closely examining the supervisor’s commitment to the educational alliance very early in the relationship reinforces the importance of being authentically interested in the learner upon introduction and suggests why failing to demonstrate authentic interest early in the relationship may result in the later reluctance of the learner to ‘listen’ to the supervisor’s valuable feedback.” Johnson et al conducted an extensive literature review on the role of feedback in learner outcomes. The authors determined that learners need to understand not only what their target performance should be, but also how it differs from their current state and what knowledge and skills are needed to address the gap. While this speaks to learner self-assessment, the authors also reported the importance of a learner’s interpretation of their medical faculty’s knowledge, experience, and relationship. As the authors noted in relation to Hattie and Timperley, these factors can make or break the learner’s perception of feedback credibility: “The learner-educator relationship strongly influences face-to-face feedback; the personal interaction can enrich or diminish the potential for learning.” According to a study by Watling, credibility is one of the key influences on a learner’s ability to respond to feedback (the other is learning culture). When residents consider the faculty member providing the feedback to be reliable and trustworthy, an environment for learning can be established. Watling cited Hattie and Timperley’s research to support this influence on the conditions needed for feedback to improve performance: “Despite their different contexts, these studies provide ample warning that we cannot approach the use of feedback in any educational setting with the presumption that it will be effective in promoting learning and performance improvement… Understanding under what conditions feedback improves performance thus becomes a critical challenge for medical educators.”

The sub-theme of “learner perceived credibility of feedback” is also mentioned in the studies by Molloy et al, Kusurkar et al, and Yovanoff et al; these studies are discussed in further detail under other themes in this review. The focus on “feedback as assessment” and “assessment of feedback” leads to the final theme: models for feedback.

MODELS FOR FEEDBACK

Some of the studies in the review proposed a new or revised framework or model for feedback that stemmed in part from Hattie and Timperley’s research. Driessen and Scheele proposed that medical education shift the focus from assessing resident performance to supporting resident learning. Conversations between medical faculty and the resident receiving feedback should include mastery goals to ensure the resident receives not only performance feedback but also opportunities for learning and advancing. Driessen and Scheele encourage the use of H&T’s Model for Feedback in order to introduce those mastery and performance goals: “In appraisal interviews, supervisors and trainees should address three questions: Where am I going? How am I going? Where to next? To answer the first question, trainees should have a clear understanding of the desired practice and competence.” While creating a framework to guide medical faculty in providing feedback, Fluit et al placed a heavy emphasis on the quality of feedback. Their framework, labeled EFFECT (evaluation and feedback for effective clinical teaching), contains a feedback domain that resembles Hattie and Timperley’s feedback questions (What am I doing correctly? What can I improve?). Citing Hattie and Timperley, the authors identified the important role of feedback in deliberate learning: “Others emphasize the importance of learning from activities that residents perform in clinical practice, providing feedback, or creating a positive learning climate.” A study in the Netherlands framed the imparting of knowledge by medical faculty and its acquisition by residents as an ongoing interaction rather than an educator-driven one. Stegeman, Schoten, and Terpstra proposed an interactive master-apprenticeship framework that treats faculty modeling and feedback as educational routes, with the modeling function and feedback function used together as a didactical component of learning rather than the educator delivering a series of unrelated didactic topics. The authors cited Hattie and Timperley’s research on feedback theory and the factors and conditions needed for feedback to be useful: “The power of feedback is frequently mentioned in articles about learning and teaching. ‘Content’, ‘structure’, ‘process’ and ‘time’ have been studied in close detail…”

INTERPRETATION AND IMPLICATIONS

This textual narrative synthesis of the research literature found that, despite the large volume of studies referencing Hattie and Timperley’s research, their feedback model remains largely unused by medical faculty when providing feedback to residents. Possible reasons for this absence include that medical faculty are unaware of the model, that they do not consider it a good model, or that it is not well suited to medical education. Medical faculty may also already be in the habit of utilizing a different feedback model, be comfortable with its results, and have no wish to learn or examine another. While GME acknowledges that quality feedback is critical for resident success, no clear feedback model has been widely adopted. Specific areas of importance for feedback, such as feedback for formative assessment of performance, learners’ relationship to feedback, and the quality of feedback, would require more research in the search for a universal feedback model that works for GME. Nonetheless, this review underscores the importance of timely, specific, and actionable feedback and the barriers that may arise involving medical faculty and residents. Because effective feedback can be one of the most potent interventions to help learners progress in competency-based medical education, residency programs should be providing faculty with the professional development needed to acquire and hone these skills. In turn, faculty should be providing residents with the skills needed to identify and accept the gaps in their understanding and the actions needed to close those gaps. Structured feedback utilizing the H&T Model for Feedback would provide a framework for feedback that is timely, specific, and actionable, with goals applied at the right level of resident understanding and capability. The setting of feedback should be considered as well: feedback should take place immediately as well as summatively, to meet the needs of both the learner and the accrediting bodies. The three questions “Where am I going?” (the goal), “How am I going?” (current progress), and “Where to next?” (actionable steps) can be applied immediately after a clinical encounter, summatively at the end of a rotation, and semi-annually to describe the goals and metrics needed for each resident’s journey along competency-based medical education milestones. Feedback should be considered a dialogue, not a one-way street. While feedback may seem to flow in one direction, it should be a conversation in which educator and learner discuss goals and how to reach them, and in the process strengthen the feedback skills of the educator as well.

LIMITATIONS

Limitations of this textual narrative synthesis include the small number of publications directly linking GME with the use of the H&T Model for Feedback. While some studies may have been inadvertently excluded from the review, the main consideration is that most studies referenced the importance of feedback in ways identified by Hattie and Timperley rather than utilizing the process they outlined. To that end, areas in need of future study include the actual implementation of faculty development on the delivery of feedback, the quality of feedback, and the relationship building needed for learners and faculty to have the dialogue required for feedback to be acted upon, all with the Hattie and Timperley model as a framework.

CONCLUSION

This review identified 21 articles that discuss the importance of the feedback conversation and the barriers to feedback meeting its intended purpose. The H&T Model for Feedback is a tool widely used in other fields that can address the parts of the feedback conversation that produce quality results, yet this review indicates that it is not widely utilized, if at all, in medical education. While the purpose of feedback is to provide learners with the tools needed to progress in competency-based medical education toward the independent practice of medicine, the setting needs to be prepared for both educators and learners to be successful in this process. Since GME faculty have the autonomy to choose their assessment tools while needing to provide timely, specific, and actionable feedback, offering professional education on this framework for feedback, which focuses on a combination of assessment activities to enhance learning and teaching, and encouraging its use should be considered.

Conflict of Interest Statement: The authors have no conflicts of interest to declare.

Funding Statement: None.

Acknowledgements: None.

REFERENCES

  1. Frank JR, Danoff D. The CanMEDS initiative: implementing an outcomes-based framework of physician competencies. Med Teach. 2007 Sep; 29(7):642-7. doi:10.1080/01421590701746983. PMID: 18236250.
  2. Shweiki E, Beekley A, Jenoff J, et al. Applying expectancy theory to residency training: Proposing opportunities to understand resident motivation and enhance residency training. Advances in Medical Education and Practice. Published online April 2015:339-346. doi:10.2147/amep.s76587
  3. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system — Rationale and benefits. New England Journal of Medicine. 2012;366(11):1051-1056. doi:10.1056/nejmsr1200117
  4. Patterson F. Competency based selection system for general practitioner registrars. BMJ. 2001;323:2.
  5. Competency-Based Medical Education (CBME) | Association of American Medical Colleges. AAMC. https://www.aamc.org/what-we-do/mission-areas/medical-education/cbme.
  6. Poeppelman RS, Liebert CA, Vegas DB, Germann CA, Volerman A. A narrative review and novel framework for application of team-based learning in graduate medical education. Journal of Graduate Medical Education. 2016;8(4):510-517. doi:10.4300/jgme-d-15-00516.1
  7. Ende J. Feedback in clinical medical education. JAMA: The Journal of the American Medical Association. 1983;250(6):777. doi:10.1001/jama.1983.03340060055026
  8. Govaerts M, van der Vleuten CP. Validity in work-based assessment: Expanding our horizons. Medical Education. 2013;47(12):1164-1174. doi:10.1111/medu.12289
  9. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Medical Teacher. 2007;29(9-10):855-871. doi:10.1080/01421590701775453
  10. Watling CJ. Unfulfilled promise, untapped potential: Feedback at the Crossroads. Medical Teacher. 2014;36(8):692-697. doi:10.3109/0142159x.2014.889812
  11. Kruger J, Dunning D. Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology. 1999;77(6):1121-1134. doi:10.1037//0022-3514.77.6.1121
  12. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: A conceptual model. Academic Medicine. 2010;85(7):1212-1220. doi:10.1097/acm.0b013e3181d85a4e
  13. Duitsman ME, van Braak M, Stommel W, et al. Using conversation analysis to explore feedback on resident performance. Advances in Health Sciences Education. 2019;24(3):577-594. doi:10.1007/s10459-019-09887-4
  14. Hattie J, Timperley H. The power of feedback. Review of Educational Research. 2007;77(1):81-112. doi:10.3102/003465430298487
  15. Kamali D, Illing J. How can positive and negative trainer feedback in the operating theatre impact a surgical trainee’s confidence and well-being: A qualitative study in the north of England. BMJ Open. 2018;8(2). doi:10.1136/bmjopen-2017-017935
  16. Stegeman JH, Schoten EJ, Terpstra OT. Knowing and acting in the clinical workplace: Trainees’ perspectives on modelling and feedback. Advances in Health Sciences Education. 2012;18(4):597-615. doi:10.1007/s10459-012-9398-4
  17. Tham TC, Burr B, Boohan M. Evaluation of feedback given to trainees in medical specialties. Clinical Medicine. 2017;17(4):303-306. doi:10.7861/clinmedicine.17-4-303
  18. van de Ridder JM, McGaghie WC, Stokking KM, ten Cate OT. Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Medical Education. 2015;49(7):658-673. doi:10.1111/medu.12744
  19. Ali AS, Bussey M, O’Flynn KJ, Eardley I. Quality of feedback using workplace based assessments in urological training. British Journal of Medical and Surgical Urology. 2012;5(1):39-43. doi:10.1016/j.bjmsu.2011.10.001
  20. Bartlett M, Crossley J, McKinley R. Improving the quality of written feedback using written feedback. Education for Primary Care. 2016;28(1):16-22. doi:10.1080/14739879.2016.1217171
  21. Bing-You RG. Why medical educators may be failing at feedback. JAMA. 2009;302(12):1330. doi:10.1001/jama.2009.1393
  22. Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based Medical Education: A FOCUS Group Study of one Internal Medicine Residency Program. Academic Medicine. 2020;95(11):1712-1717. doi:10.1097/acm.0000000000003315
  23. Cantillon P, Easton G. Feedback – what’s new? Education for Primary Care. 2015;26(2):116-117. doi:10.1080/14739879.2015.11494323
  24. Driessen E, Scheele F. What is wrong with assessment in postgraduate training? lessons from clinical practice and educational research. Medical Teacher. 2013;35(7):569-574. doi:10.3109/0142159x.2013.798403
  25. Gaunt A, Patel A, Fallis S, et al. Surgical trainee feedback-seeking behavior in the context of workplace-based assessment in clinical settings. Academic Medicine. 2017;92(6):827-834. doi:10.1097/acm.0000000000001523
  26. Hewson MG, Little ML. Giving feedback in medical education. Journal of General Internal Medicine. 1998;13(2):111-116. doi:10.1046/j.1525-1497.1998.00027.x
  27. Ibrahim J, MacPhail A, Chadwick L, Jeffcott S. Interns’ perceptions of performance feedback. Medical Education. 2014;48(4):417-429. doi:10.1111/medu.12381
  28. Molloy E, Ajjawi R, Bearman M, Noble C, Rudland J, Ryan A. Challenging feedback myths: Values, learner involvement and promoting effects beyond the immediate task. Medical Education. 2019;54(1):33-39. doi:10.1111/medu.13802
  29. Ramani S, Post SE, Könings K, Mann K, Katz JT, van der Vleuten C. “it’s just not the culture”: A qualitative study exploring residents’ perceptions of the impact of institutional culture on feedback. Teaching and Learning in Medicine. 2016;29(2):153-161. doi:10.1080/10401334.2016.1244014
  30. Telio S, Ajjawi R, Regehr G. The “Educational Alliance” as a framework for reconceptualizing feedback in medical education. Academic Medicine. 2015;90(5):609-614. doi:10.1097/acm.0000000000000560
  31. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based Medical Education. Medical Teacher. 2010;32(8):676-682. doi:10.3109/0142159x.2010.500704
  32. Scheele F, Teunissen P, Luijk SV, et al. Introducing competency-based Postgraduate Medical Education in the Netherlands. Medical Teacher. 2008;30(3):248-253. doi:10.1080/01421590801993022
  33. Pendleton D, Schofield T. The Consultation: An Approach to Learning and Teaching. Oxford University Press (OUP); 1986.
  34. Kusurkar RA, Croiset G, Ten Cate OTh. Twelve tips to stimulate intrinsic motivation in students through autonomy-supportive classroom teaching derived from self-determination theory. Medical Teacher. 2011;33(12):978-982. doi:10.3109/0142159x.2011.599896
  35. Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119(2):254-284. doi:10.1037/0033-2909.119.2.254
  36. Wisniewski B, Zierer K, Hattie J. The power of Feedback Revisited: A meta-analysis of educational feedback research. Frontiers in Psychology. 2020;10. doi:10.3389/fpsyg.2019.03087
  37. Eslava J, Arones M, Godoy Y, Guerrero F. Characterization of meaningful learning associated with feedback in a digital transformation. 2021 5th International Conference on Deep Learning Technologies (ICDLT). Published online July 23, 2021:128-131. doi:10.1145/3480001.3480023
  38. Martinovic A, Kunold L. Unfortunately, your task allocation is in need of improvement. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Published online March 7, 2022:909-913. doi:10.1109/hri53351.2022.9889503
  39. Xiao Y, Watson M. Guidance on conducting a systematic literature review. Journal of Planning Education and Research. 2017;39(1):93-112. doi:10.1177/0739456x17723971
  40. Popay J, Roberts H, Sowden A, et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. A Product from the ESRC Methods Programme. Version 1. doi:10.13140/2.1.1018.4643
  41. Zelenski AB, Tischendorf JS, Kessler M, et al. Beyond “Read More”: An Intervention to Improve Faculty Written Feedback to Learners. J Grad Med Educ. 2019;11(4):468-471. doi:10.4300/JGME-D-19-00058.1
  42. Van der Kleij FM, Feskens RC, Eggen TJ. Effects of feedback in a computer-based learning environment on students’ learning outcomes. Review of Educational Research. 2015;85(4):475-511. doi:10.3102/0034654314564881
  43. Yovanoff M, Pepley D, Mirkin K, Moore J, Han D, Miller S. Personalized learning in medical education: Designing a user interface for a dynamic haptic robotic trainer for central venous catheterization. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2017;61(1):615-619. doi:10.1177/1541931213601639
  44. Govaerts M. Workplace-based assessment and assessment for learning: Threats to validity. Journal of Graduate Medical Education. 2015;7(2):265-267. doi:10.4300/jgme-d-15-00101.1
  45. Fluit C, Bolhuis S, Grol R, et al. Evaluation and feedback for effective clinical teaching in Postgraduate Medical Education: Validation of an assessment instrument incorporating the canmeds roles. Medical Teacher. 2012;34(11):893-901. doi:10.3109/0142159x.2012.699114
  46. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Advances in Health Sciences Education. 2011;17(1):15-26. doi:10.1007/s10459-011-9290-7
  47. Desy J, Busche K, Cusano R, Veale P, Coderre S, McLaughlin K. How teachers can help learners build storage and retrieval strength. Medical Teacher. 2017;40(4):407-413. doi:10.1080/0142159x.2017.1408900
  48. Johnson CE, Keating JL, Boud DJ, et al. Identifying educator behaviours for high quality verbal feedback in Health Professions Education: Literature Review and expert refinement. BMC Medical Education. 2016;16(1). doi:10.1186/s12909-016-0613-5
  49. Watling CJ. Unfulfilled promise, untapped potential: Feedback at the Crossroads. Medical Teacher. 2014;36(8):692-697. doi:10.3109/0142159x.2014.889812

APPENDIX 1

Article characteristics

Quantitative count of the studies with particular qualities

  • Reviews of the Medical Literature on Feedback and Assessments: 6
    • Driessen & Scheele, 2013; Holmboe et al., 2010; Johnson et al., 2016; Kusurkar et al., 2011; Norcini & Burch, 2007; Telio et al., 2015
  • Studies on a group of medical professionals: 10
    • Ali et al., 2012; Duitsman et al., 2019; Eva et al., 2012; Fluit et al., 2012; Kamali & Illing, 2018; Scheele et al., 2008; Stegeman et al., 2013; Tham et al., 2017; Yovanoff et al., 2017; Zelenski et al., 2019
  • Articles with Specific Medical Specialties represented:
    • Internal Medicine: 4 (Duitsman et al., 2019; Eva et al., 2012; Fluit et al., 2012; Zelenski et al., 2019)
    • Anesthesiology: 3 (Fluit et al., 2012; Scheele et al., 2008; Tham et al., 2017)
    • Pediatrics: 3 (Fluit et al., 2012; Scheele et al., 2008; Stegeman et al., 2013)
    • Gynecology: 2 (Fluit et al., 2012; Scheele et al., 2008)
    • Radiology: 1 (Duitsman et al., 2019)
    • Surgery: 5 (Duitsman et al., 2019; Fluit et al., 2012; Kamali & Illing, 2018; Stegeman et al., 2013; Yovanoff et al., 2017)
    • Neurology, Geriatrics: 2 (Fluit et al., 2012; Tham et al., 2017)
    • Urology: 1 (Ali et al., 2012)
    • Psychiatry, Pulmonary Disease: 1 (Fluit et al., 2012)
    • Genetics, Neurophysiology, Genitourinary Medicine, and Palliative Medicine: 1 (Tham et al., 2017)
    • Infectious Disease: 2 (Tham et al., 2017; Zelenski et al., 2019)
    • Cardiology, Hematology/Oncology, Hospital Medicine, Nephrology, Pulmonary/Critical Care: 1 (Zelenski et al., 2019)
  • Other characteristics:
    • Included undergraduate medical education programs, midwifery programs, and practicing physicians: 2 (Eva et al., 2012; Kusurkar et al., 2011)

APPENDIX 2

Article List

  1. Telio, S., Ajjawi, R., & Regehr, G. (2015). The “Educational Alliance” as a Framework for Reconceptualizing Feedback in Medical Education: Academic Medicine, 90(5), 609–614.
  2. Holmboe, E. S., Sherbino, J., Long, D. M., Swing, S. R., Frank, J. R., & for the International CBME Collaborators. (2010). The role of assessment in competency-based medical education. Medical Teacher, 32(8), 676–682.
  3. Molloy, E., Ajjawi, R., Bearman, M., Noble, C., Rudland, J., & Ryan, A. (2020). Challenging feedback myths: Values, learner involvement and promoting effects beyond the immediate task. Medical Education, 54(1), 33–39.
  4. Eva, K. W., Armson, H., Holmboe, E., Lockyer, J., Loney, E., Mann, K., & Sargeant, J. (2012). Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Advances in Health Sciences Education, 17(1), 15–26.
  5. Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9–10), 855–871.
  6. Kusurkar, R. A., Croiset, G., & Ten Cate, O. Th. J. (2011). Twelve tips to stimulate intrinsic motivation in students through autonomy-supportive classroom teaching derived from Self-Determination Theory. Medical Teacher, 33(12), 978–982.
  7. Watling, C. J. (2014). Unfulfilled promise, untapped potential: Feedback at the crossroads. Medical Teacher, 36(8), 692–697.
  8. Driessen, E., & Scheele, F. (2013). What is wrong with assessment in postgraduate training? Lessons from clinical practice and educational research. Medical Teacher, 35(7), 569–574.
  9. Desy, J., Busche, K., Cusano, R., Veale, P., Coderre, S., & McLaughlin, K. (2018). How teachers can help learners build storage and retrieval strength. Medical Teacher, 40(4), 407–413.
  10. Scheele, F., Teunissen, P., Luijk, S. V., Heineman, E., Fluit, L., Mulder, H., Meininger, A., Wijnen-Meijer, M., Glas, G., Sluiter, H., & Hummel, T. (2008). Introducing competency-based postgraduate medical education in the Netherlands. Medical Teacher, 30(3), 248–253.
  11. Duitsman, M. E., van Braak, M., Stommel, W., ten Kate-Booij, M., de Graaf, J., Fluit, C. R. M. G., & Jaarsma, D. A. D. C. (2019). Using conversation analysis to explore feedback on resident performance. Advances in Health Sciences Education, 24(3), 577–594.
  12. Kamali, D., & Illing, J. (2018). How can positive and negative trainer feedback in the operating theatre impact a surgical trainee’s confidence and well-being: A qualitative study in the north of England. BMJ Open, 8(2), e017935.
  13. Fluit, C., Bolhuis, S., Grol, R., Ham, M., Feskens, R., Laan, R., & Wensing, M. (2012). Evaluation and feedback for effective clinical teaching in postgraduate medical education: Validation of an assessment instrument incorporating the CanMEDS roles. Medical Teacher, 34(11), 893–901.
  14. Tham, T. C., Burr, B., & Boohan, M. (2017). Evaluation of feedback given to trainees in medical specialties. Clinical Medicine, 17(4), 303–306.
