December 2014
If you are an academic otolaryngologist, teaching probably comes naturally to you, but can you tell which of your residents are struggling? Over the last few years, educators have become more interested in identifying residents in need of extra attention long before the yearly Otolaryngology Training Examination (OTE) results come in and well before their five-year training periods are over.
In 1999, the Accreditation Council for Graduate Medical Education (ACGME) began requiring that all residency programs evaluate their trainees according to six core competencies: patient care, medical knowledge, practice-based learning and improvement, systems-based practice, professionalism, and interpersonal and communication skills. The requirements changed again in July 2014, when the ACGME began evaluating otolaryngology programs according to its Next Accreditation System, which uses outcome measures rather than duration of training as the basis for accreditation. Residents must now show competency in 16 areas that the ACGME has termed “milestones.”
These requirements spurred educators to begin thinking about new ways to evaluate residents, said Gregory Grillone, MD, professor and vice chairman of otolaryngology-head and neck surgery at Boston University School of Medicine, where he is also chair of the Graduate Medical Education Committee. “That reinforced our need to provide more frequent and useful feedback to residents that would help them improve in areas where they had deficiencies,” he said.
Dr. Grillone and other otolaryngology educators share their tips for evaluating residents below.
Test, Test, Test
A few years ago, to determine how well their residents were acquiring factual knowledge from didactic lectures, Dr. Grillone and his colleagues at Boston University decided to administer regular quizzes. Residents were given three to four multiple-choice questions before and after lectures, which they answered using automatic transponders that collate data electronically. Dr. Grillone and his colleagues described their method in a paper recently published in The Laryngoscope (2014;124:E309-E311). (See “Benefits of Pre- and Post-Lecture Questions,” below.)
“The questions and answers would be embedded in my presentation. If they did the reading and still didn’t get the questions right at the beginning of the lecture, they would know the answer by the time the presentation was over,” Dr. Grillone said.
Scores were tracked over the course of an academic year and then compared with scores on the OTE from that year. Overall OTE scores correlated significantly with scores on the lecture-based quizzes.
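For readers curious how such a comparison might look in practice, here is a minimal sketch in Python that correlates per-resident quiz averages with OTE scores. The numbers and variable names are invented for illustration; this is not the study’s actual data or analysis.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Invented data: each resident's average lecture-quiz score (%) paired with
# that resident's OTE percentile for the same academic year.
quiz_averages = [62, 71, 55, 80, 68, 90, 74]
ote_percentiles = [45, 60, 38, 72, 55, 85, 63]

print(f"r = {pearson_r(quiz_averages, ote_percentiles):.2f}")
# A value near +1 would indicate quiz scores closely track OTE performance.
```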
The study also showed that residents who had previously scored poorly on the OTE improved their scores on the next OTE after being assigned an individual learning improvement plan that included weekly reading lists and regular meetings with assigned faculty mentors to discuss reading topics.
Lesson learned? “An organized educational program that uses review questions to assess knowledge can be used to track acquisition of medical knowledge, identify gaps, and facilitate the development of remediation plans that result in improved medical knowledge,” Dr. Grillone said.
Ask the Right Questions
Before the start of each academic year, professors should research their residents’ knowledge base and adapt lectures accordingly, said Pamela A. Rowland, PhD, professor of surgery and associate chair of education at the University of North Carolina at Chapel Hill. “If most people know what the parathyroid is and how it functions, and one person doesn’t, assign readings to that person ahead of time. Otherwise, you’ll end up boring the other people in that room,” she said.
Dr. Rowland recommended sending students an e-mail with questions before each new topic is introduced. For students who indicate a lack of knowledge about the topic, she suggested sending them a list of recommended readings or videos to watch so that they can catch up with the rest of the class.
Evaluate Surgical and Clinical Skills
How do you identify which residents need to improve their surgical and clinical skills while there’s still time to train them? For Dr. Grillone and his colleagues at Boston University, the answer is regular assessments. Approximately 48 hours before they participate in a surgery, residents complete a surgical competency assessment form that tests their knowledge of the case, including the rationale for the procedure, the surgical steps involved, and any potential risks for that patient. They then e-mail the form back to the attending, who adds his or her comments. On the day of the surgery, the resident brings a copy of the evaluation form, and, after the procedure, the attending completes the back of the form, which uses a Likert scale to evaluate the resident’s performance. Questions focus on understanding of the rationale and indications for surgery, knowledge of how to perform the procedure, overall technical skill, use of time and motion, communication with the surgical team, and ability to code the case correctly.
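As a purely illustrative sketch of how the post-operative side of such a form might be recorded electronically, the Python snippet below models the Likert items described above. The item names and scale are assumptions for illustration, not Boston University’s actual form.

```python
from dataclasses import dataclass, field
from datetime import date

# Likert items loosely mirroring the categories described above; the wording
# is an assumption for illustration, not the actual assessment form.
LIKERT_ITEMS = (
    "rationale_and_indications",
    "procedural_knowledge",
    "technical_skill",
    "time_and_motion",
    "team_communication",
    "coding_accuracy",
)

@dataclass
class SurgicalAssessment:
    resident: str
    procedure: str
    performed_on: date
    ratings: dict = field(default_factory=dict)  # item name -> score, 1 (poor) to 5 (excellent)

    def rate(self, item: str, score: int) -> None:
        if item not in LIKERT_ITEMS:
            raise ValueError(f"unknown assessment item: {item}")
        if not 1 <= score <= 5:
            raise ValueError("Likert scores run from 1 to 5")
        self.ratings[item] = score

# Hypothetical usage:
assessment = SurgicalAssessment("Dr. Example", "tonsillectomy", date(2014, 9, 1))
assessment.rate("technical_skill", 4)
assessment.rate("team_communication", 5)
```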
Dr. Grillone uses a similar tool, the Clinical Competency Assessment Tool, or CCAT, to assess competency in an outpatient clinical setting. “We’re glad we implemented this system because when the ACGME Milestones were introduced last year, we realized this would be an important component of how we could make meaningful and valid assessments of resident competency,” Dr. Grillone said.
Give Immediate Feedback
According to Nasir I. Bhatti, MD, associate professor of otolaryngology-head and neck surgery and anesthesiology critical care medicine and director of Adult Tracheostomy Airway Services at Johns Hopkins University School of Medicine in Baltimore, the key to better surgical evaluations is to give feedback as close to the end of the surgery as possible. In an arrangement similar to the one used at Boston University, residents at Johns Hopkins are evaluated with a scoring system called the objective structured assessment of technical skill, or OSATS (visit enttoday.org for an example of this form). Dr. Bhatti and his colleagues authored a study published in The Laryngoscope in 2012 reporting that evaluations completed within six days of a procedure were more reliable and more indicative of actual performance than those completed later (2012;122:2418-2421).
“When more time passes between an actual performance and evaluation, a lot of recall bias comes in,” he said. “If I had a good impression of you in a previous testing, I will score you higher. An evaluation based on a fresh recollection is a true reflection of the performance.”
Regular evaluations, Dr. Bhatti said, provide attending surgeons with a framework for giving residents formative feedback. “Summative feedback is, ‘You did a good job, I’m proud of you,’” he said. “Formative feedback is, ‘These are the good things you did well, and these are the things we need to work together to improve, and this is how you should improve.’”
Because providing reviews after every surgery can become cumbersome, Dr. Bhatti said his team at Johns Hopkins completes them only for select surgeries at the start, middle, and end of each rotation. “There’s less of a burden on evaluators who are busy practitioners,” he said.
Dr. Rowland suggested otolaryngologists take a cue from the airline industry, which asks crew members to do a two-minute debriefing after every flight to determine what went well and what didn’t. After every surgery, she said, the attending should ask the resident, “How do you think you did? What do you think you improved on during this surgery?” For their part, residents should turn to the attending and ask, “How can I do better next time?” “It shows respect for the teacher and gives the student something to focus on,” she said.
Make It a Group Effort
It’s also a good strategy to let other staff members—including nurses, nurse practitioners, and physician assistants—evaluate residents, Dr. Bhatti said. That way, the program gets a 360-degree view of how well residents work with other team members.
Dr. Rowland said such evaluations can also present an opportunity to discuss the resident’s communication skills. “The attending may say, ‘You’re competent. You were able to perform at a level appropriate for PGY3, but other staff said you could have been kinder to the circulating nurse,’” she explained.
Share Best Practices
Most importantly, educators should publish summaries about the evaluation strategies and tools that work so that other programs don’t find themselves having to reinvent the wheel, Dr. Bhatti added. “We should be sharing our experiences through literature and communication across programs. It’ll make life for the program director much easier if others share their resources,” he said.
Stephanie Mackiewicz is a freelance medical writer based in California.