“Think of it this way,” Dr. Rudmik said. “If you’re a betting person as a policy maker or payer, the odds are around 75% that you’d be correct in supporting ESS because of its long-term clinical and cost effectiveness, whether that be via favorable reimbursement rates, coverage determinations, treatment guidelines, etc.”
David W. Kennedy, MD, a rhinology professor at the University of Pennsylvania in Philadelphia, is uniquely qualified to evaluate any long-term look at the merits of surgery versus medical therapy for refractory CRS. Dr. Kennedy is one of the pioneers in bringing modern methods for performing ESS to the United States and, in the 1990s, authored some of the first studies to measure the utility of ESS over an extended time period—nearly a decade in two of his studies (Laryngoscope. 1992;102(12 Pt 2 Suppl 57):1-18; Laryngoscope. 1998;108:151-157).
“But I have to tell you, I’m no healthcare economist and therefore cannot comment on the economic modeling aspect of the study,” Dr. Kennedy said. “So if I were asked to be a reviewer for this paper, my first question—and it is absolutely critical to the validity of its findings—would be a clinical one: That is, what was the severity of disease for these patients? Did they really have refractory disease?”
“My understanding from a recent conversation with the investigators is that the mean Lund-Mackay CT [computed tomography] score for the patients in their database is 12, which equates to moderately severe disease,” Dr. Kennedy said. “That tells me that their patient selection criteria, at least as they pertain to disease severity, were sound.” (Information on Lund-Mackay scores was not included in the published study.)
A Few Too Many Assumptions?
Martin Citardi, MD, professor and chair of the department of otorhinolaryngology–head and neck surgery at the University of Texas Medical School in Houston, was a bit more willing to take on the statistical modeling strategies used in the Rudmik study. He said the central question of cost effectiveness that the authors posed “is a good one to be asking” in this era of cost containment. “And I suspect that their conclusion is probably right. I am concerned, however, that their analysis is based on assumptions on top of assumptions on top of assumptions, so the foundation of their analysis may have some flaws.”
Dr. Rudmik agreed that any statistical model is only as sound as the quality of the variables built into it. “That’s why we took such great pains to scour the NIH studies for real-world patient data to make those assumptions solid,” he said. Moreover, to ensure that the economic model accounted for those clinical and financial variables as much as possible, “we didn’t just include the mean values for all of our data,” he stressed. “We also used values within the 95% confidence interval [CI] range for each parameter in the study.” In fact, “we must have run our sensitivity analysis at least 15,000 times, picking random samples for each parameter within the 95% CI range. So I do feel strongly that the model appropriately accounted for the inherent presence of uncertainty around the true value of each variable included in the model.”
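For readers unfamiliar with this kind of probabilistic sensitivity analysis, the minimal Python sketch below illustrates the general idea Dr. Rudmik describes: each model parameter is repeatedly redrawn from within a confidence-interval-like range, the cost-effectiveness result is recomputed each time, and the share of favorable runs is reported. All parameter names, ranges, the time horizon, and the willingness-to-pay threshold here are hypothetical placeholders, not values from the Rudmik study.

```python
import random

# Hypothetical parameters only; these ranges are NOT taken from the study.
# Each entry is (low, high), standing in for a 95% CI.
PARAM_RANGES = {
    "ess_upfront_cost": (8000.0, 14000.0),   # up-front surgical cost ($)
    "ess_annual_cost": (500.0, 1500.0),      # yearly post-ESS care ($/yr)
    "med_annual_cost": (1500.0, 3500.0),     # ongoing medical therapy ($/yr)
    "ess_qaly_gain": (0.05, 0.15),           # yearly utility gain vs. medical therapy
}

WTP_THRESHOLD = 50_000.0   # willingness to pay per QALY ($), an assumed value
HORIZON_YEARS = 10         # assumed modeling horizon
N_RUNS = 15_000            # matches the number of runs described in the article

def one_run(rng):
    """Draw each parameter from within its range and compute the ICER ($/QALY)."""
    p = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
    incremental_cost = (p["ess_upfront_cost"]
                        + HORIZON_YEARS * p["ess_annual_cost"]
                        - HORIZON_YEARS * p["med_annual_cost"])
    incremental_qalys = HORIZON_YEARS * p["ess_qaly_gain"]
    return incremental_cost / incremental_qalys

rng = random.Random(0)
favorable = sum(one_run(rng) < WTP_THRESHOLD for _ in range(N_RUNS))
print(f"ESS cost effective in {favorable / N_RUNS:.0%} of {N_RUNS} simulated runs")
```

The proportion of runs falling below the willingness-to-pay threshold is the kind of figure behind the “odds are around 75%” framing quoted above, though the study’s actual model structure and inputs are more detailed than this sketch.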