I was in the midst of a two-week vacation in Maine when I got the e-mail from the ACCME that the 2010 Annual Report Data was now available. My first thought was, “ooooooh…data! Let me take a look!” My second thought was “Put down the iPad, you doofus, you’re on vacation!” and I promptly grabbed a book and headed for the nearest hammock.
Yes, part of my reluctance to look at the 2010 report was motivated by vacation laziness, but I had another reason that had been lurking in the back of my mind ever since I completed my own 2010 annual report back in March. Simply said…I don’t trust this data.
I consider myself a little bit of an amateur stathead (I was one of those who was playing fantasy football 20 years ago, using a calculator and yellow legal tablet to “research” potential picks). I’m certainly no statistician, but I do enjoy breaking down data, looking for trends, etc. In past years, I would commonly have dissected the ACCME’s annual report data within a week or two of it being released. This year, I don’t think I’m going to be doing that (By the way…how geeky did that last paragraph make me sound? I’m quite a catch, eh? Sorry ladies, I’m already spoken for! Try to contain your disappointment…)
Why don’t I “trust the data”? I guess the main reason is that this is the first year for the ACCME’s PARS system, which brought with it three factors that I think had the potential to skew the results:
1) Entering the data in PARS was more complicated than sending in the Excel spreadsheets of the past. Not to toot my own horn, but I’m usually pretty adept with this sort of data accumulation and entry, and I had my moments of throwing pencils at the computer screen while using PARS (What? Nobody else does that? Moving on…)
2) There were a few subtle differences in how the ACCME asked for data to be reported. For instance, instead of asking for a cumulative total of grant support for 2010, it requested that grant support be given for each individual 2010 activity, and those amounts were then automatically tallied into the cumulative total. Maybe for most organizations that didn’t make a difference, but it might have.
3) Acknowledging that the PARS system was more complicated than the system of the past, the ACCME provided a much more detailed and thorough explanation of the required data than was available in the past. Maybe I’m the only one, but there were a couple of times when I found myself reading through the PARS FAQ section and murmuring “ohhhhh…so THAT’S what they want…”
I don’t think any of these three factors caused major differences in how providers reported their data, but even minor differences across 700+ providers could skew the data.
Am I being overly paranoid about this? Quite possibly, yes, but all the same, I’m going to take a wait-and-see approach. I’m not going to pop any champagne corks over what appears to be a slight increase in overall CME income, nor am I going to go cry in the corner because it looks like medical education companies are getting a much smaller piece of the pie. In some ways, I guess I’m looking at the 2010 data as a new baseline. I’ll be much more interested in seeing the 2011 data and comparing it with 2010.
So, no data analysis blog entries from me this year. I’ll be writing about more important things, like how I have man-crushes on marginal baseball players or guest blog posts from fictitious CME superheroes. But if you want to write about it, go for it. I’ll probably read it.
Just don’t ask me to believe it.