Trauma registries are key to improving trauma center performance and system outcomes. But is the underlying data reliable? Recently, investigators from several centers studied interhospital variability and accuracy in data coding and scoring by registrars. The results — published in the September issue of the Journal of Trauma and Acute Care Surgery — suggest that there is widespread inaccuracy in trauma data sets.
The study was led by Sandra Strack-Arabian, CSTR, CAISS, trauma program manager at Tufts Medical Center in Boston. At the time of the study, Arabian was the program’s trauma research and data quality manager. Recently, she explained the origin of this research study and talked about ways to improve trauma data.
Q. Why did you do this study?
Every year we see a lot of trauma research presented at conferences like AAST, EAST and TQIP. But the theme that seems to recur time and time again is the question of the quality of the data underlying the research. That got me thinking. Everybody is always complaining about the quality of the data, so why don't we just put it to the test? Another concern is that few trauma registries have the funding for appropriate and consistent registrar training. So I also wanted to see if we could show the need for consistent training and support for trauma registrars.
Q. How did you conduct the study?
The study used an anonymous survey built around a fictional trauma case that included some EMS information, some ED information and some information about diagnosis codes, Abbreviated Injury Scale (AIS) codes and procedures. The survey asked registrars to read the case and then abstract and code the data. The goal was to measure just how different the answers were, and also how accurate they were.
Q. Could you put the results in a nutshell?
In a nutshell, we found an overall accuracy rate of about 64% and significant variability between registrars. We found that it really didn't matter how many years you had been working as a trauma registrar, or whether you were a CSTR or CAISS, or whether you worked at a TQIP institution, or whether your center was ACS-verified or state-verified or both.
Q. What was the biggest surprise?
There were two big surprises. One is that the CSTRs didn’t fare any better than non-CSTRs. The other is that the TQIP centers fared better in one category only — pre-hospital Glasgow Coma Scale scoring. That was kind of interesting because TQIP includes monthly trainings and certainly emphasizes data quality, yet there was still variability among that group.
Q. You presented this research at EAST 2015. What was the audience reaction?
There were a couple of folks who said, in effect, "So what? There's variability in the data. We've always known that." My response was that this is important when you consider what we are using this data for — to further develop patient care and safety protocols and to advance the science of trauma. When you consider all the research that is based on this data, you have to ask: What are we going to do about it? Is this something we're happy with, or something we want to improve on as a national group?
Q. What is driving variability in trauma data?
I think it’s built into the system. There are some blurred lines in terms of what the National Trauma Data Bank expects you to follow. We have the NTDB data dictionary, but individual centers can also create their own institutional dictionary and many states also have their own data dictionary. It gets very confusing, and sometimes when you think you’re talking apples and apples, you end up actually talking apples and oranges. The dictionaries often begin on the same page, but over the years definitions do change. When there isn’t consistent training, people kind of veer from the NTDB standards. And oftentimes we fall victim to the mentality of “that’s what my predecessor did, therefore I will continue to do so.” And I think that was very evident in the findings of our study.
Q. What does this say about the trauma registrar profession?
I think many, if not most, trauma services are blessed with really good registrars, folks who are really conscientious about their data. They take a lot of pride of ownership in the work they do. They are extremely detail-oriented, hard-working folks. The problem in many cases is that they are not getting adequate support and/or consistent training. In some cases they’re not even regarded as an important member of the trauma team. And lastly, some folks are wearing five hats at the same time. They are the program manager, the registrar, the coordinator and the chief cook and bottle washer.
Q. How can trauma programs give registrars more support?
I can tell you what some of the successes here at Tufts have been. In our institution, the trauma registrar is considered not only an important part of the team, but also on a level playing field with everybody from an administrative perspective. Their input is not just sought or welcomed; it is mandatory. And it's not just about the numbers, but about what they think the issues are. Many times the registrars are the first to find gaps in care, because they're closely monitoring the record as they abstract it. At Tufts, the registrar is a salaried position as opposed to an hourly position, and that has made a huge difference. We also support the educational needs of registrars a little above the recommendation. The study speaks to the fact that we in the field of trauma obviously need some kind of structure and consistent training for registrars, something that can help propel the quality of data.
Q. What can registrars do to make trauma data more reliable?
The first thing is data validation. It's a key component of data accuracy, but many people do not really know where to begin. You don't necessarily need a whole team to do it, but you do need some way to check your own work or your coworkers' work, and it has to be built into your daily or weekly routine. That's a must. The second thing is report writing, which goes hand in hand with data validation. Learn how to run some simple reports to see whether you have any holes in your data or whether anything has been entered incorrectly. For example, someone asks for a count of how many activations you had this week. When you run the report, oftentimes you can already tell from how you queried the data that something isn't looking right. Unfortunately, a lot of people do not know how to generate reports.
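As an illustration only (registrars typically work through their vendor's report writer rather than writing raw queries), here is a minimal sketch of the two kinds of reports described above. The table and field names are hypothetical stand-ins for whatever your registry exposes.

```python
import sqlite3

# Hypothetical mini-registry; real registries expose similar fields
# through the vendor's report writer or a data export.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE registry (
        trauma_num TEXT,
        arrival_date TEXT,        -- ISO date of ED arrival
        activation_level TEXT,    -- e.g. 'Full', 'Partial', NULL if none
        gcs_total INTEGER         -- pre-hospital GCS, NULL if not recorded
    )
""")
conn.executemany(
    "INSERT INTO registry VALUES (?, ?, ?, ?)",
    [
        ("T-001", "2015-03-02", "Full", 14),
        ("T-002", "2015-03-03", "Partial", None),  # missing GCS: flag it
        ("T-003", "2015-03-04", None, 15),         # no activation recorded
        ("T-004", "2015-03-05", "Full", 3),
    ],
)

# Report 1: the "how many activations this week?" question.
count = conn.execute(
    "SELECT COUNT(*) FROM registry "
    "WHERE activation_level IS NOT NULL "
    "AND arrival_date BETWEEN '2015-03-02' AND '2015-03-08'"
).fetchone()[0]
print(f"Activations this week: {count}")

# Report 2: completeness check -- records with a hole in a required field.
for (trauma_num,) in conn.execute(
    "SELECT trauma_num FROM registry WHERE gcs_total IS NULL"
):
    print(f"Missing pre-hospital GCS: {trauma_num}")
```

Run on a daily or weekly schedule, even a two-query report like this surfaces holes in the data before they reach a submission deadline.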
Q. How do you improve report writing?
You don't get good at data validation or report writing unless you have a good grasp of your registry itself and how it works. That point is badly underemphasized in many registrar training programs. You have to have a real understanding of the database — where data goes when you enter it, where it comes out, which fields affect which fields, and so on. To understand these things, you have to establish a good rapport with your vendor, because your vendor can walk you through how things work.
Q. What other recommendations do you have for registrars?
I think one of the first things a registrar should do is sit down and look at the NTDB data dictionary. Every time a new data dictionary comes out, it is incumbent upon all of us to review the revisions carefully. Start by going to the change log and seeing what has been added, what has been deleted and what has changed. The next thing is to make sure those changes are communicated before they take effect. You may have to educate physicians or mid-level providers about how they now have to document certain information. For example, one key recent change in the NTDB data dictionary is ED time. They clarified that ED time means "time from decision to admit", not "time patient leaves the ED." That's a huge difference. It can have a big effect on PI, and it changes the ballgame for many people. It's really important to understand these definitions, and if you're unclear about a data element or its definition, pick up the phone and call the NTDB. Only once you are comfortable and confident that you know each data element definition can you begin abstracting and entering your data correctly.
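To make the ED time distinction concrete, here is a small illustrative calculation with made-up timestamps. The variable names are hypothetical, and the two definitions are as paraphrased in the interview rather than quoted from the data dictionary itself.

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
ed_arrival = datetime.strptime("2015-03-02 14:05", fmt)
decision_to_admit = datetime.strptime("2015-03-02 16:40", fmt)  # disposition decided
left_ed = datetime.strptime("2015-03-02 19:55", fmt)            # bed finally available

# Old habit: measure to physical departure from the ED.
departure_based = left_ed - ed_arrival            # 5 h 50 min

# Clarified definition (as described above): measure to decision to admit.
decision_based = decision_to_admit - ed_arrival   # 2 h 35 min

print(f"Departure-based ED time: {departure_based}")
print(f"Decision-based ED time:  {decision_based}")
```

Under the clarified definition, the hours spent boarding while waiting for a bed no longer count toward the ED interval, which is exactly the kind of shift that can move PI benchmarks.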
Q. What can trauma program leaders do to help improve data?
One solution may be to get trauma medical directors, as well as trauma program managers, more involved with the coding. Coding and AIS rules are very specific and very detailed. Getting more people educated on coding will lead to better documentation, and better documentation will lead to better data. And that will be a win-win for everybody.