August 5, 2020
QI Collaborative Data Science Committee Meeting
External Attendees:
- Justin Indyk (Nationwide)
- Nana Jones (Cincinnati)
- Mark Clements (CMH)
- Todd Alonso (BDC)
Mark, Todd, and Sanj (Joslin) helped put this together.
Learned a lot from mapping 8 sites in the last two years.
Why does this need to be a standing group?
We now have a new set of measures that we are collecting this new grant year:
- Social Determinants of Care
- COVID-19
- Other issues that might come up
Important to make sure that these changes are doable for all sites.
We are now at 21 sites in the QI Collaborative (University of Miami – adults and peds)
6 adult sites, probably 7 by the end of the week.
An additional 13 sites have signed up (besides the 8 mapped sites) and are waiting to be mapped.
We have funding to bring on a total of 13 new sites; we're on track to be there by early 2021.
Todd: Difficult for sites to add new things, we should be very careful not to overburden the teams.
Mark: We designed the data spec to be much more comprehensive, knowing sites would only be able to achieve 50-60% of what the data spec calls for. It's important to understand the difference between the data spec (mapping data) and the QI data recording (derived from what's collected in the data spec). Did we give out a charter for the scope of this group? We should get feedback on the charter.
Osagie: We will send out a link to the charter after the meeting; it can also be seen on the Trello pages.
Nudrat (30,000 ft. view of Data Spec)
Todd: We should consider simplifying the A1c metric to <7% for all patients regardless of age; it's easier to communicate and in line with current recommendations. Separately, I think different records take down gender/sex differently (concerns about trans patients).
Anton: I asked Bing and it’s the legal definition, whatever is on their government-issued ID.
Justin: At Nationwide, we have a committee that's working on this to improve displays. Two separate fields: Sex (legal, administrative) and Gender (displayed) – this can be done through Epic (are other sites using it?)
Nana: Here we have the technical sex, but then there’s a preference (“prefers XYZ…”); we will probably need to choose one and stick with it.
Todd: I think it’s more of a technical question and I would defer to Anton.
Mark: It all depends on the platform; probably all looks different to everyone; requires additional mapping. I would say it doesn’t meet the 80/20 rule in my book. I think it’s important, but not as critical. Keep it on the list, but don’t prioritize.
Todd: I'm thinking of an A and a B for simplicity's sake; we need to take one of them down. Building it in as capacity allows is the easiest way to go.
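A minimal sketch of the two-field idea Justin described, assuming hypothetical field names; how each site actually stores administrative sex versus displayed gender will depend on its EHR and mapping platform:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientDemographics:
    # Hypothetical mapped record: keep both values rather than collapsing them into one column.
    patient_id: str
    legal_sex: str                          # administrative/legal value (e.g., from the government-issued ID)
    gender_identity: Optional[str] = None   # patient-reported value used for display, if collected

p = PatientDemographics(patient_id="12345", legal_sex="F", gender_identity="male")
print(p.gender_identity or p.legal_sex)     # show the displayed gender when it is available
```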
Nudrat: Primary insurance type is documented in two different ways, and the two often don't agree. Can we agree to keep insurance type in just one of those positions, and if so, which one?
Anton: Patient files are not time-stamped; we get them once a month and they go into our database. Every month the file gets written over, so we don't have any history as a result. The alternative would be to look at the encounter from the past and find what their primary insurance was at the time.
Mark: It concerns me that we’re overwriting the patient file. Are we losing the patient info for those who have graduated?
Anton: The database is growing but no one is falling off; we're not removing records, just updating them.
Todd: We want encounter-level insurance data; younger patients are more likely to go on Medicaid than older ones (the change over time is nice to track).
Mark: Encounter-level info helps us map from one particular encounter to the next.
Nudrat: Fair to remove primary, secondary, tertiary insurance types?
Mark: Yes, I like that.
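A minimal sketch of the encounter-level approach discussed above, assuming a hypothetical table layout: keeping insurance type on each encounter row (rather than overwriting one patient-level field every month) preserves the history needed to track changes such as a move onto Medicaid.

```python
import sqlite3

# Hypothetical schema: insurance type lives on the encounter row, so monthly refreshes
# append new encounters instead of overwriting a single patient-level field.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE encounters (
        patient_id     TEXT,
        encounter_date TEXT,
        insurance_type TEXT
    )
""")
conn.executemany(
    "INSERT INTO encounters VALUES (?, ?, ?)",
    [
        ("12345", "2019-11-02", "Commercial"),
        ("12345", "2020-05-14", "Medicaid"),   # the change over time is retained, not lost
    ],
)

# What was the primary insurance at a past encounter? Recoverable because nothing was overwritten.
for row in conn.execute(
    "SELECT encounter_date, insurance_type FROM encounters "
    "WHERE patient_id = ? ORDER BY encounter_date",
    ("12345",),
):
    print(row)
```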
Nudrat: We are doing well with encounter date; not as well on class (some sites provide it, others don't).
Anton: Nationwide hadn't been reporting class until last month; we started ramping up in mid-March. I don't know how we can go back and get the earlier info.
Nudrat: Is that of interest to the group?
Mark: It doesn't break the 80/20 rule; it happened so infrequently that it's not a priority.
Nudrat: Codes for depression screenings and A1c: we're not really doing well reporting depression screenings; we're doing better with A1c mapping. What can we do to improve?
Todd: We went to the PHQ-8 to avoid the suicide question; I don't know whether we'll use the PHQ-8 or PHQ-9 more going forward.
Mark: CMH uses the PHQ-8 with a separate suicide question; I suspect most people aren't recording these scores in their medical records.
Mark: We're pulling all the numbers from REDCap; all the data we've sent over time is from REDCap. Some sites might be in the same boat, not getting it into the medical record because it's in another electronic system.
Todd: We do both. We don't have a psychologist and don't have enough staff to do suicide screening effectively; we have the same issues with depression screening.
Mark: CMH is in the process of getting it out of REDCap and into the medical record.
Nudrat: We need to ask all sites how they're collecting their screening data and the best ways to map it.
Todd: Is it mapped? Is it in the EMR? (re: screenings)
Nudrat: Date of diagnosis is needed for all records. Do we push to get this information? How much can we trust it? How can we do better at collecting and mapping it?
Todd: We don't have a good electronic record for date of diagnosis; the default is "today." Anyone could delete a problem or put in a new one, and the date will default to today unless it's manually changed (lots of work on the backend). What's the best source we have, then the next best, then the next? How do we handle info from other sites when we get new patients from other clinics? The lines get blurred by whether they got adequate diabetes education at diagnosis at the first clinic and when they came to the new clinic. "Pre-diabetes" cases also blur the lines for date of diagnosis.
Mark: We reformatted all our documentation to match the data spec as best we could. We have a reliable place to put date of diagnosis, but human error could still get it wrong; we need clinic observations to confirm and validate.
Todd: We do the same thing. I have colleagues who notoriously don't put things on the problem list. Getting sites to report a best date needs to be front and center; Mark's flowsheet with the required fields is an excellent recommendation.
Justin: We don't record full dates, just month and year, though those are reliable.
Todd: Would be a good thing coming out of this group if we could provide a “Top Ten List” of best practices for data mapping.
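A minimal sketch of the "best we have, then next best" idea Todd raised for date of diagnosis, assuming hypothetical source names; the actual ranking of sources would be up to each site.

```python
from datetime import date
from typing import Optional, Tuple

def best_diagnosis_date(
    flowsheet_date: Optional[date],          # dedicated, validated field (most reliable when present)
    problem_list_date: Optional[date],       # may have defaulted to "today" when the entry was created
    month_year: Optional[Tuple[int, int]],   # (year, month) only, as at sites that record no day
) -> Optional[date]:
    # Hypothetical precedence: take the best available source first, then the next best.
    if flowsheet_date:
        return flowsheet_date
    if problem_list_date:
        return problem_list_date
    if month_year:
        year, month = month_year
        return date(year, month, 1)          # convention: first of the month when only month/year exist
    return None                              # leave the gap rather than guessing

print(best_diagnosis_date(None, None, (2018, 6)))  # 2018-06-01
```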
Mark: There is some underreporting of extreme hyperglycemia; the DKA rate should be higher than that. If a patient had their DKA encounter at another hospital, it should still be captured on the patient report, but if we don't ask the question, it won't get recorded. At CMH, 95% of our patients are in-town, which makes this easier to capture.
Todd: It's harder for us because our patients are spread throughout the state. Many younger/at-risk patients come from Denver Health; we don't know which ones we're missing in that report.
Nudrat: 1-2% sounds accurate?
Todd: Yes, we're at about 1-2% per patient-year (since the last visit). The field is filled in on the chart 70-80% of the time; the ones not filled in were either not asked or negative. Some colleagues fill it in before the visit, others automate it ("Have you been to the hospital since we've seen you last? Did you have any scary lows?"). I'm afraid we might be underreporting. More CGM use has brought it down, but many events are still not being recorded. How can we improve documentation? If you do any validation, I would love to have that as a conversation for our group.