Over the past two days, I attended and presented at the 3rd Annual Patient Registries Summit in Philadelphia. It drew a wide variety of attendees from healthcare and pharma, along with disease-specific and patient advocacy organizations. Some highlights below:
Alex Cole of Alexion Pharmaceuticals spoke about global regulatory perspectives on registries. A key point for organizations building data repositories was to differentiate between clinical trial registries and disease-specific registries: the first is developed with the analysis in mind, while the second is developed without knowing what analysis will be required once the data is available.
That’s a really important distinction, and it relates to a fundamental change in the approach to big data. The historical model for integration followed ETL; the new model follows ELT. That is, previously when data was to be aggregated, it was extracted, transformed, and loaded into a predetermined data model. As a result, the data was bound within an application in a specific format, making it difficult later to blend with other data or to answer questions that were only thought of after the data model was set in place. In the new model, data is extracted and loaded into a data repository (a data lake), and the transformation into the proper data model happens when questions need to be answered. This allows the data to be arranged in a variety of models, whether a relational database, a graph database, or something else, opening access to all of the data at any time rather than bounding it.
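The ELT pattern can be sketched in a few lines. This is a minimal illustration with made-up sample records, not any particular registry's pipeline: raw records are landed in the "lake" untouched, and only when a question arrives are they shaped into a relational model to answer it.

```python
import json
import sqlite3

# Extract + Load: land raw records in the "lake" as-is,
# deferring any schema decisions (hypothetical sample data).
raw_records = [
    '{"patient_id": 1, "event": "visit", "site": "A", "year": 2017}',
    '{"patient_id": 2, "event": "lab", "site": "B", "year": 2018}',
    '{"patient_id": 1, "event": "lab", "site": "A", "year": 2018}',
]
lake = [json.loads(r) for r in raw_records]  # no model imposed yet

# Transform at query time: shape the raw data into a relational model
# only when a question needs answering -- here, "events per site".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (patient_id INT, event TEXT, site TEXT, year INT)")
conn.executemany("INSERT INTO events VALUES (:patient_id, :event, :site, :year)", lake)
events_per_site = dict(
    conn.execute("SELECT site, COUNT(*) FROM events GROUP BY site").fetchall()
)
print(events_per_site)  # {'A': 2, 'B': 1}
```

If tomorrow's question called for a graph model instead, the same raw records in `lake` could be loaded into a graph database with no change to how the data was collected.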
Aliza Fink of the Cystic Fibrosis Foundation presented on informing and improving care using registries and mentioned a 2017 initiative whereby they reached out to patients with the disease to ask them what questions they had that might be able to be answered through research of the information contained in the registry.
While this sounds simple, it has not been the norm to date, as it’s overwhelming enough to try to answer the questions the researchers have. What’s important here from a data integration and data management standpoint is this: if you have a data model where access to the data is easy, and you then layer on an analytics application that can report on a myriad of combinations of that data, you can answer all kinds of questions, both those you have now and those that will come in the future. But if the data model is bound within an analytics application, you’re restricted to the questions the pre-configured queries allow against the pre-configured data model. So this is a good example of how, with untethered access to all of the data for review by any application, you can answer questions that may not have been thought of when the registry was designed.
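To make the contrast concrete, here is a small sketch with hypothetical records: a "pre-configured" question the registry was designed around, followed by a new question nobody anticipated, answered from the same open data with no schema change.

```python
from collections import Counter

# Hypothetical raw registry records kept in an open, application-neutral form.
lake = [
    {"patient_id": 1, "event": "visit", "year": 2017},
    {"patient_id": 2, "event": "lab",   "year": 2018},
    {"patient_id": 1, "event": "lab",   "year": 2018},
]

# The question the registry was designed to answer: events per year.
events_per_year = Counter(r["year"] for r in lake)

# A question that arrived later: events per patient. Because the data
# isn't locked inside one application's model, it can simply be
# reshaped on demand.
events_per_patient = Counter(r["patient_id"] for r in lake)

print(events_per_year)     # Counter({2018: 2, 2017: 1})
print(events_per_patient)  # Counter({1: 2, 2: 1})
```

Had the data lived only inside an analytics tool's fixed per-year report, the per-patient question would have required a rebuild of the model rather than a one-line query.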
Registries can last for a long time. Over that time frame, some questions are answered, but many more are asked. The endpoints we thought we were driving toward will change. Furthermore, technology isn’t standing still. All of this together means that a registry will evolve, and a flexible, data-centric approach is key.
A few of the other interesting topics of discussion included:
- The ability to start linking all kinds of disparate data sources together for analysis, such as the inpatient database, claims database, transplant registry, and more, as well as how to allow for interoperability between registries
- How patient information entered at home can often be more reliable than the information provided in a clinical setting as the patients (or caregivers) are less stressed and have more time to think about the correct answer
- How harmonizing outcomes measures is a way to promote the validity and use of registry data, giving it credibility for research and to patients
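The first of those topics, linking disparate sources, boils down to joining records on a shared patient identifier. Here is a minimal sketch with invented sample data (the source names mirror the examples above; the `link_sources` helper is hypothetical):

```python
# Hypothetical extracts from three disparate sources, each keyed by a
# shared patient identifier.
inpatient = {101: {"admissions": 2}, 102: {"admissions": 1}}
claims = {101: {"total_paid": 5400.0}, 103: {"total_paid": 900.0}}
transplant = {101: {"organ": "lung"}}

def link_sources(patient_id, *sources):
    """Merge whatever each source knows about one patient into a
    single linked record."""
    linked = {"patient_id": patient_id}
    for source in sources:
        linked.update(source.get(patient_id, {}))
    return linked

record = link_sources(101, inpatient, claims, transplant)
print(record)
# {'patient_id': 101, 'admissions': 2, 'total_paid': 5400.0, 'organ': 'lung'}
```

In practice the hard part is not the join itself but agreeing on the identifier and the field definitions across registries, which is exactly where the harmonization and interoperability discussions come in.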
Data is now being looked at as the “new oil” or “new currency” and gaining access to and blending it from a variety of sources can help to improve the quality and outcomes of patient care.
How have you used data to improve patient outcomes?
Until next time,