The word “insight” means different things to different people, though the dictionary defines it as “the capacity to gain an accurate and deep intuitive understanding of a person or thing.”
Some believe eye-catching charts and visuals are the best way to gain insights. In fact, data visualization tools are just one piece of a comprehensive business intelligence strategy. You first have to be sure the data supporting the visuals is complete and of high quality. Dirty or incomplete data can turn your “insight” into a trap with devastating impacts on research, treatment, or operations: a textbook “garbage in, garbage out” situation.
Generating a continuous stream of insights requires access to data repositories that can be combined with other sources so the information can be interpreted in useful ways, which in healthcare and life sciences means improving operations, improving patient care, and even lowering costs. Accurate analytics, whether descriptive, predictive, or prescriptive, starts with a healthy data architecture.
In most cases I have seen, the data requires significant cleaning and harmonizing before it can be trusted or analyzed for insights. This is especially true when managing clinical trial data or sensitive patient data, where security and compliance are critical. Yet most organizations want to jump straight into predictive analytics and regression modelling across millions of data points before taking a hard look at the quality and scalability of their data infrastructure.
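To make that cleaning and harmonizing step concrete, here is a minimal sketch in Python with pandas. The dataset, column names, and conversion rules are hypothetical illustrations, not drawn from any specific trial; real pipelines would add validation, audit logging, and de-identification on top of steps like these.

```python
import pandas as pd

# Hypothetical raw extract: two sites reporting the same lab values
# with inconsistent patient IDs, mixed units, and duplicate rows.
raw = pd.DataFrame({
    "patient_id": ["P-001", "p001", "P-002", "P-002"],
    "glucose": [5.5, 99.0, 6.1, 6.1],            # mixed mmol/L and mg/dL
    "glucose_unit": ["mmol/L", "mg/dL", "mmol/L", "mmol/L"],
})

# Harmonize patient identifiers to one canonical format.
raw["patient_id"] = (
    raw["patient_id"].str.upper().str.replace("-", "", regex=False)
)

# Convert all glucose values to a single unit (mg/dL).
MGDL_PER_MMOL = 18.0
is_mmol = raw["glucose_unit"] == "mmol/L"
raw.loc[is_mmol, "glucose"] *= MGDL_PER_MMOL
raw["glucose_unit"] = "mg/dL"

# Drop exact duplicates introduced by overlapping site exports.
clean = raw.drop_duplicates().reset_index(drop=True)
print(clean)
```

Even this toy version shows why the work matters: before harmonization, the same patient appears twice under different IDs and the glucose values cannot be compared at all.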
In Healthcare, the holy grail is blending clinical, operational and financial data together; adding IoT data from devices makes it that much harder to gain insights if the proper data foundation hasn’t been laid. In Life Sciences, blending sales, operational, research, public health, and clinical trial data can bring real-time insights that speed drugs to market and allow for proper surveillance.
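A hedged sketch of what such blending can look like once the foundation is in place, again using hypothetical tables keyed on the harmonized patient ID from the previous example:

```python
import pandas as pd

# Hypothetical tables: clinical encounters and IoT device readings.
clinical = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "diagnosis": ["T2DM", "HTN"],
})
devices = pd.DataFrame({
    "patient_id": ["P001", "P001", "P002"],
    "avg_daily_steps": [4200, 4800, 9100],
})

# Aggregate the device stream per patient, then blend it into the
# clinical view so both can be analyzed together.
activity = devices.groupby("patient_id", as_index=False)["avg_daily_steps"].mean()
blended = clinical.merge(activity, on="patient_id", how="left")
print(blended)
```

The join only works because the identifiers were harmonized first; without that foundation, the device readings would silently fail to match their patients.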
Integrating and connecting large volumes of data across a growing number of disparate sources is not an easy task for most organizations. Historic on-premise software models cannot address both on-premise and cloud-based data sources in the new data economy. Cloud-based data Platform as a Service models allow for seamless, compliant integration and harmonization of all types of information, flipping the familiar ratio in which data scientists spend 80% of their time on data wrangling and only 20% on the analytics that inspire insights, so that they can instead spend 80% of their time analyzing the data and generating insights.