Dr. Tom Karson spoke first, informing us that globally, we are in the zettabyte era, and we cannot do anything with this volume of data in healthcare without big data analytics. Genomic sequencing and epigenomic analysis were given as examples of big data that medical research and practice need to generate and manage daily.
Dr. Karson clarified the reality in our healthcare systems: there is no governance or standardisation of the data sets that individual groups within the provincial health system collect and maintain, and this is the lowest step of the DELTA five-stage maturity model for data analytics. Dr. Karson's point was that we need to evolve through this maturity model to make better use of the data, but that we cannot do so without, in parallel, establishing and maturing governance over this data.
Additionally, Dr. Karson insisted that we must develop, recruit, and educate the appropriate talent pool to manage big data, people able to turn data into information, and then into insight.
Julie Lockner from Informatica followed to discuss how we prepare our data centres for big data. The questions came up around pure capacity, security and governance, and obtaining the skills needed to manage these systems.
When asked what is stopping people from dealing with big data better, her answer was: "Time constraints on business analysts and lack of skills for staff in how to manage big data."
We were next introduced to the concept of Hadoop, an Open Source solution that allows for real-time massive data processing on standard hardware platforms.
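For readers unfamiliar with how Hadoop distributes work, its core programming model is MapReduce: a map phase emits key-value pairs from raw records, and a reduce phase aggregates them per key. The sketch below is a minimal, single-machine illustration of that model in plain Python (not Hadoop's actual API); the record contents are hypothetical.

```python
from collections import defaultdict

def map_phase(records):
    """Map step: emit a (token, 1) pair for every token in every record,
    e.g. tallying diagnosis codes scattered across clinical records."""
    for record in records:
        for token in record.split():
            yield (token, 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts for each key, as Hadoop's reducers
    would do after the framework groups pairs by key."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

# Hypothetical example input: two tiny "records" of diagnosis tokens.
records = ["flu cold", "flu"]
counts = reduce_phase(map_phase(records))
# counts == {"flu": 2, "cold": 1}
```

The appeal for healthcare-scale data is that the map and reduce steps are independent per record and per key, so Hadoop can spread them across many commodity machines.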
Our last speaker on this topic was Rachel Debes, a biostatistics researcher from Cerner. Rachel stated that the two biggest drivers towards big data solutions are electronic medical/health records and the emergence of an accountability framework for the Canadian healthcare system. I would suspect that she is overlooking medical research requirements and data generation/analysis, but I'll assume she's targeting the clinical administrative audience here.
ADM Kislock asked the panel "is big data bad?" and the response was that it is not, but it's all about the governance and skills to handle that big data responsibly and effectively.
A question came up from the audience as to whether the protections we put in place around big data in the possession of healthcare are nullified by patients and the general population freely placing health and healthcare information in the public domain via social media, where it can be mined by anyone willing to invest in doing so.
My thoughts are that people will place this information in the public domain along with all kinds of other things for which, had that data been placed into government care, we would be held accountable. The fact that people are irresponsible with their information, or that individuals don't feel certain information is actually "private", doesn't absolve us of our responsibility to protect the data given into our care. If the public voice eventually changes the definition of what is "private" or "personal information", then we will adapt our levels of governance accordingly.
Dr. Karson provided an answer to this question that mostly aligned with my thoughts, and cited the regulations we work under in the healthcare industry.
Dan Gonos from HP asked the panel their thoughts on the challenges with mining unstructured data. Dr. Karson answered that unstructured data is best mined if you have discrete unstructured data and you understand the data sources, so that the algorithms can be modified to assume contexts. Julie added that certain vernacular can complicate free-form data, further to Dr. Karson's point.
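Julie's point about vernacular can be made concrete: free-form clinical text often uses informal terms for conditions, so a miner must normalise vocabulary before matching. The toy sketch below assumes a small hand-built synonym map (the terms and the `normalise` helper are my own illustration, not anything the panel described).

```python
# Hypothetical synonym map: informal vernacular -> formal clinical term.
VERNACULAR = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
}

def normalise(note: str) -> str:
    """Lowercase a free-text note and replace known vernacular phrases
    with their formal equivalents, so downstream matching sees one term."""
    text = note.lower()
    for informal, formal in VERNACULAR.items():
        text = text.replace(informal, formal)
    return text

note = "Patient reports a prior heart attack and high blood pressure."
# normalise(note) ->
# "patient reports a prior myocardial infarction and hypertension."
```

In practice this kind of mapping is exactly where Dr. Karson's caveat bites: the substitutions are only safe if you understand the data source well enough to know the local vernacular and its context.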
- Posted using BlogPress from my iPad
Location: Hwy 97 S, Kelowna, Canada