Solving for Data Integration, Regulations, and Model Harmony to Maximize the Benefits of AI in Healthcare

The race to valuable and responsible AI in healthcare is on. Providers, physicians, and patients are interested, curious, and largely supportive. The prevailing perception is not that AI will replace or diminish the role of doctors; on the contrary, it will lighten their load. In his book “Deep Medicine”, Dr. Eric Topol, a renowned cardiologist, describes AI as a necessary apprentice to physicians. With AI, duplicative testing and imaging decrease, and the accuracy of diagnosis increases.

No two patients are alike. True patient care requires an individualized assessment of each patient’s unique symptoms, medical history, environment, habits, cardiac function, blood work, diet, microbiome, and even genome. No human can crunch this interconnected data to arrive at valuable, actionable health insights, and the siloed specializations of today’s practices make it nearly impossible. But AI removes these boundaries and makes true integrative medicine a possibility. According to Topol, AI will give physicians time back. It will pull them away from screens and put them back in front of the patient, making human connection, especially in task- and protocol-heavy specialties like radiology and pathology. It can also augment physicians where care is scarce, as in dermatology and mental health.

Although this future is within reach, some consideration of approach, alignment, and anticipation is necessary. The field is moving fast, but in many directions. Data, regulation, and narrow learning are the backbone. Sharpening how we address them can be the difference between healthy and unhealthy AI.

From data consolidation to federation

AI needs data, and the data has to be diverse. As organizations explore integrating data from other parties, they run into walls of obstacles. Most data integration efforts between organizations stall over contract negotiations. This is common across industries, but in healthcare it is magnified: securing ownership and guaranteeing monetization are both at play. Without a clear understanding of what those terms should be, most parties step back and are left without the piece that completes their work. The result is an unscalable solution or a less-than-intelligent outcome.

Even when negotiation is bypassed, heterogeneous governance, protocols, and format limitations spring up. Some standards exist, and every measure to maintain security and privacy is necessary, but the problem runs deeper: it is rare that two sources have coordinated data structures and methods, whether within an organization or across organizations.

Data integration via ETL (extract, transform, and load) is the most common approach, but it is not the most effective or scalable; it is complex and slow. An alternative is data federation, a form of data virtualization in which data does not leave its source. Instead, it is aggregated virtually through an added layer that serves as a common interface. Cisco notes that data federation and virtualization can generate significantly faster analytics and intelligence while saving as much as 75% compared to data replication and consolidation. Enforcing common governance is not necessary: each source maintains its own governance while protecting individuals’ privacy and security.
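To make the contrast with ETL concrete, here is a minimal sketch of a federated access layer in Python. It assumes a hypothetical common interface (PatientRecordSource) that each source adapts to, and a virtual layer (FederationLayer) that fans a single request out to every registered source and merges the results on the fly, so no data is copied into a central warehouse. The names and the stubbed sources are illustrative only, not a reference to any specific product or standard.

```python
from abc import ABC, abstractmethod


class PatientRecordSource(ABC):
    """Common interface each participating source adapts to; data stays at the source."""

    @abstractmethod
    def query(self, patient_id: str) -> list[dict]:
        ...


class ImagingSource(PatientRecordSource):
    """Stand-in for a radiology system; a real adapter would call its native API here."""

    def query(self, patient_id: str) -> list[dict]:
        return [{"type": "imaging", "patient": patient_id, "finding": "no acute abnormality"}]


class LabSource(PatientRecordSource):
    """Stand-in for a lab information system."""

    def query(self, patient_id: str) -> list[dict]:
        return [{"type": "lab", "patient": patient_id, "panel": "CBC", "flag": "normal"}]


class FederationLayer:
    """Virtual aggregation layer: one request fans out to every registered source,
    and results are merged per query instead of being replicated into a warehouse."""

    def __init__(self) -> None:
        self._sources: list[PatientRecordSource] = []

    def register(self, source: PatientRecordSource) -> None:
        self._sources.append(source)

    def patient_view(self, patient_id: str) -> list[dict]:
        view: list[dict] = []
        for source in self._sources:
            view.extend(source.query(patient_id))
        return view


layer = FederationLayer()
layer.register(ImagingSource())
layer.register(LabSource())
print(layer.patient_view("patient-001"))
```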

Creating and owning the virtual layer that manages access to data sources requires planning and work. It is easier within a single organization. For interorganizational data federation, the role of a trustee or broker may become necessary. That can be a government agency or a non-profit organization, and any organization with valuable datasets can participate. A trustee is an independent party that helps make valuable connections for the benefit of healthcare in general. Access to data is incentivized per use or by attributing the value each dataset creates.
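The broker role can be sketched in the same spirit. In the hypothetical example below, a DataTrustee holds only query functions registered by member organizations, never the data itself, and tallies each dataset’s use so value can be attributed back to its contributor. The class, the method names, and the simple share-of-queries attribution rule are assumptions for illustration, not a prescribed scheme.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class DataTrustee:
    """Hypothetical broker mediating access to datasets registered by member organizations."""

    datasets: dict[str, Callable[[str], dict]] = field(default_factory=dict)
    usage: Counter = field(default_factory=Counter)

    def register_dataset(self, org: str, query_fn: Callable[[str], dict]) -> None:
        # Each organization keeps its data; it only hands the trustee a query function.
        self.datasets[org] = query_fn

    def query(self, patient_id: str) -> dict:
        # Fan the request out and tally usage so value can be attributed per use.
        results = {}
        for org, query_fn in self.datasets.items():
            results[org] = query_fn(patient_id)
            self.usage[org] += 1
        return results

    def attribution_report(self) -> dict[str, float]:
        # Share of total queries answered by each contributor, as a basis for incentives.
        total = sum(self.usage.values()) or 1
        return {org: count / total for org, count in self.usage.items()}


trustee = DataTrustee()
trustee.register_dataset("clinic_a", lambda pid: {"labs": "normal"})
trustee.register_dataset("imaging_center_b", lambda pid: {"imaging": "clear"})
trustee.query("patient-001")
print(trustee.attribution_report())  # {'clinic_a': 0.5, 'imaging_center_b': 0.5}
```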

Keeping AI innovation and regulation in lockstep

Regulators believe AI innovators are moving too fast, and AI innovators think regulators are moving too slowly. Regardless of who is right, the pace has to align. The European Union (EU) reached agreement on the AI Act in December 2023 to regulate the use of AI and manage the risk levels it poses to users. It is the first targeted AI regulation in the world. It was initially proposed in April 2021, which means more than two years of negotiation. Other countries, especially the USA, will follow. But at this pace, AI innovation is always going to be ahead.

Innovators and regulators both agree that innovators bear a responsibility to support regulation, help advance it, and bring it up to speed. Things are good now: positive sentiment and positive results. When something goes wrong, and it is bound to happen, regulations will become stricter and more limiting. The benefits outweigh the risks today, but one harmful incident, especially in healthcare, can flip that balance. Reported error rates for AI, both overall and in specific healthcare diagnoses, are below 5%, which beats those of doctors. People, however, are more forgiving of other people than of machines.

There is a flood of experimentation with AI. While it is good to test different implementation methods and options, it also creates noise. Besides, while technology systems are relatively cheap, AI models and data training are not, and funding will be spread thin. Broader, enterprise-wide, value-driven guidelines and monitoring may be needed to help support the good efforts and weed out the bad ones.

The future harmony of multiple AI models

Healthcare organizations are still navigating their first implementations and uses of AI. Healthcare’s immediate benefits from AI are narrow and deep. Each function will have its own AI solution to support clinicians: radiology analysis, skin diagnosis, eye diagnosis, blood analysis, genome analysis, cardio analysis, and more. Any practice will have dozens of these AI models running in conjunction; larger institutions like hospitals will probably run hundreds. Without proper coordination of these multiple models, the benefits will not be realized.

Now is the time to start planning and arranging for multi-model environments. This will require more than an information technology (IT) department or an enterprise architect. Just as multiple departments and teams are connected and aligned under an organizational culture, multiple AI models in a single environment will need harmony. Eventually many, or all, of the different models will converge to offer holistic and personalized care, but that is involved and will take more time. Establishing and maintaining harmony is the more immediate need.
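As a rough illustration of what such coordination could look like in software, the hypothetical registry below routes a patient record only to the narrow models whose required data it contains and returns their findings side by side for the clinician. The interface, the model names, and the relevance rule are assumptions made for the sketch, not a description of any existing system.

```python
from typing import Protocol


class ClinicalModel(Protocol):
    """Shape each narrow model adapter is assumed to expose."""

    name: str
    required_fields: tuple[str, ...]

    def assess(self, patient_record: dict) -> dict: ...


class RetinaScreeningModel:
    name = "retina_screening"
    required_fields = ("retinal_image",)

    def assess(self, patient_record: dict) -> dict:
        # Placeholder finding; a real adapter would call the vendor's model here.
        return {"risk": "low"}


class BloodPanelModel:
    name = "blood_panel"
    required_fields = ("cbc",)

    def assess(self, patient_record: dict) -> dict:
        return {"flag": "none"}


class ModelRegistry:
    """Coordination layer: runs only the models whose required data is present
    and returns their findings side by side for the clinician."""

    def __init__(self) -> None:
        self._models: list = []

    def register(self, model: ClinicalModel) -> None:
        self._models.append(model)

    def consult(self, patient_record: dict) -> dict[str, dict]:
        return {
            model.name: model.assess(patient_record)
            for model in self._models
            if all(key in patient_record for key in model.required_fields)
        }


registry = ModelRegistry()
registry.register(RetinaScreeningModel())
registry.register(BloodPanelModel())
print(registry.consult({"cbc": {"hgb": 13.5}}))  # only the blood model runs
```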
