Setting the Stage: AI Governance in Healthcare
The Greater Illinois Chapter of the Healthcare Information and Management Systems Society (HIMSS) recently hosted an in-person regional conference, where RCG Global Services sponsored an awards luncheon and a roundtable discussion on AI governance for healthcare. As the moderator (Dr. Rob Nelson, GM/Head of Health Sciences at RCG Global Services), I was honored to lead a vibrant conversation with preeminent healthcare leaders who are leading the way on AI governance: Mary Gagen, VP Advanced Analytics, Advocate Healthcare; Nirav Shah, MD, Infectious Disease physician, Medical Director of Quality Innovation and Clinical Practice Analytics, Outcomes Researcher, and Director of Infectious Disease Research, North Shore University Health System; and Chad Konchak, System Assistant Vice President, Data Analytics, Endeavor Health.
We quickly focused on the complexities and necessities of AI governance in healthcare. This blog summarizes the key takeaways from the roundtable, providing healthcare executives with insights and strategies for implementing robust AI governance frameworks.
Here are a few key takeaways that other healthcare executives may find useful. One thing is for sure: this is new “territory,” and organizations are still learning and evolving. But the process is not entirely new, as we already have experience with Data and Analytics, New Technology, and Innovation.
The Importance of AI Governance in Healthcare
AI governance has become a pressing priority in the healthcare sector, driven largely by the gap between the speed of adoption and the ability to manage the accompanying risks. The increasing integration of AI technologies into healthcare services necessitates a clear framework to ensure responsible use and to manage applications that arrive quickly on the scene, whether through a host of vendors or through institutional innovation. This involves understanding and mitigating risks associated with AI, such as third-party risk and the impact of new features that technology providers release unannounced. The task can seem overwhelming, but with the right focus and attention it can be addressed in a way that drives value for the organization (and reduces risk).
Establishing a Governance Structure
Creating a solid governance structure is essential for effective AI integration. At this stage there are several new frameworks, but they do not all align, so each organization needs to take a thoughtful approach to building a structure that works for it, mindful of the potential for burnout, over-governance, and redundancy with existing committees and groups. There is value in understanding and aligning (as appropriate) with strategic frameworks such as the HHS (Health and Human Services) Artificial Intelligence (AI) Strategy and adhering to guidelines from the White House Executive Order on AI. These frameworks emphasize the need for a strategy that ensures the safety and effectiveness of AI applications in healthcare.
Operating Models for AI Governance
Defining a clear operating model for AI governance helps healthcare organizations manage AI technologies effectively. This includes establishing processes for continuous monitoring, evaluation, and improvement of AI systems. Key actions involve setting up governance committees that are inclusive yet efficient, ensuring they do not overburden members who serve on multiple committees. The panel stressed the importance of having the right process and structure – not too heavy, but as with Goldilocks and the bears – “just right.”
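To make the continuous-monitoring piece of an operating model concrete, the sketch below (in Python) shows one way a team might flag a deployed model for committee review when its performance drifts past an agreed tolerance. This is an illustrative assumption rather than anything the panel prescribed; the model name, metric, and threshold are placeholders.

```python
# Illustrative sketch only: a periodic performance check a governance
# committee might require for deployed models. The metric, threshold,
# and escalation path are hypothetical placeholders.

from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringResult:
    model_name: str
    as_of: date
    metric: str
    baseline: float
    current: float
    needs_review: bool

def check_model_performance(model_name: str, metric: str,
                            baseline: float, current: float,
                            max_relative_drop: float = 0.05) -> MonitoringResult:
    """Flag a model for governance review if its metric has degraded
    by more than the agreed tolerance (here, a 5% relative drop)."""
    degraded = (baseline - current) / baseline > max_relative_drop
    return MonitoringResult(model_name, date.today(), metric,
                            baseline, current, needs_review=degraded)

# Example: a sepsis-risk model whose validation AUROC was 0.82 at go-live
# but measures 0.74 on last month's data gets routed to the committee.
result = check_model_performance("sepsis-risk", "AUROC", baseline=0.82, current=0.74)
print(result.needs_review)  # True
```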
Implementation Roadmap
Creating a comprehensive implementation roadmap is crucial for transitioning from planning to execution. This roadmap should outline steps for integrating AI into existing systems, managing AI portfolios, and ensuring all initiatives align with organizational goals. A well-defined roadmap helps in rationalizing AI initiatives, products, and vendors to optimize resources and minimize risks. The panel had different views on how formal this needs to be but stressed the importance of goals, keeping that list of goals short, clear, and SMART (specific, measurable, achievable, relevant, and time-bound).
Challenges and Strategies
Implementing AI governance comes with significant challenges. One major obstacle is taking inventory of (and categorizing and evaluating) all AI applications within an organization. Another is managing a multitude of vendors. To overcome these challenges, organizations must develop solid frameworks and strategies, drawing on examples of best practices from within the industry.
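One way to make the inventory challenge tangible is to keep a simple registry record for every AI application, whether vendor-supplied or home-grown, so the portfolio can be categorized, evaluated, and reviewed. The sketch below is a minimal illustration with assumed fields and risk tiers, not a prescribed standard.

```python
# Minimal sketch of an AI application registry entry. The fields and risk
# tiers are illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class RiskTier(Enum):
    LOW = "low"            # e.g., back-office automation
    MODERATE = "moderate"  # e.g., operational decision support
    HIGH = "high"          # e.g., tools that influence clinical decisions

@dataclass
class AIApplication:
    name: str
    business_owner: str
    vendor: Optional[str]        # None for internally built tools
    intended_use: str
    risk_tier: RiskTier
    reviewed: bool = False
    notes: List[str] = field(default_factory=list)

# The inventory itself is then just a collection the governance committee
# can filter, report on, and keep current as vendors announce new features.
registry = [
    AIApplication("ambient-documentation-assistant", "CMIO office", "ExampleVendor",
                  "draft clinical notes from visit audio", RiskTier.HIGH),
    AIApplication("no-show-predictor", "Scheduling", None,
                  "predict appointment no-shows", RiskTier.MODERATE),
]
needs_review = [app.name for app in registry
                if app.risk_tier is RiskTier.HIGH and not app.reviewed]
```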
Stakeholder Engagement
Engaging key stakeholders is vital for successful AI governance. This includes IT (Information Technology) teams, compliance officers, new-technology and innovation groups, and senior leadership. Each stakeholder plays a crucial role in ensuring that AI technologies are implemented responsibly and align with the organization’s strategic goals.
Aligning AI with Business Value
AI governance must ensure that investments in AI projects align with the organization’s business value. This involves consulting frameworks such as RCG’s proprietary framework or the NIST (National Institute of Standards and Technology) AI Risk Management Framework (AI RMF) to evaluate and manage risks. Effective governance ensures that AI initiatives contribute positively to the organization's value and objectives.
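As an illustration of what framework-aligned evaluation can look like in practice, a team might track each initiative's expected business value alongside evidence for the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The sketch below is one assumed way to record that idea; it is neither RCG's proprietary framework nor an official NIST artifact.

```python
# Illustrative tracking of an AI initiative against the four NIST AI RMF
# core functions plus a business-value statement. The record shape and
# readiness score are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import Dict

RMF_FUNCTIONS = ("govern", "map", "measure", "manage")  # NIST AI RMF core functions

@dataclass
class InitiativeAssessment:
    initiative: str
    business_value: str                                      # short, SMART-style statement
    evidence: Dict[str, str] = field(default_factory=dict)   # function -> documented evidence

    def readiness(self) -> float:
        """Fraction of RMF core functions with documented evidence."""
        return sum(1 for f in RMF_FUNCTIONS if self.evidence.get(f)) / len(RMF_FUNCTIONS)

assessment = InitiativeAssessment(
    initiative="ED triage decision support",
    business_value="Reduce average door-to-provider time by 10% within 12 months",
    evidence={"govern": "charter approved by AI committee",
              "map": "use case, context, and stakeholders documented"},
)
print(assessment.readiness())  # 0.5 -- 'measure' and 'manage' still open
```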
Responsible AI Implementation
Governance supports the responsible use of AI by setting clear priorities and domains such as AI principles, compliance, and portfolio management. Responsible AI implementation also involves integrating AI governance with enterprise risk management to ensure a cohesive approach.
Addressing Fairness, Bias, and DEIB (Diversity, Equity, Inclusion, and Belonging)
The panel viewed this as a hugely important topic. AI governance frameworks must address fairness and bias to ensure alignment with organizational values. Promoting Diversity, Equity, Inclusion, and Belonging (DEIB) within AI governance helps mitigate biases and fosters an inclusive environment.
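One concrete starting point for the fairness conversation is to compare a model's positive-prediction (selection) rate across patient groups. The sketch below computes a simple demographic-parity gap; the data, group labels, and 0.1 threshold are hypothetical, and a real bias review would consider many metrics and the clinical context behind them.

```python
# Illustrative demographic-parity check: compare how often a model flags
# patients in each group. Data, group labels, and the 0.1 threshold are
# hypothetical; a real fairness review goes well beyond a single number.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
rates = selection_rates(preds, groups)
print(rates)                    # roughly {'A': 0.67, 'B': 0.4}
print(parity_gap(rates) > 0.1)  # True -> escalate for governance review
```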
Communication and Education
Effective communication about AI governance is essential for building awareness and accountability. Organizations must develop strategies to educate stakeholders about AI governance frameworks and their importance. Clear communication helps in setting expectations and ensuring everyone is on the same page.
Key Takeaways and Lessons Learned
The roundtable concluded with several key lessons:
- Establish Clear Frameworks: Develop and implement comprehensive AI governance frameworks.
- Engage Stakeholders: Ensure all key stakeholders are involved in the governance process.
- Align with Business Goals: Make sure AI initiatives support the organization’s strategic objectives.
- Promote Responsible Use: Foster a culture of responsible AI usage through clear principles and compliance mechanisms.
- Educate and Communicate: Build awareness and understanding of AI governance across the organization.
The Road Ahead: AI's Future in Healthcare
The AI Governance Roundtable highlighted the critical need for structured governance frameworks in healthcare. By addressing challenges, engaging stakeholders, and aligning AI initiatives with business goals, healthcare organizations can effectively manage AI technologies. The insights and strategies discussed provide a roadmap for healthcare executives to strengthen their AI governance efforts, ensuring the safe, effective, and responsible use of AI in healthcare.
For more information on AI governance and related strategies, healthcare executives are encouraged to explore resources such as the HHS Artificial Intelligence (AI) Strategy, the White House Executive Order on AI, and the NIST AI Risk Management Framework (AI RMF). These resources provide valuable guidance for developing robust AI governance frameworks in the healthcare sector. Further, RCG Global Services offers consultation, frameworks, and services to help healthcare organizations navigate the complexities and essentials of AI Governance.