The adoption of Artificial Intelligence (AI) in healthcare has rapidly accelerated, bringing with it both unprecedented opportunities and significant challenges. For wound care specifically, AI promises to revolutionize everything from assessment accuracy to prior authorizations.
To support its safe and responsible deployment, The Joint Commission and the Coalition for Health AI (CHAI) have released guidance outlining seven core elements for the responsible use of AI tools in healthcare delivery settings.
We’ve broken down what these guidelines mean for wound care clinicians and how you can practically incorporate these elements as you consider adopting AI, whether for a small practice or a large organization.
1. AI Policies and Governance Structures
Healthcare organizations evaluating AI need to adopt and implement a systematic approach to the implementation, evaluation, and use of AI tools. A quick way to start is to establish a “Triage, Test and Train” framework: triage which tools can support the most critical needs; test with champion users to refine your processes; then train all users, incorporating AI education (see #7). Large organizations can evolve this into a formal AI committee, while smaller organizations can designate technology champions responsible for educating all team members.
2. Patient Privacy and Transparency
Ask technology providers how and where patient data is being used. Is patient data anonymized and de-identified? How do they use patient data to train their models?
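For readers curious what de-identification actually means in practice, here is a toy sketch of the basic idea: removing direct identifiers from a record before it is used for anything beyond care delivery. The field names and data are illustrative assumptions only, and real HIPAA de-identification covers many more identifiers than shown here.

```python
# Toy illustration of de-identification. Field names are assumptions;
# real HIPAA de-identification (e.g., Safe Harbor) removes far more
# identifiers than this simplified example shows.

DIRECT_IDENTIFIERS = {"name", "mrn", "date_of_birth", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

visit = {
    "name": "Jane Doe",
    "mrn": "123456",
    "date_of_birth": "1950-04-02",
    "address": "1 Main St",
    "wound_type": "venous ulcer",
    "wound_area_cm2": 4.2,
}

print(deidentify(visit))
# {'wound_type': 'venous ulcer', 'wound_area_cm2': 4.2}
```

A good vendor should be able to explain, in similarly concrete terms, exactly which identifiers are removed and at what point in their pipeline.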
3. Data Security and Data Use Protections
Ensure AI vendors hold appropriate certifications demonstrating their commitment to safe data usage. Look for vendors that are HIPAA compliant and can provide evidence of relevant security certifications, such as SOC 2 Type II. These certifications provide independent verification that patient information is protected against unauthorized access and use.
4. Ongoing Quality Monitoring
Understand how AI vendors regularly monitor the performance of their AI-enabled tools. Ask questions like: “How often do you audit the model’s accuracy?” or “What is your process for reporting a decline in performance?”
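To give a sense of what such an audit involves behind the scenes, here is a minimal, hypothetical sketch of a periodic accuracy check against clinician-confirmed labels. The baseline, threshold, function names, and data are all illustrative assumptions, not any specific vendor’s monitoring process.

```python
# Hypothetical sketch of a periodic model-accuracy audit.
# Baseline, threshold, and data below are illustrative assumptions.

BASELINE_ACCURACY = 0.92   # accuracy measured at deployment (assumed)
ALERT_THRESHOLD = 0.05     # flag if accuracy drops more than 5 points

def audit_accuracy(predictions, ground_truth):
    """Compare AI outputs to clinician-confirmed labels for one audit period."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(predictions)

def check_for_decline(predictions, ground_truth):
    accuracy = audit_accuracy(predictions, ground_truth)
    declined = (BASELINE_ACCURACY - accuracy) > ALERT_THRESHOLD
    return accuracy, declined

# Example: one month of wound-classification results (illustrative data)
preds = ["venous", "diabetic", "pressure", "venous", "diabetic"]
truth = ["venous", "diabetic", "pressure", "arterial", "diabetic"]

accuracy, declined = check_for_decline(preds, truth)
print(f"Audit accuracy: {accuracy:.0%}; decline flagged: {declined}")
```

A vendor with a mature monitoring program should be able to tell you how often this kind of check runs, what the alert threshold is, and who gets notified when performance declines.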
5. Voluntary, Blinded Reporting of AI Safety-Related Events
Treat AI safety incidents the same way you would treat any patient safety event: if an AI tool provides an incorrect recommendation that could potentially harm a patient, handle it as a near-miss or adverse event. Report these events, blinded, to Patient Safety Organizations (PSOs) or to venues like CHAI’s Public Registry: https://www.chai.org/workgroup/public-registry.
6. Risk and Bias Assessment
Bias is a major risk in image-based AI. It is vital to ask how AI algorithms are tested across different demographics to account for inherent bias. Ask vendors to provide data on how their algorithms perform on patients with diverse skin pigmentation (e.g., accurately detecting erythema/redness on darker skin), across age groups, and across co-morbidities and wound etiologies, such as diabetes-related ulcers. Ensure the tool works reliably across your entire patient population.
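To make this concrete, here is a minimal, hypothetical sketch of the kind of subgroup (stratified) analysis you might ask a vendor to share. The grouping, metric, and data are illustrative assumptions only.

```python
# Hypothetical sketch of a subgroup (stratified) performance check.
# Groups, metric, and data are illustrative assumptions only.

from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, prediction, ground_truth) tuples."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

# Example: erythema detection stratified by skin type (illustrative data)
records = [
    ("lighter skin", "erythema", "erythema"),
    ("lighter skin", "none",     "none"),
    ("darker skin",  "none",     "erythema"),  # a miss on darker skin
    ("darker skin",  "erythema", "erythema"),
]

for group, acc in accuracy_by_group(records).items():
    print(f"{group}: {acc:.0%} accuracy")
```

Large gaps in performance between groups are exactly the signal to press a vendor for more validation data before deploying the tool with your patient population.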
7. Education and Training
The technology provider must offer appropriate onboarding and training to ensure that users at all levels are educated on the AI’s functionality and limitations. Ensure all staff, from the front-line nurse using the camera to the administrator reviewing the reports, receive mandatory, role-specific training. Emphasize that AI tools are decision-support systems, not replacements for clinical judgment. Clinicians always have the final say.
Evolving with Responsible Innovation
The responsible integration of AI is not just about avoiding risk; it’s about building trust. By applying these seven core elements to your due diligence process, wound care clinicians can ensure that new technologies genuinely enhance patient outcomes, improve workflow efficiency, and uphold the highest standards of safety and ethics.
Swift Medical is proud to deliver responsible AI. Ask us how we deliver in each of these categories.