Hospitals, clinics, and public health agencies across the United States are deploying AI tools to draft clinical notes, summarize patient histories, flag potential drug interactions, and support triage decisions. State policymakers may be tempted to restrict the use of AI in healthcare out of concern for patient safety. But avoiding AI is not the same as making it safe. Policymakers who want to protect patients while ensuring clinicians can use tools that improve care should look to Utah, whose regulatory sandbox shows how to maximize patient access to beneficial tools while minimizing clinical risk.
In 2024, the state legislature passed the Artificial Intelligence Policy Act, which created the Office of Artificial Intelligence Policy (OAIP) and authorized it to run a regulatory sandbox where companies can apply for temporary relief from certain state rules to test AI systems under government supervision. Rather than restricting AI upfront, the sandbox gives the state a way to evaluate how these tools perform in practice and build regulation around evidence rather than assumptions.
A pilot within the sandbox shows how this approach works in practice for one important application area: routine prescription renewals. Doctronic, a health technology platform, is using an AI system under a state-approved regulatory mitigation agreement to let patients with chronic conditions renew certain prescriptions at participating pharmacies. Instead of waiting for a physician’s office to manually review and approve a refill request—a delay that can lead patients to miss doses—patients scan a QR code at the pharmacy counter to begin an AI-guided screening. The system verifies the patient’s identity and checks their medication history using a nationwide prescription data network used by pharmacies and doctors. If the request meets the state’s safety criteria for one of roughly 190 eligible low-risk medications, the system authorizes the refill for the pharmacist within minutes.
This sandbox approach offers several advantages. First, and perhaps most importantly, it maximizes the chance that beneficial tools actually reach patients. Broad restrictions don't just block bad AI; they block good AI too. If a state were to bar clinicians from using AI to support treatment decisions, that bar wouldn't distinguish between a poorly validated chatbot and a rigorously tested clinical decision support tool. Utah's sandbox creates a path for companies to demonstrate what their tools can do under real conditions, meaning promising tools get a chance to prove themselves rather than getting swept up in categorical prohibitions written before anyone has seen them work.
Second, Utah's sandbox helps the state regulate more intelligently over time because it gives regulators a chance to learn before creating new rules. Experience makes it easier to write regulations that target actual failure points rather than imagined ones. It also helps regulators identify where within a workflow AI introduces genuine clinical risk and where it doesn't, rather than treating an entire application area like prescription renewals as wholesale safe or dangerous. In the Doctronic case, that means the state can measure refill timeliness, patient access, safety outcomes, workflow effects, and costs to determine exactly where the tool improves care and where safeguards might be needed.
Third, rather than avoiding liability questions by banning the technology outright, the sandbox addresses them directly by defining responsibility in advance. This allows clinicians to participate without risking their licenses while ensuring patients remain protected if something goes wrong. Outside of a supervised pilot, a pharmacist who relies on an AI-generated refill authorization could risk violating scope-of-practice rules, because most pharmacy laws assume that a physician personally approves every prescription renewal.
That ambiguity discourages clinicians from using new tools even when they appear safe. The regulatory mitigation framework is a formal agreement between the Office of Artificial Intelligence Policy, the technology provider, and state regulators such as the Division of Professional Licensing. Under this arrangement, the state grants a safe harbor, committing not to pursue enforcement actions against pharmacists or physicians who rely on AI authorizations within the pilot’s approved parameters. To close the accountability loop, companies such as Doctronic must carry malpractice insurance that explicitly covers the AI’s clinical outputs.
Finally, running a sandbox stress-tests the regulatory framework itself. The Doctronic pilot does not just reveal how well the AI performs; it highlights where existing prescription renewal processes are slow, fragmented, or unnecessarily burdensome for providers and patients. Testing an alternative workflow under supervision allows the state to see which steps meaningfully protect patient safety and which simply add delay. That insight is valuable not just for governing AI, but for improving healthcare processes more broadly and identifying where regulation can better support efficient, high-quality care.
Utah's sandbox shows that responsible AI governance is not about prohibiting new tools but about creating a process to evaluate them. States that build systems for supervised experimentation will be better positioned to protect patients while improving care. Those that rely on restrictions alone will struggle to do either.
Image credits: Jeremy Thompson/Flickr
