By Dr. Natalie Marks, Chief Veterinary Officer, OpenVet
Last week, the FDA released updated guidance on Clinical Decision Support Software. It's a 27-page document written for human medicine, covering which software qualifies as a regulated medical device and which doesn't. At first glance, it has nothing to do with veterinarians or animal health.
But I've spent the last week reading it carefully, and I think every veterinary AI company, every corporate practice group evaluating AI tools, and every veterinarian using AI in their workflow should read it as well.
Not because it applies to us legally. It doesn't. But because it answers a question our industry has been avoiding: what does trustworthy clinical AI look like?
The Regulatory Void We're All Operating In
Most people in veterinary medicine already know, but rarely say aloud, that there is no regulatory body that evaluates or approves AI tools for veterinary use. The FDA regulates veterinary drugs through its Center for Veterinary Medicine, but when it comes to AI software that helps veterinarians make clinical decisions? Nothing. No approval process. No validation requirements. No minimum standards for transparency or accuracy.
This means any company can build a veterinary AI tool, market it to clinicians, and put it into clinical use without anyone verifying whether it works, whether it's safe, or whether the veterinarian using it has any way to evaluate the recommendations it provides.
If that sounds like a problem, it's because it is.
In human medicine, the FDA has regulated clinical decision support software for years. The 21st Century Cures Act created a framework that distinguishes between software that's a medical device (and requires regulatory oversight) and software that supports clinical decision-making without replacing the clinician's judgment. The January 2026 guidance update further refines this framework. Although the specifics are about human healthcare, the underlying principles are universal.
What the FDA Is Actually Saying
The guidance outlines four criteria that clinical decision support software must meet to avoid being regulated as a medical device. Stripped of the legal language, they come down to this:
- The software doesn't analyze medical images or raw signals from diagnostic devices. It processes medical information, not raw data.
- It displays, analyzes, or presents medical information to a healthcare professional. It's an information tool, not a black box.
- It provides recommendations to the clinician on prevention, diagnosis, or treatment. It supports decisions; it doesn't make them.
- It enables the clinician to independently review the basis for the recommendations, so they don't have to rely primarily on the software's output.
That fourth criterion is where most AI products falter, in both human and veterinary medicine. The FDA says: if the clinician can't understand why the software is making its recommendations, can't see the evidence behind them, or can't evaluate whether they apply to their specific patient, then that software isn't decision support. It's a device, and it gets regulated as one.
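As a rough way to internalize the test (my own paraphrase, not language from the guidance), you can treat the four criteria as a checklist that has to pass in full. A minimal sketch:

```python
# Informal restatement of the four criteria as a checklist.
# This is a paraphrase for illustration, not regulatory language.
from dataclasses import dataclass

@dataclass
class CDSCriteria:
    no_raw_signal_analysis: bool           # doesn't analyze images or raw device signals
    presents_medical_info: bool            # displays/analyzes medical information for a professional
    recommends_not_decides: bool           # offers recommendations; the clinician still decides
    basis_independently_reviewable: bool   # clinician can review the basis for each recommendation

def likely_non_device_cds(c: CDSCriteria) -> bool:
    # All four must hold; the fourth is the one most AI products fail.
    return all([
        c.no_raw_signal_analysis,
        c.presents_medical_info,
        c.recommends_not_decides,
        c.basis_independently_reviewable,
    ])
```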
Why This Matters for Veterinary Medicine
I've talked to hundreds of veterinarians about AI over the past two years. The number one concern, by far, is trust. Not whether AI can be accurate. Most vets believe it can. The concern is whether they can verify it, understand the reasoning, and catch errors before they reach a patient.
This isn't paranoia; it's responsible care. A veterinarian who relies blindly on AI output will eventually pass its mistakes along to a patient. Even a tool that's right 95% of the time is wrong in one case out of twenty, and that remaining 5% can harm animals.
The FDA guidance formalizes this idea into a clear standard: good clinical AI demonstrates transparency. It details the data used, explains the development and validation of its algorithm, highlights the limitations of its training data, alerts when input data is missing or unusual, and provides enough context for the clinician to agree, disagree, or formulate a better question.
None of this requires government regulation. It simply requires a decision to build it.
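To make that concrete, here's a minimal sketch of what a transparent recommendation payload could look like. The field names are hypothetical, not drawn from the guidance or from any particular product; the point is that the clinician-facing output carries its own evidence and caveats:

```python
# Hypothetical structure for a clinician-facing recommendation; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str            # e.g. a textbook, formulary, or peer-reviewed paper
    section: str           # where in that source the claim comes from
    year: int

@dataclass
class Recommendation:
    summary: str                       # the recommendation itself
    citations: list[Citation]          # every claim traceable to a source
    confidence: str                    # e.g. "high", "moderate", "low"
    known_limitations: list[str]       # gaps or conflicts in the underlying evidence
    missing_inputs: list[str] = field(default_factory=list)  # patient data the system didn't have

    def is_independently_reviewable(self) -> bool:
        # A clinician can only review the basis for the advice
        # if at least one verifiable source is attached.
        return len(self.citations) > 0
```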
The "Black Box" Problem Is Real
Veterinary professionals frequently name the black box problem as their top ethical concern with AI, and it's not just theoretical. Right now, veterinarians across the country are using general-purpose AI tools such as ChatGPT to answer clinical questions. These tools are fast, free, and often accurate. But they are also opaque: they don't cite their sources, their claims can't be verified, and they sometimes hallucinate references outright. There's no way to tell whether a dose recommendation comes from a reliable source like Plumb's or from an irrelevant Reddit thread that ended up in the training data. They also offer no confidence scores or uncertainty indicators, so a veterinarian has no way to judge whether the output can be trusted for their specific patient.
To be clear: this isn't a knock on those tools. They weren't built for veterinary clinical use, but veterinarians are using them clinically anyway because the profession is drowning in complexity, and nothing better has been available.
The million-dollar question is what "better" means. The FDA just provided a pretty clear answer.
What We Built at OpenVet, and Why
When we began developing OpenVet, we committed to ensuring that every recommendation the system provides is evidence-based and traceable to its source. That decision wasn't driven by regulatory requirements but by the realities of the profession: in a field where the wrong dose, or the right drug given to the wrong species, can be deadly, a veterinarian must be able to verify any suggestion before trusting it.
This decision shaped our entire architecture. Each clinical response cites textbooks, guidelines, and peer-reviewed sources. The system indicates confidence levels to distinguish facts drawn from authoritative sources from AI-generated reasoning. It notes when evidence conflicts or is incomplete. For queries about drugs with significant species differences, such as acetaminophen (tolerated by dogs at appropriate doses but potentially lethal to cats), the system requires species verification before providing an answer.
We also built a tiered source hierarchy in which published textbooks and respected journals carry more weight than general web references. We built an architecture that shows the veterinarian not only what the system recommends but also why, from which sources, with what level of confidence, and where the gaps are. And we are developing uncertainty flags for emerging research areas with limited evidence.
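For illustration only (this is a simplified sketch, not our production code, and the names, weights, and hazard table are invented for this post), the species check for a drug like acetaminophen reduces to a hard guard: refuse to answer until the species is known, and treat known species-specific hazards as stops rather than footnotes:

```python
# Illustrative sketch of a species-verification guard and tiered source weighting.
# Names, weights, and the hazard table are hypothetical, not OpenVet internals.
from typing import Optional

SOURCE_WEIGHTS = {
    "textbook": 1.0,        # formularies and standard references
    "peer_reviewed": 0.9,
    "guideline": 0.9,
    "general_web": 0.3,     # general web references carry far less weight
}

# Drugs where the species changes the answer from "usable" to "dangerous".
SPECIES_CRITICAL_DRUGS = {
    "acetaminophen": {"cat": "contraindicated: potentially lethal"},
}

def answer_drug_query(drug: str, species: Optional[str]) -> str:
    drug = drug.lower()
    if drug in SPECIES_CRITICAL_DRUGS:
        if species is None:
            # Hard stop: never guess the species for a species-critical drug.
            return "Please confirm the patient's species before I answer this."
        hazard = SPECIES_CRITICAL_DRUGS[drug].get(species.lower())
        if hazard:
            return f"Warning for {species}: {drug} is {hazard}."
    # ...otherwise retrieve evidence, weight it by SOURCE_WEIGHTS, and cite every claim...
    return "Proceeding with a cited, source-weighted answer."
```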
We didn't build it this way because we read the FDA guidance; we did it because it's the right way to build clinical AI. But reading the guidance was reassuring, because the principles the FDA now requires in human medicine are the same ones we voluntarily adopted for veterinary medicine.
The Standard Is Coming Whether We Build It or Not
Veterinary AI is currently unregulated, but that won't last forever. The EU AI Act doesn't distinguish between human and veterinary applications. It classifies AI systems by risk level, and a veterinary diagnostic AI could easily fall into the high-risk category. The USDA is increasingly interested in AI for disease surveillance and food safety. The AVMA will eventually develop practice guidelines for the use of AI in clinical settings.
When that regulation arrives, it will almost certainly borrow from the FDA's existing framework. Why would regulators start from scratch when a detailed, tested framework already exists?
Companies that build to that standard now will be ready. Companies that don't will scramble to retrofit transparency, explainability, and validation into systems never designed for them.
Honestly, the regulatory debate is secondary. The core issue is straightforward: veterinarians should have access to AI tools of the same quality as those available to human doctors. They have the right to understand the basis of the recommendations they depend on. They deserve systems that honor their expertise and assist their judgment rather than attempt to replace it.
What Veterinarians Should Ask Their AI Vendors
If you're a veterinarian assessing AI tools for your practice or a practice manager thinking about AI integration, here are some questions I would ask, based on the principles outlined in the FDA guidance:
- Can I see the sources behind every recommendation? Not a vague "based on veterinary literature," but specific citations to named texts and studies that I can verify.
- Does the system tell me when it's uncertain? When the evidence is thin or conflicting, or when my patient doesn't match the population the algorithm was trained on?
- Do I understand what data the system is using as input? Can I see what it's pulling from the patient record, and what it's missing?
- Has the system been validated, and can I see the results? Not marketing claims, but actual validation data, with a methodology I can evaluate.
- Is the system designed for veterinary medicine specifically, or is it a general-purpose AI with a veterinary skin on top?
These aren't trick questions; they set the standard for reliable clinical AI, whether you're a human doctor in a hospital or a veterinarian in a mixed-practice clinic at 2 AM trying to determine the correct antibiotic for a septic foal.
The Opportunity
I mentioned earlier that the absence of veterinary AI regulation is a problem, and it is. But it's also an opportunity. The companies that prioritize trust now, by building transparent systems, publishing their validation frameworks, and welcoming scrutiny instead of avoiding it, are the ones that will set the industry standard.
At OpenVet, we believe veterinary medicine deserves a clinical intelligence system grounded in the same principles of transparency and evidence that the FDA now requires for human medicine. Not because we must, but because the profession and the patients it serves are worth building it right.
The FDA guidance runs to 27 pages of dense regulatory language, but the main point fits in one sentence: if the clinician using your AI can't see the evidence behind its recommendations, you haven't built decision support. You've built a black box.
Veterinary medicine has too many black boxes. It's time to build something better.
