Who Should Govern Surgical AI? Not the FDA—and Not Surgeons Either

Jeffrey A. Singer

AI in Health Care: A Policy Framework for Innovation, Liability, and Patient Autonomy—Part 7

A recent commentary in JAMA Surgery contends that surgeons must lead the governance of artificial intelligence in the operating room. The authors highlight rapid advances in surgical AI—including systems capable of performing multistep procedures—and warn that the “rulebook” is being written without sufficient surgical input.

For example, the authors note that in 2025, researchers at Johns Hopkins University described a stepwise autonomous system operating on the da Vinci Surgical System that successfully clipped and divided the cystic duct and artery during ex vivo porcine gallbladder removals. This work builds on earlier studies showing that autonomous systems can perform intestinal anastomosis (surgical connection) with more consistent results than expert surgeons, as well as on AI models that learn complex surgical tasks through imitation and hierarchical planning.

The authors raise legitimate concerns. Current regulatory frameworks, including those from the Food and Drug Administration—through its Digital Health Center of Excellence and AI-enabled medical device guidance—and Europe’s Artificial Intelligence Act, remain largely abstract and do not clearly define what “human oversight” should mean in an operating room, where decisions must be made in seconds. The authors worry that, without surgical leadership, governance will default to engineers, hospital administrators, and regulators—potentially leading to misaligned liability, inconsistent training standards, erosion of surgical skill, and diminished professional autonomy. Their solution is straightforward: surgeons must step in and help write the rules.

Real Problem, but the Wrong Solution

That diagnosis is partly right. The prescription misses the mark.

The problem is not simply that the wrong people are writing the rules. It is that we continue to assume someone must write them centrally. The authors warn about regulatory capture, and they are correct to do so. Well-organized interests with the resources to maintain a constant presence in regulatory processes often shape outcomes in their favor. But shifting authority from regulators to surgeons risks replacing one set of insiders with another. The deeper question is not whether surgeons or regulators should control surgical AI, but whether centralized gatekeeping is the right model for governing a fast-moving, complex technology in the first place.

The Case Against Centralized Control

Experience suggests it is not. Centralized control—whether by government agencies or professional bodies—tends to slow innovation, entrench incumbents, and diffuse responsibility. It also creates the illusion of safety while often failing to answer the most important question: Who is accountable when things go wrong? That question becomes even more pressing as surgical systems gain greater autonomy.

Start With the Patient

The more appropriate starting point is not professional authority but patient autonomy. The JAMA Surgery commentary implicitly treats surgeons as the primary stakeholders in this debate. Yet patients bear the risks and experience the outcomes. They should be free to accept or decline AI-assisted care, understand its potential benefits and limitations, and choose among competing approaches. Artificial intelligence may improve precision, expand access to expertise, and reduce variability in performance. Those benefits will matter only if patients are allowed to weigh them against the risks in light of their own values and preferences.

Liability, Transparency, and Choice

If we move away from centralized gatekeeping, the alternative is not the absence of governance but a different framework—one that focuses on outcomes rather than preemptive control. The most urgent need is clear liability rules. One of the authors’ strongest concerns is that responsibility in autonomous or semi-autonomous procedures remains undefined. That concern is well-founded. But the solution is not more prospective oversight—it is clarity about who is responsible for what. Manufacturers should bear responsibility for defects in the systems they design, train, and update. Clinicians should be responsible for the decisions they actually control. Liability should follow control, not tradition. Without that clarity, courts are likely to default to holding surgeons responsible for outcomes they did not cause, thereby recreating the very misalignment the authors seek to avoid.

Transparency is equally essential. AI systems should be subject to strict rules against misrepresentation, with clear disclosure of their capabilities, limitations, and uncertainty. Outcomes and failure modes should be openly reported. This enables patients and clinicians to make informed decisions without requiring a central authority to grant or deny permission. In other areas of life, we rely on information and accountability rather than prohibition. Medicine should not be an exception.

There is also a role for evaluation and certification, but not as a monopoly. Independent organizations, including specialty societies, private evaluators, and other entities, can assess and compare technologies. Their role should be to inform, not to control. This preserves the value of expertise without turning it into a barrier to access.

None of this means surgeons should be excluded. Their clinical insight is indispensable, especially for defining how these systems function in real-world settings. But input is not the same as authority. A framework in which surgeons “lead governance” risks introducing a different form of capture—one driven by professional incentives rather than commercial ones. Standards could evolve in ways that favor incumbents, slow the adoption of new approaches, or limit patient choice.

Health policy offers many examples of how well-intentioned professional oversight can drift in that direction. Scope-of-practice laws have long been used to limit competition from nurse practitioners and other clinicians, often with little evidence that patient outcomes improve. Certificate-of-need laws, originally justified as a way to control costs, have instead allowed incumbent hospitals to block new entrants and preserve market share, reducing access and keeping prices high.

The choice, then, is not between regulators and surgeons as competing gatekeepers. It is between centralized control in any form and a system grounded in clear liability, meaningful transparency, and patient choice. The authors are right that the window for shaping these systems is still open. But if we respond by reinforcing gatekeeping—whether governmental or professional—we risk constraining the very innovation that could improve care.

Artificial intelligence will change surgery. The question is whether it will do so in a way that empowers patients or simply rearranges who holds authority over them. If we get the framework right, we can encourage innovation while safeguarding safety and preserving clinical judgment. If we get it wrong, we may find that we have replaced one set of constraints with another.

The goal should not be to decide who controls surgical AI. It should be to ensure that no single group controls it.

To read other parts of this blog series, go here.