AI Governance is not doing well!
In its February 2, 2026 article "Why AI adoption keeps outrunning governance — and what to do about it" (https://www.computerworld.com/article/4122948/responsible-ai-gap-why-ai-adoption-keeps-outrunning-governance-and-what-to-do-about-it.html), ComputerWorld.com reported that "Across industries, CIOs are rolling out generative AI through SaaS platforms, embedded copilots, and third-party tools at a speed that traditional governance frameworks were never designed to handle. AI now influences customer interactions, hiring decisions, financial analysis, software development, and knowledge work — often without being formally deployed in the classical sense." The article included these comments under "Why legacy data governance struggles under genAI":
Even where governance exists, it’s often built on assumptions that no longer hold. Fawad Butt, CEO of agentic healthcare platform maker Penguin Ai and former chief data officer at UnitedHealth Group and Kaiser Permanente, argues that traditional data governance models are structurally unfit for generative AI.
“Classic governance was built for systems of record and known analytics pipelines,” he said. “That world is gone. Now you have systems creating systems — new data, new outputs, and much is done on the fly.” In that environment, point-in-time audits create false confidence. Output-focused controls miss where the real risk lives.
“No breach is required for harm to occur — secure systems can still hallucinate, discriminate, or drift,” Butt said, emphasizing that inputs, not outputs, are now the most neglected risk surface. This includes prompts, retrieval sources, context, and any tools AI agents can dynamically access.
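That list of input surfaces is easy to make concrete. Here is a minimal Python sketch, with hypothetical names (AgentRequest, audit_inputs), of what auditing the inputs rather than the outputs might look like: the prompt, retrieval sources, context, and requested tools are captured before the model call instead of being inspected after it.

import json
import time
from dataclasses import dataclass, field, asdict
from typing import Callable

@dataclass
class AgentRequest:
    # Everything that flows *into* the model for a single call:
    # the prompt, retrieval sources, context, and any tools the agent may use.
    prompt: str
    retrieval_sources: list[str] = field(default_factory=list)
    context: dict = field(default_factory=dict)
    requested_tools: list[str] = field(default_factory=list)

def audit_inputs(request: AgentRequest, model_call: Callable[[AgentRequest], str]) -> str:
    # Record the full input surface before the model ever runs, so the
    # record exists even if the output later hallucinates or drifts.
    record = {"timestamp": time.time(), "inputs": asdict(request)}
    # A real system would write to an append-only audit store; printing
    # keeps this sketch self-contained.
    print(json.dumps(record, indent=2))
    return model_call(request)

# Usage: the model call itself is stubbed out for the example.
req = AgentRequest(
    prompt="Summarize this claim history",
    retrieval_sources=["claims_db", "policy_wiki"],
    requested_tools=["send_email"],
)
audit_inputs(req, lambda r: "stubbed model response")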
What to do: Before writing policy, establish guardrails. Define no-go use cases. Constrain high-risk inputs. Limit tool access for agents. And observe how systems behave in practice. Policy should come after experimentation, not before. Otherwise, organizations hard-code assumptions that are already wrong.
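As a rough illustration of "guardrails before policy," here is a small Python sketch in which every name and threshold is an assumption made for the example, not a recommendation from the article: a set of no-go use cases, a prompt-size constraint, and a tool allow-list, all checked before any model call goes out.

# Hypothetical guardrail configuration; the categories, tool names, and
# limits below are illustrative only.
NO_GO_USE_CASES = {"hiring_decision", "medical_diagnosis", "credit_approval"}
ALLOWED_TOOLS = {"search_docs", "summarize"}   # agents get nothing else
MAX_PROMPT_CHARS = 4_000                       # crude high-risk input constraint

class GuardrailViolation(Exception):
    # Raised when a request trips a guardrail, before any model is invoked.
    pass

def check_guardrails(use_case: str, prompt: str, requested_tools: list[str]) -> None:
    if use_case in NO_GO_USE_CASES:
        raise GuardrailViolation(f"use case '{use_case}' is a defined no-go")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise GuardrailViolation("prompt exceeds the high-risk input limit")
    blocked = set(requested_tools) - ALLOWED_TOOLS
    if blocked:
        raise GuardrailViolation(f"tools not on the allow-list: {sorted(blocked)}")

# Usage: this request is refused because the agent asked for an unlisted tool.
try:
    check_guardrails("document_summary", "Summarize the attached memo.", ["send_email"])
except GuardrailViolation as err:
    print("blocked:", err)

Constraints expressed this way can be observed and tuned while the organization experiments, and only then hardened into formal policy.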
What do you think?