The Mutual Group’s Shannon Woods on creating internal governance frameworks when regulation lags innovation — and why saying ‘Yes, and here’s how’ beats a reflexive ‘no’
As with many new technologies, the pace of artificial intelligence development continues to outstrip regulatory frameworks. That has left insurance compliance leaders in an uncomfortable position: how can they meet customers’ rising security expectations while staying ahead of competitive peers? As they work to strike that balance, the pressure to avoid exposing their organizations to undefined risks keeps mounting, making effective AI governance increasingly essential.
Shannon Woods, chief legal and compliance officer at The Mutual Group, has chosen to meet this challenge by building certainty internally wherever it does not exist externally, whether at the industry level or within state and federal oversight.
She’s created comprehensive internal governance structures that allow The Mutual Group to move forward with AI adoption while maintaining the oversight necessary to protect both the business and its policyholders.
Woods’ approach reflects a fundamental shift in how legal and compliance leaders see their role: moving from gatekeepers who slow innovation with a reflexive “no” to a more proactive stance. In that sense, she describes her function as a “strategic enabler.” As she told The Insurance Lead, it starts with understanding regulatory frameworks deeply enough to guide the business toward what’s possible rather than simply blocking what might be risky.
Importantly, those decisions remain grounded in human insurance expertise. At least for the foreseeable future, human intelligence gets the final say over machine intelligence.
At The Mutual Group, that philosophy translates into tracking every AI implementation across the organization, embedding contractual obligations for vendor notification when AI usage changes, and running quarterly governance committee meetings backed by annual tabletop exercises that stress-test the company’s response to potential AI failures.
The Insurance Lead spoke with Woods about how she’s built this governance framework, why she screens job candidates for black-and-white, yes-or-no thinking, and what it takes to support business innovation when the regulatory landscape remains frustratingly unclear.
The Insurance Lead: One of the biggest challenges with AI adoption in insurance is that technology is moving faster than regulation. How do you navigate that gap at The Mutual Group?
Shannon Woods: My approach has always been that I don’t want legal or compliance to slow the business down. Instead, we try to anticipate where regulation is likely headed and build internal guardrails early, so we can be proactive rather than reactive.
Practically, that starts with visibility. We maintain a comprehensive inventory of every AI tool used across the organization—whether it’s deployed internally or through a third-party vendor. From a vendor management perspective, no new vendor is implemented until we fully understand how and where they’re using AI.
We’ve also embedded contractual obligations requiring vendors to notify us whenever their AI usage changes. That means we always have an up-to-date view of how AI is being used on our behalf, not just at onboarding but on an ongoing basis.
That level of governance allows us to stay comfortable innovating, even while regulatory requirements continue to evolve. While external guidance may lag behind the technology, we’ve created clarity within our own organization. By establishing a framework that works for us, we’re able to explore AI’s benefits responsibly while managing risk in a thoughtful, disciplined way.
Your role combines legal and compliance. Some might see that as being the person who holds up the stop sign when others want to move fast. Is that how you see it?
I see the role very differently. I don’t view legal and compliance as functions that slow the business down—I see them as enablers. My responsibility is to support the business in moving forward, not to act as a stop sign.
To do that effectively, I need a strong command of the regulatory framework and the relevant constraints. But just as important is partnering with the business to say, “Yes, you can do this—and here’s how we can do it safely and responsibly.” My default is never to say no unless it’s truly necessary. There are moments when a hard stop is required, but those are the exception, not the rule.
More often, the answer is, “Yes, with some guardrails—and here are a few ways we can approach it.” That mindset leads to better solutions and stronger outcomes for the organization.
It’s also how I think about building my team. When I interview candidates, I ask them to walk me through how they would communicate complex or sensitive advice to the business. If the approach is purely black-and-white, it’s usually not the right fit.
When you build leadership teams—across legal, compliance, and the broader executive group—that see their role as enabling the business and driving it forward, you tend to get better, more balanced decisions across the organization. That’s the distinguishing factor: not just my role, but every executive-level role thinking, “My job is to support the business and drive it forward.”
You’ve established a cross-functional AI governance committee. How does that actually work in practice? How do you keep it from becoming just another meeting about meetings?
When we launched our AI program, we built governance into it from the start. That program has three core components: a formal AI policy, enhanced vendor management requirements, and a cross-functional AI governance committee. The committee isn’t separate from the work—it’s how the work gets done.
The committee has three primary responsibilities. First, it maintains continuous visibility into all AI use cases and risk levels across The Mutual Group, both internally and through third-party vendors. Second, it oversees ongoing monitoring and response planning for AI-related risks, including security threats, hallucinations, bias, and other disparate or unintended outcomes. This isn’t a one-time review; it’s an ongoing assessment designed to surface issues early.
Third, the committee serves as the formal mechanism for keeping the Board of Directors informed and, when appropriate, engaged. When board involvement is needed—whether for awareness, decision-making, or escalation—the governance structure is already in place rather than being created in the middle of a crisis.
This committee wasn’t modeled after an external template. It was created because it felt necessary for our organization, and in practice it’s proven to be a practical, disciplined tool—not just another meeting, but a structure that supports responsible innovation.
You also run annual “tabletop exercises.” What does that look like?
Once a year, we conduct a cross-functional tabletop exercise designed to stress-test our AI governance. We walk through realistic scenarios where something goes wrong—such as a third-party vendor experiencing an AI-related security issue, or an AI model producing hallucinations that could impact policyholders.
The purpose is to identify who needs to be involved and what actions need to be taken to mitigate the risk created by AI. By playing these scenarios out in real time, we gain insight into potential gaps, whether in processes, communication, or resources, and can address those proactively.
These exercises ensure that the right people know when they need to step in, how they need to respond, and how information should flow—whether that’s engaging regulators, educating the board on the actual risk exposure, or addressing external concerns that may affect customers and policyholders.
How do you determine who participates in the committee?
We established the committee last fall—initiated in the third quarter and finalized in the fourth—with a defined core membership that attends every meeting. That core includes the CIO, who brings deep insight into systems and vendor management, along with myself, the CEO, and the COO. These are the leaders ultimately responsible for understanding AI usage and making decisions about how it’s deployed across the organization.
Beyond that standing group, participation is expanded based on the agenda. When we’re evaluating a specific use case or a more advanced application of AI—particularly as tools become more integrated into decision-making processes—we will invite the leaders whose business areas would be directly impacted.
That structure gives us both consistency and flexibility. It ensures the right stakeholders share a common understanding of how the technology works, how it’s being used, and how we’re monitoring it to identify and address any potential risks or unintended outcomes.
What triggered the decision to formalize this governance structure when you did?
The decision was driven by the realization that AI presented a real opportunity for the organization, even as the regulatory landscape remained uncertain. Rather than waiting for perfect clarity—which may never fully arrive—we chose to proactively establish our own governance framework.
Vendor management quickly emerged as a critical component. It wasn’t enough to understand how AI was being used internally; we also needed visibility into how our third-party partners were using AI on our behalf. Embedding contractual requirements for vendors to notify us of changes in their AI usage ensures that visibility is ongoing, not one-time.
Ultimately, the goal was to move from reactive compliance to proactive governance. By the time regulatory guidance becomes more defined, we want to be able to demonstrate that we’ve approached AI adoption thoughtfully, responsibly, and with the right controls in place from the outset.
Looking at your approach more broadly, how would you advise other compliance leaders who are trying to balance innovation with risk management?
I’d encourage compliance leaders to rethink their role—from preventing problems to enabling progress safely. That shift requires a deep understanding of both the regulatory environment and the business itself.
Compliance can’t function as a simple gatekeeper. To add real value, you have to understand the constraints well enough to say, “Yes, and here’s how we can do this responsibly.” That means investing the time to understand what the business is trying to achieve and why, then working collaboratively to design solutions that are both permissible and prudent.
It’s also important to accept that complete regulatory clarity isn’t coming anytime soon. Waiting for fully formed frameworks—particularly around AI—puts organizations at a disadvantage. Instead, build internal governance structures now, with the expectation that they will evolve as external guidance matures.
That approach allows compliance to actively support innovation while still managing risk in a disciplined, thoughtful way.