The MAPTYCS co-founder and author of ‘Guardians of Uncertainty’ explains why tomorrow’s risk managers must be outgoing, collaborative, and culturally fluent storytellers. (Yes, storytellers.)
Ernest Legrand has spent his career at the intersection of business and technology. A former IBM senior executive who led global initiatives across banking and insurance, he co-founded Insurtech provider MAPTYCS in 2016.
MAPTYCS, which relies on geospatial and artificial intelligence systems, was created to address a fundamental gap, says Legrand: risk managers have a wealth of sophisticated data sources at their disposal, but they often lack the tools to make that information meaningful in moments that demand clarity and speed.
His recently published book, “Guardians of Uncertainty,” takes a decidedly different approach from business literature in general, let alone from books on risk management. Rather than offering frameworks from a theoretical distance, Legrand, who also serves as MAPTYCS chief product officer, interviewed 12 active global risk managers at major international enterprises — from Hyatt Hotels, Comcast NBCUniversal, Mars, Cisco, MacAndrews & Forbes, and Bechtel in the US to Veolia, Serra Verde, Sonepar, and Bouygues Construction in Europe, and International SOS in Asia — to understand how they navigate everything from climate risk to AI ethics in real time.
What emerges is a portrait of a profession in transformation.
In Legrand’s telling, the traditional risk manager was typically confined to insurance purchasing or compliance auditing. But as AI changes the nature of work, and client demands reflect a world of accelerating threats, from geopolitical and climate risks to cyber issues, risk managers are morphing into strategic advisors embedded in C-suite decision-making.
These leaders don’t just model risk, Legrand says. They serve as exemplars of trust across their organizations. Credibility is the foundation that lets this new breed of risk manager do the job and translate uncertainty into something navigable and less threatening. The chief ingredients of that credibility are personal attributes like cultural fluency, emotional intelligence, and the ability to craft compelling narratives.
Yes, the job still requires the ability to review spreadsheets and actuarial tables. But while AI and advanced analytics are powerful force multipliers, Legrand asserts that, for now, only humans can derive meaning and convey context in a way that is relevant to the situation at hand. Software can flag compliance issues, for example, but it cannot navigate ethical gray zones. It can recognize patterns, but when those patterns break, human intuition and storytelling remain essential to help organizations make sense of chaos.
The Insurance Lead spoke with Legrand about why “risk managers must be empathetic and outgoing” people, rather than clinical, office-bound statisticians. We also discussed how geospatial visualization transforms comprehension, and what the next decade holds for a key insurance industry career undergoing a fundamental identity transformation.
The Insurance Lead: Your book quotes the International Organization for Standardization (ISO) definition of risk as “the effect of uncertainty on objectives.” You also draw important distinctions between risk, uncertainty, threat, and danger. Can you walk us through this philosophy and how it shapes practical risk management?
Ernest Legrand: One of the first things I discovered is this notion of “uncertainties versus risk.” Risk involves known probabilities and potential outcomes — good or bad. That’s what risk is. Uncertainty, by contrast, is where all the probabilities are unknown.
What I found interesting is that the risk managers I spoke with don’t conflate the two. They have systems that can operate even when uncertainty dominates, even though they don’t know the probabilities. The systems are designed to address that in the long run.
The distinction between threat and danger is also essential. A threat is just a potential source of harm. Danger, however, implies more immediacy or proximity. You have to recognize this distinction to prioritize what you want to put in place and allocate resources wisely.
The other thing I noticed is the way risk managers see risk. It’s not something you simply have to avoid. Of course, they should minimize adverse outcomes. But many of them see risk as an opportunity to grow, a chance to strengthen their company’s resilience. They see it as a way to do better and try something new, something that’s ultimately more profitable for the company.
Your book emphasizes the entwined nature of risk and opportunity. How do you help organizations balance protection with innovation?
Most of the time, the job involves thinking about how to transform scenario planning into adaptability. They reframe quantitative models rather than eliminate them. And they don’t want to eliminate uncertainty.
They just want to understand it better and make it navigable. And they want to play out what happens from an adaptability standpoint for the company.
All of them told me they want to shift communication from fear-based cues — “be careful, alert” — toward something that tells a strategic story that emphasizes possibilities and future outlook.
“Storytelling” is often perceived as a softer skill, a “nice to have,” compared with focusing strictly on the data and the tools that turn it into actionable strategy. Do risk managers see genuine, practical value in being able to tell stories about the data they access?
Storytelling was something they all mentioned as helping them get their point across more than numbers alone. It’s how they tell the story. And it usually has to come from a strategic perspective.
Looking at a spreadsheet doesn’t automatically tell you anything. You need to determine what story the spreadsheet or data is telling. That’s what makes it real, makes it actionable, brings it to life. It’s about understanding and presenting data in context that anyone can grasp and interpret.
You interviewed risk managers dealing with everything from climate change to AI ethics. Are there universal principles that apply across these different areas?
It’s about analyzing which principles are domain-specific and which are universal. You ask: “What does this risk mean?” With climate change, it’s a long-term, systemic direction. For cyber, it’s about rapid, asymmetric threats. So you start to frame the conversation differently when approaching regulation or strategy.
But there’s also this notion of creating what we call “adaptive capacity.”
Traditionally, you forecast, then create something to align with that projection. Increasingly, risk managers are seeking greater agility. Rather than saying, “We think this is going to happen in this amount,” they say, “Let’s create a capacity that can adapt more flexibly.”
That means investing more in redundancy and in decision-making processes under uncertainty. Nobody is totally right or totally wrong in these scenarios. But you know why you made a particular decision and in what context. All of that is intended to boost risk managers’ agility.
Several executives you profile emphasize that “risk management is about people.” As AI reshapes every professional role, how do you reconcile this “human-centered” view with greater adoption of automation?
They all see — and they express it eloquently — that the human side is a strategic necessity. Again, AI can model risk using vast amounts of data. But only humans can model its meaning.
AI can do a lot to help you get there. But at the end of the day, AI cannot suggest a specific meaning that is useful for the situation you’re in. It can’t adequately and specifically consider the context. But who can model meaning? Who can make a judgment call? Humans.
That’s why you have this human-centered leadership that [VP of Risk Management at Hyatt Hotels] Jennifer Pack, [VP of Risk Management at Bouygues Construction] Zaïella Aïssaoui, and [Chief Risk Officer at International SOS] Franck Baron mentioned, though they all feel the same way. They need humans to complement the technology and models they have at their disposal. Without humans, many things will be incomplete and risky. For all of these risk managers, human-centered leadership is not a nostalgic ideal; it is a strategic necessity.
Most of them see AI as an opportunity—it comes with risk, of course—but more as a force multiplier than a threat. It helps them do things faster and handle much larger volumes of data. But machines today, and for the foreseeable future, cannot replicate moral judgment or empathy, which are absolutely critical for understanding context. Machines cannot replicate context.
The other thing is what I call cultural fluency. People have different relationships with life, risk, and uncertainty across countries. AI can recognize patterns. But when patterns break, you need human intuition and storytelling to help the organization make sense of the chaos.
It’s not that humans have to accommodate technology. Without humans, and the relationships they build with one another based on influence and trust, technology — specifically AI — will never be able to help the organization effectively.
How do you see the risk manager role evolving over the next five to 10 years?
Right now, we’re going through what I see as a fundamental shift in the identity of the risk manager role. For years, risk managers were domain-bound. They were focused on insurance purchasing, audit, or regulatory frameworks — making sure everything was compliant. But now, more and more, they’re involved in executive decision-making. That’s a subtle but key change.
The most seasoned risk managers are embedded in this type of executive decision-making. People ask them for their opinion on things that will shape the company’s strategy alongside the CFO, the CEO, and the CMO. So they’re shifting from technical specialists to strategic advisors.
All 12 risk managers I interviewed are already at this stage of being strategic advisors. In my opinion, they represent the evolution of the risk manager role. But they’re a small number who are already there. Many others are solely responsible for buying insurance or ensuring the company is GDPR-compliant.
The other shift is that they have to be less reactive and much more proactive. They need to be more involved in designing new models that anticipate and enable what can happen, using foresight tools, scenario modeling, and real-time analytics to take action before something happens. It will require translating uncertainty into narratives that decision-makers can understand. Storytelling will be as vital as statistical fluency.
The third point is moving from silos to integration. They have to work with more and more departments within the company. They cannot be confined to the CFO or one single department. They have to be collaborative and cross-functional because everything they touch — ESG [Environmental, Social, and Governance], cyber, climate risks, you name it — affects finance, operations, sales, everything, including the company’s reputation.
The last thing is shifting from control to culture. Risk managers will need to reshape the organization’s mindset and become stewards of the risk culture across the company.
That suggests risk managers need different skills than we might traditionally associate with the role.
Absolutely. If you think you can become a good risk manager in your office, on your own, with your computer, you will never succeed.
They’re all adamant about one thing. To be successful as a risk manager, you have to be outgoing, collaborative, and enjoy navigating across all departments of the organization to understand, as best you can, your company’s business.
If you are not outgoing and do not have these personal attributes, your chances of succeeding are very low. This is not just me speaking — all the people I interviewed for Guardians of Uncertainty said it. That’s part of the evolution of the risk manager role.
Your work at MAPTYCS focuses on geospatial intelligence. How does that inform your perspective on risk management?
The purpose of MAPTYCS is to fuse geospatial intelligence with real-time risk analytics. Working with MAPTYCS taught me that data alone is not enough. What matters is making data meaningful in moments that, more often than not, demand clarity and speed.
We help organizations visualize climate and property risk across portfolios. We integrate external threat intelligence from government agencies and third-party data providers, such as Munich Re, Precisely, and others, to enable rapid responses for our clients.
What I came to realize is that technology is only as powerful as the questions you ask. And visualization matters because it transforms comprehension. Risk managers realize that if you can show something visually — not just in Excel spreadsheets, but in a way that truly visualizes your actions — you substantially improve comprehension for your audience. For risk managers, that audience starts with senior management.
When risk is mapped and becomes tangible, when you visually show where concentrations of value are and how close they are to a tropical storm, risk managers can show the chain of reactions, the chain of consequences, the cascading vulnerabilities. In that regard, my work at MAPTYCS really helped me understand what risk truly means, particularly climate risk and AI, and how technology can help risk managers.
We see technology as a strategic amplifier. It is not about replacing the risk managers’ judgment but about amplifying their storytelling, what they plan to do, what they recommend, and what they are doing to ensure it is adequately understood across the organization.
You also describe risk modeling as becoming a “sandbox for innovation.” Can you elaborate on that?
Risk managers I speak to are emphasizing what they call “scenario-based innovation.” They want to innovate based on scenarios they play out. It’s not innovation for innovation’s sake or just because AI is interesting. They start from “what if” scenarios.
Some of them have even created “what if” labs where they can examine a specific type of risk and combine business models with stress tests and technology to see how this can help them overcome plausible disruptions. Plausible is the keyword. They don’t do this in a vacuum or in theory—they think it through with scenario-based approaches.
They take all the work they’re doing in risk scenarios and turn it into a sandbox for innovation. In the sandbox, they try out innovation not only from a technology standpoint but also from a way of thinking standpoint.
If readers take away just one big idea from “Guardians of Uncertainty,” what would you want it to be?
My perspective on how to approach risk and uncertainty has changed after working on this book. I realized it’s way beyond just the technical rigor of risk management. Emotional intelligence matters. Understanding different cultures matters. You don’t manage risk culture in India the same way you do in New York.
That’s why I keep coming back to the importance of cultural fluency in a global risk environment. This emotional intelligence, this understanding of culture, and the ability to tell a very clear story and narrative — it’s even more important than the numbers themselves. That’s the shift.