A Personal Reflection on AI Risk Management
What do a teddy bear and a toy robot have to do with AI risk?
Before I returned to MAS in 2023, I hated the notion of AI governance. Fresh out of PhD studies but a jaded middle-aged man, I thought some of the conversations in the space were rather naive. Sentient AI. Existential risks. Ethical considerations for AI that we don’t even apply to humans.
To the me back then, AI governance was like a teddy bear, cute but all fluff. You could hug it and everyone would nod along with pride. Then karma struck. I was asked to lead AI risk supervision at MAS. Suddenly my job description required me to tackle this teddy bear.
Governance and risk management were not new to me. I had been doing both for more than a decade at MAS, even before my PhD. But governance and risk management have specific meanings in finance.
I joined MAS in 2007, right before the Great Financial Crisis. And I saw firsthand how governance and risk management were grounded in real failures and risks. No hypotheticals. Governance and risk management in finance are more like toy robots than teddy bears. Sharp edges, nothing cuddly. But they move.
After a while, I realized that I was biased. Not all of AI governance was a teddy bear. I discovered the NIST AI Risk Management Framework and ISO 42001. I also found existing risk management frameworks, such as technology and model risk management frameworks, that had started to address the new risks from the increasing use of AI (see my simple reading list on AI governance and risk management).
So robots did exist. Different shapes and shades. But I thought that there were too many.
Reading frameworks, comparing documents, and talking to practitioners who had actually implemented these things over the past two years, I wondered if there was a simpler way to look at AI risk management. Not just for organizations, but also at a personal level. Which means it cannot be a 100-page playbook, but something easy to remember.
So instead, just three questions. I’m not sure they’re the right three, but they’re the ones I seem to keep returning to.
What’s at risk? You can’t manage what you haven’t found or understood. This means discovering where AI exists and profiling how risky each system actually is.
How do we manage it? Once you know what’s at risk, you need controls relevant to that risk, and ways to check whether those controls are working.
Who’s accountable? Controls without owners become theatre. Someone has to own the risk, and the organization needs the capability to sustain it.
These aren’t original questions. They’re the same questions risk management in finance has been asking for decades. The 2008 crisis didn’t teach new principles. Rather, it reminded us (really really loudly) what happens when we forget the old ones.
Some quick reflections on each of these questions below.
What’s at risk?
I’m pretty sure the 2008 financial crisis didn’t create risk. It just revealed the house of cards that had been built.
Banks discovered exposures they didn’t know they had. Off-balance-sheet vehicles that suddenly appeared. Opaque instruments nobody understood. The problem wasn’t that these were risky. Rather, nobody knew they were there until the reckoning.
Same with AI. You can have the most elegant governance policies. But if you can’t find where AI is and figure out which ones actually matter, everything else is just performative.
Think about your phone. You’ve used hundreds of apps over the years. Most forgotten. Some daily. A few have access to your photos, location, bank account. Do you treat them all the same? Of course not. The banking app gets the biometric lock. Candy Crush doesn’t.
Same instinct. Identify where AI exists. Profile which ones matter. Not everything needs the same attention, but you have to know what you have before deciding.
And once you know what you have and what matters, record it. A good inventory isn’t just for risk management. It’s memory. Ever rediscovered a really useful app on your phone? Same with AI. The AI tool one team loves might solve another team’s problem. Without the inventory, you reinvent wheels. With it, you see where else a tool can be put to work.
The beauty of doing this well? It helps you scale, not just manage risks.
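If it helps to make the instinct concrete, here is a minimal sketch of what an inventory record might look like, assuming a hypothetical schema of my own: the field names and the crude tiering rules are illustrative, not anything a framework prescribes.

```python
# A minimal, hypothetical AI inventory sketch. The fields and the
# tiering rules are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # the team answerable for it
    use_case: str
    customer_facing: bool      # does it shape decisions about customers?
    uses_sensitive_data: bool  # e.g. financial or personal data

def risk_tier(system: AISystemRecord) -> str:
    """Crude profiling. The point is triage, not precision."""
    if system.customer_facing and system.uses_sensitive_data:
        return "high"
    if system.customer_facing or system.uses_sensitive_data:
        return "medium"
    return "low"

inventory = [
    AISystemRecord("credit-scoring-model", "retail-credit", "loan decisions", True, True),
    AISystemRecord("meeting-summarizer", "ops", "internal notes", False, False),
]

# Know what you have, then decide where the attention goes.
for system in inventory:
    print(f"{system.name}: {risk_tier(system)} risk, owned by {system.owner}")
```

The banking app gets the biometric lock; Candy Crush doesn’t. The credit model comes out high, the summarizer low, and the record itself doubles as the memory that lets another team find the tool later.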
How do we manage it?
After 2008, we went into overdrive. More rules. More controls. I hated being involved in international discussions on some of these reforms. New rules to compute capital for market risk, counterparty credit risk. Why the hate? Before 2008, it was common to hear of actual rocket scientists being hired into quantitative finance. After 2008, I thought I needed to be a rocket scientist just to make sense of the new rules.
This might remind you of the jargon around AI controls. Guardrails. Red teaming. Alignment.
But what I learned is that it was not the complexity that mattered. In fact, complexity was what caused the Great Financial Crisis (ever heard of the single-factor copula model?). What mattered was whether the controls made sense for the risks involved.
I think two things matter. The right controls. And checking if they work, and continue to work.
Think about onboarding someone you’ll depend on. A new hire. You’d want to know: Can I trust what they’re telling me? Can I step in if things go wrong? Will they hold up under pressure?
Same questions for AI. Can I trust the data? Do I even need fairness checks and explanations? Can humans step in when needed? Will it hold up under stress?
You ask the right questions. Not every question. Match the scrutiny to the risk. A high-risk, customer-facing credit model gets the full onboarding. An internal summarization tool gets a lighter touch. And you don’t just check once. Things change. The new hire who was great in month one might be struggling by month six.
You want to know before shit happens.
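One hypothetical way to encode “match the scrutiny to the risk” is a simple tiered mapping like the sketch below. The specific checks and review intervals are placeholders I made up for illustration, not a recommended baseline.

```python
# A hypothetical mapping of risk tier to controls and review cadence.
# The checks and intervals are illustrative placeholders, not a baseline.
CONTROLS_BY_TIER = {
    "high": {
        "checks": ["data quality review", "fairness testing",
                   "human override path", "stress testing"],
        "review_every_days": 90,    # things change; check again soon
    },
    "medium": {
        "checks": ["data quality review", "human override path"],
        "review_every_days": 180,
    },
    "low": {
        "checks": ["basic output monitoring"],
        "review_every_days": 365,
    },
}

def controls_for(tier: str) -> list[str]:
    """The full onboarding for the credit model, a lighter touch elsewhere."""
    return CONTROLS_BY_TIER[tier]["checks"]

print(controls_for("high"))  # the full onboarding
print(controls_for("low"))   # the lighter touch
```

The table matters less than the habit: every system gets the questions its risk deserves, and a date by which someone has to ask them again.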
Who’s accountable?
This is where 2008 probably made the least sense. Bankers responsible for the crisis walked away. Everyone else paid. It gave us Occupy Wall Street. Some would say it’s still contributing to the political fractures we see today.
They could walk away because accountability wasn’t clear. Everyone’s responsibility meant no one was.
With AI, this could be way worse. In 2024, Air Canada’s chatbot invented a refund policy that didn’t exist. A customer relied on it. When he complained, the airline argued the chatbot was “a separate legal entity” responsible for its own actions. Nice try. Laughed out of court.
And you don’t need to go that far to see the pattern. Does this sound familiar? “We need a control for the AI system’s risks.” “No problem, let’s place a human in the loop.” Sounds good. Box checked. But which human? Doing what exactly? With what authority to override? And do they actually understand what they’re looking at?
Accountability requires clarity. Not just about ownership, but about the what and how.
Accountability without capability is also empty. Can that person actually ask the right questions? Spot when something’s off? Push back on the confident-sounding nonsense from vendors or developers or the AI itself?
How many in 2008 really understood what a single-factor copula model even was when buying CDOs priced by it?
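Back to AI: here is one hypothetical way to picture a human in the loop who actually counts. The record fields, the named reviewer, and the test are all illustrative assumptions of mine, not a standard.

```python
# A hypothetical accountability record. "A human in the loop" only counts
# if you can say which human, doing what, with what authority.
from dataclasses import dataclass

@dataclass
class AccountabilityRecord:
    system: str
    owner: str               # a named person, not "the team"
    role: str                # what exactly they do in the loop
    can_override: bool       # authority to halt or reverse the system
    trained_on_system: bool  # capability, not just a name on a chart

def is_real_control(record: AccountabilityRecord) -> bool:
    """Without a named owner, authority, and capability, it's theatre."""
    return bool(record.owner) and record.can_override and record.trained_on_system

reviewer = AccountabilityRecord(
    system="refund-chatbot",
    owner="Jane Tan",  # illustrative name
    role="reviews escalated refund decisions",
    can_override=True,
    trained_on_system=True,
)
print(is_real_control(reviewer))  # True: this box-check means something
```

If any of those fields is empty, the box was ticked but nobody is actually in the loop.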
FIN
I joined MAS in 2007 not knowing what a capital ratio was. I left in 2025 having written the AI risk management guidelines for the financial sector.
I wrote this to make sense of that arc before it drifts away. Eighteen years. Different domains: capital rules, model audits, investment risk, AI. Different jargon. But somehow the same questions make sense.
Maybe that’s the through-line I couldn’t see while I was in it.
Definitely fewer teddy bears. More robots.


