Until recently, it seemed as if every proposal that crossed the desk of Mastercard Chief Technology Officer Ed McLaughlin included the word “blockchain.” “Wouldn’t a database work better?” he would ask, and the response would often be “Yes, but this is on the blockchain.”
These days, it’s artificial intelligence. And while Mastercard has been using AI to fight fraud on its network for years, recent advances in generative AI, which draws on enormous amounts of data to create all kinds of new content, are opening up exciting opportunities. The company is using generative AI to create synthetic fraud transaction data that helps evaluate weaknesses in a financial institution’s systems, and to spot red flags in the large datasets used for anti-money-laundering work. Mastercard also uses gen AI to help e-commerce retailers personalize user experiences.
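To make the synthetic-data idea concrete, here is a deliberately simplified sketch of what labeled synthetic transaction data can look like. Mastercard’s actual generators are trained models and are not public; every field name, merchant category and rate below is invented for illustration.

```python
import random
from dataclasses import dataclass

# Toy sketch only: fields, categories and fraud rates are invented.
# Real synthetic-data generators are trained generative models,
# not hand-written samplers like this one.

@dataclass
class Transaction:
    amount: float
    merchant_category: str
    hour: int          # hour of day, 0-23
    is_fraud: bool     # label used to stress-test detection systems

CATEGORIES = ["grocery", "fuel", "electronics", "travel", "restaurant"]

def sample_transaction(fraud_rate: float = 0.02) -> Transaction:
    """Sample one synthetic transaction, occasionally injecting a fraud pattern."""
    if random.random() < fraud_rate:
        # Illustrative fraud pattern: a large purchase at an odd hour.
        return Transaction(
            amount=round(random.uniform(800, 5000), 2),
            merchant_category=random.choice(["electronics", "travel"]),
            hour=random.choice([2, 3, 4]),
            is_fraud=True,
        )
    return Transaction(
        amount=round(random.lognormvariate(3.5, 0.8), 2),
        merchant_category=random.choice(CATEGORIES),
        hour=random.randint(6, 23),
        is_fraud=False,
    )

if __name__ == "__main__":
    dataset = [sample_transaction() for _ in range(10_000)]
    flagged = sum(t.is_fraud for t in dataset)
    print(f"Generated {len(dataset)} transactions, {flagged} labeled fraudulent")
```

A bank could replay a dataset like this against its detection rules to see which injected fraud patterns slip through, without ever exposing real cardholder records.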
But using this technology doesn’t come without risks — among them, using AI as a sledgehammer, or, as McLaughlin puts it, “pounding screws in with very expensive socket wrenches.” Companies, he says, should be asking themselves, “What are the hard problems you’ve never been able to solve? Where can AI actually add value? And how can you do that while managing potential harms?”
Businesses of all sizes are grappling with these questions. A recent VentureBeat survey of global executives in data, AI, security and marketing found that more than half of organizations are experimenting with generative AI on a small scale, but fewer than 20% are already implementing it — and nearly one in 10 say they have “no idea” how to engage with it.
“You can think small with AI and do small things, or you can think big and truly transform your business, your industry or the world,” says Rohit Chauhan, executive vice president of AI for Cyber & Intelligence. “We want to think big, but in both cases, the application of AI needs to be done in a responsible and safe way so it delivers greater good for the world. The biggest risk of AI is not using it.”
We spoke to Mastercard leaders about how they are minimizing risks, exploring opportunities and making the right investments when it comes to generative AI.
What are the risks businesses should consider and mitigate while pursuing new opportunities?
Risks need to be addressed head-on when enterprises are considering whether to adopt generative AI technology. Those risks include inherent bias in training datasets, insufficient privacy protections for people’s data after it’s fed into AI models, and “hallucinations,” in which AI confidently generates false or fabricated information.
Strong data responsibility principles and practices should already be in place before taking the leap into generative AI, says JoAnn Stonier, who led Mastercard’s data program for more than five years and was recently appointed a Mastercard Fellow specializing in responsible AI and data. Last year, Mastercard updated its own data responsibility principles to highlight inclusion so it could ensure that data practices, analytics and outputs are comprehensive and equitable. The company’s commitment to “Privacy by Design” also embeds strong privacy and security protections into AI models, adds Caroline Louveaux, the company’s chief privacy and data protection officer.
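The article does not spell out how those protections are engineered, but one common Privacy by Design guardrail is masking personal identifiers before any data reaches a model. The sketch below assumes invented field patterns and masking rules; it is not Mastercard’s implementation.

```python
import re

# Illustrative guardrail only: the patterns and masking rules below
# are assumptions for this sketch, not actual privacy controls.

PAN_RE = re.compile(r"\b\d{13,19}\b")            # card numbers (primary account numbers)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # email addresses

def mask_pan(match: re.Match) -> str:
    """Keep only the last four digits of a card number."""
    digits = match.group()
    return "*" * (len(digits) - 4) + digits[-4:]

def redact(text: str) -> str:
    """Scrub personal identifiers before text is sent to an AI model."""
    text = PAN_RE.sub(mask_pan, text)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return text

print(redact("Cardholder jane@example.com disputed 4111111111111111 charge"))
# -> Cardholder [EMAIL REDACTED] disputed ************1111 charge
```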
“We’ve built on our standards and principles within the company walls for the responsible use of generative AI and data,” Stonier says. “This includes do’s and don’ts for employees as well as guardrails on how to learn and test the new technology without compromising sensitive or confidential information. We’re on the right side of history.”
That guidance, which advises employees, for example, not to accept the first results and to run queries multiple times in multiple ways, also helped shape the Aspen Institute’s U.S. Cybersecurity Group’s recommendations for other companies building their own generative AI road maps. “These types of collaborative efforts to build and scale best practices are necessary to encourage responsible innovation with generative AI,” says Andrew Reiskind, Mastercard’s chief data officer.
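As a rough illustration of the “run queries multiple times” advice, the sketch below asks a model the same question several ways and only trusts the answer when a clear majority agrees. The query_model callable is a hypothetical stand-in for whatever LLM client an organization uses.

```python
from collections import Counter
from typing import Callable

# Sketch of the "don't accept the first result" guidance: ask the same
# question several ways and check whether the model's answers agree.
# query_model is a hypothetical stand-in, not a real library API.

def consistent_answer(
    query_model: Callable[[str], str],
    phrasings: list[str],
    min_agreement: float = 0.6,
) -> str | None:
    """Return the majority answer across rephrasings, or None if answers diverge."""
    answers = [query_model(p).strip().lower() for p in phrasings]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return best
    return None  # answers disagree: escalate to a human before trusting any of them

phrasings = [
    "What year was Mastercard founded?",
    "In which year was Mastercard established?",
    "Mastercard was founded in what year?",
]
answer = consistent_answer(lambda p: "1966", phrasings)  # stub model for the demo
print(answer or "Inconsistent results -- escalate for review")
```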
What kind of internal governance can be put in place to make sure AI is implemented in the right way?
There is no need to start from scratch. Instead, companies should build on existing policies, processes and tools, working across the enterprise to adapt them to AI.
Taking an interdisciplinary approach is crucial. Data scientists, product developers, software engineers and system architects know the “how,” but human resources professionals, policy experts, ethicists and lawyers, among others, can also provide the “why” — or the “should we?”
To that end, Mastercard established the AI Governance Council five years ago to oversee the company’s AI activities and ensure they fit with its values and data responsibility principles, Louveaux says. “We sometimes seek advice from independent experts or customers, because hearing how others are viewing our AI innovations is helpful to shine a light on what may be blind spots. This goes beyond compliance — it’s about earning and maintaining trust in how we handle data and the technology.”