Driving responsible AI: How to set the rules of the road

October 30, 2023 | By Vicki Hyman

Until recently, it seemed as if every proposal that crossed the desk of Mastercard Chief Technology Officer Ed McLaughlin included the word “blockchain.” “Wouldn’t a database work better?” he would ask, and the response would often be “Yes, but this is on the blockchain.”

These days, it’s artificial intelligence. And while Mastercard has been using AI to fight fraud on its network for years, recent advances in generative AI, which mines enormous amounts of data to create all kinds of new content, are opening up exciting opportunities. The company is using generative AI to create synthetic fraud transaction data to evaluate weaknesses in a financial institution’s systems and to spot red flags in large datasets relevant to anti-money laundering. Mastercard also uses gen AI to help e-commerce retailers personalize user experiences.

But using this technology doesn’t come without risks — among them, using AI as a sledgehammer, or, as McLaughlin puts it, “pounding screws in with very expensive socket wrenches.” Companies, he says, should be asking themselves, “What are the hard problems you’ve never been able to solve? Where can AI actually add value? And how can you do that while managing potential harms?”

Businesses of all sizes are grappling with these questions. A recent VentureBeat survey of global executives in data, AI, security and marketing found that more than half of organizations are experimenting with generative AI on a small scale, but fewer than 20% are already implementing it — and nearly one in 10 say they have “no idea” how to engage with it.

“You can think small with AI and do small things, or you can think big and truly transform your business, your industry or the world,” says Rohit Chauhan, executive vice president for AI for Cyber & Intelligence. “We want to think big, but in both cases, the application of AI needs to be done in a responsible and safe way so it delivers greater good for the world. The biggest risk of AI is not using it.”

We spoke to Mastercard leaders about how they are minimizing risks, exploring opportunities and making the right investments when it comes to generative AI.

What are the risks businesses should consider and mitigate while pursuing new opportunities?

Risks need to be addressed head-on when enterprises are considering whether to adopt generative AI technology. Those risks include inherent bias in datasets, insufficient privacy protections for people’s data after it’s fed into AI models, and “hallucinations” — false or fabricated information generated by AI and presented as fact.

Strong data responsibility principles and practices should already be in place before taking the leap into generative AI, says JoAnn Stonier, who led Mastercard’s data program for more than five years and was recently appointed a Mastercard Fellow specializing in responsible AI and data. Last year, Mastercard updated its own data responsibility principles to highlight inclusion so it could ensure that data practices, analytics and outputs are comprehensive and equitable. The company’s commitment to “Privacy by Design” also embeds strong privacy and security protections into AI models, adds Caroline Louveaux, the company’s chief privacy and data responsibility officer.

“We’ve built on our standards and principles within the company walls for the responsible use of generative AI and data,” Stonier says. “This includes do’s and don’ts for employees as well as guardrails on how to learn and test the new technology without compromising sensitive or confidential information. We’re on the right side of history.”

That guidance — which, for example, advises employees not to accept the first results, and to run queries multiple times in multiple ways — helped inform the guidance of the Aspen Institute’s U.S. Cybersecurity Group for other companies as they build their own generative AI road maps. “These types of collaborative efforts to build and scale best practices are necessary to encourage responsible innovation with generative AI,” says Andrew Reiskind, Mastercard’s chief data officer.

What kind of internal governance can be put in place to make sure AI is implemented in the right way?

There is no need to start from scratch. Instead, companies should leverage existing policies, processes and tools, working across the enterprise to identify the right way to build on them.

Taking an interdisciplinary approach is crucial. Data scientists, product developers, software engineers and system architects know the “how,” but human resources professionals, policy experts, ethicists and lawyers, among others, can also provide the “why” — or the “should we?”

To that end, Mastercard established the AI Governance Council five years ago to oversee the company’s AI activities and ensure they fit with its values and data responsibility principles, Louveaux says. “We sometimes seek advice from independent experts or customers, because hearing how others are viewing our AI innovations is helpful to shine a light on what may be blind spots. This goes beyond compliance — it’s about earning and maintaining trust in how we handle data and the technology.”

Regulating AI

As the private sector grapples with the opportunity and risk in the rapid evolution of artificial intelligence, governments are accelerating development of standards around AI. The European Union has drafted a far-reaching AI Act, which classifies AI systems by risk to the health, safety or fundamental rights of a person, and the U.K. is hosting a global conversation on the risks of the technology at the AI Safety Summit this week.

In the U.S., President Biden on Monday unveiled an executive order that creates new guidelines around AI, including requiring developers of the most powerful AI systems to share safety test results with the federal government and developing guidance for content authentication and watermarking to label AI-generated content. The executive order also addresses ways to strengthen consumer privacy, reduce algorithmic bias that can exacerbate discrimination, and develop best practices for AI in the workplace.

How do you tackle new ideas with generative AI?

Mastercard has embarked on a broad effort to pilot gen AI-powered products and services and to identify the technology, tools and capabilities that it needs to scale, says Mohamed Abdelsadek, executive vice president of data, insight and analytics. That includes systematically taking inventory of concepts from across product teams and launching hackathons and company-wide innovation challenges around gen AI concepts that produce hundreds of ideas.

“We’re prioritizing revenue or efficiency impact, the ease of implementation, and then balancing that with the degree of risk involved,” he says. But before any AI-powered product or application goes live, significant testing is critical, which is why most of the use cases will likely be internal before being rolled out publicly.

“We’re deliberately being a little more focused early on to make sure we get it right,” Abdelsadek says. “We want to put together the right infrastructure and right processes to avoid some of the risks that come with generative AI. At the same time, we want to ensure we’re enabling innovation across the organization.”

Given the possibility of generative AI replacing some human jobs, how do you fuel a culture that encourages exploration and adoption of AI?

Cultivating a culture of innovation and collaboration is critical to Mastercard’s vision to power economies and empower people, says Ken Moore, the chief innovation officer who oversees Mastercard Foundry, the company’s R&D arm. And that goes beyond the R&D division itself — Foundry runs a company-wide innovation program called Sandbox, which gives employees anywhere in the company the opportunity to solve challenges Mastercard is looking to tackle. A recent challenge on unlocking opportunities in the Web3 space resulted in the Mastercard Artist Accelerator, which gives up-and-coming artists access to tech-based tools like AI and NFTs to propel their music careers.

As for incorporating AI into company operations, human oversight will likely be an essential function of generative AI for the foreseeable future, Moore says, especially when addressing challenges like fake information and hallucinations.

“Mastercard’s current exploration of generative AI is focused on harnessing the technology to create efficiencies that support, not replace, employees,” he says. “If AI could be used to accelerate time to solutions for tedious day-to-day tasks, how could we utilize that saved time to produce more of what only humans are singularly capable of doing, like building and nurturing relationships, or ideating new products and services?”

Vicki Hyman, director, communications, Mastercard