Speaker 1:
From the New York Stock Exchange at the corner of Wall and Broad Streets in New York City, welcome Inside the ICE House. Our podcast from Intercontinental Exchange is your go-to for the latest on markets, leadership, vision, and business. For over 230 years, the NYSE has been the beating heart of global growth. Each week, we bring you inspiring stories of innovators, job creators, and the movers and shakers of capitalism here at the NYSE and ICE's exchanges around the world. Now, let's go inside the ICE House. Here's your host, Lance Glinn.
Lance Glinn:
AI is reshaping our world, but without responsibility, it can amplify bias, erode trust, and cause real harm. As decisions about health, jobs, and safety are more reliant on technology than ever before, accountability is just as important as innovation. Responsible AI means building systems that are not only powerful, but fair, transparent, and aligned with human values. It's the difference between tech that serves society and tech that risks it. Credo AI is pioneering the future of responsible AI by seamlessly integrating governance into the AI development lifecycle. Founded in 2020 by today's guest, CEO Navrina Singh, Credo AI is powering the trust renaissance in AI and accelerating the path to trustworthy, scalable, and impactful artificial intelligence. Navrina, thanks so much for joining us inside the ICE House.
Navrina Singh:
Thank you so much for having me, Lance.
Lance Glinn:
So you've had an impressive career spanning leadership roles at Microsoft and Qualcomm, and then in March of 2020 you made the bold move, in the heat of a pandemic no less, to launch Credo AI. I want to just start with the company's origin story. What inspired you to create Credo, and as the company's grown, how has your vision for it evolved over the last five-plus years?
Navrina Singh:
Yeah, absolutely. It's interesting, many founders will share that it is not just one moment that wakes you up one morning and says, let's start a company. It's a million little things, problems that you've seen in your life experiences that force you to really challenge the status quo and figure out how you would create something better than imagined. And that's exactly my story with Credo AI. After spending 20 years building AI products at some amazing companies across the world like Microsoft and Qualcomm, and then being on the board of Mozilla launching Mozilla AI, what became very clear was that we are living at a very interesting moment where, unlike previous revolutions, the AI transformation is going to change everything. It's going to change our society, the way we work, the way we live, who we are in this new human evolution.
And what was really critical, as I saw, especially over the past 20 years building AI products, was that as technologists, we sort of put aside the responsibility and the trust element, which was so critical in building these systems. We gravitated as builders towards MLOps and LLMOps and really just the core infrastructure of technical capabilities. But what was missing was the infrastructure of trust. And I truly believe that for this transformational technology to be beneficial to humanity, to add to our prosperity, to add to this world, it really needs a new layer of trust. And that's the vision we started Credo AI with. So five years ago we started Credo AI to really make sure that we set the standard for what good looks like in artificial intelligence and to build this new infrastructure of trust, which basically means bringing together not only your data scientists, AI experts, and ML engineers, but also your governance, risk, and business perspectives very early on in the design, development, and procurement of these AI systems, which are going to transform the world.
Lance Glinn:
So you mentioned the word trust quite a few times in that answer, and there's a lot of talk right now about responsible AI. It's a phrase that we hear often, but you've been very intentional about giving that idea some structure. In fact, Credo AI's mission is to ensure AI is always in service of humanity. So in a world where ethics and innovation can often feel like they're pulling in different directions, how do you and your team stay true to that creed?
Navrina Singh:
What a great question. One of my goals is to actually eradicate the word responsible. Responsibility and accountability should just be woven into how we are building artificial intelligence. And so for us, from day one, we've been really intentionally focused on how we define trust: how we trust not only the systems, but also the people working on them as well as the ecosystem. And we've really started with, I would say, first principles, understanding and figuring out what alignment means.
So as a first step within the Credo AI platform, our software platform, we make sure there is a good understanding of the context of the AI use case and what we need to measure, so that we can actually measure it consistently. And that's what we call the alignment problem. As you can imagine, Lance, this alignment problem is a very tough one, because you have to not only align people on how you're going to measure and consistently deliver against the goals you've set for that AI application, but you also have to create the right processes so that you can actually gather evidence from the technical stack as well as the business stack.
And then lastly, you need to have very AI-informed, AI-literate individuals who are actually stewards of that alignment process. So I would say that at Credo AI, we have created standardized and scalable software to do that alignment. And then very quickly, as you can imagine, we get very prescriptive about what you need to measure. This is where we have a proprietary technology called policy intelligence, where we codify company values, policies, AI regulations and standards, and best practices in the AI ecosystem based on an AI application. And then against those codified values, we go and measure and interrogate your AI models, your AI applications, and your data sets, so that you're actually doing the things you say you're doing. So trust is not a loose word here. It's actually a very intentional set of objectives, a very intentional set of measures, a very intentional set of evaluations that are done at both the technical level and the procedural level.
Lance Glinn:
So I want to pivot our conversation a little bit now to talk about you and your history. As I mentioned in the first question, you have experiences at Microsoft and Qualcomm, among many others. How did these experiences, whether working on innovation strategy or emerging tech as a whole, shape your entrepreneurial spirit and your leadership style now?
Navrina Singh:
Oh, it's really interesting. One of my favorite sayings ever is, "You don't have to be great to get started, but you have to get started to be great." And so if you just unpack that a little bit, I would say a big part of my leadership style is leading from the front and a bias towards action. Because if I'm expecting something of my team and members of my staff that I myself cannot deliver on, I think there's a big gap.
But the second thing is all about trust. Once you have brought in the smartest, most talented people and you're sitting in the room learning from them, you've got to trust them by providing them a set of prioritized commitments and seeing how they deliver. And then I would say the last thing I've learned in my career is that innovation is not always the answer. As a builder, I get excited about building new technology and new tooling and new products, but I don't think the focus should just be on innovation. It should be focused on the pain we are trying to solve for consumers, and in this case humanity, because AI is coming in service of all of us. So staying grounded with a bias towards action, building trust with my leadership team, and, more importantly, consistently focusing on what we are really trying to solve for and how we deliver value have been some of my guiding principles.
Lance Glinn:
One or two other roles I really want to dive in on real quick. You're currently a member of the US National AI Advisory Committee and were appointed in March of 2024 by the UN Secretary-General to help shape global AI guidance. First and foremost, what do you see as the role of government and global institutions in creating guardrails for AI?
Navrina Singh:
Governments, the public sector, and policymakers play such a critical role in ensuring that we have the right systems in place to infuse that trust. As a technologist, five years ago, I didn't have an understanding of policymaking. I didn't understand regulations, and I didn't understand standards as well as I should have. And I would say I've put in a lot of hard work over the past five years really being at the table to learn from folks in government, from standard-setting bodies, and from policymakers. And one thing that has become very clear is how critical it is to have a bridge between the private and public sectors, especially in artificial intelligence.
Without that public-private partnership, I think we are going to enter an era of this AI transformation that results in, I would say, scarcity rather than prosperity. It's going to result in misuse rather than effective use of this technology. And I think it really is critical, as I was mentioning, for the technologists, the builders, the AI experts to share the table and the space with the policymakers and government officials, who bring in a very different kind of thinking, especially on geopolitical grounds. So I'm really a big proponent of bringing these stakeholders together. That's the only way we are going to have progress in artificial intelligence.
Lance Glinn:
So I want to go back to Credo AI now, and I want to pivot to recent news. In November 2024, the company embarked on a collaboration with Microsoft Azure AI to empower enterprises in safely adopting cutting-edge AI solutions. Earlier in May, that collaboration took a step forward with Credo AI's integration with Microsoft Azure AI Foundry. Can you first just walk us through what this launch means for the company and why it's such a pivotal moment in Credo AI's mission to bring trustworthy AI to the enterprise?
Navrina Singh:
When I think about some of the most transformational companies, Microsoft is up there not only as an AI leader, but as a company that has been leading with trust, with the right leadership of Satya Nadella at the forefront. And so to partner with an AI-first company, the leader in AI, and a company that believes in trust as deeply as we do has been just phenomenal. But why this is actually groundbreaking comes down to two reasons. One is that Azure right now serves 95% of Fortune 500 companies in the world. So the footprint that Azure has is pretty amazing and phenomenal. That gives us at Credo AI an opportunity to leverage that scale, to really be part of the ecosystem where AI is being built and used, and to power the governance and trust within that ecosystem.
The second reason this is groundbreaking is that Azure AI Foundry is really targeted at IT, DevOps, and technical leaders. And so now with the Credo AI partnership, we are actually pulling governance and trust right into the beginning of the design and development of these powerful AI systems powered by foundation models and large language models. So you really don't have governance or trust as an afterthought; it has now become a strategic advantage, because if you're able to weave in governance from day one of your design and development, you are actually going to get a much more trusted application at the other end. And that's what we are powering together. So between the massive global footprint and the ability to shift governance left, earlier in the design lifecycle of an AI application, I'm really excited to see the outcomes from this partnership.
Lance Glinn:
So in a release unveiling the launch in May, you mentioned that this collaboration and its most recent launch isn't just a step forward for Credo AI or Microsoft, but a step forward for trustworthy AI innovation worldwide. So as this partnership continues, what would success look like, in your eyes, a year from now? What would need to happen for you to say this is truly reshaping enterprise AI?
Navrina Singh:
I have been a very hardcore believer that trustworthy AI is not one company's or one person's mission. It really needs the collective we. It needs every enterprise, every individual from builders to policymakers involved in actually making sure that trust is infused within artificial intelligence. So success for me from this partnership would mean that we've created an ecosystem of large enterprises, small enterprises, builders, investors, and policymakers all coming together, equally invested in trustworthy AI and governance from the onset: not when something goes wrong and there is regulatory scrutiny, not when your chatbot doesn't perform and consumers are coming after you, but because you deeply care about using trust as a competitive advantage and using it to make sure that this technology works for all the stakeholders involved.
Lance Glinn:
Now, Navrina, in conjunction with AI Trailblazers, the New York Stock Exchange is hosting the Women in AI Breakfast. When you look at the industry today, where do you see the biggest gaps in gender representation, and why does closing those gaps matter for the integrity of the technology itself?
Navrina Singh:
It's really fascinating, and you used the word integrity, which I think is exactly the right word here. If this technology is to work for everyone, it has to serve everyone. That means irrespective of your gender, racial background, education, et cetera, it has to really work for all the stakeholders. And especially as I think about women, who actually constitute one of the largest representative sets of consumers across every industry, it's really critical to have their voices represented, but more importantly, to have them as leaders in AI. So for the biggest gaps I'm seeing right now, I'll name a few. One is, if you think about public and private sector boards, we would love to see more women on boards, especially driving AI strategy and thought, because there are some really amazing, talented leaders who've been leading this work at some phenomenal companies.
The second is I would love to see more female founders building AI companies, because again, I think the opportunity in AI is immense, including going from zero to one because of AI. If you're able to use AI technology much more effectively, we are now finding a new breed of entrepreneurs who can actually build businesses much faster than five years ago, or even a year ago. So I would love to see more women founders.
Third, and this is something I'm seeing firsthand, and why AI literacy is very important to me: as I work with different private and public institutions and companies as well as with startups, what's been fascinating to watch is that most of the builders who are adopting and using AI tools are men. That is actually very concerning to me, because I really believe this is the moment in time, whether you are a grad student or a female leader in the private or public sector, to have not the fear but the excitement of using artificial intelligence, and to really put that tool in your toolbox to win in this age of AI. And so I would love to see more women AI builders than exist today.
And then lastly, what I would love to see is more women investors investing in women. We've raised about $43 million so far over the past five years, and for AI governance, that is really exciting. But it has been really hard to find women investors who are investing in the infrastructure of trust, who are investing in the critical core capabilities that make up artificial intelligence. So across boards, across founders, across builders, and across investors, I would just love to see more women represented.
Lance Glinn:
Now, Navrina, as we wrap up: Credo AI earlier this year reached five years of trustworthy AI development. How do you envision the company's role in shaping the future of the sector, especially as the technology continues to evolve and capabilities, some of which may right now be unforeseen, emerge in the months and years to come?
Navrina Singh:
From day one of Credo AI, we've been relentlessly focused on our mission, which is: how do we become the standard of good AI, defining what that good looks like so that this technology works in service of humanity? So whether it is generative AI or our groundbreaking work in making sure the AI agents and the agentic AI future are also governed, we continue to stay very heads-down focused on making sure that we become the infrastructure of trust. You can have any kind of AI tooling, whether it is data and AI infrastructure or governance, risk, and compliance tooling, but Credo AI is fundamentally the infrastructure that is going to power what that good really looks like and how you're going to measure it consistently. So over the next five, six, ten years, our goal is to become synonymous with good AI and to continue powering this transformational technology, which is going to change our world.
Lance Glinn:
Well, Navrina, thank you so much for joining me for this conversation and for joining us here inside the ICE House.
Navrina Singh:
Thank you so much for having me.
Speaker 1:
That's our conversation for this week. Remember to rate, review, and subscribe wherever you listen and follow us on X at ICE House Podcast. From the New York Stock Exchange, we'll talk to you again next week Inside the ICE House.
Information contained in this podcast was obtained in part from publicly available sources and not independently verified. Neither ICE nor its affiliates make any representations or warranties, express or implied, as to the accuracy or completeness of the information, and do not sponsor, approve or endorse any of the content herein, all of which is presented solely for informational and educational purposes. Nothing herein constitutes an offer to sell, a solicitation of an offer to buy any security, or a recommendation of any security or trading practice. Some portions of the preceding conversation may have been edited for the purpose of length or clarity.