On the RegulatingAI Podcast, Sanjay Puri and Raju Narisetti explore AI, trust, language equity, open knowledge, and why AI must be treated as infrastructure.
WASHINGTON, DC, UNITED STATES, February 23, 2026 /EINPresswire.com/ —
What are the implications when artificial intelligence goes beyond organizing knowledge and actually begins to reshape it?
In a wide-ranging episode of the RegulatingAI Podcast, host Sanjay Puri sits down with veteran journalist and global media leader Raju Narisetti to explore the central puzzle of regulating AI today: whether AI will strengthen open knowledge or quietly undermine it.
Narisetti doesn’t mince words.
“We are treating AI-made information like it’s free. But the bill will come due in trust.”
The Quiet Erosion of Trust
The most perilous part, he says, is not the showy hallucinations. It’s something more insidious. People stop fact-checking. Institutions stop spending on fact-checking. And confidence starts to trump facts. Before we know it, we’re left living in a world where plausibility triumphs over proof.
AI lowers the cost of creating information. But it also lowers the cost of creating misinformation. That is the double-edged sword defining this moment.
But Narisetti is not entirely pessimistic. The solution to the problem of information decay can also scale: better systems of provenance, better incentives for quality, and a cultural shift that encourages us to “show our work” again. AI, he says, can help restore trust—if we choose to make trust the product, not the byproduct.
The Language Inequality Problem
One of the most striking parts of the conversation concerns language equity. Of the roughly 7,000 languages spoken around the world, just 10 account for 82% of the internet’s content. That imbalance is what AI inherits when it is trained overwhelmingly on majority languages.
When a language is not represented on the internet, it is effectively invisible to AI.
According to Narisetti, multilingual design cannot be an afterthought. It has to be a foundation. Supporting knowledge ecosystems like Wikipedia in hundreds of other languages is not charity—it is infrastructure. If the AI industry benefits from open knowledge, it has to give back to make that knowledge stronger.
We risk depleting the commons without replenishing it.
AI Isn’t a Software Patch—It’s an Operating Model Shift
Drawing on his experience advising global organizations, Narisetti points to the largest gap between AI hype and AI reality. Executives talk about models, demos, and pilots. But the real benefit lies in something far less glamorous: workflow redesign, data cleansing, governance guardrails, and change management.
“The model is the easy part. The operating model is the hard part.”
AI is more than a technology upgrade. It is a business transformation challenge. Companies that treat AI as a “plug-in” tool are left disappointed. Companies that rethink decisions, incentives, and people’s roles are seeing the impact.
The Global South Must Be a Co-Creator
Narisetti is clear that emerging economies cannot remain just data sources and customer bases.
“The global south can’t just be training data and customers. It has to be a co-creator.”
India, he says, has a special opportunity. With its scale and constraint-driven creativity, it can show the world how to build multilingual, affordable, and practical AI. The aim is not to export AI. It is to develop contextual AI.
Shared compute resources, datasets governed by local rules, and AI literacy are the keys to making inclusion substantive, not just a slogan.
Treat AI Like Critical Infrastructure
If Narisetti had two minutes with world leaders, his counsel would be this: “Stop treating AI like a trophy and start treating it like critical infrastructure.”
This means building systems that can say, “I don’t know.” This means building for high-risk edge cases first. This means aligning business incentives so engagement isn’t the price of truth.
By 2030, commodity content will be cheap. What will be expensive is “truth with receipts.”
The future of AI, as Raju Narisetti describes it in his conversation with Sanjay Puri on the RegulatingAI Podcast, will not be determined by the size of the models. It will be determined by whether we can build systems where errors can be traced, corrections are visible, and trust can last.
Trust is not optional in the age of AI.
It’s infrastructure.
Upasana Das
Knowledge Networks
email us here
Visit us on social media:
LinkedIn
Instagram
Facebook
YouTube
X
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.