
Do You Trust Your AI’s Inferences? — Sahil Yadav, Hariharan Ganesan, Telemetrak

Explainable AI: trust or verify?

In a data-driven world where AI increasingly makes critical decisions, the question of trust has become paramount. At a recent tech conference, Sahil Yadav and Hariharan Ganesan from Telemetrak presented a compelling case for explainable AI—technology that doesn't just deliver results but also provides clear reasoning for its conclusions. Their presentation addressed a fundamental challenge in AI adoption: can business leaders trust the black-box recommendations that algorithmic systems produce?

Key Points

  • The trust gap in AI adoption remains significant—executives and decision-makers struggle to implement AI solutions when they can't verify or understand the underlying reasoning.

  • Explainable AI (XAI) creates transparency by providing clear rationales for predictions and recommendations, making complex models accessible to non-technical stakeholders (a sketch of what such a rationale can look like in code follows this list).

  • Human-AI collaboration works best when systems are designed to augment human decision-making rather than replace it entirely—explanations facilitate this partnership.

  • Implementation barriers for explainable AI include technical complexity, model performance trade-offs, and organizational resistance to transparency.
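To make the second point concrete, here is a minimal, hedged sketch of what a "clear rationale for a prediction" can look like in practice. It is not Telemetrak's method; it simply ablates one feature at a time on an ordinary scikit-learn classifier and reports which features moved the predicted probability, using a standard demo dataset. The helper name `rank_feature_contributions` is illustrative, not from the talk.

```python
# Hypothetical single-feature ablation: for one record, replace each feature
# with a neutral baseline (the training mean) and measure how much the model's
# score changes. Large shifts mark the features the prediction leaned on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def rank_feature_contributions(model, x_row, baseline, feature_names):
    """Ablate one feature at a time and report the change in predicted probability."""
    base_prob = model.predict_proba(x_row.reshape(1, -1))[0, 1]
    contributions = []
    for i, name in enumerate(feature_names):
        x_ablated = x_row.copy()
        x_ablated[i] = baseline[i]  # swap the feature for its training mean
        prob = model.predict_proba(x_ablated.reshape(1, -1))[0, 1]
        contributions.append((name, base_prob - prob))  # positive => pushed the score up
    return sorted(contributions, key=lambda t: abs(t[1]), reverse=True)

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

baseline = X_train.mean(axis=0)
ranked = rank_feature_contributions(model, X_test[0], baseline, data.feature_names)
print("Predicted p(class 1) = %.2f, driven mainly by:" % model.predict_proba(X_test[:1])[0, 1])
for name, delta in ranked[:5]:
    print(f"  {name:25s} {delta:+.3f}")
```

Even this crude attribution turns an opaque score into a short, readable list a non-technical stakeholder can sanity-check, which is the behavior the bullet above describes; production systems typically use more principled attribution methods, but the shape of the output is the same.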

The Transparency Imperative

The most profound insight from the presentation is that transparency isn't just a technical nicety—it's a business necessity. When stakeholders understand why an AI system recommended a particular action, adoption rates skyrocket. This matters tremendously in the current business climate where AI investments are under increasing scrutiny. According to Gartner, nearly 85% of AI projects ultimately fail to deliver value, with "lack of trust" cited as a primary factor.

The speakers' emphasis on "showing your work" resonates deeply with the emerging regulatory landscape. As the EU's AI Act and similar regulations take shape globally, explainability is transitioning from a competitive advantage to a compliance requirement. Companies that build transparency into their AI systems now won't just win more customer trust—they'll avoid potential regulatory penalties down the road.

Beyond the Presentation: Real-World Applications

What the presentation didn't fully explore is how different industries are implementing explainable AI with varying levels of success. In healthcare, for example, Beth Israel Deaconess Medical Center in Boston has pioneered an explainable AI system for diagnosing pneumonia. Their approach involves highlighting the specific image regions that informed the model's diagnosis, so clinicians can check the system's reasoning against the scan itself.
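As a hedged illustration of that idea only (not the hospital's actual system, whose internals the presentation did not cover), the sketch below computes an occlusion-sensitivity map: slide a blank patch across the image, measure how much the model's pneumonia score drops, and treat the largest drops as the regions the model relied on. `predict_pneumonia_prob` is a stand-in for any image classifier.

```python
# Hypothetical occlusion-based saliency: regions whose removal hurts the score
# most are the regions the model "looked at" when making its prediction.
import numpy as np

def occlusion_saliency(image, predict_pneumonia_prob, patch=16, stride=8, fill=0.0):
    """Return a heatmap where high values mark regions the model relied on."""
    h, w = image.shape[:2]
    base_score = predict_pneumonia_prob(image)
    heatmap = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = fill  # blank out one patch
            drop = base_score - predict_pneumonia_prob(occluded)
            heatmap[top:top + patch, left:left + patch] += drop
            counts[top:top + patch, left:left + patch] += 1
    return heatmap / np.maximum(counts, 1)

if __name__ == "__main__":
    # Toy stand-in "model" so the sketch runs end to end: it only attends to
    # one square of the image, which the saliency map should recover.
    rng = np.random.default_rng(0)
    xray = rng.random((64, 64), dtype=np.float32)
    fake_model = lambda img: float(img[20:36, 20:36].mean())
    saliency = occlusion_saliency(xray, fake_model)
    print("most influential pixel:", np.unravel_index(saliency.argmax(), saliency.shape))
```

The same pattern, applied to a real chest X-ray classifier, produces the highlighted-region overlays described above and gives radiologists a concrete artifact to accept or challenge.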
