Real Talk AI: Building AI That Users Can Trust, with Tapan Kamdar on Privacy-First Innovation and Leading at Scale
How Mozilla’s Senior Director of GenAI builds responsible AI products while leading teams of 200+, and what every AI PM can learn about balancing innovation with user trust
AI product management is undergoing a fundamental shift. The old playbook of shipping features on a roadmap no longer works when building intelligent systems that learn and evolve. Add privacy concerns, regulatory scrutiny, and the challenge of leading large, cross-functional teams, and you have an entirely new game.
Welcome to Real Talk AI, a no-fluff interview series in which AI product leaders share how they actually ship AI products: the decisions, trade-offs, and systems behind the scenes.
Today’s conversation is with Tapan Kamdar, Senior Director of Product Management for Generative AI, Search & Productivity at Mozilla. With experience building discovery engines at Meta that drove 30 million new daily active users and scaling platforms at GoDaddy that fueled 10% YoY revenue growth for five consecutive years, Tapan brings a rare combination of scale thinking, technical depth, and mission-driven leadership to AI product development.
In this interview, you’ll learn:
How to balance AI innovation with user privacy using Mozilla’s responsible AI framework
What’s fundamentally different about leading large AI product teams (200+ people) vs traditional product teams
How to build discovery systems as self-reinforcing flywheels that drive both engagement and business outcomes
The evolution from “roadmap PM” to “AI-native PM” and what skills matter most in the future
Practical frameworks for structuring AI teams, defining metrics, and scaling platforms
What hiring managers actually look for at different AI PM levels, and common interview mistakes candidates make
Responsible AI That Ships
How do you balance AI innovation with user privacy and trust at Mozilla? What frameworks do you use for these decisions?
At Mozilla, responsible AI isn’t a constraint; it’s a competitive advantage. Our approach is guided by three core principles that we live through Firefox every day:
User Value First (User Agency): Every AI feature must deliver clear, repeatable value to users. If it doesn’t, it doesn’t ship. If it stops doing so, it unships. This sounds obvious, but it’s the filter that prevents “shiny object syndrome” in AI projects.
In Firefox, this means AI is opt-in only. Users are in full control. They decide when to enable an AI feature. We design every feature around permission, not assumption.
Privacy by Design (Privacy First): We minimize data collection, anonymize requests, and give users explicit control. Most companies treat privacy as a compliance checkbox. We treat it as a product feature that builds long-term trust, which drives usage and retention.
In practice, we’re developing on-device models that deliver intelligent features like tab organization and alt-text generation without sending data to the cloud. None of this data is used to train or improve models. On mobile, we go further with private model routing that obfuscates user identity across multiple requests.
Mission Alignment (Choice): Our AI work aligns with Mozilla’s mission of keeping the internet open, accessible, and user-centric. We think of Firefox as the platform that enables us to “show, not tell” our mission and vision to grow the open web in the age of AI.
Firefox is becoming an open AI platform. Users can choose between major AI providers and open-source alternatives, all integrated equally. There’s no default bias or walled garden; it’s the open web model applied to AI.
What This Means in Practice:
Firefox itself is our proof point: a working demonstration of what responsible, open, and user-first AI can look like. We’re using the browser to express our mission and live it—showing that innovation and integrity can coexist.
I encourage PMs to adopt the mindset of shipping fast, but never at the cost of user agency. Privacy is an innovation lever, not a constraint.
PM Tip: Privacy-first AI doesn’t slow you down, it forces you to build features that are genuinely valuable. If you can’t justify a feature without collecting personal data, you shouldn’t build it.
Cross-Company Evolution
You’ve built AI products at Meta, GoDaddy, and now Mozilla—very different contexts. How has your approach to AI product management evolved across these companies?
The common thread across Meta, GoDaddy, and Mozilla has been unlocking outcomes by solving what people cared about most. However, each company taught me something different about how to do that.
At GoDaddy: Empowerment
Small business owners didn’t want to spend hours searching for domains or waiting on hold with customer service. They wanted to grow. AI helped them find the right domain instantly, resolve issues proactively, and learn from how similar businesses scaled. This taught me how to use AI to anticipate needs and remove friction.
At Meta: Connection at Scale
AI-powered discovery helps you find your tribe and stay close to them through feeds and messaging. The challenge was scale: billions of people, endless signals. That taught me how to build experiences and platforms that constantly learn and adapt without breaking trust.
At Mozilla: Responsibility
Users want AI that helps them, without compromising their privacy or control. Here, AI is not just a feature; it’s an extension of our mission to keep the internet open and user-first. I’ve learned to treat trust as the most important outcome AI can unlock.
That progression—empowerment → scale → responsibility—has shaped my approach to AI today. Each experience prepared me uniquely for the next, giving me a perspective that combines pragmatism, scale thinking, and mission alignment.
PM Tip: Your career in AI should be a progression of increasingly complex challenges. Don’t just chase company names; chase the different types of problems that will round out your skillset.
Leading AI Teams at Scale
You’ve led teams of 200+ professionals building discovery engines. What’s unique about managing large AI product teams compared to traditional product teams?
The fundamentals remain the same: focus on the user problem, and keep the team excited about why we’re solving it. However, how you execute that mission is completely different.
The Traditional Approach:
User focus: Build features that directly address core pain points, instrument them, and ship only if they move the key metric.
Team focus: I use my Level Up Framework of Clarity, Energy, and Trust. Clarity means a one-page outcome, success metrics, and guardrails so decisions can happen without me in the room. Energy is protected by avoiding “just one more idea” once the plan is set. Trust comes from shared decisions: solicit input early, make tradeoffs transparent, assign clear decision-making rights, and support the owner’s call within established guardrails.
The Level Up Framework Explained:
The Level Up Framework is a leadership model I developed to empower teams at scale. It’s built on three pillars:
Clarity - Create a one-page outcome document with success metrics and guardrails that enable decisions without you in the room
Energy - Protect team focus by avoiding “just one more idea” once the plan is set
Trust - Build it through shared decisions: solicit input early, make tradeoffs transparent, assign clear decision-making rights
I’ve detailed this framework in presentations and workshops. You can explore it further:
LinkedIn post with deck: https://www.linkedin.com/posts/tkamdar_thatss-what-makes-a-great-pm-activity-7344771752398307328-JtKe
Products That Count Webinar:
What’s Fundamentally Different About AI Teams:
We’re not just building with new technology; we’re pioneering an entirely new way of building. This requires us to rewire ourselves as leaders first.
The core evolution is the shift from building predictable, static products to cultivating adaptive, intelligent systems. A traditional product leader architects a fixed solution; an AI leader cultivates a system that learns and evolves.
I’ve had to unlearn my instinct for promising specific outcomes and instead teach my organization to embrace emergent behaviors as a feature, not a bug. My job is no longer to have the answers, but to create the systems that can discover answers we couldn’t have imagined.
This involves:
Teaching the team to build evaluation frameworks before features
Instrumenting for serendipity, not just success
Rewarding teams for the velocity of their learning, not just for shipping
How This Transforms Execution:
We no longer ship features; we ship hypotheses. Every release is an experiment that teaches us about the intersection of human behavior and machine intelligence.
This requires reskilling at every level:
PMs define success through outcomes and guardrails, rather than rigid specifications
Engineers design for graceful handoffs and uncertainty
Designers create experiences that elegantly handle ambiguity
This is the muscle memory every product team will need in the next five years. The leaders who thrive will be those experimenting on themselves now, learning how to maintain team cohesion when the product’s behavior can evolve unexpectedly.
As leaders, we’re reimagining ourselves in the world of AI. We’re not just shipping AI products. We’re prototyping the future of product development itself.
PM Tip: If you’re still managing AI teams like traditional feature teams, you’re already behind. Start small: pick one project and run it as a learning system with hypothesis-driven development. Learn what breaks and what works before you scale this approach across your organization.
Discovery as a Flywheel
You built discovery engines that drove significant user growth at Meta. How do you approach product strategy for AI-powered discovery and recommendation systems?
Meta is a system of flywheels: network, discovery, conversations/engagement, and monetization. My approach at Meta was to drive the conversation flywheel at scale by building a self-reinforcing discovery flywheel.
This AI system connects people with relevant people, content, and communities, generating signals that constantly refine future connections and conversations. Crucially, the discovery and conversation flywheels drove the monetization flywheel, even though we never took goals to drive monetization directly.
By optimizing the feed for shareable and conversation-worthy content, we sparked more private conversations. This systems-thinking approach enabled my teams to drive an incremental 30 million new daily active users during my time leading them at Meta.
Ship Hypotheses, Not Features
This strategy transforms execution: we ship hypotheses, not features, and embrace learning from failure.
The Messenger Notes Example:
Our first hypothesis for Messenger Notes was that a single emoji could be a lightweight conversation starter. The experiment proved us wrong: users co-opted it to signal they were unavailable.
The hypothesis failed, but the learning was a success. We learned that if done right, conversation surfaces could be creation surfaces.
We iterated with a new hypothesis: simple text, not an emoji, would unlock the connection. That pivot was the key. The feature now accounts for 25% of contextual message creation. In its first month, it drove 1.2 billion note creations and attracted a significant number of new daily active messaging users.
This journey from a failed experiment to a massive success illustrates the strategy: it’s not about being right the first time, but about maximizing the velocity of learning.
PM Tip: In discovery systems, your job isn’t to predict what users want—it’s to build a system that learns what they want faster than your competitors. Optimize for learning velocity, not prediction accuracy.
Balancing Technical and Business Goals
With your technical background in ML and web personalization, how do you balance technical feasibility with business objectives when leading AI products?
I balance technical feasibility with business objectives by focusing my leadership on the “why” and the “what,” which empowers my teams to own the “how.”
Establish the “Why”: Define the core user problem and the business outcome we’re driving.
Define the “What”: Create a clear, one-page document that outlines the problem, strategy, success metrics, and guardrails. This clarity allows my technical teams to innovate and make decisions on the implementation without me in the room.
The GoDaddy Example:
My team owned the “how” by building the domain search engine from the ground up, an effort that drove 10% year-over-year revenue growth for five consecutive years and outperformed Google when it entered the domains business.
My role in their process is not to question the technical “how,” but to be a relentless user of the product. By constantly engaging with the experience, I can surface challenges and opportunities the team might not encounter. I then bring these real-world user perspectives back to them, which helps them refine their solutions.
This ensures the technical implementation stays deeply connected to the end-user experience without me ever dictating their approach.
PM Tip: The best technical PMs don’t out-engineer their engineers; they out-use the product. Become your own product’s most demanding user, and you’ll surface the correct problems for your team to solve.
Structuring AI Organizations
How do you structure collaboration between AI/ML engineers, data scientists, and other functions on large-scale AI initiatives?
I structure my organization around a shared purpose and a shared language.
Instead of functional silos, I build virtual teams of product managers, engineers, designers, and data scientists who are all aligned on the same user problem and business outcome.
Make Literacy Table Stakes:
To make this work, we invest heavily in making model, data, and privacy literacy table stakes for every role. When the entire team can reason about the technology and its implications, decisions move to the edge instead of waiting on leaders to make the calls.
My role shifts from approver to coach. I focus on exploring the unknowns, setting clear expectations, and defining RACIs, which builds trust and empowers the team to own the “how.” This keeps their energy focused on solving real user pain, not on navigating internal approvals.
Run as a Learning System:
I run the organization as a learning system, not a feature factory. This means:
We ship hypotheses, not features
We reward the velocity of learning, not just shipping
We focus on cumulative user outcomes and experiments shipped per month
We treat user value and trust as equal, non-negotiable goals
We bake privacy by design into the product from day one
To stay connected to the work without dictating the “how,” I act as a relentless user of our products. This allows me to bring real-world challenges and user-centric insights back to the team, ensuring their innovative solutions remain grounded.
This is the muscle every product team needs to build, AI or not.
PM Tip: Cross-functional collaboration fails when each function speaks a different language. Invest in shared literacy across the team: engineers who understand user value, designers who understand model constraints, and PMs who understand data quality. That shared language is what enables true collaboration.
Defining AI Product Success
How do you define and measure success for AI products? How do you balance technical metrics with business outcomes?
I measure AI products by their impact on business outcomes, not by the performance of the underlying model or platform.
Technical metrics—such as accuracy, quality, and latency—are treated as necessary inputs and guardrails, but they are never the North Star. A model can be 99% accurate and still fail to solve a user’s problem, resulting in a business failure.
The Process:
My team starts by defining a primary business metric, such as user retention, incremental daily active users (DAU), or conversion rate, as the key success metric before any technical work begins.
The rule is simple: we ship only if an initiative moves that key metric. If it doesn’t, we cut it, no matter how technically impressive.
Creating Alignment:
The balance comes from framing every initiative as a clear hypothesis that connects the two: “We believe that improving [technical metric] will result in a Y% improvement in [business outcome].”
For example, at GoDaddy, the goal wasn’t just to build a highly accurate domain valuation model; it was to improve marketplace liquidity and user confidence, ultimately fueling revenue growth and bringing new buyers and sellers to the platform.
My role is to constantly challenge the team to bridge this gap by asking, “How will this lower latency translate to higher user retention?” This ensures that technical improvements are always treated as a means to an end, keeping our resources focused on creating measurable user value.
PM Tip: Never let your engineers or data scientists fall in love with technical metrics. Every technical metric should have a line of sight to a business outcome. If you can't clearly draw that line, you're optimizing for the wrong thing.
Privacy-First Development
At Mozilla, privacy is core to the mission. How does a privacy-first approach change how you build and deploy AI products?
At Mozilla, we treat privacy as a product capability you can see and control, not as a compliance checkbox. Privacy is a feature, not an add-on.
This philosophy fundamentally changes our technical strategy to be user-first, leveraging on-device AI to minimize data collection from the start. When personalization and cloud-based AI are used, we give users explicit control and transparency at every step.
Example in Practice:
Take a feature like smart tab grouping using local AI: we don’t need to train on, or even know, which tabs you have open in your browser. By designing for user agency, we transform privacy into a lever for innovation, fostering long-term trust, which is our most important outcome.
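As a rough illustration of what running local AI can mean here, the sketch below groups open-tab titles by topic entirely on the user’s machine. It is not Mozilla’s implementation; the embedding model, clustering method, and distance threshold are assumptions chosen only for demonstration, using the open-source sentence-transformers and scikit-learn libraries.

```python
# Illustrative sketch only, not Mozilla's code: group tab titles by topic
# using a small embedding model that runs locally (scikit-learn >= 1.2).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering


def group_tabs(tab_titles: list[str]) -> dict[int, list[str]]:
    """Group tab titles by semantic similarity, with no network calls at inference time."""
    # Small open model; weights are fetched once, then inference stays on-device.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(tab_titles, normalize_embeddings=True)

    # Cosine-distance clustering; 0.45 is an arbitrary example threshold.
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.45, metric="cosine", linkage="average"
    )
    labels = clustering.fit_predict(embeddings)

    groups: dict[int, list[str]] = {}
    for title, label in zip(tab_titles, labels):
        groups.setdefault(int(label), []).append(title)
    return groups


if __name__ == "__main__":
    tabs = [
        "Best hiking trails near Lake Tahoe",
        "Tahoe weekend weather forecast",
        "Python asyncio tutorial",
        "FastAPI dependency injection docs",
    ]
    print(group_tabs(tabs))  # tab titles never leave the process
```

The point of the sketch is the constraint it encodes: because everything runs locally, there is nothing to anonymize, transmit, or retain.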
Operationally:
Privacy is the default in our build and launch process:
We ship hypotheses, not just features
Our evaluation frameworks include privacy guardrails alongside traditional metrics like quality and latency (see the sketch after this list)
We measure trust by tracking opt-out rates and the usage of privacy-centric features
Our go-to-market strategy leads with radical transparency about what we collect, what stays local, and what the user controls (everything)
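As a hypothetical illustration of privacy guardrails sitting alongside quality and latency, the sketch below encodes a launch gate in which a privacy violation blocks a release just as a quality or latency miss would. The field names and thresholds are invented for the example and are not Mozilla’s actual evaluation framework.

```python
# Hypothetical launch-gate sketch: privacy is a guardrail on equal footing
# with quality and latency. All thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class EvalResult:
    answer_quality: float           # e.g. human-rated score, 0..1
    p95_latency_ms: float           # end-to-end latency, 95th percentile
    sends_user_data_off_device: bool
    feature_is_opt_in: bool


def launch_guardrail_violations(result: EvalResult) -> list[str]:
    """Return guardrail violations; an empty list means the candidate can ship."""
    violations = []
    if result.answer_quality < 0.8:          # quality guardrail (example value)
        violations.append("quality below 0.8")
    if result.p95_latency_ms > 1500:         # latency guardrail (example value)
        violations.append("p95 latency above 1.5s")
    if result.sends_user_data_off_device:    # privacy guardrail
        violations.append("user data leaves the device")
    if not result.feature_is_opt_in:         # user-agency guardrail
        violations.append("feature is not opt-in")
    return violations


if __name__ == "__main__":
    candidate = EvalResult(0.86, 900, sends_user_data_off_device=False, feature_is_opt_in=True)
    print(launch_guardrail_violations(candidate) or "all guardrails pass")
```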
This approach transforms privacy from a tax on speed into a durable product advantage, allowing users to feel secure and our teams to innovate responsibly.
PM Tip: Privacy-first AI forces better product design. If you can’t build a feature without collecting sensitive data, you’re probably solving the problem the wrong way. Constraints breed creativity.
Scaling AI Platforms
You scaled GoDaddy’s platforms significantly. What are the unique challenges of scaling AI-powered platforms, and how do you address them?
Scaling an AI platform isn’t about handling more users; it’s about scaling the relationship with every user to solve more of their problems.
At GoDaddy, our vision was to be the “Small Business OS” for entrepreneurs. That relationship often started with a single domain, but our goal was to become their growth and success partner, not just a product vendor.
Our AI Platform as the Engine:
We scaled the platform to learn from our most mature customers, ingesting signals from their successful businesses to create personalized roadmaps for new entrepreneurs. This transformed our platform from a simple tool provider into a proactive guide for growth.
Building a Resilient Learning System:
We built this as a resilient learning system designed to deepen that partnership over time. The platform didn’t just learn from our users’ actions on our site; it ingested signals from how their customers engaged with their products and how our users managed those relationships across various touchpoints.
Our models utilized this comprehensive, end-to-end view of success to guide new entrepreneurs, enabling them to replicate the outcomes of their successful peers.
This is how we consistently over-delivered value, expanded the customer’s relationship with the GoDaddy ecosystem, and grew their lifetime value (LTV), fueling our subscription revenue growth.
PM Tip: Don’t scale your platform by adding more features. Scale it by deepening the relationship. Build AI that understands each user’s journey and guides them toward success, that’s what drives sustainable growth and LTV expansion.
Fostering Innovation at Scale
How do you foster innovation within large AI product teams while maintaining focus on user outcomes and business goals?
I foster innovation by creating a system that directs creative energy toward our most important business goals, rather than treating innovation and focus as opposing forces.
The Level Up Framework:
I use my Level Up Framework of Clarity, Energy, and Trust to achieve this. The framework provides a “sandbox” for teams, offering:
A well-defined outcome
Clear success metrics
Explicit guardrails
This gives them the psychological safety and creative freedom to experiment on the “how,” knowing that any innovative approach is welcome as long as it serves the primary goal.
It’s a powerful way to align large, cross-functional organizations around a single, shared purpose, empowering them to innovate without losing focus.
Meta Example:
At Meta, our goal was to increase private sharing of content, which we knew drove overall usage. The shared outcome across the massive Facebook and Messenger organizations was to increase shares originating from the Facebook feed.
Instead of a single big bet, we conducted a series of hypothesis-driven experiments with different themes, testing how various feed ranking models could influence sharing behavior.
At that scale, even a 0.1% increase in private conversations was a massive win that directly translated to increased engagement. This focused, experimental approach allowed many teams to innovate in parallel, all contributing to the same core business goal.
Innovation wasn’t a separate track; it was the engine for moving our key metric.
PM Tip: Innovation without focus is chaos. Give your teams a clear North Star, explicit guardrails, and freedom to experiment within those boundaries. That’s when you get breakthrough thinking that actually ships.
The Evolving AI PM Role
Having worked in AI products for many years, how do you see the AI PM role evolving? What skills will be most important going forward?
Succeeding as an AI PM requires you to think from first principles because the core nature of product development is changing. What got you here, building predictable features on a roadmap, won’t get you where the discipline is going: cultivating intelligent systems that learn and evolve on their own.
Four Fundamental Shifts:
From Roadmaps to Outcome Portfolios: Traditionally, PMs have managed a linear feature roadmap focused on shipping predictable outputs. The AI PM will manage a portfolio of hypotheses, where every investment is tied to a single business outcome and is cut if it fails to move the key metric.
From CEO to Coach: Traditionally, the PM has been the CEO of the product, directing the “what” and influencing the “how.” An AI PM will act as a coach, building a learning system and empowering teams to make decisions at the edge.
From Deterministic to AI-Native UX: Traditionally, PMs designed static, predictable interfaces for deterministic user flows. To build trust and agency, the AI PM will design for uncertainty with fallbacks, correction loops, and explanations.
From Spec Writer to Model Manager: Traditionally, PMs focused on writing functional specs for engineers to implement. The AI PM will manage a portfolio of local and cloud models, making strategic tradeoffs between cost, latency, and accuracy for different user tasks (see the routing sketch after this list).
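To make the model-manager shift concrete, here is a minimal, hypothetical sketch of the kind of routing policy such a PM might own: each task goes to the cheapest model in the portfolio that meets its quality, latency, and privacy constraints. The model names, scores, and thresholds are illustrative assumptions, not a real deployment.

```python
# Hypothetical model-portfolio router: pick the cheapest model that satisfies
# a task's quality, latency, and privacy constraints (Python 3.10+).
from dataclasses import dataclass


@dataclass
class ModelOption:
    name: str
    runs_on_device: bool
    est_quality: float         # 0..1, from offline evals (illustrative)
    est_latency_ms: float
    est_cost_per_call: float   # USD (illustrative)


@dataclass
class Task:
    description: str
    contains_sensitive_data: bool
    latency_budget_ms: float
    min_quality: float


PORTFOLIO = [
    ModelOption("local-small", True, est_quality=0.72, est_latency_ms=120, est_cost_per_call=0.0),
    ModelOption("cloud-large", False, est_quality=0.91, est_latency_ms=900, est_cost_per_call=0.004),
]


def route(task: Task) -> ModelOption | None:
    """Return the cheapest model that clears every guardrail, or None."""
    candidates = [
        m for m in PORTFOLIO
        if m.est_quality >= task.min_quality
        and m.est_latency_ms <= task.latency_budget_ms
        and (m.runs_on_device or not task.contains_sensitive_data)  # privacy first
    ]
    return min(candidates, key=lambda m: m.est_cost_per_call, default=None)


if __name__ == "__main__":
    print(route(Task("summarize a private note", True, 500, 0.7)))      # -> local-small
    print(route(Task("draft a long market report", False, 2000, 0.9)))  # -> cloud-large
```

The PM’s leverage in a setup like this isn’t the code; it’s owning the guardrail values and deciding which tradeoffs are acceptable for which user tasks.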
The PM role is not going anywhere, but it is being fundamentally rewired. If you do not adapt quickly, you risk being left behind.
PM Tip: The best way to learn AI product management is to ship AI products. Don’t wait for the perfect role or perfect knowledge. Start experimenting with AI features in your current product, even if they’re small. The learning comes from shipping, not from reading.
Advice for AI PM Leaders
What’s your best advice for AI PMs who are stepping into leadership roles and need to build and manage large AI product teams?
My advice for new AI leaders is to recognize that you’re not just managing a product line; you’re cultivating a new way of building. Your success will depend on your ability to lead this organizational shift from first principles.
Lead with a Framework, not Directives: Traditionally, leaders drive execution through direct involvement and approvals. As an AI leader, you must coach your teams by providing a framework (e.g., my Level Up Framework of Clarity, Energy, and Trust) that empowers them to make decisions at the edge.
Focus on the User Problem, not the AI: Traditionally, teams can get caught up in the excitement of new technology. As an AI leader, you must relentlessly focus the team on the user problem, ensuring every initiative ships only if it moves a key business metric and cutting it if it doesn’t.
Build a Learning System, not a Feature Factory: Traditionally, leadership is about delivering a predictable roadmap of features. As an AI leader, your job is to build a learning system that rewards the velocity of learning from experiments and instruments for serendipity, not just predictable success.
Scale the Relationship, not just the Platform: Traditionally, scaling is about handling more users or traffic. As an AI leader, you must focus on scaling the relationship with every user, using AI to understand their journey and guide them toward success, which in turn grows their LTV.
PM Tip: The transition to AI leadership isn’t about learning new skills; it’s about unlearning old habits. Start by auditing your own leadership style: Are you still approving every decision? Are you still promising specific features on specific dates? Those habits will sink you in AI.
What Hiring Managers Look For
You’re building AI product teams at Mozilla. Walk me through what you look for when hiring at different PM levels.
When hiring AI PMs, I look less for how much they know about AI and more for how clearly they think about problems. AI is evolving fast; what endures is clarity of reasoning, user empathy, and judgment under ambiguity.
Everyone focuses on the best models and how many tokens they’ve spent to date; very few focus on how well they solve problems from first principles with AI.
Entry or Mid-Level PMs (transitioning into AI):
The minimum bar is product clarity. Can you take a messy user problem and reason through how AI might make it better, not fancier?
I look for:
Curiosity about the problem
Willingness to learn fast
Humility to work closely with engineers
You don’t need to be an AI/ML expert; you do need to ask sharp questions and connect technology to user value.
Senior PMs:
The difference between good and great is systems thinking.
Good PMs ship AI features. Great PMs shape the feedback loops between users, models, and data. They think about how the product learns over time. They can translate model metrics (like precision and recall) into product outcomes (like retention and satisfaction).
Staff/Principal PMs:
At this level, it’s not about features or systems; it’s about strategy and stewardship.
The best PMs here define how Mozilla should apply AI responsibly at scale. They can:
Balance innovation with our mission
Design frameworks for teams to move fast without breaking trust
Mentor others to think this way
They understand that how we build matters as much as what we ship.
Common Mistakes I See:
Talking about AI like it’s the goal, not the tool - AI is a means to an end, not the end itself
Over-indexing on technical jargon instead of user outcomes - Impressive vocabulary doesn’t substitute for clear thinking
Missing the opportunity to structure their thinking clearly - Even smart PMs struggle to demonstrate their reasoning on demand
I kept seeing the same gap even at Meta and GoDaddy, and later while coaching candidates: smart PMs struggled to structure their thinking on demand.
In interviews and exec reviews, they’d describe features or models but wouldn’t walk through their reasoning: problem framing → options → trade-offs → choice → success metrics → risks/guardrails. That’s the muscle that’s most underused, and the one that’s most visible when it’s missing.
PM Tip: In interviews, your goal isn’t to impress with what you know but to demonstrate how you think. Structure your answers with clear frameworks, show your reasoning process, and connect every technical decision back to user value. That’s what separates candidates who get offers from those who don’t.
Building Your AI PM Career
Why did you start SkillSculpt, and what outcomes has it created for people who commit to the work?
After years of hiring at Meta, GoDaddy, and Mozilla, I kept meeting capable PMs whose story and signal density didn’t match their ability. I started SkillSculpt to scale the 1:1 help I was giving into a repeatable, high-bar system.
I run evidence-based introspectives and rigorous mock interview loops, and I develop outcome-driven strategies for growth. No hacks; just disciplined reps.
Those who lean in see the compounding effect. Clients who commit consistently:
Turn screens into callbacks and offers
Step into higher-scope roles faster
Accelerate their promotions
Several have credited this coaching as the reason they landed their next role.
You leave with durable assets (a story bank, rubric-aware feedback, prompt packs) that you can reuse across interview loops and roles, keeping your trajectory steep.
Sign up for SkillSculpt: https://rebrand.ly/SkillSculpt
Why did you write Product Sense: The Interview Casebook, and how should PMs use it?
I wrote it after watching strong PMs stumble under pressure. They needed full, worked cases, not just frameworks, to translate their thinking into crisp, high-signal answers.
The book distills my hiring bar into a repeatable flow with checkpoints you can run end-to-end. Use it like a plan: pick a case, time-box, record, grade against a rubric, iterate until the story is tight.
It’s built to convert preparation into outcomes: clearer narratives, fewer meandering answers, and stronger callbacks.
Grab it here: https://a.co/d/ig5AKo2
Want more PM insights weekly?
To give back to the PM community, from which I have learned so much, I write a weekly newsletter called Building Blocks on LinkedIn. It helps new and existing leaders build their careers and teams one brick at a time.
Subscribe: https://www.linkedin.com/newsletters/7167660111300157440/
Conclusion
Tapan Kamdar’s answers reflect what makes a truly effective AI product leader: the ability to balance innovation with responsibility, scale with quality, and technical depth with user empathy.
His Level Up Framework—Clarity, Energy, and Trust—is just one example of how structured thinking can transform the way large teams operate in the ambiguous world of AI products.
But perhaps the most important insight from this conversation is this: AI product management isn’t about predicting the future; it’s about building systems that learn faster than anyone else. That fundamental truth remains constant whether you’re at Mozilla, Meta, GoDaddy, or anywhere in between.
About the Contributor
Tapan Kamdar
Senior Director of Product Management - Generative AI, Search & Productivity @ Mozilla
Focused on responsible AI deployment, large-scale team leadership, and privacy-first product development.
Previously at Meta and GoDaddy.
📌 Explore Tapan’s work and content:
SkillSculpt (AI PM coaching): https://rebrand.ly/SkillSculpt
Product Sense: The Interview Casebook: https://a.co/d/ig5AKo2
Building Blocks Newsletter: https://www.linkedin.com/newsletters/7167660111300157440/
Stay in the Loop
If you enjoyed this conversation, you’ll love what’s coming next. Real Talk AI is a no-fluff interview series with AI PMs, DS/ML leaders, and builders sharing how they ship AI products: the decisions, trade-offs, and systems behind the scenes.
Subscribe to aipmguru.substack.com for more interviews, frameworks, and hands-on PM resources.
Share this post with a PM friend who’s GenAI-curious or AI-shipping-stuck.
Have someone in mind for a future edition? Nominate them in the comments below.
What resonated most with you from Tapan’s insights? Are you making the shift from “roadmap PM” to “AI-native PM” in your own role? Share your thoughts in the comments.
© 2025 Shaili Guru • Real Talk AI Series




