What Leadership Really Looks Like

In the product culture series, I want to explore what it really means to be a leader.

In corporate life, “leader” is a word that gets stretched in too many directions. Sometimes it refers to someone with direct reports. Sometimes it points only to the highest rung of the ladder. But the truth is simpler: leadership is not about job level or headcount.

Leadership is about how you show up. It’s about whether you create momentum, clarity, and courage in the people around you. And that is available to anyone, in any role.

Here are five reminders about what leadership really looks like—and what they mean for product managers.

1. Leadership Is Influence, Not Authority

A product manager often has no formal authority over engineers, designers, or stakeholders. But influence comes from clarity, empathy, and the ability to connect the dots across perspectives. PMs lead when teams look to them for context and direction, not because of an org chart.

2. Leadership Is Initiative

Roadmaps shift, priorities compete, and gaps appear. A PM who takes initiative—spotting dependencies early, framing ambiguous problems, or convening the right people—turns potential stalls into progress. This proactive posture signals real leadership.

3. Leadership Is Accountability

Great PMs don’t just deliver features; they own outcomes. That means holding themselves accountable for impact, not just output. When things don’t land as expected, accountability looks like learning quickly, sharing openly, and rallying the team to adjust.

4. Leadership Is Empowerment

The best PMs are not the loudest voices in the room. They create conditions for engineers and designers to bring their best ideas forward. They remove obstacles, ensure decisions have air cover, and amplify contributions so the team feels true ownership of the product.

5. Leadership Is Courage

PMs face constant pressure—pushback from executives, uncertainty in the data, or disagreement between functions. Leadership shows up in the courage to defend the customer’s voice, challenge “the way we’ve always done it,” and make hard calls when the path is unclear.

But How Do You Actually Do That?

That’s the part most people trip on. We nod along at these principles, then look for a playbook. But here’s the uncomfortable truth: there isn’t one. Leadership is not a checklist.

It’s a choice you make in moments that don’t come with instructions. You don’t wait to be given permission. You don’t need to ask whether you’re “senior enough” to lead. You act. You experiment. You learn.

The way to lead is to start leading—today, in the smallest possible way. Try this:

  • In the next meeting, ask the question nobody is asking. Leadership is often the courage to surface what’s unsaid.
  • Connect two teammates who don’t normally collaborate. Leadership is creating new possibilities through relationships.
  • Frame a problem, not just a feature. Leadership is shifting the conversation from “what to build” to “why it matters.”
  • Take responsibility for an outcome, even if the failure wasn’t “yours.” Leadership is owning impact, not tasks.
  • Shine a spotlight on someone else’s idea. Leadership is making others feel seen and valued.

None of these requires a title. All of them require intent.

Closing Thought

Leadership is not reserved for people with certain titles. For product managers, it lives in the daily choice to influence without authority, act without waiting, own outcomes, empower teams, and have the courage to champion what matters. The real question isn’t “Am I a leader?” but “How am I leading today?”

GTM Playbook for Feature Products in the Platform and AI Era

Clubhouse and Twitter Spaces. Zoom and Microsoft Teams. Dropbox and Google Drive. The pattern is not about who shipped first or who had the clever feature. The pattern is that platforms with native distribution absorb features, then win on adoption. In 2025, AI accelerates that cycle. Features can be cloned in months, not years, and updates land on millions of seats overnight.

This is not a reason to stop innovating. It is a call to innovate with clear eyes about distribution, runway, and where a feature product must be extraordinary to survive.

The GTM reality check

Platforms win because they own daily surfaces and procurement paths. Twitter (X) turned social audio into a native feature, rolling Spaces to large segments of iOS and Android and then to everyone, while Clubhouse was still expanding beyond iOS. The social graph and the feed did the heavy lifting.

Microsoft bundled Teams into Office 365, shifting choice from end users to IT and procurement. The European Commission subsequently charged Microsoft with abusive bundling and, by 2025, had signaled acceptance of Microsoft’s unbundling offer. The lesson is GTM, not UI. Defaults beat delight when distribution is strong.

Harvard Business Review has long noted why bundling works: it simplifies choice, strengthens the seller’s relationship, and lowers buyer friction. In a world of platform suites, bundling is not a pricing tactic. It is a distribution weapon.

The AI twist

Platforms now ship AI copilots into the surfaces you already use. Microsoft has rolled out Copilot for Microsoft 365 and continues to expand it. Google has been weaving Gemini directly into Workspace apps and the side panel. The copy-and-bundle cycle is shortening.

Where feature products still win

Feature products matter because they can create step-change experiences before a platform catches up. The openings are narrow, but real. [...]

APIs are the Strategic Foundation for Agentic AI and Beyond

(Expanding on my earlier quick-thought piece on APIs)

APIs Hidden in Plain Sight

APIs are often dismissed as “technical plumbing,” invisible to most business leaders. Yet they quietly power nearly every digital interaction, from mobile payments to streaming recommendations. Some of the most valuable companies in the world—Amazon, Stripe, Twilio—built their fortunes by turning APIs into products. Now, APIs are entering an even more strategic chapter. They are becoming the backbone of agentic AI and orchestration frameworks like the Model Context Protocol (MCP).

Leaders who see APIs as minor enablers are missing the bigger picture. APIs are not small. They are strategic levers for growth, efficiency, and entirely new business models.

Why APIs Are Essential in 2025

In the era of agentic AI, APIs remain foundational. They serve as the medium for data access, action execution, and scalable integration across diverse platforms and workflows. Their importance can be broken down into three dimensions:

Actionable Interfaces

AI agents require APIs to interact with external systems and perform real-world actions such as scheduling meetings, processing transactions, and orchestrating tasks. APIs are what transform intent into execution.

Data Access

APIs provide direct, real-time access to structured data and services, enabling agents to retrieve information, analyze results, and make informed decisions autonomously. This data layer is essential for grounding agentic AI in business reality.

Inter-Agent Collaboration

APIs enable agents to coordinate with one another in complex workflows and distributed networks, standardizing how responsibilities and results are shared. This is how autonomous ecosystems scale.

Healthcare provides a vivid example. Optum has developed a robust portfolio of APIs that simplify secure access to claims, eligibility, and clinical data. These APIs reduce administrative burden while enabling digital health innovation. In the near future, agentic AI layered on top of these APIs could autonomously check benefits, submit claims, and schedule services without human bottlenecks. (Disclaimer: I work at Optum)
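The pattern behind all three dimensions can be sketched as a tool registry: the agent emits a structured intent, and a thin dispatch layer maps it onto the underlying API. Below is a minimal Python sketch with the benefits-check and claim-submission actions stubbed out; the tool names, fields, and return values are hypothetical illustrations, not taken from any real API portfolio.

```python
# Sketch: how an agent turns structured intent into an API action.
# All tool names and payload fields are hypothetical.

from typing import Callable, Dict

# Each "tool" wraps one API capability the agent is allowed to invoke.
TOOLS: Dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a function as an agent-callable tool."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("check_eligibility")
def check_eligibility(member_id: str) -> dict:
    # In production this would call an eligibility API over HTTPS;
    # it is stubbed here so the sketch stays self-contained.
    return {"member_id": member_id, "eligible": True, "plan": "PPO"}

@tool("submit_claim")
def submit_claim(member_id: str, amount_usd: float) -> dict:
    return {"member_id": member_id, "status": "submitted", "amount": amount_usd}

def execute(intent: dict) -> dict:
    """Dispatch a structured intent (as a model might emit it) to the right API."""
    fn = TOOLS[intent["tool"]]
    return fn(**intent["args"])

result = execute({"tool": "check_eligibility", "args": {"member_id": "M123"}})
print(result)
```

The design point is that the APIs, not the model, define what the agent can actually do; the registry is simply the contract between intent and execution.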

APIs as Growth Engines [...]

OpenAI’s GPT-realtime Brings a Step Forward in Voice AI

For years, voice AI has felt like a half-step behind its text-based counterpart. The standard architecture relied on a clunky chain: speech-to-text, a language model for reasoning, then text-to-speech. The result was often laggy, robotic, and disconnected from the flow of natural conversation.

OpenAI’s new GPT-realtime changes that dynamic. By unifying speech recognition, reasoning, and speech synthesis into a single model, it eliminates the pauses and disconnects that made past systems frustrating. The model not only hears and responds in real time, but also preserves tone and conversational nuance, something no pipeline system could fully achieve.

Image: GPT-realtime breakthrough (from the Realtime Conversations documentation)

The Technical Leap

Benchmarks highlight the shift. On Big Bench Audio, a reasoning test, GPT-realtime reached 82.8% accuracy compared with 65.6% for the previous pipeline. On MultiChallenge Audio, which measures instruction following, it scored 30.5% against 20.6%. Even in complex function calling, performance jumped to 66.5% versus 49.7% (Dev.to analysis).

This matters for product teams. Faster response time and higher accuracy mean users can treat voice AI as an actual conversational partner, not a clunky intermediary. Add to this a 20% drop in cost, plus new, natural-sounding voices like Cedar and Marin, and GPT-realtime becomes not just a technical upgrade but a usability breakthrough.

Early Adoption Is Already Underway

The shift is not theoretical. Zillow has begun experimenting with GPT-realtime to make property searches conversational and intuitive. Instead of typing filters, a user can simply ask, “Show me three-bedroom homes near downtown with a fenced yard,” and get results instantly, complete with natural follow-up.

Other use cases are emerging quickly:

  • Customer support agents that handle calls fluidly, thanks to built-in SIP integration.
  • Tutoring and educational tools that listen, see (through image input), and respond in real time.
  • Healthcare support assistants that guide patients through intake or follow-up conversations with more empathy.

These aren’t edge cases—they are everyday workflows where latency and naturalness make or break the experience.

What It Means for Product Leaders

For product managers and technologists, GPT-realtime offers a rare combination: a step-change in technical capability and clear enterprise relevance. The faster integration path—no more stitching together speech-to-text and text-to-speech pipelines—means teams can experiment more quickly. And because the model can combine voice with vision, entirely new multimodal interfaces are now possible.

The strategic implication is clear. Voice AI is moving from a novelty to a platform-level capability. Companies that start experimenting now will be positioned to set user expectations, not react to them.

Looking Ahead

GPT-realtime marks the moment voice AI feels natural enough to use daily. It’s faster, more accurate, and already in the hands of innovators. For product leaders, the takeaway is simple: treat this not as a future opportunity but as a present one.

The Token Squeeze is Real

AI should feel like it's getting cheaper. After all, compute costs fall, models get optimized, and every year brings new claims of a 10x drop in inference prices. But as Ethan Ding argues in Tokens Are Getting More Expensive, the opposite is true: the economics of AI subscriptions are in a squeeze.

Ding’s Core Argument

The paradox is simple. While yesterday’s models do get cheaper, users don’t want them. Demand instantly shifts to the latest frontier model, which always carries a premium. GPT-3.5 may cost a fraction of what it once did, but the market moved on to GPT-4, Claude 3, and beyond.

At the same time, token consumption is exploding. A task that once required 1,000 tokens now consumes 100,000, thanks to advances in reasoning, retrieval, and long-context processing. Unlimited-use subscription models can’t withstand this surge. As Ding shows with examples like Claude Code, even the most creative pricing experiments eventually collapse under runaway token demand.
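The scale of that squeeze is easy to underestimate, because the token growth and the frontier premium multiply. A back-of-envelope sketch, using illustrative prices that are assumptions, not any provider’s actual rates:

```python
# Back-of-envelope math for the token squeeze.
# Prices are illustrative assumptions, not real provider rates.

PRICE_PER_MTOK_OLD = 0.50        # $ per 1M tokens, older model
PRICE_PER_MTOK_FRONTIER = 10.00  # $ per 1M tokens, frontier model

def task_cost(tokens: int, price_per_mtok: float) -> float:
    """Dollar cost of a task at a given per-million-token price."""
    return tokens / 1_000_000 * price_per_mtok

# Yesterday: a simple Q&A task on a cheap model.
old = task_cost(1_000, PRICE_PER_MTOK_OLD)
# Today: the same job done agentically on a frontier model.
new = task_cost(100_000, PRICE_PER_MTOK_FRONTIER)

print(f"cost multiple: {new / old:.0f}x")  # 2000x
```

Even with these toy numbers, a 100x jump in tokens combined with a 20x frontier premium yields a 2,000x cost increase per task, which is why flat-rate plans buckle.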

He suggests three possible ways out:

  1. Usage-based pricing from the start.
  2. Enterprise sales where switching costs create defensible margins.
  3. Vertical integration, where inference is the loss leader for cloud and developer services.

Extending the Argument

Ding is right: flat-rate consumer subscriptions are unsustainable. But the future might not be a strict choice between usage-based and enterprise-only strategies. There are other avenues worth exploring:

  • Hybrid models: Offer flat-rate tiers with defined token quotas, then metered billing for overages. This mimics mobile data plans and could ease users into variable pricing without shocking them with unpredictable bills.
  • Freemium for light tasks: Everyday consumer use—chatting, drafting short notes—could remain “free” or bundled, while heavier research or agent-based workloads become paid tiers.
  • Bundling with value-added services: Just as telecom bundles data with phones and streaming, AI providers could wrap agents with hosting, monitoring, or compliance features. This shifts the conversation from “pay for tokens” to “pay for outcomes.”
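The hybrid option above is simple to express in code: a flat fee covers a defined token quota, and usage beyond it is metered, exactly like a mobile data plan. A sketch with made-up numbers:

```python
# Sketch of quota-plus-overage billing for the hybrid model.
# All prices and quotas are invented for illustration.

def monthly_bill(tokens_used: int,
                 flat_fee: float = 20.0,
                 quota: int = 5_000_000,
                 overage_per_mtok: float = 8.0) -> float:
    """Flat fee covers the quota; tokens beyond it are billed per million."""
    overage_tokens = max(0, tokens_used - quota)
    return flat_fee + overage_tokens / 1_000_000 * overage_per_mtok

print(monthly_bill(3_000_000))  # under quota: flat fee only, 20.0
print(monthly_bill(9_000_000))  # 4M tokens over quota: 20 + 32 = 52.0
```

The appeal of this shape is predictability for light users and cost recovery from heavy ones, without the cliff-edge of pure usage-based pricing.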

Another extension of Ding’s point concerns agentic AI behavior. As models increasingly operate in loops of planning, critiquing, and iterating, they will consume tokens at orders of magnitude beyond what simple Q&A requires. Demand for compute is effectively unbounded, and any pricing model that assumes stable or predictable consumption is ignoring this reality.

Takeaway for Builders

The dream of unlimited AI for $20 a month is exactly that: a dream. Token economics are forcing product teams to confront a hard truth: the marginal cost of AI isn’t vanishing; it’s multiplying as capabilities expand.

AI builders should look beyond consumer SaaS metaphors and study the pricing strategies of cloud infrastructure, telecom, and enterprise software. Ding’s framing of a “token short squeeze” is spot-on. The next challenge is designing models that align incentives across users, providers, and investors—before the squeeze becomes a choke.

Credit to Ethan Ding for sparking this discussion with his original article.

When AI Bots Rule the Web

Most of the traffic hitting websites today is no longer human. Cloudflare’s AI Insights dashboard makes this clear: the majority of crawling comes from AI bots, and the balance of power among those bots is shifting fast. For product managers, that reality changes how we think about traffic, attribution, and strategy.

Image: Cloudflare AI Insights dashboard

Training bots dominate

Close to 80% of AI crawler traffic serves training purposes. These bots pull content to feed large language models, but they don’t bring visitors back. Unlike search crawlers, which at least create discoverability, training bots extract without referral. For content owners, this means the bulk of AI traffic is non-monetizable.

Takeaway: Track which bots are active on your sites. Segment training crawlers from user-action or search crawlers, since only the latter categories have the potential to send real traffic.

The crawl-to-refer gap

Cloudflare data highlights a striking mismatch: some AI platforms crawl tens of thousands of pages for every single user referral they generate. This “crawl-to-refer ratio” shows that most current AI traffic has limited downstream value for publishers.

Takeaway: Treat crawler monitoring as a business metric, not just a technical safeguard. The ability to measure crawl-to-refer ratios can inform decisions about when to allow, block, or charge for access.
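Cloudflare surfaces these numbers directly, but the same metric is easy to derive from your own server logs once bot traffic is segmented by user-agent. A sketch with real crawler user-agent names and invented counts:

```python
# Sketch: crawl-to-refer ratio per AI platform from log counts.
# Bot names are real crawler user-agents; the counts are invented.

crawls = {"GPTBot": 250_000, "ClaudeBot": 120_000, "PerplexityBot": 40_000}
referrals = {"GPTBot": 90, "ClaudeBot": 25, "PerplexityBot": 60}

def crawl_to_refer(bot: str) -> float:
    """Pages crawled per referral sent back; higher means less value returned."""
    refs = referrals.get(bot, 0)
    return float("inf") if refs == 0 else crawls[bot] / refs

for bot in crawls:
    print(f"{bot}: {crawl_to_refer(bot):,.0f} pages crawled per referral")
```

Tracked over time, this ratio tells you which platforms are extracting value and which are sending any back, which is exactly the input a licensing or blocking decision needs.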

A shifting bot ecosystem

Not all crawlers are equal. GPTBot, ClaudeBot, and Meta-ExternalAgent are now among the top training bots. On the user side, ChatGPT holds a clear lead in popularity, but competitors like Claude and Perplexity are gaining ground quickly. This dynamic suggests a multi-platform landscape where influence is distributed but volatile.

Takeaway: Don’t tie referral or integration strategies to a single AI platform. Instead, design flexible approaches—APIs, content partnerships, and bot-aware licensing—that can adapt as the ecosystem reshuffles.

Product strategy in an AI-first web

The web’s center of gravity is tilting away from direct human browsing toward machine aggregation. For product managers, the challenge is turning that shift into an opportunity. That means:

  • Measuring crawler behavior with the same rigor as user analytics
  • Negotiating access and licensing instead of passively absorbing bot traffic
  • Experimenting with attribution and monetization models designed for AI-mediated consumption

The key insight is simple: AI bots are not just background noise. They are now first-class participants in the web economy. Recognizing their patterns—and acting on them—is becoming a core part of product strategy.

Nano Banana and the Future of AI Image Editing

When Google teased three bananas in a post from CEO Sundar Pichai, the internet buzzed with curiosity. The reveal: Nano Banana (aka Gemini 2.5 Flash Image), a new AI image-editing model. It is more than a quirky codename; it signals a shift in how we think about digital creativity. Unlike earlier AI tools that struggled to maintain consistency or required heavy post-editing, Nano Banana delivers precise, natural-language edits while keeping subjects unmistakably themselves.

This is not just another AI novelty. It is a disruptive technology with wide-ranging implications.

A New Creative Baseline

For decades, advanced image editing has been the domain of professionals using complex tools like Photoshop. Nano Banana lowers that barrier dramatically. A prompt like “remove the stain on this shirt” or “add a pet to this photo” yields high-fidelity edits instantly. For advertising and marketing teams, that means faster iterations, lower production costs, and the ability to personalize assets at scale. In entertainment and media, it blurs the boundary between professional-grade editing and consumer creativity.

The implication is clear: what once required specialized skills and hours of work is now accessible to anyone who can type.

Unsurprisingly, analysts are issuing more sell calls on Adobe. A recent note from Rothschild Redburn:

"We spent the last few days testing the recently released preview of Google’s Nano Banana image editing model. We believe it will disrupt Photoshop, one of Adobe’s most widely used applications, adding to ongoing pressure on the company’s seat growth and pricing power. Nano Banana and Runway’s Aleph demonstrate the leap forward in the performance of generative AI tools in recent months, and the pace of improvement is a key concern: image and video generation models with fully editable outputs look increasingly likely to emerge within months, which we argue will call into question the durability of Adobe’s moat. We reiterate our Sell rating."


Analyst: Omar Sheikh

I have discussed this topic of AI disruption to SaaS companies before.

Shifting User Expectations and Platform Power Plays

When users experience AI that edits with precision and consistency, their expectations change. They will demand frictionless, prompt-driven creativity across platforms. Legacy tools that still rely on manual adjustments risk falling behind if they cannot integrate AI with comparable ease. This sets a new baseline for how people interact with digital media—intuitive, conversational, and fast.

Nano Banana is not just a consumer feature. By embedding it in Gemini, Google positions itself as more than a chatbot—it becomes a visual creativity platform. With APIs available through Gemini, AI Studio, and Vertex AI, developers can integrate Nano Banana directly into products, workflows, and apps. This gives Google a strong ecosystem play and puts pressure on Adobe, Canva, and OpenAI to match both technical precision and platform reach.

Long-Term Implications

The trajectory is clear: image editing is becoming as simple as typing. That democratization will reshape creative roles, shifting emphasis from technical skills to conceptual direction. We are likely to see new roles emerge—AI content editors, authenticity auditors, and brand integrity managers—focused on curating and verifying output rather than manually creating it.

For product teams, the immediate opportunity is to experiment: use Nano Banana for prototyping, content workflows, and rapid iteration. But the longer-term question is one of responsibility. How do we balance empowerment with safeguards, ensuring creativity thrives without eroding trust?

Closing Thought

Nano Banana may look like a playful launch, but its implications are serious. It represents the next step in the evolution of AI—tools that are not just generative, but precise, accessible, and embedded into everyday platforms. For technologists and product leaders, this is a signal moment: the future of creativity is being rewritten, one prompt at a time.

What Makes a Real Data Moat

The age of generative AI has created a strange paradox. On one hand, anyone can plug into models like GPT and build features quickly. On the other hand, defensibility has never been more elusive. If everyone has access to the same foundation models, what stops a competitor from copying your product?

The strongest answer is the data moat. Done right, it’s the most durable form of AI advantage a company can build. Done wrong, it’s just another buzzword.

What a Data Moat Is (and Isn’t)

A real data moat isn’t about collecting massive amounts of information. It’s about generating unique, structured, high-quality data every time a customer uses your product. That data becomes equity—it makes your product smarter in ways competitors can’t replicate.

Consider Tesla. Every mile driven by its vehicles contributes to a massive dataset of real-world driving scenarios. This data, from lane changes to rare edge cases, flows back into training its autonomous driving system. No competitor can shortcut this process without deploying millions of cars and collecting the same breadth of data. The moat is not just the data volume, but the compounding quality that comes from continuous, real-world feedback.

Or look at Stripe. Processing billions of transactions across millions of businesses gives Stripe unique visibility into global payment patterns. That structured data feeds directly into fraud detection models. Every suspicious charge, every pattern of merchant abuse, strengthens Stripe’s defenses. A competitor without that transaction history can’t replicate the same level of risk protection, no matter how advanced their AI models are.

By contrast, simply hoarding logs, clicks, or unstructured text without a plan doesn’t create defensibility. Volume without usability is noise, not a moat.

The Core Criteria of a Defensible Data Moat [...]