Duncan Lennox has a front-row seat to one of the biggest questions in enterprise software right now: how do you turn AI from a flashy demo into something that actually earns customer trust?

In this interview, he shares why judgment is the most valuable skill in the AI era, what it really takes to build AI that customers believe in, and why he thinks the next five years will make people feel more ambitious about their work, not less.

Mindstream: When you think about AI’s current trajectory, what excites you the most? Does anything concern you?

Duncan Lennox: I’m excited that AI is finally moving from being a clever demo to being a real teammate that helps small teams punch far above their weight. When you combine rich context with strong product craft and AI, you can materially improve how quickly customers see value.

My concern is that the hype cycle can push companies to ship AI for AI’s sake. Features can seem impressive, but often they don’t actually earn customer trust because they’re unreliable, opaque, or poorly integrated into real workflows.

If we don’t stay relentlessly focused on quality and solving real customer problems, we risk eroding confidence at the very moment this technology could be most transformative.

If you had to explain your AI philosophy in a single sentence, what would it be? How has it evolved over time?

In one sentence: Make AI work for businesses by grounding it in deep customer context, clear problems, and a relentless focus on quality and outcomes.

Earlier in my career, I was more focused on the underlying technology itself: models, infrastructure, and the like. Now, I start with customer goals and work backwards: what outcome are we trying to drive, what context does the AI need, and how do we know we’re really improving the experience rather than just adding another feature?

How do you personally decide what to automate and what not to automate?

I look at two dimensions: the level of judgment required and the cost of getting it wrong. Repetitive, well‑bounded tasks with clear success criteria - things like summarizing calls, drafting follow‑up emails, or routing tickets - are great candidates for automation, as long as we instrument quality and give humans an easy way to review and correct.

On the other hand, I’m very careful about automating decisions that involve nuanced trade‑offs, ethics, or long‑term customer trust. In those areas, AI should augment human judgment by providing context, options, and analysis.

Which human skill do you think is becoming more valuable in the AI era?

Judgment, and especially the ability to precisely frame problems, is becoming much more valuable.

In a world where AI can generate hundreds of options in seconds, the scarce skill is knowing which problem to attack, which signals to trust, and which trade‑offs are acceptable for your customers and your business.

Alongside judgment, I’d highlight storytelling: the ability to connect data, customer insight, and strategy into a narrative that helps teams make sense of change and stay aligned.

AI can surface patterns, but humans still have to decide what they mean and what to do next.

Are you worried we’re in an AI bubble at all?

I think we’re in an AI hype cycle. Valuations and expectations will inevitably overshoot reality in the short term, because that’s been the case with every major platform shift. But AI is the most transformative technology of my lifetime, without a doubt.

The companies that will endure beyond the hype are the ones that treat AI as a way to deliver clearer customer value, as opposed to a branding or valuation exercise. 

How important is trust in AI for customer adoption and success?

Trust is everything. If customers don’t believe your AI will behave predictably, respect their data, and improve their outcomes, adoption stalls pretty fast.

That’s why we’ve invested so heavily in quality: tightening the feedback loops that allow us to understand and address customer concerns as quickly as possible.

In practice, earning trust means being transparent about what the system is doing, giving customers control, and being willing to slow down or roll back when quality isn’t where it needs to be. You have to earn trust by making quality and value ongoing obsessions.

Which recent AI breakthrough made you rethink something fundamental?

What’s changed my thinking most is the compounding effect of pairing strong models with rich, structured and unstructured customer context.

Once you see what happens when AI has access to really broad and deep data about an organization’s goals, its workflows and its landscape, you start to think differently about products. That’s really the essence of our Agentic Customer Platform.

With the right context layer, AI can help teams achieve materially better outcomes.

Finish this sentence: In five years, AI will make people feel ___ about their work. Why?

In five years, AI will make people feel more ambitious about their work.

As routine tasks become easier and faster, the bar shifts from “Can I get this done?” to “What’s the most meaningful problem I can tackle with the time and tools I have?”

The organizations that lean into this will use AI to create space for deeper customer understanding, more thoughtful product craft, and bolder goals. Capacity will become far less of a limitation.

What advice can you give to fellow leaders undertaking AI transformation in their own businesses?

First, start with your customers’ goals. Forget about the technology for a bit. Identify the specific outcomes you want to improve and then work backwards to see where AI can help, what data it needs, and how you’ll measure success.

Second, invest early in data quality and context. The best models in the world won’t help you if your underlying data is fragmented or stale.

Third, remember that AI transformation is as much about operating model and culture change as it is about technology. Update your rhythms, incentives, and roadmaps in ways that ensure teams are accountable for quality and outcomes.

And remember to communicate clearly and often. People need to understand why things are changing, how your transformation connects to your vision and strategy, and what new mindsets and behaviors you need from them in order to be successful.

Finally, as a big Star Trek fan, how would you integrate AI to improve operations on the Starship Enterprise?

I’d give every crew member an assistant that understands their role, mission context, real-time developments, and the ship’s full telemetry.

Then I’d focus on coordination: an agentic layer that routes information and tasks across departments so the ship reacts as a single learning system. And I’d get Captain Kirk an agentic executive coach to keep him out of trouble. 

Duncan Lennox is the Chief Product & Technology Officer at HubSpot, where he leads the company’s global product portfolio and oversees Engineering, Product, UX, and Next Bets.
