My Approach
How I Think, How I Work, How I Lead.
I Make the Case in Their Language
When stakeholders are in conflict, picking a side rarely helps. My job is to find the shared constraint, usually a business outcome everyone’s already accountable to, and build the argument around that instead of around the design itself.
When I’m facing engineering resistance to a direction I believe in, my first move isn’t to defend the design. It’s to ask: what evidence would change the frame from “should we build this” to “how do we build this?” Sometimes that’s a user study. Sometimes it’s a task analysis. Sometimes it’s a projection. But the goal is always the same: shift the conversation from design quality to user outcomes, and let the results do the arguing.
What I don’t do is fight for the full vision in one shot. I map what needs to be true to ship incrementally, negotiate the sequencing, and protect the path back to what I actually want to build.
The principle: if you’re still arguing about the design, you haven’t found the real lever yet.
Selling UX Without Talking About UX
Every leadership conversation I’ve been in that started with “user experience” ended quickly. The ones that landed started with money: how much we could make, how much we were leaving on the table, or what was about to walk out the door.
At GroupM, an EVP was ready to sunset a product because adoption was low. His read: the product isn’t working. My read: the diagnosis is wrong. I asked for six weeks, ran the research, and came back with a different framing entirely: what looked like a product failure was actually a retention crisis in disguise. That reframe became the mandate for an entirely new platform.
At Siemens, I wanted DesignOps investment. Leadership wanted features. So I audited teams across the organization, put hard dollar figures on redundant work, and framed the investment as operational cost reduction with a specific velocity promise. The conversation stopped being about design almost immediately.
The process: find the financial wound, name it precisely, and position design as the treatment, not the philosophy.
Finding the Real Problem
My rule when a problem is handed to me: assume the framing is wrong until I’ve pressure-tested it. The first articulation of a problem is almost always a symptom.
At Google, I was brought in to solve what looked like a guidance gap: users completing setup and not knowing what to do next. But when I ran post-launch interviews and then watched users actually working through real tasks, a different picture emerged. They weren’t confused about guidance. The underlying mental model the product was built on didn’t match how they actually reasoned about their work. The platform was organized around how it was built, not how its engineers used it.
Once I could name that clearly, the design mandate wrote itself. The guidance problem disappeared because it was never the real problem.
The process: interview after the launch, not just before. Watch what people do, not just what they say. Keep asking why until the answer stops changing.
When Not Building Is the Right Call
There are features I’ve been right about that I’ve still cut. Being right isn’t sufficient; timing, sequencing, and resource reality matter just as much.
At Google I deferred a core interaction that I had strong user data behind because the engineering timeline didn’t fit the business window. What I didn’t do is just cut it and move on. Before agreeing to defer, I designed the underlying architecture to preserve the path back, so when the timeline opened up, the work could pick up without starting from scratch. I got alignment on the component standards that would support it. I made it someone else’s problem to build, but my job to architect the reentry.
The question I ask before cutting anything: does deferring this make it harder or easier to build the right version later? If the compromised version becomes the assumed target, you’ve cut more than a feature; you’ve cut the ceiling. Name the gap explicitly. Establish the reentry point. Then actually follow through on getting it built.
Building Systems That Teams Actually Use
The failure mode of most design systems is that they’re built for consistency and adopted out of obligation. Teams comply when someone’s watching and route around them when they’re not. That’s not a system; that’s a policy.
At Siemens, the architecture decision that made the system work was a deliberate split: a strong global foundation with explicit extension pathways for domain-specific needs. Teams could contribute back. A monolith feels like a straitjacket; a federated model with contribution pathways means teams have ownership in the system, not just obligations to it.
The post-acquisition piece was harder. Multiple design organizations, each protecting their standards, each hearing “standardization” as erasure. I reframed it as market interoperability. I showed each VP how a unified UX increased their product’s attach rate to the broader ecosystem. When one team pushed back for legitimate domain-specific reasons, I didn’t fight it. I found the hybrid. That trade was worth making.
The structural move that elevated quality more than anything else: integrating usability testing into sprint cycles as a hard definition of done. UX debt treated as rigorously as technical debt. Not a suggestion, a gate.
Shipping Incomplete Without Losing the Vision
I’ve shipped things I knew weren’t the best version. The calculus isn’t complicated, but it has to be honest.
The questions I ask: does what’s shipping solve a real problem, even imperfectly? Does it create debt that makes the right experience harder to reach later? Have I protected the path back? If the first is yes, the second is no, and the third is yes, you ship.
What I never do is ship something suboptimal without naming it explicitly: to the team, to leadership, in the documentation. The worst outcome isn’t shipping incomplete work. It’s when the team thinks the compromised version is the target. That’s how good products get permanently capped.
The reentry point needs to be named before you ship, not scheduled as a follow-up after. There’s a difference between a fast-follow that’s planned and a fast-follow that’s optimistic. I try hard to only commit to the first kind.
AI as a Distance Reduction
Most AI UX fails because teams treat it as a feature to ship. My frame is different: AI should reduce the distance between what a user is trying to do and what the system does. If it doesn’t do that, it’s a parlor trick.
In practice, this means designing AI interactions around the user’s mental model (their business problem) rather than the system’s underlying logic. The user shouldn’t need to understand what the AI is doing under the hood. But they do need to trust it.
The design pattern I rely on: progressive transparency. Let users inspect AI-generated outputs and drop into granular control at any point. Trust in AI-assisted tools isn’t built by hiding the AI, it’s built by showing users exactly what it’s doing and giving them the override. You earn the automation by proving you can explain it.
The broader point: you can’t build good AI products when your own team is scared of AI. I’ve invested heavily in building internal tooling that changes how designers and PMs work, not just what they ship. That’s where the cultural shift that makes better AI products possible actually starts.
Working With Engineering as a Partner
Technical constraints are almost never purely technical. When an engineer says something can’t be done, they usually mean it can’t be done the way I specified it, in the timeline I specified, with current resources. That’s a different problem, and it has different solutions.
My first move is to make the case harder before I compromise. Put the user outcome data on the table and ask engineering to explain why the gap isn’t real. Not combatively, but to make sure we’re solving the actual problem and not a convenient version of it.
If the constraint holds, I change the question. Instead of “how do we build this,” I ask: “what’s the minimum architecture that makes the right version achievable as a fast-follow without redesigning the foundation?” That’s a collaborative conversation, not a standoff. I’m not asking for a shorter timeline, I’m asking for help sequencing the work.
The relationship has to exist before the conflict. If the first time I show up with data is when I need something, I’ve already lost.
Measurement as a Design Input
If I can’t answer “how will I know this worked” before we ship, we’re not ready to ship. Measurement isn’t a retrospective; it’s a design constraint that shapes the work from the start.
I separate leading from lagging indicators deliberately. Business outcomes (retention, revenue, market expansion) are the point. But they’re confirmation I’ll wait months to receive. So I identify the proxies that tell the causal story in real time: task completion rates, time-on-task, context switches, abandonment. Those run from day one.
The layering: usability performance in the first weeks, adoption and engagement over 60-90 days, business outcomes quarterly. Each layer confirms or challenges the story the previous one told. Strong usability but flat adoption usually means a distribution problem. Strong adoption but lagging retention usually means the product is solving the wrong problem. The layers keep you honest.
The other discipline: I tie specific metrics to specific decisions, not just to initiatives. Which design choice was responsible for which outcome? That’s the only way to get better at placing bets, and the only way to make a credible case for the next investment.
Making Great Work Possible
The shift from senior IC to principal is specific: you stop being the one who produces great work and become the one who makes great work possible. That’s harder than it sounds, and it’s where most designers get stuck.
Structural moves create the floor. At GroupM I put approval gates in place that blocked launches that didn’t meet the standard. Not suggestions: blocked. The moment you let something through that doesn’t meet the bar, you’ve told everyone the bar isn’t real. You have to be willing to sit in the tension when something gets stopped.
Capability moves raise the ceiling. At Siemens, integrating usability testing as a hard definition of done elevated quality more than any workshop I ran. At Google, building and scaling AI tooling across the design organization gave individual designers quantified ammunition for pushback: the ability to model risk and walk into a conversation with data, not instinct.
Both levers are necessary. Structural without capability just blocks bad work without enabling better work. The real test: does your team operate at the same standard when you’re not in the room? If not, you haven’t raised the bar; you’ve made yourself a dependency.