Dropbox had invested in a solid scaled CS motion for its core product — the infrastructure, the playbooks, the tooling. The Sign API sat alongside it with a different customer profile: self-serve developers who had integrated signing directly into their own products and didn't meet the threshold for a dedicated 1:1 CSM. When the DRI for that motion moved on to a new opportunity, I raised my hand.

The appeal wasn't just filling a role. It was the chance to build something purpose-built — a motion designed specifically for how developer customers actually behave, not adapted from an enterprise playbook. In a scaled, self-serve model, you don't wait for customers to raise their hand. The whole point is to be ahead of the signal. That kind of proactive, data-driven motion was exactly the kind of work I wanted to own.

I had two years as the dedicated Sign API CSM. I knew the product, the developer persona, where integrations broke, what questions came up at every stage, and which signals in the data preceded a quiet renewal failure. That context was the foundation. So I wrote the program.

Start with the Segmentation Problem

Before you can design a scaled CS motion, you have to be honest about what "scaled" actually means — and who it's for. The mistake I see most often is treating it as a lighter version of high-touch. It isn't. It's a different model entirely, and it starts with accepting a different customer relationship.

For Sign API, the segmentation was clear:

Self-Serve · Digital Success Motion (this program)

Developers who bought the API, integrated it, and are largely running it themselves. They don't expect a named CSM. They expect the product to work, the documentation to be good, and outreach to be relevant when it arrives.

They need a program that finds them at the right moment — not one that interrupts them at the wrong one.

Named Accounts · High-Touch 1:1 Coverage

Dedicated CSM assigned, QBRs, renewal planning, hands-on onboarding. A different motion for a different relationship.

The job here is knowing which self-serve accounts are ready to graduate to this tier — and having a clear pathway to get them there.

Getting this segmentation right is the foundation. If you blur the line — if you try to run a scaled motion like high-touch at lower cost — you end up with something that's too expensive to operate and too impersonal to work. The goal is intentional, not just cheap.

The Three Pillars

Once the segmentation was clear, the architecture of the program followed from a simple question: what does a self-serve developer actually need from a CS motion that they can't get from good documentation alone?

The answer isn't relationship management in the traditional sense. It's three things: timely and relevant communication, a lower barrier to resolving technical friction, and a signal that someone is watching — and will reach out — if their integration starts failing in ways they might not even notice.

01 · Automated Email Nurture

A six-month lifecycle campaign organized by stage: onboarding, activation, adoption, expansion, renewal. Triggered by time sequences and, where tooling allows, behavioral signals — low API call volume, missing webhook configuration, silence past day 30. The goal isn't volume. It's relevance. A sketch of this trigger logic follows the pillars.

02 · Self-Serve Developer Enablement

A packaged onboarding kit: curated documentation, a Postman collection covering common API workflows, a top-10 integration FAQ built from two years of actual support tickets, and access to monthly office hours. The philosophy: make it easy to never need a CSM — and the ones who do will find you.

03 · Health Scoring & Escalation

Usage-based health scores in PlanHat for every self-serve account. Red-flagged accounts get personal outreach. Accounts showing strong growth signals get flagged for 1:1 upgrade. The data tells you who to talk to before they tell you there's a problem.
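
To ground pillar 01, here's a minimal sketch of how those behavioral triggers might be evaluated. The signals are the ones named above; the thresholds, field names, and account-snapshot shape are illustrative assumptions, not the actual Gainsight or PlanHat configuration.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AccountSnapshot:
    """Hypothetical account snapshot; field names are illustrative,
    not the actual PlanHat or Tableau schema."""
    account_id: str
    signup_date: date
    api_calls_last_30d: int
    webhook_configured: bool
    last_activity: Optional[date]

def nurture_triggers(acct: AccountSnapshot, today: date) -> list[str]:
    """Evaluate the behavioral signals named in pillar 01.
    The cutoffs are assumptions; the post names the signals,
    not the exact thresholds."""
    triggers = []
    if acct.api_calls_last_30d < 50:  # assumed "low volume" line
        triggers.append("low_api_volume")
    if not acct.webhook_configured:
        triggers.append("missing_webhook_config")
    silent = (acct.last_activity is None
              or acct.last_activity < today - timedelta(days=30))
    if (today - acct.signup_date).days > 30 and silent:
        triggers.append("silence_past_day_30")
    return triggers

# Example: an account live for 45 days, low volume, no webhook, no activity.
snap = AccountSnapshot("acct_123", date(2024, 1, 1), 12, False, None)
print(nurture_triggers(snap, date(2024, 2, 15)))
# -> ['low_api_volume', 'missing_webhook_config', 'silence_past_day_30']
```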
On the health scoring decision

The most counterintuitive design decision was weighting completion quality at 30% — higher than raw usage volume. Research from Gainsight shows that outcome-delivery metrics are stronger renewal predictors than volume alone. A developer processing 10,000 signature requests a month with a 70% completion rate isn't a healthy account. They're experiencing a product failure they may not have even diagnosed yet. That's the account you want to find proactively, not reactively.
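
As a concrete illustration of that design, here's a minimal weighted-sum sketch of the composite. Only two numbers come from the program itself: completion quality weighted at 30%, and the three day-one Tableau metrics covering 55% combined. Every other weight and metric name below is a placeholder assumption.

```python
# The post fixes two numbers: completion quality at 30%, and the three
# day-one Tableau metrics covering 55% of the composite. The split of
# the remaining weights and the metric names are assumptions.
WEIGHTS = {
    "completion_rate": 0.30,      # stated: completion quality at 30%
    "request_volume": 0.15,       # assumed split of the remaining
    "embedded_completion": 0.10,  #   day-one weight (0.55 total day one)
    "feature_breadth": 0.25,      # assumed later-instrumented signals
    "support_sentiment": 0.20,
}

def health_score(normalized: dict[str, float]) -> float:
    """Weighted sum over metrics normalized to [0, 1]. Metrics not yet
    instrumented are skipped and the score is rescaled over the
    available weight, so a day-one score built from only the three
    Tableau metrics still lands on a 0-100 scale."""
    available = {k: w for k, w in WEIGHTS.items() if k in normalized}
    total = sum(available.values())
    if total == 0:
        raise ValueError("no scored metrics available")
    return 100 * sum(normalized[k] * w for k, w in available.items()) / total

# Day-one example: high volume, but the 70% completion rate from the
# paragraph above drags the score well below healthy. That's the point.
print(round(health_score({
    "completion_rate": 0.70,
    "request_volume": 0.90,
    "embedded_completion": 0.80,
}), 1))  # -> 77.3
```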

How I Structured the Build

One of the things that gets underestimated in program design is the sequencing. It's tempting to try to launch everything at once — the emails, the health scoring, the enablement kit. But tooling dependencies are real, and a six-month window is actually a constraint that forces good prioritization.

Phase 1 · Months 1–2: Foundation
Get the infrastructure right before the motion starts

Audit and segment the self-serve account list. Map the actual customer journey — not the intended one, the real one, built from ticket patterns and onboarding drop-off data. Bring in the cross-functional partners from day one: Sign API PM, Engineering lead, DevRel, and CS Ops for Gainsight/PlanHat configuration. Agree on the charter before a single email goes out.

Phase 2 · Months 3–4: Build & Launch
Deploy the motion; don't optimize it yet

Launch the email campaign. Publish the self-serve onboarding kit. Go live with health scoring using the three metrics already available in Tableau — signature request volume, completion rate, and embedded completion rate — which together cover 55% of the composite score on day one, without waiting for additional data instrumentation. Run the first office hours and webinar.

Phase 3 · Months 5–6: Optimize
Let the data tell you what's wrong

Analyze email performance and refresh underperforming sequences. Run the first 90-day health score calibration — validate that the thresholds match actual account behavior, not just external benchmarks. Surface the accounts showing strong growth signals and route them toward 1:1 coverage. Prepare the six-month results review for leadership.
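
The 90-day calibration is mechanical enough to sketch. Assuming health scores joined to renewal outcomes (all data and band thresholds below are illustrative), the check is whether the red/yellow/green bands actually separate renewal behavior:

```python
import pandas as pd

# Calibration sketch: do the score thresholds separate accounts by
# actual renewal behavior? Real inputs would be PlanHat scores joined
# to renewal outcomes; these rows are illustrative.
df = pd.DataFrame({
    "health_score": [32, 45, 58, 61, 74, 80, 88, 91],
    "renewed":      [0,  0,  1,  0,  1,  1,  1,  1],
})

# Assumed bands: <=50 red, 50-75 yellow, >75 green.
df["band"] = pd.cut(df["health_score"], bins=[0, 50, 75, 100],
                    labels=["red", "yellow", "green"])

# If renewal rates don't step up cleanly across bands, the thresholds
# move. That's the calibration against actual account behavior,
# not external benchmarks.
print(df.groupby("band", observed=True)["renewed"].mean())
```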

What the Success Metrics Actually Measure

I want to be direct about something: the metrics you put in a program proposal are often doing two jobs simultaneously. They're measuring whether the program works, and they're telling the people who approved it what they should care about. Getting these right matters as much as the program design itself.

For this program, I anchored the KPIs to the things that would actually tell us whether the motion was creating value — not just activity:

≥85% · NDR on self-serve accounts — the north-star metric
5–10 · Accounts upgraded to 1:1 coverage in 6 months
+20% · Red→green health score improvement rate
<30d · Time to first signed document from API signup
≥30% · Email open rate on nurture sequences
100+ · Office hours / webinar attendees per session

The metric I'm most attached to is the 1:1 upgrade pipeline — 5 to 10 accounts in six months. It's the one that makes the ROI case for leadership in the clearest possible terms: the scaled motion isn't just preventing churn, it's finding and qualifying expansion revenue. If a digital success program can't demonstrate a pathway to upgrade, it's harder to justify as a permanent motion and not just a stopgap.
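
Of the KPIs above, time to first signed document is the most mechanical to compute. A minimal sketch, assuming an event log with signup and first-completion events (event names and log shape are illustrative):

```python
import pandas as pd

# Time-to-value sketch for the <30d KPI: days from API signup to the
# first completed signature request, per account.
events = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2", "a3"],
    "event": ["signup", "first_signed_doc",
              "signup", "first_signed_doc", "signup"],
    "ts": pd.to_datetime(["2024-01-02", "2024-01-15",
                          "2024-01-05", "2024-02-20", "2024-01-10"]),
})

# Earliest timestamp per account per event type.
first = events.groupby(["account_id", "event"])["ts"].min().unstack()
ttv_days = (first["first_signed_doc"] - first["signup"]).dt.days

print(ttv_days.median())        # median days to first signed document
print((ttv_days <= 30).mean())  # share of accounts inside the 30-day target
```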

The Part That Doesn't Go in the Proposal

Every program proposal is a document designed to get a yes. The nuance that doesn't fit neatly in a deck slide is this: the reason I was able to design this program at all is that I had spent two years close enough to the customer to know what they actually needed — not what a generic scaled CS framework would prescribe.

The FAQ in the self-serve enablement kit wasn't researched. It was assembled from memory — from the same fifteen questions that came up in support tickets and office hours and Slack threads, over and over. The health score thresholds weren't guessed. They were informed by knowing which accounts churned last year and what their data looked like in the 90 days before they did.

A program that works is the product of the people who built it knowing the thing deeply enough to make the right calls when no framework exists for it.

This is the argument I'd make to any CS leader evaluating whether to invest in a scaled motion for a technical product: the tooling is accessible, the frameworks exist, the ROI math isn't hard. The scarce resource is someone who knows the product and the customer well enough to build the right thing instead of a generic one.

The proposal was forty-some pages across the project plan, health scoring rationale, and email campaign playbook. Most of it was built in a few weeks. The two years of context it drew on took considerably longer.


The full supporting materials — 6-month project plan, PlanHat health scoring configuration, and 12-email campaign playbook — are available on request. This post covers the thinking; the documents cover the execution.