From Noise to Signal: Understanding Why Users Really Churn

A longitudinal mixed-methods research program that transformed a broken data instrument into actionable growth strategy.

30% → 7%: reduction in uncodeable "Other" responses after the survey redesign
20%: increase in users reaching the first-value milestone
~10%: share of churning users identified as immediately recoverable
C-suite: findings reached the Chief Growth Officer and the Senior Director of PLG

My Role: Lead Researcher; sole owner of this research program
Methods: survey redesign, qualitative coding, longitudinal analysis, cohort segmentation, session analysis
Stakeholders: Chief Growth Officer, Senior Director of PLG, Growth team
Duration: ongoing program, ~6 months across phases

A churn problem with no clear answer

What began as an analysis of a neglected survey evolved into an ongoing mixed-methods research program — diagnosing a broken data instrument, redesigning it, and using the resulting signal to uncover the real drivers of churn and directly inform PLG strategy.

SonarQube Cloud had a churn signal that everyone could see but nobody could fully explain. A significant proportion of organizations were downgrading to free plans — with a noticeable concentration around the 14- and 30-day marks — but the company lacked a clear picture of why. An in-product downgrade survey existed, but it had never been systematically analyzed, and its design made meaningful analysis nearly impossible.

When I pulled the data, the scale of the problem became clear: nearly 30% of respondents had selected "Other" as their downgrade reason. In a survey meant to capture churn drivers, that level of noise isn't a data point — it's a symptom of a broken instrument. The categories weren't reflecting reality, which meant any decisions being made about churn were built on an incomplete picture. I saw an opportunity to change that.

Three phases, one continuous program

Rather than treating this as a one-time study, I designed it as a longitudinal research program with three connected phases — each building on the last.

Phase 1 — Diagnose the existing data

I exported and analyzed 380 existing survey responses, including 173 qualitative write-ins. I used AI-assisted coding to categorize responses at scale, then reviewed and validated every theme myself. Of the 173 write-in responses, nearly 29% mentioned pricing as a factor — yet pricing wasn't even an option in the original survey. That single finding reframed the entire instrument.
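
As an illustration of the mechanics behind that tally, the counting step of the coding pass can be sketched in a few lines of pandas. The file name, column names, and output below are hypothetical placeholders rather than the real export schema; the AI-assisted categorization and the manual validation happen upstream of this step.

```python
import pandas as pd

# Hypothetical export of the validated coding pass: one row per write-in,
# with a semicolon-separated list of themes assigned during review.
codes = pd.read_csv("writein_codes.csv")  # columns: response_id, themes

# Explode multi-theme responses so each (response, theme) pair is one row.
exploded = (
    codes.assign(theme=codes["themes"].str.split(";"))
         .explode("theme")
         .assign(theme=lambda d: d["theme"].str.strip())
)

# Share of write-ins mentioning each theme, out of all coded responses.
n_responses = codes["response_id"].nunique()
theme_share = (
    exploded.groupby("theme")["response_id"].nunique()
            .div(n_responses)
            .sort_values(ascending=False)
)
print(theme_share.head(10))  # in the real data, pricing came out near 0.29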

Phase 2 — Redesign and relaunch

Using Phase 1 findings, I redesigned the survey with response categories derived directly from the qualitative analysis — including "issues with initial setup," "not ready to purchase yet," and "did not fulfill our needs" — replacing vague catch-all options with language that matched how users actually described leaving. I also added a follow-up question asking which tool users planned to switch to, establishing structured competitor tracking that hadn't previously existed.

I relaunched the survey and began analyzing responses monthly. The results validated the redesign immediately: "Other" responses dropped from ~30% to 7.1%.
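
The monthly check is simple to automate. Below is a minimal sketch of the health metric, assuming a flat export of the relaunched survey; the file and column names are illustrative, not the actual schema.

```python
import pandas as pd

# Hypothetical export of the relaunched survey: one row per downgrade
# response, with a submission timestamp and the selected reason.
responses = pd.read_csv("downgrade_survey.csv", parse_dates=["submitted_at"])

# Monthly share of responses still landing in the "Other" bucket, the
# health metric the redesign was meant to drive down.
other_rate = (
    responses.assign(
        month=responses["submitted_at"].dt.to_period("M"),
        is_other=responses["reason"].eq("Other"),
    )
    .groupby("month")["is_other"]
    .mean()
)
print(other_rate.round(3))  # redesign target: well below the original ~30%
```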

Phase 3 — Cohort segmentation and session analysis

With a cleaner data stream, I segmented churn reasons by customer tenure, revealing that drivers varied meaningfully depending on how long someone had been on the platform before downgrading. I supplemented this with session analysis in FullStory, watching recordings of users during signup and setup to identify specific friction points in the activation experience.
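
The segmentation itself is a straightforward cross-tabulation once survey responses are joined to account tenure. The sketch below assumes a prepared table with days-to-downgrade and the structured reason; the names and bucket boundaries are illustrative, anchored on the 14- and 30-day concentration points observed in the data.

```python
import pandas as pd

# Hypothetical join of survey responses with account data: days between
# signup and downgrade, plus the structured churn reason.
df = pd.read_csv("churn_with_tenure.csv")  # columns: tenure_days, reason

# Bucket tenure around the observed concentration points (14 and 30 days),
# then cross-tab reasons by bucket to see how drivers shift over tenure.
buckets = pd.cut(
    df["tenure_days"],
    bins=[0, 14, 30, 90, float("inf")],
    labels=["<=14d", "15-30d", "31-90d", ">90d"],
)
reason_mix = pd.crosstab(buckets, df["reason"], normalize="index")
print(reason_mix.round(2))  # each row: the reason mix for one tenure cohort
```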

The real story behind "just testing"

The findings fundamentally shifted how the Growth team understood their churn problem. What had appeared to be a story of users leaving because the product wasn't right for them turned out to be a more nuanced picture — one with several distinct and actionable segments.

"A meaningful portion of churning users weren't leaving because the product failed them. They were leaving because the conditions weren't right yet — and that's a very different problem to solve."

Not ready to buy yet (~10%): internal alignment, budget cycles, and multi-stakeholder decisions; these customers intended to purchase but needed more time.
Setup friction (~10%): users who couldn't get set up successfully within the trial period; a product and onboarding problem, not a value problem.

Two findings in particular caught senior leadership's attention. Roughly 10% of churning users indicated they intended to purchase but weren't ready due to internal organizational processes, budget cycles, or multi-stakeholder alignment. Another ~10% cited difficulty getting set up — a usability and onboarding problem, not a product-value problem.

The session analysis added a crucial layer to the second finding. Users were consistently taking a wrong turn early in the setup flow — a navigation pattern that looked intuitive on the surface but led them away from the activation path. This friction point became the basis for a welcome screen experiment that contributed to a 20% increase in users reaching a critical first-value milestone.

The tenure segmentation reinforced this from another angle: because churn drivers shifted across lifecycle stages, the findings pointed toward more targeted intervention strategies at each stage of the customer journey.

Across all phases, I also identified a recurring pattern that hadn't been formally recognized: a segment of consultants and freelance engineers who set up SonarQube Cloud for their clients, churn when a project ends, and often return for the next engagement. Understanding this cyclical usage pattern has implications for how the company approaches acquisition, account structure, and re-engagement.

From data problem to growth strategy

30% → 7%: reduction in uncodeable responses, a direct improvement to the quality of data the entire Growth team relies on
20%: increase in users reaching the first-value milestone, driven by an experiment informed by the session analysis findings

"This is an opportunity. It is material."
— Senior Director of PLG, in response to findings showing ~10% of churning users as immediately recoverable. The Chief Growth Officer, having reviewed the report independently, directed the team to explore dynamic downgrade offers as a direct next step.

Beyond specific metrics, this program established a new foundation for how the company thinks about churn. The redesigned survey is now an always-on instrument that the Growth team actively uses. The findings have been referenced in strategy conversations about PLG experimentation, onboarding investment, and customer segmentation. And the identification of the consultant/freelancer usage pattern has opened a longer-term conversation about how to account for cyclical users in acquisition and retention models.

Perhaps most importantly, this work demonstrated that the research function could own a business-critical measurement problem end-to-end — not just inform a feature decision, but diagnose a broken system, fix it, and use it to drive strategy.


Julia Stroud, M.S.