Why Ed-Tech Keeps Choosing Engagement Over Learning
Learning is the stated goal of most ed-tech companies. Engagement is what almost all of them end up optimizing for.
This isn’t because founders or teams don’t care about learning. In my experience, many of them genuinely do. Over time, though, engagement quietly replaces learning as the primary signal of success. Dashboards fill up with time-spent graphs, retention curves, and streaks. Learning outcomes, if they are discussed at all, become secondary, indirect, or anecdotal.
If learning is the goal, why does engagement keep winning?
The proxy trap
Learning is hard to observe directly. It is internal, uneven, and deeply contextual. Two students can spend the same amount of time on the same lesson and walk away with very different levels of understanding.
Engagement, on the other hand, is visible. Time spent, sessions completed, videos watched. These are clean and legible signals. They fit neatly into charts. They move quickly. They respond well to experimentation.
Over time, a subtle but important shift happens. Engagement starts as a proxy for learning. Under pressure from growth targets, investor updates, and weekly reviews, the proxy slowly becomes the goal.
This is not unique to ed-tech. Any system that relies on indirect measurement is vulnerable to this drift, a dynamic often summarized as Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Ed-tech is especially exposed because the thing it claims to optimize, learning, resists simplification.
What begins as a reasonable approximation, that more engaged learners might learn more, hardens into an assumption that if engagement is up, learning must be happening. At that point, intent no longer matters. The metric takes over.
The user is not the customer
There is another structural reason engagement dominates. The learner is rarely the one paying.
In K–12 especially, users are children, while customers are parents or guardians. These two groups have overlapping but very different needs.
Most parents don’t have direct access to their child’s learning process. What they can see are signals. Time spent on the platform. Visible activity. Apparent discipline. Engagement becomes a proxy not just for learning, but for reassurance.
From the company’s perspective, this creates a powerful incentive loop:
- Parents respond positively to visible engagement
- Engagement correlates with renewals and word of mouth
- Revenue follows engagement, not learning
Even teams that care deeply about learning find themselves pulled toward the metric that aligns users, customers, and business outcomes.
This is not a moral failure. It is a predictable outcome of how incentives are wired.
Why better learning metrics don’t automatically fix this
A common response is that we just need better learning metrics.
That assumes the problem is primarily technical, that once we invent more precise ways to measure learning, teams will naturally optimize for them.
I am not convinced that is true.
Even robust and defensible learning metrics would still be lagging: slow to move, harder to explain, highly context dependent, and uncomfortable when they don’t improve.
Under real world pressure, teams tend to favor metrics that are fast, legible, and actionable. Engagement fits that bill far better than learning, regardless of intent.
The issue is not just what we can measure. It is what we are willing to act on, especially when the signal is ambiguous, delayed, or inconvenient.
A more honest reframing
Seen this way, the problem is not simply that ed-tech companies choose the wrong metric. It is that we expect a single metric to carry more meaning than it realistically can.
Learning may not be something you can safely turn into a single north star number without losing what makes it valuable. It may need to be treated as a constraint, something you regularly check against, rather than a lever you aggressively optimize.
That requires uncomfortable trade-offs:
- Accepting slower feedback loops
- Living with uncertainty longer
- Resisting the urge to explain everything with a chart
None of this is easy, especially in venture-backed environments that reward speed, clarity, and tidy narratives.
But pretending the problem persists because teams don’t care enough, or because metrics are not sophisticated enough, feels like a convenient simplification and a disservice to the learners these products claim to serve.
I don’t have a clean answer yet.
But it increasingly feels like the real challenge in ed-tech isn’t measuring learning better. It is deciding when we are willing to act on what we already know, even when the metrics don’t make it easy.