Your clients probably have a children's data problem
A privacy officer on what the new regulatory frontier means for practitioners
If your client has a website, they probably have a children's data problem whether they know it or not. The legal exposure has quietly expanded beyond apps built for kids, and the cases moving through courts right now aren't just privacy claims anymore. They're being reframed as consumer protection violations, negligence and unfair business practices. Regulators and plaintiffs' lawyers aren't asking whether data was collected. They're asking whether the whole system was fair.
Ryan Johnson, chief privacy officer, vice president and associate general counsel at Savvas Learning Company, knows where the gaps are. He breaks down why inferred data — not names or emails, but what the system figures out on its own — is the new regulatory frontier, what your clients need to know before it's too late and what a genuinely child-first compliance framework would actually require.
—Interview by Emily Kelchen, edited by Bianca Prieto
As we saw in the recent trial about social media addiction, there’s growing public concern about how tech companies collect and use children’s data. From your perspective, what has changed in the last few years?
What’s changing is a growing effort to reduce children’s screen time and the harmful impact of social media overall. We’ve moved from relatively static data collection to continuous, behavioral and predictive data ecosystems. Children are no longer just users; they’re inputs into optimization engines, personalization systems and, increasingly, AI models.
What kinds of data are we talking about?
It’s much broader than names or emails. We’re talking about persistent identifiers, device-level data, behavioral telemetry, engagement patterns, location data and inferred attributes like interests, learning profiles or even emotional states.
In many cases, the most sensitive data isn’t what a child provides directly; it’s what the system derives about them. That’s where the real regulatory and ethical risk is emerging.
And the risk is that companies that target children are gathering too much information about them?
It’s no longer a niche issue confined to “kids’ apps.” If your client has a digital product, a website or even passive tracking technologies, there’s a real possibility that children’s data is in scope, intentionally or not. And once it is, the compliance obligations, litigation exposure and reputational risk increase materially. This is one of those areas where companies often don’t realize they have exposure until it’s too late.
So this is something everyone who builds an app or even has a website needs to be thinking about?
Yes. Modern digital environments are designed to influence behavior, and children are disproportionately susceptible to those design choices.
Regulators and plaintiffs’ lawyers are looking not just at the basic consent and notice requirements in the Children’s Online Privacy Protection Act (COPPA), but at whether the entire system is fair, appropriate and non-exploitative in the first place. So the scrutiny today is less about whether data is collected, and more about how it’s inferred, combined and monetized over time.
Are courts treating these cases as traditional privacy claims, or are they evolving into something more like consumer protection or tort litigation?
We’re definitely seeing a shift. While some claims still sound in traditional privacy theories, many are being reframed through consumer protection, unfair or deceptive practices, invasion of privacy, wiretapping and even negligence-style arguments around product design.
The common thread is less about whether data was collected and more about whether the company’s practices were unfair, misleading or harmful, particularly in the context of children.
Do you think we are moving toward a model where children’s digital rights are treated as fundamentally different from adults’?
Yes, and I think we’re already there in principle, even if the legal frameworks are still catching up. The trajectory is toward recognizing that children require a different baseline, not just additional disclosures, but fundamentally different defaults. That includes limits on profiling, stricter data minimization and a higher bar for what constitutes acceptable use of their data.
What would a child-first digital policy framework look like in your view?
It would start with data minimization and purpose limitation as defaults, not afterthoughts. Ideally, that means a privacy-by-default, opt-in model rather than the traditional opt-out, US-centric approach. It would restrict the use of children’s data for behavioral advertising, profiling or model training without clear, justifiable boundaries. It would require transparency not just in policies, but in outcomes, meaning organizations can explain how decisions affecting children are made. And importantly, it would shift accountability from the user to the company, placing the burden on organizations to demonstrate that their systems are safe and appropriate by design.
Is there tension between innovation and protecting children?
I think it’s largely a false dichotomy. Protecting children doesn’t prevent innovation; it forces more responsible innovation. The companies that get ahead of this are the ones that build trust, and in the long run, that’s a competitive advantage, not a constraint.
