The FTC’s Privacy Paradox: Guarding Kids by Tracking All Users

Reclaim The Net

The Federal Trade Commission’s January 28 workshop on age verification featured a familiar promise from Washington and Silicon Valley: that technology can keep children safe online without compromising privacy.

Yet many of the proposed systems, even those described as privacy-preserving, still rely on constant monitoring of how people behave online.

FTC Commissioner Mark Meador promoted “behavioral age verification,” which he defined as “ascertaining a user’s age by the way they interact with an online platform or system.”

He added, “Machine learning can help detect patterns in browsing and usage behavior that consistently indicate whether a user is too young to be on the platform.”

While Meador presented this as a technical solution that avoids intrusive ID checks or facial scans, the underlying approach requires platforms to watch and record how users move, click, and communicate. That continuous observation may not involve a government ID, but it still means users are profiled, this time not to serve ads but to prove their age.
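A concrete picture helps here. The sketch below is a hypothetical illustration of the kind of classifier a "behavioral age verification" system might use, not any platform's actual model: the feature names (session length, posting rate, late-night activity, typing speed), the synthetic data, and the model choice are all assumptions for the example. What it makes visible is that the model's input is, by construction, a running log of user behavior.

```python
# Minimal sketch of "behavioral age verification": a classifier trained on
# hypothetical engagement signals. All feature names, labels, and model
# choices here are illustrative assumptions, not any platform's real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical per-user behavioral features a platform might log:
# average session length (minutes), posts per day, fraction of activity
# late at night, and average typing speed (characters per second).
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),   # session_minutes
    rng.poisson(3.0, n),       # posts_per_day
    rng.beta(2.0, 5.0, n),     # late_night_ratio
    rng.normal(4.0, 1.5, n),   # chars_per_second
])

# Synthetic labels: 1 = likely underage. In a real deployment these labels
# would come from already-verified accounts, itself a data-collection step.
y = (0.02 * X[:, 0] + 0.3 * X[:, 1] + 2.0 * X[:, 2]
     + rng.normal(0, 1, n) > 3.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Scoring any user means feeding their logged behavior into the model --
# the continuous observation the article describes.
print("held-out accuracy:", model.score(X_test, y_test))
```

Whatever the architecture, a pipeline like this only works if that behavioral log exists and keeps being collected, which is precisely the tension the rest of the workshop debated.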

FTC Commissioner Christopher Mufarrige echoed support for expanding such tools, saying “age verification technologies will play an enormously important role in protecting kids online.”

He acknowledged the tension between new verification schemes and the Children’s Online Privacy Protection Act (COPPA), which restricts collecting personal data from children without parental consent. Rather than seeing that conflict as a red flag, Mufarrige said the agency is exploring “potential solutions” to reconcile privacy law with these technologies.

Several lawmakers and policy advocates also pushed for stricter verification requirements. South Dakota State Representative Bethany Soye argued that digital platforms should face the same kind of restrictions that exist in the physical world, saying, “We shouldn’t be treating the digital world any different from the physical world…if you are the one producing something dangerous to children, you should be keeping it out of their hands.”

Clare Morell of the Ethics and Public Policy Center framed age verification as both protective and empowering for parents. “Age verification laws are to both protect children and empower parents,” she said.

Morell insisted that “the technological means are there to both age gate and protect privacy,” and urged lawmakers to write laws focused on “account creation” instead of content.

That claim, that mass verification systems can protect privacy simply through design, drew skepticism throughout the event. Such systems still depend on the large-scale collection, inference, or sharing of user data, even if governments or companies avoid calling it that.

Sara Kloek of the Software & Information Industry Association said, “Everyone in the ecosystem is going to have a role to play, including the FTC and Congress, in the age verification process.”

Utah’s Katherine Hass credited enforcement pressure for the spread of parental controls, stating, “We would not have parental controls but for the lawsuits from the states, the FTC’s 6(b) authority, the companies saw the writing on the wall and are doing it because of the lawsuits.”

Others questioned whether these approaches truly enhance safety without building new databases of user behavior. Jennifer Huddleston of the Cato Institute warned that "one of the key concerns [about laws] are concerns around data privacy, and the data privacy of young users," adding that verification laws "could provide a honeypot for bad actors."

Apple’s Nick Rossi was more direct about the risks for developers. “Age assurance has never been a good fit for app developers,” he said, describing it as a compliance burden that forces unnecessary data transmission. “We can’t lose sight of that significant portion of our members for whom age assurance presents a risk without a need.”

From Google, Emily Cashman Kirstein acknowledged that the company already uses its own model to infer a user’s age from existing data.

“Google’s age inference model takes data we know about a user, without collecting additional data, and works to confirm whether the user is an adult,” she said.

“If the user is an adult, we don’t want to hamper their ability to use Google’s services, or ask for privacy intrusive information. If the user is a minor, we want the user to be able to take advantage of the services developed for minors.”
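Kirstein's description maps onto a simple decision flow, sketched below under stated assumptions: the signal names, the 0.9 and 0.1 thresholds, and the routing tiers are hypothetical, not Google's disclosed design. The point the sketch illustrates is that "confirming" adulthood from existing data is still an inference step every account passes through.

```python
# Illustrative sketch (not Google's actual system) of routing users by an
# inferred adult-probability score derived from already-held account signals.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical signals a service might already hold:
    account_age_years: float       # how long the account has existed
    stated_birth_year: int | None  # self-declared, possibly absent
    adult_score: float             # output of an upstream inference model, 0..1

def route(user: AccountSignals) -> str:
    """Pick an experience tier from already-collected data alone."""
    if user.adult_score >= 0.90:
        return "adult: full services, no extra checks"
    if user.adult_score <= 0.10:
        return "minor: restricted, minor-oriented services"
    # Ambiguous scores are where "privacy intrusive information"
    # (IDs, face scans) tends to get requested in real deployments.
    return "unknown: request additional verification"

print(route(AccountSignals(8.0, 1990, 0.97)))
print(route(AccountSignals(0.5, None, 0.05)))
print(route(AccountSignals(1.0, None, 0.50)))
```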

Even as she recognized that “there is always going to be a privacy tradeoff,” Kirstein suggested that app developers, not broader ecosystem players, should be responsible for verification.

Her statement reinforced a key contradiction in the tech industry’s approach: companies claim to minimize data use while maintaining systems that depend on continuous data inference.

The workshop revealed growing political and corporate enthusiasm for digital age verification, but also an unsettling consensus that monitoring user behavior may be an acceptable price for online safety.