What YouTube’s AI “Age Estimator” Gets Wrong
By Alexandra Chigrinuk
For millions of parents and kids, YouTube has always been a library, a classroom, and a stage. But now imagine a parent, fresh from turning on cartoons for their child, searching for information about gender identity or sexuality so they can better support their kid. Instead of helpful guides or community stories, they’re met with a feed scrubbed of anything deemed “too adult,” cutting them off from the knowledge that could help their child feel seen and safe. The same system that hides this information can just as easily silence videos about coming out, gender transition, or mental health. What begins as a safety feature quickly becomes a filter on reality—deciding which families, identities, and conversations are allowed to exist online.
This is the reality of YouTube's AI "age estimator." Launched in August, the system is billed as a way to protect kids: it decides who is “old enough” for mature content based on search and watch history. But what counts as “age-appropriate” is never actually defined. YouTube's Help Center explains that if the system flags you as underage, your account is limited: certain videos are restricted, “age-appropriate” recommendations show up in your feed instead, and “wellbeing” reminders nudge you to log off or go to sleep. Targeted ads are cut back too.
The problem? While YouTube claims to safeguard minors, it defines its own “age-appropriate” boundaries. Rather than deferring to child development experts or community input, the platform sets the rules through opaque internal policies. As the Electronic Frontier Foundation observes, platforms “come up with their own policies on how to moderate legal content that may be considered harmful.” Meanwhile, the most notoriously unsafe parts of YouTube are often the ads, and this age verification system does nothing to fix that. Now, YouTube decides which topics minors, and adults mis-flagged as minors, are allowed to access. That’s control masquerading as protection.
And the control is clumsy. Systems for flagging “mature” content have been unreliable, often restricting harmless videos while letting questionable ones slip through. A recent audit of short-form content found that age verification mechanisms are “largely ineffective,” allowing unsafe videos to reach younger viewers despite the platform’s safeguards. YouTube’s new AI compounds the problem, misjudging viewers on top of misjudged videos. If it misidentifies you, the only way to regain full access is to surrender personal data: a selfie, email address, government ID, or even a credit card. Once uploaded, anonymity disappears; your real identity is tied to your online persona, trailing you across the internet. Google says such data is stored securely, but with little transparency about how long it’s kept or how it’s used. Even if Google adopted the highest possible standards and never misused the data, simply holding such sensitive information creates a tempting target for hackers, as the recent breach of an age-verification provider shows.
Plus, we’ve seen how similar tools fail. Studies show that age estimators tend to overestimate the ages of younger adults and underestimate the ages of older ones, and that their errors fall unevenly across races and genders. Other studies indicate that accuracy drops significantly for people of color, reflecting the ethnic biases embedded in training datasets. Such age estimators deepen digital inequities and amplify bias, and on a platform that shapes how millions learn and engage with the world, that matters.
And this system doesn’t just harm adults misidentified as children. What happens when LGBTQ+ teens search for community? When someone struggling with mental health looks for help? When citizens seek political content? An algorithm can quietly decide which conversations are off-limits, trading away democratic debate, academic freedom, and even our identities for the sake of “online safety.”
This tool may have been designed to protect kids, but in reality it fails on both ends: minors still slip through while adults are wrongly flagged. Instead of building universal safeguards, like reducing autoplay or limiting targeted ads, YouTube has outsourced safety to AI. In doing so, it has created a system that bulldozes access and privacy alike.
An internet that demands policing to feel “safe” isn’t safe — it’s controlled, censored, and corrosive to democracy.
Chigrinuk is a civil rights intern at the Surveillance Technology Oversight Project (S.T.O.P.) and is completing her Master’s in Public Administration at Binghamton University.