
New Data Privacy Proposal Points the Way Toward Possible Compromise Between Tech Cos and Users

The News: A sweeping new data privacy proposal could point the way toward a possible compromise between tech companies and users. The Age Appropriate Design Code is a new British online data privacy proposal aimed at increasing protections for children online. Coming on the heels of the otherwise comprehensive 2018 Data Protection Act, the proposal outlines rules that specifically address online safety for minors. Per the New York Times:

“The rules will require social networks, gaming apps, connected toys and other online services that are likely to be used by people under 18 to overhaul how they handle those users’ personal information. In particular, they will require platforms like YouTube and Instagram to turn on the highest possible privacy settings by default for minors, and turn off by default data-mining practices like targeted advertising and location tracking for children in the country.”

A similar set of guidelines, dubbed COPPA (the 1998 Children’s Online Privacy Protection Act), already exists in the United States but only applies to children under 13. The Age Appropriate Design Code is scheduled to go before the British Parliament for a vote sometime this year, and to be applied soon after.


Analyst Take: It’s hard to argue against the fact that both the Age Appropriate Design Code and the 2018 Data Protection Act are important keys to protecting minors and keeping them safe online. That said, the chasm between what technology companies think is ‘the right thing to do’ and what users are comfortable with when it comes to personal data privacy, for themselves and for their children, is in most instances a deep one. Here is a look at what I think are the most important elements of this discussion, especially as it relates to the new data privacy proposal:

Protecting the rights and safety of children online is important, but not without issues

Some tech industry lobbyists have argued that while the objective of the proposal is noble, the rules themselves, or the way they would require technology platforms to comply with data privacy rules, may run afoul of that intent and may even cause more harm than they aim to correct. Age-gating, for instance, could unnecessarily limit the types of services that a website or platform provides. Small companies may also no longer be able to provide or direct effective advertising services to young adults, from introducing them to content that specifically caters to their tastes to notifying them of products and offers that would likely be of value to them. An argument can also be made that, in order to ensure compliance with age verification, platforms may have to collect more data from would-be users than they would otherwise have collected absent these rules.

Expanding the same protections to other vulnerable online users is possible — why not do it?

While tech companies and regulators work out the data privacy details, what struck me about this proposal is twofold:

First, it hints at a possible expansion of age-based data privacy and protection rules in markets where such protections already exist but apply only to children under 13 rather than to all children under 18. This upward shift in age inclusion opens the door to the next logical question: If tech companies can do this for 13- and 18-year-olds, why can’t they also do it for 25-year-olds, 45-year-olds, and 75-year-olds? Or rather, why must basic data protection rules be predicated on age at all? Why not just make them universal? Is there a compelling reason why adult users of technology platforms deserve to be put at greater risk of stalking, harassment, hacking, doxing, and violence than their younger counterparts?

Second, many of the tools and practices to be put in place to protect children’s data online could presumably be used to protect adults as well. For instance, one tenet of the new code focuses on tech companies “thinking about the risks to children that would arise from collecting and processing of their personal data,” a premise that works equally well with adults. Following the same logic, companies could also be required to consider the risks to women, vulnerable communities, users living with disabilities, and the elderly that could arise from collecting and processing their personal data. If companies understand that children must be protected online because they are vulnerable to a plethora of threats, shouldn’t these same companies also understand that other users are vulnerable as well? And therefore, is there a rational reason why some vulnerable users should be protected but not others? From a regulatory or legislative standpoint, could it not be argued that it is in the public interest for these same companies to extend privacy and online safety protections to other vulnerable users besides children?

Taking that logic a step further, could it not be argued that since all users are inherently vulnerable to data theft, privacy abuses, stalking, fraud, harassment, and a plethora of unpleasantness and genuine threats, these same tech platforms have a responsibility to protect all users as best they can?

Expanding these protections to all users by default only makes sense

The Code’s 15 governing principles can almost all be expanded to adult users: “Best interest of the child” can just as easily become “best interest of the user.” The “data protection impact assessment” can also easily be applied to users of all ages. The requirement that data collection features be set to the highest data privacy settings by default doesn’t have to be limited to children either. With regard to data sharing, “Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child” can easily become “Do not disclose users’ data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the user.” One can go down the list of all 15 principles and make the same observation each time.

There is a template here for universal data privacy and online safety

This is the crux of what I see as a possible direction for data privacy regulations aiming to help tech companies and their users find a healthier balance than the one currently being debated around the world. Based on this latest effort, it would essentially look like this:

  1. A shift from an opt-out data privacy model (in which the default could be maximum data collection and the user must actively opt out of it) to an opt-in data privacy model (in which the default is minimum data collection, and the user has to opt in to more data collection in order to benefit from more data-dependent services); a rough sketch of what this could look like in practice follows this list.
  2. An emphasis on putting the best interest of the user front and center of all data collection and processing decisions, not just as a matter of culture but as a matter of law.
  3. Controls, some on the user side, some on the platform side, that allow users to determine what types of content they are comfortable with receiving or being exposed to, and in some cases, even be protected from.
  4. A deliberate restoration of trust in the platform-user relationship (further reinforced by policies of transparency and clear disclosure).
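
To make the first point above concrete, here is a minimal, hypothetical sketch, in Python, of what an opt-in, “minimum data collection by default” settings model could look like. The names used here (PrivacySettings, opt_in, and the individual feature flags) are invented for illustration and are not drawn from the Age Appropriate Design Code or from any real platform’s API; the point is simply that every data-collection feature defaults to off and is enabled only by an explicit, logged user choice.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PrivacySettings:
    """Opt-in model: every data-collection feature starts disabled."""
    targeted_advertising: bool = False
    location_tracking: bool = False
    behavioral_profiling: bool = False
    consent_log: List[str] = field(default_factory=list)

    def opt_in(self, feature: str) -> None:
        """Enable one data-collection feature only after an explicit, recorded user choice."""
        if feature == "consent_log" or not hasattr(self, feature):
            raise ValueError(f"Unknown data-collection feature: {feature}")
        setattr(self, feature, True)
        self.consent_log.append(f"user opted in to {feature}")


# A new account gets maximum privacy with no action required from the user;
# richer, data-dependent services become available only on explicit request.
settings = PrivacySettings()          # everything off by default
settings.opt_in("location_tracking")  # the user chooses to enable one feature
print(settings)
```

An opt-out model would simply flip those defaults to enabled, which is precisely the posture this proposal pushes platforms away from.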

These principles, along with the tools and data privacy practices that will enable their execution, can be expanded to all age groups. At the same time, users, and the governments they look to for protection from exploitation, fraud, loss of privacy, and other threats, have been actively looking to address the dual issue of digital privacy and digital security. For those reasons, I see in this proposal a template for what could become a model for universal data privacy and online safety requirements. Ideally, technology platforms would adopt this approach all on their own, but if they cannot, or will not, legislatures and regulatory bodies around the world may begin to feel growing pressure to step in and compel them to do so.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Other insights from the Futurum Research team:

IoT Cybersecurity Regulations Kick In With the Start of 2020

Facebook Doesn’t Really Care About Your Privacy — and This is Why It Hurts Libra

Why CMOs Need to Be Involved in Privacy Policy Creation

Author Information

Olivier Blanchard

Olivier Blanchard is Research Director, Intelligent Devices. He covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating their potentially negative effects. Follow his extended analysis on X and LinkedIn.
