ConsumerID



Interview with Simone van der Hof on the Protection of Minors and the Digital Fairness Act

The Digital Fairness Fitness Check showed that minors need more protection in digital environments. Would a social-media ban for those under the age of 16 therefore be a good idea? In this interview, Simone van der Hof shares her ideas on how to enhance the protection of minors.

Introduction

In October 2024, the findings of the Fitness Check on EU consumer law on digital fairness were published. The leading question of the Digital Fairness Fitness Check was whether additional legislation is needed to ensure the same level of fairness online as offline. Among the problematic practices discussed in the Digital Fairness Fitness Check are advertising, social media commerce, and influencer marketing. These problematic areas intersect: concerns related to influencer marketing range from misleading advertisements to problematic content in advertising. According to the Digital Fairness Fitness Check, consumer harm resulting from problems experienced in the digital environment is highest among minors. One example given is younger consumers' lack of awareness that some influencer-generated content is actually paid promotion. Meanwhile, Australia has recently introduced a social-media ban for those under the age of 16.

Now that a legislative proposal for a Digital Fairness Act is expected in mid-2026 and the public consultation has started, Busra Naz Daskapan interviewed Simone van der Hof about how to protect minors in the digital environment. Simone van der Hof is Full Professor of Law and Digital Technology at Leiden University; her expertise covers digital children's rights, data protection, age verification, unfair trade practices, and privacy.

Interview

How do you think unfair commercial practices in the digital environment constitute a threat to minors?

Simone van der Hof: I think, in particular, the fact that children are still developing their cognitive abilities makes them more vulnerable to certain commercial practices than adults. It is also important to note that children in different age groups are at different stages of development. Another aspect is that online platforms, such as gaming platforms, function as both entertainment and educational environments, which is quite attractive to young people. Those platforms are also important for children's identity development, sense of belonging, and socialization. As a result, because children spend a considerable amount of time on online platforms during the phase in which their identity and their relationships with the world are developing, they become vulnerable to being targeted by commercial practices.

Furthermore, some groups of children are more vulnerable than others to being targeted by commercial practices because of particular social, emotional, and psychological issues in their lives. Thus, while children are considered vulnerable consumers from a consumer rights perspective, from a children's rights perspective one can also speak of vulnerable children, an even more vulnerable group of consumers. For instance, those children may be more prone to spending money on online purchases, more easily influenced by influencers, and less able to see that most influencers are making money rather than being friends with their followers. Practical implications of these vulnerabilities include spending too much money, not being aware of the type of transaction entered into, entering subscriptions that are difficult to cancel, and becoming susceptible to certain design practices and even gambling-like practices, and thus using social media and gaming platforms obsessively.

What are your expectations of the proposal for a Digital Fairness Act in terms of addressing age-related vulnerability in a concrete way, rather than with a vague reference?

Simone van der Hof: Firstly, I would like to see much more clarity on the concept of vulnerable consumers in general, and on the protection of children from unfair commercial practices specifically. I think it is already obvious which commercial practices are unfair in relation to children. Thus, extending the blacklist with commercial practices that are unfair to children could be a way of providing more clarity.

Secondly, I would like the burden of proof to be reversed so that companies would need to prove that they comply with the Digital Fairness Act and that their design choices are healthy for, or at least not unfair to, children. Also, I would like to see less focus on transactions. For this reason, I would like to see a similar approach to that of the Digital Services Act, where certain kinds of designs are prohibited not just for children but also generally for consumers, regardless of whether they are transaction-based or not. 

Another expectation from the Digital Fairness Act is to see an approach based on the precautionary principle, which is a principle also acknowledged in children’s rights law. Whenever there are indications that something is harmful to children but there is no conclusive evidence yet, the principle requires that children are protected. The principle potentially goes beyond addressing economic harm.

As a legal professional, how do you feel about prohibiting minors from using smartphones and accessing social media rather than regulating the behaviour of content creators? Do you see it as taking the easy way out, or is it really the only solution?

Simone van der Hof: I don’t think it is the solution. Smartphones and similar devices, as well as social media, have become very important parts of children’s lives. It is important for them to socialize, to be entertained and also to entertain others, to play, to get information, to learn about new music, and to get news, all of which is possible via smartphones and social media. So, these platforms are an important part of children’s lives. However, there are a lot of potentially negative aspects of these technologies that children might encounter online, such as commercial practices that can manipulate or deceive them. Therefore, the ban in Australia and the potential bans under discussion in other countries come from genuine concerns about the negative impact these technologies have on children. However, it is important to determine what causes these negative impacts. In other words: first determine what the problem is and then address that problem instead of banning access to digital services or devices for groups of children. The problem is that the digital services children use are often not designed in age-appropriate ways. Online platforms often have features that are particularly unhealthy for children, but also for a lot of adults, to be honest. For instance, manipulative designs and gambling-like features, such as engagement-driven recommender systems, can cause addiction and obsessive use, which are real problems. And so, they need to be designed differently to make sure they are child-centred, and EU digital law, including consumer law, provides relevant requirements for age-appropriate design.

The core of the problem is that although there are currently several legislative instruments that at least partly address those issues, enforcement is lacking. For instance, because a lot of the design choices used by platforms are based on behavioural data, a great part of the problems faced could have already been addressed by the General Data Protection Regulation with proper enforcement. Thus, consumer law and data protection law could very well complement each other with a good level of coordination.

In fact, bans may have negative effects. Children may seek access to platforms anyway, and may even access ones that are not age-appropriate or even more problematic than the ones they are using now. They also may not feel confident to talk about the problems they encounter there because they know they are not supposed to use them. Moreover, there is less incentive for platforms to opt for a healthier design if children are no longer allowed to use them.

Furthermore, parents need more guidance on the healthy use of digital services by their children, depending on their age and stage of development. In the Netherlands, evidence-based guidelines for healthy screen use were published in June 2025. This is an important initiative. 

It is predicted that the age-assurance technology required by Australia's social-media ban might involve facial age estimation or other methods that collect personal data. As an expert in both data protection and unfair trade practices, how do you reconcile such a protection mechanism with the protection of minors' personal data?

Simone van der Hof: Platforms already estimate the age of their users, but in the case of a ban they need to use much more reliable methods, or preferably, to provide age-appropriate online spaces for children, because a ban imposes a responsibility on these platforms to ensure that they effectively enforce the minimum age requirement. In that context, there are several methods of age verification.

Leaving aside the technical details, facial recognition and facial age estimation are different technologies. Facial age estimation does not recognize the face and identify the user; rather, it tries to predict the user's age from facial features, such as bone structure. A privacy-friendly form of facial age estimation is possible, as long as the data is not stored and the estimation is a one-time exercise confirming that the user is likely older than the minimum age. Having said that, I don't think facial scans would be an adequate method, since the result is just an estimate. In addition, this method may not work as well on children as on adults because of differences in facial structure, and it may not work equally well across all skin tones.

Another method could be estimating users' age from their behaviour on the platform, such as their posts, clicks, swipes, and social network. However, this is not compliant with the GDPR, and it would potentially also be a high-risk system under the AI Act. So, legally, that is also a problematic method. A privacy-friendly form of age verification could be ensured using an app in which the user's age information is stored after prior verification by parties under an identification obligation, such as the government, banks, or health professionals. Social media platforms then only receive a 'yes' or 'no' from the app to the question of whether the user meets the minimum age requirement. This is called the double-blind method: the app provider does not track the user's online activity, because the information stays on the device, and the platform receives only the information that the user is old enough to use its services. I think this is a perfectly privacy-friendly way of age verification and much more reliable than an estimate from scanning the face or tracking the user's behaviour. The only matter to consider is how to give children access to such an identity wallet.
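The double-blind flow described here can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a real implementation: the function names are hypothetical, and the HMAC shared-secret signature stands in for what a real deployment would do with the authority's public-key signature and a device-bound wallet app.

```python
import hmac
import hashlib

# Key held by the hypothetical verifying authority (illustrative only).
AUTHORITY_KEY = b"demo-authority-secret"

def issue_attestation(over_16: bool) -> dict:
    """A trusted party (e.g. government, bank) that has already verified
    the user's age issues a signed yes/no claim. The claim is stored in
    the wallet app on the user's device."""
    claim = f"over16={over_16}"
    sig = hmac.new(AUTHORITY_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_check(attestation: dict) -> bool:
    """The platform verifies the signature and learns ONLY whether the
    user meets the minimum age -- no birth date, no identity."""
    expected = hmac.new(AUTHORITY_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(attestation["sig"], expected):
        return False  # tampered or unverified claim
    return attestation["claim"] == "over16=True"

wallet = issue_attestation(over_16=True)
print(platform_check(wallet))  # True
```

In this simplification the platform holds the same key as the authority; a real double-blind scheme would use asymmetric signatures, so the platform can verify claims without being able to issue them, and the attestation would carry no identifier linking it back to the user.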
