Towards the Digital Fairness Act: Interview with Mark Leiser on Dark Patterns
The forthcoming Digital Fairness Act (DFA) is anticipated to be introduced as a legislative initiative in the fourth quarter of 2026. For this first post of our blog series on the DFA, we interviewed Dr. Mark Leiser about the regulation of dark patterns.
Introduction
“Dark patterns are commercial practices deployed through the structure, design or functionalities of digital interfaces or system architecture that can influence consumers to take decisions they would not have taken otherwise”, according to the Digital Fairness Fitness Check (p. 146). Dark patterns, also known as deceptive designs, manifest in various ways. One example is nagging, which occurs when you are repeatedly asked to choose an option you have already declined. Another example is ‘confirm-shaming’, which uses emotional manipulation to make you feel guilty for not choosing a certain option. Think of a button that says ‘No, I prefer to pay full price’ instead of a simple ‘No’.
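To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a confirm-shaming prompt could be wired up in front-end code; the function name and button copy are invented for the example:

```typescript
// Illustrative only: a confirm-shaming prompt. The decline option is
// phrased to induce guilt instead of offering a neutral "No".
function renderDiscountPrompt(container: HTMLElement): void {
  const accept = document.createElement("button");
  accept.textContent = "Yes, I want my 10% discount"; // prominent, positive framing

  const decline = document.createElement("button");
  decline.textContent = "No, I prefer to pay full price"; // guilt-laden framing

  container.append(accept, decline);
}
```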
The use of dark patterns is by no means an exception in business; rather, it has become a prevalent feature of online commerce. The figures presented in a 2022 report commissioned by the European Commission speak for themselves: 97% of the most popular websites and apps used by EU consumers were found to deploy at least one dark pattern. Fortunately, the EU regulatory framework has not remained silent on these practices. They are addressed through a multi-layered regulatory system, as outlined in the Digital Fairness Fitness Check. Central to this is the Unfair Commercial Practices Directive (UCPD), but there are also prohibitions targeting certain digital interfaces in the Consumer Rights Directive (Article 16e), the Digital Services Act (Article 25), the Digital Markets Act (Article 13), the General Data Protection Regulation (Article 25), and the AI Act (Article 5(1)).
However, as also identified in the Digital Fairness Fitness Check, this regulatory framework has its gaps and shortcomings. A fundamental limitation of the existing regulatory framework is the lack of legal certainty regarding whether specific forms of deceptive designs are fair or unfair under the UCPD, because the UCPD largely relies on a case-by-case assessment. This broader problem is compounded by the limited scope of other instruments: the DSA's dark pattern prohibition applies only to online platforms, excluding standalone web shops, search engines, and services in which platform features are merely secondary or incidental. Furthermore, Article 25(2) of the DSA stipulates that its prohibition does not apply to practices covered by the UCPD or the GDPR.
To gain a better understanding of the challenges surrounding dark patterns and what we can expect from the forthcoming Digital Fairness Act, we have interviewed Dr. Mark Leiser. He is a recognised expert in dark patterns and the regulation of digital manipulation. He is also the author of the book Dark Patterns, Deceptive Design and the Law, which explores the full spectrum of manipulative design techniques and their legal implications, from user interface tactics to deeper systemic architectural forms.
Interview:
In the public consultation on the DFA, respondents were asked whether certain dark patterns required new EU action, including click fatigue, false impressions of choice, nagging, urgency and scarcity claims, confirm-shaming, sneaking items into the online basket, features leading to unexpected results, and ambiguous language in choice presentation. Do you agree that these practices are the most problematic, or do you think this list is too limited?
Mark Leiser: First, I should mention that I have long argued that there is something problematic about the term dark patterns. When working with Harry Brignull, I suggested ‘deceptive design’. We got some blowback because “deception” has particular legal meanings, but the term was not meant for lawyers; it is for designers who just want to know whether what they are doing is wrong.
To answer the question: thinking about dark patterns from the perspective of just what the user sees, as the European Commission does, is quite limiting. I could manipulate you in lots of different ways, while it looks fair to the user or the regulator. This is where I bring in the iceberg metaphor. The user interface is the tip of the iceberg. The system architecture that runs an entire website or app is far more problematic. There are some user-interface dark patterns that are relatively easy to police; the user knows they are being tricked. But there is a whole slew of stuff that does not tick those boxes. It is more of a systems architecture approach, where they look at all the data, possibly introduce AI, understand behavioural patterns, and personalise the choice architecture to take advantage of vulnerabilities. So, the European Commission is currently looking at old-school dark patterns that are highly visible, even as the world has already moved on. Thinking only about 2015 or 2020 dark patterns and not considering other forms of deceptive design is really problematic.
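To illustrate the ‘below the waterline’ point Leiser makes here: the manipulation need not be visible in any single interface. The following sketch shows what such system-level personalisation could look like; all profile fields, variant names, and thresholds are invented for the example:

```typescript
// Hypothetical sketch: server-side logic that selects a checkout layout per
// user from inferred behavioural traits. Each variant can look fair in
// isolation; the manipulation lives in the selection logic, which no single
// user or regulator ever sees.
interface UserProfile {
  impulsivityScore: number;  // e.g. inferred from browsing and purchase history
  priceSensitivity: number;  // e.g. inferred from past reactions to discounts
}

type CheckoutVariant = "neutral" | "urgency-banner" | "preselected-addons";

function chooseVariant(profile: UserProfile): CheckoutVariant {
  if (profile.impulsivityScore > 0.8) return "urgency-banner";     // countdown pressure
  if (profile.priceSensitivity < 0.3) return "preselected-addons"; // sneak extras in
  return "neutral";                                                // the "clean" default
}
```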
Businesses naturally try to persuade consumers to buy their products or use their services. Where, in your view, is the line between legitimate marketing and manipulation, and what should be the benchmark?
Mark Leiser: Persuasion is a fundamental cornerstone of business, and nobody can ever accuse me of trying to eliminate persuasion as a legitimate business tactic. If I go into a grocery store and the milk is at the back, that is a persuasion technique. Nobody is forcing me to buy milk or anything else. I still have my autonomy. For me, if you are pushing customers or users to do something that reduces their autonomy, that is when it becomes problematic. If a choice environment takes advantage of something you know about them that they may not know themselves, for example, that somebody is yellow-orange colourblind or has dyscalculia, then that is taking advantage of vulnerabilities. That is the clear-cut distinction between deception and persuasion, and between manipulation and persuasion. And I do not think you can overcome that with transparency alone. What is a platform going to do, tell the user, “By the way, you are colourblind”? That's not going to help.
So, the question is what legal response works instead. If you go down to foundational principles like autonomy, fairness, friction, restriction of choice, and information asymmetries, there are two levels to consider. The first is to ask: is this practice taking advantage of a consumer's vulnerabilities? If so, the presumption should be unfairness. This is a balanced approach because businesses can still argue to overcome that presumption. But there is a second, more serious level: where a business actively builds a personalised environment that responds to the behavioural patterns it knows about a particular user, which amounts to exploitation. In those cases, the burden shifts: the business must demonstrate that its conduct was not unfair to that specific user. Some forms of such exploitation and targeting based on vulnerabilities should be blacklisted outright, and the UCPD already provides the regulatory framework to do so. But we need a total rethink of what goes into Annex I, the list of prohibited practices. Rather than banning specific practices like urgency timers, we should rethink the approach to block deceptive optimisation strategies. Although we do not yet have clarity on what the DFA might look like, I hope we retool regulators to enforce more broadly, including a deceptive-design approach (as I discussed above) rooted in the UCPD's existing regulatory structure.
What are, in your view, the biggest gaps in the current legal framework protecting consumers from deceptive designs?
Mark Leiser: You have the GDPR, the UCPD, and the new EU digital policy laws; they all have very specific provisions targeting dark patterns. But they all target user-interface dark patterns (Article 25 of the DSA, Article 13(6) of the Digital Markets Act, and provisions of the Data Act). Then the AI Act focuses on subliminal, deceptive, and manipulative techniques, but the thresholds are very high. When you couple subliminal techniques with the requirements of material distortion and significant harm, the bar for those prohibitions to actually be triggered is quite high.
So, collectively, we have not really thought more broadly about deceptive design that may look perfectly fair in the user interface but is hyper-personalised behind the scenes. This has two dimensions. For the individual consumer, the choice environment is tailored to exploit what the system knows about them. But there is also a market-wide effect: if every consumer sees a different marketplace, nobody knows what the other side is doing, and that creates anti-competitive effects as well.
Now, banning all “dark patterns”, as some have suggested, is neither reasonable nor even plausible. Remember, “dark patterns” are not de facto illegal. Unless the practice is prohibited outright, the design technique must result in a material distortion. Moreover, a small business can reach me through personalised ads in ways that a Google search never could. Personalisation somewhat levels the playing field and increases competition. Banning personalisation would have horrible consequences for both the economy and consumers and would entrench the more powerful players.
How do you expect the DFA to regulate dark patterns, and how do you think it should regulate them?
Mark Leiser: Again, without really knowing what the law will look like, I do not want any reform to limit itself to the 2020 understanding of “dark patterns”. The consumer acquis needs to be flexible enough to adapt without having to go through a whole legislative process. I really hope the European Commission maintains or develops a law that embeds the UCPD's regulatory design, including a general prohibition on unfairness, distinct tests, and a blacklist. The key takeaway is that rather than just having a blacklist of prohibited practices, a blacklist of prohibited strategies is needed. That is where fairness becomes malleable as a principle, flexible enough to reflect where we are going in consumer law and in an AI-centric marketplace, without being tied to specific practices that will be outdated within years. In this sense, I’m not sure a whole new law is needed.
As for enforcement, one option is reversing the burden of proof. Another is using technology to better enforce existing law: the Dutch ACM, in particular, is using AI techniques to scan for dark patterns. They reviewed thousands of websites, identified instances of dark pattern deployment, sent letters, and achieved a 100% success rate. But again, that is using AI to identify dark patterns in user interfaces, not in the system architecture.
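For a sense of what interface-level scanning can (and cannot) catch, here is a hedged sketch of a scanner that flags textual signals of common dark patterns. The signal list and regular expressions are illustrative assumptions, not a description of how the ACM's tooling actually works:

```typescript
// Illustrative scanner: fetch a page and flag textual signals of common
// user-interface dark patterns. As Leiser notes, this kind of tool sees only
// the tip of the iceberg, not the system architecture behind it.
const SIGNALS: Record<string, RegExp> = {
  "urgency/scarcity": /only \d+ left|offer ends in|\d{1,2}:\d{2}:\d{2}/i,
  "confirm-shaming": /no, i (prefer|want) to (pay full price|miss out)/i,
  "forced continuity": /free trial[^.]*automatically renew/i,
};

async function scanPage(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  return Object.entries(SIGNALS)
    .filter(([, pattern]) => pattern.test(html))
    .map(([name]) => name); // names of the signals found on the page
}
```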
I have also proposed that there should be a legal obligation to have two servers operating side by side, one a mirror of the other, so that a regulator can audit historically: what data, what user interface, what information, how did you make this decision about the presentation, and what did the user actually see? Consumer law has traditionally relied on the observation of transactions to justify an enforcement action, and you are not going to have that anymore. There will be zero evidence of how a business engages the “average consumer” because there will be no average consumer anymore. If everybody gets a hyper-personalised marketplace, the only consumer who sees that marketplace is that individual. One way some advocate addressing this is to view everybody as a vulnerable consumer in each environment, which is the same recognition of the problem but from a different angle. I do not subscribe to this view. It’s far too subjective and fails to maintain the proper balance between business rights and a consumer’s accountability for their own purchases. This is an interesting academic argument, but this thought exercise doesn’t have a place in the real world, where the law must balance economic interests and legal clarity with consumer rights and protections.
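Leiser's mirror-server proposal, described above, amounts to an auditable record of every personalisation decision. A minimal sketch of what such a decision log could capture, with all field and function names hypothetical:

```typescript
// Hypothetical audit record for the "mirror server" idea: for each request,
// log what the personalisation system knew, what it chose, and what was
// actually served, so a regulator can reconstruct the transaction later.
interface DecisionRecord {
  timestamp: string;        // when the interface was served
  userId: string;           // or a pseudonymous identifier
  inputsUsed: string[];     // data points the system consulted
  variantChosen: string;    // e.g. "urgency-banner"
  renderedViewHash: string; // hash of the exact interface the user saw
}

const auditLog: DecisionRecord[] = [];

function recordDecision(record: DecisionRecord): void {
  // In practice this would be append-only storage the business cannot alter.
  auditLog.push(record);
}
```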