When the Algorithm Knows You Better Than the Law: Automated Profiling and the Limits of EU Consumer Protection
Automated profiling has become a defining feature of online marketplaces, shaping how consumers encounter prices, offers, and choices. In this blog post, Carolina Lisboa examines how profiling challenges core assumptions of EU consumer law, particularly in relation to autonomy, vulnerability, and equality.
Introduction
Automated profiling has become a defining feature of online marketplaces. Platforms increasingly predict what consumers are likely to click on, select, or purchase, often before consumers have actively searched for a product, compared options, or articulated preferences themselves. This is not because smartphones are secretly listening, as suggested in popular discussions such as the Reply All podcast episode on Facebook “spying,” where eerily well-timed ads are mistaken for audio surveillance. Rather, it is because automated profiling systems continuously infer, predict, and act upon behavioural signals such as browsing patterns, scrolling behaviour, interaction speed, location, and time of day. Through these mechanisms, platforms do not merely observe what consumers do; they increasingly appear to shape how consumers decide.
This development exposes structural limits in how EU consumer law conceptualises autonomy, vulnerability, and equality. These concepts play a central role in EU consumer law (Reich 2017): autonomy is primarily protected through information and informed choice, vulnerability through the targeted protection of specific consumer groups, and equality through the assumption that consumers participate in the same, non-personalised market environment. While consumer protection remains largely organised around transparency, information duties, and the point at which consumers make purchasing decisions, profiling increasingly reshapes the behavioural conditions under which consumer decisions are formed, shifting platforms from largely observational intermediaries to active architects of consumer choice.
In this blog post, I use automated profiling as a lens to examine how EU consumer law conceptualises and seeks to protect consumer autonomy, how it identifies and addresses vulnerability, and how it assumes a baseline of equality in market participation. I argue that profiling does not simply influence individual purchasing decisions, but reshapes the behavioural conditions under which consumers participate in markets, thereby stress-testing some of the core assumptions underpinning EU consumer law.
Automated profiling as behavioural governance
Automated profiling marks a structural shift from passive market analysis to active behavioural governance. Platforms no longer rely primarily on what consumers explicitly disclose, but on continuous streams of behavioural data, including browsing patterns, scrolling behaviour, interaction speed, time of day, device use, and navigation paths (Van Heusden 2025). These behavioural signals are used to generate inferred information about consumers: not only what they might want, but how they are likely to feel, decide, and respond at particular moments. Profiling systems estimate traits such as willingness to pay (Grochowski et al. 2022), impulsiveness, stress (Peppet 2014), and susceptibility to urgency cues (Hacker 2021), often without consumers’ awareness. This shift is legally significant because EU consumer law largely regulates market influence at the level of information and transactions, rather than at the level of system design.
Crucially, these inferences are not merely descriptive. They are actively deployed to shape the consumer’s digital environment in real time. Rankings, recommendations, prices, defaults, and interface elements adapt continuously, recalibrated through feedback loops that learn from each interaction (Susser et al. 2025). What emerges is not simply personalised advertising, but behavioural optimisation: the continuous configuration of the choice environment to steer behaviour. Influence increasingly operates through the architecture of choice itself, rather than through discrete or visible persuasion techniques, often long before any concrete transactional decision is made (Jannach et al. 2021).
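The feedback loop described above can be sketched in highly simplified form. The following Python fragment is a hypothetical illustration only, not any platform's actual system: the signal names, weights, and thresholds are invented for exposition, but the structure (behavioural signals feeding a momentary susceptibility estimate, which in turn reconfigures interface elements such as urgency cues, defaults, and rankings) mirrors the mechanism discussed in this section.

```python
from dataclasses import dataclass

@dataclass
class BehaviouralSignals:
    # Hypothetical per-session signals of the kind discussed above
    scroll_speed: float       # normalised 0..1; fast scrolling may signal impatience
    hour_of_day: int          # 0..23; late-night sessions as a proxy for fatigue
    clicks_per_minute: float  # interaction speed

def susceptibility_score(s: BehaviouralSignals) -> float:
    """Toy inference of momentary susceptibility (weights are invented)."""
    late_night = 1.0 if s.hour_of_day >= 23 or s.hour_of_day < 6 else 0.0
    score = (0.4 * s.scroll_speed
             + 0.3 * min(s.clicks_per_minute / 10.0, 1.0)
             + 0.3 * late_night)
    return min(score, 1.0)

def configure_choice_environment(score: float) -> dict:
    """Adapt interface elements to the inferred state (illustrative only)."""
    susceptible = score > 0.6
    return {
        "show_countdown_timer": susceptible,  # urgency cue deployed at 'weak' moments
        "default_option": "premium" if susceptible else "standard",
        "ranking": "margin-optimised" if susceptible else "relevance",
    }

# A late-night, fast-scrolling session is scored as highly susceptible,
# and the choice environment is reconfigured accordingly.
signals = BehaviouralSignals(scroll_speed=0.9, hour_of_day=1, clicks_per_minute=12.0)
ui = configure_choice_environment(susceptibility_score(signals))
print(ui)
```

The legally salient point the sketch makes concrete is that no single step here looks like a discrete "commercial practice": the influence lies in the continuous re-scoring and reconfiguration itself, upstream of any transactional decision.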
Autonomy beyond the moment of transaction
In EU consumer law, consumer autonomy is primarily operationalised through the idea that well-informed consumers can make free and rational purchasing decisions. Under Article 5(1) of the Unfair Commercial Practices Directive (UCPD), a commercial practice is unfair where it is contrary to the requirements of professional diligence and materially distorts, or is likely to materially distort, the economic behaviour of the average consumer it reaches, or of the average member of a targeted consumer group. In practice, this assessment turns on whether a practice causes the consumer to make a transactional decision they would not otherwise have taken. In doing so, it presupposes that influence is visible and intelligible, and that it operates at, or immediately before, the moment of decision, allowing a causal link to be drawn between a particular practice and a particular choice.
Automated profiling disrupts these assumptions. In profiling-driven environments, influence operates upstream. Platforms structure what consumers see, when they see it, and how options are framed, often long before a purchase is contemplated. By the time a consumer reaches a 'buy' button, the decision is frequently the endpoint of a process that has already been systematically engineered (Namysłowska 2025).
This temporal displacement matters legally. If decisive influence occurs before the transactional moment, the UCPD risks capturing only the effects of influence rather than its source. Autonomy may already have been compromised through the design of the choice architecture itself (Van Heusden 2025), even if no single practice at the point of sale appears misleading or aggressive. Traditional consumer protection tools therefore struggle in this context (Namysłowska 2025). Information duties and transparency requirements presuppose that consumers can detect, understand, and resist influence once it is disclosed. Profiling undermines this premise. The mechanisms shaping behaviour are opaque, adaptive, and often operate below conscious awareness (Hacker 2021). Where influence is embedded in the continuous optimisation of the choice environment, legal intervention focused on the final transaction arrives too late.
From status-based to computed vulnerability
EU consumer law has long relied, and continues to rely, to a significant extent, on a distinction between the average consumer and vulnerable consumers, the latter understood in status-based terms linked to stable characteristics such as age or infirmity (Article 5(3) UCPD). This approach assumes that vulnerability is relatively stable and identifiable in advance, allowing the law to trigger heightened protection for predefined groups. Automated profiling fundamentally disrupts this logic. In profiling-driven markets, vulnerability is no longer something one is; it is something that can be inferred moment by moment from behavioural context, with the same consumer being treated as more or less susceptible at different times depending on how they interact with the system (Sax et al. 2014).
Rather than targeting fixed groups, profiling systems identify situational weaknesses in any consumer. By analysing behavioural data, platforms infer transient internal states, such as fatigue, stress, or distraction, and determine when resistance to influence is likely to be lowest. Choice architectures are then tailored to exploit these moments (BEUC 2025). Vulnerability thus becomes dynamic and universal. It is situational, triggered by temporary states (European Commission 2016); relational, arising from asymmetric platform–consumer relationships (Helberger et al. 2022); and architectural, produced through interface design that amplifies predicted weaknesses (Galli 2022). Under the influence of automated profiling, the distinction between the average and the vulnerable consumer has thus begun to collapse. Digital vulnerability has become the rule rather than the exception.
Fragmented markets and behavioural inequality
Profiling also unsettles assumptions about equality in consumer markets. Traditional non-discrimination frameworks presuppose a shared informational environment in which consumers can compare offers and treatment, an assumption on which consumer law implicitly relies to enable comparison, informed choice, and market discipline. Profiling erodes this premise. Consumers increasingly navigate personalised micro-markets, where prices, rankings, and offers are tailored to behavioural profiles, creating epistemic fragmentation: consumers have little visibility into alternatives shown to others, or into the criteria governing their own treatment.
Within these fragmented markets, inequality is generated not by who consumers actually are, such as their age or social status, but by how they are algorithmically inferred to behave, with consumers treated differently based on predicted responsiveness or profitability rather than identity. Profiling enables forms of behavioural discrimination, including personalised pricing and ranking, based on traits such as impulsiveness or willingness to pay. These traits fall outside protected grounds under EU non-discrimination law, rendering resulting inequalities largely invisible from a legal perspective (Zuiderveen Borgesius 2018).
Why this matters
Automated profiling does not simply move faster than regulation. It exposes structural limits in how EU consumer law conceptualises autonomy, vulnerability, and equality. EU consumer law therefore faces a fundamental question: should it continue to focus primarily on transactions and information, or must it begin to engage with the behavioural infrastructures that increasingly govern consumer decision-making? While existing tools are unlikely to disappear, the analysis suggests that safeguarding meaningful consumer choice in profiling-driven digital markets will require consumer protection to look beyond individual transactions and address how choice environments are designed and optimised in the first place. In this respect, it will be particularly interesting to see how the EU approaches these issues in the context of the proposed Digital Fairness Act (DFA), which may reveal how far it is willing to move from a transaction-focused model towards addressing behavioural influence and profiling as matters of consumer protection.