Deceptive by Design

The Visual Rhetorical Mechanism of Dark Patterns

by Eric York

Such is the "duplicity" of mêtis which, giving itself out to be other than it is, is like those misleading objects, the powers of deception which Homer refers to as dólos: the Trojan Horse, the bed of love with its magic bonds, the fishing bait are all traps which conceal their inner deceit beneath a reassuring or seductive exterior. Cunning Intelligence in Greek Culture and Society
—Marcel Detienne and Jean-Pierre Vernant (1978/1991, p. 23)
Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the "content" of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.Understanding Media: The Extensions of Man
—Marshall McLuhan (1964/2013, p. 24)

Introduction

Research into dark patterns—the blueprints of deceptive interfaces—has increased recently across a number of fields, including computer science, design, ethics, finance, security, defense, politics, law, digital rhetoric, and more. Perhaps driven by their prominent role in recent world events and their rapid proliferation in online spaces, this interest spans academic and nonacademic discourse alike.

In the decade since the term was coined, dark patterns have gone from a humble .org domain and a hashtag used by the user experience (UX) design community on Twitter (mainly for "naming and shaming" big corporations) to an active interdisciplinary research site with a steadily accumulating body of literature relevant to a growing number of fields. As a result, there is a pressing need to situate dark patterns historically and contextualize them as a distinct form of interface communication in order to better understand not only how they work, but also their scope and influence.

By analyzing four kinds of deceptive interface, examining archetypes and variants of each, I try to bring into view the dark patterns that informed their composition. The analysis draws on Gestalt psychology, especially the visual-perceptual theory described in Rudolf Arnheim's (1974/1991) Art and Visual Perception, joined with the concept of rhetorical cunning, or mêtis, seen by feminist and embodied rhetorics as a transgressive and creative intelligence used in crossing boundaries, breaking bonds, and setting and evading traps.

The analysis reveals that deceptive interfaces exploit perceptual and interpretive lacunae—gaps in how we process what we see—to mislead users. These gaps are well known to designers, and in many cases the same principles that enable effective design also enable malicious design. Deceptive interfaces are like interactive optical illusions deployed with malicious intent, and so I conceptualize them as visual–symbolic traps that, when triggered, turn users against themselves. Much like real traps, they consist of a disguise and a snare, although in this case constructed of visual and rhetorical materials in virtual spaces.

These two theoretical strands, the visual and the rhetorical, are joined by the relationship between them. If dark patterns are the blueprints or plans for creating visual–symbolic traps, then mêtis, cunning intelligence, informs the mind that conceives of the trap. When the trap is sprung, it is sprung in the interface, a medium governed by the Gestalt principles that explain how people interpret what they see, and it is triggered by human eyes and hands. The cunning that devises the trap depends on understanding the four-dimensional visual nature of the interface as well as the embodied human nature of the person it targets.

I selected the first epigraph for this reason. Like dólos—tricks—of which the Trojan Horse is exemplary, dark patterns rely on the illusion of harmlessness to conceal a malign intent. This webtext argues that the best way to understand dark patterns is to see them as the traps they are. To defeat them, we must become familiar with their methods of disguise and their mechanisms of operation, and in so doing dissolve the illusion on which all deceptive interfaces ultimately depend, rendering them powerless.

History in Brief

The term "dark patterns" entered the mainstream sometime in 2016, but had been circulating in design discourse for several years prior. The events propelling the term to national prominence involved a digital campaign strategy that saw Donald Trump win the U.S. presidency and revealed that major U.S. technology companies, including Google and Facebook, were complicit in a conspiracy led by Russian state-sponsored actors to influence the election. These events showed the world not only how social media could be efficiently leveraged for global strategy but also the central role that design plays in deception online.

Dark patterns had been a topic in design discourse since 2010 when Harry Brignull coined the term, started the dark patterns hashtag on Twitter, and launched darkpatterns.org, a website dedicated to "naming and shaming" offenders (Brignull, n.d.). Within a couple of years, articles started appearing in tech-centric outlets like A List Apart (Brignull, 2011) and The Verge (Brignull, 2013), introducing the concept to wider audiences.

The pace stepped up suddenly in the middle of the decade. In "The Year Dark Patterns Won," journalist Kelsey Campbell-Dollaghan (2016) wrote that during the election, "dark patterns … [were] wielded as weapons against democracy" (para. 5), demonstrating how "the details of the interface used by both Facebook and Google" (para. 7) were employed to mislead users. She ultimately concluded that "both companies lent legitimacy to lies through design" (para. 7). This probably didn't come as a surprise to those who had been following the election, who were aware of dark patterns, or, frankly, who had been using the internet at all over the past thirty years. As Safiya Umoja Noble (2018) pointed out, she had "argued for years about the harm toward women and girls … circulating through platforms like Google," but "no one has seemed to care until it threw a presidential election" (p. 183).

Unfortunately, as with so much technology policy, legislation has failed to address the issue. The (poorly named) Deceptive Experiences To Online Users Reduction Act (or DETOUR), introduced in 2019 by a bipartisan group of senators, was ostensibly intended to curb the worst excesses of industry. But because it would have put the biggest offenders (Apple, Amazon, Facebook, and Google) in charge of determining the standards by which dark patterns would be identified, and by implication the criteria for banning them, it likely wouldn't have done much. In any case, according to the Library of Congress, the bill was read twice and referred to committee ("DETOUR Act," 2019), where it has since languished.

Content and Medium

Today, in the wake of a global pandemic that saw the rampant spread of dis- and misinformation about both virus and vaccine, and amidst ongoing atrocities in Europe wrought by Russia's invasion of Ukraine (predicated on yet another campaign of disinformation), people may be understandably less concerned with dark patterns in interfaces and more with how easy it seems to weaponize information on the internet. Why does it matter if an app or website tricks people into sharing their personal data when state governments deploy misinformation at a global scale to justify war crimes and cover up attempted coups?

It matters because the same mechanism underlying the deceptive interfaces that trick people into downloading malware (for example) also underlies the deceptive communication strategies employed in information warfare—both are designed according to the same dark patterns, both share the same characteristics, both feature the same fundamental tactics. And in some cases the source is even the same.

For example, according to the most recent Chainalysis Crypto Crime Report (2022), "roughly 74% of ransomware revenue in 2021 … [was] highly likely to be affiliated with Russia" (p. 123). I do not mean to express a bias against the state of Russia, a question well beyond the scope of this webtext, but merely to observe how these layers of deception, from criminal exploits to state disinformation, intersect. This is why I chose the second epigraph. Fake news, dis- or misinformation, alternative facts, conspiracy theories, and the like: all are the juicy bits of meat that distract the watchdogs of our minds.

In 1964, decades before the dawn of the internet age, Marshall McLuhan (1964/2013) reminded us that "the content or uses … are as diverse as they are ineffectual" and that rather "it is the medium that shapes and controls the scale and form of human association and action" (p. 17). McLuhan warned of the media's invisibility and how "the 'content' … blinds us to the character" (p. 17), while all the while the medium itself is "totally radical, pervasive, and decentralized" (p. 18). In other words, and much more famously: "the medium is the message" (p. 17).

When it comes to dark patterns, we must keep this important insight firmly in mind. It doesn't matter whether the pattern generates an interface for tricking someone into buying an unwanted subscription or whether it causes the toppling of a government. It is not the uses to which the technology is put that matter, but rather "the new scale that is introduced into our affairs" (p. 16), the extension it provides to our capability. What dark patterns are used for, and by whom, is ultimately less important than understanding how the sheer fact of their existence changes the world.

An Illustration

The moment one succumbs to a deceptive interface is highly instructive: It constitutes the edge case when one's interpretive ability fails spectacularly, when what one thought the interface to be turns out to be exactly wrong. Consider the deceptive interface in the figure below. A human hair has been meticulously Photoshopped to appear to sit on the screen of the user's device, prompting them to wipe it off and, in so doing, to trigger the link.

[Image: a fake hair Photoshopped onto a sneaker advertisement]
Figure 1: "Fake Hair." Posted to Twitter by @hydrosound (Martin, 2017), this deceptive interface consists of a fake hair Photoshopped onto a sneaker advertisement so it looks like it sits atop the screen. When the user tries to wipe it away, it triggers the link wrapped around the whole image.

Even the most perceptive user might instinctively try to brush the hair away and thereby trigger the malicious link. This is not persuasion, but rather a kind of hacking or exploit. What at first seemed innocent is now revealed as sinister: the elaborately constructed sneaker ad is just a ruse to deliver the hair. Some percentage of the time, the hair elicits the instinctive response.

This example is instructive because it illustrates four important features of deceptive interfaces shared by all dark patterns. First, they rely on disguise of some kind; in this case the illusion of the hair disguises its purpose. Second, they change their form; in this case the image is actually a hyperlink. Third, they await a specific moment in time; in this case a careless moment. And finally they reverse the user’s intent; in this case causing someone to tap a link they didn't intend to tap.

This example is highly refined, turning on a single visual trick that exploits a single perceptual weakness: the difficulty human eyes have estimating depth. The trick generates a somewhat predictable human response—brushing the hair away—and in that moment what was virtual, an image rendered in bits on a screen, becomes physical, as it enters the user's eyes and triggers their touch. Because they are interactive events, deceptive interfaces always blend the virtual and the embodied.
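To make the mechanism concrete, the following is a minimal sketch, in TypeScript against the browser DOM, of how a trap of this kind could plausibly be assembled. It is an illustration under stated assumptions, not the original ad's code (which is not public): the file names, the destination URL, and the gesture handling are hypothetical. The comments map the construction onto the four features identified above.

```typescript
// Disguise: an ordinary-looking ad image with a stray hair layered on top,
// positioned so it appears to rest on the glass of the screen.
const ad = document.createElement("img");
ad.src = "sneaker-ad.png";            // hypothetical: the reassuring exterior

const hair = document.createElement("img");
hair.src = "hair-overlay.png";        // hypothetical: transparent PNG of a hair
hair.style.position = "absolute";     // floats above the ad; no depth cues
hair.style.pointerEvents = "none";    // the hair itself never receives input

// Form change: the whole composition is really a hyperlink.
const trap = document.createElement("a");
trap.href = "https://example.com/landing";  // hypothetical destination
trap.style.position = "relative";
trap.style.display = "inline-block";
trap.append(ad, hair);

// Timing and intent reversal: the careless moment when the user tries to
// wipe the hair away is just a touch on the link, so the cleaning gesture
// becomes a click. Some variants also capture swipes the browser would not
// otherwise treat as clicks.
trap.addEventListener("touchend", (event) => {
  event.preventDefault();
  window.location.assign(trap.href);  // the wipe itself completes the trap
});

document.body.append(trap);
```

The crucial move is that the hair layer ignores pointer events while the entire composition is a link: the wiping gesture has nowhere to land except on the snare.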

As RhetOps

In the introduction to their recent collection, RhetOps: Rhetoric and Information Warfare, Jim Ridolfo and William Hart-Davidson (2019) wrote of the "dark side" of digital composition and argued that the "production and proliferation of mass disinformation" is a consequence of the "ability of our disciplinary knowledge to become weaponized" (p. 4). Dark patterns represent just such a weaponization of rhetoric and are involved in activities ranging from the highly questionable to the clearly criminal.

Due to their interactive and visual–symbolic structure, as well as the only-ever-partially visible network within which they operate, dark patterns, like other complex interactive media, present challenges to traditional analytic methods. As Angie Mallory (2019) noted in her study of the rhetoric–operations divide, such media require analysts to "devise a way to analyze … by layers" (p. 206). Dark patterns, thankfully, are not nearly as complicated as the viral extremist propaganda videos Mallory wrote about, but they nevertheless share characteristics of interactivity, deception, and networked distribution.

As I hope this overview has shown, dark patterns are of concern to several fields of study. Scholars largely agree that dark patterns constitute significant threats to individuals, to organizations, and to governments. And the thread running through from end to end is profoundly rhetorical, a kind of forced persuasion that slips past our defenses. In the next section I follow this thread into theories of visual perception and rhetorical cunning and review the literature on dark patterns.