Arielle Pardes
In 2010, the Electronic Frontier Foundation was fed up with Facebook’s pushy interface. The platform had a way of nudging people into giving up more and more of their privacy. The question was what to call that coercion. Zuckermining? Facebaiting? Was it a Zuckerpunch? The name that finally stuck: Privacy Zuckering, or when “you are led to publicly share more data about yourself than you intended.”
A decade later, Facebook has weathered enough scandals to know that people are wary of these manipulations; last year it even paid a $5 billion fine for misleading consumers about their ability to control the privacy of their personal data. And yet researchers have found that Privacy Zuckering and other murky tactics are still alive and well online. They prevail especially on social networks, where the controls for your privacy are somehow more confusing than ever.
Here’s an example: a recent Twitter pop-up told users “You’re in control” before inviting them to “turn on personalized ads” to “improve the ones you see” on the platform. Don’t want doomscrolling-targeted ads? Fine. You can “keep less relevant ads.” Phrasing like that makes Twitter sound like a sore loser.
Dark Patterns is made possible with support from the Omidyar Network. All WIRED content is editorially independent and produced by our journalists.
It’s an old trick, actually. Facebook used it in 2010, when it let users opt out of allowing Facebook’s partner websites to collect and log their public Facebook information. Anyone who rejected this “instant personalization” saw a pop-up asking, “Are you sure? Allowing instant personalization will give you a richer experience as you browse the web.” Until recently, Facebook also warned people against disabling its facial recognition features: “If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you.” The button to turn the setting on is bright and blue; the button to keep it off is a duller gray.
Researchers call these design and wording choices “dark patterns,” a term applied to user experiences that try to manipulate your choices. When Instagram repeatedly asks you to “turn on notifications” and doesn’t offer an option to decline? That’s a dark pattern. When LinkedIn shows you part of an InMail message in your email but forces you onto the platform to read more? Also a dark pattern. When Facebook redirects you to “log out” when you try to deactivate or delete your account? Dark pattern again.
Dark patterns appear throughout the web, nudging people to subscribe to newsletters, add items to their shopping carts, or sign up for services. But, says Colin Gray, a human-computer interaction researcher at Purdue University, they are most insidious “when you’re deciding what privacy rights to give away, what data you’re willing to part with.” Gray has been studying dark patterns since 2015. He and his research team have identified five basic types: nagging, obstruction, sneaking, interface interference, and forced action. All of them show up in privacy controls. He and other scholars in the field have noticed a cognitive dissonance between Silicon Valley’s grand gestures toward privacy and the tools for modulating those choices, which remain filled with confusing language, manipulative design, and other features built to coax out more data.
These privacy games are not limited to social media. They are endemic to the web at large, particularly in the wake of Europe’s General Data Protection Regulation. Since the GDPR went into effect in 2018, websites have been required to ask people for consent to collect certain types of data. But some consent banners simply ask you to accept the privacy policies, with no option to say no. “Some research has suggested that upwards of 70 percent of consent banners in the EU have some kind of dark pattern built in,” says Gray. “That’s problematic when you’re ceding really broad rights.”
Recently, sites like Facebook and Twitter have started giving users more granular control over their privacy settings. Facebook’s recently revamped privacy checkup, for example, walks you through a series of choices with brightly colored illustrations. But Gray points out that the defaults are often set with less privacy in mind, and the sheer number of checkboxes can overwhelm users. “If you have a hundred boxes to tick, who’s going to do that?” he says.
Last year, US senators Mark Warner and Deb Fischer introduced a bill that would ban such “manipulative user interfaces.” The Deceptive Experiences To Online Users Reduction Act, DETOUR for short, would make it illegal for sites like Facebook to use dark patterns when it comes to personal data. “Misleading prompts to click the ‘OK’ button can transfer your contacts, messages, browsing activity, photos, or location data without your knowledge,” Senator Fischer said when the bill was introduced. “Our bipartisan legislation seeks to curb the use of these deceptive interfaces and build trust online.”
The challenge is that it can be surprisingly hard to pin down what counts as a dark pattern. “Every design is persuasive to some degree,” says Victor Yocco, author of Design for the Mind: Seven Psychological Principles of Persuasive Design. By definition, design encourages someone to use a product in a particular way, which is not inherently bad. The difference, Yocco says, is “if you set out to lie to people, you’re an asshole.”
Gray has struggled to draw the line between dark patterns and merely bad design.
“It’s an open question,” he says. “Do you define them by the intent of the designer or by the perception of the user?” In a recent paper, Gray examined how users of the subreddit r/AssholeDesign make moral judgments about design choices. Examples from the subreddit range from the benign (automatic updates in Windows software) to the truly nasty (a Snapchat ad that makes it look as if a hair has fallen on your screen, tricking you into swiping up). After combing through the examples, Gray built a framework that defines asshole design as design that removes the user’s choice, dodges responsibility, or steers users toward a decision that benefits the company rather than them. Asshole designers also rely on tactics like deceptive statements and two-faced interactions, such as an ad blocker that itself serves ads.
Many of these dark patterns are used to juice metrics that signal success, such as user growth or time spent. Gray cites the example of the smartphone game Trivia Crack, which nudges users to play another game every two to three hours. Spammy notifications like these have been used by social media platforms for years to induce the kind of FOMO that keeps you hooked. “We know that if we give people things like likes or status updates, they’re more likely to come back and check them again and again,” Yocco says. “That can lead to compulsive behavior.”
The darkest patterns of all appear when people try to leave these platforms. Try deactivating your Instagram account and you’ll find it surprisingly difficult. For one thing, you can’t do it from the app at all. From the desktop version of the site, the setting is buried inside “Edit Profile” and comes with a series of interstitial prompts. (Why are you deactivating? Too distracted? Here, take a look at turning off notifications. Just need a break? Consider logging out instead.)
“It creates friction between you and your goal; it makes things harder for you,” says Nathalie Nahai, author of Webs of Influence: The Psychology of Online Persuasion. Years ago, when Nahai deleted her Facebook account, she encountered a similar set of manipulation tactics. “They used the relationships and connections I had to say, ‘Are you sure you want to leave? If you go, you won’t get any updates from this person,’ and then they show photos of some of your close friends. They use language that, in my opinion, is coercive,” she says. “They make it psychologically painful for you to leave.”
Worse, Gray says, research shows that most people don’t even realize when they’re being manipulated. But in one study, he says, when people were primed beforehand with language describing what manipulation looks like, twice as many users were able to correctly identify the dark patterns. That, at least, offers hope that greater awareness can give users back some measure of control.