Follow the Money: How Digital Ads Subsidize the Worst of the Web

Gilad Edelman

There’s a lot going on this summer. The presidential race is in full swing, civil rights protesters are still in the streets, the pandemic is taking a turn for the worse, Hamilton is on Disney+. In the midst of all those current events, and partly because of them, companies, activists, and lawmakers are turning their attention to an issue that is less dramatic but still important: digital advertising, the underlying business model of the open internet.

The most prominent example is the Stop Hate for Profit campaign, which has persuaded top advertisers, including Verizon and Unilever, to suspend spending on Facebook until the company takes dramatic steps to address the spread of hate speech on its platform. But how exactly does hate turn a profit? The answer goes deeper than Facebook’s content policies.

Re:Targeting is made possible by the Omidyar Network. All WIRED content is editorially independent and produced by our journalists.

“So many of these debates, when you dig into their technical causes, come down to ad tech,” said Aram Zucker-Scharff, director of ad engineering for The Washington Post’s research, experimentation, and development team. “A lot of the problems people are talking about on the web right now are problems that arise from detailed and persistent third-party tracking of user behavior across sites.”

There’s a lot to unpack there. Over the coming weeks, WIRED will be exploring the various ways in which the modern digital advertising market encourages the proliferation of harmful, divisive, and misleading online content, while at the same time undermining real journalism. To begin, it helps to understand the three main categories of ad technology and their place in the online garbage food chain.

Companies like Facebook and Twitter make nearly all of their money from advertising. Hence the Stop Hate for Profit boycott: the loss of ad revenue, the thinking goes, is the one thing that might push the world’s biggest social network to change how it handles racism and misinformation. But what exactly is the relationship between advertising and social media’s bad actors? It’s not as though white supremacists on Facebook earn money from their posts. The economics are a bit more complicated.

Facebook critics have long argued that while the platform doesn’t directly monetize hate or misinformation, its reliance on microtargeted advertising encourages such material to flourish. A social network that is free for users makes money in proportion to the time those users spend on the platform. More time means more opportunities to show ads and to gather data that helps advertisers target the right people. And so social media companies have long designed their platforms to keep people engaged. One thing that reliably captures people’s attention, however, is polarizing, incendiary content. That shouldn’t be surprising; recall the old journalism mantra, “If it bleeds, it leads.” An algorithm that optimizes for user engagement can therefore end up prioritizing content that enrages people, or tells them what they want to hear, even when it isn’t true. So even if advertising doesn’t directly fund false or divisive content, it helps keep people on the platform. Facebook’s own internal research concluded, for example, that “64% of all extremist group joins are due to our recommendation tools.”
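To see concretely why engagement and accuracy are different things, consider a deliberately stripped-down feed ranker, written here in TypeScript. This is a hypothetical sketch, not Facebook’s actual system; the field names and weights are invented for illustration.

```typescript
// Hypothetical engagement-optimized ranker (illustrative only).
interface Post {
  id: string;
  predictedClicks: number;    // model's estimate of clicks if shown
  predictedComments: number;  // outrage tends to drive this number up
  predictedShares: number;
  // Note what is absent: nothing here measures accuracy or harm.
}

function engagementScore(post: Post): number {
  // Weighted sum of predicted interactions; the weights are made up.
  return 1.0 * post.predictedClicks +
         3.0 * post.predictedComments +
         5.0 * post.predictedShares;
}

function rankFeed(candidates: Post[]): Post[] {
  // Highest predicted engagement first, regardless of whether the
  // content is true, divisive, or incendiary.
  return [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
}
```

Nothing in that score penalizes falsehood or incitement; if outrage drives comments and shares, outrage rises to the top.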

The other issue concerns the content of the ads themselves, especially political ads. The same features of a platform built around engagement and microtargeting can make paid propaganda unusually powerful. In June, for example, Facebook took down a Trump campaign ad featuring an inverted red triangle reminiscent of a Nazi symbol. Data from Facebook’s ad library shows that the campaign tested several variations of the ad with different imagery; the triangle appeared to perform best. In other words, Facebook’s algorithms optimized for an ad that Facebook would eventually find in violation of its own policies.

“Facebook’s entire business model is the optimization of a robust data-mining operation that spans much of our lives in order to microtarget ads against the cheapest and most ‘engaging’ content possible,” said Jason Kint, CEO of Digital Content Next, a trade organization for publishers (including WIRED’s parent company, Condé Nast), in an email. “Unfortunately, the content that tends to gain the most velocity and succeed through Facebook’s algorithms swims in the same pool as misinformation and hate.”

Facebook disputes this. In a recent blog post, the company’s vice president of global affairs and communications, Nick Clegg, insisted that “people use Facebook and Instagram because they have good experiences; they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.”

Facebook may well want to remove hateful content. But it’s hard to police billions of posts a day, and automated systems still struggle with categories like hate speech. The boycott isn’t likely to change that. It may not even hurt the company’s bottom line much, since most of the ads on Facebook come not from giant corporations but from small and medium-sized businesses you’ve never heard of. (It’s also unclear how seriously companies are taking the boycott. HP, for example, added its name to the list but kept buying new ads on Facebook and Instagram during the first week of July.)

Facebook isn’t just a victim of its own enormous success, however; the company also makes policy decisions that facilitate the spread of misinformation. Take its decision to exempt politicians from its fact-checking policies, including in ads, which means elected officials and candidates can lie outright on the platform and target those lies at specific slices of the electorate. (Under pressure, Mark Zuckerberg recently announced that Facebook would remove posts from politicians that incite violence or attempt to suppress the vote.) In response, a number of critics have suggested that Facebook join Google and its YouTube subsidiary in banning the microtargeting of political ads. That way, false claims could at least be subject to public scrutiny rather than delivered straight to a narrow audience. Several House Democrats have introduced bills that would mandate exactly that. (Twitter, for its part, bans political ads altogether.)

“What I think makes microtargeting so pernicious in the political context is that you can target people so granularly, without the benefit of the argument around it, or the counterargument that exists if someone puts the ad on television, for example,” said David Cicilline, chair of the House antitrust subcommittee and author of one of the bills, in an interview in May.

Social media gets most of the attention, but if you really want to follow the money behind online hate and misinformation, you need to understand programmatic display advertising.

According to a new report from the Global Disinformation Index, tens of millions of dollars in ad spending will flow this year to sites that have published large volumes of coronavirus misinformation and conspiracy theories. The report includes screenshots of some jarring juxtapositions: a Merck ad appearing on the fringe right-wing site World News Daily alongside the headline “Tony Fauci and the Trojan Horse of Tyranny”; a Dell ad atop a Gateway Pundit article blaming “faulty models, junk science, and Dr. Fauci” for destroying the economy; even an ad for the British Medical Association next to a headline suggesting that “mandatory vaccination” will genetically modify people and render them no longer human.

How can this happen? In a word: automation.

In the 1990s and early 2000s, digital ads (banner ads, pop-ups, and so on) were mostly just the digital analogue of print ads: a branded space purchased directly from a website. Today, that’s rare. What has risen in its place is what’s called programmatic display advertising. With programmatic, ads are no longer pegged to particular sites. Instead, they target specific types of users, based on attributes like age, gender, and location, as well as creepier ones, like what their browsing history reveals about their interests. Advertisers now place their ads into an automated system with instructions to reach a certain audience, wherever it may be. They can tell the system to keep their ads away from certain sites and types of content, but the results are spotty.
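As a rough illustration of what those “instructions to reach a certain audience” look like, here is a hypothetical campaign definition. The shape, field names, and values are invented; every real ad-buying platform has its own format. The point is that the advertiser describes who it wants to reach, not where the ad will appear, and its main protection is a blocklist.

```typescript
// Hypothetical programmatic campaign spec (illustrative only).
interface ProgrammaticCampaign {
  creativeUrl: string;        // the ad itself
  maxBidUsd: number;          // the most the brand will pay per impression
  audience: {
    ageRange: [number, number];
    genders: string[];
    locations: string[];      // e.g. states or metro areas
    interests: string[];      // inferred from browsing history
  };
  blockedDomains: string[];   // the advertiser's main safety lever,
                              // applied unevenly in practice
}

const exampleCampaign: ProgrammaticCampaign = {
  creativeUrl: "https://cdn.example-brand.test/summer-sale.png",
  maxBidUsd: 0.004,           // a fraction of a cent per impression
  audience: {
    ageRange: [25, 44],
    genders: ["any"],
    locations: ["US-OH", "US-PA"],
    interests: ["home improvement", "outdoor cooking"],
  },
  blockedDomains: ["known-bad-site.test"],
};
```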

Programmatic display advertising is the economic fuel of the “free” internet. Its rise has made it easier than ever to set up your own site and immediately start earning money from traffic. Unfortunately, the same convenience that lets a food blogger turn readers into a revenue stream also lets someone set up a site pushing hate speech or propaganda and monetize it, without any advertiser explicitly choosing to pay them.


“Before ad tech, it was much harder for them to make money,” says Augustine Fou, an ad fraud consultant. Automation changed everything. The key shift, Fou explains, was “the ease with which you can copy and paste a few lines of code into your site and start running ads and making money. Before programmatic, you had to get an advertiser or a media buying agency to give you money.” Ad tech tools let brands block certain sites and types of content, but many advertisers don’t use them.
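The “few lines of code” Fou describes usually amount to a snippet that loads an ad network’s script and marks part of the page as sellable space. The version below is a generic, hypothetical sketch in TypeScript, not any real network’s tag; the URL and IDs are made up.

```typescript
// Generic, hypothetical publisher ad tag (not a real ad network's API).
const AD_NETWORK_SCRIPT = "https://ads.example-network.test/tag.js"; // invented URL

function installAdSlot(slotElementId: string, publisherId: string): void {
  // Load the network's script, which handles the auctions behind the scenes.
  const script = document.createElement("script");
  script.async = true;
  script.src = `${AD_NETWORK_SCRIPT}?pub=${encodeURIComponent(publisherId)}`;
  document.head.appendChild(script);

  // Mark a div on the page as sellable ad space.
  const slot = document.getElementById(slotElementId);
  if (slot) {
    slot.setAttribute("data-ad-slot", publisherId);
  }
}

// A food blog and a conspiracy site can call this identically.
installAdSlot("sidebar-ad", "pub-0000000000");
```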

The paradox of programmatic advertising is that, while it can be easy to exploit, the actual mechanism is absurdly complex: a series of real-time auctions mediated by layers of automated intermediaries. Every time a page or app running programmatic ads loads, the publisher starts by sending its available ad space, along with whatever data it has about the user viewing the page (essentially, the goods it’s selling), to its ad server. (The most popular ad server is run by Google.) The ad server sends out an auction request to advertisers who want to reach that type of user. Brands place their ads into an ad-buying platform, along with their target audience and how much they’re willing to pay. (Google also has the biggest ad-buying platform popular with small businesses.) The platform submits its bid to an ad exchange, where it competes against other bids for the target user. The winning bid then competes against the winners from all the other exchanges. Finally, the overall winner gets displayed on the publisher’s site. Believe it or not, this is a radically simplified account; the real thing is far more complicated. But, in a nutshell, that’s how a brand like Merck or Dell can end up sponsoring Covid denialism. (Two weeks ago, however, Google announced that it would begin blocking ads on stories that push debunked coronavirus conspiracy theories.)
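For readers who want to see the flow laid out, here is a toy simulation of that simplified auction chain in TypeScript. It is a sketch under heavy simplifying assumptions: the interfaces, names, and numbers are invented, and real ad servers and exchanges are vastly more elaborate.

```typescript
// Toy simulation of the simplified programmatic auction chain described above.
interface UserData { age: number; location: string; interests: string[]; }
interface Bid { advertiser: string; priceUsd: number; }

// Step 1: the publisher packages its ad space and what it knows about the user.
interface BidRequest { pageUrl: string; slotId: string; user: UserData; }

// Step 2: each ad-buying platform decides whether its campaign matches the user
// and how much to bid. (Real platforms evaluate thousands of campaigns here.)
function platformBid(request: BidRequest, advertiser: string, maxBid: number): Bid | null {
  const interested = request.user.interests.length > 0; // crude stand-in for matching
  return interested ? { advertiser, priceUsd: Math.random() * maxBid } : null;
}

// Step 3: an exchange picks the highest bid it received.
function runExchange(bids: (Bid | null)[]): Bid | null {
  const valid = bids.filter((b): b is Bid => b !== null);
  return valid.length ? valid.reduce((a, b) => (a.priceUsd >= b.priceUsd ? a : b)) : null;
}

// Step 4: the winners from all exchanges compete, and the final winner is served.
function serveAd(request: BidRequest, exchanges: (Bid | null)[][]): void {
  const finalists = exchanges.map(runExchange);
  const winner = runExchange(finalists);
  console.log(winner
    ? `Showing ${winner.advertiser}'s ad on ${request.pageUrl} for $${winner.priceUsd.toFixed(4)}`
    : `No ad filled for ${request.pageUrl}`);
}

// The auction is keyed to the user, not the page, which is how a mainstream
// brand can end up on a site it has never heard of.
const request: BidRequest = {
  pageUrl: "https://fringe-news.example.test/article",
  slotId: "top-banner",
  user: { age: 34, location: "US-TX", interests: ["pharma", "health"] },
};
serveAd(request, [
  [platformBid(request, "BrandA", 0.01), platformBid(request, "BrandB", 0.008)],
  [platformBid(request, "BrandC", 0.012)],
]);
```

The design point to notice is that the page URL rides along as metadata; the thing being bought and sold is the user.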

Technically, ads on social media can also be described as programmatic display, since they target users through automated auctions based on behavioral data. The difference is that social media ads appear within a given platform’s closed system, while what I’m calling programmatic display advertising follows you all over the web. But the two share vital similarities.

“It’s largely about creating a space where users can be targeted at the moment they’re most vulnerable,” says the Post’s Zucker-Scharff. “They’re vulnerable to the right piece of disinformation that appears in the right part of a thread or in the right place at the right time. That kind of thing only happens because they can be targeted with this type of data.”

The last big category of digital ads is search: results that advertisers pay to place above or below the organic results of a search engine. It’s a much simpler system. You go to the search engine, specify which search terms you want your ad to run against, and pay based on how many clicks your ad gets. There’s no convoluted chain of intermediaries of the kind that makes the other ad ecosystems so hospitable to foul play. And Google, which accounts for about 90 percent of the global search engine market, has fairly robust policies governing ads on its platform.
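The mechanics really are simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration of keyword-triggered, pay-per-click ads; it ignores the real ranking auction entirely, and the names and prices are invented.

```typescript
// Minimal sketch of keyword-triggered, pay-per-click search ads (illustrative only).
interface SearchAd {
  advertiser: string;
  keywords: string[];       // queries the ad should run against
  costPerClickUsd: number;  // what the advertiser pays for each click
  clicks: number;
}

function matches(ad: SearchAd, query: string): boolean {
  const q = query.toLowerCase();
  return ad.keywords.some((kw) => q.includes(kw));
}

function recordClick(ad: SearchAd): number {
  ad.clicks += 1;
  return ad.clicks * ad.costPerClickUsd; // total spend so far
}

const ad: SearchAd = {
  advertiser: "ExampleBrand",
  keywords: ["running shoes", "trail shoes"],
  costPerClickUsd: 1.5,
  clicks: 0,
};

if (matches(ad, "best running shoes for beginners")) {
  console.log(`Spend so far: $${recordClick(ad).toFixed(2)}`);
}
```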

Even search ads, however, can be used to serve up bad information on critical subjects. Two recent reports from the Tech Transparency Project, an internet watchdog group and persistent critic of Google, illustrate how. In the first, researchers analyzed the results of thousands of Google searches for information on how to get money from the federal coronavirus relief bill. They found ads with text like “Get your stimulus check – claim your cash now” that, when clicked, sent users to various kinds of scams. Because Google recently changed the format of its ads, less savvy users can easily mistake them for organic search results. Some were designed to harvest credit card numbers or other personal data; some prompted users to download browser extensions that purported to help them get their money but in fact essentially turned their computers into click machines, generating fake impressions for programmatic ads on fraudulent websites. Some simply took users to other, sketchier search engines that make money by flooding their own results with ads, a tactic known as search arbitrage that shows how bad actors can combine different types of ad tech to fund their schemes.

The second report uncovered similar shenanigans around searches on how to register to vote. In particular, scammers target people by offering to register them for a fee, even though voter registration is free under U.S. law.

“Hopefully, going forward, the company will increase its due diligence on ads like this that have predatory practices, whether it’s taking people’s money, installing malware, or leading voters to scams, especially when people are not only trying to stay safe and avoid interacting with others during the pandemic, but trying to find important information,” said Katie Paul, director of the Tech Transparency Project.

Google responded to the reports by noting that it is “constantly improving our approach to stay ahead of bad actors looking to take advantage of users.” A spokesperson said the company had removed the stimulus-check ads and blocked the voter-registration ads even before the reports were published. They also noted that Google announced in April that all advertisers will have to verify their identity in order to serve ads across Google’s platforms.

Still, the reports show that even a platform with Google’s resources and technical know-how can struggle to stay ahead of bad actors.

“Tech companies need to increase their due diligence efforts to track down these scammers,” Paul says. “Because any time a platform does take strong action against something, the criminals simply evolve to evade the crackdown.”
