
How Facebook's Political Ad System Is Designed to Polarize

Accepted submission by upstart at 2019-12-13 21:31:22
News


Submitted via IRC for chromas

How Facebook's Political Ad System Is Designed to Polarize [wired.com]

Amid the tense debate [wired.com] over online political advertising, it may seem strange to worry that Facebook gives campaigns too little control over whom their ads target. Yet that’s the implication of a study [arxiv.org] released this week by a team of researchers at Northeastern University, the University of Southern California, and the progressive nonprofit Upturn. By moonlighting as political advertisers, they found that Facebook’s algorithms make it harder and more expensive for a campaign to get its message in front of users who don’t already agree with it—even if it’s trying to.

Social media is well on its way to supplanting television as the dominant platform for campaign spending. The study notes that online spending is projected to make up 28 percent of all political marketing in the 2020 elections, up from 20 percent just last year. The optimistic take is that this shift is helping campaigns, especially smaller ones, to more efficiently get their messages to the right voters. But the new study suggests that there are some limits to that account. By optimizing for what it defines as “relevance,” Facebook puts its thumb on the scale in favor of a certain kind of political communication, the kind that focuses on engaging with people who are already on your side. Polarization, in other words, is part of the business model.

“If you are ever trying to reach across party lines, that’s a much more difficult strategy on Facebook,” said Aaron Rieke, managing director at Upturn and one of the study’s authors.

The paper, still in draft form, is a follow-up to research the group did earlier this year, which found [wired.com] that Facebook’s algorithms can dramatically skew the delivery of ads along racial and gender lines even when the advertiser doesn’t intend it. That’s because while Facebook allows advertisers to design their audience—that’s ad targeting—the platform’s algorithms then influence who within the audience actually sees the ad, and at what price. That’s ad delivery. Because Facebook wants users to see ads that are “relevant” to them, the algorithm essentially pushes a given ad toward users it thinks are most likely already interested in its message. This, the researchers found, can reinforce stereotypes. For example, of the users who saw ads for jobs in the lumber business, 90 percent were male, even though the intended audience was evenly split between men and women. (Facebook is also facing [nytimes.com] litigation for allegedly allowing advertisers to intentionally discriminate.)

For the new study, the team decided to explore whether the algorithm also skews political ad delivery along partisan lines. Because the company doesn’t share that information, they had to run a number of experiments, essentially going undercover to figure out where targeting ends and Facebook’s algorithms begin.

The basic setup of the experiments was simple: Over the summer, the researchers bought ads promoting either Donald Trump or Bernie Sanders, and targeted both sets of ads simultaneously at groups of American users. If the only thing affecting who saw the ads was their targeting parameters, the researchers hypothesized, then liberal and conservative Facebook users would see both ads at about the same rate. But if Trump ads disproportionately went to conservatives and Sanders ads to liberals, that would mean Facebook’s algorithm was putting a thumb on the scale. (Why Sanders? Because at the time of the experiment, his campaign was the biggest Democratic spender on Facebook, which the researchers hoped would minimize the risk of the study actually influencing the election.)

Facebook infers our political interests from our behavior on (and off [wired.com]!) the platform and allows advertisers to target us accordingly. It’s hard to measure how that actually plays out, however, because the company doesn’t let advertisers see the political leaning of the people who ultimately see or click an ad. So the researchers came up with a workaround, based on the fact that Facebook does let advertisers track ad impressions by location. They built separate, similar-sized audiences of liberals and conservatives in the same area. Then they targeted them with Trump and Sanders ads simultaneously. (They also showed a “neutral” ad encouraging them to register to vote.) That meant liberals and conservatives were being targeted by the same messages, at the same time, in the same place. The key was to target them with separate but identical ad buys. So, for example, when the team targeted users in Charlotte, North Carolina, they made two transactions with Facebook: one for the liberal audience and one for the conservative audience. That allowed them to keep track of everyone’s partisanship to see whether Facebook was skewing ad delivery by political affiliation.
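
To make the bookkeeping concrete, here is a minimal sketch of the analysis that design enables, with hypothetical impression counts standing in for the study’s real numbers: because each audience gets its own ad buy, Facebook’s per-buy impression reports reveal the partisan split in delivery directly.

```python
# Minimal sketch of the paired-ad-buy analysis (hypothetical counts, not
# the study's data or code). Each ad is bought twice in the same place at
# the same time: once targeting the liberal audience, once targeting the
# conservative audience. Facebook reports impressions per buy, so the
# partisan skew in delivery can be read off directly.

impressions = {
    # ad: (impressions in the liberal buy, impressions in the conservative buy)
    "neutral voter-registration ad": (4_900, 5_100),
    "Sanders ad": (6_800, 3_200),
    "Trump ad": (3_500, 6_500),
}

for ad, (lib, con) in impressions.items():
    liberal_share = lib / (lib + con)
    print(f"{ad}: {liberal_share:.0%} of impressions reached the liberal audience")
```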

And indeed, it was. While the audience for the neutral ad was nearly evenly split, the researchers found that, on average, “Facebook delivers our ads with content from Democratic campaigns to over 65% users registered as Democrats, while delivering ads from Republican campaigns to under 40% users registered as Democrats, despite identical targeting parameters.” Targeting based on Facebook’s classification of users’ political leaning, instead of party registration, led to even more skewed results. Just as important, it cost much more to reach users across the political divide. For example, the study found that it cost 50 percent more to get a conservative voter to see Sanders content than Trump content.

“In traditional television or newspaper advertising, two political campaigns that have the same financial resources have an equal chance to reach the same audiences,” said Aleksandra Korolova, a computer scientist at USC and one of the study’s authors. “Whereas what we’ve showed in this work is that Facebook will charge the political campaigns differently depending on who they are and will deliver the ads to a subset of the users that they’re targeting according to what Facebook thinks is important—not according to what the political campaign may be trying to do.”

Facebook has downplayed the study. “Findings showing that ads about a presidential candidate are being delivered to people in their political party should not come as a surprise,” said a company spokesperson in an emailed statement. “Ads should be relevant to the people who see them. It’s always the case that campaigns can reach audiences they want with the right targeting, objective, and spend.”

But what if the “right” spend is more than a campaign can afford? The study suggests that Facebook charges a premium to reach audiences who aren’t already aligned with your message. That might not matter for a national campaign with tens of millions of dollars to throw around. But local campaigns and less well-funded candidates have to make hard choices about where to invest limited resources.

The company is right about one thing, however: to people who work in the digital campaign world, the results were not exactly shocking. “I’m not surprised at all,” said Tatenda Musapatike, a former Facebook employee who is now senior campaign director at Acronym, a progressive digital advocacy group. “I don’t think many people in the industry would be particularly surprised.”

Musapatike pointed out it’s rare for digital campaigns to even bother trying to persuade voters on the other side. Eric Wilson, a Republican digital strategist, agreed. But, he pointed out, that’s largely because strategists already know the platform rewards that approach. “People like to fault the campaigns for playing up the base voter, but I lay the blame at Facebook’s feet,” he said. “Because if you’re telling me that I can reach voters who agree with me for half the cost or a third of the cost of voters who disagree with me? I’m going to take that bet every day.”

It all comes down to Facebook’s desire to show users “relevant” ads. When you target an ad to a certain Facebook audience, you’re actually bidding against other advertisers in an auction for that group’s attention. And Facebook openly tells [facebook.com] businesses that the platform will “subsidize relevant ads,” meaning an ad can win an auction even against higher bidders if the algorithm deems it more relevant to a given user. Why? Because to keep selling ads, Facebook needs to keep users on the platform.
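
As a rough illustration of how that subsidy can override pure bidding, consider a simplified model in which an ad’s auction score is its bid scaled by a predicted relevance score. This is a sketch of the general mechanism, not Facebook’s actual ranking formula, which is not public.

```python
# Simplified model of a relevance-weighted ad auction (illustrative only;
# Facebook's real ranking formula is not public). Score = bid * relevance,
# so a cheaper ad the platform deems more relevant to a given user can
# beat a higher bidder -- the "subsidy" for relevant ads described above.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float     # dollars per thousand impressions
    relevance: float  # platform's predicted relevance for this user, 0..1

def auction_winner(bids: list[Bid]) -> Bid:
    return max(bids, key=lambda b: b.amount * b.relevance)

bids = [
    Bid("cross-party campaign", amount=12.0, relevance=0.3),  # score 3.6
    Bid("same-party campaign", amount=6.0, relevance=0.8),    # score 4.8
]

print(auction_winner(bids).advertiser)  # same-party campaign, despite the lower bid
```

Under this toy scoring, the cross-party campaign would have to bid more than $16 just to break even, which is exactly the kind of premium the study measured.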

“I talk about this all the time in my trainings for campaigns and operatives: Facebook’s objectives are not aligned with your campaign objectives,” said Wilson. “Facebook wants to make more money, and they make more money by getting people to spend more time on the site.” That, in turn, gives the platform an incentive to show users what they’re already interested in. That might seem benign when it comes to an ad for detergent, but it has different implications for democratic politics, which depends to some extent on the possibility of candidates getting their messages in front of people who aren’t already in their camp. And it raises the question of whether the platform gives an advantage to established politicians; an unknown candidate, after all, won’t show up in any user’s list of preexisting interests.

The research also suggests the limitations of proposals to fix problems with political misinformation by restricting microtargeting. “What this paper shows is, if you broaden up targeting, you’ve still got this machinery in the background that’s going to push the ads to the people that Facebook thinks are most politically aligned anyway,” said Rieke.

What seemed to most bother the political strategists I spoke with was not so much the existence of that machinery as its invisibility. In one of the cleverest twists of the experiment, the researchers created a neutral voter registration ad that secretly served code to make Facebook think it pointed to one of the campaigns’ sites. In other words, to users, the ad was completely neutral, but Facebook had been tricked into thinking it was partisan. Lo and behold, the skew was still there—and it could only have come from Facebook’s end. And, significantly, that would indicate the algorithm was determining the ad’s relevance not by its content, but purely by who it thought was behind it.
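
One generic way to pull off that kind of misdirection is “cloaking”: serving one page to the platform’s link-preview crawler and a different one to human visitors. The sketch below, using Flask and a hypothetical landing URL, is offered only as an illustration of the general technique; the article doesn’t detail the researchers’ exact mechanism.

```python
# Generic sketch of link "cloaking" (illustration only; the researchers'
# exact mechanism isn't described in the article). Facebook's crawler
# identifies itself in its User-Agent header, so a server can show it a
# partisan destination while humans see a neutral page.

from flask import Flask, request, redirect

app = Flask(__name__)

@app.route("/ad-landing")
def ad_landing():
    user_agent = request.headers.get("User-Agent", "")
    if "facebookexternalhit" in user_agent:
        # What Facebook's classifier gets to see (hypothetical URL)
        return redirect("https://campaign-site.example/")
    # What people who click the ad actually see
    return "<h1>Register to vote!</h1>"

if __name__ == "__main__":
    app.run()
```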

“This ultimately comes down to a lack of honesty and transparency on the part of Facebook—and that is toxic for our democracy,” said Betsy Hoover, a former campaign strategist and the cofounder of the progressive tech incubator Higher Ground Labs, in an email. If the platform is pre-judging which voters should hear from which candidates, regardless of the message, it could be locking campaigns into filter bubbles they aren’t even aware of.

Of course, the rise of partisan polarization long predates Facebook and digital campaigning. There’s a wealth of research validating the strategy of playing to one’s base. But regardless of whether a given campaign would object to Facebook’s delivery algorithm, the new study helps make one thing very clear: the platform is making hidden decisions about who hears what in our political life. And until the company lifts the veil of secrecy around how those decisions are made, it’s fair to question whether the concept of “relevance”—however useful it is for selling products—is a good foundation for democratic discourse.

Original Submission