In the middle of January, in a change noticed nowhere but Spain, Facebook added six words to a single dialogue box – and inadvertently stumbled into a tortuous national debate.
The dialogue box is part of Facebook’s content-reporting process, the means by which users can request that the social network censor their friends. The six words appeared to invite Spanish users to report on a new category of things: Under the option “it’s inappropriate, it annoys me, or I don’t like it,” Facebook listed Spain’s millennium-old national pastime, bullfighting.
Bullfighting is a controversial sport; even within Spain, few people still follow it. But columnists from Madrid to Malaga bristled at the suggestion that a nationally recognized piece of heritage could be branded offensive.
“Facebook equates bullfighting with prostitution,” declared ABC, the country’s third-largest newspaper, on Jan. 14. Days later, when Facebook inevitably backtracked and deleted its references to bullfighting – clarifying, in a statement to The Post, that it had been included mistakenly – Spain’s second-largest paper, El Mundo, rejoiced that the network had “rectified” the situation.
But unfortunately for the suits at Facebook, who had suffered considerable headaches over the bullfighting mess, that situation was just the latest in a string of unintended clashes, as inevitable as they are endless. As Facebook has tentacled out from Menlo Park, Calif., gaining control of an ever-larger slice of the global commons, the network has found itself in a tenuous and culturally awkward position: how to determine a single standard of what is and is not acceptable – and apply it uniformly, from Maui to Morocco.
For Facebook and other platforms like it, incidents such as the bullfighting kerfuffle betray a larger, existential difficulty: How can you possibly impose a single moral framework on a vast and varying patchwork of global communities?
If you ask Facebook this question, the social-media behemoth will deny doing any such thing. Facebook says its community standards are inert, universal, agnostic to place and time. The site doesn’t advance any worldview, it claims, besides the non-controversial opinion that people should “connect” online.
“Every day, people come to Facebook to connect with people and issues they care about,” a spokeswoman said in a statement. “Given the diversity of the Facebook community, this means that sometimes people share information that is controversial or offends others. That’s why we have a set of global Community Standards that explain what you can and cannot do on our service. … We work hard to strike the right balance between enabling expression while providing a safe and respectful experience.”
Facebook has modified its standards several times in response to pressure from advocacy groups – although the site has deliberately obscured those edits, and the process by which Facebook determines its guidelines remains stubbornly opaque. On top of that, at least some of the low-level contract workers who enforce Facebook’s rules are embedded in the region – or at least the time zone – whose content they moderate. The social network staffs its moderation team in 24 languages, 24 hours a day.
In response to recent criticism that Facebook has mishandled takedown requests from users in the Middle East, Facebook’s policy director for the region assured users that “all reports are assessed by teams of multilingual, impartial and highly trained people” – including native speakers of Hebrew and Arabic, who presumably understand the region’s particular issues.
And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research, New England, that the company advances a specific moral framework – one that is less of the world than of the United States, and less of the United States than of Silicon Valley.
If you study Facebook’s community standards, going back to the long-forgotten time when users voted on a version of them, the site has always erred on the side of radical free speech, corporate opacity and a certain American prudishness: Its values are those of the early Web, moderated by capitalist conservatism.
The values that Facebook articulates are not always the ones it enforces. Below that top-level standard are the unknown thousands of invisible click-workers forced to interpret it, and below them are the self-deputized users flagging their friends’ content. Between the site’s demonstrably U.S. orientation and the layers of obfuscation below, there can be little doubt that the values Facebook ends up imposing on its “community” of 1.55 billion people are not agreed upon by many – perhaps even most – of them.
Somehow, it seems that we only notice the imposition when there’s a glitch in the machine: I can’t use a tribal name on Facebook? The site maligned bullfighting? Why, how dare this private company impose its worldview on me!
This is not merely a problem for Facebook; Gillespie, the Microsoft researcher, calls it the unsolvable “basic paradox” of all Internet companies: They’re private and they have their own corporate motives, but they’re called upon to police public speech. Alas, as their public grows more diverse, the worldviews of the “community” and its corporate sponsor would appear to align less and less. As of 2013, eight of the world’s 10 top Web properties were based in the United States – and 81 percent of their users were located outside of it. (If nothing else, there’s a compelling statistical reason why Google, Amazon.com, Facebook and Apple, collectively acronymed “GAFA,” have been called the new face of “American cultural imperialism.”)
Facebook will never make everyone happy, of course; nor does anyone suggest it should. But in a better world, the largest social network would at least admit that it’s not an impartial, value-neutral observer. After all, every single thing Facebook does – from advance a single global “community,” to add six extra words in a dialogue box – reshapes the public space of its users.
“The myth of the social network as a neutral space is crumbling, but it’s still very powerful,” Gillespie said. “For Facebook to finally say, ‘Yes, we construct social life online. We construct public discourse’ – that would be so important, but for them, dangerous.”
Think Target and Home Depot invade your privacy? Political campaigns might be worse
When presidential candidates turn to data crunchers at Rocket Fuel in Silicon Valley for help finding voters who want tougher immigration enforcement, the firm comes up with a surprisingly specific answer: Chevy truck drivers who like Starbucks.
The data modeling from Rocket Fuel shows that this group leans against a path to citizenship for workers in the U.S. illegally. And these particular voters have become surprisingly easy – some argue creepily so – for campaigns to find and approach. So have consumers of frozen vegetables, who are more likely to oppose abortion. As have people curious about diabetes, a group that tends to settle on a candidate early in the race.
“Knowing the nuances of each voter beyond whether they lean right or left makes every difference,” said JC Medici, the firm’s national director of politics and advocacy. “We can identify which people are persuadable.”
But as presidential campaigns push into a new frontier of voter targeting, scouring social media accounts, online browsing habits and retail purchasing records of millions of Americans, they have brought a privacy imposition unprecedented in politics. By some estimates, political candidates are collecting more personal information on Americans than even the most aggressive retailers. Questions are emerging about how much risk the new order of digital campaigning is creating for unwitting voters as the vast troves of data accumulated by political operations become increasingly attractive to hackers.
The security breach last month at the major voter database controlled by the Democratic National Committee, and another days later involving a large political data firm, have raised concerns about the fitness of candidates to safely manage their data. At the same time, the methods used by independent “data brokers” that acquire and disseminate private details for political campaigns and scores of other clients are at the center of a years-long regulatory battle, with the Federal Trade Commission warning Congress that consumers need more protections.
Yet the push for more accountability and transparency rules on the accumulation of private data is faltering in Congress, where lawmakers are reluctant to rein in the industry that they increasingly rely on to win elections.
“This is the Wild West,” said Tim Sparapani, a data privacy consultant and former director of public policy for Facebook. “There is nothing that is off-limits to political data mining.” The fleeting, impulsive nature of campaigns, he said, means they often have far less stringent security procedures than retailers and social media firms, which themselves often fail to adequately protect sensitive information.
The mining of such data for politics is not a new phenomenon. Presidential candidates began pioneering the approach more than a decade ago, and it was a key part of Barack Obama’s winning strategy in 2008 and 2012. But technological advancements, plunging storage costs and a proliferation of data firms have substantially increased the ability of campaigns to inhale troves of strikingly personal information about voters, spit it into algorithms, and use the results to narrowly customize messaging and outreach to each individual household.
“There is a tremendous amount of data out there and the question is what types of controls are in place and how secure is it,” said Craig Spiezle, executive director of the nonprofit Online Trust Alliance. The group’s recent audit of campaign websites for privacy, security and consumer protection gave three-quarters of the candidates failing grades.
The campaigns and the data companies are cagey about what particular personal voter details they are trafficking in.
One firm, Aristotle, boasts how it helped a senior senator win reelection in 2014 using “over 500 demographic and consumer points, which created a unique voter profile of each constituent.” Company officials declined an interview request.
When investigators in Congress and the FTC looked into the universe of what data brokers make available to their clients – be they political, corporate or nonprofit – some of the findings were unsettling. One company was selling lists of rape victims; another was offering up the home addresses of police officers.
The data companies are required by law to keep the names of individuals separate from the pile of data accumulated about them. Instead, each voter is assigned an online identification number, and when a campaign wants to target a particular group – say, drivers of hybrid vehicles or gun owners – the computers coordinate a robocall, or a volunteer’s canvassing list, or a digital advertisement with relevant accounts.
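The scheme described above – attributes and contact information held in separate stores, joined only through an opaque identifier when a campaign selects a target group – can be sketched in a few lines. This is purely illustrative; the IDs, traits and phone numbers below are invented, and real voter files are far larger and more varied.

```python
# Illustrative sketch of pseudonymized voter targeting.
# All IDs, traits and phone numbers here are invented for illustration.

# Behavioral/demographic attributes, keyed by an opaque voter ID (no names).
voter_attributes = {
    "V-1001": {"hybrid_driver": True, "gun_owner": False},
    "V-1002": {"hybrid_driver": False, "gun_owner": True},
    "V-1003": {"hybrid_driver": True, "gun_owner": True},
}

# Contact details live in a separate store, keyed only by the same opaque IDs,
# so the attribute data never sits alongside an individual's name.
voter_contacts = {
    "V-1001": {"phone": "555-0101"},
    "V-1002": {"phone": "555-0102"},
    "V-1003": {"phone": "555-0103"},
}

def build_call_list(trait):
    """Select the IDs matching a trait, then join to contact info
    to produce a robocall or canvassing list."""
    matching_ids = [vid for vid, attrs in voter_attributes.items()
                    if attrs.get(trait)]
    return [voter_contacts[vid]["phone"] for vid in matching_ids]

# A campaign targeting hybrid drivers gets back phone numbers,
# never the underlying profiles.
print(build_call_list("hybrid_driver"))
```

The point of the two-store design is that the match happens through the identifier alone; as the scholars quoted below note, whether that indirection delivers meaningful anonymity in practice is another question.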
Since campaigns are ultimately in the business of finding particular people and getting them to show up to vote, some scholars are dubious their digital targeting efforts offer the same level of anonymity as those of corporations.
“A retailer doesn’t care what person is behind a particular online profile, just that they are buying new sneakers,” said Ira Rubinstein, a research fellow at New York University School of Law who specializes in data privacy. “This is about targeting very specific people to go out and vote.”
An exhaustive paper Rubinstein recently published on voter privacy found that “political dossiers may be the largest unregulated assemblage of personal data in contemporary American life.”
Basic privacy guidelines that apply to other industries don’t appear to apply to candidates. Some do not even have clear privacy policies posted on their websites – which, for a private business, would be grounds to have its site shut down under both federal and California law, according to the Online Trust Alliance.
Rules that require companies to notify their customers if there has been a data breach also do not necessarily apply to campaigns, Rubinstein said.
“It’s an unregulated entity whose only goal is to elect a candidate over a short term, then it goes away,” he said. “They are not circumstances in which security is made a priority.”
Campaign digital strategists take umbrage. They say their operations are constantly withstanding the attacks of hackers, and that candidates are in no position to be cavalier with all the sensitive information on their servers, as voters would punish them for it.
Yet it is also unclear whether many voters are aware how much could be on those servers. Among the regulations the Federal Trade Commission is urging Congress to implement is one that would allow consumers to find out what information the data brokers are selling to their many clients, political campaigns among them. Consumers could more easily adjust which data are being sold or could opt out of the monitoring altogether.
“The problem with the data broker industry is consumers have no idea this is going on,” said FTC commissioner Julie Brill. “They are creating hundreds of millions of profiles of American consumers. … Some of this information can impact consumers in a negative way.”
Back at Rocket Fuel, which specializes in placing potential voters into hundreds of different audiences, each targeted for a package of digital advertisements specifically catered to their interests, there are warnings that more regulation could have its own unintended consequences.
“We’d no longer be able to put the right message in front of the right people,” Medici said. “If what we are putting in front of voters is relevant to them and of interest, it is a natural part of the process.”