How Facebook is Constantly Lying About its Surveillance of Users
Executive Summary
- A big part of what Facebook does is lie to its users about how it surveils them and manages their data.
- We cover Facebook’s history of lying about its surveillance.
Introduction
As Facebook faces a DOJ lawsuit for monopolistic behavior, its business model will come under greater scrutiny than ever before. Facebook has a long history of lying to users about its surveillance.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
Facebook’s History of Lying About its Surveillance
Dina Srinivasan does an excellent job explaining how cookies evolved from a browser convenience into a mechanism for tracking users’ activities across other websites.
How the Cookie Evolved to Surveil Users
Widespread commercial surveillance of consumer behavior across the internet, however, was inhibited by two restraining facts. First, a company could only read its own cookies. Second, a company could read cookies only when a user initiates an HTTP request to the company’s server. For example, if The New York Times wrote a cookie onto a user’s device, the Times could not read its cookies when the user was on wsj.com.
In other words, The Times could know what users were doing on nytimes.com but not on other properties. A company could circumvent these built-in privacy protections by installing a piece of their own code on other websites, that would invisibly generate an HTTP request on behalf of the user to the company’s server. Of course, no publisher would want another publisher to be tracking the behavior of its own customers. (Emphasis added)
In order to develop complete and accurate profiles of users, there would need to be cooperation among thousands of sites which would otherwise be competitive. At one point, competitors and competition kept horizontal cookie collusion in check.
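The mechanism the quotation describes, that a browser only sends a site its own cookies, and that a tracker gets around this by having other sites embed a snippet that silently fires a request back to the tracker's server, can be sketched as a small simulation. This is an illustrative toy model, not Facebook's actual code; all domain names and cookie values are hypothetical.

```python
# Toy simulation of the same-origin cookie rule and a tracking pixel.
# All domains and cookie values below are hypothetical illustrations.

class Browser:
    def __init__(self):
        self.cookie_jar = {}  # domain -> cookie value

    def request(self, domain, referer=None):
        """Build an HTTP request; only cookies set by `domain` are sent."""
        return {
            "Host": domain,
            "Cookie": self.cookie_jar.get(domain),  # same-origin rule
            "Referer": referer,                     # page that triggered it
        }

    def visit(self, page_domain):
        """Visit a first-party page that embeds a third-party pixel."""
        first_party = self.request(page_domain)
        # The embedded snippet invisibly generates a second request, to the
        # tracker's own domain, so the tracker's cookie IS sent -- along
        # with the page the user is on, via the Referer header.
        pixel = self.request("tracker.example", referer=page_domain)
        return first_party, pixel

browser = Browser()
browser.cookie_jar["nytimes.com"] = "times-reader-42"
browser.cookie_jar["tracker.example"] = "user-7"

first_party, pixel = browser.visit("wsj.com")
print(first_party["Cookie"])                 # wsj.com cannot read the Times cookie
print(pixel["Cookie"], pixel["Referer"])     # tracker sees its ID and the page
```

The simulation shows both restraints from the quotation: wsj.com receives no cookie at all (it never set one), while the tracker receives its own identifying cookie plus the URL of the page being read.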
When a browser informs a user that a cookie is being downloaded, this tracking capability is never explained. Facebook has to track users across the Internet to maintain its valuation because the data that users give Facebook directly is not sufficiently valuable to advertisers. This topic is covered in the article How Facebook’s Entire Business Model is Based Upon Surveillance.
Facebook Introduces Beacon
Facebook began extensive surveillance of users’ web activity in 2007 with the introduction of Beacon.
…on November 6, 2007, with growing momentum in the market, Facebook reneged on the promise not to surveil users outside of Facebook through the release of an advertising product called “Beacon.” Beacon was a direct product license to third-parties that openly allowed Facebook to monitor and record user activity off of Facebook, and reflects Facebook’s first attempt to track users on the sites of independent businesses. But Beacon was immediately controversial. Its ultimate failure is evidence of what competition demanded of Facebook from a privacy perspective.
Launched in conjunction with a handful of third-parties, including Blockbuster and The New York Times, Facebook provided Beacon participants a piece of Facebook code to install on their own sites. When a user triggered an action on a participating site (say, rented a movie or read an article), the website then presented the user with a pop-up box requesting permission to share the user’s activity on Facebook. Many people today would remember being once jarred by the pop-up requesting permission to share one’s reading or browsing activity with Facebook. Facebook’s surveillance framework today requires the coordination of millions of independent third-parties. (emphasis added) Facebook induced publishers and others to first coordinate with Facebook upon the representation that Facebook would not leverage their coordination for commercial surveillance.
Facebook’s Constant Lying About its Surveillance of Users
A big part of what Facebook has done since its inception is lie about its surveillance of users, as Srinivasan explains in the following quotation.
The New York Times interviewed Facebook’s vice president of marketing and operations, Chamath Palihapitiya.
The reporter asked, “If I buy tickets on Fandango, and decline to publish the purchase to my friends on Facebook, does Facebook still receive the information about my purchase?” Palihapitiya answered, “Absolutely not. One of the things we are still trying to do is dispel a lot of misinformation that is being propagated unnecessarily.” Facebook represented that it did not receive information about users that declined to share information about their activity. Only hours after Palihaptiya’s comments in the Times, Stefan Berteau, a senior research engineer at California’s Threat Research Group, examined the actual contents of Facebook’s HTTP requests and responses that were normally invisible to users and revealed that Palihapitiya’s representations were not true.
How Individuals, Not the Government, Catch Facebook in Lies
It should be observed that it was not the US Government that caught Facebook lying about what it did with users who opted out, but a lone individual who had the technical expertise to fact-check Facebook. This also illustrates Facebook’s culture, which is to lie and lie, and then, when caught, lie again. And this leads to what followed Beacon.
In the face of backlash, some Beacon participants pulled out of the Beacon program, effectively declining to extract consent from their own website visitors to Facebook’s surveillance mechanism. The growing e-retailer Overstock, one of the initial companies to sign up for Beacon, pulled out.
Zuckerberg apologized and announced that Facebook would allow users to opt-out of the Beacon program. But consumers did not accept Facebook’s opt-out scheme, which required them to navigate Facebook’s default privacy settings. Consumer uproar persisted, and, by September of the following year, Facebook revealed it would shut down Beacon entirely. Zuckerberg would later call Beacon a “mistake.”
The Established Pattern of Apologizing…and Then Going Back On Their Word
Apologizing and then returning to the same behavior is a recurring pattern for Zuckerberg and others at Facebook. Facebook waits for the criticism to pass and then does not make internal adjustments to address the issue, and when it does, they are either short-lived or based upon a false assertion.
This pattern has been observed several times, such as in the following quotation.
Every so often, Mark Zuckerberg will issue a statement that implies he is sorry and that Facebook will try to do better. Of course, it never does. – The Nation
And Srinivasan observes how Zuckerberg and Facebook went back on their word.
But as Facebook’s market power grew through the foreclosure of competition and the lock-in of network effects, Facebook would eventually abolish this newly announced voting procedure and reinstate the scope, scale, and invasiveness of Beacon’s mission.
Today, Facebook surveillance is a mandatory tie-in with a third-party’s (e.g., The New York Times) use and license of other Facebook business products (Like buttons, Logins, etc.).
After Beacon in 2007, Facebook introduced the media Like button, which allowed for cross-site surveillance. We cover this in the article How Facebook Tricked its Media Partners into Surveillance with the Like Button.
Facebook lied about how its Like button enabled surveillance and was then promptly caught in a very similar manner to how they were caught lying about Beacon’s surveillance.
Facebook is Caught Again Lying About its Tracking and Surveillance of Users
One issue, even absent the topic of cookies, was that the presence of Like buttons on other sites enabled Facebook to receive any internet users’ URLs as they moved around the internet, and Facebook’s privacy policy did not appear to obtain users’ consent for this practice. Facebook responded by saying that its privacy statement was “not as clear as it should be, and we’ll fix that.”
Observe this statement. This is a constant feature of Facebook PR to try to allay real concerns with dishonest statements.
Barry Schnitt, a Facebook spokesman, then allayed concerns by explaining that Facebook plug-ins work like any other plug-ins on the internet, Facebook does not use social plug-in data for advertising, and Facebook may use received data to catch bugs in its software.
And again, the claim is that the plug-in has no advertising use; it is just a technical artifact for bug management.
But then, once again, Facebook is exposed by an individual performing their own test.
But in November of 2011, Dutch researcher Arnold Roosendaal exposed Facebook’s hidden activity with Like buttons in the way that Berteau did earlier with Beacon. Shortly after launching the Like buttons, Roosendaal published a paper showing that Facebook was using the Like button code now installed on third-party sites to write and read user cookies. Roosendaal showed that each time a Facebook user visited a site with a Like button, Facebook retrieved the user’s Facebook website login cookies, which contained the user’s unique identifying number, traceable to his or her real identity. Facebook again was leveraging login cookies from the communications network to conduct detailed surveillance possibly for the advertising side of its business. In addition to a user’s ID number, Facebook retrieved the specific URL the user was on, which revealed the title of an article the user was reading or the name of the product a user was buying.
Roosendaal then demonstrated that Facebook used these open connections to write cookies and surveil the behavior of people that did not even have Facebook accounts.
The Wall Street Journal published the results of its own investigative study confirming Roosendaal’s findings. The study, led by a former Google engineer, examined the presence of social widgets on the world’s top 1,000 most-visited sites. The Journal concluded that Facebook buttons had been added to millions of websites, including a third of the top 1,000 most-visited sites. Facebook knew when a user reads an article about “filing for bankruptcy” on MSNBC.com or about depression on a small blog, even if the user didn’t click any Like button. (emphasis added)
Facebook Lies Yet Again to Cover Up its Previous Lie
Bret Taylor, Facebook’s chief technology officer at the time, responded definitively to the privacy breach allegations, “We don’t use them for tracking and they’re not intended for tracking.” Taylor clarified that Facebook places cookies on the computers of people that visit facebook.com to protect users’ Facebook accounts from cyberattacks.
Why would a cookie be necessary for protecting a user from a cyberattack?
That explanation did not make sense and was clearly designed to mislead non-technical readers. It was a desperate move, as the statement is easy to prove false.
Another Bug?
And then Facebook returns to its “bug” excuse, which is one of Facebook’s favorite explanations to try to get out of being caught lying.
The Journal also gave Facebook an opportunity to answer to Roosendaal’s allegation that Facebook was tracking people that did not even have a Facebook account. Facebook said that Roosendaal had found a “bug,” and that it had therefore discontinued this practice.
And this is just another lie, which is essentially all that anyone seems to be able to get out of Facebook. Everything related to its tracking is either a bug or is designed to stop cyberattacks. And its tracking methods, which it says do not perform tracking, are then proven to perform tracking.
Later that year, in September of 2011, Nik Cubrilovic, an Australian internet security contractor, followed up on Roosendaal’s and the Journal’s discoveries, and published an article showing that Facebook Like and other plugins were still tracking user activity outside of Facebook even if users had completely logged out of Facebook. Normally, when a user logs out of a service, the service’s login cookies terminate. But Facebook’s cookies weren’t terminating; they were persistent and still able to identify and track people.
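The distinction Cubrilovic found can be illustrated with a toy logout model. This is a hypothetical sketch, not Facebook's code: on the real web, a correct logout expires a cookie by re-setting it with Max-Age=0 (or a past Expires date), after which the browser stops sending it; what Cubrilovic observed was identifying cookies that kept being sent after logout.

```python
# Toy model of logout semantics. Cookie names are hypothetical.

def proper_logout(cookies):
    """A correct logout expires the session cookie AND any identifying
    cookies, so subsequent requests carry no identity at all."""
    cookies = dict(cookies)           # leave the caller's dict untouched
    cookies.pop("session", None)      # browser deletes it (Max-Age=0)
    cookies.pop("user_id", None)      # identifier is cleared as well
    return cookies

def persistent_logout(cookies):
    """The observed behavior: the session ends, but a persistent
    identifying cookie survives and still tags every later request
    to the service's domain (e.g., via embedded Like buttons)."""
    cookies = dict(cookies)
    cookies.pop("session", None)
    return cookies                    # "user_id" is still present

before = {"session": "abc123", "user_id": "7"}
print(proper_logout(before))          # no identity left
print(persistent_logout(before))      # still identifiable after logout
```

In the second case, every later request that reaches the service's domain, including requests triggered by plug-ins embedded on other sites, still carries the user's unique identifier, which is exactly why "logged out" did not mean "not tracked."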
Why Facebook’s Unending Lies Are So Critical to the DOJ’s Antitrust Case
All of this is critical to the antitrust case against Facebook, as Facebook demonstrated a pattern of offering its services under pretenses designed to earn users’ trust and become the dominant social network website. Then, at every chance it had to go back on its promises, it did so.
Misleading, deceptive, or otherwise unethical conduct at the early stages can induce market participants to choose the firm that they think increases their welfare, but the very act of mistaken choice can lock in the market to their detriment. Carl Shapiro, then deputy assistant attorney general for economics in the Antitrust Division of the Department of Justice, addressed the need for heightened concern: “Even more so than in other areas, antitrust policy in network industries must pay careful attention to firms’ business strategies, the motives behind these strategies, and their likely effects….”
Such was the focus of the Court’s analysis of Microsoft’s misleading behavior in the operating system market. Microsoft had made false public representations about its own products which induced developers to develop applications compatible with Microsoft’s operating system. While Microsoft represented that applications developers wrote would be cross-compatible with Sun, developers ended up writing applications that were only compatible with the Windows operating system. Microsoft’s false statements induced developers to choose Microsoft to their detriment. Since the operating system market exhibits direct network effects, misleading behavior to induce choice is anticompetitive. This was anticompetitive conduct that supported the finding that Microsoft had illegally monopolized the operating system market.
Facebook’s Patent for Cross-Site Tracking is Exposed
Facebook repeatedly said that it never performed cross-site tracking. However, this is exactly what the Like button did. Also, for a company that repeatedly claims it does not perform cross-site tracking, it is curious that Facebook would file a patent to perform cross-site tracking.
Noticeably absent from Facebook’s public statement was the fact that Facebook also filed a patent application the year prior, on September 22, 2011, for a “method … for tracking information about the activities of users of a social networking system while on another domain.”
The list of lies from Facebook goes on and on.
Facebook’s lies rely upon people not performing the research and accepting the false claims by Facebook executives and employees that the company exists to make the world a more connected place. This applies even to businesses, like advertising firms, that generally accept Facebook’s claims, as we cover in the article Facebook’s History of Lying to Advertisers and Media Buyers About Ad Reach.
Given how much Facebook lies, I would find it very difficult to either trust them as a user or buy services from them if I were an advertiser.
Another example of dishonest behavior by Facebook is ignoring the Do Not Track setting in users’ browsers.
How Facebook Violated the Do Not Track Setting in Browsers by Lying About its Surveillance
In an effort to ward off regulation, various industry players promised to give users Do Not Track opt-outs. Microsoft updated the Internet Explorer browser to provide users with a Do Not Track setting. Other browsers—Firefox, Safari, Opera, and Chrome—also adopted the Do Not Track protocol. With Safari, for example, a user could go to Preferences Settings, toggle to the Privacy tab, then select the checkbox “Ask websites not to track me.” The Do Not Track protocol though didn’t technically block companies from tracking users. Rather, a user’s browser would simply send a message notifying companies that he or she does not wish to be tracked. It was simply a polite request, which a company could choose whether or not to heed.
In another demonstration of market power, Facebook would ignore users’ activation of Do Not Track. In 2013, Erin Egan, the chief privacy officer of Facebook, explained that Facebook would bypass consumer Do Not Track settings because Facebook does not track consumers for advertising purposes, in effect arguing that consumers do not understand what Do Not Track means. “We don’t use that data for an advertising purpose,” she emphasized. In 2014, after Facebook changed course and began tracking consumers for commercial purposes, Facebook simply continued to ignore consumers’ Do Not Track signals.
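The technical weakness the quotations describe is worth making concrete: Do Not Track is nothing more than a single request header, DNT: 1, and honoring it is entirely voluntary on the server side. A minimal sketch of that asymmetry, with hypothetical server logic for illustration:

```python
# Do Not Track is only a polite request header; enforcement is voluntary.

def build_request(host, do_not_track=True):
    """Build request headers as a browser with DNT enabled would."""
    headers = {"Host": host}
    if do_not_track:
        headers["DNT"] = "1"   # "please do not track me"
    return headers

def handle(request, honors_dnt):
    """The receiving company decides whether to heed the request.
    Nothing in the protocol forces the honoring branch to be taken."""
    if request.get("DNT") == "1" and honors_dnt:
        return "no tracking"
    return "tracked"

req = build_request("example.com")
print(handle(req, honors_dnt=True))    # a company that heeds the signal
print(handle(req, honors_dnt=False))   # a company that ignores it
```

The user's setting changes only the header that is sent; whether the `honors_dnt` branch is ever taken is purely a business decision of the company receiving the request, which is why ignoring the signal required no circumvention at all.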
This brings up the question: if Facebook won’t respect users’ declared preferences in a browser, why would it respect any restriction a user goes to the effort of applying to tighten Facebook’s wide-open default settings?
Facebook has a long series of settings that a user can set, and along the way, Facebook made it difficult to increase privacy. However, one question is rarely asked: given Facebook’s track record on privacy and surveillance, why would anyone believe that Facebook will follow the settings applied by users? Isn’t it more likely that the privacy settings are just a placebo that Facebook provides to users concerned with privacy to keep them using Facebook?
How Facebook Tried to Migrate Content to Facebook Where it Could Circumvent Ad Blockers
Facebook not only ignored Do Not Track settings in browsers but also developed technology to circumvent ad blockers, as Dina Srinivasan explains.
In tandem, independent studies conducted by the Pew Research Center, and the Annenberg Center, continued to show that Americans were overwhelmingly opposed to being shown ads targeted to them based on information derived from surveillance. Was this the largest boycott in human history?
Facebook raced to engineer a way to circumvent users’ installation of ad blockers. Initially, Facebook prevented its public-facing pages from loading on user devices that had ad blockers installed. If consumers landed on forbes.com and Forbes prevented its page from loading, consumers could switch to a Forbes competitor to read news.
With Facebook, consumers did not have any alternative product they could switch to. Then, in August of 2016, Facebook announced it had found a way to circumvent ad blockers entirely. Facebook “flipped a switch on its desktop website that essentially renders all ad blockers … useless.”
With rapid consumer adoption of ad blockers and Facebook’s unique ability to force onto consumers the presence of behaviorally targeted ads, Facebook approached publishers with a proposition.
Facebook Offers Publishers Its Ability to Circumvent Ad Blockers
Facebook offered publishers the ability to publish content not on their own websites, but inside the walls of the impenetrable Facebook, where Facebook could ensure the delivery of behaviorally targeted advertising that commanded higher rates in ad markets.
Despite normally being competitors in the ad market, participating publishers like The New York Times worked in tandem with Facebook to sell the advertising. If a participating publisher sells the advertising, it keeps 100% of the ad revenue. If Facebook sells the advertising for The New York Times, Facebook retains a 30% cut.
Facebook had the power to force invasive advertising on consumers through a capability that other publishers like The New York Times did not have. Facebook could successfully fight against users’ preference for privacy—users had to submit to the terms of trade imposed upon them by this century’s new communications network.
Observe how Facebook built a mini business on cheating: it developed the technology to cheat users out of their privacy and then sold access to publishers in exchange for a cut of the advertising.
Facebook receives our Golden Pinocchio Award for constantly lying about user surveillance.
This constant lying is mirrored in the following quotation.
Facebook has accomplished a neat trick in the last fourteen years, draping itself in humanitarian intent while establishing a globe-straddling monopoly. In the name of connecting people, (emphasis added) it has built the world’s largest surveillance apparatus, rivaled only by Google. In the name of community, it has turned some 30 percent of the world’s population into its unacknowledged, unpaid workforce. In the name of relevance, it offers advertisements based on our most personal information. – LongRead
Conclusion
Beacon, the media Like button, Facebook’s cross-site surveillance patent application, and the ad-blocker circumvention technology are all examples of Facebook’s constant lying about its surveillance. Facebook has followed a specific modality for over ten years: when caught, Facebook lies further and will often apologize, but then goes back to its surveillance.
Facebook’s history shows it constantly says one thing about user privacy and then goes and surveils its users. This is why nothing that Facebook says about how it surveils users should be believed.
Dina Srinivasan receives our score of 10 out of 10 for accuracy.