Over the last few weeks, Google and Facebook, the giants of search, social media, and a laundry list of other services, made public declarations of dedication to defending their users’ privacy. At the same time, U.S. presidential candidates, longtime privacy advocates, and even a Facebook co-founder voiced criticism and skepticism of the companies’ apparent efforts to protect user privacy.
It’s time for Big Tech to face a big Baby Bell-style breakup, they say, citing a growing rap sheet of privacy violations. But whether splitting the companies up would actually improve consumer privacy prospects remains far from clear.
Facebook, which owns the popular Instagram and WhatsApp services, has faced extensive criticism for its privacy violations over the past several years, including for allowing Cambridge Analytica to data-mine its users and sell that data to Donald Trump’s presidential campaign.
“I know we don’t exactly have the strongest reputation on privacy right now, to put it lightly,” Facebook CEO Mark Zuckerberg acknowledged at the company’s F8 developer conference on April 30 in an attempt to explain why the company’s 2.5 billion users—about a third of the entire human population—should continue to trust it.
While staying light on details, Zuckerberg said Facebook will spend “the next few years” working on six “privacy” principles:
- Encryption: extending WhatsApp’s end-to-end encryption to all Messenger conversations
- Interoperability: enabling users to communicate across its various platforms—perhaps even its Oculus virtual-reality gaming system
- Private interactions: making controls during conversations and interactions between users more clear
- Reduced data permanence: no longer keeping user messages or “stories” indefinitely
- Safety: working to “keep you safe” when using encrypted Facebook services
- Secure data storage: preventing user data from being stored in countries where it may be “improperly” accessed
A week after Zuckerberg addressed the Facebook faithful, Google CEO Sundar Pichai peppered his May 7 keynote address at his company’s developer conference with nearly two dozen mentions of security, privacy, and trust. During his speech at Google I/O in 2018, Pichai never uttered the word “privacy”; this year, he wrote a column for The New York Times on the topic.
“People today are rightly concerned about how their information is used and shared,” Pichai wrote. He jabbed at Apple (though not by name) for making privacy a “luxury good,” and he asserted that the challenge in providing privacy “for everyone” is that “everyone” defines privacy “in their own ways.”
In his keynote speech and column, Pichai said Google, with its vast resources, is able to improve privacy with technological innovations. In his column, he pointed to the benefits of “federated learning” for artificial-intelligence algorithms, which enables the algorithms to improve without Google storing more data on its users.
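The core idea behind federated learning can be sketched in a few lines: each device computes a model update on its own data locally, and only those updates, never the raw user data, are sent back and averaged into the shared model. The toy linear model below is purely illustrative and is not Google’s actual implementation.

```python
# Illustrative sketch of federated averaging (not Google's implementation).
# Raw user data never leaves the device; only weight updates are shared.

def local_update(weights, local_data, lr=0.1):
    """One gradient step on a single device's private data (toy linear model)."""
    grad = [0.0] * len(weights)
    for x, y in local_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    n = len(local_data)
    return [w - lr * g / n for w, g in zip(weights, grad)]

def federated_average(weights, device_datasets):
    """Average each device's locally updated weights into the shared model."""
    updates = [local_update(weights, data) for data in device_datasets]
    return [sum(ws) / len(ws) for ws in zip(*updates)]
```

In a real deployment the server would repeat this round many times, and techniques such as secure aggregation would keep even the individual updates private.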
“Privacy and security are the foundation for all the work we do,” he said in his keynote speech. “And we’ll continue to push the boundaries of technology to make it even better for our users.”
One pro-privacy argument for breaking up Facebook is that Facebook doesn’t currently have to compete on privacy features because no other company can effectively compete in social media against it. One might extend this argument to Google and search, or Amazon.com and retail, or Apple and mobile-app commerce.
“It’s a good sound bite to say, ‘Let’s break them up.’ And ultimately, it may be the solution. But there are all kinds of antitrust remedies.”—Mitch Stoltz, senior staff attorney, Electronic Frontier Foundation
Massachusetts senator and Democratic presidential candidate Elizabeth Warren says the lack of competition on all fronts, including privacy, has slowed innovation within Big Tech. In a March 8 blog post explaining why she wants to see Google, Facebook, and Amazon split up, she wrote:
“Weak antitrust enforcement has led to a dramatic reduction in competition and innovation in the tech sector. Venture capitalists are now hesitant to fund new startups to compete with these big tech companies because it’s so easy for the big companies to either snap up growing competitors or drive them out of business. The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.”
In a New York Times column published May 9, Facebook co-founder Chris Hughes wrote, “Competition alone wouldn’t necessarily spur privacy protection—regulation is required to ensure accountability—but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms.”
As a chorus of voices pursuing antitrust action against Big Tech grows louder, it’s important to note that using regulations to protect consumer privacy is far from a simple endeavor.
Challenges include determining which companies should be regulated, how they should be regulated, what the regulations should specify, what the ultimate goals of the regulations are, and what the unintended consequences of regulation might be.
The breakup of “Ma Bell” AT&T, some note, eventually led to the company becoming even bigger. Others argue that the Microsoft antitrust case in 1998 eventually led to the rise of the current tech titans, just as an early 1980s case against IBM led to Microsoft’s prominence.
Tech companies often adopt privacy protections for their users only after privacy advocates step in, the consumer privacy and rights advocacy group Ranking Digital Rights says in its 2019 annual report on corporate accountability.
“Despite new regulations in the EU and elsewhere, most of the world’s Internet users are still deprived of basic facts about who can access their personal information under what circumstances, and how to control its collection and use. Few companies were found to disclose more than required by law,” the report said.
“There is a lot of technology that has not been implemented because people can’t figure out how to make it run fast enough yet.”—Lea Kissner, chief privacy officer, Humu
Laura Reed, Ranking Digital Rights’ research and engagement manager, says companies aren’t willing to better protect consumer privacy without government intervention. As they begin to acknowledge the influence they have over consumer privacy controls, she says, they are making efforts to “redefine” what privacy means.
“Sundar Pichai echoed this point that Zuckerberg was persistent about in his Congressional hearing last April: ‘We will never sell your information to third parties.’ But that’s not their business model. They gather your information to sell advertising against it,” Reed says, rather than selling the information itself. “They’re [often] not giving users control over whether that information is collected in the first place.”
Technology has been at the heart of protecting consumer privacy since the Clipper chip and Crypto Wars of the 1990s, says Lea Kissner, the chief privacy officer at machine-learning company Humu and the former global lead for privacy technology at Google. Look no further than Facebook implementing end-to-end encryption on WhatsApp in 2016, which instantly protected the conversations of more than 1 billion people using the platform every month from snooping. (Two years later, in January 2018, WhatsApp’s popularity had grown to more than 1.5 billion monthly users.)
Big Tech has what it takes to make the biggest strides in consumer privacy, Kissner says. And while regulation is warranted, breaking up any of these companies might actually harm privacy research and development more than it improves it.
“There’s a lot of research going on in differential privacy, federated learning, anonymization—[and] pretty much everybody who’s working on that on a practical level is working at Google, Apple, Netflix. All of that research is coming out of large tech companies and academic institutions. There are not that many companies that find this research feasible to do,” she says. “There is a lot of technology that has not been implemented because people can’t figure out how to make it run fast enough yet.”
Properly implementing cutting-edge privacy protection technology requires a significant amount of research and development, Kissner adds. Take shortcuts, and you risk interfering with security features already in wide use across the Internet, such as Single Sign-On, which lets people use their Google or Facebook log-ins for unaffiliated third-party services.
“The government should not be consulting Zuckerberg on how to regulate Facebook.”—Sally Hubbard, director of enforcement strategy, Open Markets Institute
Along with technological reasons for approaching a Big Tech breakup with caution are logistical and legal concerns, says Mitch Stoltz, an antitrust expert and senior staff attorney at the digital-rights advocate Electronic Frontier Foundation.
“It’s a good sound bite to say, ‘Let’s break them up.’ And ultimately, it may be the solution. But there are all kinds of antitrust remedies. There’s requiring companies to interoperate with competitors; allowing people to leave Facebook but still communicate with friends on Facebook,” he says. “Especially in the case of Google, requiring them to split off the advertising network business from their consumer platform, or not to share data between those. A big erosion of our privacy online has come from the rise of third-party tracking that can track your behavior across multiple websites and apps. Google and Facebook are the leaders of that right now.”
Keeping the fox from designing the locks on the henhouse door is another challenge on the road to better regulating digital-privacy protections, says Sally Hubbard, the director of enforcement strategy at the Open Markets Institute.
“The government should not be consulting Zuckerberg on how to regulate Facebook,” she says. And she points out that the problem isn’t limited to Google or Facebook: Add Apple and Amazon, she says, and you have a list of four companies controlling every type of online industry.
Whether the company is Google promoting favored products in Search results, Apple refusing to allow third-party apps such as browsers to integrate with iOS the way Safari does, or Amazon aggressively selling its own products on its marketplace, “Companies are controlling the game and playing it too,” Hubbard says.
Whichever company “makes the operating system picks the winners and losers in the browser market. Every single one of these companies [is] doing what Microsoft did” with Internet Explorer in the late 1990s and early 2000s, she adds.
Ultimately, Hubbard says, more privacy regulation will be inevitable. And she does “think we will see something like the Microsoft [antitrust] case,” if not something like the breakup of Ma Bell.
“There are companies waiting in the wings with privacy innovations,” she says, “but they’re going to be killed if we don’t break up some of the gatekeepers.”