In a defining moment for Facebook, CEO Mark Zuckerberg goes before Congress on Tuesday to admit the social media giant failed to protect users and American democracy – and to promise significant changes.
But given all the damage already done and the past lack of disclosure, can we still trust Facebook to protect our privacy? And are the fixes too little, too late?
Facebook is facing severe blowback. Its stock is taking a pounding, losing more than $100 billion in market value since Feb. 1. And even sympathetic lawmakers in its Silicon Valley backyard are warning that if Facebook can’t, or won’t, police itself, Congress will step in.
Perhaps that time has come. While more government regulation isn’t ideal, these issues are too important to ignore – not just for Facebook, but for the entire internet.
Some in Congress are proposing rules similar to the European Union’s far more stringent privacy regulations, which take effect May 25. They require companies to obtain informed consent to collect data, and separate consent for each way that information will be used. Users must also be able to revoke their consent and to retrieve their data. The requirements are backed by huge fines – as much as 4 percent of a company’s worldwide annual revenue.
Under that intense pressure, Facebook is making some moves in the right direction. On Thursday, it announced that it would give users worldwide access to privacy settings that follow the EU’s standards. And Friday, Facebook announced it will require people who want to buy political or issue ads to verify their identities.
Still, the world’s 2.2 billion Facebook users are getting a rude awakening: They’re not the customer; they – and their personal information – are the product. The real customers are the advertisers and other firms that use that data. Given that business model, how much can Facebook really limit access to all that information? Indeed, Facebook COO Sheryl Sandberg said last week that if users wanted to opt out entirely of having their data used by advertisers, they would have to pay for the service.
That’s one of many questions Congress should demand that Zuckerberg answer. After years of resisting calls to appear, he is to testify Tuesday before a joint meeting of the Senate Commerce and Judiciary committees, whose leaders he met privately Monday. Wednesday, he is set to appear before the House Energy and Commerce Committee, which released his prepared statement in advance.
In it, Zuckerberg says Facebook is “an idealistic and optimistic company” that focused on the good that can come from connecting people, such as organizers of the #MeToo movement and the student-led marches against gun violence.
But it’s clear now, he says, that Facebook didn’t do enough to prevent harm, including fake news, Russian interference in the 2016 presidential election, hate speech and privacy violations. “We didn’t take a broad enough view of our responsibility, and that was a big mistake,” Zuckerberg plans to say.
Zuckerberg is pledging to limit app developers’ access to users’ data, including religious and political views; to reduce the information that users hand over to apps; to more aggressively detect and remove fake accounts; and to add 5,000 more staffers working on security and content review.
Facebook is starting to alert as many as 87 million users, mostly in the U.S., whose information was improperly shared with Cambridge Analytica, which used the data to help its clients – including Donald Trump’s campaign – target voters. About 126 million people may have seen content from the Internet Research Agency, a Russian troll farm.
That’s a lot of harm that Facebook can never truly repair. It still has a chance to earn back the trust of the public and Congress, but time is fast running out.