WHY ARE AMERICANS TOLERATING THIS? Facebook spied on private messages of Americans who questioned 2020 election.

Stewart Baker comments:

But as the story is written, it has one big problem. The conduct it describes would violate the law in a way that neither the FBI nor Facebook would likely be comfortable doing. Federal law mostly prohibits electronic service providers from voluntarily supplying customer data to the government.

What’s more, Facebook has issued a denial. A very careful denial. It says that “the suggestion we seek out people’s private messages for anti-government language or questions about the validity of past elections and then proactively supply those to the FBI is plainly inaccurate and there is zero evidence to support it.”

A compound denial like that often signals that some portion, or a slight variation, of the denied claim is true. Thus, if Facebook is screening for something just a bit more alarming than “anti-government language or questions about the validity of past elections,” the denial is inoperative.

The Post tries to square the denial with its story by suggesting that the FBI has recruited a Facebook employee as a confidential human source (CHS). I doubt that. Being a CHS doesn’t mean you can do things with your employer’s data that your employer can’t do. And I doubt the FBI would feel free to evade a limit on its investigative power by using a CHS this way.

But there is a provision of federal law that allows electronic service providers to volunteer the contents of communications to law enforcement. To do so, they need to believe “in good faith … that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency.” 18 USC 2702(b)(8).

So, Facebook and other Silicon Valley companies could have developed an AI engine to search for strings of words that its legal department has precleared — in good faith — as evidence of an emergency involving a danger of death or serious injury.
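To make the mechanism concrete, here is a minimal sketch of what such a precleared-phrase filter might look like. Everything in it is invented for illustration — the phrase list, the messages, and the function names are hypothetical, not anything Facebook is known to run:

```python
# Hypothetical sketch of a "precleared phrase" screen: flag a message for
# human review (as a possible 2702 emergency disclosure) only if it contains
# a phrase the legal department has signed off on as evidence of a danger
# of death or serious injury. All phrases and messages below are invented.

PRECLEARED_PHRASES = [
    "bring weapons to",
    "going to kill",
]

def flag_for_review(message: str) -> bool:
    """Return True if the message contains any precleared phrase."""
    text = message.lower()
    return any(phrase in text for phrase in PRECLEARED_PHRASES)

messages = [
    "I question the validity of the 2020 election.",  # protected speech: no match
    "We should bring weapons to the rally.",          # matches a precleared phrase
]
flagged = [m for m in messages if flag_for_review(m)]
```

Even this toy version shows the design questions Baker raises: everything turns on who writes the phrase list, and naive substring matching will sweep in sarcasm, quotation, and hyperbole along with genuine threats.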

I question the “good faith” part.


Any mass effort to find “bad” speech on a big social media platform is bound to make a lot of mistakes, as all students of content moderation know.

And, as with content moderation, no one would be surprised if mass Silicon Valley criminal referrals were biased against conservatives. (That bias would be built in if Justice is using an existing grand jury tied to January 6 to generate the subpoenas.)

So, assuming I’m right, it’s fair to ask how any such effort was designed, how aggressively conservative complaints were turned into emergency threats to life and limb, who’s overseeing the process to prevent overbroad seizures of legitimate speech, and whether the same thing could be done to Black Lives Matter, environmental groups, animal rights campaigners, and any other movement whose more extreme followers have sometimes lapsed into violence.

There should be serious consequences for this, but apparently consequences are only for the little people. Which over time is likely to backfire unpleasantly.