How Mark Zuckerberg can save Facebook from itself


Imagine you’re Mark Zuckerberg. I know, I know, it’s a far less appetizing thought than it would have been a few years ago, but still, put yourself in his shoes for a minute.

You know a few things for sure: (1) you created this world-changing platform that, despite all its flaws, has more followers (a.k.a. monthly users) than Christianity; (2) you had a critical insight about human nature – people really need other people – but you missed one that’s just as important – people are unpredictable; (3) you’ve learned the hard way that nothing you do can ensure that 100 percent of posts and content are respectful and reasonable; and (4) your own decisions have made it clear that while you have a lot of brains and even more money, you don’t have the judgment to handle this problem internally.

OK, odds are the self-awareness needed for that last one is a stretch, but regardless of whom Zuckerberg wants to blame, it’s impossible to argue that Facebook can anticipate every possible comment, decide what’s appropriate and what crosses the line, and remove the problematic content before it causes even more problems. Rather than endlessly devising new programs to maintain a semblance of responsibility and control, why doesn’t Facebook do what everyone in politics does by second nature: pass the buck?

Anyone else would be better at the job because when they screw up – and they invariably will – the screwup didn’t happen inside Facebook itself. Instead of Zuckerberg having to be coached on how to express empathy and disappointment, Facebook’s execs (well, those who are left) can shake their heads and say, “This is a serious breach of our trust. We expect X to do much better, and we plan to hold them accountable.” Doesn’t that sound a little less unpleasant than being the person blamed every single day?

Now imagine you’re a government agency or a nonprofit charged with removing problematic Facebook content. You may be savvier about human nature, so you know a lot of bad things are coming – but you’re still overwhelmed by the sheer volume of it. The first thing you do is carve out categories of content you don’t have to worry about, so the rest of the task becomes a little more manageable.

Does Facebook make money from problematic content too? Probably, but the real money is still in harvesting and selling the data of the people posting pictures of their hamster playing ping-pong. Sure, if more speech were restricted on Facebook, there’d be less data to monetize, but given the public beating Facebook keeps taking (as does its share price), it’s more than a worthwhile tradeoff.

Is anyone crazy enough to take Facebook’s money and assume responsibility for moderating its content? In a rational world, no. But in a world where someone is always looking for money, for attention (even bad attention), for relevance, yes, you’ll find plenty of people with impressive resumes and credentials willing to do it. They’ll tell themselves they’re saving free speech, or the internet. They’ll tell themselves that with the proper AI tools, this is a winnable fight. They’ll fail, all of the time – this is an impossible task, after all – but that won’t stop them from trying.

Is this a cynical solution for some of Facebook’s woes? Absolutely. But is society still better off if the fox isn’t guarding the henhouse? It is.

Instead of squirming when Sen. Elizabeth Warren, D-Mass., calls for splitting Instagram and WhatsApp from the mothership, ask her to go you one better: “You’re so confident in government’s ability to solve problems – take this one on too. It’s all yours.”