3x37: I Will See Them In The Fire

Stuart Langridge, Jono Bacon, and Jeremy Garcia present Bad Voltage, in which whistles are blown, audio is twitched, and:

  • [00:02:18] A whole show on one topic: Facebook and content moderation. Since Frances Haugen left Facebook for the life of a whistleblower, filing complaints with federal law enforcement alleging that Facebook knows it amplifies hate, misinformation, and political unrest and conceals that knowledge, this discussion has been in the news a lot. We want to look at it from two different angles: policy and process. Obviously Facebook could choose to do better; if they do not, what, if anything, can be done to make them? Regulation may be required, but what should that regulation say? And if we take the more charitable view that Facebook actually do want to fix the problems that hate speech causes on their platform, and by extension for the world: is it even possible to do so? Facebook are in the business of spreading people’s thoughts, especially those which produce strong reactions. Is it possible for them to fix, or even meaningfully alleviate, this problem while continuing to be Facebook?

Come chat with us and the community in our Slack channel via https://badvoltage-slack.herokuapp.com/!

Download from https://badvoltage.org

News music: Long Live Blind Joe by Robbero, used with attribution.

Thank you to Marius Quabeck and NerdZoom Media for being our audio producers!

I see a handful of obvious steps that would impact something like Facebook, but wouldn’t overburden an “insurgent” competitor like someone running a Diaspora pod.

  • Ban advertising. It was almost banned in the 1960s in the United States, and the reasons that the Supreme Court didn’t do it are all obsolete. Without advertising, social media no longer cares about “engagement,” because you can’t make money by making people frightened enough that they’ll click on the erectile dysfunction ad in hopes of feeling less…well, impotent.
  • Unwind the mergers and put roadblocks in the way of future ones, so that no company can buy its way into controlling parts of the lives of billions of people.
  • Require web services to follow “common carrier” rules (in the telecommunications sense), meaning the company can’t forbid non-abusive ways of using the network. If someone wants to write a script to export someone’s entire public profile or unfollow everybody, that should be the user’s choice (a rough sketch of such an export script follows this list).
    • Obviously, the “advanced” version of this is to mandate a public API, but that also gives the company more control than just allowing users to scrape the site and puts smaller companies at a slight disadvantage.
  • Moderation decisions need to be transparent within some time frame, and there needs to be a public appeals system that explains things to the community. As an example, I was once banned from a minor social media site because some troll wrote a script to report hundreds of my comments as spam. There was no notice and no way to find out what had happened without creating a second account (in violation of the terms of service, no less) to do some investigating. The appeals process was also a black hole, with no explanation, apology, or even notice that I was allowed back on. That should be unacceptable.
  • Remove the “existing/anonymized data” exemption from institutional review board mandates (at least for internet services), and (maybe) add a requirement that all such research on users be conducted in public. One of the reasons that Facebook gets to be a hive of pro-violence disinformation, after all, is that Facebook decimated the journalism ecosystem by defrauding news outlets with their “pivot to video” pitch, which apparently just lied about the data.
  • Users should be allowed to opt out of “the algorithm.” I should be able to log in and get a linear record of posts created by people I follow, with no attempt to guess what I “want” to “engage” with; a sketch of such a feed also follows this list. I suspect that this is the big key, because it unravels the outrage feedback loops. Those using “the algorithm” should be able to see clear metrics on why they’re being shown what they see, so that users (or, more realistically, researchers) can see what the system finds important.
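To make the common-carrier point above a little more concrete, here is a minimal, purely hypothetical sketch of the kind of export script a user might run against a public profile. The URL, page markup, and CSS selectors are invented for illustration; a real site would have its own markup (and today most sites forbid exactly this in their terms of service).

```python
# Hypothetical sketch: exporting the visible posts from a public profile page.
# The URL and CSS selectors below are placeholders, not any real site's markup.
import json

import requests
from bs4 import BeautifulSoup


def export_public_profile(profile_url: str, out_path: str) -> None:
    """Fetch a public profile page and save its visible posts as JSON."""
    html = requests.get(profile_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    posts = [
        {
            # Assumes each ".post" node contains these child elements.
            "author": node.select_one(".author").get_text(strip=True),
            "timestamp": node.select_one("time")["datetime"],
            "body": node.select_one(".body").get_text(strip=True),
        }
        for node in soup.select(".post")
    ]
    with open(out_path, "w", encoding="utf-8") as fh:
        json.dump(posts, fh, indent=2)


if __name__ == "__main__":
    export_public_profile("https://example.social/users/me", "my_profile.json")
```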
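And here is what the “no algorithm” option in the last bullet could amount to, as a minimal sketch with made-up field names: every post from accounts you follow, newest first, with no engagement scoring at all.

```python
# Minimal sketch of an algorithm-free feed: posts from followed accounts,
# newest first, no ranking or engagement prediction. Field names are invented.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List, Set


@dataclass
class Post:
    author: str
    created_at: datetime
    body: str


def linear_feed(posts: Iterable[Post], following: Set[str]) -> List[Post]:
    """Return followed authors' posts in reverse-chronological order."""
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.created_at,
        reverse=True,
    )
```

A ranked feed would swap the sort key for an engagement score; the “clear metrics” ask is essentially that the inputs to that score be visible for each post shown.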

Probably most important, though, is that people need to stop giving Facebook the benefit of the doubt. They consistently make decisions that are terrible for their community, even when it doesn’t make them much money. They started out as a tool for Zuckerberg and his creepy friends to rate women, and, surprise!, that misogynist pile of crap has gone on to actively try to get rid of privacy (more than a decade ago) and now openly supports fascists and conspiracy theorists. They’re not here to help people…

I can’t remember where I heard this (@jeremy alluded to it), but it was said that Facebook would love regulation because:

  1. Their lawyers can fight regulation, and Facebook has more than enough money to hire enough lawyers;
  2. It would have to apply to all social media, so Facebook spends less time and money defending themselves; and
  3. Implementing regulation costs money, so it protects rich incumbents like Facebook from competition.

The idea I like best is a Bring Your Own Data model, where you could link your info (including your stream of posts) into a social network. The network would then only be the interface: it would choose what to show, how to show it, and how you interact with it. I’m not sure whether your friends list should be part of your info or part of the network’s (or perhaps a bit of both).
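A rough sketch of how that split might look, with all names invented: the user owns their data (posts, and maybe the friends list), and a “network” is just an interchangeable interface that decides presentation.

```python
# Rough sketch of the "Bring Your Own Data" split: the user owns the data,
# the network only decides presentation. All names here are invented.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class UserData:
    """Everything the user brings with them, portable between networks."""
    handle: str
    posts: List[str] = field(default_factory=list)
    friends: List[str] = field(default_factory=list)  # or this could live network-side


class NetworkInterface:
    """A network is just a way of choosing and presenting posts it doesn't own."""

    def __init__(self, ranker: Callable[[List[str]], List[str]]):
        self.ranker = ranker

    def render_feed(self, data: UserData) -> List[str]:
        return self.ranker(data.posts)


# Two "networks" over the same user-owned data: one in posting order, one reversed.
alice = UserData(handle="alice", posts=["first post", "second post"])
in_order = NetworkInterface(ranker=lambda posts: list(posts))
latest_first = NetworkInterface(ranker=lambda posts: list(reversed(posts)))
print(latest_first.render_feed(alice))  # ['second post', 'first post']
```

Switching networks would then mean pointing a different interface at the same user-owned data, rather than rebuilding your history inside someone else’s database.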