This week’s edition of the Nonzero newsletter was quite interesting. It chronicles Mark Zuckerberg’s shenanigans going back to his Harvard days. The post paints Zuck almost as a modern-day Darth Vader. Of course, you might agree or disagree depending on your vantage point, and I don’t want to get into passing moral judgements on his character.
But here’s an interesting passage that kinda gives a sense of what I’m talking about:
And after the ConnectU folks lobbied the Harvard Crimson to write a story about their claim that Zuckerberg had stolen their idea, Zuckerberg used private Facebook login data to hack email accounts so he could see what the two Crimson journalists working on the story were saying about it. He once texted a Harvard friend about how much power Facebook gave him: “if you ever need info about anyone at harvard just ask. i have over 4000 emails, pictures, addresses… people just submitted it. i don’t know why. they ‘trust me.’ dumb fucks.”
I could go on, but you get the idea: If we could design a moral gyroscope to guide the man who runs the most powerful conglomeration of social networks in the world (Facebook and Instagram, not to mention WhatsApp), it probably wouldn’t be an exact replica of Zuckerberg’s moral gyroscope.
More importantly, the post references a new study by Steve Rathje, Jay J. Van Bavel, and Sander van der Linden on the role of social media platforms in political polarization. A few interesting findings:
1. An analysis of Twitter accounts found that people are increasingly categorizing themselves by their political identities in their Twitter bios over time, providing a public signal of their social identity (27). Additionally, since sharing behavior is public, it can reflect self-conscious identity presentation.
2. Messages that fulfill group-based identity motives may receive more engagement online. As an anecdotal example, executives at the website Buzzfeed, which specializes in creating viral content, reportedly noticed that identity-related content contributed to virality and began creating articles appealing to specific group identities.
3. First, we looked at the effect of emotional language on message diffusion. Controlling for all other factors, each additional negative affect word was associated with a 5 to 8% increase in shares and retweets, except in the conservative media Facebook dataset, where it decreased shares by around 2%. Positive affect language was consistently associated with a decrease in shares and retweet rates by about 2 to 11% across datasets.
4. Across datasets, each political out-group word increased the odds of a retweet or share by about 67%. The average percent increase in shares of political out-group language was about 4.8 times as large as that of negative affect language and about 6.7 times as large as that of moral-emotional language.
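To get a feel for what these numbers mean in practice, here’s a rough back-of-the-envelope sketch in Python. The per-word odds ratios come from the findings quoted above; everything else is my own assumption, not the study’s: the 1% baseline share probability is made up for illustration, and I’m assuming each additional word multiplies the odds independently, the way coefficients compound in a logistic-style regression.

```python
# A minimal sketch (not the study's actual model) of how per-word
# multiplicative effects on share odds compound.

def share_probability(baseline_prob: float, odds_ratio: float, word_count: int) -> float:
    """Apply a per-word odds ratio `word_count` times to a baseline probability."""
    odds = baseline_prob / (1 - baseline_prob)  # probability -> odds
    odds *= odds_ratio ** word_count            # each word multiplies the odds
    return odds / (1 + odds)                    # odds -> probability

BASELINE = 0.01       # hypothetical 1% chance a given post gets shared
OUTGROUP_OR = 1.67    # ~67% increase in odds per political out-group word
NEG_AFFECT_OR = 1.065 # midpoint of the 5-8% range per negative affect word

for n in range(4):
    p_out = share_probability(BASELINE, OUTGROUP_OR, n)
    p_neg = share_probability(BASELINE, NEG_AFFECT_OR, n)
    print(f"{n} words: out-group p={p_out:.4f}, negative affect p={p_neg:.4f}")
```

Even under these toy assumptions, the gap is stark: three out-group words roughly quadruple the share probability, while three negative affect words barely move it.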
This thread by Steve Rathje has a more exhaustive summary.
These results broadly confirm other studies. Negativity and adversarial posturing generate the most engagement on Facebook and Twitter. In that sense, we could say that social media platforms have an incentive to perpetually fuel outrage by algorithmically feeding users content that conforms to their priors.
And the end result is that people are slowly transforming into rage-filled monkeys engaging in pointless partisan arguments on social media.
This ties back to the previous post I wrote about the incoming regulatory assault on big tech and social media platforms. As we slowly begin to grapple with the enormous real-life consequences of these platforms, we’re bound to see regulatory actions, both good and ugly. No matter what, this won’t be pretty.