See no algorithm, hear no algorithm
Battered by increasing criticism of its recommendation algorithm, Facebook is trying a new approach: leveling with us about ourselves.
Welcome to The Unraveling, a weekly brainpurge that will cover current media issues and internet culture with a mix of theory, scholarship, and practice. And if that doesn’t work out, I’m going to run Ever Given’s social.
It’s easy to take for granted that the age of mass social media is barely more than a decade old, probably ready for braces but definitely not old enough to drive. Facebook came in 2004 but really didn’t break outside its college silo until a few years later. It also wasn’t the first social media product, but it did have a kind of reach and scale that marked a clean break from the era of MySpace, and Friendster, and a bunch of other sites that had their moments but never the staying power.
Facebook, then Twitter, then YouTube, then Instagram, and on and on and on. Definitely Vine. Remember Vine? The point is, these products have not been with us that long, and most of the first decade was spent building platforms with the main goal of steep growth that would justify investment. Along the way, these networks broke a lot of things that had been with us for a while, but they also have shown their power in moments when people at the margins harnessed new tools to get their voice out. The world isn’t better or worse because of Facebook, just a different mix of redistributed good and bad.
As these platforms have matured (and in some cases, died so that others could survive), we are at the point where we can start to assess what has changed. Three months, six months, a year … not enough time to really talk about altering the social fabric or the way we do work/play. More than a decade in, it’s become clearer what the social media age has wrought: what kinds of damage have been done, what we’ve gained, and what we need to work on.
I write all this with Nick Clegg’s recent post in mind. Clegg, the VP of Global Affairs at Facebook, wrote a 5,000-word missive on Medium that acknowledged some problems with his company’s platform but then pointed the finger back at the audience. What, you think a title like “You and the Algorithm: It Takes Two to Tango” was going to let users off the hook?
You should read the piece. It’s long, but it’s an interesting reframe of the issue for reasons I want to talk about today.
Most interesting about this piece is that it represents something of a shift in how Facebook talks about its platform to the public at large. In my own interactions with Facebook reps at conferences, I have almost always come away with the sense that they get it. They get the problems, the challenges, the moral quandaries, all of it. But that has always seemed at odds with how the top brass speaks publicly to major media and regulators. The most famous example is likely Mark Zuckerberg declaring after the 2016 U.S. election that:
“Personally I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way — I think is a pretty crazy idea. Voters make decisions based on their lived experience.”
It was a quote that was at odds with how many in his company talk. It was dismissive of mounting evidence that something was going very wrong on his platform, it was dismissive of scholarship on propaganda and group psychology, and it was pretty defensive. And in the end, he realized it too.
That’s just an example, and to be fair Facebook’s corporatespeak has been all over the map on public criticism of its product. It has gone through cycles of dismissiveness, open-mindedness, contrition, communitarianism, and hardcore libertarianism when it comes to the public debate on the platform’s effects on all of us.
Clegg’s piece, then, could just be another pivot at a time when Congress is starting to seriously look at regulations that would restrict Facebook’s business. But in reading it with a skeptical eye, there seems to be some realization on his part that one of Facebook’s biggest challenges right now is that users largely don’t know how the product works on the back end, and that this is exactly the kind of thing that makes the public take up pitchforks when it needs something to blame.
Facebook, like a lot of social platforms, uses algorithmic decision-making to decide what you see. It capitalizes on one of the internet’s chief strengths—the ability to personalize your experience—by creating a custom experience that is aligned with your interests and habits. Think about it this way: a major news outlet like The New York Times acts as a singular gatekeeper for millions by virtue of what it covers or what it puts on the front page, and while Facebook’s algorithm is a gatekeeper as well there isn’t a singular decision or front page for all its users. One NYT front page vs. nearly three billion front pages, all based on your networks, interests, and habits.
Algorithms are computer programs, of course, but they are designed by humans and reflect human values and choices. There is an inherent tension here: most of what we call “Facebook” or “Instagram” is automated on the daily but largely constructed by more philosophical and data-driven human decisions. Engineers aren’t hand-picking a particular post to show up in your feed on a given day, but that post showing up is the result of programming choices Facebook’s engineers have been making and updating over the course of years.
Some of Clegg’s piece, then, seems aimed at doing that baseline bit of education. Media nerds like me know more about how these products deliver content, but the public has to do a lot more work to educate itself about basic things like “how come I see more of this person’s posts than that person’s.” It is a lot of time and reading and thinking about things that people in the public largely don’t consider much in their day-to-day life, and that is fertile ground for ignorance about how the experience of social media works.
But past that, Clegg explains a bit about the complexity of the algorithm, building to a specific point: what you see is what you choose:
You are an active participant in the experience. The personalized “world” of your News Feed is shaped heavily by your choices and actions. It is made up primarily of content from the friends and family you choose to connect to on the platform, the Pages you choose to follow, and the Groups you choose to join. Ranking is then the process of using algorithms to order that content.
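That two-step process Clegg describes — a pool of content defined by your choices, then ordered by an algorithm — can be sketched as a toy scoring function. To be clear, everything below is invented for illustration (the signal names, the weights, the affinity idea as a single number); Facebook’s actual ranking system reportedly weighs thousands of signals, none of which are public in this form.

```python
# A toy sketch of personalized feed ranking -- NOT Facebook's actual
# algorithm. All signals and weights here are hypothetical.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    comments: int = 0
    shares: int = 0


def rank_feed(posts, affinity, weights=None):
    """Order candidate posts by a weighted engagement score, scaled by
    the viewer's affinity for each author.

    `affinity` stands in for the viewer's past choices: who they
    friended, whose posts they click on, which Groups they joined.
    Two viewers with different affinities see different orderings of
    the same candidate pool -- that's the "personalized front page."
    """
    if weights is None:
        weights = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

    def score(post):
        engagement = (weights["likes"] * post.likes
                      + weights["comments"] * post.comments
                      + weights["shares"] * post.shares)
        # Unknown authors get a small default affinity, so the feed
        # is dominated by accounts the viewer has chosen to engage with.
        return engagement * affinity.get(post.author, 0.1)

    return sorted(posts, key=score, reverse=True)
```

Run it with two different viewers and the same candidate posts come out in a different order: a viewer who interacts heavily with `alice` sees her post first even if `bob`’s post has far more raw engagement. That is the crux of Clegg’s “it takes two to tango” framing in miniature.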
It’s important to see the point here. It’s easy to blame “the algorithm,” but to the extent that the algorithm serves up toxic content, it’s useful for a user to ask whether that really is the content they’re being served, and if so, what about their own behavior is causing it to be served.
It’s not difficult to see this argument cynically. Clegg spent some time in this piece talking about how we can’t and won’t go back from personalization now that we’ve tasted it for ourselves, and that by extension the worst of the content people cite on Facebook is the result of poor user choices. I can understand that sentiment. It is in Facebook’s interests to blame-shift and ignore one of the arguments implicit in his piece: that Facebook can (but doesn’t) put a thumb on the scale and suppress or ban certain types of content. That it won’t is a philosophical choice that the company tends to selectively own in some spaces and let work in the background during other arguments.
But just to step back for a second: assuming Facebook has little interest in getting into larger-scale moderation of toxic content and speech, that implies the work will have to be done by the public. And to that extent, Clegg isn’t wrong. Algorithms that learn from us are going to amplify the best and worst parts of ourselves. But it’s critical to note the list of things he details in his piece: it’s not just posts and comments. It’s other types of behaviors both on and off Facebook. When you start to put that picture together, Facebook’s ranking of content is not based on individual slices of ourselves but a fairly full picture of our online activity writ large.
This is where it gets tricky in the public conversation, because how you see Clegg’s argument could easily hinge on your own particular take on humanity in general and what we are capable of deep inside. As an Xer I’m more primed to see us all as flawed and capable of our worst without some system to check ourselves (and in true Xer fashion, we also expect the system itself to fail us). Others will take a more optimistic view, that people are basically good but can be led down bad roads.
I’m not here to play spiritual adviser, but rather to point out that one of Facebook’s challenges is it has to sell “Well, You’re Partly the Problem!” to an audience that might not fully accept that logic because the content delivery algorithm is invisible and opaque to the public. And look, I’m probably suspicious of Clegg’s larger goal here because any sort of successful blame-shifting is good for Facebook’s goal of avoiding wholesale regulation reform. But regardless of how this plays out, Facebook seems to have at least realized that many of its users A) don’t know how content surfaces as they use the product and B) don’t realize how much culpability they bear for a toxic experience.
I’m not sure this will be a winning argument, but I do think it’s possible to simultaneously see the self-serving interest in Clegg’s piece while also noting that taking what he says at face value can create a better experience for us. That is, there is some wisdom there in how to break the algorithm and teach it to give us something better.
A couple thoughts on that.
First, altering behaviors in general. Just as putting up boundaries around toxic relationships and people is healthy offline, it’s healthy on social media too. Surfing and engaging online with more purpose would help. This could involve muting friends or topics, cutting loose problematic social media connections, or not clicking on that sketchy story.
Second, implicit in what he says is that it’s important to see our role as amplifiers online. We contribute to the ranking system by what we choose to read and then share (or reshare). The reason people see toxic stuff isn’t always the algorithm’s fault. Sometimes it’s just a straight reshare from our own accounts. Facebook’s algorithm is a gatekeeper, but so are we by virtue of what signals we amplify to our friends. For 10 years now I’ve been trying to educate students about the power they hold in their hands when they post. It’s easy to dismiss a single post or choice, but in aggregate they matter a great deal.
In that sense, I am comfortable taking the view that Clegg’s piece might be self-serving but still could be an important education in helping users create a better experience. In an ideal world, the company would help us on this more (and in that post, Clegg is promising some type of tool that lets users modify their ranking algorithm a bit). But maybe it’s on us when it comes down to it, just like we have choices about what or whom to pay attention to every day. That kind of knowledge can’t hurt, at least.
Media stories I’m keeping an eye on this week:
March 31 was International Transgender Day of Visibility, and last weekend we got a reminder of what happens when social platforms built on user votes have to deal with their rules being hijacked. Check out this dizzying story on Wookieepedia, one of the most famous Star Wars fan sites online, and how a move to stop “deadnaming” transgender creatives on the site ended up running afoul of the democratic processes the site used to decide major policies.
Donald Trump, kicked off Twitter and most everywhere else, says he’s launching his own social network. I remain in the skeptical camp that he’s going to be hands-on about this if anything launches, and still think licensing his name to an existing conservative platform is more likely. All the cash payments without the risk or difficult work building a social platform. It’ll be like Trump Steaks, except with Likes. Trump Likes? Need to workshop this.
I’ve been waiting for someone to write a solid story about how people are getting agitated by friends jumping vaccine lines and posting about it on social media, and WaPo finally gave us a good one. It was inevitable that the age of oversharing on social media would collide with inequitable vaccine distribution plans.
Related to last week’s newsletter about Asian journalists being taken off stories about anti-Asian violence, I wanted to add this story about Washington Post reporter Felicia Sonmez. She noted last weekend that she was taken off of stories about sexual assault because she has been public about being an assault survivor, and the Post reversed itself a couple days ago. The problem isn’t new, and a wholesale rethinking of who objectivity is for is an important conversation newsrooms need to have.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Twitter at @jeremylittau.