I’ve been meaning to write this post on the consequences and power of data, and to reflect on what power we have as designers to make things better. It’s becoming clearer as time goes on that some of the things we design (or technologies we facilitate with our design) on certain platforms are more dangerous than we may realize. There are structures we’ve helped build that need to be reevaluated immediately.
I’m also well aware that I’m probably preaching to the choir; I don’t know how to escape the echo chamber.
A political interlude
I reported a tweet for the first time recently. I categorized it as “abusive or harmful” and “threatening violence or physical harm”. It was one of a series of tweets from the United States President calling for the “liberation” of states that were imposing stay-at-home orders during a global pandemic. The tweet, and his many tweets and statements since, have been protected under the guise of free speech.
Parler is (or was) an app used by right-wing groups because they’re able to share violent, racist, and misogynist views and strategies under the guise of “free speech”.
Using the cover of free speech to endanger people’s lives is not what our founders had in mind when drafting the Bill of Rights.
To be clear, I believe in the concepts of freedom and liberty and the fact that our country was founded on those principles above all. However, I hold these principles alongside the belief that we’re all equal regardless of race, color, or gender. I believe in nonviolence and diplomacy, and I have faith in our democracy. I believe everyone should have the same opportunities to succeed in the United States, just as my parents did coming from a humble Italian-American upbringing in the Bronx.
Makes you wonder how the hell we got here, doesn’t it? We’ve now had our first social media president. A lot of factors produced the outgoing US President and the other populist figures cropping up around the world, but this is the first time we’re seeing the power of machine learning working to lift these figures up and amplify their voices.
The real problem
This isn’t a political post, however. I’m writing this because I’m worried about AI, and have been for a while. Mostly because of the black-and-white nature of corporate objectives, and the dangers of using algorithms to achieve them. And I’m worried that designers working on these platforms may not recognize the consequences of certain actions, or may not feel they have the power to voice their concerns.
As Jaron Lanier argues in Ten Arguments for Deleting Your Social Media Accounts Right Now, the main objective of companies like Twitter, Facebook, and YouTube is engagement (people viewing, clicking, and interacting on these platforms). These companies sell ads and data. Facebook advertisers will pay more money if more people see those ads, and more data means more profit for Twitter.
So, similar to Facebook’s algorithm, Twitter’s algorithm prioritizes posts based on what it predicts people want to see. Whatever causes the most rage tends to get a lot of people engaging with it, which means hate speech and, quite frankly, the a-holes on these platforms get more views and higher priority in the algorithm. One of Lanier’s reasons for deleting your social media accounts is that “it will turn you into an asshole”. At the very least, this system rewards bigotry and reinforces the behavior.
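The dynamic above can be made concrete with a toy sketch. This is not any platform’s actual code; the `Post` fields, weights, and the ranking function are all hypothetical. It simply shows that when the objective is predicted engagement alone, content that reliably provokes reactions wins by construction.

```python
# Illustrative sketch only: a feed ranker whose objective is pure
# predicted engagement. Nothing in the score asks whether the
# interaction is truthful, civil, or healthy.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click-through (0..1)
    predicted_replies: float  # model's estimate of reply volume (0..1)

def engagement_score(post: Post) -> float:
    # Hypothetical weights; the point is that both signals reward
    # provocation equally well as they reward quality.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_replies

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, factual explainer", predicted_clicks=0.10, predicted_replies=0.02),
    Post("Inflammatory hot take", predicted_clicks=0.35, predicted_replies=0.50),
])
print(feed[0].text)  # the inflammatory post ranks first
```

If outrage drives clicks and replies, an objective like this surfaces outrage; no one has to intend that outcome for it to happen.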
I deleted my Facebook account in a rage in 2019, mainly over their disregard for our privacy and their ignorance of the proliferation of hate speech and fake news on the platform. It’s reassuring to see that they’ve recently taken steps toward limiting dangerous content, but it’s so far from being enough.
It’s not all bad. There are many wonderful consequences of social media, but the business model inherent to these platforms guarantees that the problem will not go away.
So what can we do
There’s a recent episode of “The Daily” (a New York Times podcast) that interviews one of the creators of the YouTube algorithm, Guillaume Chaslot. He thinks he was fired from Google because he was spending too much time on his side project, which was to improve the YouTube algorithms to present content that would improve human life rather than presenting the same things over and over.
His current project is a bot that analyzes the opaque YouTube algorithm to surface its most-recommended videos overall. After a quick look, most of the top videos came from right-wing commentators or religious zealots. That must be why there are countless stories like this one of Americans being radicalized by the platform.
I’m using this example because Chaslot had the courage to risk his job in order to make the world better. We’re not all in a position to do that, but there’s more power when everyone has the courage to call out what’s wrong together. Then companies will know there won’t be a complacent designer out there to replace you.
Following Lanier and Chaslot, how could we re-architect the algorithms, or the business goals themselves, so that society-centered voices and truth are amplified instead?
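One way to think about that question is as a change of objective. The sketch below is purely illustrative: the signals (`source_credibility`, `toxicity`) and weights are assumptions, standing in for whatever a platform could actually measure. The point is structural — engagement stays in the objective, but it no longer gets to be the whole objective.

```python
# Hypothetical sketch: a ranking objective that still values engagement
# but explicitly rewards credibility and penalizes toxicity. All signals
# and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # 0..1, model estimate
    source_credibility: float    # 0..1, e.g. from a fact-check history
    toxicity: float              # 0..1, e.g. from a toxicity classifier

def society_centered_score(post: Post) -> float:
    # Engagement still matters (the business has to survive), but it is
    # balanced against credibility, with toxicity as an explicit penalty.
    return (0.4 * post.predicted_engagement
            + 0.4 * post.source_credibility
            - 0.2 * post.toxicity)

posts = [
    Post("Outrage bait", predicted_engagement=0.9, source_credibility=0.1, toxicity=0.8),
    Post("Well-sourced reporting", predicted_engagement=0.5, source_credibility=0.9, toxicity=0.05),
]
best = max(posts, key=society_centered_score)
print(best.text)  # the credible post now outranks the bait
```

Under a pure engagement objective the outrage post would win; change the objective and the same ranking machinery surfaces different content. The hard part, of course, is that credibility and toxicity are far messier to measure than clicks.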
We’ve all seen The Social Dilemma by now, and these realizations are becoming more mainstream, but designers of these platforms have an outsized responsibility. We have more power than we realize, and we should focus on solutions rather than design anything that could make this mess of a situation any worse.