
The hidden power of social media algorithms

“Social media is not really a filter bubble but a boxing ring: you are both bonding with people with similar values, views and interests, and in conflict with people with opposing views.”

The quote is from techno-sociologist Zeynep Tufekci, today a professor at Columbia University.

I've recently read two books - "Stolen Focus: Why You Can't Pay Attention" by Johann Hari and "Traffic: Genius, Rivalry, and Delusion in the Billion-Dollar Race to Go Viral" by former BuzzFeed editor-in-chief Ben Smith - both of which strongly reminded me of an excellent talk Tufekci gave in Oslo in late 2019.

I almost always take notes on such occasions, but for some reason I didn't blog about the talk back then. Since it's still relevant and interesting (re: those books and ongoing debates), I thought I'd just post those notes now. The talk was called "The hidden power of algorithms".

The caveat? The following are my super-quick notes from a super interesting talk and debate. All speakers were very eloquent, but I've just jotted down the essentials.

ZT: I grew up in Turkey and it influenced my work. As a kid I was very interested in maths, physics etc. I picked a practical topic that could give me a job with no political dilemmas - so I majored in programming.

That turned out to be very wrong: I now find myself in the middle of all these ethical dilemmas.

I worked for IBM in Turkey. There had been a coup in Turkey, so the media was heavily censored. But through my work I could go on IBM's intranet and ask all my questions - people from all around the world would answer them.

I thought: If we can talk to one another like this it will change everything - and it did.

Internet opened up the world for me.

The Internet was designed for a community of people who trusted each other: academics at CERN.

Very few, if any, of them had any idea this would end up being the infrastructure of the global internet. I don't think there is a single scientific breakthrough that doesn't come with problems.

A couple of things happened, like the ad financing of the public sphere. There was no intention of creating something bad. I know one of the people who invented the ad model - Ethan Zuckerman - and he's very sorry.

There are countries where Facebook is the internet, like Indonesia.

To make the ad model effective you need SCALE. You need millions of people and you need to be everywhere.

A lot of data is the other thing you need - in order to personalize recommendations etc.

These issues create the whole infrastructure problem.

The business model locks in all of this.

We also have AI. When people say AI today, they almost always mean machine learning or deep learning.

In machine learning we feed in enormous amounts of data and tell the machine to go figure it out / optimize against it - and we don't really understand what it learns.
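
A minimal sketch of that "feed in data, optimize a metric" loop, in plain Python. The dataset, the metric and all the names here are invented purely for illustration - real systems do the same thing with millions of parameters instead of two.

```python
import math
import random

random.seed(0)

# Toy data (invented): x = hours watched, y = 1 if the user came back next day.
data = []
for _ in range(200):
    x = random.uniform(0, 5)
    y = 1 if x + random.gauss(0, 1) > 2.5 else 0
    data.append((x, y))

w, b = 0.0, 0.0  # the parameters the machine "figures out"

def predict(x):
    # Logistic regression: squash w*x + b into a probability.
    return 1 / (1 + math.exp(-(w * x + b)))

for epoch in range(1000):
    for x, y in data:
        p = predict(x)
        # Gradient step on log-loss: nudge w and b so p moves towards y.
        w -= 0.01 * (p - y) * x
        b -= 0.01 * (p - y)

# The objective gets optimized, but the "reasoning" is just opaque numbers.
print(f"w={w:.2f}, b={b:.2f}, p(returns | 3h watched)={predict(3):.2f}")
```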

The turning point for machine learning was 2012, when a major paper/study about identifying cats was published.

(In 2012, Google made a breakthrough: it trained its AI to recognize cats in YouTube videos. Google's neural network - software that uses statistics to approximate how the brain learns - taught itself to detect the shapes of cats and humans with more than 70% accuracy. That was a 70% improvement over any other machine-learning approach at the time.)

Google then starts using this technology to make money - to optimize YouTube to keep you on the site longer. It's a story of how YouTube "lost its mind".

I researched Trump on YouTube, including watching his rallies to get quotes accurate - and then YouTube "lost its mind" and started recommending white-supremacist stuff.

I started doing the same type of research with other candidates, like Hillary Clinton or Bernie Sanders - and a similar thing happened: YouTube did not recommend white-supremacist stuff, but it was sending me towards left-wing conspiracy stuff.

What's happening here is that the machine-learning algorithm has figured something out: conspiratorial, polarising content is ENGAGING and keeps people on the site longer.
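
To make that concrete, here's a hypothetical toy sketch - mine, not YouTube's actual system - of what a recommender optimized purely for engagement does: rank candidates by predicted watch time and nothing else. The titles and scores are invented.

```python
# Invented candidate videos: (title, predicted watch minutes).
# The scores are made up, but mirror the point above: outrage and
# conspiracy content tends to score highest on engagement metrics.
candidates = [
    ("Local news recap", 2.1),
    ("Calm policy explainer", 3.4),
    ("SHOCKING conspiracy EXPOSED", 9.7),
    ("Cooking tutorial", 4.0),
    ("Outrage compilation", 8.2),
]

def rank_by_engagement(videos):
    # The optimizer knows nothing about truth or civic health -
    # only the single metric it was told to maximize.
    return sorted(videos, key=lambda v: v[1], reverse=True)

for title, minutes in rank_by_engagement(candidates):
    print(f"{minutes:4.1f} min  {title}")
```

Whatever scores highest on that one metric wins the recommendation slot, regardless of what the content actually is.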

All social media platforms, like Facebook and Twitter, are designed to keep me on the page as long as possible. Social media is not really a filter bubble but a boxing ring: you're both bonding with people with similar values, views and interests, and in conflict with people with opposing views.

Machine learning can do wonderful things - but you can also do all sorts of analysis at scale (like inferring sexual orientation), and governments are going to use this for social control.

YouTube has curbed some of this after various research came out. But you can only curb it to a certain degree if you want to optimize a site for engagement.

These platforms are building a public sphere designed to keep us on the site - using data, AI, machine learning - and it's not a healthy public sphere.

This is also happening in countries lacking the democratic infrastructure we have, like Indonesia. Facebook and social media platforms do not create the ethnic conflicts there, but they are adding fuel to existing conflicts and exacerbating them.

I think this is super-fixable. We've solved much tougher problems before.

On the ashes of World War 2, a lot of thought went into building democratic institutions to prevent it from ever happening again.

I don't think GDPR is going to fix this. GDPR is focused on individual consent: can I get your data? At the individual level it can make sense to give away your data, say for a new avatar.

The public sphere is something we all live in. If we leave it to every individual to decide whether to drive a car, we are going to get pollution - individually reasonable choices add up to collective harm.

I would like to see us regulate social media at the public-goods level. People say data is the new oil. We cannot allow this very lucrative, accidental business model to be our public sphere.

The people who go into computer science are often people who like to solve closed puzzles - the opposite of what today's technology really is like.

We need a new kind of education that teaches both.

Debate:

Professor Petter Bae Brandtzæg: On FB we are triggered to use System 1 thinking (re: Kahneman's ideas on thinking Systems 1 and 2) all the time. Even intelligent people do stupid things on FB.

ZT: When FB switched to the engagement model / recommendation engine, their numbers went way up. It absolutely increases the time people spend on the platforms.

I think we see subsets of people - some (vulnerable) people spend WAY more time, which drags up the average. In the US, middle schoolers often get Chromebooks - it's very hard to disable YouTube on them. There's a survey showing YouTube is THE go-to thing for middle schoolers.

Some may not be affected by the engagement model (like, they have parents who counterbalance it etc.), but we'll have kids whose parents are not present. Poor countries with little awareness and few democratic institutions - this is going to affect vulnerable populations and people.

Professor Bente Kalsnes: It's mainly young people who get their news on FB. Fake news tends to be more exciting, more engaging etc. On FB we see more focus on friends than on who published the news and what the source is.

ZT: I don't think Zuckerberg should be the one to decide what we get to see in the public sphere. I think this is a political issue, not a decision to be made by the CEO of one company.

If FB finds a snippet of copyrighted content, they'll take it down ASAP. But extremist content? I think FB has gotten better, at least for English-language content.

It's super complicated, but compare it to other industries we have regulated - say food, or adding lead to food to preserve it - social media is a very regulatable industry. Having a healthy public sphere is a political issue.