At a time when political polarization is a growing problem on social media, WashU data scientist Jean Springsteen is working on a way to bring down the temperature while still getting buy-in from social media companies.
Springsteen, a graduate student in the Division of Computational and Data Sciences at the McKelvey School of Engineering, works with William Yeoh, an associate professor of computer science and engineering, as well as with Dino Christenson, a professor of political science in Arts & Sciences.
In an interview for the school’s “Engineering the Future” podcast, Springsteen talked about her work with science writer Shawn Ballard.
The following has been edited and condensed. Listen to the full interview on the “Engineering the Future” website.
Working at that intersection of political science and computer science, what are some of the big questions in that space right now?
One we hear about in the news a lot, as AI (artificial intelligence) becomes more accessible through LLMs (large language models) like ChatGPT, is the responsibility that comes with training those models, right? That’s very intersectional, as is the place of AI in health care, education, all of these disciplines. That’s probably one of the biggest questions right now.
Another big question, and the one I’m focused on, is at the intersection of computer science and political science. Social media companies use recommendation systems to decide what to show us when we log on to Facebook or Twitter or Instagram; those systems determine what is on our social media feed. That’s the computer science side of it, right? But the impact of what we see on social media goes far beyond computer science, and that’s the multidisciplinary part. In political science: What impact does it have on our elections, on people’s ideology?
So, that is a pretty big question at that intersection. Because social media impacts the way we interact and the way we think, what we see on social media impacts what we talk about at Thanksgiving dinner.
The algorithms that are behind your Google search or what’s on your social media feed, those feel very personal. Can you talk about that tension a little more?
Unfortunately, the algorithms behind what we see when we log on to social media are really impersonal. Social media companies aren’t trying to show me my new niece and nephew, right? They’re not interested in showing me that. They’re interested specifically in showing me what they think will keep me engaged on the platform. They are interested in profit and user engagement.
They might be showing me incendiary content or misinformation because their algorithms, through their machine-learning techniques, are learning that that’s what drives user engagement, not necessarily what I as a user want to see when I log on.
As you’re analyzing what’s going on with those algorithms, how are you able to work with that?
We are interested in how polarization seems to be increasing, and in what we can do from the algorithm side to reduce that polarization. So, instead of having algorithms focused just on user engagement, what if we find some other metrics, such as how extreme people’s policy attitudes are: How do people feel about policies? How do people feel about political candidates from their party and from the other party? And then use those as metrics, instead of just user engagement.
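To make that idea concrete, here is a minimal sketch of what using polarization as a metric alongside engagement could look like inside a feed ranker. Everything in it, the `Post` fields, the scores, and the weights, is a hypothetical illustration, not Springsteen’s actual model.

```python
# A minimal sketch, assuming hypothetical per-post predicted scores:
# re-rank a candidate feed by a weighted blend of predicted engagement
# and predicted polarization. Weights and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float    # predicted engagement, e.g., in [0, 1]
    polarization_score: float  # predicted polarizing effect, e.g., in [0, 1]

def blended_score(post: Post, w_engage: float = 0.7, w_polar: float = 0.3) -> float:
    """Higher is better: reward engagement, penalize polarization."""
    return w_engage * post.engagement_score - w_polar * post.polarization_score

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order a user's candidate feed by the blended objective."""
    return sorted(candidates, key=blended_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("viral_but_divisive", engagement_score=0.9, polarization_score=0.8),
        Post("engaging_and_calm", engagement_score=0.7, polarization_score=0.1),
    ])
    # Under these weights, the calmer post ranks first (0.46 vs. 0.39).
    print([p.post_id for p in feed])
```

An engagement-only ranker would put the divisive post first; adding even a modest polarization penalty flips the order while still favoring posts people want to see.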
This seems like really rewarding work. But it’s not easy, and not the most obvious pathway.
And especially not easy in terms of data restrictions. As social media companies become more influential and keep growing, the lack of data access for researchers like us is a complication. But that, I think, is where my math and computer science background comes in: we can come up with ways to work around that, running simulations and doing user surveys to get our own data, instead of relying on access to social media data, which is decreasing.
Even with the restrictions, are you able to get a clear enough picture to see what is likely happening on social media?
There’s no way to know for sure, right? Because that data is restricted, we can’t compare and say how close our model actually got. But that’s why we make informed modeling choices. We talk to other researchers. We pull in the political scientists and ask: What are you seeing? How can you inform our modeling choices?
What are longer-term goals for you?
Once we learn whether there are parts of social media posts that create extreme or polarized responses, how can we balance that with user engagement metrics?
There’s a whole body of literature about what makes people stay on social media, what kinds of posts keep people engaged. If we can add to that, asking what kinds of posts make people more polarized and, more specifically, what kinds make people less polarized, then how can we balance that user engagement with polarization?
So, for us, it’s not enough to say we think this is what will make people less polarized, because the social media companies will hear that and say, ‘Great, but we care about user engagement; we’re not interested in that.’ So, the next step is to try to balance those things. Can we find posts, or sets of posts, that keep user engagement high, that people want to see and interact with, but that are maybe less extreme or carry less misinformation?
And hopefully then, social media companies will be a little more receptive and say, OK, maybe this is something we can implement because our user engagement metrics, our profit levels, aren’t impacted as much.
If you find those sets of posts that maintain high engagement with low polarization, how would that get incorporated or put into practice by social media companies?
I think there would be multiple ways to implement some of these recommendations. We are focused on the recommendation systems, those algorithms. If we see that posts with X and Y qualities elicit less polarized responses, how can we build that into a filtering strategy within the recommendation system? We can say, all right, let’s focus on X and Y, and if Y happens to be a quality that also increases user engagement, that’s the kind of thing we want to see. Then, in those recommendation systems, we can prioritize those features and show people posts with the X and Y qualities.
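As one hypothetical way to picture that filtering strategy, the sketch below boosts posts carrying features associated with less polarized responses before ranking the feed. The feature names (“X”, “Y”), the boost factor, and the post data are all assumptions for illustration, not a real platform’s pipeline.

```python
# A hypothetical filtering/boosting pass, not an actual platform API.
# Posts whose features were associated with less polarized responses
# get their ranking score boosted before the feed is sorted.

def filter_and_boost(posts, calm_features=("X", "Y"), boost=1.2):
    """Return (post_id, adjusted_score) pairs, sorted best-first."""
    ranked = []
    for post in posts:
        score = post["engagement_score"]
        if any(f in post["features"] for f in calm_features):
            score *= boost  # promote posts with the desirable qualities
        ranked.append((post["id"], score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

example_posts = [
    {"id": "p1", "engagement_score": 0.80, "features": {"X"}},
    {"id": "p2", "engagement_score": 0.85, "features": set()},
]
# Once boosted, p1 (0.96) edges out p2 (0.85) despite lower raw engagement.
print(filter_and_boost(example_posts))
```

The appeal of a pass like this, under these assumptions, is that it slots into an existing ranking pipeline: engagement still drives the score, and the low-polarization features only nudge the ordering.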
So you find these patterns that achieve the engagement, but not the scary polarizing parts, and then say to social media companies: ‘Look, you can promote these things that not only don’t hurt your bottom line, but also don’t hurt people’s Thanksgiving dinners, or democracy.’
Yeah, that’s the idea. That’s the hope, right?
Visit the “Engineering the Future” website to watch more episodes.