
Human-Computer Interaction and Real World Impact: An Interview with Niloufar Salehi

By Amanda Glazer

August 19, 2021

This article is part of STEMinism in the Spotlight, a monthly interview series.

Niloufar Salehi is an assistant professor at the School of Information with an affiliated appointment in Electrical Engineering and Computer Sciences at UC Berkeley. She earned her BSc in computer engineering from Sharif University of Technology in Iran and her PhD in computer science from Stanford University. Her impactful research focuses on human-computer interaction, and she works closely with existing communities to understand the effect of algorithms in practice. In this interview, Niloufar shares her thoughts on human understanding of algorithms, how people are harmed in social systems, and her career path.

Amanda Glazer (AG): How did you first get interested in STEM and computer science?

Niloufar Salehi (NS): It was around when I was 15 years old that I got really, really interested in math. I’d always been really interested and good at math and really liked solving problems, but it was around when I was 15 that I started learning about computer science. I just really enjoyed spending time solving problems.

When I was first starting to learn programming, I spent a lot of time with my friends, or even by myself, searching for things on the internet to learn from and staying up at night trying to solve problems. I did some competitive programming in high school. Then I did my undergrad in computer engineering in Iran. I was always motivated by computer science, but it was towards the end of my undergrad that I started learning that I have a lot of interdisciplinary interests and questions.

AG: Did you know going into your PhD that you wanted to merge these interests in computers and problem solving?

NS: Exactly. In undergrad, I was really interested in research. I started off doing some theoretical computer science research: graph coloring problems. But I was always looking for something with more of a human element. The first research that I got involved with was in computational neuroscience. There was an interdisciplinary team led by a physician who was also a researcher. He was really interested in brain networks. I did some coding with that group, and I wrote a neural net from scratch. I learned a lot. After that I did some work with one of the professors in my department, who was a computer scientist, on EEG brain networks and doing network analysis. From there I started learning about human-computer interaction. That seemed to just click for me when I found that field because it had a mix of all of my interests.

AG: Why did you decide to pursue a PhD? Did you ever consider industry?

NS: Not really. I may be an outlier, but the day I started my undergrad I knew I wanted to get a PhD. It could also have to do with the fact that my dad was a professor and I always liked what he did and I felt like I wanted to do that. The only question was, “What field do I choose?” Once I started with computer science, programming, and problem solving, I knew that this was what I wanted to do.

AG: Did you know you wanted to come to the United States for a PhD?

NS: I’m a planner. It’s hard, because when I talk to students, I don’t want them to think that they have to be like that. 90 percent of people don’t function that way, and that’s totally fine; you don’t have to know what you want to end up doing. But I sort of always liked to have a plan.

Why the [United States]? Mostly because of the research I saw at conferences; a lot of it was from professors here. I started doing some research in undergrad with a professor at Cornell. I mostly looked for professors whose work I really liked and applied to their schools.

AG: How did you end up in the School of Information and could you tell us a bit about it?

NS: I got really, really lucky that there was an opening at the School of Information in my last year of grad school, which I saw on Twitter. I saw the job description and it seemed to really match my area of human-computer interaction. I really love the I-School. I love everything that’s going on in there. It’s a very interdisciplinary school. Maybe half of the faculty are more on the technical side, with a lot of computer scientists, and in the other half we have Jenna Burrell and Coye Cheshire, who are sociologists, [and] Deirdre Mulligan, who’s a law scholar. We have a very interdisciplinary group of faculty. Just looking at the course catalog makes me want to take all those courses. The students learn from such a variety of perspectives on these questions about technology, society, and law; the students are just brilliant to work with.

AG: What’s your favorite part of your job?

NS: Working with students, definitely. I don’t think this job would be nearly as fun without daily interactions with really bright, really motivated students. They just make your day.

AG: What advice would you give to students?

NS: I would suggest trying a lot of things, especially earlier on, to really understand what motivates you. Sometimes I pause and think about my research, and I think that if this wasn’t a job and I didn’t need to work on it, I’d probably still be working on the same things. That’s really motivating. Other than that, one piece of practical advice is to have a mix of low-risk, low-reward and high-risk, high-reward projects, so that you don’t get tired or demotivated and you have things you can switch between.

Another thing is that I’m seeing students really struggle during the pandemic. I am as well. I think it’s good to know that it’s normal to be struggling right now, and hopefully we’ll get through it, but just know that none of us is alone in this struggle. It’s good to talk to other people about it and try to work through it, and to not blame yourself if things aren’t going as well as you’d want them to.

AG: Could you give us an overview of your research?

NS: My current research basically falls into two directions. One is that I’m really interested in studying online harms and how people are harmed in social systems. A lot of my research before I started my job at Berkeley was around building and designing social systems that people can use for collaboration or collective action. What brought me to this research was all the ways people were harmed on these kinds of systems, so that’s been a big research agenda for me: studying things like harassment, abuse, and harms through targeted ads, and trying to understand what to do about them. I build a lot on restorative and transformative justice, which are theories of justice that try to understand who was harmed, what their needs are, and whose obligation it is to meet those needs, as a way to repair or restore after harm has happened. I try to take that philosophy and practice into thinking about what you do when people are harmed on these systems and whose obligation it is. What is on the person who caused the harm? What is on the platform? What are the community’s obligations? How do you design for those kinds of things?

Then another big area that I’m really interested in is people’s interactions with algorithms, whether machine learning or other kinds of algorithms that people interact with on a daily basis. I’ve done some work on how YouTube content creators think about what the recommendation algorithm is and what role it plays, and how they strategize their own activities based on how they conceptualize the algorithm’s role. I’ve worked on the school assignment algorithms that a lot of cities, like San Francisco, use to assign students to public schools, to understand how parents make sense of the algorithm and how they strategize around it. I’m also trying to understand why some of the theoretical values that the algorithm has don’t actually seem to pan out in practice. I’m a big fan of this, because my background is in computer science and I love proving these kinds of theoretical things. You can prove that the algorithm is strategy-proof, but when you look in practice you see a lot of people still trying to game it. I’m trying to understand how that leads to inequities and how we might design the system differently to reach human-level goals like predictability and distance from your house, and higher-level community goals like diversity, equity, and access to public education. So those are the two things that I’ve been spending a lot of time on these days.
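For readers curious about what a strategy-proof assignment mechanism looks like, here is a minimal sketch (in Python) of student-proposing deferred acceptance, the classic matching procedure from the school-choice literature that many district assignment systems build on; it is strategy-proof for students in theory, which is exactly the kind of guarantee Salehi notes can break down in practice. The student names, school priorities, and capacities below are purely hypothetical illustrations.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
# Hypothetical illustration; not the code of any real district's system.

def deferred_acceptance(student_prefs, school_priorities, capacities):
    """student_prefs: {student: [schools in preference order]}
       school_priorities: {school: [students in priority order]}
       capacities: {school: number of seats}"""
    # Precompute each school's priority rank for every student.
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # index of next school to propose to
    held = {s: [] for s in school_priorities}         # tentative acceptances per school
    free = list(student_prefs)                        # students not currently held anywhere

    while free:
        stu = free.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue                                  # exhausted their list; stays unassigned
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # Keep only the highest-priority students up to capacity; the rest are
        # rejected and will propose to their next choice in a later round.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())

    return {s: sorted(stus) for s, stus in held.items()}

# Toy example: three students, two schools, one seat at X and two at Y.
students = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X", "Y"]}
schools = {"X": ["b", "a", "c"], "Y": ["a", "c", "b"]}
print(deferred_acceptance(students, schools, {"X": 1, "Y": 2}))
# -> {'X': ['b'], 'Y': ['a', 'c']}
```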

AG: Amazing. For your first research direction, I’m really curious where you think responsibility lies. How much obligation do you think platforms like Facebook have?

NS: That’s a really good question and one that I’ve been thinking about a lot. I think a problem that happens is that we tend to conflate different kinds of harms. We also don’t have a clear idea of what to do when someone is harmed. A lot of our understanding of how to react when someone is harmed is based on the criminal justice model: a model where we basically define who caused the harm and punish them. And there’s a lot of work that shows that doesn’t actually lead to any good outcome. It usually doesn’t help the person who caused the harm understand what harm they caused and work to repair it or not do it again. It also doesn’t meet any of the needs of the person who was harmed.

People have different needs when they are harmed, but there’s research showing that those needs generally fall into categories: needing safety, needing the harm to stop continuing, needing acknowledgement or an apology that the harm has happened, or needing acknowledgement from other people. I’m figuring out, what are the needs that people have? Whose obligation is it to meet each need? For stopping the continuation of the harm, the platforms have a big obligation, because the harm oftentimes happens on their platforms and they have the means to stop it from happening again.

So, if you think about it in those terms, it leads to a different way of thinking about what the platform’s obligation [is]. A lot of the way that we currently think about it is in terms of content moderation: What is the content? Is it against the rules or not? But if you actually look at the needs that people have when they are harmed, very little of it has to do with whether the content stays on the website or not. A lot of it has to do with the actual ways that content harmed or threatened them, and how they can ensure their safety moving forward. I think if we flip the question, then we come up with different ways of thinking about the obligations of the platform, the community, and also the state, through policy.

AG: Do you feel like these platforms are doing enough? It feels like they aren’t doing anything.

NS: They aren't doing nearly enough. You are completely right. We also don’t seem to expect much. It’s really harmful that we get pulled into this content moderation debate. I think it’s the wrong debate. It leads to interventions like the Facebook Oversight Board, which has a very, very limited scope of what it’s allowed to do. Facebook removes some content, then people can contest that decision and have it reviewed by this board, and the oversight board can say, “Yeah, this should be allowed,” or, “No, this shouldn’t be allowed.” That is just such a narrow understanding of how people are harmed on the internet that it basically doesn’t do anything for people who are being stalked on these platforms or for victims of revenge porn. The material ways in which people are hurt have very little to do with this.

Mark Zuckerberg got called to Congress for a host of things ranging from election misinformation to harassment, abuse, and hate speech. For all of them he just kept saying we need more AI. That was ridiculous. I think he said AI 20 times in a single response. But that’s one of the reasons that I’m hoping research like my own and my colleagues’ will help shift the questions and push the companies to actually start listening to people and civil rights groups, to hire people who are experts in trauma response and restorative justice, and to start understanding the role they are playing in enabling this harm to happen and how they can work to repair it.

AG: It’s really tough too when there’s no effective regulation on these companies.

NS: Exactly. There’s almost no regulation, but another gap is that it’s not clear that we know what regulation we need. That’s another big open question.

AG: To what extent do you engage with policy makers in your research?

NS: One of the things about my research is that I really, really care about real-world impact. I want to be building systems and studying things that have an immediate impact. I don’t have a problem with people writing papers and having impact later on. But I’m just a very action-oriented person. For instance, we’re starting a collaboration with the San Francisco Unified School District, and they’re dealing with the school board, which sets policies for the district. In 2018 the board in San Francisco voted to stop using the current assignment algorithm and completely redesign it. How do you redesign it to meet the policy goals? We’re working in collaboration with the school district right now to help them figure out how to design the system, communicate how it works to parents, and involve parents in the design so you’re getting closer to those policy goals. These are very complex social and legal challenges in addition to being computer science problems, so I’m collaborating with Professor Catherine Albiston at the Law School and Professor Afshin Nikzad, an economist and mechanism design researcher at USC.

AG: What drew you to these research areas?

NS: I try to do a lot of my research in collaboration or cooperation with existing communities. A lot of my work in grad school was about designing systems. One of the groups that reached out to me was “Buy Twitter.” This was a group of users that came together in 2016, before the election, when Twitter was actually doing really poorly financially and looking to sell the company. They were trying to figure out a way to raise money, buy the company, and have it run by users. There were a ton of open questions about governance and design there. One of their members reached out to me and asked if this was something that I’d be interested in helping with. One of the questions that I immediately had was, “Why? If you want to buy it, I completely get that, but what would you change? What is the really pressing thing that you would want owning Twitter to change?” We had a series of workshops with the members, and the thing that kept coming up was harassment and abuse and actually doing something about it. So that defined a whole research agenda for me through working with them, and later on I started collaborating with one of my colleagues, Amy Hasinoff, who’s a communications professor. She was the one who taught me about restorative justice, and then we started working on what it could actually look like to bring this philosophy and practice into the design of social computing systems.

I guess the answer to your question in summary is that I try to engage a lot with existing communities. I try to source problems from them and then I do a lot of interdisciplinary work with people in other fields.

AG: I’ve noticed that some academics really feel an obligation to engage with the public in their work and others don’t. Do you feel that academics, especially at a public institution, should have an obligation to connect their research to the public?

NS: I advocate that there’s value in both. I’m a big fan of doing both. I can see how I’m definitely on the side of feeling an obligation towards the public, especially in California, and working on problems that people are facing. I also think it’s really important to make research legible to the public and give back in that way. But I also see a lot of value in basic research that people might not immediately see the value of, but if it’s creating knowledge then that in itself is a value that I think a public university should strive for as well.

AG: What do people misunderstand most about AI?

NS: That’s a really good question. Part of my research is asking how people understand what an algorithm is and does, and how they strategize or work with it to achieve their goals. I’m dealing with that question a lot in my research. How do you communicate to a user, for instance, what a machine translation algorithm actually does? Because that will change how people rely on what the AI says, which matters because if you are able to rely on AI at the right times and question it at the right times, it’s more useful.

I think that right now AI is used as a catch-all for a bunch of different things, and it’s starting to lose meaning. There’s this really good talk by Professor Arvind Narayanan at Princeton called “AI Snake Oil.” He talks about promises of things that AI can do that are sold as snake oil. For example: if you give this AI a video of someone’s job interview, the AI can tell whether they will be a good programmer or not. We know that’s not possible. There needs to be training data, there needs to be ground truth. With something like a video interview telling whether or not you are going to be a good programmer, there’s no way that there’s ground truth for that or that anyone can even start to label it. So there’s no way that an AI will be able to do that. But it’s sold as this promise, and it’s actually being used by companies. It becomes a self-fulfilling prophecy, right? There’s a company called HireVue that actually sells that technology of automated interview analysis. It’s really dystopian.

Going back to your question of the role of a public university, this is one of those things where we need people to understand what an AI can and cannot do. What an AI can do is look for patterns in large amounts of data and start identifying when those patterns come up. We need people to understand that if they are going to be able to notice when the promise of the AI is snake oil.

AG: Do you think we’ve made progress in terms of more people understanding what AI can or cannot do?

NS: In 2015 there was a study by one of my friends, Motahhare Eslami, who’s a professor at [Carnegie Mellon] now, where she asked people if they knew that their Facebook feed was curated by an algorithm, and 62 percent of people did not know that. Even in the span of six years, it’s changed a lot. I hear the word algorithm all the time. There are journalists whose beats are now algorithms. That was unimaginable 10 years ago. There’s more public conversation about it, and there’s more conversation about the harms. Though I think that in terms of understanding the harms, we have sensationalized them a bit. We focus too much on the imaginary AI of the future and too little on the real algorithms that are in use right now. The school assignment algorithm has been used for over a decade, and it has decided where generations of students in New York and San Francisco have gone to school. Instead of worrying about algorithms that don’t even exist yet, we need to look at what’s happening right now. I think that’s a more productive way to engage with these harms.


Amanda Glazer is a graduate student in statistics.

Photo courtesy of Niloufar Salehi
