Former Berkeley Lab Director Paul Alivisatos recently interviewed Joan Williams, a UC Hastings professor of law and an internationally recognized expert on gender issues in the workplace.
Paul Alivisatos: Thank you for sitting down to talk with me. When I heard you speak recently, I wanted to invite you to share some of your insights with the Berkeley Lab community. Some people here are certainly very familiar with the idea of implicit bias, but most people really haven’t heard much about it from an expert. How do you explain the concept?
Joan Williams: Let’s start with an example that everybody at Berkeley Lab can relate to—the image that most people hold of what a scientist is. What typically comes to mind is a sort of tall white guy. Women just don’t fit as well as men into what is called the schema of a scientist. It’s from those schemas that implicit biases arise: if somebody doesn’t quite fit the image of a brilliant scientist, studies show that they are going to have to work harder and give more evidence to be seen as equally competent.
PA: I am just thinking about my own experiences—I really may sometimes also have that kind of preconception. It’s easy to be unaware until it is pointed out to you.
JW: That’s right, bias happens to good people.
PA: Once I was at an event being interviewed with a group of scientists on a panel, and I mentioned what I knew about implicit bias. A scientist came up to me after the event and said he could understand that implicit bias comes into play when people look for housing or for jobs in non-academic fields, but that scientists are very, very quantitative people—they know how to be objective, look carefully at the numbers, and submerge their biases. So is it really true that scientists show implicit bias?
JW: Well there is actually a study, the Paradox of Meritocracy, by two social scientists, Emilio Castilla and Stephen Benard, in which they compared organizations that had a strong self-description of being a meritocracy with other organizations. They found that organizations that see themselves as intensely meritocratic or data-driven are actually more likely to show bias than organizations that don’t hold a strong sense of meritocracy.
You can’t simply make bias go away. The only thing you can hope to do is to interrupt its expression, either by being conscious of it or by redesigning your organizational processes so that they interrupt the expression of bias at work.
PA: So in meritocracies, we as individuals are inclined to be confident that we are doing it right already, when in fact we are not.
JW: Sad but true! There are only two choices—one is to interrupt the bias, and the other is to let the bias have free rein. So just to give you one example, if your average white woman is walking on a dimly lit street at night and sees a large black man walking toward her, she is likely to experience anxiety. That reflects the automatic associations we have based on race.
So you only have two choices if those are your automatic associations. One is to cognitively correct and say I know that is my first instinct, but that is very biased, and I am going to override it. Or the other is just to go with it.
It’s the same with women in science. The automatic association of the brilliant scientist as being a man is very, very pervasive. The Implicit Association Test shows us that about 75 percent of Americans have a stronger association of men with science than they do of women with science. If that is true, most people are going to carry that into a science environment. Either they are going to be conscious of it and cognitively override it, or it is going to shape their everyday behavior.
PA: Can you tell me more about your research in this area?
JW: Many studies document gender and race bias under laboratory conditions, typically in college social psychology labs. Sometimes people raise the question of whether the findings of these lab studies hold for actual workplaces.
So I did a study where we simply recited the findings of social psychology lab studies about gender bias and asked highly successful professional women whether they had encountered any of the long-documented patterns of gender bias. Ninety-six percent gave me specific examples of one or more of the four basic patterns of bias from their own experience. I detail their experiences in the book What Works for Women at Work: Four Patterns Working Women Need to Know—which also discusses strategies women can use to navigate successfully through a workplace shaped by subtle bias. I’ve also developed a game called Gender Bias Bingo.
PA: Gender Bias Bingo? Okay, well, that sounds entertaining.
JW: When I introduced it at an NSF conference, I got 400 examples of bias in science within the first two days the game was up. Here is something that came in through the Bingo: a woman scientist said she had ordered new lab equipment in the same way she had seen male colleagues do it. And she got chewed out for the way she did it. She said: “Wait a minute, I just saw five guys do it that way!”
PA: That is really unfair. Why weren’t any of the men called out and yet she was?
JW: It turns out this is an example of what is called in-group favoritism, where objective rules tend to be applied rigorously to out-groups (in this case women) and leniently to in-groups (in this case men). It is an absolute textbook example.
PA: Is there another example of a type of bias that we should think about?
JW: Yes. Here’s another example, from a recent study of performance evaluations in tech. A far higher percentage of women (87.8%) than men (58.9%) received negative feedback. And women received different kinds of negative feedback: 71 out of 94 critical reviews of women contained comments about negative personality traits. Many women were faulted for being abrasive, strident, aggressive—reflecting prescriptive stereotypes that women should be nice; others were told they were bossy or should “step back and give others a chance to shine”—reflecting prescriptive stereotypes that women should be modest and self-effacing. Only two out of 83 critical reviews of men faulted them for negative personality traits.
PA: So say something like that might be going on at our Laboratory. What could we do to intervene?
JW: It’s not as hard as you might think. You can address this on a structural level, or an individual level. At a structural level, someone who has been trained to spot prescriptive stereotypes can read over performance evaluations. At an individual level, if you find yourself in a meeting where a woman is being called abrasive, you can just say mildly: “I wonder if we would be saying this if she were a man.”
PA: I shared a Yale study in my State of the Lab talk last year (“Science faculty’s subtle gender biases favor male students”)—can you talk about that?
JW: Yes, that was a double-blind, randomized study that documented gender bias in science. It was an extremely important study, and at the same time it was an absolutely ordinary, no-news study. It simply replicated the kinds of patterns that have been documented over and over again for the past 40 years. It was a matched resume study, where you give people identical resumes, one with a man’s name, John, and one with a woman’s name, Jane.
The study found what I call the “prove it again” pattern. Female as well as male scientists were more likely to hire the man, even though the woman had an identical resume. Scientists also were more likely to offer the man a higher salary, and to suggest he was a good prospect for mentoring.
Another study about race found that an African-American candidate with an identical resume had to have eight additional years of experience in order to get the same number of callbacks from prospective employers as a white person with the same resume.
All this has been extensively documented for years. Not to interrupt it—I’m sorry, but that’s malpractice.
PA: Well, it will be interesting to see how people do and don’t notice this when we put it out. My goal is always for Berkeley Lab to become better. Can you tell me more about why Berkeley Lab is a place where you would potentially like to do some research, and what you think about our prospects for improvement?
JW: Berkeley Lab is not alone, but I think science and tech have among the worst gender and other diversity problems. But I also think they have the tools for the solution, because they are very evidence-based and metrics-driven. My research documents how bias plays out in everyday workplace interactions. I have also developed a bias training with that same element: it is very concrete about how bias plays out, and about how individual managers can interrupt it. We are also developing an inventory of steps organizations can take to redesign their operations and processes to interrupt common patterns of bias.
PA: Well you know, in some sense that brings us full circle. We started by saying that an organization that sees itself as highly meritocratic and that has an evidence-based culture is actually more susceptible to implicit bias—a built-in weakness. But now we are also saying that this weakness can become a strength, because our respect for evidence can become the basis for successfully overcoming our biases.
JW: Exactly right. I think that is very astute, that the weakness is the self-perception of meritocracy, which can lead to complacency. But if you have an organization that is willing to overcome that, and say that we are going to address this problem the same way we do our science, namely by doing evidence-based interventions and then measuring whether they are effective—that is a recipe for real progress.
PA: Excellent. Well thank you so much for a very good conversation. I am looking forward to sharing this with the Berkeley Lab Community.