From everyday apps to complex algorithms, technology has the potential to hide, speed up, and deepen discrimination, while appearing neutral and even benevolent when compared to racist practices of a previous era.
In this talk, Ruha explores a range of discriminatory designs that encode inequity: by explicitly amplifying racial hierarchies, by ignoring but thereby replicating social divisions, or by aiming to fix racial bias but ultimately doing quite the opposite. Ruha also helps attendees consider how race itself is a kind of tool designed to stratify and sanctify social injustice and discuss how technology is and can be used toward liberatory ends.
This presentation delves into the world of biased bots, altruistic algorithms, and their many entanglements, and provides conceptual tools to decode tech promises with sociologically-informed skepticism. In doing so, it challenges us to question not only the technologies we are sold, but also the ones we manufacture ourselves.
JEFFRIANNE: Welcome to NCWIT Conversations for Change, an online thought leadership series.
My name is JeffriAnne Wilder and I’m a senior research scientist here at NCWIT. It is my pleasure to welcome you to this series, which features speakers with a diverse range of opinions and, hopefully, provocative ideas and worldviews.
We wouldn’t be here today without the support of our sponsors and I want to thank the viewing audience in advance for your patience ‑‑ should we experience any bandwidth or other technical issues.
Today’s conversation will consist of a 40-minute talk with Dr. Benjamin, and then we will open up the conversation to Q&A from the audience. As with our past sessions, you don’t need to wait to send questions. Please feel free to send them any time using the Q&A feature and we will do our very best to answer as many as we possibly can. We will get to as many questions as possible in the live Q&A, but based on the lively discussion from our previous conversations, we will be staying 15 minutes extra on the call to get to as many comments and questions as possible. And with that, I would like to introduce our speaker of the hour.
Dr. Ruha Benjamin is a Professor of African‑American studies at Princeton University and author of People’s Science: Bodies and Rights on the Stem Cell Frontier. She has studied the social dimensions of science, technology, and medicine for over 15 years and speaks widely on issues of innovation, equity, health, and justice in the U.S. and globally.
Her latest book, Race After Technology: Abolitionist Tools for the New Jim Code, examines the relationship between machine bias and systemic racism, analyzing specific cases of discriminatory design and offering tools for a socially conscious approach to tech development. Please join me in welcoming Dr. Ruha Benjamin.
DR. BENJAMIN: Good evening, everyone. Good evening.
It’s so wonderful to be in conversation with you this evening, and thanks to JeffriAnne for that lovely introduction and all of the conference organizers for all of the work behind the scenes to get us in one virtual space.
Before I begin, I just want to acknowledge that we are all experiencing this pandemic, this moment in history, with varying levels of uncertainty, stress, and vulnerability. Some of us have already lost friends and family members. Many have lost or are at risk of losing jobs, homes, and other things we need to sustain us.
One thing we have all lost is a sense of normalcy. But perhaps a return to normalcy isn’t what we should be after. Because one of the things that is coming to light is how the global spread of a microscopic virus is placing the ravages of racism and inequity under the microscope. I was recently reading an article by one of my favorite writers and thinkers, Arundhati Roy. It’s titled “The Pandemic Is a Portal.” And she writes, “Historically, pandemics have forced humans to break with the past and imagine their world anew. This one is no different. It is a portal, a gateway between one world and the next. We can choose to walk through it, dragging the carcasses of our prejudice and hatred, our avarice, our data banks and dead ideas with us. Or we can walk through lightly with little luggage, ready to imagine another world and ready to fight for it.” This image in particular, dragging the carcasses of our prejudice and avarice, our data banks and dead ideas, really resonates with me.
I see us moving through this portal as individuals, communities, and institutions and at all levels we have this opportunity to either drag outmoded ways of thinking and doing things with us. Or we can begin to imagine and craft a world that is more habitable, more just, and joyful. To do this, though, I think we have to reckon honestly with what we actually have been holding on to so that we can let it go. Because if we’re not careful, what will likely happen is that many dead ideas will be repackaged as new and innovative tech solutions for the problems we face.
But we can avoid this. First, by recognizing that there are two popular stories we currently tell about technology. That tech is either going to slay us or save us. Take all of the jobs or make everything more efficient. Hollywood loves the dystopian version because it sells tickets. Silicon Valley loves the utopian version because it sells gadgets. Both are flawed. And, while they may seem like the opposites on the surface, they actually share an underlying logic. That technology is in the driver’s seat, harming or helping us.
But the humans behind the screen are too often missing from view. Human values, assumptions, desires, world views that shape technology. Though at the moment only a narrow slice of humanity is doing the shaping.
So what I would like us to consider is what it means for more people to participate in imagining and building the digital and physical world we all inhabit.
And what is the responsibility of those in leadership positions to broaden participation? Consider a recent experience I had being a nosy sociologist walking by two men in Newark International Airport, when I overheard one say to the other, “I just want someone I can push around,” dot, dot, dot. I didn’t stick around to see how the sentence ended. But I could imagine all types of endings.
It could be in the context of looking through résumés, deciding who to hire. I just want someone to push around at work. Or, in the context of dating or marriage. I just want someone I can push around in my personal life. The desire to exercise power over others is a dominant mode of power that’s been given new license to assert itself. A kind of power that requires others to be subordinate.
At the time, I was traveling to speak with students at Harvey Mudd College about issues of technology and power. And so when I overheard that conversation, I couldn’t help but think of this advertisement from a 1957 Mechanix Illustrated magazine: “The robots are coming. And when they do, you will command a host of push button servants.” And then it says, “In 1863, Abe Lincoln freed the slaves, but by 1965, slavery will be back. We’ll all have personal slaves again. Don’t be alarmed. We mean robot slaves.”
There is so much going on in this one little page that we could talk about it for an hour, but for the sake of time, I’m just going to point out two things. One is to take note of the date, 1957, a time when those who were pushed around in the domestic sphere, wives and domestic servants, could no longer be counted on in the same way to, quote, dress you and comb your hair and serve you meals in a jiffy. Hence the desire to replace free and cheap labor in the home with push button servants. The fact is no technology is pre‑ordained; each grows out of a broader context that makes some inventions appear inevitable and desirable. Perhaps even more telling is the promise that we will all have personal slaves “again.” That one little word tells us something about the targeted audience of the ad. Certainly not those who are the descendants of people who were enslaved the first time. The imagined user is gendered, raced, and classed without gender, race, or class ever being mentioned. Code words in this sense encode the interlocking systems of inequality as part of the design process. Precisely by ignoring social realities, tech designers will almost certainly reproduce them. True in 1957, true today. So, with that I want to offer three takeaways that will thread through the rest of my comments.
The first is that racism is productive. Not in the sense of being good, but in the literal capacity of racism to produce things of value to some, even as it wreaks havoc on others. Many of us are still taught to think of racism as an aberration, a glitch, an accident, an isolated incident, a bad apple, in the backwoods, outdated; rather than as innovative, systemic, diffuse, attached to many incidents, the entire orchard, in the ivory tower, in the tech industry, forward-looking, productive. In my field of sociology we like to say race is socially constructed, but we often fail to state the corollary: that racism constructs.
Secondly, I would like us to think about the way that race and technology shape one another because more and more people are accustomed to thinking and talking about the social and ethical impacts of technology, but that’s only half of the story. Social norms, values, structures all exist prior to any given tech development. So it’s not simply about the impact of technology that we need to be concerned about, but the social inputs that make some inventions appear inevitable and desirable.
Which leads to a third provocation: that imagination is a contested field of action, not an ephemeral afterthought that we have the luxury to dismiss or romanticize, but a resource, a battleground, an input and output of technology and social order. In fact, we should acknowledge that many people are forced to live inside someone else’s imagination. One of the things we have to come to grips with is how the nightmares that many people are forced to endure are the underside of elite fantasies about efficiency, profit, security, and social control.
Racism among other axes of domination, including sexism, ableism, classism, helps to produce this fragmented imagination where we have misery for some, monopoly for others. This means that for those of us who want to construct a different social reality, one that’s grounded in justice and joy, we can’t only critique the underside, who’s harmed or left out by the current structures, but also wrestle with the deep investments, the desire even that many people have for social domination.
I just want someone I can push around. So that’s the trailer of the talk.
Now let’s turn to some specifics, starting with a relatively new app called Citizen. This app will send you real-time crime alerts based on a curated selection of 911 calls. It also offers a way for users to report, live stream, and comment on suspected crimes via the app. And it shows you incidents as red dots on a map so you can avoid supposedly dangerous neighborhoods. Now some of you are already thinking: what could possibly go wrong in the age of Barbecue Beckys who call the police on Black people cooking, walking, breathing out of place? It turns out that even a Stanford-educated environmental scientist living in the Bay Area can be an ambassador of the carceral state, calling the police on a cookout at Lake Merritt. It’s worth noting that the app, Citizen, was originally called the less chill name, Vigilante. And in its rebranding, it also moved away from encouraging people to stop crime and now simply encourages them to avoid it. What’s most important to our discussion, I think, is that Citizen and other tech fixes for social problems are not simply about technology’s impact but also about how social norms and values — racial norms, indeed — shape what tools are imagined necessary in the first place. So, how should we begin to understand this duplicity of tech fixes, purported solutions that can nevertheless reinforce and even deepen existing hierarchies?
One formulation that is hard to miss is the idea of racist robots. A first wave of stories a few years ago seemed shocked at the prospect that, in Langdon Winner’s terms, artifacts have politics. A second wave seemed less surprised.
Well, of course, technology inherits the creator’s biases and now I think we have entered a phase of attempts to override or address the default settings of racist and sexist robots, for better or worse, and here “robots” is a kind of heuristic for thinking about automation more broadly.
One of the challenges we face is how to meaningfully differentiate technologies that are used to differentiate us. This combination of coded bias and imagined objectivity is what I’ve termed, the “New Jim Code,” innovation that enables social containment while appearing fairer than discriminatory practices of a previous era. This riff off of Michelle Alexander’s analysis in her book, “The New Jim Crow,” considers how the reproduction of racist forms of social control in successive institutional forms, entails a crucial socio-technical component that not only hides the nature of domination, but allows it to penetrate every area of social life under the guise of progress.
This formulation as I highlight here is directly related to a number of other cousin concepts, we might call them, by my colleagues Browne, Buolamwini, Broussard, Daniels, Eubanks, O’Neil, and others.
A quick example, hot off the presses of the New Jim Code: racial bias in a medical algorithm favors White patients over sicker Black patients, as reported in a new study by Obermeyer and colleagues, in which the researchers were actually able to look inside the black box of algorithm design, which is typically not possible with proprietary systems. What’s especially important to note is that this algorithm does not explicitly take note of race. That is to say, it is race neutral. By using costs to predict health care need, this digital triaging system unwittingly reproduces racial disparities in health care because, on average, Black people incur fewer costs for a variety of reasons, including systemic racism.
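The proxy problem described here can be made concrete with a small simulation. This is a toy sketch with entirely invented numbers, not the actual algorithm from the study: two groups have identical distributions of health need, but one group incurs lower costs at the same level of need (for example, because of barriers to accessing care), so a seemingly race-neutral ranking by past cost systematically under-enrolls that group.

```python
# Toy illustration (all numbers invented) of the proxy problem in the
# Obermeyer et al. study: ranking patients by past health care cost
# instead of underlying need under-selects a group that, for structural
# reasons, incurs lower costs at the same level of need.

def enroll_top(patients, key, k):
    """Enroll the k patients who score highest on the given attribute."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    return ranked[:k]

# Equal need in both groups, but group B's costs run lower at the same
# need level (e.g., barriers to accessing care). Hypothetical data.
patients = (
    [{"group": "A", "need": n, "cost": n * 1.0} for n in range(1, 11)]
    + [{"group": "B", "need": n, "cost": n * 0.5} for n in range(1, 11)]
)

by_cost = enroll_top(patients, "cost", 10)  # the "race-neutral" proxy
by_need = enroll_top(patients, "need", 10)  # what the program intends

print(sum(p["group"] == "B" for p in by_cost))  # group B slots under the cost proxy
print(sum(p["group"] == "B" for p in by_need))  # group B slots under true need
```

With these made-up numbers, ranking by cost fills only 3 of the 10 program slots with group B patients, versus 5 when ranking by true need; the gap widens as the cost disparity grows, even though race never appears as an input.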
In my review of the study by Obermeyer and colleagues, both of which you can download from the journal Science or from the research tab of my website, I argue that indifference to social reality on the part of tech designers and adopters can be even more harmful than malicious intent. In the case of this widely used health care algorithm, which affects millions of people, more than double the number of Black patients would have been enrolled in programs designed to help them stay out of the hospital if the predictions had actually been based on need rather than cost. That is to say, race neutrality can be a deadly force. In fact, this issue has come up in the last few weeks in the context of the pandemic because, according to current protocols in many hospitals, those who are healthier and fitter are given higher priority when it comes to receiving scarce resources like ventilators.
But who tends to be healthier and fitter in the first place? And more importantly, why? To speak only of pre‑existing biological conditions in the context of who is at risk of dying from this virus is to normalize and naturalize death. So, we have to be very clear to also name the pre‑existing social conditions in housing, employment, and health care that have impacted communities well before the current crisis. This virus is not simply biological, but bio-political. It may not set out to discriminate, but the structures in which it circulates certainly do. Including the technical systems that many people think are neutral. As Dr. Hannah McLane explains, “If we strictly adhere to the ‘Save the Most Lives’ principle, we will be treating more White people, more men, more healthy people. Black people are dying in record numbers from COVID‑19, so this ethical oversight may already be playing out,” she says. “We can’t say we aren’t discriminating based on race or ability to pay while algorithmically prioritizing the most likely to survive.”
So zooming back out again to the wider literature and context of the New Jim Code, I situate this idea within a hybrid literature that I think of as Race Critical Code Studies. Again, this approach is not simply concerned with the impacts of technology, but its production, and particularly how race and racism enter the process. As we think about how anti-Blackness, in particular, gets encoded in and exercised through automated systems, I consider four conceptual offspring of the New Jim Code that follow along a kind of spectrum. From engineered inequity, which names those technologies that explicitly seek to amplify social cleavages; these are what we might think of as the most obvious, less hidden dimension of the New Jim Code. But even they typically come wrapped in the packaging of progress. Though, the idea is usually that for some people to move forward, others must be contained.
Default discrimination names those inventions that tend to ignore social cleavages and, as such, tend to reproduce the default settings of race, class, and gender, among other axes of difference. Coded exposure examines the way that some technologies fail to see racial differences while others render racialized people hyper-visible and exposed to systems of surveillance. And then finally, techno benevolence names those tech developments that claim to address bias of various sorts but may still manage to reproduce or deepen discrimination, in part because of the narrow way in which fairness is defined and operationalized.
So for the sake of time, I’m just going to sketch the last two dimensions, though these are the structuring concepts for my book, Race After Technology, in which I do a deep dive into all four. As I said, coded exposure names this tension between ongoing surveillance of racialized communities and calls for digital recognition and inclusion; the desire to literally be seen by technology. What I would like to underscore is that it’s not only in the process of being out of sight, as we see through this example on the screen, but also in the danger of being too centered that racialized groups are made vulnerable. So that being seen is not simply positive recognition, but can be a form of unwanted surveillance; but not without creative resistance, as I will come back to in just a moment. But first I’m going to share a brief clip that really illustrates one side of this dialectic of coded exposure.
This clip depicts how a superficial corporate diversity ethos, the prioritization of efficiency over equity, and the default Whiteness of tech development all work together to ensure that innovation literally produces containment. The fact that Black employees are unable to use the elevators, doors, water fountains or turn the lights on is treated as a minor inconvenience in service to a greater good. But good for whom is what we have to ask.
Finally, some of the most interesting developments, I think, are those we can consider techno benevolent, which aim to address bias of various sorts. Take, for example, new AI techniques for vetting job applicants. A company called Hirevue aims to, quote, “reduce unconscious bias and promote diversity in the workplace” by using an AI-powered program that analyzes recorded interviews of prospective employees. It uses thousands of data points, including verbal and non‑verbal cues like facial expressions, posture, and vocal tone, and compares job seekers’ scores with those of existing top-performing employees to decide who to flag as a desirable hire and who to reject. The sheer size of many applicant pools and the amount of time and money that companies pour into recruitment is astronomical. So companies like Hirevue step into the mix and narrow the eligible pool at a fraction of the time and cost, and over 700 companies worldwide have signed on, everyone from Goldman Sachs to Vodafone. According to Hirevue, there is a lot that a human interviewer misses that AI can keep track of to make, quote, “data-driven talent decisions.” After all, the problem of employment discrimination is widespread and well documented. So, the logic goes, wouldn’t that be even more reason to outsource decisions to AI?
Well, a study by some of my colleagues in the Princeton Computer Science Department examined whether a popular algorithm trained on human writing online exhibited the same racially biased tendencies that psychologists have documented among humans. In particular, they found that the algorithm associated White-sounding names with pleasant words and Black-sounding names with unpleasant words. Which builds on a classic audit study from the early 2000s that sent out old school resumes and got the same pattern of feedback from employers, negative for Black sounding names and positive for White, all other qualifications being equal; so too with gender-coded words and names, as Amazon learned a few years ago when its own hiring algorithm was found discriminating against women. Nevertheless, it should be clear why technical fixes that claim to bypass human biases are so desirable. If only there was a way to slay centuries of racist and sexist demons with the social justice bot. Beyond desirable, more like magical. Magical for employers, perhaps, looking to streamline the grueling work of recruitment, but a curse for many job seekers. Whereas proponents describe a very human‑like interaction, those on the hunt for jobs recount a very different experience. Applicants are frustrated not only by the lack of human contact, but also because they have no idea how they are being evaluated and why they are repeatedly rejected.
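The kind of measurement behind that finding can be sketched with a hand-rolled association score. The two-dimensional "embeddings" below are invented purely for illustration (real word embeddings have hundreds of dimensions and are learned from web-scale text); the score mirrors the study's logic of comparing how close a name's vector sits to "pleasant" versus "unpleasant" attribute words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 2-D "embedding" for illustration only; the relative positions
# stand in for what a model trained on biased web text ends up learning.
vectors = {
    "emily":      (0.9, 0.1),
    "lakisha":    (0.1, 0.9),
    "pleasant":   (0.8, 0.2),
    "unpleasant": (0.2, 0.8),
}

def association(name):
    """Positive if the name sits closer to 'pleasant' than 'unpleasant'."""
    return (cosine(vectors[name], vectors["pleasant"])
            - cosine(vectors[name], vectors["unpleasant"]))

print(association("emily") > 0)    # True: skews toward "pleasant"
print(association("lakisha") > 0)  # False: skews toward "unpleasant"
```

Aggregated over many names and attribute words, a difference score of this kind is essentially what the study measured, and it is the same statistical pattern the résumé audit study found in human employers.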
One job seeker described questioning every small movement and micro expression and feeling a heightened sense of worthlessness because the company couldn’t even assign a person for a few minutes. As one headline puts it, “Your next interview could be with a racist bot,” bringing us back to the problem space we started with. Though it’s worth noting that some job seekers are already developing ways to subvert the system: trading answers to employers’ tests and creating fake applications of their own as an informal audit. In fact, one HR employee for a major company recommends slipping the words ‘Oxford’ or ‘Cambridge’ into your CV in invisible white text to pass the automated screening. In terms of a more collective response, a federation of European trade unions called UNI Global has developed a charter of digital rights for workers, touching on automated and AI-based decisions to be included in bargaining agreements. This is just one of many efforts thinking proactively about how to create more accountability and power for workers who are the subjects of these automated systems.
Another development that gives me some optimism, some hope in the midst of this kind of daily barrage of depressing headlines is that tech workers themselves have increasingly been speaking out against the most egregious forms of corporate collusion with state sanctioned racism. For example, thousands of Google employees have condemned the company’s collaboration on a Pentagon program that uses AI to make drone strikes more effective, and a growing number of Microsoft employees are opposed to the company’s ICE contract saying that, quote, “As the people who build the technologies that Microsoft profits from, we refuse to be complicit.” As this article published by Science For the People reminds us, “Contrary to popular narratives, organizing among technical workers has a vibrant history, including engineers and technicians in the ’60s and ’70s who fought professionalism, reformism, and individualism to contribute to radical labor organizing.”
The current tech workers’ movement, which includes students across our many institutions, can draw from past organizers experiences and learning to navigate the contradictions and complexities of organizing in tech today and which includes building solidarity across race and class. For example, when the predominantly East African Amazon workers in the company’s Minnesota warehouses organized a strike on Prime Day last year to demand better work conditions, engineers from Seattle came out to support them.
In terms of civil society, initiatives like Data for Black Lives and the Detroit Community Technology Project offer an even more far-reaching approach. The former brings together people working across a number of agencies and organizations in a proactive approach to tech justice, especially at the policy level. And the latter develops and uses technology rooted in community needs, offering support to grassroots networks doing data justice research, including hosting what they call “DiscoTechs,” short for “Discovering Technology”: multi‑media, local neighborhood workshop fairs that can be adapted in other locales.
Finally, when it comes to rethinking STEM education, and I see I already have a few questions about that, so I thought ahead: this is ground zero, really, for reimagining the relationship between technology and society. There are a number of initiatives underway that we can learn from and adapt. I will just mention one very concrete resource that you can all download for free, called the “Advancing Racial Literacy in Tech” handbook, which was developed by some of my colleagues at the Data & Society Research Institute in New York City. The aim of this intervention is three-fold: to develop an intellectual understanding of how structural racism operates in algorithms, social media platforms, and technologies not yet developed; an emotional intelligence concerning how to resolve racially stressful situations within organizations; and a commitment to take action to reduce harm to communities of color.
To that end, the late legal and critical race scholar Derrick A. Bell encouraged a radical assessment of reality through creative methods and racial reversals, insisting that to see things as they really are, you must imagine them for what they might be, which is one of many reasons I’m convinced that the arts and humanities have a vital role to play both in STEM education and in tech justice organizing. And so one of my favorite examples of what we might call a Bellian racial reversal is a parody project that begins by subverting the anti-Black logics embedded in new high‑tech approaches to crime prevention. Instead of using predictive policing techniques to forecast street crime, the White-Collar Early Warning System flips the script by creating a heat map that flags city blocks where financial crimes are likely to occur. The system not only brings the hidden but no less deadly crimes of capitalism into view, but it creates a map that allows users to avoid high risk areas and engage in citizen policing and awareness, which should sound familiar. Taking it one step further, the development team is working on a facial recognition program to flag individuals who are likely perpetrators. And the training set used to design the algorithm includes the profile photos of 7,000 corporate executives downloaded from LinkedIn. Not surprisingly, the averaged face of a criminal is White and male.
To be sure, creative exercises like this are only comical if we ignore that all of their features are drawn directly from actually existing proposals and practices in the real world, including the use of facial images to predict criminality. By deliberately and inventively upsetting the status quo in this manner, analysts can better understand and expose the many forms of discrimination embedded in and enabled by technology. So then, if, as I suggested at the start, the carceral imagination captures and contains, then the liberatory imagination opens up possibilities and pathways. It creates new templates and builds on critical intellectual traditions that have continually developed strategies and insights grounded in justice.
So here is my final proposition: if it is the case that inequity and injustice are woven into the very fabric of our society, then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problem will be its undoing once we accept that we are pattern makers. And if this pandemic is indeed a portal, my hope is that we use this time to imagine and craft a world in which we can all thrive.
So with that, I thank you so much for your attention. I’m going to stop sharing my screen now and we are going to have a conversation.
[ADDITIONAL Q&A NOT INCLUDED IN VIDEO]
JEFFRIANNE: Thank you so much, Dr. Benjamin. Wow, so timely, so many wonderful things within your talk. Our Q&A, as you can see, is very lively so I’m going to jump right into some of the questions here.
So the first question that we have that I’m going to pose is thinking about how innovation can create containment. “Can you tell us about how increasing representation and those who create technical innovation can better ensure a level playing field?”
DR. BENJAMIN: Yeah, thank you for that question. You know, one way I think about that question is in terms of the role of diversity in the tech workforce. In general, what I like to suggest is that diversity is important but not sufficient to address the extent of the issues that I’m describing, the various kinds of discriminatory design.
Representation is certainly a first step. But if people don’t have sufficient power to exercise their voice and their agency within those organizations and companies, then oftentimes diversity and representation can serve as a kind of progressive veneer for business as usual, where we don’t really address the underlying issues because our organization’s website looks adequately diverse or the people around the table seem to be represented. I’ll give you one example, again from tech, in which an acknowledgment of a problem and an attempt to address it actually further entrenched the problem, because the people trying to address it were not diagnosing the issue accurately. This is where something like a fix or a solution can actually be harmful.
So the example comes from late fall of 2019, when many of the major companies had by then accepted the research that had come out of MIT by my colleague Joy Buolamwini, a Black woman and graduate student who did groundbreaking research showing that most of the major facial recognition systems were not detecting people with darker skin accurately. You can look it up; the paper is called “Gender Shades.” There was a specific issue with the way that Black women, in particular, were inadequately detected. At first, many of the major companies tried to say that the research was faulty, but eventually the rigor was undeniable and they accepted it.
So, trying to improve upon its product, Google in particular was coming out with a new phone that fall, one of those you can unlock with your face, and they wanted it to work on people of all shades, all backgrounds. In an attempt to create a more diverse training set for the facial recognition system, Google hired contract workers to approach people to take selfies with the camera so that these faces could populate the training data. Google told the contract workers to approach Black people in Atlanta, but not just any Black people: homeless individuals, and to give them $5 gift cards in exchange for a selfie. The people were not adequately informed about what these selfies were going to be used for. And the only reason we know about it is that the contract workers, the people Google had hired, felt that something was wrong. In fact, some of them went on the record saying that they were instructed to approach homeless people because these individuals were less likely to go to the media about this.
So if we step back and look at the dynamics here, and I promise you it will relate back to the question. Just give me one minute.
The dynamics are ones in which you have a company that acknowledges a problem. They want to create a product that is more inclusive; they want the product to be accessible and to work for everyone. But to get to that inclusive product, they built a coercive process in which the people they are relying upon are once again the most vulnerable and are not being informed about what they are being enrolled into. So you can aim for an inclusive endpoint but get there through a very coercive or alienating process. That’s why thinking about how we get places, what processes we put in place, is just as important as thinking about the endpoint. For me, I sit around thinking about who was in that room when that decision was being made. Were they diverse, not only demographically, but in the sense of understanding this long history of science and technology being built on the backs of the most vulnerable in this country? There are mounds and mounds of scholarship that reveal that reality.
So was there anyone around the table, not just people who were African‑American or Latinx, but people who have the disciplinary background to know this history and sociology?
It’s also about the kind of knowledge that needs to be around the table when we are designing and conceiving technologies. Even further, even if these individuals were around the table, did they feel empowered to speak up? To say, you know what, I don’t think this is a good idea? This resembles many other experiments and studies that have been done on prisoners, on farmers, on poor women, studies that reproduce these power dynamics in which some people benefit grossly from science and technology and others are harmed.
So all of that is to say: yes, I think diversity and representation are vital, but if we only set our sights there, they can actually become a permanent placeholder for the more fundamental change we need to be striving for, change not just in representation but in power dynamics. Do people in your organization feel like they can speak up when they think something is going wrong? I think we need to be striving for that in both the process and the endpoint we are working towards.
JEFFRIANNE: Very, very helpful. And for me as a sociologist, all of this resonates. I would like to take a moment to answer one other question from the Q&A for some of the non‑social scientists in the room, some of the non‑theorists. You started off your talk giving us three important takeaways, and the first one was: racism is productive, it constructs. Could you, for a moment, continue to unpack that for folks in our conversation for whom it might be a little unclear?
DR. BENJAMIN: Yeah. And I say it not even just for people outside of the social sciences, but also for many people who often study the underside of racism. They look at who is harmed. Many, many social scientists look at the communities, look at the people who are harmed by oppressive systems. Even for them, by saying that racism is productive, I’m pushing them to look at who benefits from these systems. Who is actually reaping the rewards?
So for example, a quick illustration that I use in the introduction to Race After Technology is a simple exercise where you type two words into your word processor. You type in the word “under‑served,” and your word processor typically recognizes it as a legitimate word: go ahead, continue the sentence.
But until very recently, many word processors, if you typed in the word “overserved,” would put the little red underline and say: that is not a word. There is no such thing as an overserved person or population. So even in our grammar, in our language, we have a hard time naming the full story. We will look at who is harmed by systems of oppression, but we have a hard time talking about who is benefiting, so much so that our technical systems and our computers are telling us: that is not a thing.
So if you don’t have a word for it, you can’t discuss it and thereby change it.
With that understanding, you can go back through the history of racism and racial oppression in this country and globally, and look not simply at who was pushed out of neighborhoods due to redlining and restrictive covenants. Precisely because some people were pushed out of neighborhoods and kept from owning homes and generating wealth, a whole other set of people could be invested in, could get those homes and accumulate that wealth, pass that wealth on to their children and children’s children, and remortgage their home to pay that tuition. That whole process becomes naturalized because you forget how it started. And then we tell a story of how some people are just hard working and some people are just lazy, and that’s why we have these vast disparities in wealth and health and on and on. So to tell the full story, we need to tell the story of how racism has been and is productive.
So I gave you a quick example there in terms of our housing policy, and we can do that through almost any other arena, such as when we think about our schools. Teachers may be looking for trouble in one corner of the room because they have been socialized to associate Black boys with trouble. We actually have studies coming out of Yale that put eye‑tracking technology on preschool teachers and told them to look for challenging behavior. The eye‑tracking showed that the vast majority of teachers continued to direct their attention to the little Black boys when told to look for challenging behavior. So that’s an example where we can see who is being over‑disciplined. But who is getting away with things? Who is no one looking at? Who is able to escape accountability and responsibility for their behavior? That whole other corner of the room is the overserved. They are the people for whom racism, this racist vision of looking for trouble, is a benefit.
And so we always have to ask ourselves: what is the other side of this story? Who is benefiting from this thing that we see as harmful? The reason we have to do that is because until we really wrestle with the upside of a harmful system, we are not going to be able to bring it down, because we will not adequately understand how deeply people are invested in maintaining it. As long as we think that we can just wish it away or just love ourselves out of it, we don’t understand that people’s livelihoods, their sense of self, their sense of family, are wrapped up in certain racist notions of whiteness as good and Blackness as bad. The investment in that polarity is still profound in this country. We haven’t wrestled with it to the extent needed to address it.
That’s why we need to think about the productive side of these systems.
JEFFRIANNE: Thank you, that is so very important. I’m seeing a couple of questions related to ‑‑ there is a question coming in from a student asking about the future of race. I’m sure that we’ve heard these kinds of questions before and I think it’s pertinent to tackle it here.
“So related to whether we as a society or a world can overcome racism,” this is coming from a Brazilian student; this student notes, “I’m interested in your views about a more utopian society. Is there a future where race will no longer be a problem?”
DR. BENJAMIN: You won’t believe it, but when I was trying to work earlier today with my headphones in, my teenage sons were having a big debate on this very question. I don’t know if there is something in the air, or something about the generation that is TikToking or Snapchatting about this, but that was certainly the topic. I took my earbuds out and said, “what are you guys talking about?” and one of them said, “will there be a future without racism?”
You know, I don’t position myself as a prognosticator, a kind of predictor. There is a lot of power in prediction.
That’s why a lot of the tools I look at are these predictive algorithms. So, what I would tell the student from Brazil is that we have to build on the legacy of people who have worked against racism for a more just society, whether or not we truly believe it can ever fully happen and whether or not we are going to see the fruits of it in our lifetime. So, whether or not you have faith in a racism‑free utopia, the baton is in our hands right now. We have to move it forward, closer to that reality. Every generation, I think, has a responsibility to look especially at the way that racism is mercurial, how it morphs and takes on different forms in every generation.
One of the challenges for young people is to understand that a lot of times, racism today is not going to look exactly like it did in the textbooks you are reading. There may not be a sign in front of a business that says “this is for whites only,” but if an algorithm is interviewing you and you are being judged against an existing workforce with few people like you, that algorithm may weed you out.
Not because of whatever your background is, but because people were discriminated against in prior hiring stages. So that form of hiring discrimination is changing, and technology is playing a strong role in hiding discrimination today.
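The mechanism described here can be made concrete with a small, hypothetical sketch. This is not any real hiring product; the data, field names, and threshold below are all invented for illustration. The point is that a screening rule fitted to past hiring decisions inherits whatever bias produced those decisions, even though no protected attribute appears anywhere in the code.

```python
# Hypothetical illustration of how a screening rule learned from biased
# historical decisions reproduces that bias. All data here is invented.

# Past hiring decisions: recruiters historically favored "school A" candidates.
past_hires = [
    {"school": "A", "hired": True},
    {"school": "A", "hired": True},
    {"school": "A", "hired": True},
    {"school": "B", "hired": False},
    {"school": "B", "hired": False},
    {"school": "B", "hired": True},
]

def learned_hire_rate(school):
    """'Learn' the historical hire rate per school -- a stand-in for what
    a model fitted to past decisions would pick up."""
    matches = [h for h in past_hires if h["school"] == school]
    return sum(h["hired"] for h in matches) / len(matches)

def screen(candidate, threshold=0.5):
    """Screen a new candidate purely by resemblance to past hires."""
    return learned_hire_rate(candidate["school"]) >= threshold

# Two equally qualified candidates are treated differently, solely because
# of who was hired before. The bias is inherited, never stated.
print(screen({"school": "A"}))  # True
print(screen({"school": "B"}))  # False
```

If school attendance correlates with race, as it often does, the rule reproduces racial discrimination even though no race field exists anywhere in the data or the code.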
So for this young person, I would say: I think that we as human beings are always going to be challenged by the drive to create status hierarchy, to differentiate ourselves not just in a horizontal fashion where we can appreciate differences. There will always be a subset of humanity that needs hierarchy, that thrives off of it. Every generation is going to have to fight that tendency, to address it and not allow it to thrive and to rule and to govern the policies and the ways we set things up. So the idea is not that we are ever going to get to some utopic end point, but that we should always be striving to get closer to a more just and joyful and equitable society, whether we are going to see it in our lifetimes or not.
JEFFRIANNE: I think that’s an excellent point, and it’s really important for us, as you mention, for younger people to really be able to recognize the manifestation of race, which appears very differently today than it did ten or 20 years ago.
I see that we are almost at the hour, so I’m going to ask one more question before wrapping up our session, and this one is related to solving problems with algorithms.
“So in reference to your example about the racist medical algorithm, you suggest that it should be based on need rather than on cost. How might we encode or measure need, using established measures of health status? And how might a needs‑based algorithm still produce, or perhaps overcome, racial disparities?”
DR. BENJAMIN: That’s a great question.
In fact, the researchers who exposed that form of discrimination embedded in that digital triaging system are working right now with Optum, the company that produced it, to create a better, fairer prediction for triaging. You can look up the work of Obermeyer and co‑authors for the nitty‑gritty of that. But what I would say, and it goes back to the ventilator example that I mentioned in the context of the pandemic, is that there is definitely a need to make these systems fair. But my interest lies a step back: getting us to think about the fact that scarcity is manufactured.
The fact that not all patients can receive adequate resources or adequate attention, the fact that we have to do this digital triaging in the first place or decide who gets ventilators, is not inevitable. It is an outgrowth of decisions that people and organizations have made up until that point to invest in some things and not others. And so, where my energy lies is not in tweaking the algorithm or tweaking the system, but in getting us to actually think about our investments. We don’t have to live with this scarcity model, whether it’s in health care or education or any other context. We have enough to feed everyone, to treat everyone, to educate everyone with a high‑quality education. Yet we choose to create policies and institutions and ways of interacting with each other that create disparities needlessly. So rather than this tweaking toward better technologies, I want us to think about these larger social relations and investments, and to demand radically different investments: investing in the public good, the collective good, so that we can distribute the wealth of resources we have in all of these arenas to everyone who needs them. We don’t have to pick and choose, because when we pick and choose, we inevitably reproduce existing social inequities. Those that already have get more; those that don’t, don’t. And I don’t think we have to live with that.
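The proxy problem that Obermeyer and co‑authors documented can be sketched in a few lines. This is a hedged illustration only; the numbers and field names below are invented, not taken from their study. It shows why a triage score that ranks patients by past spending deprioritizes patients on whom the system has historically spent less, even when their measured need is identical.

```python
# Hypothetical illustration of cost-as-proxy bias in a triage score.
# All patient data here is invented for the sake of the example.

patients = [
    # "need" is a direct measure of health status (0-10);
    # "spending" is past healthcare spending in $k. For patient 2,
    # historical spending lags behind actual need.
    {"id": 1, "need": 8, "spending": 12.0},
    {"id": 2, "need": 8, "spending": 7.0},   # equal need, less spent historically
    {"id": 3, "need": 4, "spending": 9.0},
]

def rank_by_cost(patients):
    """Cost-as-proxy ranking: highest past spending is treated first."""
    return [p["id"] for p in sorted(patients, key=lambda p: -p["spending"])]

def rank_by_need(patients):
    """Needs-based ranking, using direct measures of health status."""
    return [p["id"] for p in sorted(patients, key=lambda p: -p["need"])]

print(rank_by_cost(patients))  # [1, 3, 2] -- patient 2 is pushed to the back
print(rank_by_need(patients))  # [1, 2, 3] -- equal need ranks together
```

Patients 1 and 2 have identical need, yet the cost proxy ranks patient 2 behind a less sick patient, because the score inherits the historical underspending rather than measuring health directly.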
JEFFRIANNE: Thank you so much. This has been such an engaging, lively discussion, and at this point we are moving towards the end of our conversation. I mentioned at the top of the hour that you have been so gracious as to agree to stay on with us for an additional 15 minutes; those folks who are able to stay on will do so as we move into the more informal session of continuing the Q&A. Before we do that, I have a few housekeeping thank‑yous to wrap up our conversation.
So please join us this coming Friday from 12:00 to 1:00 mountain time for our final conversation with our very own CEO and co‑founder Lucy Sanders. Lucy’s talk is titled Tech Culture Interrupted. She will be discussing diversity, equity, and inclusion efforts in tech and highlighting an exciting new research‑based approach for creating sustainable, inclusive technical cultures.
And remember, once you exit the Zoom conversation, you will see a short pop‑up survey to complete. As you know, evaluation is an important part of the work we do at NCWIT; we rely on your feedback to improve our work and would greatly appreciate it.
And again, thank you so much to Dr. Benjamin for today’s conversation. Also thank you to all of you who attended and to our sponsors who made this Conversations for Change series possible. Finally, I want to thank all of the people behind the scenes who made this session possible: our Q&A and room monitors, our event staff, our research team, and our NCWIT leadership.
At this point we will move to the informal portion of our session where we are going to continue on with some really great questions and answers at the end of the program. Thank you all so much for joining us.
Okay, so let’s jump into a really, really deep question here. The person who posed it prefaced it by saying, “This one may be too tough, but really goes deeper perhaps than some of the solution questions. I struggle with language or concepts that move beyond equity in technology conversations. bell hooks defines love as, quote, the will to extend oneself for the purpose of nurturing one’s own or another’s spiritual growth, and says love is an act of will. Could you please describe the role of love, in the bell hooks sense, in technology?” I love this question.
DR. BENJAMIN: Yeah, this is a person after my own heart. I’m always ready to talk about love, and this idea that love is not just a kind of saccharine sentiment. I truly believe that love is a force that we exercise, that we can put out there, that moves things, moves people, moves us. So I think for me, love is especially relevant when I think about teaching. I think of myself first and foremost as a teacher, and I don’t think that you can really have a fruitful relationship with students unless you care about them and love them. So that’s the first place it becomes relevant.
The other is when we start thinking about the role of criticism and critique. I remember being a college student and having a lot of criticism of my institution, my college, you know? I ended up graduating as the valedictorian, and in my talk I remember trying to explain that my critique of the school grew out of love. Unless you love something, you don’t really care about it; it’s like, fine, be your messed‑up self. But when you want something to be better, to be more habitable, even what feels like criticism can grow out of a sense of care and love for it.
So I think that also informs my scholarship. If the question has to do with technology: until very recently, I think a lot of people wanted to believe that technology was neutral. Part of the attachment to that is the worry that when you say, no, it’s situated, it grows out of particular worldviews, desires, and aspirations, that somehow brings it down to earth and pollutes it or corrupts it. I don’t think so. Something being situated in that way can actually be a very good thing, because when you create technology out of an ethic of love and care for the people who are going to be engaging with it, it makes you much more deliberate and conscious about what you are creating.
You don’t feign objectivity and neutrality at a distance from those it’s going to affect. In that sense, one of the things love should do, when we think about tech design, is motivate us to go slower, because part of the discriminatory design that I touched upon and that I write about is directly tied to the pace and the market logic of faster, better, quicker, before the competitor, hurry! If that is the animating force, that sense of speed, competition, and profitability, you don’t really have time to take on board all of these questions about equity and bias and discrimination. That requires time. So one way to practice love when we think about tech design and imagination is that love should allow us to slow down, out of a sense of caring for how these things are going to circulate in the world. So I personally think love is always relevant. It’s never too deep, so keep asking those sorts of questions.
JEFFRIANNE: So timely, especially now. We have a couple of questions here related to the K‑12 space.
So the attendee wrote to Dr. Benjamin, “How can K‑12 school districts rebooting from the global pandemic have increased awareness of the inequity challenges in the entire education ecosystem? Some of these, especially in tech, have been around for decades. What can we do differently?”
DR. BENJAMIN: So many of the longstanding inequalities, even when it comes to access to the internet and to devices, that basic level of access before you get into algorithms and AI and all the rest, are laid bare now.
They were there this whole time, and we found a way to keep business as usual. I have colleagues, scholars who are K‑12 educators, who have given this a lot of thought, who have written about it and speak about it, so I would also defer to their expertise, which has grown even more in the last couple of months.
One of the things that I could imagine being really fruitful is to look at the inequities head‑on, to make them an object of study for students. That is to say, let us dissect what has happened in the way that technology and education are brought together. That might mean looking at internet access in different neighborhoods, but it also might mean actually studying the platforms we are using. Let us look at the default settings of Zoom, of Google Hangouts, all of these tools, and open up the black boxes and study ourselves. Study these platforms as we are using them, rather than just take them for granted. I think there is a lot we can learn by objectifying this reality we have been forced into.
Personally, one of my mental health strategies is to study the things that affect me, like right now with the virus. It’s not that you depersonalize it, but you realize that what you are going through is bound up with a much larger situation. Once you begin to realize that, this is not just my personal problem; this is a social problem.
This is something that is patterned. I have been doing this since my high school days, this practice of looking at the patterns. Once you begin to see the patterns, there is a sense of empowerment: in being able to name the reality that is affecting you, you begin to have power over that reality. That’s why, when I talk about those two words, under‑served and overserved, being able to name something gives you a sense of agency and power vis‑a‑vis that thing. When you can’t name something that is still impacting your life, still influencing and shaping you, when you can’t pin it down, that to me is one of the elements of being disempowered. So what about building with students a language to name this new reality that we are living in, a reality which implicates the role of technology in all of these systems? Build a tool kit to name it and study it, ultimately with the idea that we want to change it. Before we can change it, we have to build that grammar and that conversation, to understand that this is part of a process. I think there is an opportunity to do that, especially at the secondary school stage, so I would encourage my colleagues and everyone to think about how to do that, and how people have been doing that, and bring it into this moment.
JEFFRIANNE: Okay. I think we have time for one final question and this question, it was really popular. So, folks really want to know the answer to this. We recognize that you are not forecasting things for us.
“Many universities and colleges are intent on returning students to campus in the Fall. What advice do you give for people who work at universities that are insistent on returning students to campus? For instance, a STEM university in a predominantly Black neighborhood is trying to return students, who are not predominantly Black and who come from other areas, to campus with seemingly little understanding or care for all of the issues of harm and historical imbalances that you have been talking about.”
DR. BENJAMIN: One of the things I’ve been feeling and thinking the last few weeks is how fortunate I feel not to be in an administrative capacity right now, having to make these decisions. I really empathize with those who are having to make these serious decisions impacting many, many people.
You know, I think, both for the people who are making the decisions and for those who want to challenge those decisions, it’s really important to figure out what the guiding principles are for whatever decision you are making, to articulate those principles, and to have those principles deliberated upon by everyone affected by them, rather than handed down as a top‑down policy. And I think one of the key principles in this context is that we have to put at the center of our decisions those who are most vulnerable in the situation.
And the driving principle cannot simply be financial viability; that has to be part of it, but not all of it. Nor can it simply be that we want to get back to business as usual. One of the things I hinted at at the beginning of my talk, and I would encourage everyone to read that article, “The Pandemic Is a Portal,” is Arundhati Roy’s point that the worst thing we can do right now is to go back to normality, this insistence on getting things back on track when back on track was already not working for many, many people. So I think we have to take seriously this idea that we are living in a break, in a waiting room between one world and the next, and we don’t need to simply revert to some imagined security that was often not secure for most people. So then the question becomes, for a school like this and for all of our institutions: who are the most vulnerable in this context? In terms of the people doing the labor, but also with respect to this idea that students somehow have greater immunity and so on.
But students do not exist in a vacuum. They live in families.
In this case, many families have elders and relatives who are particularly vulnerable. We have to understand that a school is not simply about teachers and students and staff; it’s about entire families and communities. If you are serving communities and families that this virus has completely ravaged, there can be no sense that you are just going to pluck the students out in little bubbles and bring them back to normal. So that’s going to require, well, that’s a non‑answer.
That’s to say, we have to question and challenge that drive, and we have to put at the center of our deliberations about what principles should guide us a deep care and love for the people who are most affected and most vulnerable in the context of our decisions.
Hopefully that speech will mean I will never be a dean of anything or President of anything because I am not the one ‑‑
JEFFRIANNE: We never know. Well, this hour, actually 75 minutes, has flown by. I have thoroughly enjoyed this conversation, and I know that all of our participants enjoyed it very much; it was so engaging, and again, so timely. Thank you so much, Dr. Benjamin, for joining us.
DR. BENJAMIN: Thank you so much. It was such a pleasure to be in conversation with you, my sociologist sister. And thank you to NCWIT for all of your work for all of these conversations. It’s really a service at this time. I appreciate participating.
JEFFRIANNE: On that note, everyone have a wonderful evening.