
August 31, 2023

What results when you bring together esteemed Dartmouth faculty, a Ph.D. student, and staff to discuss algorithmic bias?
Just as many questions answered as questions posed!

Pictured from left: Susan Brison, the Eunice and Julian Cohen Professor for the Study of Ethics and Human Values; Department of Computer Science faculty member Deeparnab Chakrabarty; Simon Stone, a Dartmouth Library Research Data Science Specialist; Engineering graduate student Chase Yakaboski; and Luis Alvarez Leon, Assistant Professor in the Department of Geography


Bias in Algorithms Event: reflections on a crucial topic

Questions like: Who or what is the root cause of unfair or disadvantaged results? How do we mitigate the risks? Do we want to? Who holds the greatest responsibility: designers, programmers, business executives, politicians, or everyday people? And who benefits from the design of these technologies and the algorithms that feed them?

The Bias in Algorithms session revealed that no single actor creates algorithmic bias. Instead, it emerges from an entire ecosystem: machines, people, data, markets, and institutions. That perspective was shared by panelist Luis Alvarez Leon, Assistant Professor in the Department of Geography.

He put it provocatively: "Information asymmetry reinforces power asymmetry. Systemic distortions - the conditions from which the algorithms emerge - create those biases and can't be isolated to a single source. It really is an entire assemblage." And, he added, "rather than challenging or overcoming [those systemic issues], tech 'fixes' reinforce the status quo."

"Humans are good at heuristics and reasoning, but that can be detrimental in a societal context. The data inputted into the [algorithmic] models reflect individual thinking." - Chase Yakaboski

Photo from the Bias in Algorithms event, with librarian Matt Benzing emceeing

Engineering graduate student Chase Yakaboski pointed out that these algorithms don't know they're biased. Instead, they're programmed to attempt to find the best answer based on patterns from the data. He also shared that "humans are good at heuristics and reasoning, but that can be detrimental in a societal context, so the data inputted into the models reflect individual thinking." 

Simon Stone, a Dartmouth Library Research Data Science Specialist, reiterated that bias is intrinsic to design and is used in various ways. It's not the algorithmic model that creates bias, but the datasets informing the model. The concern at the heart of this event, he noted, is how algorithmic outputs can work to the detriment of a person or cohort of people.

Department of Computer Science faculty member Deeparnab Chakrabarty, better known as Deep C, reminded us that an algorithm is just a set of 1s and 0s. Bias comes from us – the humans – our perceptions, what we consider fair or unfair, and how the code is trained.

"The code itself isn't biased. The code is a set of instructions, and the parameters the code is trained on may not present what is considered fair." Deep C then shared the example of how AI is being used to determine prisoner recidivism and who is granted parole using algorithmic models that predict future behaviors, or "criminal risk assessment algorithms."

Throughout the session, Susan Brison, the Eunice and Julian Cohen Professor for the Study of Ethics and Human Values, underscored the social impact of algorithmic bias.

She shared two books that delve deeper into this topic: Algorithms of Oppression and How We Grow the World We Want. She reminded us that it is a myth that anything scientific is objective, neutral, and free of human bias. Ultimately, she says, technology reflects and reproduces society's inequities. And when ethics-washing is involved, "creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns," the fundamental problems are not being solved.

What the event did make clear is that there is no simple solution for mitigating bias in algorithms. Groups of people will, at different times, always be at a disadvantage. "Absolute fairness is not going to be attainable, but efforts can be made in the various domains across the algorithm ecosystem to mitigate some of that."

Thanks to the Dartmouth Librarians who helped make this event possible - Lilly Linden, Matt Benzing, and Tricia Martone - and to the panelists for sharing their expertise and perspectives on such a critical topic.

 
