Artificial intelligence is amplifying unethical power structures, and USF is part of the problem.

GRAPHIC BY JOANNE CHU/GRAPHICS CENTER

Ornelle Fonkoua Mambe is a senior computer science and economics major. 

Kai Middlebrook is a USF alum ‘20 with a BS in data science.

USF’s core mission is grounded in the Jesuit Catholic tradition of “taking action against the things that degrade human dignity; tending to the whole person; uniting the mind and heart; and amplifying the voices of the underserved, disadvantaged, and poor.” Although USF aims to offer “students the knowledge and skills needed to succeed as persons and professionals, and the values and sensitivity necessary to be men and women for others,” it has failed to uphold these commitments within the departments of computer science and mathematics and statistics, despite students’ requests for change. 

Often, when people talk about addressing bias in technology — including prominent figures in the computer science and artificial intelligence (AI) fields — they focus on “debiasing” the data they work with. But this narrow framing of bias is deeply flawed. Bias in tech is systemic. It is the result of larger systems of power, often mirroring the historical patterns of discrimination, inequity, and oppression in our society. To truly address bias in technology, we must acknowledge that social dynamics matter and take the necessary steps to remedy these inequities.

A growing body of research in recent years has highlighted the ways that AI systems can harm under-represented groups or those with less power, whether social, political, or economic. For example: facial recognition technologies miscategorize Black faces and fail to recognize minorities; criminal sentencing algorithms discriminate against Black defendants; chatbots adopt racist and misogynistic language when trained on online discourse; and risk assessment systems often punish the poor and reinforce existing structural inequities.

These systems are developed almost exclusively by a handful of technology companies and a small set of elite university laboratories, which tend to be extremely white, affluent, technology-oriented, and male-dominated. These institutions also have a history of discrimination, exclusion, and sexual harassment. The cost of speaking out in these spaces is tremendous, even for prominent figures. In the past few months alone, Google abruptly fired two AI employees after they called for greater diversity among Google’s technical staff and expressed concern that the company was starting to censor research papers critical of its products. 

Without a curriculum that explicitly addresses such integral problems in technology, USF fails to prepare students to ask some of the most fundamental questions as they relate to their lives’ work. It is therefore imperative that the departments of computer science and math and statistics integrate technology ethics into their required curricula, especially in light of recent events highlighting systemic racism and the well-documented problem of discrimination in artificial intelligence.

Both departments should offer a semester-long, 4-credit course on AI and computer science ethics and require all future computer science and data science majors to take this ethics course to fulfill their respective degree requirements. The AI and computer science ethics course should also satisfy USF’s ethics or philosophy core requirement to avoid adding additional overhead to degree requirements.

The AI and computer science ethics course must offer students the knowledge, skills, and foresight necessary to promote equitable and accountable AI and computer science in the real world, in alignment with the University’s Jesuit values. This requires the course to, first, dispel the illusion that the problem of bias in technology can be “solved” by “fixing” or “removing” bias in individuals, in datasets, or in algorithms; second, acknowledge the role that perverted power structures, social dynamics, and historical patterns of oppression, discrimination, and inequity play in amplifying and naturalizing bias in technology; and third, explore the ways that these issues can be tackled in real life.

The problem of bias in technology will not go away on its own. It has to be actively rejected. If USF truly stands by its Jesuit mission, then the computer science and mathematics and statistics departments must offer students the knowledge and skills necessary to understand and design systems that address oppression, inequity, and discrimination in the virtual world.
