The Double-Edged Sword of LLMs: Mitigating Extrinsic Task Bias and Advancing Intrinsic Debiasing

Location: ISI Foundation, Seminar Room
Speaker(s): Dr. Gianmarco Cafferata - PhD, University of San Andrés (UdeSA)
Computational Social Science, Data Science
ABSTRACT
Addressing AI bias means both leveraging LLMs to reduce task-level disparities and correcting the intrinsic biases within the models themselves. I will first present our research on geolocation extraction for humanitarian crisis response, carried out in collaboration with the ISI Foundation, demonstrating how LLM reasoning effectively reduces socioeconomic and geographical disparities in NER tagging. While these models act as agents for task equity in some applications, they remain subject to internal biases inherited during training. In the second part, I will show how debiasing techniques such as machine unlearning and representational editing can correct these intrinsic stereotypes, presenting some preliminary results.

SHORT BIO
Gianmarco Cafferata is a computer engineer who graduated from the University of Buenos Aires. He currently teaches undergraduate courses in the AI Engineering program at the University of San Andrés (UdeSA), where he is also a PhD candidate focusing on fairness in natural language processing. In industry, he serves as a technical leader at Mercado Libre—the largest e-commerce and digital banking platform in Latin America—specializing in unsupervised learning.

Published on Wednesday, 8 April 2026