This is me
I am a Research Associate in Artificial Intelligence (AI) at King's College London (UK), where I explore how algorithms impact non-privileged subjects from a technical, political and legal perspective. Building on my experience as a mathematician and computer scientist, my interest lies in investigating power relationships in algorithmic governance and how public and private actors are fuelling the future with AI.
Currently, I am investigating how datafication technologies are transforming borders and migration governance at Security Flows (an ERC project led by Prof. Claudia Aradau). Within this project, I have developed digital methodologies to discover which algorithmic systems are used in the field of border security and to better understand the UK and EU asylum systems. I am also analysing the past, present and future of biometric systems from a critical perspective. Moreover, I bridge the gap between computer science and social science by translating technical concepts.
In the past, I conducted research in transdisciplinary teams designing technological solutions in the private sector. I am a former fellow of the Data Science for Social Good program at the University of Chicago. Nowadays, I collaborate with the Jevon’s Paradox blog, where we examine the relationship between science and technology, knowledge and power. I am a research advisor for Algorace, a project that raises awareness about algorithmic risks, harms and limitations affecting racialised subjects. I am also part of Algorights, a civil society initiative of volunteers and community-based organisations that analyses AI's impacts on society from an ethical perspective.
Machine Learning
Digital Methods
Natural Language Processing
Fairness and Accountability
AI Ethics
Social Justice
Power and Resistance
Valdivia, A. et al. (2022). Judging the algorithm: A case study on the risk assessment tool for gender-based violence implemented in the Basque Country. arXiv preprint arXiv:2203.03723.
Valdivia, A. et al. (2021). There is an elephant in the room: Towards a critique on the use of fairness in biometrics. arXiv preprint arXiv:2112.11193.
Valdivia, A. et al. (2021). How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness. International Journal of Intelligent Systems, 36(4), 1619-1643.
Valdivia, A. et al. (2020). What do people think about this monument? Understanding negative reviews via deep learning, clustering and descriptive rules. Journal of Ambient Intelligence and Humanized Computing, 11(1), 39-52.
Omnes et singulatim: Collective and individual subjectivities in algorithmic governmentality
This workshop aims to inaugurate a (trans)disciplinary debate about how to critically rethink subjectivities in AI.
Speakers: Antoinette Rouvroy, Claudia Aradau, Bernard Harcourt, Seda Gürses, Colin Koopman, Lorena Jaume-Palasí.
Organised by: Daniele Lorenzini (Warwick University), Martina Tazzioli (Goldsmiths), Ana Valdivia (King's College).
Datafication and Migration
Together with Martina Tazzioli, I present our work, Making up migrants: Invisibilize, foster and recraft racialised borders through artificial intelligence, at the GeoMedia Speaker Series.
Colonial Legacies, Biometric Futures: from Galton to the Entry-Exit System
I was invited to this seminar, organised as part of the MA in International Relations (Goldsmiths, University of London), to present my work on the theoretical foundations of biometrics and how the field has evolved through the development of computational methods such as deep learning.