Josie V. Williams is an Afrofuturist focused on the intersection of technology, art, and culture. Josie is the founder of Algorithmic Equity, an interactive digital platform that empowers any New Yorker to report, record, or respond to law enforcement behavior. She is currently on the Creative Science track at NEW INC, an incubator out of NYC’s New Museum. Her primary interests are machine and deep learning, algorithmic equity, and creating sentient-centered AI. Her other projects revolve around chatbot conversations and the use of biometrics for mass surveillance.
She has also conducted research at the NYU Medical Center under Professor Narges Razavian, where she was the only undergraduate researcher in a program that typically employs graduate and Ph.D. students. She has presented her research on bias in chronic kidney disease prediction modeling at the Fair ML in Health workshop at NeurIPS 2019 in Vancouver, Canada, and at the NYC Media Lab Summit in 2019. She graduated from NYU in 2019 with a Computer Science major and a Web Programming and Application minor, completing coursework in game theory, statistics, discrete mathematics, object-oriented programming, algorithmic problem solving, advanced data structures, computer graphics, and natural language processing.
MY RESEARCH & PROJECTS
From Theory to Reality
Algorithmic Equity is an interactive digital platform that empowers any New Yorker to report, record, and respond to law enforcement behavior. For Algorithmic Equity, success will be radical transparency in the documentation of NYPD misconduct. The goal is to create an honest and open representation of community policing trends and officer behavioral patterns generated directly by impacted populations. Every New Yorker is a data scientist.
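A platform like this ultimately rests on a structured incident record that community members can file and query. The sketch below is purely illustrative: the field names, categories, and `IncidentReport` type are assumptions for this example, not the actual Algorithmic Equity schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical record shape; the real platform's schema is not public here.
@dataclass
class IncidentReport:
    precinct: int
    category: str            # e.g. "stop", "use-of-force" (assumed labels)
    description: str
    has_video: bool = False
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

report = IncidentReport(precinct=75, category="stop",
                        description="Stop without stated cause")
print(asdict(report)["category"])  # → stop
```

Keeping reports as structured records rather than free text is what would let aggregate policing trends be computed directly from community submissions.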
FAIRNESS IN CHRONIC KIDNEY DISEASE PREDICTION
For my undergraduate research, I created a supervised learning model trained on hospital electronic health records to predict the likelihood of chronic kidney disease in any given patient. The resulting model held predictive performance constant for subgroups that already had high accuracy scores, while risk-prediction performance for the remaining subgroups improved. Our findings were well received by the academic community, and in December 2019 I had the opportunity to present this research at the Fair ML in Health workshop at NeurIPS in Vancouver, Canada.
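Comparing performance across subgroups is the core measurement step in this kind of fairness analysis. The helper below is a minimal sketch of that idea with toy labels; it does not reproduce the study's actual cohorts or metrics.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Prediction accuracy broken out by patient subgroup.

    Illustrative only: real evaluations of clinical risk models
    typically use richer metrics (AUC, calibration) per subgroup.
    """
    buckets = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g][0] += int(t == p)
        buckets[g][1] += 1
    return {g: correct / total for g, (correct, total) in buckets.items()}

# Toy labels and predictions for two subgroups A and B:
acc = subgroup_accuracy([1, 0, 1, 1], [1, 0, 0, 1], ["A", "A", "B", "B"])
print(acc)  # → {'A': 1.0, 'B': 0.5}
```

A gap like the one between A and B above is exactly the signal a fairness intervention aims to close without degrading the better-served group.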
ANCESTRAL ARCHIVES
Ancestral Archives is a collection of chatbots modeled after historically significant Black leaders, designed to bring these figures back in new dimensions. The goal is to cultivate and develop a connection between the revolutionary leaders of the past and a future generation of activists and critical thinkers. This project combines the learning capabilities of deep neural networks with the power of radical Black culture to create a thoughtful and one-of-a-kind experience.
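The project itself uses deep neural networks, but the simplest way to illustrate the persona-chatbot idea is a retrieval baseline: answer a prompt with the stored quotation that shares the most words with it. Everything below (the corpus keys, the overlap scoring) is an assumed toy setup, not the project's actual model.

```python
import re
from collections import Counter

# Tiny illustrative corpus; a real persona bot would draw on a large
# archive of a leader's speeches and writings.
CORPUS = {
    "education": "Education is the passport to the future.",
    "freedom": "Freedom is never given; it is won.",
}

def tokenize(text):
    """Lowercase bag-of-words representation of a string."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def reply(prompt):
    """Return the stored quote with the highest word overlap with the prompt."""
    words = tokenize(prompt)
    def overlap(quote):
        return sum((words & tokenize(quote)).values())
    return max(CORPUS.values(), key=overlap)

print(reply("What should I do with my education?"))
# → Education is the passport to the future.
```

A neural version replaces the overlap score with learned representations, but the retrieval framing makes the conversational loop easy to see.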
ROBOTIC BLUETOOTH PROJECTION
I built a Raspberry Pi-powered video rover that is controlled by an iPhone via Bluetooth and projects live footage from its onboard camera. This was a side project I did to build a basic foundation in robotics.
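The control loop in a rover like this reduces to mapping incoming Bluetooth commands to motor speeds. The sketch below shows only that dispatch layer, with assumed single-byte commands and speed values; the actual GPIO and Bluetooth wiring (e.g. via gpiozero and a BLE library) is omitted so the logic runs anywhere.

```python
def parse_command(byte_msg):
    """Map a single-byte Bluetooth command to (left, right) motor speeds.

    Command bytes and speed values here are assumptions for illustration,
    not the rover's actual protocol.
    """
    commands = {
        b"F": (1.0, 1.0),    # forward
        b"B": (-1.0, -1.0),  # backward
        b"L": (-0.5, 0.5),   # pivot left
        b"R": (0.5, -0.5),   # pivot right
        b"S": (0.0, 0.0),    # stop
    }
    return commands.get(byte_msg, (0.0, 0.0))  # unknown command -> stop

print(parse_command(b"F"))  # → (1.0, 1.0)
```

Defaulting unknown bytes to a stop is a simple safety choice: a garbled Bluetooth packet halts the rover rather than sending it in an arbitrary direction.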
THE CHAMELEON PROJECT
The Chameleon Project was inspired by Octavia Butler’s Bloodchild, George Orwell’s 1984, the Hong Kong protests, and the Black Lives Matter protests worldwide. How can one achieve invisibility in the modern world? How can we combine existing research with innovation to produce inconspicuous technology intended to deflect big data surveillance? Over the course of a year, I will implement all outlined solutions to attempt to become an invisible woman in the 21st century.
TOWARDS QUANTIFICATION OF BIAS IN MACHINE LEARNING FOR HEALTHCARE: A CASE STUDY OF RENAL FAILURE PREDICTION
As machine learning (ML) models trained on real-world datasets become common practice, it is critical to measure and quantify their potential biases. In this paper, we focus on renal failure and compare a commonly used traditional risk score, Tangri, with a more powerful machine learning model that has access to a larger variable set and is trained on 1.6 million patients' EHR data. We compare and discuss the generalization and applicability of these two models in an attempt to quantify biases of status-quo clinical practice relative to ML-driven models.
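A comparison of this kind typically scores both models' risk predictions with a ranking metric such as ROC AUC. The sketch below computes AUC from scratch via the rank-sum (Mann-Whitney) formulation and runs it on synthetic placeholder scores; the variable names `tangri_like` and `ml_like` are illustrative stand-ins, not outputs of the actual Tangri score or the study's EHR model.

```python
def auc(y_true, scores):
    """ROC AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs where the positive case is ranked higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0]                    # 1 = renal failure (toy labels)
tangri_like = [0.8, 0.4, 0.5, 0.2]  # placeholder traditional-score outputs
ml_like = [0.9, 0.7, 0.3, 0.1]      # placeholder ML-model outputs

print(auc(y, tangri_like), auc(y, ml_like))  # → 0.75 1.0
```

Computing this metric per demographic subgroup, rather than only overall, is what turns the comparison into a bias quantification: a model can look stronger in aggregate while underperforming the traditional score for specific populations.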