JOSIE WILLIAMS

researcher, developer, designer


ABOUT JOSIE

Josie V. Williams is an Afro-futurist focused on the intersection of technology, art, and culture. Josie is the founder of Algorithmic Equity, an interactive digital platform that empowers any New Yorker to report, record, or respond to law enforcement behavior. She is currently on the Creative Science track at NEW INC, an incubator out of NYC’s New Museum. Her primary interests are machine and deep learning, algorithmic equity, and creating sentient-centered AI. Her other projects revolve around chatbot conversations and the use of biometrics for mass surveillance.

She has also conducted research at the NYU Medical Center under Professor Narges Razavian, where she had the privilege of being the only undergraduate researcher in a program that typically employs graduate and Ph.D. students. She presented her research on bias in chronic kidney disease prediction modeling at the NeurIPS Fair ML for Health workshop in Vancouver, Canada, and at the NYC Media Lab Summit in 2019. She graduated from NYU in 2019 with a major in Computer Science and a minor in Web Programming and Applications, completing coursework in game theory, statistics, discrete mathematics, object-oriented programming, algorithmic problem solving, advanced data structures, computer graphics, and natural language processing.


MY RESEARCH & PROJECTS

From Theory to Reality


ALGORITHMIC EQUITY

January 2020

Algorithmic Equity is an interactive digital platform that empowers any New Yorker to report, record, and respond to law enforcement behavior. For Algorithmic Equity, success means radical transparency in the documentation of NYPD misconduct. The goal is to create an honest and open representation of community policing trends and officer behavioral patterns, generated directly by impacted populations. Every New Yorker is a data scientist.


FAIRNESS IN CHRONIC KIDNEY DISEASE PREDICTION

November 2018

For my undergraduate research, I created a supervised learning model trained on hospital electronic health records to predict the likelihood of chronic kidney disease in any given patient. The model’s prediction performance for subgroups that already had high accuracy scores remained constant, while risk-prediction performance for the remaining subgroups increased. Our findings were well received by the academic community, and in December 2019 I had the opportunity to present this research at the NeurIPS Fair ML for Health workshop in Vancouver, Canada.
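The core idea of this study can be sketched in a few lines: train a supervised classifier on tabular EHR-style features, then break its accuracy down by patient subgroup to surface disparities. Everything below is illustrative, not the study's actual pipeline; the features and the demographic subgroup label are hypothetical, and the data is synthetic.

```python
# Minimal sketch: supervised CKD risk prediction with a per-subgroup
# accuracy breakdown. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical EHR-style features: age, baseline eGFR, diabetes flag.
age = rng.uniform(30, 85, n)
egfr = rng.normal(75, 20, n)
diabetes = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)  # stand-in demographic subgroup label

# Synthetic label: risk rises with age and diabetes, falls with eGFR.
logit = 0.04 * (age - 55) - 0.05 * (egfr - 60) + 0.8 * diabetes - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([age, egfr, diabetes])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-subgroup accuracy: the kind of breakdown used to surface bias.
for g in (0, 1):
    mask = g_te == g
    acc = (pred[mask] == y_te[mask]).mean()
    print(f"subgroup {g}: accuracy = {acc:.3f}")
```

In a real fairness audit the subgroup breakdown would use richer metrics (calibration, false-negative rates) rather than raw accuracy, but the structure is the same.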


ANCESTRAL ARCHIVES

March 2021

Ancestral Archives is a collection of chatbots modeled after historically significant Black leaders, designed to bring these figures back in new dimensions. The goal is to cultivate and develop a connection between the revolutionary leaders of the past and a future generation of activists and critical thinkers. This project combines the learning capabilities of deep neural networks with the power of radical Black culture to create a thoughtful, one-of-a-kind experience.


ROBOTIC BLUETOOTH PROJECTION

August 2019

I built a Raspberry Pi-powered video rover that’s controlled by an iPhone via Bluetooth and projects footage captured from the camera. This was a side project I did to build a basic foundation in robotics.

THE CHAMELEON PROJECT

TBD

The Chameleon Project was inspired by Octavia Butler’s Bloodchild, George Orwell’s 1984, the Hong Kong protests, and the Black Lives Matter protests worldwide. How can one achieve invisibility in the modern world? How can we combine existing research with innovation to produce inconspicuous technology intended to deflect big data surveillance? Over the course of a year, I will implement all outlined solutions in an attempt to become an invisible woman in the 21st century.

PUBLISHED WORK

November 2019

TOWARDS QUANTIFICATION OF BIAS IN MACHINE LEARNING FOR HEALTHCARE: A CASE STUDY OF RENAL FAILURE PREDICTION

As machine learning (ML) models trained on real-world datasets become common practice, it is critical to measure and quantify their potential biases. In this paper, we focus on renal failure and compare a commonly used traditional risk score, Tangri, with a more powerful machine learning model, which has access to a larger variable set and is trained on EHR data from 1.6 million patients. We compare and discuss the generalization and applicability of these two models in an attempt to quantify the biases of status quo clinical practice relative to ML-driven models.
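The comparison set up in the abstract can be sketched as follows: score the same held-out patients with (a) a fixed, hand-specified risk formula standing in for a traditional score like Tangri, and (b) a fitted model that also sees an additional variable, then compare discrimination via AUROC. The formulas, the extra variable (a log albumin-creatinine ratio), and the data are all invented for illustration; this is not the Tangri equation or the paper's actual model.

```python
# Hedged sketch of a traditional-score vs. ML-model comparison on
# synthetic data. Neither formula reflects a real clinical risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 3000
age = rng.uniform(30, 85, n)
egfr = rng.normal(75, 20, n)
acr = rng.lognormal(1.0, 1.0, n)  # hypothetical extra variable (ACR)

# Synthetic outcome depends on all three variables.
logit = 0.03 * (age - 55) - 0.06 * (egfr - 60) + 0.6 * np.log(acr) - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# (a) Fixed formula: uses age and eGFR only, no fitting involved.
traditional = 0.03 * age - 0.06 * egfr

# (b) ML model: fitted on the first 2000 patients, sees the extra variable.
X = np.column_stack([age, egfr, np.log(acr)])
ml = LogisticRegression(max_iter=1000).fit(X[:2000], y[:2000])
ml_score = ml.predict_proba(X[2000:])[:, 1]

trad_auc = roc_auc_score(y[2000:], traditional[2000:])
ml_auc = roc_auc_score(y[2000:], ml_score)
print(f"traditional AUROC: {trad_auc:.3f}")
print(f"ML model AUROC:    {ml_auc:.3f}")
```

AUROC only measures ranking quality; the bias quantification the paper describes would also examine how each model's performance varies across patient subpopulations.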