I am a PhD candidate at the University of Michigan School of Information, advised by Florian Schaub, and currently a visiting PhD student at Georgetown University. Previously, I obtained an M.Sc. in biostatistics from the University of Michigan School of Public Health and a B.A. in applied mathematics and political science from Macalester College in Saint Paul, Minnesota.
I envision an AI-infused society where people enjoy the benefits of technology without compromising their privacy or security. In my work, I ask: How can we empower people to recognize, interpret, and respond to privacy and security risks? To answer this question, I create human-centered solutions that strengthen the integrity of AI data infrastructures, including usable and useful privacy notices, privacy-preserving data processing, and protections against deceptive synthetic content.
My work spans four interconnected streams that engage diverse users and experts, tackling challenges from data protection to data misuse:
I advance the legibility of privacy information by transforming existing legal and technical mechanisms, which are intended to protect users yet are rarely understood, into communicable and interpretable infrastructures. For example, my large-scale analyses of privacy notices in the financial industry exposed how fragmented privacy laws lead to inconsistent privacy disclosures, and I offered actionable policy recommendations for useful and usable transparency. I broadened this research stream by designing user-centered explanations of privacy-enhancing technologies (e.g., differential privacy, federated learning) that support users' informed privacy decision-making.
I address the tension between AI's data demands and data sensitivity by creating human-centered tools for valid data analysis under privacy constraints.
I examine AI-mediated deception, particularly deepfake scams, whose distinctive deceptive intimacy exploits familiar social relationships and identity trust. I am designing and evaluating deepfake scam warnings with actionable advice that enables users to respond immediately during real-time video calls, countering this new class of security threats.
I investigate AI’s societal implications across domains, especially creative and knowledge work. I found that creative work relies on identity-bearing materials and invisible labor, suggesting the need for process- and labor-aware protections when creators' content can be taken up as AI training data.
I have published extensively in top-tier venues in cybersecurity (e.g., ACM Conference on Computer and Communications Security (CCS), Proceedings on Privacy Enhancing Technologies (PoPETs/PETS)), human-computer interaction (e.g., ACM CHI Conference on Human Factors in Computing Systems (CHI), ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW)), and computational social science (e.g., International AAAI Conference on Web and Social Media (ICWSM)). My work has been recognized with a Distinguished Paper Award (top <1%) at ACM CCS 2025.