AI Robustness & Adversarial Safety

AI systems deployed in the real world must withstand adversarial manipulation and perform reliably across the full spectrum of human language variation. This project investigates how dialect diversity, authorship obfuscation, and expert-level text editing expose critical vulnerabilities in content detection systems. From stress-testing harmful content classifiers across 50 English dialects to evaluating robustness against sophisticated evasion techniques, this work reveals that the Digital Language Divide is not only a gap in language coverage but also a security vulnerability—one that adversaries can exploit when AI systems are brittle to linguistic variation.
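
To make the stress-testing idea concrete, here is a minimal sketch of how dialect-robustness probing can be framed: count how often a classifier's prediction flips when the same message is rewritten in a dialect variant. The toy classifier and example pairs below are hypothetical placeholders for illustration only, not the DIA-HARM evaluation code or data.

```python
# Minimal sketch of dialect-robustness probing for a harmful-content classifier.
# Everything below (toy classifier, example pairs) is hypothetical and for
# illustration only; it is not the DIA-HARM pipeline.
from typing import Callable, Iterable, Tuple

def dialect_flip_rate(
    classify: Callable[[str], str],
    pairs: Iterable[Tuple[str, str]],
) -> float:
    """Fraction of (standard, dialect) pairs whose predicted label differs."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    flips = sum(classify(standard) != classify(dialect) for standard, dialect in pairs)
    return flips / len(pairs)

if __name__ == "__main__":
    # Toy stand-in for a real model: flags texts containing a blocklisted word.
    def toy_classifier(text: str) -> str:
        return "harmful" if "stupid" in text.lower() else "benign"

    # Illustrative pairs only; a real evaluation would use parallel dialect corpora.
    pairs = [
        ("You are so stupid.", "Yuh real chupid, yes."),
        ("Have a nice day.", "Hav a nice day, eh."),
    ]
    print(f"Prediction flip rate: {dialect_flip_rate(toy_classifier, pairs):.2f}")
```

A high flip rate on meaning-preserving dialect rewrites signals exactly the kind of brittleness an adversary could exploit.
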
Related Publications:
- DIA-HARM (2026) — Harmful content detection robustness across 50 dialects
- Authorship Obfuscation in Multilingual MGT Detection (2024, EMNLP)
- BEEMO (2025, NAACL) — Expert-edited machine-generated outputs benchmark

I am a PhD candidate in Informatics in the College of IST at Penn State University, where I conduct research at the PIKE Research Lab under the guidance of Dr. Dongwon Lee. I specialize in AI/ML research on Information Integrity and Safe and Ethical AI, including combating harmful content across multiple languages and modalities. My research spans low-resource multilingual NLP, generative AI, and adversarial machine learning, with work extending across 79 languages. I have published 12 papers with 260+ citations in premier venues including ACL, EMNLP, IEEE, and NAACL.
My doctoral research focuses on bridging the Digital Language Divide through transfer learning, classification (NLU), generation (NLG), adversarial attacks, and end-to-end AI pipelines that use RAG and Agentic AI workflows to combat multilingual threats. Drawing on my Grenadian background and knowledge of local Creole languages, I bring a global perspective to AI challenges, working to democratize state-of-the-art AI capabilities for underserved linguistic communities worldwide. My mission is to develop robust multilingual, multimodal systems and mitigate evolving security vulnerabilities while enhancing access to human language technology through cutting-edge solutions.
As an NSF LinDiv Fellow, I conduct transdisciplinary research advancing human-AI language interaction for social good. I actively mentor 5+ research interns and teach Applied Generative AI courses. Through industry experience at Lawrence Livermore National Lab, Interaction LLC, and Coalfire, I bridge academic research and practical applications in combating evolving security threats and enhancing global AI accessibility. I see multilingual advances and interdisciplinary collaboration as a competitive advantage rather than a communication challenge. Beyond research, I stay active through dance, fitness, martial arts, and community service.