My research sits at the intersection of natural language processing, information integrity, and AI safety, with a focus on the systems and populations that current AI tooling has historically left behind. To that end, I build models, datasets, and evaluation frameworks spanning three themes.
Below is a high-level summary of my research themes; detailed papers, datasets, and code are linked from each theme.
Extending modern NLP techniques — fine-tuning, prompt-based learning, retrieval, and adversarial training — beyond English into the long tail of low-resource languages and dialects. My BLUFF benchmark spans 79 languages (20 high-resource + 59 low-resource) with 200K+ samples, and DIA-HARM evaluates robustness across 50 English dialects.
Building robust, equitable detection systems for mis/disinformation, harmful content, and AI-generated text — and rigorously characterizing where and why they fail. My work consistently shows that detector performance gaps along language, dialect, and resource axes are systematic, not marginal.
Understanding the threat surface of modern language systems — jailbreaking, hallucination, dialectal evasion, agentic adversaries — and designing defenses that hold up under realistic, non-Standard-American-English inputs.
Drawing on my Grenadian background and knowledge of Caribbean Creole varieties, I bring a global perspective to AI fairness, one focused on ensuring that the next generation of AI systems serves the linguistic communities most often left out of training data.
Detailed statements describing my research vision, teaching philosophy, and commitment to diversity, equity, and inclusion are available below.
For the most up-to-date overview, please see my CV or contact me directly.