Mentorship Statement

May 3, 2026 · Jason Lucas

Jason Samuel Lucas · Assistant Professor of Information Science · University of Colorado Boulder · Director, Secure and Ethical AI Lab (SEAL)


Philosophy

My mentorship philosophy centers on a single idea: meet students where they are and scaffold pathways to independence. Students arrive at research with different preparation, different starting points, and different reasons to be there. The work of a mentor is to take that seriously, calibrate accordingly, and build the runway each student needs without flattening the standards the work deserves.

Track Record

Through Penn State’s Millennium Scholars program, I have mentored undergraduate researchers on progressively complex projects:

  • Kendall Reed II advanced from analyzing existing datasets to leading original research, with a forthcoming co-authored publication.
  • Lia Carin Djaouga progressed from literature reviews to designing cross-lingual transfer frameworks for low-resource language settings.

In both cases, the trajectory was the point: each milestone was scoped just beyond current capability, never punitively, and never below the standard of the work. I have additionally guided junior PhD students on research scoping, mentored undergraduate teams on collaborative robotics with an emphasis on inclusive problem-solving, and coordinated ENVISION: STEM Career Day, an annual event reaching 500+ young women across central Pennsylvania.

SEAL’s Mentoring Model

The Secure and Ethical AI Lab (SEAL) at CU Boulder is built around three commitments:

1. Substantive independence within a coherent program. SEAL students lead their own research questions, but those questions sit inside a shared intellectual frame: AI safety and equity across the digital language divide. Independence is not isolation. Coherence is not constraint.

2. Visibility matters. Students need to see scholars who navigated paths similar to their own. I share my trajectory openly, from Caribbean island student to R1 doctoral candidate, navigating a learning disability and limited resources, because students from underrepresented backgrounds often do not have models for the path they are on. Visibility is not a substitute for support; it is a condition for it.

3. Community over hierarchy. SEAL operates as a research community, not a chain of command. Senior students mentor junior students. Postdocs co-advise undergraduates. The PI is the scaffolding, not the ceiling. Ideas come from everywhere, and credit is distributed accordingly.

Who SEAL Looks For

I welcome PhD students, MS researchers, and undergraduates motivated by AI safety, multilingual NLP, low-resource and dialect-aware language technology, adversarial evaluation, or the broader question of who AI systems serve and who they fail. Strong candidates need not arrive with all the technical pieces in place. They need curiosity, rigor, and a willingness to engage with both the technical and the social dimensions of the work. The technical pieces, we build together.

An Open Invitation

If your research interests intersect with SEAL’s mission, I encourage you to reach out. I am especially committed to mentoring students from Caribbean and African diaspora communities, first-generation graduate students, and others underrepresented in AI research. The lab’s work is improved when its membership reflects the linguistic and cultural diversity of the communities the work is meant to serve.

Prospective students can reach me directly, or apply to the PhD program in Information Science at CU Boulder and indicate interest in SEAL in their application materials.


Last updated: May 2026

Jason Lucas
Ph.D. Candidate · Incoming Assistant Professor & Director, Secure and Ethical AI Lab (SEAL) — CU Boulder (Aug 2026)

I am a PhD candidate in Informatics in the College of IST at Penn State University, where I conduct research in the PIKE Research Lab under the guidance of Dr. Dongwon Lee. Starting August 2026, I will join the Department of Information Science in the College of Media, Communication and Information (CMCI) at the University of Colorado Boulder as a Tenure-Track Assistant Professor and founding Director of the Secure and Ethical AI Lab (SEAL). My research advances trustworthy and equitable AI for the world’s languages and communities, spanning multilingual NLP, low-resource and dialectal language technology, AI safety, and information integrity, with work extending across 70+ languages. I have authored 14+ peer-reviewed papers with 315+ citations in premier venues including ACL, EMNLP, NAACL, ICML, and IEEE conferences.

My doctoral research focuses on bridging the digital language divide through transfer learning, classification (NLU), generation (NLG), adversarial attacks, and end-to-end AI pipelines using RAG and agentic AI workflows to combat multilingual threats. Drawing on my Grenadian background and knowledge of local Creole languages, I bring a global perspective to AI challenges, working to democratize state-of-the-art AI capabilities for underserved linguistic communities worldwide. My mission is to develop robust multilingual, multimodal systems that mitigate evolving security vulnerabilities while expanding access to human language technology.

As an NSF LinDiv Fellow, I conduct transdisciplinary research advancing human-AI language interaction for social good. I actively mentor 5+ research interns and teach Applied Generative AI courses. Through industry experience at Lawrence Livermore National Lab, Interaction LLC, and Coalfire, I bridge academic research with practical applications in combating evolving security threats and enhancing global AI accessibility. I see multilingual advances and interdisciplinary collaboration as a competitive advantage, not a communication challenge. Beyond research, I stay active through dance, fitness, martial arts, and community service.