Teaching Statement
Jason Samuel Lucas · Assistant Professor of Information Science · University of Colorado Boulder · Director, Secure and Ethical AI Lab (SEAL)
Philosophy
I teach at the intersection of artificial intelligence, language, and ethics. My classrooms are spaces where students learn to build AI systems and to interrogate them, with equal seriousness given to both. I draw on a decade of teaching at St. George’s University School of Medicine, where I taught introductory health informatics to over 4,000 medical students, and on multiple semesters as a graduate teaching assistant at Penn State across courses including IST 140: Introduction to Application Development and IST 402: Applied Generative AI. From that experience, I bring three evidence-based principles to every course I teach: active learning through hands-on experience, adaptive instruction responsive to diverse learners, and authentic assessment rooted in real-world practice.
Active Learning Through Hands-On Experience
Lectures alone do not produce the engineers, researchers, and informed citizens that AI now demands. In my redesign of IST 140, I restructured lab sessions around an interactive learning cycle: a 10-minute concept introduction, live coding demonstrations where I deliberately introduce bugs to model debugging as a core skill, paired exercises of progressive difficulty, and peer-led code reviews where students explain solutions to the class. Engagement rose from 45% to 85% across semesters, with a 40% increase in students reporting confidence in their programming abilities. The pedagogy is grounded in cognitive science: immediate application strengthens memory consolidation, peer collaboration exposes students to alternative problem-solving strategies, and public review develops both technical communication and metacognitive awareness.
Adaptive Instruction
Students arrive with different cognitive approaches, languages, and prior preparation. As someone who navigated academic pathways from Grenada to an R1 doctoral program while managing a learning disability, I am committed to inclusive pedagogy that recognizes diverse learning styles. In COMP 420 (Database Systems), I present complex concepts like normalization through three parallel modalities: visual entity-relationship diagrams with color-coded relationships for visual learners, physical index-card exercises for kinesthetic learners, and formal mathematical notation for analytical learners. Students who initially struggled with one modality showed an average 23 percentage-point improvement when offered alternative presentations, while students who already understood the concept reported that alternative explanations deepened their understanding. Continuous formative assessment through in-class polls allows me to dynamically adjust pacing and depth in real time.
Authentic Assessment
Assignments in my courses mirror professional practice. In IST 402 (Applied Generative AI), students complete a semester-long project structured across four phases modeled on an industry workflow:
- Problem Definition and Dataset Curation. Students identify a real-world problem domain (past examples include mental health chatbots, multilingual customer service, and accessibility tools) and curate appropriate datasets via Hugging Face.
- Prompt Engineering and Baseline Development. Students implement zero-shot and few-shot baselines, document experiments in shared Jupyter notebooks, and present preliminary results in lightning talks.
- Advanced Techniques. Students explore multimodal LLMs, custom embedding models, and fine-tuned classification and generation models, conducting ablation studies to understand which components drive performance.
- Deployment and Documentation. Students deploy applications via Streamlit, HuggingFace Spaces, or agentic frameworks, write comprehensive documentation, and present in a poster session attended by faculty, peers, and invited industry partners.
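The baseline phase above centers on prompt assembly. A minimal sketch of the kind of few-shot scaffold students build early in the project might look like the following (the helper name and example data here are hypothetical, for illustration; they are not actual course materials):

```python
# Illustrative few-shot baseline: format an instruction, a handful of
# labeled demonstrations, and the query into a single prompt string,
# ready to send to an LLM API or local model.

def build_few_shot_prompt(task_instruction, examples, query):
    """Assemble a few-shot prompt: instruction, labeled demos, then the query."""
    lines = [task_instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")  # blank line between demonstrations
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model completes from here
    return "\n".join(lines)

# Hypothetical demonstrations for a mental-health-adjacent classifier.
demos = [
    ("I can't stop worrying about everything.", "concern"),
    ("Feeling great after my morning run!", "positive"),
]

prompt = build_few_shot_prompt(
    "Classify the tone of each message as 'concern' or 'positive'.",
    demos,
    "Lately I just feel exhausted all the time.",
)
print(prompt)
```

Swapping `demos` for an empty list yields the zero-shot variant, which is one way the two baselines can share a single code path.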
Several past projects have evolved into deployed applications and submitted research papers. As one student wrote: “This felt like building a real product, not just completing an assignment.”
The T·I·C Framework
My approach to AI in the classroom is captured in the T·I·C framework, which I developed and have presented in faculty workshops on responsible generative AI:
- Transparency. Be explicit, in writing, about when, why, and how AI is or is not permitted. Vague policies create unequal outcomes.
- Intentionality. Use AI because it serves a specific pedagogical goal, not because it is convenient. The internal test: what would be lost if students did this without AI?
- Criticality. Treat AI as a starting point, not an authority. Surface biases. Question hallucinations. Acknowledge what disciplinary expertise contributes that the model cannot.
The framework is short by design, portable across disciplines, and durable across model generations.
Research-Informed Pedagogy
My research on multilingual AI safety enriches my teaching directly. Students analyze real datasets from my work on harmful content detection across 70+ languages, examine where AI systems fail, and build defensive mechanisms using examples from my Fighting Fire with Fire (F3) framework. They experience firsthand the cat-and-mouse dynamics between attackers and defenders, transforming abstract concepts into tangible challenges with clear societal stakes.
Course Portfolio at CU Boulder
Through SEAL and the Department of Information Science, I am committed to teaching across the undergraduate-to-doctoral pipeline:
- Foundations: Applied Generative AI; Natural Language Processing; Responsible AI.
- Advanced topics: Multilingual NLP; AI Safety and Adversarial Machine Learning; Fair Machine Learning; AI for Social Good.
- Doctoral seminars: Trustworthy AI Across Linguistic Diversity; Research Methods for AI Equity.
Each course integrates research directly into instruction. The classroom and the lab are not separate domains; they are continuous.
Vision
Looking forward, I aim to develop courses at the intersection of AI and social impact, integrate community-engaged learning through partnerships with non-governmental organizations and newsroom collaborators, and contribute to pedagogical scholarship on inclusive practices in computing education. Just as my research seeks to democratize AI capabilities across linguistic boundaries, my teaching seeks to democratize computing education across different backgrounds, abilities, and learning styles.
What Students Take Away
By the end of a course with me, students should be able to do three things they could not do before: build a working AI system, evaluate where and why it fails, and articulate the human stakes of those failures. The first is technical. The second is methodological. The third is moral. All three are required for the field, and all three are within reach of any student willing to engage seriously with the work.
Last updated: May 2026

I am a PhD candidate in Informatics in the College of IST at Penn State University, where I conduct research in the PIKE Research Lab under the guidance of Dr. Dongwon Lee. Starting August 2026, I will join the Department of Information Science at the College of Media, Communication and Information (CMCI), University of Colorado Boulder, as a Tenure-Track Assistant Professor and founding Director of the Secure and Ethical AI Lab (SEAL). My research advances trustworthy and equitable AI for the world’s languages and communities — spanning multilingual NLP, low-resource and dialectal language technology, AI safety, and information integrity, with work extending across 70+ languages. I have authored 14+ peer-reviewed papers with 315+ citations in premier venues including ACL, EMNLP, NAACL, ICML, and IEEE.
My doctoral research focuses on bridging the digital language divide through transfer learning, classification (NLU), generation (NLG), adversarial attacks, and end-to-end AI pipelines built on RAG and agentic workflows for combating multilingual threats. Drawing on my Grenadian background and knowledge of local Creole languages, I bring a global perspective to AI challenges, working to democratize state-of-the-art AI capabilities for underserved linguistic communities worldwide. My mission is to develop robust multilingual, multimodal systems and mitigate evolving security vulnerabilities while enhancing access to human language technology.
As an NSF LinDiv Fellow, I conduct transdisciplinary research advancing human-AI language interaction for social good. I actively mentor 5+ research interns and teach Applied Generative AI courses. Through industry experience at Lawrence Livermore National Lab, Interaction LLC, and Coalfire, I bridge academic research with practical applications in combating evolving security threats and enhancing global AI accessibility. I see multilingualism and interdisciplinary collaboration as competitive advantages, not communication challenges. Beyond research, I stay active through dance, fitness, martial arts, and community service.