research
How large language models interact with human language — the patterns they absorb from training and how those patterns shape their outputs.
Human language is messy, socially marked, and full of patterns we don’t consciously intend. When you train a model on it, what comes out the other side? I study the collision between human linguistic variation and AI systems — how language shapes AI outputs, how AI shapes human communication, and how the two are increasingly shaping each other.
AI and Human Communication
How does the language people use affect the AI assistance they receive? I study how sociolinguistic variation in prompts systematically shapes LLM outputs, and what this means for who benefits from AI-powered tools. I am also interested in the broader question of co-adaptation: how humans change their language and behavior in response to AI systems, and how AI systems in turn are shaped by the humans who use them.
AI Behavior
LLMs are trained on human-generated text, and they absorb more from that training than we might expect. I investigate the implicit patterns LLMs pick up from language — and how those patterns surface when models are asked to perform tasks with no single correct answer. See How Random is Random? Evaluating the Randomness and Humanness of LLMs’ Coin Flips (arXiv 2024).
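As a flavor of what "evaluating randomness" can look like in practice, here is a minimal illustrative sketch (my own simplification, not the paper's actual methodology): two statistics that distinguish a fair coin from human-like flipping. A fair coin produces roughly 50% heads and alternates between consecutive flips about half the time; people, and plausibly models trained on human text, tend to over-alternate and avoid long runs.

```python
# Illustrative sketch only: simple statistics for judging how "random"
# a sequence of coin flips looks. Not the procedure from the paper.

def heads_rate(flips):
    """Fraction of flips that came up heads (expected ~0.5 for a fair coin)."""
    return sum(f == "H" for f in flips) / len(flips)

def alternation_rate(flips):
    """Fraction of consecutive pairs that differ (expected ~0.5 for a fair coin)."""
    switches = sum(a != b for a, b in zip(flips, flips[1:]))
    return switches / (len(flips) - 1)

if __name__ == "__main__":
    # In practice the sequence would be collected by repeatedly prompting an
    # LLM to flip a fair coin; here it is hard-coded for illustration.
    flips = list("HTHTHHTHTTHTHTHHTHTH")
    print(f"heads rate:       {heads_rate(flips):.2f}")   # 0.55
    print(f"alternation rate: {alternation_rate(flips):.2f}")  # 0.84, well above 0.5
```

An alternation rate far above 0.5, as in this toy sequence, is the kind of signature that separates human-like "randomness" from a genuinely fair process.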
Humans Online
My PhD work at Cornell (advised by Jon Kleinberg) examined how people use language and form communities online. I studied the conventions governing word ordering in binomial expressions (bread and butter, not butter and bread) at web scale, developed null models for network analysis, and investigated how geographic and topical structure organizes online communities. See the publications page for details.
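To make the binomial-ordering idea concrete, here is a small hypothetical sketch (not the pipeline from the original study) that measures how lopsided the ordering of a pair is by counting both orderings of "A and B" in a corpus.

```python
# Illustrative sketch only: how often does "bread and butter" beat
# "butter and bread" in a given text? The real work was done at web scale.
import re

def ordering_preference(corpus, a, b):
    """Return the fraction of occurrences appearing in the 'a and b' order."""
    ab = len(re.findall(rf"\b{a} and {b}\b", corpus, flags=re.IGNORECASE))
    ba = len(re.findall(rf"\b{b} and {a}\b", corpus, flags=re.IGNORECASE))
    total = ab + ba
    return ab / total if total else None

if __name__ == "__main__":
    corpus = ("They lived on bread and butter. Bread and butter issues "
              "dominated the debate, though one pamphlet said butter and bread.")
    print(ordering_preference(corpus, "bread", "butter"))  # 2 of 3 mentions -> ~0.67
```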