~/home/
We develop novel methodology and theory for reverse-engineering intelligence using tools from machine learning, high-dimensional statistics, and optimization.
Group research
Our research lies at the intersection of high-dimensional statistics, optimization, and time-series analysis, with applications to neuroscience and AI. A central focus of our work is mechanistic interpretability: reverse-engineering how both biological and artificial neural systems process information, form representations, and generate novel outputs. We develop theoretical frameworks and computational tools to decode the inner workings of neural networks and brains, from uncovering sparse conceptual representations in vision and language models to revealing the geometric principles underlying creative generation in diffusion models.

This research aims to explain artificial neural networks as inference algorithms in biologically plausible, generative statistical models, enabling the principled design of model-based, explainable AI systems and offering novel insights into biological cognition. Our approach bridges neuroscience, machine learning, and optimization theory to build interpretable models that not only explain how intelligence emerges from computation but also enable us to design more transparent and trustworthy AI systems. Finally, designing architectures tailored to generative models lets us leverage GPUs and modern computational infrastructure to solve inference and parameter-estimation problems in neuroscience and beyond.
You can find sample projects related to the three areas of research we have focused on in recent years here: