I'm a Ph.D. candidate in theoretical computer science at Boston University, under the excellent supervision of Ran Canetti. I am currently looking for full-time opportunities in TCS and AI/ML research (postdoc and industry) following my graduation in Summer 2024.
I research algorithmic and complexity-theoretic aspects of machine learning.
1) I aim to develop theoretical justifications for empirical phenomena observed in the practice of ML/AI. Currently, I'm thinking about how to formalize the advantages of multimodal vs. unimodal data in machine learning. I'm open to new projects in ML theory; feel free to reach out!
2) I also study the theory of meta-complexity as a way of making formal connections between cryptography, complexity, and learning. In the spring semester of 2023, I visited the Simons Institute for the Theory of Computing at UC Berkeley, where I participated in the Meta-Complexity program. Since then, I have been very interested in developing our understanding of what kinds of algorithms are implied by natural circuit lower bounds. See my research presented at ITCS and ALT 2024 for more on this.
I am also interested in privacy, security, and responsibility issues in machine learning and AI.
1) In the past, I designed algorithms for conducting secret experiments. This work has consequences for the theory and practice of model stealing attacks, as well as for information security in data curation and annotation. I recently wrote a series of blog posts that help explain some of the theoretical difficulty of defending against model stealing attacks, and how that relates to possible challenges in abuse prevention in LLM chatbots like ChatGPT. This series is based in part on some of my research published at TCC '21 and SaTML '23, and can be found at the links below.
Model Extraction, LLM Abuse, Steganography, and Covert Learning part 1 part 2 part 3
Publications and manuscripts
Teaching fellowships
Select talks