akarchmer0 at gmail dot com - photo - C.V. -
google scholar - a piece of art I made

I'm a postdoc at Harvard University, where I participate in the SAFR AI lab led by Seth Neel and Salil Vadhan. I obtained my Ph.D. from Boston University under the supervision of Ran Canetti in spring of 2024.

I'm interested in machine learning, AI, and theoretical computer science.

My current goal is to help develop the theory of how machine learning really works: can we predict how ML models will perform and behave before we train (or otherwise do surgery on) them? For instance, some of my recent work contributes to a better understanding of when and why multimodal data is useful for ML. I'm also interested in applied research in ML/AI *robustness*, such as methods for data attribution and model editing, which I'll work on at Harvard.

My Ph.D. thesis was in the area of meta-complexity; in particular, it established new relationships between the complexity theory of circuit lower bounds and computational learning theory.

Piet Mondrian, The Red Tree, 1908

Refereed Publications

(ab) indicates author names are listed alphabetically.

- Karchmer, Ari. On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning. ICML 2024. "Spotlight" paper (3.5% acceptance rate). ArXiv preprint.
- Karchmer, Ari. Agnostic Membership Query Learning with Nontrivial Savings: New Results and Techniques. ALT 2024. ArXiv preprint.
- Karchmer, Ari. Distributional PAC-Learning from Nisan's Natural Proofs. ITCS 2024. Winner of best student paper award. Invited for publication at TheoretiCS. ArXiv preprint.
- Karchmer, Ari. Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses. SaTML 2023. IACR ePrint.
- Canetti, Ran and Karchmer, Ari. Covert Learning: How to Learn with an Untrusted Intermediary. TCC 2021. (ab) Invited to Journal of Cryptology. IACR ePrint.

Select Talks

- "Undetectable Model Stealing with Covert Learning" - Harvard University SAFR AI Lab, Cambridge MA, April '24 Slides
- "Cryptography and Complexity Theory in the Design and Analysis of ML" - Vector Institute, Toronto CA, April '24 Slides
- "Learning from Nisan's Natural Proofs" - MIT CIS Seminar, March '24 Slides
- "Distributional PAC-learning from Nisan's Natural Proofs" - ITCS, Simons Institute, Jan '24 Video
- "Undetectable Model Stealing and more with Covert Learning" - Google Research MTV, algorithms seminar, Jan '24 Slides
- "New Approaches to Heuristic PAC-Learning vs. PRFs" - Lower Bounds, Learning, and Average-case Complexity Workshop at Simons Institute, Feb '23 Simons talk
- "The Limits of Provable Security Against Model Extraction" - Privacy Preserving Machine Learning Workshop at Crypto, Aug '22 PPML talk
- "Covert Learning: How to Learn with an Untrusted Intermediary" - Charles River Crypto Day at MIT, Nov '21 Crypto day talk

Teaching Fellowships

- "Responsible AI, Law, Ethics & Society" with Shlomi Hod et al. in Spring 2022
- "Network Security" with Professors Ran Canetti and Sharon Goldberg in Fall 2019
- "Algebraic Algorithms" with Professor Leonid Levin in Fall 2018