I am a Junior Fellow of the Harvard Society of Fellows, and an affiliate of the Harvard Center for Brain Science. My research interests lie in some neighborhood of the intersection between theoretical neuroscience, statistical physics, and machine learning. For more information about my work, see some of my recent publications, lecture notes, and teaching experience.
I earned my doctorate in physics at Harvard, under the supervision of Cengiz Pehlevan. My thesis considered a few topics in the statistical mechanics of neural networks. Before that, I did my undergraduate work in physics at Yale, where I worked with Damon Clark on visual motion detection and locomotor coordination in fruit flies.
A postdoctoral position is available in my group at the Center for Brain Science; see the official posting here.
A word of warning: I am in the process of updating this website to use a new theme that is compatible with GitHub Pages’ default Jekyll version, as the bay theme I used previously now requires a newer version of Jekyll than GitHub Pages supports.
[November 2024] In response to a question from Binxu Wang, I’ve written an addendum to my lecture notes on the replica trick for the extreme eigenvalues of a Wishart matrix, which describes the even simpler computation of the log-determinant. Eventually I’ll get around to combining the two computations into one note, but for now this addendum is available here.
[October 2024] I have been awarded the 2024 American Physical Society Dissertation Award in Statistical and Nonlinear Physics.
[October 2024] Two of our papers were featured in the 2024 Machine Learning Special Issue of JSTAT!
[October 2024] I have received a 2024 NIH Director’s Early Independence Award. See also the blurb in the Harvard Gazette.
[October 2024] I gave two guest lectures on high-dimensional ridge regression for Harvard AM 226, roughly based on our paper with Alex Atanasov. My lecture notes are available here.
[September 2024] Our paper with Billy Qian on mechanistic mismatches in data-constrained models for integrator circuits was accepted to NeurIPS 2024.
[4 April 2024] I’ve defended my thesis! It is available here.
[March 2024] Three posters on our work were presented at Cosyne 2024! I also had the pleasure of speaking at the “Comparative approaches to advance neural circuits and computation” workshop organized by Anqi Zhang and Siddharth Jayakumar.
[December 2023] I’m presenting three papers at NeurIPS 2023! Check out our work with Paul Masset on models for fast inference in the olfactory bulb, our work with Hamza Chaudhry and Dima Krotov on associative memory for long sequences, and our work on learning curves for models with many layers of structured features. Also check out Hamza’s talk at the AMHN workshop.
See also Google Scholar, ResearchGate, Semantic Scholar, or dblp.
My ORCID is 0000-0002-4060-1738.
A model for place field reorganization during reward maximization
Kumar, M. G., Bordelon, B., Zavatone-Veth, J. A., and Pehlevan, C.
[bioRxiv]
Risk and cross validation in ridge regression with correlated samples
Atanasov, A. B.*, Zavatone-Veth, J. A.*, and Pehlevan, C. (*equal contributions)
[arXiv]
Spectral regularization for adversarially-robust representation learning
Yang, S., Zavatone-Veth, J. A., and Pehlevan, C.
[arXiv] [code]
Asymptotic theory of in-context learning by linear attention
Lu, Y. M., Letey, M. I., Zavatone-Veth, J. A.*, Maiti, A.*, and Pehlevan, C.
[arXiv] [code]
Scaling and renormalization in high-dimensional regression
Atanasov, A. B., Zavatone-Veth, J. A., and Pehlevan, C.
[arXiv] [code]
Neural networks learn to magnify areas near decision boundaries
Zavatone-Veth, J. A., Yang, S., Rubinfien, J. A., and Pehlevan, C.
[arXiv] [code]
Long Sequence Hopfield Memory
Chaudhry, H. T., Zavatone-Veth, J. A., Krotov, D., and Pehlevan, C.
Journal of Statistical Mechanics: Theory and Experiment 104024. (2024)
[link]
Invited contribution to the Machine Learning 2024 special issue; updated version of our NeurIPS 2023 paper.
Learning curves for deep structured Gaussian feature models
Zavatone-Veth, J. A., and Pehlevan, C.
Journal of Statistical Mechanics: Theory and Experiment 104022. (2024)
[link]
Invited contribution to the Machine Learning 2024 special issue; updated version of our NeurIPS 2023 paper.
Adversarially-robust representation learning through spectral regularization of features
Yang, S., Zavatone-Veth, J. A., and Pehlevan, C.
Workshop on Symmetry and Geometry in Neural Representations, NeurIPS (2024).
In-context learning by linear attention: exact asymptotics and experiments
Lu, Y. M., Letey, M. I., Zavatone-Veth, J. A.*, Maiti, A.*, and Pehlevan, C.
Workshop on Mathematics of Modern Machine Learning, NeurIPS (2024).
[OpenReview]
Partial observation can induce mechanistic mismatches in data-constrained RNNs
Qian, W., Zavatone-Veth, J. A., Ruben, B. S., and Pehlevan, C.
Workshop on NeuroAI, NeurIPS. (2024)
[OpenReview]
Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics
Qian, W., Zavatone-Veth, J. A., Ruben, B. S., and Pehlevan, C.
Advances in Neural Information Processing Systems (NeurIPS) 37. (2024)
[bioRxiv]
Statistical mechanics of Bayesian inference and learning in neural networks
Zavatone-Veth, J. A.
PhD dissertation, Department of Physics, Harvard University.
[ProQuest] [Harvard DASH]
Long Sequence Hopfield Memory
Chaudhry, H. T., Zavatone-Veth, J. A., Krotov, D., and Pehlevan, C.
Associative Memory and Hopfield Networks Workshop Oral, NeurIPS (2023)
[OpenReview]
Neural circuits for fast Poisson compressed sensing in the olfactory bulb
Zavatone-Veth, J. A.*, Masset, P.*, Tong, W. L., Zak, J. D., Murthy, V. N.#, and Pehlevan, C.# (*,#equal contributions)
Advances in Neural Information Processing Systems (NeurIPS) 36. (2023)
[link] [OpenReview] [bioRxiv] [code]
Long Sequence Hopfield Memory
Chaudhry, H. T., Zavatone-Veth, J. A., Krotov, D., and Pehlevan, C.
Advances in Neural Information Processing Systems (NeurIPS) 36. (2023)
[link] [OpenReview] [arXiv] [code]
Learning curves for deep structured Gaussian feature models
Zavatone-Veth, J. A., and Pehlevan, C.
Advances in Neural Information Processing Systems (NeurIPS) 36. (2023)
[link] [OpenReview] [arXiv]
Replica method for eigenvalues of real Wishart product matrices
Zavatone-Veth, J. A., and Pehlevan, C.
SciPost Physics Core 6 (2): 026. (2023)
[link] [arXiv]
Asymptotics of representation learning in finite Bayesian neural networks
Zavatone-Veth, J. A., Canatar, A., Ruben, B. S., and Pehlevan, C.
Journal of Statistical Mechanics: Theory and Experiment 114008. (2022)
[link]
Invited contribution to the Machine Learning 2022 special issue; updated version of our NeurIPS paper.
Contrasting random and learned features in deep Bayesian linear regression
Zavatone-Veth, J. A., Tong, W. L., and Pehlevan, C.
Machine Learning and the Physical Sciences Workshop, NeurIPS. (2022)
[link]
Training shapes the curvature of shallow neural network representations
Zavatone-Veth, J. A.*, Rubinfien, J. A.*, and Pehlevan, C. (*equal contributions)
Symmetry and Geometry in Neural Representations (NeurReps) Workshop, NeurIPS. (2022)
[OpenReview]
Natural gradient enables fast sampling in spiking neural networks
Masset, P.*, Zavatone-Veth, J. A.*, Connor, J. P., Murthy, V. N.#, and Pehlevan, C.# (*,#equal contributions)
Advances in Neural Information Processing Systems (NeurIPS) 35. (2022)
[link] [OpenReview] [bioRxiv] [code]
Excitatory and inhibitory neural dynamics jointly tune motion detection
Gonzalez-Suarez, A. D., Zavatone-Veth, J. A., Chen, J., Matulis, C., Badwan, B. A., and Clark, D. A.
Current Biology 32 (17): 3659–3675. (2022)
[link] [bioRxiv] [code]
Contrasting random and learned features in deep Bayesian linear regression
Zavatone-Veth, J. A., Tong, W. L., and Pehlevan, C.
Physical Review E 105 (6): 064118. (2022)
[link] [arXiv]
On neural network kernels and the storage capacity problem
Zavatone-Veth, J. A., and Pehlevan, C.
Neural Computation 34 (5): 1136–1142. (2022)
[link] [arXiv]
Parallel locomotor control strategies in mice and flies
Gonçalves, A. I.*, Zavatone-Veth, J. A.*, Carey, M. R.#, and Clark, D. A.# (*,#equal contributions)
Current Opinion in Neurobiology 73: 102516. (2022)
[link] [arXiv]
Drifting neuronal representations: Bug or feature?
Masset, P.*, Qin, S.*, and Zavatone-Veth, J. A.* (*equal contributions)
Biological Cybernetics 116: 253–266. (2022)
[link]
Depth induces scale-averaging in overparameterized linear Bayesian neural networks
Zavatone-Veth, J. A., and Pehlevan, C.
55th Asilomar Conference on Signals, Systems, and Computers. (2021)
[link] [arXiv]
Asymptotics of representation learning in finite Bayesian neural networks
Zavatone-Veth, J. A., Canatar, A., Ruben, B. S., and Pehlevan, C.
Advances in Neural Information Processing Systems (NeurIPS) 34. (2021)
[link] [OpenReview] [arXiv] [code]
Exact marginal prior distributions of finite Bayesian neural networks
Zavatone-Veth, J. A., and Pehlevan, C.
Advances in Neural Information Processing Systems (NeurIPS) 34 Spotlight. (2021)
[link] [OpenReview] [arXiv] [code]
Activation function dependence of the storage capacity of treelike neural networks
Zavatone-Veth, J. A., and Pehlevan, C.
Physical Review E (Letter) 103 (2): L020301. (2021)
[link] [arXiv]
Statistical structure of the trial-to-trial timing variability in synfire chains
Obeid, D., Zavatone-Veth, J. A., and Pehlevan, C.
Physical Review E 102 (5): 052406. (2020)
[link] [bioRxiv]
Spatiotemporally precise optogenetic activation of sensory neurons in freely walking Drosophila
DeAngelis, B. D.*, Zavatone-Veth, J. A.*, Gonzalez-Suarez, A. D., and Clark, D. A. (*equal contributions)
eLife 9: e54183. (2020)
[link] [code]
A minimal synaptic model for direction selective neurons in Drosophila
Zavatone-Veth, J. A., Badwan, B. A., and Clark, D. A.
Journal of Vision 20 (2): 2. (2020)
[link] [bioRxiv] [code]
Using slow frame rate imaging to extract fast receptive fields
Mano, O., Creamer, M. S., Matulis, C. A., Salazar-Gatzimas, E., Chen, J., Zavatone-Veth, J. A., and Clark, D. A.
Nature Communications 10: 4979. (2019)
[link] [code]
Dynamic nonlinearities enable direction opponency in Drosophila elementary motion detectors
Badwan, B. A., Creamer, M. S., Zavatone-Veth, J. A., and Clark, D. A.
Nature Neuroscience 22: 1318–1326. (2019)
[link] [code]
The manifold structure of limb coordination in walking Drosophila
DeAngelis, B. D.*, Zavatone-Veth, J. A.*, and Clark, D. A. (*equal contributions)
eLife 8: e46409. (2019)
[link] [erratum] [code]
Lecture notes on the replica trick for the log-determinant of a Wishart matrix
An addendum to my existing lecture notes on the extremal eigenvalues of a Wishart matrix, written in response to a question from Binxu Wang. When time allows, I’ll integrate these with my existing notes.
[pdf]
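For readers unfamiliar with the method, the standard identity underlying the replica trick (stated here in general form, not as a summary of the notes themselves) reduces the average of a logarithm to averages of integer moments, which for the log-determinant reads

```latex
\mathbb{E} \log \det W = \lim_{n \to 0} \frac{1}{n} \log \mathbb{E} \left[ (\det W)^{n} \right],
```

where one computes the right-hand side for integer $n$ and then analytically continues to $n \to 0$.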
Lecture notes on the inductive biases of high-dimensional ridge regression
Notes for two guest lectures on high-dimensional ridge regression given as part of Harvard AM 226 Fall 2024. Roughly based on our paper with Alex Atanasov.
[pdf]
Lecture notes on Gaussian process behavior in wide neural networks
Lecture notes introducing the NNGP correspondence using mean field theory, prepared for a tutorial I gave at the HHMI Janelia Junior Scientist Workshop on Theoretical Neuroscience in November 2022.
[pdf]
Lecture notes on the replica method for Wishart matrix eigenvalues
Lecture notes introducing the replica trick through the example of Wishart matrix eigenvalues, prepared for Harvard AM 226 Fall 2022.
[pdf]
Teaching fellow, Harvard Applied Mathematics 226: Theory of Neural Computation (Fall 2022)
Lead instructor: Cengiz Pehlevan
Received Certificate of Distinction in Teaching
Teaching fellow, Harvard Applied Mathematics 226: Neural Computation (Fall 2021)
Lead instructor: Cengiz Pehlevan
Received Certificate of Distinction in Teaching
Teaching fellow, Harvard GenEd 1125: Artificial & Natural Intelligence (Spring 2021)
Lead instructor: Venkatesh Murthy
Received Certificate of Distinction in Teaching
I can also be found on Bluesky or Mastodon. In official contexts I use my full name, so you’ll find me listed as Jacob Andreas Zavatone-Veth.
Powered by Jekyll and Minimal Light theme.