Welcome!

News

I’ll be joining the Machine Learning Department at Carnegie Mellon University in January 2025! Accordingly, I am actively recruiting students applying to PhD programs this year, to begin in Fall 2025. See below for my research interests, and for what I’m looking for in prospective students.

Research Interests

My research focuses on learning in sequential, interactive, and dynamic settings. This includes everything from reinforcement learning, to prediction in control systems, to robotic agents. At the moment, I’m very excited about how large AI models might change how we think about these problems. For example, how do generative model architectures like diffusion models enable robots to learn general behaviors? How can we develop new deep learning methods for world modeling, video prediction, decision making, and more? And might certain types of deep learning models be able to leverage diverse training experience to explore their environments?

My current interests span the gamut from mathematical to practical. But all of my research is informed by years of thinking like a theorist; that work ranged broadly across adaptive sampling, multi-armed bandits, the complexity of convex and non-convex optimization, reinforcement learning, learning in linear and nonlinear dynamical systems, and fairness in machine learning.

Student Recruiting

The best way to work with me is to apply to a Master’s or PhD program in the School of Computer Science at Carnegie Mellon; I will recruit mostly from the Machine Learning Department and Robotics Institute PhD applications. I’m particularly excited about students who love to think deeply and hatch truly new (sometimes crazy) ideas, and am looking for candidates across the theory-to-practice spectrum. An ideal applicant should come in with at least one core strength (e.g., mathematical problem solving, software engineering, deep learning research), and be eager to develop other abilities as the PhD continues. I strongly encourage students from a diverse set of backgrounds to apply, including racial, gender, sexual, and religious identities, political beliefs, socioeconomic and disability statuses, and non-traditional educational/professional paths. Unfortunately, at this time I am not able to take on research interns who are in high school, or who are undergraduates/Master’s students at institutions other than CMU. If you belong to the latter category, please apply to a CMU Master’s/PhD!

Selected Publications

Diffusion Policy Policy Optimization

Allen Z. Ren, Justin Lidard, Lars L. Ankile, Anthony Simeonov, Pulkit Agrawal, Anirudha Majumdar, Benjamin Burchfiel, Hongkai Dai, Max Simchowitz. arXiv preprint, 2024.

Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

Boyuan Chen, Diego Marti Monso, Yilun Du, Max Simchowitz, Russ Tedrake, Vincent Sitzmann. NeurIPS, 2024.

Faster Algorithms for Growing Collision-Free Convex Polytopes in Robot Configuration Space

Peter Werner*, Thomas Cohn, Rebecca H. Jiang, Tim Niklas Seyde, Max Simchowitz, Russ Tedrake, Daniela Rus. ISRR, 2024.

Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior

(α-β, alphabetical order) Adam Block, Ali Jadbabaie, Daniel Pfrommer, Max Simchowitz, Russ Tedrake. NeurIPS, 2023.

Do Differentiable Simulators Give Better Policy Gradients?

H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Russ Tedrake. ICML, 2022. Outstanding Paper Award.

Naive Exploration is Optimal for Online LQR

Max Simchowitz, Dylan Foster. ICML, 2020.

Improper Learning for Nonstochastic Control

Max Simchowitz, Karan Singh, Elad Hazan. COLT, 2020.

Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification

Max Simchowitz, Horia Mania, Stephen Tu, Benjamin Recht, Michael I. Jordan. COLT, 2018.

Selected Awards

Best Paper Finalist. Constrained Bimanual Planning with Analytic Inverse Kinematics. International Conference on Robotics and Automation (ICRA), 2024.

Best Paper Finalist. Constrained Bimanual Planning with Analytic Inverse Kinematics. Robotics: Science and Systems (RSS), 2023.

Outstanding Paper Award. Do Differentiable Simulators Give Better Policy Gradients? International Conference on Machine Learning (ICML), 2022.

Best Paper Award. Delayed Impact of Fair Machine Learning. International Conference on Machine Learning (ICML), 2018.

Tong Leong Lim Pre-Doctoral Prize. University of California, Berkeley, 2018.

George B. Covington Prize for Excellence in Mathematics. Princeton University, 2015.

Bio

Hi, I’m Max. I started my academic journey as a math major at Princeton University, where I was fortunate enough to do research with Sanjeev Arora and David Blei (who taught at Princeton at the time). I went on to do a PhD focusing on theoretical questions in Machine Learning in the EECS department at UC Berkeley, co-advised by Ben Recht and Michael Jordan. I also worked with, and was closely mentored by, Elad Hazan at Princeton and Kevin Jamieson, now at the University of Washington.

I spent a good chunk of my PhD thinking about mathematical questions governing how, and under what conditions, one can learn to predict and control dynamical systems. But as the PhD progressed, I wanted to understand the relevant challenges in practical applications, especially as many larger AI models were starting to overcome hurdles once thought insurmountable. I was lucky to be able to postdoc in Russ Tedrake’s Robot Locomotion Group in the EECS Department at MIT, which opened me up to my ongoing curiosity about robot learning (both practical and mathematical).

A lot of folks in this field like spending time outdoors, and I respect that. But I’m more of an inside person: I like reading and listening to music, I play saxophone and (pretend to) play jazz piano, and I love a good dinner party. Over the pandemic I got addicted to car YouTube; not sure if I have recovered (?).