Dr. Zachary Lipton
Assistant Professor of Operations Research and Machine Learning, Tepper School of Business
I am an Assistant Professor at Carnegie Mellon University (CMU), jointly appointed in the Tepper School of Business and the Machine Learning Department. Additionally, I am affiliated with the Heinz School of Public Policy. My research spans core ML methods and theory, their applications in healthcare and natural language processing, and critical concerns about both the mode of inquiry itself and the impact of the resulting technology on social systems. I completed my PhD at the loveliest of universities (in UCSD's Artificial Intelligence Group), and if I had a time machine, I would go back, take two years longer to graduate, and actually learn to surf.
I run the Approximately Correct Machine Intelligence Lab, a group of wonderful students whose creativity and talent are the primary reasons why I have not yet moved to a small island in the Aegean Sea, where I would tend a flock of goats, slowly acquire the centuries-old craft of distilling spirits from the local herbs, and devote the rest of my life to writing third-rate science fiction novels. We are especially interested in (i) building robust systems that can cope with a changing world, whether due to natural distribution shift or to the strategic manipulations of those subject to automated decisions; (ii) understanding the social impacts of machine learning in a philosophically coherent way; (iii) the intersection of representation learning and causality; and (iv) leveraging ML to address impactful questions in clinical medicine.
I value clear, understandable scientific prose, and to this end I have authored or co-authored two reviews of the literature (on RNNs and Differential Privacy) and, more recently, an interactive book, Dive into Deep Learning, which teaches deep learning through exposition, math, and code in a fully interactive textbook written in Jupyter and automatically compiled to HTML and PDF (forthcoming from Cambridge University Press). In Fall 2016, I launched Approximately Correct, a blog aimed at bridging technical and social perspectives on machine learning. We have had some success addressing misconceptions about AI, both in the broader discourse and within the research community, but the problem has only intensified.