Dr Jean Honorio

Modern machine learning (ML) problems are combinatorial and non-convex, especially those involving latent variables, for which theoretical guarantees are quite limited. Furthermore, while quantitative guarantees (e.g., small test error) have been studied, qualitative guarantees (e.g., outlier robustness) are mostly lacking. My long-term research goal is to uncover the general foundations of ML and optimization that drive the empirical success across many specific combinatorial and non-convex ML problems. I aim to develop a set of optimization-theoretic frameworks and tools to bridge the aforementioned gaps, furthering our understanding of continuous (possibly non-convex) relaxations of combinatorial problems, as well as our knowledge of non-convexity.

My aim is to develop correct, computationally efficient and statistically efficient algorithms for high-dimensional ML problems. My research group has produced breakthroughs not only on classical worst-case NP-hard problems, such as learning and inference in structured prediction, community detection and learning Bayesian networks, but also in areas of recent interest such as fairness, meta-learning, federated learning and robustness.

Prior to joining the University of Melbourne in 2024, I was an Assistant Professor in the Computer Science Department at Purdue University, with a courtesy appointment in the Statistics Department. Before Purdue, I was a postdoctoral associate at MIT, working with Tommi Jaakkola. My Erdős number is 3. My work has been partially funded by the National Science Foundation (NSF). I am an editorial board reviewer for JMLR, and have served as an area chair for NeurIPS and ICML, a senior PC member for AAAI and IJCAI, and a PC member for NeurIPS, ICML and AISTATS, among other conferences and journals.
