
Dipendra Misra

Senior Researcher

About

I am a machine learning researcher specializing in interactive learning (e.g., reinforcement learning), natural language understanding, and representation learning. My main research agenda is to develop machine learning agents that can interact with the world using actions and natural language, and that can solve tasks using reward signals or natural language feedback.

There are three main threads to this agenda:

  • Reinforcement Learning (Algorithm): A reinforcement learning agent should be able to explore robustly and plan across a wide range of tasks. My focus here is on approaches that are both practical and provably efficient. My representative work on this thread includes a line of recent RL algorithms for problems with complex observations that are provably sample-efficient and computationally efficient: the Homer algorithm (ICML 2020), the RichID algorithm (NeurIPS 2020), the FactoRL algorithm (ICLR 2021), and the PPE algorithm (ICLR 2022).
  • Learning from Text Feedback and Interactions (Signal): Agents that can interact with the world via expressive mediums like natural language can unlock many real-world applications. I am interested in developing agents that can understand and execute instructions expressed in natural language, and that can also be trained using such expressive feedback. Representative work on this thread includes the EMNLP 2017, EMNLP 2018, CoRL 2018, and CVPR 2019 papers on developing agents that follow natural language instructions, and our ICML 2021 paper that trains these agents using natural language alone.
  • Representation Learning (Model): Almost all machine learning systems learn some form of representation of the world. Once the right representation is learned, a reinforcement learning agent can act on it to explore the world, or an agent may use it to follow instructions. I am interested in developing both the theory and practice of representation learning methods. Representative work includes our recent papers at AISTATS 2022 and ICML 2022 on understanding the behavior of contrastive learning; a short illustrative sketch follows this list.
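
To make the contrastive learning thread concrete, below is a minimal sketch of a generic InfoNCE-style contrastive objective, assuming PyTorch. The function name, temperature value, and random-embedding usage are illustrative assumptions; this is not the specific formulation analyzed in the AISTATS 2022 or ICML 2022 papers.

```python
# A minimal sketch of a contrastive (InfoNCE-style) objective, assuming PyTorch.
# Illustrative only; not the specific formulation studied in the papers above.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Pull each anchor toward its positive; push it away from in-batch negatives.

    anchors, positives: (batch, dim) embeddings of two views of the same inputs.
    """
    a = F.normalize(anchors, dim=1)   # unit-normalize so dot products are cosine similarities
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature  # (batch, batch) similarity matrix
    # Row i's matching pair sits on the diagonal; other columns act as negatives.
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Hypothetical usage: random embeddings stand in for an encoder's output.
anchors = torch.randn(32, 128)
positives = anchors + 0.1 * torch.randn(32, 128)  # lightly perturbed "views"
print(info_nce_loss(anchors, positives).item())
```

The design intuition is that minimizing this loss makes embeddings of related inputs similar and unrelated inputs dissimilar, which is one way such learned representations can then support downstream exploration or instruction following.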

Beyond my main agenda, I am also interested in a diverse range of topics, including language and vision problems, semantic parsing, statistical learning theory, and computational social science.