ECE Seminar - Alex Olshevsky, Boston University
On Reinforcement Learning and Gradient Descent
Alexander Olshevsky, Boston University
Thursday, January 23, 3:00pm
DL 514 or Zoom (https://yale.zoom.us/j/93561838732)
Hosted by: Professor Steve Morse
Abstract:
Despite the remarkable successes of reinforcement learning, its foundational methods remain only partially understood from a theoretical perspective. In this talk, we examine popular reinforcement learning techniques, including temporal difference learning, actor-critic algorithms, and their adaptations with neural network approximations. By reframing these methods through the lens of a novel gradient-based approach called gradient splitting, we gain deeper insights into their mechanisms. This perspective not only enhances our understanding of their operation but also improves convergence rates, inspires new algorithmic developments, and clarifies why these methods integrate effectively with neural networks.
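As background for the terminology above, here is a minimal sketch of the tabular TD(0) update, one of the temporal difference methods the abstract mentions. The toy random-walk environment, step size, and discount factor are illustrative assumptions, and this sketch does not implement the gradient-splitting viewpoint presented in the talk.

```python
# Illustrative tabular TD(0) on an assumed 5-state random walk.
import numpy as np

rng = np.random.default_rng(0)

n_states = 5          # states 0..4; episodes start in the middle
gamma = 0.95          # assumed discount factor
alpha = 0.1           # assumed step size
V = np.zeros(n_states)

def step(s):
    """Random walk: move left or right; reward 1 only when exiting on the right."""
    s_next = s + rng.choice([-1, 1])
    if s_next < 0:
        return None, 0.0      # terminate on the left with reward 0
    if s_next >= n_states:
        return None, 1.0      # terminate on the right with reward 1
    return s_next, 0.0

for episode in range(5000):
    s = n_states // 2
    while s is not None:
        s_next, r = step(s)
        v_next = 0.0 if s_next is None else V[s_next]
        # TD(0): nudge V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * v_next - V[s])
        s = s_next

print(np.round(V, 3))  # estimated values increase from left to right
```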
Bio:
Alex Olshevsky received the B.S. degree in applied mathematics and the B.S. degree in electrical engineering from the Georgia Institute of Technology, Atlanta, GA, USA, both in 2004, and the M.S. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2006 and 2010, respectively. He was a postdoctoral scholar at Princeton University from 2010 to 2012, and an Assistant Professor at the University of Illinois at Urbana-Champaign from 2012 to 2016. He is currently an Associate Professor with the ECE department at Boston University.
Dr. Olshevsky is a recipient of the NSF CAREER Award, the Air Force Young Investigator Award, the INFORMS Computing Society Prize for the best paper at the interface of operations research and computer science, a SIAM award for the annual paper from the SIAM Journal on Control and Optimization chosen for reprinting in SIAM Review, and an IMIA award for the best paper in clinical medical informatics in 2019.