CSE Colloquium: Towards Trustworthy Decision-Making and AI: Explainability and Safety

Zoom Information: Join from PC, Mac, Linux, iOS or Android: https://psu.zoom.us/j/94549451017?pwd=RUpGZDRpVHJGUGdGQm9nYklnT3U1QT09 Password: 767073 

or iPhone one-tap (US Toll): +16468769923,94549451017# or +13017158592,94549451017# 

or Telephone (US Toll): +1 646 876 9923, +1 301 715 8592, +1 312 626 6799, +1 669 900 6833, +1 253 215 8782, +1 346 248 7799
Meeting ID: 945 4945 1017
Password: 767073
International numbers available: https://psu.zoom.us/u/an7c23NZ8

ABSTRACT: In this talk, I will present my recent work toward trustworthy artificial intelligence, particularly trustworthy decision-making. Many companies are now building self-driving vehicles and medical robots, and the development of advanced autonomous systems is already a billion-dollar industry. These new technologies offer oversight, advanced automation, and autonomous instruments, and they adapt to changing situations, knowledge, and constraints. However, introducing new technologies into our technical and social infrastructures has profound implications and requires establishing confidence in their behavior to avoid potential harm. The effectiveness and broader acceptance of autonomous smart systems therefore rely on these systems' ability to explain their decisions. Building trust in artificial intelligence (AI) systems is a critical requirement in human-robot interaction and is essential for realizing the full spectrum of AI's societal and industrial benefits.

This talk identifies two critical factors for establishing the trustworthiness of autonomous systems: explainability and safety. First, to achieve human-level interpretability, I propose new algorithms that combine symbolic AI and data-driven machine learning to enable real-world applications. In particular, I investigate an explainable and data-efficient hierarchical sequential decision-making framework based on symbolic planning and deep reinforcement learning, termed Symbolic Deep Reinforcement Learning (SDRL; IJCAI'2018, AAAI'2019, ICLP'2019). This approach achieves state-of-the-art results on Montezuma's Revenge, one of the most challenging Atari games, and outperforms other methods by a large margin. Second, to enhance safety and risk awareness in decision-making, I propose the Mean-Variance Policy search (MVP; NeurIPS'2018, JAIR'2018, ICML'2020, AAAI'2021) family of algorithms. Instead of merely maximizing the expected cumulative reward in sequential decision-making, MVP trades off the mean and variance of the return by exploiting Legendre-Fenchel duality. MVP is the first (and to date the only) data-driven mean-variance optimization algorithm with a finite-sample analysis. Unlike conventional mean-variance optimization, which often requires tuning multiple-timescale stepsizes, this algorithm is single-timescale and can thus scale up easily. Finally, I will discuss ongoing work applying these algorithms to a wide range of practical applications, such as control, robotics, e-commerce, autonomous driving, and medical treatment.
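For readers unfamiliar with the duality trick mentioned above, the following is a minimal sketch of how Legendre-Fenchel duality is commonly used to make mean-variance objectives amenable to single-timescale stochastic updates; the exact formulation in the MVP papers may differ. Writing R_theta for the (random) cumulative reward under policy parameters theta and lambda > 0 for the risk-aversion coefficient:

J(\theta) = \mathbb{E}[R_\theta] - \lambda\,\mathrm{Var}(R_\theta)
          = \mathbb{E}[R_\theta] - \lambda\,\mathbb{E}[R_\theta^2] + \lambda\,(\mathbb{E}[R_\theta])^2 .

Using the Fenchel identity x^2 = \max_{y}\,(2xy - y^2) with x = \mathbb{E}[R_\theta],

J(\theta) = \max_{y}\;\mathbb{E}\big[(1 + 2\lambda y)\,R_\theta - \lambda R_\theta^2\big] - \lambda y^2 ,

so the troublesome squared expectation is replaced by a single expectation inside a saddle-point objective, which admits unbiased stochastic gradients in theta and y and hence a single-timescale update scheme.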

BIOGRAPHY: Bo Liu is a tenure-track assistant professor in the Department of Computer Science and Software Engineering at Auburn University. He obtained his Ph.D. in 2015 from the Autonomous Learning Lab at the University of Massachusetts Amherst, co-led by Drs. Sridhar Mahadevan and Andrew Barto. His primary research areas cover decision-making under uncertainty, human-aided machine learning, symbolic AI, trustworthiness and interpretability in machine learning, and their numerous applications to big data, autonomous driving, and healthcare informatics. He has more than 30 publications in notable venues such as NIPS/NeurIPS, ICML, UAI, AAAI, IJCAI, AAMAS, JAIR, and IEEE-TNN. His research is funded by NSF, Amazon, Tencent (China), Adobe, and ETRI (South Korea). He is the recipient of the UAI'2015 Facebook Best Student Paper Award and a 2018 Amazon Research Award. His research results have been covered in many prestigious venues, including the classic textbook "Reinforcement Learning: An Introduction" (2nd edition) and tutorials at NIPS'2015, IJCAI'2016, and AAAI'2019. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems (IEEE-TNN), a senior member of IEEE, and a member of AAAI, ACM, and INFORMS.

 


Media Contact: Rui Zhang

 
 
