
September 17 | Yinglun Zhu: Efficient Sequential Decision Making with Large Language Models
Topic: Efficient Sequential Decision Making with Large Language Models
Speaker: Yinglun Zhu
Time: 2024-09-17 10:00:00
Venue: Room 1514, Science Building A, Putuo Campus
Organizers: School of Statistics; Academy of Statistics and Interdisciplinary Sciences
About the Speaker

Yinglun Zhu is an assistant professor in the ECE department at the University of California, Riverside; he is also affiliated with the CSE department, the Riverside Artificial Intelligence Research Institute, and the Center for Robotics and Intelligent Systems. Yinglun’s research focuses on machine learning, particularly in developing efficient and reliable learning algorithms and systems for large-scale, multimodal problems. His work not only establishes the foundations of various learning paradigms but also applies them to practical settings, addressing real-world challenges. His research has been integrated into leading machine learning libraries such as Vowpal Wabbit and commercial products like Microsoft Azure Personalizer Service. More information can be found on Yinglun’s personal website at https://yinglunz.com/.


Abstract

This presentation focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or fine-tune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach suffers from the computational burden of gradient updates, and the latter has not shown promising results. In this presentation, I'll talk about a new approach that leverages online model selection algorithms to efficiently incorporate LLM agents into sequential decision making. Statistically, our approach significantly outperforms both traditional decision making algorithms and vanilla LLM agents. Computationally, our approach avoids the need for expensive gradient updates of LLMs, and throughout the decision making process it requires only a small number of LLM calls. We conduct extensive experiments to verify the effectiveness of our proposed approach. As an example, on a large-scale Amazon dataset, our approach achieves more than a 6x performance gain over baselines while calling LLMs in only 1.5% of the time steps.
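The abstract does not spell out the model selection procedure, so the sketch below is only a rough illustration of the general idea, not the talk's actual algorithm: a UCB-style meta-learner chooses, at each round, between two base algorithms, a classical UCB1 bandit (cheap) and an LLM agent (expensive), so the LLM is called only when its observed rewards justify selecting it. Everything here is an assumption made for illustration: the toy Bernoulli bandit environment, the LLMAgentStub class (a real agent would prompt an LLM and parse its chosen action), and the specific meta-level UCB rule.

import numpy as np

rng = np.random.default_rng(0)

# Toy environment (assumption): a 5-armed Bernoulli bandit standing in
# for the sequential decision making task described in the talk.
K = 5
true_means = rng.uniform(0.2, 0.8, size=K)

def pull(arm):
    # Sample a Bernoulli reward for the chosen arm.
    return float(rng.random() < true_means[arm])

class UCB1:
    # Base algorithm 1: a standard UCB1 bandit; cheap, makes no LLM calls.
    def __init__(self, k):
        self.counts = np.zeros(k)
        self.sums = np.zeros(k)
        self.t = 0

    def act(self):
        self.t += 1
        if np.any(self.counts == 0):
            return int(np.argmin(self.counts))  # try each arm once first
        means = self.sums / self.counts
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(means + bonus))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

class LLMAgentStub:
    # Base algorithm 2: hypothetical stand-in for an LLM agent. A real
    # system would build a prompt from the interaction history and parse
    # the LLM's response; here we simulate a noisy-but-informed oracle
    # so the sketch stays self-contained and runnable.
    def __init__(self, k):
        self.k = k
        self.calls = 0

    def act(self):
        self.calls += 1  # every action by this base costs one LLM call
        if rng.random() < 0.7:
            return int(np.argmax(true_means))  # often picks well
        return int(rng.integers(self.k))       # sometimes guesses

    def update(self, arm, reward):
        pass

# Meta level: online model selection via UCB over the two base algorithms,
# so the expensive LLM base is queried only when its empirical rewards
# earn it the selection.
bases = [UCB1(K), LLMAgentStub(K)]
meta_counts = np.zeros(len(bases))
meta_sums = np.zeros(len(bases))

T = 2000
total_reward = 0.0
for t in range(1, T + 1):
    if np.any(meta_counts == 0):
        i = int(np.argmin(meta_counts))  # try each base once first
    else:
        meta_means = meta_sums / meta_counts
        meta_bonus = np.sqrt(2.0 * np.log(t) / meta_counts)
        i = int(np.argmax(meta_means + meta_bonus))
    arm = bases[i].act()
    reward = pull(arm)
    bases[i].update(arm, reward)
    meta_counts[i] += 1
    meta_sums[i] += reward
    total_reward += reward

print(f"average reward: {total_reward / T:.3f}")
print(f"fraction of steps with an LLM call: {bases[1].calls / T:.1%}")

The point the abstract emphasizes falls out of this structure: gradient updates are never needed because the LLM is used only through forward calls, and the meta-learner's exploration budget caps how often the LLM base is invoked, which is how a small fraction of LLM calls (1.5% of time steps in the Amazon experiment) can still yield large gains.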