September 17 | Yinglun Zhu: Efficient Sequential Decision Making with Large Language Models
2024-09-17 10:00:00
Topic: Efficient Sequential Decision Making with Large Language Models
Speaker: Yinglun Zhu (朱英倫)
Start time: 2024-09-17 10:00:00
Venue: Room 1514, Science Building A, Putuo Campus
Organizers: School of Statistics; Institute of Statistics and Interdisciplinary Sciences
Speaker Biography

Yinglun Zhu is an assistant professor in the ECE department at the University of California, Riverside; he is also affiliated with the CSE department, the Riverside Artificial Intelligence Research Institute, and the Center for Robotics and Intelligent Systems. Yinglun’s research focuses on machine learning, particularly in developing efficient and reliable learning algorithms and systems for large-scale, multimodal problems. His work not only establishes the foundations of various learning paradigms but also applies them to practical settings, addressing real-world challenges. His research has been integrated into leading machine learning libraries such as Vowpal Wabbit and commercial products like Microsoft Azure Personalizer Service. More information can be found on Yinglun’s personal website at https://yinglunz.com/.


Abstract

This presentation focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or fine-tune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach suffers from the computational burden of gradient updates, and the latter has not shown promising results. In this presentation, I'll talk about a new approach that leverages online model selection algorithms to efficiently incorporate LLM agents into sequential decision making. Statistically, our approach significantly outperforms both traditional decision making algorithms and vanilla LLM agents. Computationally, our approach avoids the need for expensive gradient updates of LLMs and, throughout the decision making process, requires only a small number of LLM calls. We conduct extensive experiments to verify the effectiveness of our proposed approach. As an example, on a large-scale Amazon dataset, our approach achieves more than a 6x performance gain over baselines while calling LLMs in only 1.5% of the time steps.
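To make the idea of "online model selection over LLM agents" concrete, the sketch below treats the LLM agent as one candidate policy among several base decision-makers and uses a simple UCB-style selector to decide, at each time step, which policy to query. This is an illustrative sketch only, not the speaker's actual algorithm; the class and function names (EpsilonGreedyPolicy, LLMAgentPolicy, run_model_selection) and the toy Bernoulli-bandit environment are hypothetical. Because only the selected policy is queried at each step, the LLM is called only on the steps where the selector picks it, which is how the number of LLM calls can stay small relative to the horizon.

```python
import math
import random


class BasePolicy:
    """Interface for any base decision-maker (a classical bandit algorithm or an LLM agent)."""
    def act(self, context):
        raise NotImplementedError


class EpsilonGreedyPolicy(BasePolicy):
    """Cheap traditional baseline: epsilon-greedy over running arm-value estimates."""
    def __init__(self, n_arms, eps=0.1):
        self.eps = eps
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def act(self, context):
        if random.random() < self.eps:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


class LLMAgentPolicy(BasePolicy):
    """Stand-in for a prompted LLM agent; each call to `act` would be one LLM call."""
    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.calls = 0

    def act(self, context):
        self.calls += 1
        # Hypothetical: build a prompt from `context`, query the LLM, parse the chosen arm.
        return random.randrange(self.n_arms)


def run_model_selection(env_step, policies, horizon):
    """UCB-style selector over base policies: at each step, pick one policy by an
    optimistic score, play its action, and update that policy's average reward.
    Only the selected policy is queried, so the LLM agent incurs a call only on
    the steps where the selector chooses it."""
    counts = [0] * len(policies)
    means = [0.0] * len(policies)
    for t in range(1, horizon + 1):
        scores = [
            means[i] + math.sqrt(2 * math.log(t) / counts[i]) if counts[i] > 0 else float("inf")
            for i in range(len(policies))
        ]
        chosen = max(range(len(policies)), key=lambda j: scores[j])
        context = None  # stands in for the context observed at step t
        arm = policies[chosen].act(context)
        reward = env_step(context, arm)
        counts[chosen] += 1
        means[chosen] += (reward - means[chosen]) / counts[chosen]
        if hasattr(policies[chosen], "update"):
            policies[chosen].update(arm, reward)
    return means, counts


# Toy usage: a 3-armed Bernoulli bandit environment.
def env_step(context, arm):
    return 1.0 if random.random() < [0.2, 0.5, 0.8][arm] else 0.0


policies = [EpsilonGreedyPolicy(n_arms=3), LLMAgentPolicy(n_arms=3)]
means, counts = run_model_selection(env_step, policies, horizon=1000)
print("selector picks per policy:", counts, "| LLM calls:", policies[1].calls)
```

In this toy setup the selector naturally concentrates its picks on whichever base policy earns higher reward, so an expensive policy (here, the placeholder LLM agent) is queried on only a fraction of the time steps; the statistical and computational guarantees described in the abstract are properties of the speaker's actual method, not of this sketch.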

