The School of EECS is hosting the following HDR Progress Review 1 Confirmation Seminar:

Human-in-the-Loop Decision Making for Online Safety 

Speaker: Bing Tuo
Host/Chair: Dr. Rocky Chen

Abstract: From content moderation and loan approval to disease diagnosis and legal determination, humans are increasingly being assisted by artificial intelligence (AI) systems in decision-making tasks. Although it is expected that the synergy between AI models and humans can enable human-AI teams to outperform either party alone, collaboration in practice often remains suboptimal. People frequently exhibit inappropriate reliance on AI models, partly due to the lack of transparency around why a particular decision was made. Large language models (LLMs), with their exceptional conversational and analytical capabilities, offer promising opportunities to improve AI-assisted decision making. LLMs can provide natural-language explanations of AI recommendations—for instance, describing how individual features of a decision-making task may have contributed to the model’s output.

To explore how LLMs can augment human decision making and achieve complementary performance, we focus on three key research questions: 1) How does LLM-generated assistance affect team performance and users’ engagement, trust, and reliance? 2) What are the critical factors that influence the effectiveness of LLM-based support? 3) What are the optimal paradigms for deploying LLM assistance across different task scenarios?

For the first question, we explore the challenges of human moderation in detecting online hate speech, particularly when the language is implicit or nuanced in its discriminatory intent. We evaluate the effectiveness of ChatGPT-generated explanations in assisting human moderators, hypothesizing that interpretable, natural language-based explanations would provide complementary knowledge to support more accurate assessments of AI recommendations and facilitate informed decision-making. However, our results show that ChatGPT-generated explanations often foster unwarranted trust in AI, rather than promoting appropriate reliance, ultimately impairing moderation quality.

Bio

Bing Tuo is a second-year Ph.D. student at the University of Queensland, Australia. Prior to his candidacy, he worked in industry as a product manager, focusing on Trust & Safety, Smart Assistants, and Monetization. He received his Master’s degree in Computer Science from the University of Southern California. He is currently researching Human-in-the-Loop Decision Making for Online Safety, advised by Prof. Gianluca Demartini and Prof. Timothy Miller.

About Data Science Seminar

This seminar series is hosted by EECS Data Science.

Venue

Room: 78 - 631/632 (MM Lab)
Zoom: https://uqz.zoom.us/j/5680824702