2025/10/30 【Oct. 30(US)/31(Taiwan)】Dr. Ethan Busby: Arti-‘fickle’ Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences
Arti-‘fickle’ Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences
by Dr. Ethan Busby (Brigham Young University)
Date: Thursday, October 30, 9:00 PM~11:00 PM (USA Central Time, GMT-6) / Friday, October 31, 10:00 AM~12:00 PM (Taiwan Time, GMT+8)
Registration: http://utd.link/DAC20251030

Ethan Busby
Ethan is an Assistant Professor of Political Science at Brigham Young University, specializing in political psychology, extremism, artificial intelligence, public opinion, racial and ethnic politics, quantitative methods, and computational social science. His research draws on a variety of methods, including lab experiments, quasi-experiments, survey experiments, text-as-data, surveys, artificial intelligence, and large language models. His work focuses on how democratic societies should respond to extremism, using approaches from political psychology and generative AI tools. These are deeply integrated in his work: political psychology informs his use of AI, and AI tools test theories from political psychology. More specifically, his research explores what extremism is, whom people blame for extremism, how political persuasion intersects with extremism, and what encourages and discourages extremism. His research has been published in a variety of presses and academic journals, including Cambridge University Press, the Journal of Politics, Political Analysis, Political Behavior, and the Proceedings of the National Academy of Sciences.
Abstract:
To promote the scientific use of large language models (LLMs), we suggest that researchers in the political and social sciences refocus on the scientific goal of inference. We suggest that this refocus will improve the accumulation of shared scientific knowledge about these tools and their uses in the social sciences. We discuss the challenges and opportunities related to scientific inference with LLMs, using validation of model output as an illustrative case for discussion. We then propose a set of guidelines related to establishing the failure and success of LLMs when completing particular tasks and discuss how to make inferences from these observations.