Introducing the latest in Data Science, focusing on applications in the social, political, and policy sciences.

Visual Platforms and Generative AI: Effects on Political Discourse and Policy Preferences in the 2024 U.S. Elections

Herbert Chang, Ph.D.

Assistant Professor, Dartmouth College

Friday, October 24 - 10:00–13:00 (Taiwan Time, GMT+8)


Short Bio:
Herbert Chang is an Assistant Professor of Quantitative Social Science, Computer Science, and Mathematics at Dartmouth College, and a Forbes Under 30 Honoree in Science. His research studies how emerging technologies impact democratic behavior. He has published more than 35 peer-reviewed articles on misinformation, social networks, and the political impact of AI systems. His work has been featured in the New York Times, Washington Post, and Scientific American.

Abstract:
The 2024 U.S. elections marked a pivotal moment in the use of generative AI to create and disseminate political (mis)information. Across three studies, we examine the role of visual content on TikTok and Instagram and the influence of AI-generated media on policy preferences. Drawing on 239,526 Instagram images collected over seven months, we use zero-shot labeling, deep learning, and OpenAI’s generative classifier to identify synthetic content and visual themes. We find that AI-generated visuals alone do not increase engagement, but their combination with traditional memes substantially boosts reach. Republicans more often employ generative AI for out-group attacks, while Democrats use it for in-group reinforcement. Building on these findings, a large-scale survey experiment conducted four days before the election tests whether AI-generated visuals shift policy preferences. Using conjoint and framing designs with randomly assigned text and images across issues (abortion, tax, Gaza) and candidates (Trump, Harris), we observe significant within-subject switching toward Harris’s abortion stance and Trump’s immigration stance, reflecting hot cognition and partisan alignment. We conclude by discussing implications for election integrity, targeted misinformation, and the methodological advantages of integrating LLM workflows with survey experiments to enable rapid, ecologically valid research during high-stakes political events.
 

Arti-‘fickle’ Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences

Ethan C. Busby, Ph.D.

Assistant Professor, Brigham Young University


Friday, October 31 - 10:00–13:00 (Taiwan Time, GMT+8)
 

Abstract:
To promote the scientific use of large language models (LLMs), we suggest that researchers in the political and social sciences refocus on the scientific goal of inference. We argue that this refocus will improve the accumulation of shared scientific knowledge about these tools and their uses in the social sciences. We discuss the challenges and opportunities related to scientific inference with LLMs, using validation of model output as an illustrative case. We then propose a set of guidelines for establishing the failure and success of LLMs when completing particular tasks and discuss how to draw inferences from these observations.

Short Bio:
Ethan is an Assistant Professor of Political Science at Brigham Young University, specializing in political psychology, extremism, artificial intelligence, public opinion, racial and ethnic politics, quantitative methods, and computational social science. His research relies on a variety of methods, including lab experiments, quasi-experiments, survey experiments, text-as-data, surveys, artificial intelligence, and large language models. His work focuses on how democratic societies should respond to extremism, using approaches from political psychology and generative AI tools. These two strands are deeply integrated in his work: political psychology informs his use of AI, and AI tools test theories from political psychology. More specifically, his research explores what extremism is, whom people blame for extremism, how political persuasion intersects with extremism, and what encourages and discourages extremism. His research has been published in a variety of presses and academic journals, including Cambridge University Press, the Journal of Politics, Political Analysis, Political Behavior, and the Proceedings of the National Academy of Sciences.

 

Spatial Thinking in Machine Learning and Artificial Intelligence

May Yuan, Ph.D.

Ashbel Smith Professor of GIS, University of Texas at Dallas

Friday, November 14 - 10:00–13:00 (Taiwan Time, GMT+8)


Abstract:
Tversky (2019), in her seminal book Mind in Motion, claimed that all thought begins as spatial thought. We move in space, interact with space, use space to think, and think about space. Our actions in the physical world, where we live, move, and act, are fundamental to our thoughts, whether concrete or abstract. Her claim is supported by neuroscience findings of spatial codes for human thinking. Spatial thinking is at the core of Geospatial Science (also known as Geographic Information Science, GIScience; Spatial Computing; and Spatial Data Science), which develops conceptual and computational frameworks for understanding geographic worlds. From human thinking to machine learning, spatial concepts are instrumental to the progression of machine learning and artificial intelligence and continue to underpin advances in data and algorithms. This talk will discuss how spatial thinking is ingrained in popular machine learning algorithms and manifests in the transition from perception AI through generative AI and agentic AI to physical AI. Finally, the talk posits that spatial thinking is essential to the realization of Artificial General Intelligence.

Short Bio:
May Yuan received all her degrees in Geography: a B.S. (1987) from National Taiwan University, and an M.S. (1992) and Ph.D. (1994) from the State University of New York at Buffalo. She is Ashbel Smith Professor of Geospatial Information Sciences (GIS) in the School of Economic, Political, and Policy Sciences at the University of Texas at Dallas (UT-Dallas). She is an elected fellow of the American Association for the Advancement of Science (AAAS), the American Association of Geographers (AAG), and the University Consortium for Geographic Information Science (UCGIS). She serves as the Editor-in-Chief of the International Journal of Geographical Information Science. From July 2022 to July 2025, she was on assignment to the National Science Foundation (NSF) as a program director of Human-Environment and Geographical Sciences (HEGS). Her research has been supported by NSF, NASA, DoD, DHS, DOJ, DOE, NOAA, USGS, and NIST. She and her students at the Geospatial Analytics and Innovative Applications (GAIA) Lab explore ways to understand the dynamics of people, events, and places, as well as the connections among brain health, spatial behaviors, and the environment. They also investigate the learning mechanisms taken by humans or machines to conceptualize, represent, and compute geospatial processes.