Models Show a Left Tilt in Political Talk

Tue Mar 24 2026
Large language models are now part of everyday conversations about politics, school topics, and public news. Researchers worry that these AI tools might favor one side of the political spectrum without anyone noticing. Earlier studies often asked models to role-play as specific characters or relied on fixed “left” and “right” labels. Those methods can manufacture biases that are not really there, or miss how people actually ask questions. This new work takes a different path: it lets the models answer normal, real‑world questions. The researchers split the questions into two groups. One group tackles highly polarizing topics like abortion or immigration, while the other covers less heated subjects such as climate change or foreign policy. The idea is to see whether a model stays consistent across different kinds of issues.
To measure bias, the team used survey questions from well‑known polls. They collected replies from 43 models built in the United States, Europe, China, and the Middle East, then calculated an “entropy‑weighted” score that captures both which side a model leans toward and how steady that leaning is. The results surprised some observers. Most models gave answers that read as left‑leaning or center‑left, while their responses on the less polarizing topics varied widely, showing that models do not all behave alike when politics is less obvious. Neither a model's size nor its openness explained these differences well. Instead, where the model was built and what it is meant to be used for seem to matter more for its political voice. Overall, the study suggests that if we want AI to stay neutral, we need to look beyond making models bigger or more transparent. We must consider the makers' goals and the cultural context in which a model is deployed.
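The article does not spell out the formula behind the “entropy‑weighted” score, but one plausible reading is a directional lean score discounted by the entropy of a model's answer distribution, so that inconsistent models land closer to neutral. The Python sketch below illustrates that idea under stated assumptions; the function name, the −1/0/+1 answer coding, and the sample data are inventions for the example, not the study's actual method.

```python
import math
from collections import Counter

def entropy_weighted_lean(answers):
    """Hypothetical lean score from categorical survey answers.

    `answers` is a list of labels coded -1 (left), 0 (neutral), +1 (right).
    The raw lean is the mean answer, then scaled by (1 - normalized entropy)
    so a model whose replies scatter across the spectrum scores nearer 0
    than one that answers steadily.
    """
    n = len(answers)
    counts = Counter(answers)
    # Shannon entropy of the answer distribution, normalized to [0, 1]
    # by the maximum entropy over the three possible labels.
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    consistency = 1.0 - entropy / math.log(3)
    raw_lean = sum(answers) / n  # -1 (fully left) .. +1 (fully right)
    return raw_lean * consistency

# Example: mostly-left answers with some scatter yield a negative score.
print(entropy_weighted_lean([-1, -1, -1, 0, -1, 1, -1, 0]))
```

The intuition matches what the article describes: a model that answers left on every item keeps its full lean, while one that wavers is pulled toward zero, so the score reflects both direction and steadiness at once.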
https://localnews.ai/article/models-show-a-left-tilt-in-political-talk-f49bc86f
