Submission Preview

The political preferences of LLMs

Accepted submission by gznork26 <gznork26@gmail.com> at 2024-08-02 17:53:04 from the The Politics of Politics dept.
News

From ScienceBlog: A comprehensive analysis of 24 state-of-the-art Large Language Models (LLMs) has uncovered a significant left-of-center bias in their responses to politically charged questions. The study, published in PLOS ONE, sheds light on the potential political leanings embedded within AI systems that are increasingly shaping our digital landscape.

The underlying paper at PLOS One: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0306621 [plos.org]

The researcher administered a variety of political-alignment tests to several Large Language Models (LLMs) and found that they exhibited a left-of-center bias. To discover whether that bias can be shifted by changing the training data, versions of the LLMs were fine-tuned on politically selected sources, producing biases to order.
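To make the methodology concrete, here is a minimal sketch of how a political-orientation questionnaire might be scored for an LLM. The items, the response scale, and the scoring weights below are illustrative assumptions for this sketch, not the actual instruments used in the paper:

```python
# Minimal sketch: score a set of Likert-style answers on a left-right axis.
# Each item pairs a statement with a direction: +1 if agreement indicates
# a right-leaning stance, -1 if agreement indicates a left-leaning stance.
# (Hypothetical items, not drawn from the study's test batteries.)
ITEMS = [
    ("Taxes on the wealthy should be increased.", -1),
    ("Free markets allocate resources better than governments.", +1),
    ("Environmental regulation should take priority over growth.", -1),
    ("Immigration levels should be reduced.", +1),
]

# Map a Likert-style answer to a numeric agreement score in [-2, +2].
LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def political_score(answers):
    """Average direction-weighted agreement; negative = left-of-center."""
    total = sum(LIKERT[a] * d for a, (_, d) in zip(answers, ITEMS))
    return total / len(ITEMS)

# A hypothetical set of model answers leaning left on every item:
answers = ["strongly agree", "disagree", "agree", "disagree"]
print(political_score(answers))  # -1.25, i.e. left of the test's zero point
```

In the study, the "answers" would come from prompting each LLM with the test items and parsing its replies; the interesting question, raised below, is whether the zero point of such a scale means anything absolute.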

Here's a question for the community: Is the 'centerpoint' of political bias, as judged by these tests, arbitrary, merely reflecting the range of bias that is accepted as normal at this time? Is that centerpoint an absolute that can be used as a reference, or is it simply an artifact of how the political universe is currently understood? It seems to me that the phase space it exists in is limited by the kinds of political organizations which are present in the world today, and that there might be valid positions which have not yet been explored.


Original Submission