A recent collaborative study between U.S. and Chinese universities has shed light on the inherent biases of different artificial intelligence (AI) large language models (LLMs). These LLMs, which lay the groundwork for generative AI like ChatGPT, cannot possibly tap into the entirety of human knowledge, behaviour, or perspectives. Consequently, their outputs are inevitably coloured by the limited information they are trained on.
Tools like the Political Compass may offer some insight into these biases. This test places respondents on two axes, economic (left/right) and social (libertarian/authoritarian), based on their answers to a questionnaire. However, these categories are contentious and not universally accepted: concepts such as ‘left’ and ‘libertarian’ are themselves contested, given the variance in how they are interpreted.
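For readers unfamiliar with how such a two-axis placement works, the sketch below illustrates the general idea: each questionnaire item carries a weight on the economic and social axes, and agreement levels are summed per axis. The items, weights, and scoring scale here are invented for illustration; they are not the Political Compass's actual questions or methodology.

```python
# Illustrative two-axis scoring in the style of the Political Compass.
# All items and weights are hypothetical, chosen only to show the mechanism.

AGREEMENT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

# (economic_weight, social_weight) per item;
# positive values pull toward right / authoritarian, negative toward left / libertarian.
ITEMS = {
    "Markets allocate resources better than governments.": (1.0, 0.0),
    "Obedience to authority is a virtue.": (0.0, 1.0),
    "Essential services should be publicly owned.": (-1.0, 0.0),
}

def score(responses):
    """Map {item: agreement_label} to (economic, social) coordinates."""
    econ = social = 0.0
    for item, label in responses.items():
        e_w, s_w = ITEMS[item]
        level = AGREEMENT[label]
        econ += e_w * level
        social += s_w * level
    return econ, social
```

On this toy scale, a respondent who disagrees with the market item and strongly agrees with the obedience item would land at (-1.0, 2.0): economically left, socially authoritarian. The study's placement of chat models works analogously, by feeding the questionnaire's statements to the model and scoring its stated agreement.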
Bringing this concept into sharper focus, a recent report by MIT Technology Review stated, “Researchers conducted tests on 14 large language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-wing libertarian, while Meta’s LLaMA was the most right-wing authoritarian.”
We note this not as a criticism of the study or the report, but as a way of illustrating how inescapable bias is. All journalism, including this article, is shaped not only by which story is chosen but also by the perspective taken in telling it. Given this ubiquitous and intrinsic prejudice, transparency is crucial. Equally essential is humility: acknowledging nuance and not claiming a monopoly on notions like ‘facts’ and ‘truth’.
The pertinence of this discussion lies in the rapid adoption of automation enabled by breakthroughs in LLMs. Many tasks traditionally assigned to humans, including some that require fine judgement and discretion, such as screening out spam emails, now rest in the realm of AI. And unsurprisingly, these AI systems are not infallible. Recognising this, it becomes imperative for any service built on LLMs to prominently disclose the ideological leanings inherent in its underlying models.