Chatbots Vs Democracy

My encounter with an AI chatbot feeding me false information about a Scottish political party, and how it alerted me to another threat AI poses to our democracy.

Image Source: Unsplash

I had a strange experience this week while trying to dig out some details to answer a policy question, one which I think has brought home to me a new threat to democracy in the age of “AI”.

While searching to see if a particular political party supported a policy, an AI search engine chatbot told me they did support it based on the manifesto of a completely different political party.

I was talking to someone about our policies for a Scottish National Infrastructure Company (SNIC) in the wake of the news that a school in Falkirk was struggling to pay for repairs after storm damage, when inspections revealed significant defects in the school’s construction that also needed to be fixed.

This highlighted to us that our campaign for coordinated public infrastructure investment went beyond merely serving as a solution to our linked campaign to eliminate PFI and other similar privatisation scams (the school in question was not procured using PFI, but may well have been built through the same network of inadequate construction procurement, leading to similar problems).

I was casting my mind back to our successful campaign in 2018 that saw members at the Scottish National Party conference vote unanimously to support the creation of a SNIC.

My search query was to find out whether the Scottish Green Party had ever formally supported a SNIC. I knew they had, at some point or another, formally adopted policies based on our plans for a Scottish National Investment Bank and a Scottish Energy Company, as well as a few other Common Weal policies, and I knew they have policies on public ownership of infrastructure construction, but I couldn’t quite remember where they stood on SNIC, so I did a bit of searching.

Not finding much on my usual search engine of choice, I tried Google and was confronted with their AI chatbot which served up a very confident, very authoritative summary of the Greens’ policies on public infrastructure including saying “Yes, the Scottish Green Party supports the creation of a public infrastructure company” before linking me to the 2021 manifesto for...the Scottish National Party.

I think I can see where the conflation lay. First, the rest of the AI chatbot’s summary mostly talked about the Greens’ support for a Scottish Energy Company, with a smattering about their support for publicly owned infrastructure, and cited their vote for the 2021 Programme for Government as evidence of their support. But linking to an entirely different party’s manifesto concerns me greatly.


The creation of a SNIC is a fairly niche topic as these things go, and it’s not as though the chatbot accused the Greens of being pro-fossil fuel, but I now worry that chatbots might be conflating policy positions between other political parties too, especially on some of the far more consequential policies currently being discussed.

For example, there are currently two major political parties in the UK (Reform and the Conservatives) openly talking about policies towards immigrants and non-citizen residents in Britain that would, if implemented, result in British citizens from birth like me being forcibly deported from Britain for the apparent crime of marrying someone who moved here (and possibly facing discrimination in other countries, where right-wing extremists would treat me the way their allies here want to treat my wife).

I’m having a difficult enough time trying to discuss this with family members who are leaning towards voting for one of these parties as it is, without risking them going online to verify what I’ve said, only for a chatbot to lie to them about party political positions.

As it stands, they either simply do not believe the reporting on the words these politicians are saying or do not believe those politicians will be “allowed” to deport me and my family despite actively supporting the deportation of other, more nebulously defined “immigrants”.

And if this is a problem with relatively neutral AI systems (if such a word can be applied to these things, given Big Tech’s supplication towards Trump in the US), then what of the systems controlled by people who do have political ambitions? Elon Musk has made no secret of his desire to see an uprising of the Right in Britain, and he talks about his own AI chatbot Grok – which recently printed words falsely accusing Scottish MP Pete Wishart of being a “rape enabler” – as an absolute validator of “truth” (and if Musk doesn’t like the truths printed by Grok, then he’ll tweak the bot until it produces truths he does like).

The atomisation of knowledge – where people can be targeted to receive certain political information while others are blocked from seeing it – has moved far beyond where it was just a couple of elections ago. I fear we’re entering an age where even when people try to “do research”, they’ll just have their own bias reflected back at them by a compliant chatbot or, worse, be radicalised by one controlled by someone acting not in their interests.

The UK could take a stronger stance here. The owners of these AI chatbots should be held as responsible for their output as the editor of a newspaper would be if one of their journalists “hallucinated” important facts. Given that it’s almost impossible for such an editor to review the work of a chatbot before it’s published, this would likely amount to an effective ban on AI chatbots in Britain, but that might be what’s needed.

The alternative could be that the electorate next May ends up voting for political parties that don’t hold the views the voters think they hold. I don’t need an AI to tell me that this would not be good for our democracy.
