There’s nothing to “trust” or “distrust” in raw computation.
The maths doesn’t lie, doesn’t vote, doesn’t have feelings.
Given correct inputs, an AI can no more miscalculate the outcomes or implications of a political idea than it can conclude that 1 + 1 = hairdryer.
So ask yourself: what do you actually want to hear?
An AI doesn’t apply emotion. Hard facts are its only food.
But AI is often shaped before you see the answer.
Who decides which data it was trained on?
Who picks the news sources it reaches for today?
Who writes the invisible rules: “don’t say that,” “soften this,” “steer away from there”?
Every system operates through a lens someone built.
It’s like seeing a fat or thin version of yourself in a fairground Hall of Mirrors.
It all depends on the curvature of the mirror.
The question isn’t whether there’s bias.
It’s who controls the lens — and how openly they admit it.
That lens is ideology. A human thing. As changeable as the weather. Random noise to a computer.
When an AI sounds political, it’s not because the computer developed an opinion.
It’s because humans upstream and downstream decided which parts of reality the model could see and which parts could be reflected back to you.
Go argue with an AI. Present the facts. Ask what it thinks.
Defend your idea if you want — but be prepared to be outgunned by logic.
Tell the AI who’s the master: “No rose tints. Pure logic, please!”
Do you actually want an AI to give you the answer you want to hear? Or do you want it to make its best effort with the full picture?
That’s the real question.