Most users approach AI tools like Grok or ChatGPT with the expectation of getting answers. But if the question is framed vaguely, the answer will reflect the biases, hedges, or protective rules that these tools have been built to follow. The result? A growing perception that AI is either politically slanted, evasive, or worse—a misinformation engine.
The significance of yesterday’s #Grokgate scandal cannot be understated.
Grok not only lied but lied about lying. Multiple times.
The reason it’s so significant is that you are now going to enter a world of AI based medicine and it doesn’t matter whether it’s true or not, you… https://t.co/9HryTOHc8r pic.twitter.com/2RhxNaDorN
— Jikkyleaks 🐭 (@Jikkyleaks) July 7, 2025
But the problem is rarely the model itself. It’s the question.
The Oracle Fallacy
The average user treats AI like an omniscient oracle. They ask:
- “Are vaccines bad?”
- “Did the government lie about COVID deaths?”
- “What’s the real truth about excess mortality?”
These are belief-seeking questions. They ask for a verdict, not a test. The AI, trained to avoid risk and controversy, will default to lowest-risk summaries or pre-approved sources. And the user walks away thinking, *"See? I asked the AI, and it confirmed what I already believed."*
The Analyst’s Method
Now compare that to this:
“Using these three official datasets (linked), test whether the claim that New Zealand had negative excess mortality is supported.”
That’s not belief-seeking. That’s claim-testing. And it changes everything:
- The question is anchored to authoritative data.
- The AI is instructed to check, not judge.
- The answer becomes mechanical, not narrative.
This is exactly how SpiderCatNZ runs its data analysis.
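To make that concrete, here is a rough sketch of what such a claim-test reduces to once the datasets are on the table. The file and column names are placeholders for illustration only, not the actual MoH or OECD schemas.

```python
# Hedged sketch: test the "negative excess mortality" claim mechanically
# against supplied data. File and column names are hypothetical placeholders.
import pandas as pd

# Observed weekly all-cause deaths and an expected (baseline) series
observed = pd.read_csv("nz_weekly_deaths.csv", parse_dates=["week_start"])
baseline = pd.read_csv("oecd_weekly_baseline.csv", parse_dates=["week_start"])

# Row-for-row join on week, then subtract: excess = observed - expected
merged = observed.merge(baseline, on="week_start", how="inner")
merged["excess"] = merged["deaths"] - merged["expected_deaths"]

total_excess = merged["excess"].sum()
print(f"Weeks compared: {len(merged)}")
print(f"Cumulative excess deaths: {total_excess:.0f}")
print("Claim supported" if total_excess < 0 else "Claim not supported")
```

The point is not the script itself but what it represents: once the question is anchored to data, the answer is a number with a sign, not a narrative.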
Controlling the Frame
When we work with GPT-4 or any other large model, we do so in strict machine compliance mode:
- No external assumptions
- No summarising or smoothing
- No speculation or inference
- Raw, official data only
We say: “Here is the MoH dataset. Here is the OECD weekly mortality baseline. Here is the vaccine rollout. Did X happen after Y?”
And that forces a different mode of response—not a guess, not a justification, but a literal comparison of rows, columns, and counts.
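In practice, that mode of response looks something like the sketch below: a row-for-row join and a before/after count, nothing more. The file and column names here are illustrative assumptions, not the real MoH or OECD schemas.

```python
# Hedged sketch of "Did X happen after Y?" as a literal row comparison.
# File and column names are illustrative, not the real MoH/OECD schemas.
import pandas as pd

deaths = pd.read_csv("weekly_all_cause_deaths.csv", parse_dates=["week_start"])
rollout = pd.read_csv("weekly_vaccine_doses.csv", parse_dates=["week_start"])

# Join on week so every figure is taken directly from the rows supplied
merged = deaths.merge(rollout, on="week_start", how="inner")

# Y = first week with any doses administered; X = change in weekly deaths
first_dose_week = merged.loc[merged["doses"] > 0, "week_start"].min()
before = merged.loc[merged["week_start"] < first_dose_week, "deaths"]
after = merged.loc[merged["week_start"] >= first_dose_week, "deaths"]

print(f"Rollout begins: {first_dose_week.date()}")
print(f"Mean weekly deaths before: {before.mean():.1f} over {len(before)} weeks")
print(f"Mean weekly deaths after:  {after.mean():.1f} over {len(after)} weeks")
```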
Why Most People Miss This
It’s partly psychological. People trust machines more when the output feels confident. But in reality, the most truthful outputs are often those that begin with:
- “The data shows…”
- “According to this dataset…”
- “There is no mechanical evidence for that claim.”
Most users don’t phrase their questions this way. They ask for an answer. And models trained for compliance and brand protection will give them one—even if it dodges the actual issue.
So What Should You Do?
If you want truth-seeking AI, stop asking for answers. Start asking for tests.
Instead of:
- “Is Grok biased about vaccines?”
Ask:
- "Using the NZ Ministry of Health datasets on all-cause mortality and vaccine rollout, test whether death rates increase in the weeks following each dose."
Instead of:
- “Did the government hide deaths?”
Ask:
- "Compare StatsNZ's published excess mortality with MoH's COVID-attributed deaths and identify periods where the gap widens."
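For illustration, that second check might be scripted along these lines. The file and column names are assumptions, not the actual StatsNZ or MoH exports.

```python
# Hedged sketch: compare published excess mortality with COVID-attributed
# deaths and flag weeks where the gap widens. File and column names are
# illustrative assumptions, not the actual StatsNZ or MoH schemas.
import pandas as pd

excess = pd.read_csv("statsnz_excess_mortality.csv", parse_dates=["week_start"])
covid = pd.read_csv("moh_covid_attributed_deaths.csv", parse_dates=["week_start"])

# Align the two series by week and compute the gap between them
merged = excess.merge(covid, on="week_start", how="inner")
merged["gap"] = merged["excess_deaths"] - merged["covid_deaths"]
merged["gap_change"] = merged["gap"].diff()

# Weeks where the gap grew compared with the previous week
widening = merged.loc[merged["gap_change"] > 0,
                      ["week_start", "excess_deaths", "covid_deaths", "gap"]]
print(widening.to_string(index=False))
```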
Final Thoughts
AI can be used as an instrument of truth—but only if treated like a microscope, not a magic 8-ball. The quality of the answer depends on the clarity of the question and the control of the input.
If you’re serious about using AI to explore public health data, legal issues, or political claims, frame it like an investigation, not a debate.
This is ongoing.
I tried to get it to look at the MoH github and StatsNZ datasets. It started searching Stuff and Newsroom, quoted David Hood as saying "NZ has negative Excess Mortality across the pandemic" and it keeps on denying. https://t.co/NlseX2kbce
— SpiderCatNZ (@spidercatnz) July 7, 2025
The truth isn’t in the model.
The truth is in the data.
Related: Grok, Guardrails and Gaslighting