What if, instead of delivering “the TRUTH,” AI really provides structure?
I have what some people might consider an honesty problem. I love digging deep to uncover the real motivations and incentives behind why we do the things we do. But when you expose someone to their own motivations, they often don’t appreciate it. Over the years, I’ve had to learn how to couch what I used to consider “honest” opinions or insights within a diplomatic framework.
And the reason for that? People lie. They lie a lot, and they don’t want to get caught. They lie to the people they love, they lie to the people they hate, they lie to their colleagues, and most of all, they lie to themselves.
As Upton Sinclair famously put it, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” That’s doubly true when you’re a consultant, by the way.
AI isn’t human and has different motivations. So can AI lie? Can it manipulate? Can it gaslight? Not directly. Not unless it has been programmed to do so by its human overseers.
When AI experiences a “hallucination,” it isn’t motivated by personal ambition. It misinforms because, when asked a question, it is designed to produce an answer, even when it doesn’t have one.
Generative AI is a data processor. Its primary objective is to grind down the information it receives into a “knowledge dust” and use that to craft a structure that resembles truth but may not be truthful.
There’s no doubt that the results of this processing are magical. It’s the chatbot’s ability to transform raw human input into coherent language (and images) that sparked the great AI revolution of 2023.
This gives us an AI that is eager to please but doesn’t filter its output the way humans do. Its ultimate goal is to generate information that aligns with our queries, whether that means answering our questions directly or inferring answers based on existing data models.
What it currently struggles with is admitting that it lacks the information needed for an accurate response. Seeing hallucinations as overeager answers rather than deliberate deception is a far more useful way to look at AI-generated outcomes than simply categorizing them as “misinformation.”
This aligns well with how humans communicate. A lot of our interaction comes from reframing experiences into stories, and this very act of retelling changes the information conveyed.
So, how do we navigate a tool that can’t be fully trusted? The responsibility falls on the user, not the AI. With the mainstream arrival of these tools, it’s part of our civic duty to examine our expectations and biases before integrating AI-generated information into our worldview.
In the nearly 30 years since the advent of Photoshop, few of us have developed the critical skills needed to scrutinize our beliefs in a world increasingly full of “misinformation.” We’d rather seek out information that affirms our existing worldviews than adjust those worldviews to align with reality.
Combine that with AI’s tendency to fulfill expectations rather than deliver objective results, and you start to see the impossibility of expecting perfect information from AI without some form of censorship.
If what you really wanted was a perfectly “truthful” AI, you’d also have to accept that our definition of truth includes both a point of view and a ton of context.
So what are the real motivations lurking under the surface of the “truth” we seek? Throughout human history, there has been far less objective truth than subjective truth. And whether we like it or not, generative AI is reflecting that back at us.
- Revelation isn’t always what you need.
- AI doesn’t have an opinion; you do.
- AI is built on processed data.
- Real security lies at the point of contact with the user.