Just over a year ago, Google announced they would begin testing AI-generated summaries in their search results. This wasn’t a surprising announcement considering their trajectory. Google has, after all, already been capturing and “previewing” content from other sites in their results for the last few years. The difference this time around, however, is subtle yet profound.
Previously, these summaries, though processed and curated, could still be traced back to their original sources. But when AI summaries become the default results, the intellectual deepfake becomes the authoritative answer. And from there, it’s only a short step for all those fears about AI producing mis-, dis-, and mal-information to become reality.
When we replace human-authored information with AI-generated summaries, AI regulation becomes a necessity. Not just because AI might get it wrong, but because when it’s being presented as the infallible source of truth, it must be right.
AI excels at a multitude of tasks, but one thing it lacks is the ability to form true opinions. As AI summarization becomes the default, we need to ask: What is the filter for this information? For what purpose? And who stands to benefit?
Many focus on the potential threats of “unfettered” rogue AI. It might be more useful, however, to turn our gaze towards the large corporate entities wielding these technologies. As pre-digested information takes top billing, primary content will become harder to access, diminishing the market for creating it. What does the information market look like when direct access to sources is a relic of the past?
It certainly seems like the beginning of the end of search as we know it, reshaping the web and turning the search engines themselves into the gatekeepers of the information we consume. This, to me, represents the most genuine “AI threat” we currently face: not misinformation or AI “hallucinations,” but corporations dynamically delivering authoritative information through a thick web of biases while making the primary sources harder and harder to find. And once these corporations seize control of how and what we learn, how can you even begin to challenge them?
Google, OpenAI, Microsoft, and others have introduced tool suites under the banner of “user safety.” But the limitations and biases inherent in these tools, coupled with pre-determined ethical frameworks, can significantly alter our current information landscape, leading us into an endless hall of mirrors.
Let’s recognize this shift for what it truly is: not technological progress, but a play for control. By endorsing regulation that champions an “accepted truth,” these corporations can easily put the brakes on any competition that might question their authority. It’s fully automated censorship on a scale beyond anything Orwell could have imagined.
In the process, they are purging the marketplace of opinion and discovery, bolstering their control, and solidifying their status as the only authority. Getting the government to provide regulation that cements that status would be the icing on the cake.