Artificial Intelligence has become an expansive umbrella term, obscuring an important emerging distinction. There are now two distinct paths forward: AI as a service, and direct access to AI technology itself.
AI services primarily provide functionality without visibility into their inner workings. Chatbots like Claude are convenient AI interfaces, but they offer only a glimpse of what is really occurring in the silicon that powers them.
Adobe’s Sensei and Microsoft’s Copilot integrate intelligence into workflows and existing offerings like email, word processors and more. These tools deliver impressive results, but they give the user only a black box: easy to use, impossible to tinker with.
Access to the technology itself opens up very different possibilities. Companies like Stability AI and Meta are enabling developers to customize open source models like Stable Diffusion and Meta’s LLaMA. By adjusting these models directly and with intention, developers can achieve innovations that closed AI tools simply won’t allow.
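The practical difference is access to the weights themselves. As a toy illustration (plain Python, a hypothetical one-parameter model, not any real library's API), here is what "adjusting a model directly" means at its simplest: you hold the parameters, so you can fine-tune them on your own data, something a sealed service never exposes.

```python
# Toy illustration: with open weights, you can update a model yourself.
# This is a hypothetical one-parameter linear model, not a real LLM.

def predict(w, x):
    """The 'model': a single learned weight applied to the input."""
    return w * x

def fine_tune(w, data, lr=0.01, epochs=200):
    """Plain gradient descent on squared error -- direct weight access."""
    for _ in range(epochs):
        for x, y in data:
            error = predict(w, x) - y
            w -= lr * 2 * error * x   # gradient of (w*x - y)^2 w.r.t. w
    return w

# A "pretrained" weight (arbitrary) adapted to our own data, where y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = fine_tune(0.5, data)
print(round(w, 2))  # converges toward 3.0
```

Real open models involve millions of parameters and frameworks rather than a loop like this, but the principle is the same: the weights are yours to change.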
This difference has historical parallels. Using a service gives basic utility, like using an app. Building an app yourself demands at least some understanding of the underlying technologies. Services readily apply existing solutions; direct access to technology spurs the completely new perspectives that allow it to move forward.
For creators and developers, accessing AI at the technology level unlocks radical experimentation. You can modify architectures to suit specific needs, combine approaches in provocative hybrids, and adapt functionality based on direct feedback. AI becomes a dynamic tool rather than a static, predetermined output.
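To make "hybrids" concrete, here is a small sketch (plain Python, every name hypothetical): splicing a hand-written rule in front of a stand-in learned component. This kind of recombination is only possible when you control the pipeline directly rather than calling a sealed API.

```python
# Hypothetical hybrid: a learned scorer combined with a rule-based guard.
# With direct access, you can splice components like this at will.

def learned_score(text):
    """Stand-in for a learned model: scores text by average word length."""
    words = text.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def rule_guard(text):
    """Hand-written rule layered on top: veto blank input outright."""
    return bool(text.strip())

def hybrid(text):
    """Combine the two: the rule gates, the model scores."""
    return learned_score(text) if rule_guard(text) else 0.0

print(hybrid("open models invite tinkering"))  # the model's score
print(hybrid("   "))                           # the rule's veto: 0.0
```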
Of course, these opportunities inevitably raise ethical concerns. Misuse of technology threatens privacy and enables control. But that same unrestricted access can also empower decentralized mitigation of dangers. Rather than simply layering on regulation, proper independent oversight attuned to the unique properties of AI could help maximize benefits while minimizing harm.
Large tech companies often argue that restriction increases safety. But their profit-driven motives warrant skepticism. Truly evaluating these claims requires a transparency that is currently lacking. Who watches the watchers?
Today, the costs and barriers of wrangling the massive amounts of data needed to build an LLM limit everyday AI access. But open source models and cloud computing are making exploration feasible for some. As methods advance, expect direct use of AI technology to become democratized, much as personal computing did in the 1980s.
Until then, it’s worth remembering that even as we use the available services, we should keep pushing for expanded technological access. Hands-on use of the tools prepares us for coming shifts, and for those fleeting moments when the door slips open and we can run into the engine room and grab control for ourselves. And our experimentation gives us access to the data we can use to innovate and improve the core technology in ways no one else can imagine.
AI as a service will continue driving profits. But for true breakthroughs, nothing yet beats the touch of human hands on the levers of technology itself. Our creativity flourishes when technology is an adaptable tool, not a static byproduct. And while the black boxes can help us solve the problems of today, it’s always worth knocking at the engine room door and seeing what the future looks like.