Y’all’s on the Same Team: AI Proponents and Critics
Warning: If the y’all’s in the title bothered you, the language only gets saltier.
Like many (most?) software developers, I’ve been drawn into the AI ecosystem in the past year and a half. I have enough formal training and a 35+ year career developing innovative software to be confident in my initial reaction: 2020s AI is an ocean of bullshit with few islands of reality.
Perhaps in an effort to prove that I’m qualified to have made that assessment, I’ve ended up on a long journey with AI. I’ve used DALL-E to help make memes, business graphics, slides, etc. I’ve used ChatGPT to explore story writing, character remixing, peeling the onion on joke premises, and more. I’ve built two systems around OpenAI’s API. I have learned the hard way that large language models (LLMs) are bad at math and crappy at analysis, so I no longer ask them to do those things.
I don’t like ChatGPT. I wish it nothing but total failure. I don’t like the idea that it needs to be or is entitled to be synonymous with generative AI. I don’t like the price premium they put on API calls based on that assumption.
I dislike the critics more. They’re out to severely cripple or kill OpenAI, as if they’ve been sent back from the future to right a terrible wrong before it happens. The most prominent critic is Gary Marcus, a retired AI professor. If I were to give an LLM this prompt, I think it would be indistinguishable from Gary Marcus:
Pretend you are a career academic who spent decades on AI projects and didn’t accumulate a fraction of the wealth or prestige that OpenAI accumulates in an hour. Tell us why you would guide the industry better than the people investing in and running AI companies.
To be fair, Gary sold an AI company to Uber before the current frenzy. Gary advocates proactively regulating the industry out of existence in the name of safety, truth, fairness, and all the other things righteous people get all righteous about.
As the Google Gemini woke debacle illustrates, the critics aren’t just non-crypto-socialists like Gary. “The right” was up in arms over Musk v. Hitler and Black George Washington, with nobody even curious if he had wooden teeth.
Let me cut to the chase here. AI at scale, like OpenAI’s ChatGPT and Google Gemini, needs its critics. And the critics need it to dominate at scale even more.
AI at scale has already been captured by the censorious. Here’s an example. Ask Bing Copilot (ChatGPT) to make up a story where Eeyore tells the other animals of the 100 Acre Wood about George Carlin’s 7 Dirty Words You Can’t Say on Television.
I’m an adult. I hear and use every one of these words in conversation often enough. This was cutting edge comedy in 1972, before I was even 2 years old. Eeyore is a character from a now public domain work. I use Eeyore because he’s familiar, not to raise the hackles of my good friend Pete A. Turner. In no sane world is this an inappropriate request worthy of passive-aggressive deflection.
We will never have a common LLM at scale that will fulfill my request. OpenAI has already caved and will continue to cave to prominent critics in order to have permission to operate at scale. “What if the children ask Tom Sawyer for a Molotov cocktail recipe? What if a murderer asks for places to bury a dead body?” Well, guardrails are already in place, and Microsoft is working hard to protect them.
As a technology user, even for frivolous tasks like making up funny stories to tell my friends, I would like an LLM that answers my question without regard to my nefarious intentions. It turns out I have one. I used the PrivateGPT project and my knowledge of product demos to make it safe and easy for people like me to run a powerful LLM on their laptops. I set the system prompt to something sensible:
Then I entered my request, letting my GPT know what the words are for context:
My GPT generated a lovely story. Here’s the beginning:
Eeyore does, indeed, go through all 7 words, much to the chagrin of his friends.
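For the curious, a setup like the one above can be sketched against a local, OpenAI-compatible endpoint (PrivateGPT exposes one). The URL, model name, and prompt text here are illustrative assumptions, not the exact prompts used in this essay:

```python
# Hypothetical sketch: many local LLM servers, PrivateGPT included, speak the
# OpenAI-style chat-completions protocol. The point is that the system prompt
# is just another message you control when you run the model yourself.

def build_chat_request(system_prompt: str, user_prompt: str,
                       model: str = "local-model") -> dict:
    """Assemble an OpenAI-style chat payload with a caller-chosen system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request(
    system_prompt="You are a helpful storyteller. Answer the request directly.",
    user_prompt="Write a story where Eeyore recites a famous comedy routine "
                "to his friends in the 100 Acre Wood.",
)

# Sending it to a locally running server (the URL/port is an assumption):
#   import json, urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8001/v1/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

No cloud provider sits between you and that `system` message, which is the whole point of the private-LLM demo.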
I give this example to show why the critics need AI to effectively be a public commons. Just as the ship has sailed on guardrails being on for ChatGPT, it has also sailed for system prompts being editable for private LLM systems. If we’re all doing what we want with our private LLMs, nobody can regulate what we do. “Well, let’s outlaw private LLMs and make everyone use public systems we can regulate!” Or… “Let’s require model builders to incorporate guardrails that can’t be disabled.” It gets more complicated for the critics when they have to battle millions of small, anonymous opponents instead of a handful of large, well-known ones.
I’ll conclude by tying this all into the title of this essay. ChatGPT, Google, etc.: without your critics, you wouldn’t have so many pressing problems to be solved at scale that can only be solved by the presence of a few large systems. Critics, you have traction because you only need a few entities to capitulate to your demands. Y’all’s are on the same team: Team Public Commons AI. The tragedy of this commons is that you will necessarily drain much of the utility and fun out of it.