"It's just a little thing." A function name. A tiny script. A regex. A quick summary. A "sanity check" when you're tired. A tiny bit of UI work. Some repetitive work that you know like the back of your hand but just don't want to write over and over.
I get it.
That phrase "just a little thing" is the marketing wedge. It's a dependency generator: not by announcing from a rooftop that you are about to outsource your thinking and your security to a black box, but by offering you a thousand tiny conveniences until you can't imagine working without them.
Current LLMs do not come free. Not for big things. Not for small things. Not even for the stupid little things you think "don't count."
Hello. I'm visiting you and your neighbors today to share some good news.
In this day and age, it seems like everything is getting worse. This can make understanding reality difficult. In turn, building competence is strenuous, at best.
Well, not to worry. We have the answers for you. See, we have some men in Silicon Valley who have a direct line to Intelligence.
If you join our AI training course, you will be saved and you too can enjoy this access. This access gives you all the information you need in a way that's easy to digest so you can properly utilize all the benefits that Intelligence wants to give you.
Note that there will soon come a time when you can no longer work without it. During that time, you will be left behind. This is referred to as Artificial General Intelligence.
Take the pitch above and replace "Intelligence" with "LLMs," "saved" with "employable," "training course" with "workflow," and "men in Silicon Valley" with "tech executives." Then ask why it suddenly sounds like a cult instead of a product demo. [8]
This is the emotional payload of the AI hype cycle: abdicate your cognitive sovereignty now, and you'll be rewarded later.
People keep talking about LLMs like they hired an intern who never sleeps. That is not what you bought.
You bought a slot machine for your attention. Pull the lever, get a plausible answer, feel the dopamine. Pull it again. That's the whole loop. Vegas exists and causes far less societal harm along the way, so if you want that, just go do that.
If you think that sounds harsh, good. It should. Because a lot of what is being sold as "productivity" is just addiction with a corporate invoice attached to it. [7]
Every prompt is a physical event. It hits real chips, in real buildings, drawing real power, producing real heat, demanding real cooling.
"But my prompt was short." Multiply that by everyone else's short prompts. Multiply that again by the new default posture of modern work: "ask the machine first."
Independent benchmarking work has quantified the per-prompt footprint of LLM inference and has shown that long prompts and high-end models can consume energy on the order of tens of watt-hours per request, with water and carbon impacts that scale fast under heavy usage. [3]
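To make that multiplication concrete, here is a back-of-envelope sketch. Every number in it is an assumption picked for illustration (per-prompt energy, prompt counts, headcount), not a figure taken from the benchmark in [3]:

```python
# Back-of-envelope sketch: how "small" prompts add up.
# All numbers below are illustrative assumptions, not measurements from [3].

WH_PER_PROMPT = 30          # assume ~30 Wh for a long prompt on a high-end model
PROMPTS_PER_DAY = 40        # assume one developer who "asks the machine first"
DEVELOPERS = 10_000         # assume a mid-sized org's engineering headcount
WORKDAYS_PER_YEAR = 230

yearly_kwh = WH_PER_PROMPT * PROMPTS_PER_DAY * DEVELOPERS * WORKDAYS_PER_YEAR / 1000
print(f"~{yearly_kwh:,.0f} kWh/year")   # ~2,760,000 kWh/year
# Roughly the annual electricity use of a few hundred US households,
# generated by one company's "just a little thing" habit.
```

Change the assumptions however you like; the point is that the per-prompt number is small and the aggregate is not.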
The real scam is that the product is positioned as "a small convenience," while the infrastructure behind it behaves like a new industrial sector.
AI infrastructure doesn't stay inside the data center fence line. The grid expands. Generation gets added. Cooling and water become local political fights. And the public ends up subsidizing private compute in a hundred quiet ways that never appear in a pricing page.
This is why you keep seeing reporting about data centers driving electricity price pressure, and lawmakers asking whether households are paying for the AI buildout. [4] [5]
So no: "free tier" doesn't mean "free." It means "the bill is being routed somewhere else."
"I didn't paste anything sensitive." People say this right before they paste something sensitive: stack traces, logs, internal hostnames, redacted configs that still leak structure, snippets that reveal how authentication really works.
And the real trap is normalization. The prompt box stops feeling like a third party. It starts feeling like a private scratchpad. That's how leakage becomes culture. This is further exacerbated by anthropomorphic feedback, which lulls you into a false sense of confidence.
Once leakage becomes culture, it becomes supply chain. And once it becomes supply chain, it becomes breach. Haven't we had enough of NPM and PyPI packages?
Security failures are often born of convenience. They look like "just this once." They look like "we'll fix it later." They look like "it's only internal."
Now add LLMs and you get a new failure mode: plausible-looking output that feels safe because it reads well, plus workflows that encourage copying sensitive context into systems you do not control.
There is serious work documenting the cybersecurity risks of AI-generated code and AI-integrated development, including new attack surfaces and the difficulty of auditing what gets smuggled into production when output is cheap and attention is scarce. [2]
There is a reason "small tasks" matter. Small tasks are where craft lives. They are where you practice forming a mental model, not just receiving an answer-shaped blob of text.
If your default response to friction is "ask the model," you aren't removing drudgery. You are removing the reps that build competence. Now the machine is training you to end up beholden to it.
Research comparing LLM use to web search has already started to show patterns consistent with shallower learning, even when users feel productive. [1]
If you wear an exoskeleton every time you lift a box, your legs do not "stay strong." They atrophy. And eventually you cannot lift without it.
If you're a business whose income depends on selling token usage, with token usage playing the part of the exoskeleton in this analogy, wouldn't you want people dependent on it so you can maximize revenue? Even if it means they atrophy, suffer, and die sooner, their cost of living goes up, their families are harmed, their kids suffer, and so on? After all, why should you, the business selling AI, care about any of that? If you get AI to do what most humans do, then you can solve all those problems... right?
If that sounds like paranoia, consider that researchers are already explicitly discussing AI "dependence" and "addiction" dynamics as a serious phenomenon in conversational LLM usage. [7]
A big problem, albeit not the whole problem, is that these systems subvert human language, and humans begin adopting their robotic idioms in return. [6]
The ability to obscure accurate or relevant information under mountains of plausible nonsense makes it all worse. [6]
Menial work actually increases when developers are turned into full-time reviewers of machine output, moving the burden from writing to auditing, from building to triaging. [6]
Stop participating in the mercantilism of complexity. This is not progress. It is a business strategy. [6]
Simply put, we need to stop treating LLM usage as "innocent."
If you use these tools at all, treat them like untrusted output from an unknown third party.
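Concretely, that discipline can look something like the sketch below. The regex patterns and helper names are hypothetical illustrations of the posture, not a complete or recommended safeguard:

```python
# A minimal sketch of "untrusted third party" discipline.
# The patterns and function names are illustrative assumptions,
# not a comprehensive redaction or review scheme.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def refuse_if_sensitive(prompt: str) -> str:
    """Refuse outright if the prompt looks like it leaks credentials."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain secrets; not sending it anywhere.")
    return prompt

def review_before_use(generated_code: str) -> None:
    """Never execute or commit model output directly; force a human gate."""
    print("--- machine output, unreviewed ---")
    print(generated_code)
    print("--- read it, understand it, then decide ---")
```

The code is not the point; the posture is. Nothing leaves your machine unchecked, and nothing that comes back runs unread.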
The price you pay isn't only money. It's attention. It's skill. It's privacy. It's grid capacity. It's trust. It's so much more: people's sanity, their lives. It goes on and on.
And the whole con relies on you believing that "small" uses don't count.
There is no such thing as a free prompt.