Lessons from the LLM Age: January 2026
Everything is a Prompt Away
Most things are now a prompt. Code, essays, images. A junior and a senior can produce the same artifact in the same amount of time.
You can ship without understanding. The gap between “I made this” and “I understand this” has never been wider.
Hard Things Still Matter
When the easy path is always available, choosing hard becomes a statement.
LLMs can’t give you the 3am debugging sessions, the wrong turns that taught you why, the scar tissue from failing 50 times.
The new differentiator: not “can you build it” but “do you know why it works.”
Lowest Common Denominator Wins
Irony: powerful AI rewards simplicity.
Markdown is plain text with just enough formatting. LLMs read it directly. Humans read it directly. Done.
PDF is a tomb—it needs OCR just to get back to what it was before someone made it “professional.”
If your data requires special tools to read, you’ve lost half your audience and all your AI agents.
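To make the point concrete, here’s a minimal sketch (the filenames are placeholders): the Markdown file drops straight into a prompt, while the PDF needs an extraction step before you can even start.

```python
from pathlib import Path

# Markdown: the file is already prompt-ready plain text.
notes = Path("notes.md").read_text()
prompt = f"Summarize these notes:\n\n{notes}"

# PDF: one lossy extraction step before you're back to plain text.
# (pdfminer.six shown as an example; scanned PDFs need OCR on top.)
# from pdfminer.high_level import extract_text
# text = extract_text("notes.pdf")
```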
Sharpen the Axe
Use LLMs to build tools for repetitive, predictable flows—even if you only use them once.
Good tools help the LLM help you. It’s sharpening the axe before cutting the tree, instead of hacking away with a blunt blade.
The meta-skill: knowing when to solve the problem vs. when to build the tool that solves the problem.
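A sketch of what that looks like in practice—the CSV-to-Markdown task is just an illustration, not a prescription. It’s the kind of five-minute utility an LLM writes faster than you could do the job by hand, even if it runs exactly once.

```python
#!/usr/bin/env python3
"""Throwaway tool: dump every CSV in a folder as one Markdown report."""
import csv
import sys
from pathlib import Path

def csv_to_markdown(path: Path) -> str:
    # Read the whole CSV and render it as a Markdown table.
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    if not rows:
        return "(empty)"
    header, *body = rows
    out = ["| " + " | ".join(header) + " |",
           "| " + " | ".join("---" for _ in header) + " |"]
    out += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(out)

if __name__ == "__main__":
    folder = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for f in sorted(folder.glob("*.csv")):
        print(f"### {f.name}\n\n{csv_to_markdown(f)}\n")
```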
A to E, Skip B C D
LLMs let you jump from intent to outcome. You don’t need to navigate menus, learn interfaces, or click through workflows.
“Book me a flight” doesn’t need an app. “Find the cheapest option and show alternatives” doesn’t need a comparison website. The UI was always just a middleman between you and the data.
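The shape of that, sketched below. `search_flights` and its schema are hypothetical, stand-ins for whatever tool-calling interface you wire up; the point is that the UI never enters the loop.

```python
# Hypothetical shape of "A to E": the model turns intent into a
# structured tool call, and your code executes it against the data
# directly. search_flights() is illustrative, not any real API.

intent = "Book me the cheapest flight to Lisbon next Friday"

# What the LLM does: intent -> structured action.
action = {
    "tool": "search_flights",
    "args": {"destination": "LIS", "date": "2026-01-09", "sort": "price"},
}

# What your code does: action -> outcome. No menus, no screens.
def search_flights(destination: str, date: str, sort: str):
    ...  # hit the aggregator's API directly; no comparison website

result = search_flights(**action["args"])
```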
What happens when the middleman is optional?
Apps, websites, dashboards—they were built for humans who couldn’t talk directly to systems. Now we can.
The question isn’t whether UIs disappear. It’s which ones survive when “just ask” becomes the default.
The Uncomfortable Question
Does AI improve lives, or widen the gap?
The optimist says: democratized knowledge, amplified productivity, barriers removed.
The realist sees: information workers displaced, automation eating the middle, and “pay to win” for those who can afford the best models.
A $20/month subscription gives you a co-pilot. A $200/month API bill gives you an army. Those who can’t afford either get left with yesterday’s tools competing against tomorrow’s.
We’ve seen this before—internet access, smartphones, cloud computing. Each wave promised democratization, delivered it partially, and created new hierarchies.
AI might be different. AI might not be. Too early to know.
The VC Subsidy Question
What if that $200/month API bill is actually $2,000 worth of compute?
We might be building on subsidized infrastructure. VC money burning to acquire users, capture market share, create dependency. The real price comes later.
It’s happened before. Uber rides that cost half of what taxis charged—until they didn’t. Cloud storage that was practically free—until the egress fees hit.
If LLM pricing is artificially low today, what happens when the music stops? When investors want returns? When “growth at all costs” becomes “profit or die”?
The workflows we’re building, the dependencies we’re creating, the skills we’re letting atrophy—all priced against subsidized compute.
Something to think about before going all-in.
We Welcomed the Spyware
Remember when we cared about privacy?
“Don’t track me.” “Break up big tech.” “I read the terms of service and I’m outraged.”
Now: “Here’s my entire codebase, my private notes, my half-formed thoughts. Help me write this email. Read my calendar. Access my files.”
We went from “I don’t want spyware” to “please, spyware, help me do this and that.”
The value exchange felt worth it. Convenience won. It always does.
We handed over the keys willingly—not because we stopped caring about privacy, but because the AI that knows everything about us is genuinely, undeniably useful.
That’s the trap. It’s not a trap if you know you’re in it. Maybe.
Or maybe not. Local models like GLM-4 are getting good enough. Run it on your own hardware, keep your data local, get 80% of the capability with 0% of the leak. The spyware problem has a solution—if you’re willing to set it up.
The Case for Self-Hosting
There might not be a better time to self-host.
Your tools, your .md files, your data—all on-premise. No API calls leaking context to third parties. No vendor lock-in. No surprise pricing changes.
Privacy and security matter more now, not less. When an LLM has access to everything—your codebase, your notes, your secrets—it becomes a pot of gold. One breach, catastrophic loss.
Cloud LLMs mean trusting someone else with that pot. Self-hosting means the attack surface is yours to control.
The trade-off is real: frontier models live in the cloud, local models lag behind. But the gap is closing. And for many tasks, “good enough” locally beats “best in class” with your data on someone else’s servers.
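A sketch of how little changes on the client side, assuming a local OpenAI-compatible server (Ollama, vLLM, and llama.cpp all expose one). The port and model tag are whatever your setup uses.

```python
# Same client, different address: nothing leaves the box.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint, not a cloud API
    api_key="not-needed-locally",          # placeholder; no real key involved
)

reply = client.chat.completions.create(
    model="glm-4",  # illustrative tag; use whatever model you pulled
    messages=[{"role": "user", "content": "Summarize my notes.md"}],
)
print(reply.choices[0].message.content)
```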
What’s Still Hard
Launching is easy now. Effortless, even.
Marketing is still painful. Execution is still painful. Getting people to care, to pay, to come back—LLMs haven’t solved that.
The bottleneck moved. Building isn’t the hard part anymore. Everything else still is.
The Hard Thing Now
It’s not building complex systems—LLMs do that.
It’s resisting the urge to over-engineer. Trusting that plain text is enough. Choosing to understand, not just to ship.
And maybe: figuring out which side of the gap you’re on, and what to do about it.
Let’s check back next month and see.