03 / AI tooling

AI Is Not Here to Replace Your Brain

The best AI tools reward clear thinking, good planning, small scopes, and human judgment. Used that way, they do not replace you. They multiply you.

People talk about AI like it is either a miracle or a threat. I think both frames miss what is actually interesting.

The best AI tools right now, especially coding tools like OpenCode and Claude Code, are not magic boxes that turn vague wishes into perfect software. They are closer to extremely fast junior engineers with infinite patience, no ego, and questionable judgment. If you use them well, they can be incredible. If you use them badly, they will generate a lot of confident nonsense.

That distinction matters.

A lot of people try an LLM once, ask it to "build an app," watch it produce a mess, and conclude the technology is overhyped. But that is like handing a construction crew a napkin sketch and then blaming the crew when the house has no plumbing. The tool did not fail because it lacked power. It failed because the instruction was bad.

AI rewards people who can think clearly.

You do not necessarily need to know how to build every piece of software by hand anymore. That is the uncomfortable part for a lot of developers. The barrier is moving. Raw implementation skill still matters, but it is no longer the only thing that matters. You can get surprisingly far if you understand what you want, why you want it, how the pieces should fit together, and what questions to ask along the way.

The skill is becoming less "Can I personally write every line of this?" and more "Can I design the system well enough that a machine can help me build it?"

That is a different kind of expertise. It is still expertise.


The model is only as good as the shape of the task

LLMs are bad when the task is badly shaped.

If you give a model a vague instruction, it will fill in the gaps. Sometimes it fills them in correctly. Often it does not. It will invent architecture, choose libraries, create abstractions, write tests that do not test anything, and confidently wire together code that looks reasonable until you actually run it.

This is where people get frustrated. They treat the model like an oracle, then get mad when it behaves like autocomplete with ambition.

The better approach is planning first.

Before asking an AI agent to build something, you should know what you are building. Not every line of code, but the shape of it. What are the constraints? What should the interfaces look like? What are the failure cases? What should not be changed? What needs to be tested? What would make the solution unacceptable?

This is why planning mode is so underrated.

A good planning step forces the model to slow down. Instead of immediately writing code, it has to explain the approach. You can inspect the assumptions. You can catch the bad architecture before it turns into ten files of plausible garbage. You can ask it to break the work into smaller pieces. You can tell it what to ignore.
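To make that concrete, here is a minimal sketch of what a plan-first loop can look like. Everything in it is illustrative: `ask_model` is a placeholder for whatever client or harness you actually use (OpenCode, Claude Code, a raw API call), and the prompt wording and approval gate are one possible shape, not any tool's built-in planning mode.

```python
# A minimal plan-first sketch. `ask_model` is a stand-in for your own
# model client; the workflow shape is the point, not any specific SDK.

PLAN_PROMPT = """You are planning, not coding.
Task: {task}
Before writing any code, list:
1. The constraints you are assuming.
2. The interfaces you will touch, and the ones you will not.
3. The failure cases that must be handled.
4. The tests that would prove the change works.
Do not produce an implementation yet."""

def plan_then_build(task: str, ask_model) -> str:
    # Step 1: force an explicit plan so the assumptions are inspectable.
    plan = ask_model(PLAN_PROMPT.format(task=task))
    print(plan)

    # Step 2: a human gate. Bad architecture gets caught here, before it
    # turns into ten files of plausible garbage.
    if input("Approve this plan? [y/N] ").strip().lower() != "y":
        raise SystemExit("Revise the task and plan again.")

    # Step 3: only now ask for code, scoped to one step of the approved plan.
    return ask_model(f"Implement step 1 of this approved plan only:\n{plan}")
```

The specific prompt does not matter much. What matters is that the plan exists as an artifact you can read before any code does.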

Most people skip this, then complain that the output is bad.

But if you do not think ten steps ahead of what you are asking the model to do, the model will happily think zero steps ahead for you.


Agents work best when they have small jobs

The most interesting thing happening right now is not "one AI writes all your code." It is orchestration.

Tools like OpenCode, Claude Code, Codex, and similar harnesses make it possible to treat agents like scoped workers. One agent can inspect the codebase. Another can write a plan. Another can implement one piece. Another can review the diff. Another can run tests and explain failures.

That sounds fancy, but the underlying idea is simple: do not give one agent a giant ambiguous job. Give several agents small, legible jobs.
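Here is a rough sketch of what that looks like when you write it down. The `run_agent` function stands in for however your harness spawns a scoped agent, and the role names are made up for illustration; the only real idea is that each stage has one job and produces one inspectable artifact.

```python
# Orchestration as small, legible jobs. `run_agent` is a placeholder for
# your harness or API; roles and prompts here are illustrative only.

def run_agent(role: str, instructions: str, context: str = "") -> str:
    """Placeholder: call your harness with a narrow role and a narrow task."""
    raise NotImplementedError

def ship_small_change(task: str) -> dict:
    # Each agent gets one job; each output is small enough to audit
    # before the next stage runs.
    survey = run_agent("code-reader", f"Summarize the code paths relevant to: {task}")
    plan   = run_agent("planner", f"Write a step-by-step plan for: {task}", context=survey)
    diff   = run_agent("implementer", "Implement only step 1 of this plan.", context=plan)
    review = run_agent("reviewer", "Review this diff for bugs and scope creep.", context=diff)
    tests  = run_agent("tester", "Write tests that would fail without this diff.", context=diff)
    return {"survey": survey, "plan": plan, "diff": diff, "review": review, "tests": tests}
```

None of this is sophisticated. It is just scope, made explicit.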

Small agents are easier to trust because you can tell what they are doing. If an agent is only responsible for reviewing a pull request, you know how to evaluate its work. If an agent is only responsible for writing tests for one module, you can inspect whether the tests are meaningful. If an agent is only responsible for summarizing a code path, you can compare its explanation to the files.

Scope creates accountability.

This is also why visible reasoning and tool calls matter. In most modern harnesses, you can see what the agent is reading, what commands it is running, what files it is editing, and what MCP tools it is calling. That visibility is not a small detail. It is the difference between "the AI did something weird" and "I can audit the chain of decisions that produced this change."
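You can get a toy version of that auditability yourself, without assuming anything about a particular harness: route every tool call through one wrapper that appends to a log. The tool functions below are stand-ins, not any product's real API.

```python
# A toy audit trail: every tool call goes through one choke point and is
# written to a JSONL log, so the chain of decisions can be replayed later.
import json
import time
from pathlib import Path

AUDIT_LOG = "agent_audit.jsonl"

def audited(tool_name, tool_fn):
    """Wrap a tool so each call is appended to the audit log."""
    def wrapper(*args, **kwargs):
        result = tool_fn(*args, **kwargs)
        with open(AUDIT_LOG, "a") as log:
            log.write(json.dumps({
                "ts": time.time(),
                "tool": tool_name,
                "args": list(args),
                "kwargs": kwargs,
            }) + "\n")
        return result
    return wrapper

# Whatever read/run/edit tools the agent exposes get wrapped once, and
# "the AI did something weird" becomes a log you can actually read.
read_file = audited("read_file", lambda path: Path(path).read_text())
```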

When people say they do not trust AI, I get it. You should not blindly trust it. But you also should not blindly trust a human contractor, a Stack Overflow answer, or a dependency you installed five minutes ago. You create trust through scope, review, tests, logs, and repeatable process.

AI does not remove that discipline. It makes the discipline more important.


This is probably the most open this era will ever be

Another thing people miss: right now, the ecosystem is unusually open.

There are hosted models, local models, open source harnesses, CLI tools, editor integrations, MCP servers, and agent frameworks all evolving at once. Nobody has fully locked the workflow down yet. You can mix and match tools. You can run agents in your terminal. You can inspect their output. You can wire them into your own systems. You can build weird little workflows that would have sounded impossible a few years ago.

That may not last forever.

Eventually, parts of this will consolidate. Platforms will get more polished, more expensive, and more closed. But at this moment, there is a strange window where the tools are powerful enough to be useful and still open enough to be shaped by individuals.

That is rare.

And local models are going to make it even stranger. The idea that you will be able to run useful coding agents on your own machine, for free or close to free, is not science fiction anymore. Even if the models stopped improving tomorrow, the current level of capability is already enough to change how people build things.

Not because AI can do everything.

Because it can do a lot, if you know how to drive it.


The future belongs to people who can ask better questions

I do not think people need to be afraid of AI in the abstract. They need to become more literate with it.

That does not mean believing every demo. It does not mean replacing your judgment with whatever the model says. It definitely does not mean letting an agent rewrite half your codebase while you look away.

It means learning the new interface.

Ask for plans before code. Break large tasks into small ones. Make the model state assumptions. Force it to write tests. Read the tool calls. Review the diff. Keep architecture in your own head. Use agents like workers, not prophets.

The people who get the most out of AI will not be the people who worship it. They will be the people who can direct it.

That is the part that makes me optimistic. AI does not make thinking obsolete. It makes thinking more valuable. The clearer you are, the more leverage you get. The sloppier you are, the faster you can create a mess.

So no, AI is not something to panic about.

But it is something to learn.

Because if you can plan well, ask good questions, and keep your standards high, these tools are not replacing you. They are multiplying you.