Open Book Is the Real World
Real engineers do not work alone inside their heads.
One of the best engineering professors I ever had understood something that a lot of technical interviewers still pretend not to know:
The real world is open book.
Not metaphorically. Literally.
When you are building something that matters, you have the documentation. You have the internet. You have source code. You have tests. You have logs. You have a debugger. You have Stack Overflow. You have teammates. You have old commits. You have issue threads. You have prior art.
And now you have AI.
That is not an exception to engineering.
That is the environment engineering happens inside.
Fluency Is Not Competence
A closed-book test can be useful in narrow cases. It can check whether someone has baseline fluency. If you are claiming to be a JavaScript engineer, you should not be surprised by closures, async behavior, scope, equality, or promises.
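That baseline is easy to illustrate. A minimal sketch, my example rather than the essay's, of the kind of JavaScript behavior such fluency questions probe:

```javascript
// Closures: the inner function keeps access to `count`
// even after makeCounter has returned.
function makeCounter() {
  let count = 0;
  return () => ++count;
}
const next = makeCounter();
console.log(next(), next()); // 1 2

// Equality: loose equality coerces types; strict equality does not.
console.log(0 == "0");  // true
console.log(0 === "0"); // false

// Async behavior: a resolved promise's callback runs only after
// the current synchronous code finishes.
Promise.resolve("later").then(console.log);
console.log("first"); // "first" prints before "later"
```

None of this should require a lookup. It is the floor, not the ceiling.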
But baseline fluency is not the same thing as engineering competence.
The deeper skill is knowing how to move through a system with the resources available.
A junior engineer often thinks competence means having the answer immediately.
A stronger engineer knows competence means finding the answer responsibly.
There is a huge difference.
The Ego of Knowing
When I was younger, I had more ego around this. I wanted to know. I wanted to be the person who could look at the problem and produce the answer from pure internal fire. No hesitation. No lookup. No help.
That image is seductive.
It is also childish.
Real systems are too large for that.
A production codebase is not a pop quiz. It is a living organism with history, scar tissue, hidden assumptions, and failure modes that only reveal themselves when the system is under load.
You do not master that by memorizing trivia.
You master it by learning how to ask better questions.
What changed? Where is the boundary? What is the contract? What does the test prove? What does it not prove? What does the log say? What does the user experience say? What is the smallest safe experiment?
That is open-book thinking.
AI as a Resource
AI fits into that world naturally. It is another resource in the field. A powerful one. A dangerous one if used badly. But still a resource.
The problem is not that AI gives you help.
Engineering has always involved help.
The problem is when you use help without understanding, without verification, without ownership.
That is not an AI problem. That is an operator problem.
If you paste a function from Stack Overflow without understanding it, that is bad engineering.
If you copy an internal pattern without checking whether the assumptions still apply, that is bad engineering.
If you accept an AI-generated implementation because it “looks right,” that is bad engineering.
But if you use AI to generate options, inspect the diff, run tests, ask what failure modes you missed, compare against existing architecture, and then take responsibility for the result?
That is not cheating.
That is modern engineering.
The old model imagined the engineer as a sealed container of knowledge.
The better model sees the engineer as an operator inside a network of tools.
That operator still needs skill. Maybe more skill than before.
Because the more powerful the tools become, the more dangerous bad judgment becomes.
Knowledge as Orientation
The open-book world does not remove the need to know things. It changes what knowledge is for.
Knowledge becomes orientation.
It tells you where to look, what to distrust, what to test, what to ask, and when the answer is probably wrong.
That is why open book is not easier.
It is more honest.
It reflects the actual job.
You do not hire a brain in a box.
You hire someone who can move through the world, use what is available, preserve judgment under pressure, and leave behind systems that work.
That has always been the work.
AI just made it harder to pretend otherwise.
Part of The Operator Series