When code reviews become obsolete

We spend so much time in code reviews arguing about implementation details. Should this be a map or a loop? Why did you use this pattern instead of that one? Is this variable name clear enough?

These conversations made sense when humans were writing all the code: understanding someone else’s implementation matters because you might need to maintain it, debug it, or extend it later.

But what happens when AI generates the code?

I can ask an AI agent to build an API that talks to a database, processes the data, and serves it to the frontend. The code works. The tests pass. But the implementation might be completely different each time I generate it.

Will this matter?

The conversation shifts. We’re not debating whether to use a for loop or a map. We’re debating whether we tested the edge case where users upload malformed data. Whether our performance benchmarks actually reflect real usage patterns.
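
To make that shift concrete, here’s a rough sketch of the kind of test that conversation centers on. Everything in it is illustrative: the Flask app factory, the /upload endpoint, and the error format are hypothetical stand-ins, not taken from any real project. The point is that the test pins down behavior the user cares about without saying anything about how the handler is implemented.

```python
# A minimal, hypothetical behavior-level test. The app factory, endpoint,
# and error format are illustrative assumptions, not a real codebase.
import io

import pytest
from myapp import create_app  # hypothetical application factory


@pytest.fixture
def client():
    app = create_app()
    app.config["TESTING"] = True
    return app.test_client()


def test_rejects_malformed_upload(client):
    # The contract: malformed data is rejected with a clear error,
    # no matter how the handler happens to be implemented.
    response = client.post(
        "/upload",
        data={"file": (io.BytesIO(b"\x00not,a,valid,csv"), "data.csv")},
        content_type="multipart/form-data",
    )
    assert response.status_code == 400
    assert "malformed" in response.get_json()["error"]
```

Two completely different AI-generated implementations could sit behind that endpoint, and the review only has to ask whether this contract is the right one.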

Maybe developers will keep writing code that matters. But the code that controls what users see won’t be the implementation. It will be the tests.