Three facts about engineering in 2026.
One. AI-assisted coding is the norm, not the exception. Most engineering teams have at least one AI tool integrated into their workflow. Many have two or three.
Two. AI-assisted code, when reviewed, is as good as any other code. Often better for boilerplate. Comparable for routine work. Sometimes worse for novel architectural decisions.
Three. AI-assisted code, when not reviewed, is a governance time bomb. Nobody verified correctness. Nobody checked the security implications. Nobody confirmed architectural fit.
The first two facts are well-discussed. The third is not.
The missing metric
Most engineering organizations cannot answer this question:
What percentage of our codebase is AI-generated, and what percentage of that was reviewed by a human before merging?
The question sounds simple. The answer is often humbling.
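The metric itself is just two ratios over your merge history. A toy computation, with made-up numbers purely for illustration:

```python
def ai_governance_metric(total_commits: int, ai_commits: int,
                         reviewed_ai_commits: int) -> tuple[float, float]:
    """Return (AI share of all commits, reviewed share of AI commits), as percentages."""
    ai_pct = 100 * ai_commits / total_commits
    # If there are no AI commits, treat the reviewed share as vacuously 100%.
    reviewed_pct = 100 * reviewed_ai_commits / ai_commits if ai_commits else 100.0
    return ai_pct, reviewed_pct

# Hypothetical repo: 1,000 merged commits, 350 AI-assisted, 210 of those reviewed.
print(ai_governance_metric(1000, 350, 210))  # → (35.0, 60.0)
```

The hard part is not the arithmetic; it is producing the two counts, which is what the detection below is for.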
Detecting AI-generated code
AI tools leave signatures in Git history. They are not subtle.
- Co-Authored-By trailers. Claude, GitHub Copilot (when configured), Cursor, and Codeium all support these. `Co-Authored-By: Claude <noreply@anthropic.com>` is unambiguous.
- Bot authors. Dependabot, Renovate, CodeQL action, GitHub Actions creating commits. These are not generative AI, but they are automated code changes that deserve the same scrutiny.
- Commit message patterns. Certain phrasings are statistical giveaways ("Here's a comprehensive refactoring...", "I've made the following changes...").
None of this requires source code analysis. It is all in `git log`.
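A minimal sketch of these heuristics applied to raw commit metadata. The specific trailer patterns, bot names, and phrasings below are illustrative examples, not an exhaustive list:

```python
import re

# Trailer pattern covering the common AI tools that emit Co-Authored-By lines.
AI_TRAILER = re.compile(
    r"^Co-Authored-By:.*(claude|copilot|cursor|codeium)",
    re.IGNORECASE | re.MULTILINE,
)
# Well-known automation accounts (not generative AI, but automated changes).
BOT_AUTHORS = {"dependabot[bot]", "renovate[bot]", "github-actions[bot]"}
# Statistical giveaways in commit messages.
AI_PHRASES = ("here's a comprehensive", "i've made the following changes")

def classify_commit(author: str, message: str) -> str:
    """Return 'ai-assisted', 'bot', or 'human' for one commit's metadata."""
    if AI_TRAILER.search(message):
        return "ai-assisted"
    if author.lower() in BOT_AUTHORS:
        return "bot"
    if any(phrase in message.lower() for phrase in AI_PHRASES):
        return "ai-assisted"
    return "human"
```

To feed it real data, iterate over `git log` output with a machine-readable format (for example `--format='%an%x00%B%x1e'`) and split on the separators.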
Detecting unreviewed AI code
Once you've identified AI-assisted commits, the next question is: were they merged with or without human review?
This is detectable from merge commit metadata. If the commit author equals the merger (self-merge), the commit went in without a second pair of eyes. If there was a pull request with at least one approving review, you have at least nominal oversight.
An AI-assisted commit that was self-merged is the category that matters. It is the intersection of two risks: generative code that may have issues, and absent oversight to catch them.
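A sketch of that check, assuming you have already extracted the commit author, the merging user, and the approving-review count from your forge's API (the function and field names here are hypothetical, not a real API):

```python
def review_status(author: str, merged_by: str, approving_reviews: int) -> str:
    """Classify a merged commit's oversight level."""
    if approving_reviews >= 1:
        return "reviewed"        # at least nominal human oversight
    if author == merged_by:
        return "self-merged"     # no second pair of eyes at all
    return "merged-unreviewed"   # merged by someone else, but no approval recorded

def is_governance_risk(commit_kind: str, status: str) -> bool:
    """The intersection that matters: AI-assisted code with no review."""
    return commit_kind == "ai-assisted" and status != "reviewed"
```

The interesting query is then: of all commits where `classify` says "ai-assisted", how many land in the "reviewed" bucket versus the other two.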
A governance signal, not a shame signal
Here is the important framing: detecting AI use is not an accusation.
The goal is not to shame your engineers for using Copilot. The goal is to know what your governance posture actually is. A team with 45% AI-assisted commits and 100% of them reviewed is in a strong position. A team with 10% AI-assisted commits and 60% of them self-merged is in a weaker position despite having less AI.
The ratio matters more than the absolute number.
Four governance levels
DebtLens classifies AI governance into four levels:
- Clean — no detectable AI signatures. (Rare. Typically means your team hides the signal, not that it's absent.)
- Governed — AI commits detected, all or nearly all reviewed. Good posture.
- Partial — AI commits detected, some reviewed, some merged directly. Moderate risk.
- Uncontrolled — significant AI activity, most merged without review. High risk.
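The four levels above reduce to a function of one ratio. The thresholds here are assumptions chosen for illustration; this post does not specify DebtLens's actual cutoffs:

```python
def governance_level(ai_commits: int, reviewed_ai_commits: int) -> str:
    """Map AI-commit counts to the four governance levels.

    The 0.95 and 0.5 thresholds are illustrative, not DebtLens's published cutoffs.
    """
    if ai_commits == 0:
        return "Clean"                  # no detectable AI signatures
    reviewed_ratio = reviewed_ai_commits / ai_commits
    if reviewed_ratio >= 0.95:
        return "Governed"               # all or nearly all reviewed
    if reviewed_ratio >= 0.5:
        return "Partial"                # some reviewed, some merged directly
    return "Uncontrolled"               # most merged without review
```

Note that the classification depends only on the reviewed ratio, not the absolute AI volume, which matches the framing above.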
The first time a team runs this analysis, the result is usually a surprise. Engineers tend to underestimate how much AI is already in their codebase, and leadership tends to overestimate how much review discipline the team has.
Why this matters in 2026 specifically
Three regulatory and institutional pressures are converging:
- EU AI Act implementation affects how AI-generated code in safety-critical systems must be documented.
- Enterprise procurement questionnaires now include "what percentage of your code is AI-generated?" as a standard item.
- Insurance and liability carriers are beginning to underwrite differently based on AI governance posture.
A team that can answer these questions with data will have a meaningful commercial advantage over one that cannot.
The action item
Run the analysis once. Get the baseline. Share it with the team. Decide where you want to be in six months.
That conversation is impossible without data. It becomes straightforward once you have it.