SonarQube is a useful tool. This is not a takedown. But there is an entire class of technical-debt signals it will never catch — because it reads your source code, and these signals live in your Git history.
Here are five.
1. Hotspot concentration
A 400-line file with a cyclomatic complexity of 12 sounds fine. But if that file has been changed 300 times in six months — 50 times more than average — it is a hotspot. It concentrates defects. It slows every engineer who touches it.
SonarQube sees a moderately-complex file. The Git log sees an on-fire file. The difference matters.
Research consistently shows that roughly 4% of files cause 50% of defects. Those files are identifiable only through behavioral analysis, not static analysis.
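Hotspot detection is a counting exercise over the change log. A minimal sketch, assuming you have captured `git log --name-only --format=` output as a string (the file names and the 3x-average threshold below are illustrative, not a recommendation):

```python
from collections import Counter

def find_hotspots(name_only_log: str, multiple: float = 3.0) -> list[tuple[str, int]]:
    """Count changes per file from `git log --name-only --format=` output
    and flag files changed far more often than the per-file average."""
    counts = Counter(line.strip() for line in name_only_log.splitlines() if line.strip())
    if not counts:
        return []
    average = sum(counts.values()) / len(counts)
    return [(path, n) for path, n in counts.most_common() if n >= multiple * average]

# Toy log: billing.py dominates the change history.
log = "\n".join(["billing.py"] * 30 + ["utils.py", "api.py", "models.py"] * 2)
print(find_hotspots(log))  # → [('billing.py', 30)]
```

In practice you would restrict the log to a time window (`--since="6 months ago"`) and weight by complexity, but even the raw count surfaces the on-fire files.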
2. Knowledge silos
A file where 93% of commits come from one author is a bus-factor risk. When that engineer leaves, goes on vacation, or changes teams, nobody else can safely modify the code. SonarQube has no opinion on this. It can't — the information is in git log --author, not in the code itself.
Knowledge silos are an operational risk that compounds silently. The first sign is usually a hiring freeze and a sick day colliding on the same Tuesday.
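Measuring author concentration per file is equally mechanical. A sketch, assuming one author name per commit as produced by `git log --format=%an -- path/to/file` (the names and the 90% threshold are illustrative):

```python
from collections import Counter
from typing import Optional

def silo_risk(authors_per_commit: list[str], threshold: float = 0.9) -> Optional[tuple[str, float]]:
    """Return the dominant author and their share of commits to a file
    if that share exceeds the threshold; otherwise None."""
    if not authors_per_commit:
        return None
    author, top = Counter(authors_per_commit).most_common(1)[0]
    share = top / len(authors_per_commit)
    return (author, share) if share >= threshold else None

# 28 of 30 commits by one person: a 93% silo, as in the example above.
authors = ["dana"] * 28 + ["lee", "sam"]
print(silo_risk(authors))
```

Run per file across the repo and sort by share, and you have a bus-factor map leadership can actually read.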
3. Temporal coupling
Two files that always change together have hidden coupling. Not import-level coupling, which a linter can catch, but behavioral coupling: touch A, and you find yourself having to edit B.
This is often an architectural smell that no static analyzer can detect. The files might have no import relationship at all. They might be in completely different modules. But changes to them are correlated because of a shared underlying assumption that is documented nowhere.
Temporal coupling is the signal you want to catch before a refactoring. It tells you what you'll actually break.
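Co-change analysis needs only the sets of files touched per commit. A minimal sketch: coupling degree here is shared commits divided by the commit count of the rarer file of the pair, which is one common formulation, not the only one (the file names and `min_shared` cutoff are illustrative):

```python
from collections import Counter
from itertools import combinations

def temporal_coupling(commits: list[set[str]], min_shared: int = 5) -> list[tuple[tuple[str, str], float]]:
    """Rank file pairs by how often they change in the same commit.
    Input: one set of changed file paths per commit."""
    pair_counts: Counter = Counter()
    file_counts: Counter = Counter()
    for files in commits:
        file_counts.update(files)
        pair_counts.update(combinations(sorted(files), 2))
    results = []
    for (a, b), shared in pair_counts.items():
        if shared >= min_shared:
            degree = shared / min(file_counts[a], file_counts[b])
            results.append(((a, b), degree))
    return sorted(results, key=lambda r: -r[1])

# Nine commits touch both files, with no import between them in the source.
commits = [{"order.py", "invoice.py"}] * 9 + [{"order.py"}, {"util.py"}]
print(temporal_coupling(commits))  # → [(('invoice.py', 'order.py'), 1.0)]
```

A pair with a degree near 1.0 and no static dependency is exactly the undocumented shared assumption described above.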
4. Merge discipline
Two repositories can have identical SonarQube scores and radically different quality cultures.
Repo A: every PR has two reviewers, no self-merges, average diff 80 lines.
Repo B: 60% of merges are self-merges, reviewer concentration is one person, average diff 400 lines.
Repo B is going to ship more bugs. Not because its code is worse. Because its process is worse. This lives in git log --merges, not in the source tree.
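The self-merge rate reduces to comparing who wrote a change with who merged it. A sketch, assuming you have reconstructed (change_author, merger) pairs from merge-commit metadata or your forge's API; how you attribute the "change author" of a merge varies by workflow, so treat this as the shape of the calculation rather than a drop-in tool:

```python
def self_merge_rate(merges: list[tuple[str, str]]) -> float:
    """Fraction of merges where the change author also performed the merge.
    Each entry is (change_author, merger)."""
    if not merges:
        return 0.0
    return sum(author == merger for author, merger in merges) / len(merges)

# Six of ten merges are self-merges: the 60% Repo B above.
merges = [("sam", "sam")] * 6 + [("sam", "dana"), ("lee", "dana")] * 2
print(self_merge_rate(merges))  # → 0.6
```

Average diff size and reviewer concentration fall out of the same metadata with one pass each.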
5. AI governance
In 2026, this is the metric that everyone will need and almost nobody is tracking.
Copilot, Claude, Cursor, Codeium, Codex — all of these leave signatures in your Git history. Co-Authored-By trailers, commit-message patterns, bot authors (Dependabot, Renovate). Detecting them is mechanical.
The real question isn't whether AI is in your codebase. It is: what percentage of your AI-assisted commits was reviewed by a human before merging?
A well-governed team might have 40% AI-assisted commits, all reviewed, low risk. A poorly-governed team might have 20% AI-assisted commits, half of them self-merged, high risk. SonarQube cannot see the difference. The Git log can.
The point
SonarQube is a quality gate for code. It is good at its job. Keep running it in CI.
But when leadership asks "where is our risk?" — static analysis can only answer part of the question. The rest lives in how your team actually works: where the code changes, who changes it, who reviews it, who approved the last AI-generated commit.
That information is already in your repository. You just need a different tool to read it.