AI Code Governance

Vibe Coding Has a Day 2 Problem, And Nobody's Solving It

Clarvia Team
Apr 13, 2026
9 min read

41% of all code is now AI-generated. Nobody's governing it.

I spent a month researching what happens after the code gets generated. Not Day 1 (how fast can I build?) but Day 2: how do I maintain, debug, and trust what I built?

The findings changed what I'm working on.


Speed Won. Now What?

Cursor: $2B ARR (doubled in 3 months). Claude Code: 41% of devs (surpassing Copilot, per DEV.to survey). 73% of engineering teams use AI coding tools daily, up from 18% two years ago.

The adoption war is over. The question has moved on.


The 1.7x Multiplier

CodeRabbit analyzed millions of PRs. The finding:

AI-co-authored code introduces 1.7x more issues than human-written code.

  • Correctness: 1.75x worse
  • Maintainability: 1.64x worse
  • Security: 1.57x worse

Production code. At scale. Across the industry.

By months 16 to 18, teams hit the Day 2 wall: the codebase kept growing, but velocity stalled because nobody understands their own system anymore. Maintenance costs reach 4x traditional levels by year two.

Most teams that adopted AI tools in early 2025 are approaching the 16-month mark right now.


The Incidents

March 2026 alone:

  • Amazon: AI-assisted deployment caused a 6-hour shutdown. 6.3M lost orders.
  • CVEs from AI code: jumped from 6 in January to 35+ in March.
  • Escape.tech: 2,000+ vulnerabilities across 5,600 vibe-coded apps.
  • Replit AI agent: wiped SaaStr's production database.

These aren't edge cases. They're the new baseline.


Open Source Felt It First

Daniel Stenberg shut down cURL's 6-year bug bounty. 20 AI-generated reports in 3 weeks, zero real vulnerabilities. He called it "AI slop DDoSing open source."

He drew a careful line. He praised a researcher who used AI to find 22 genuine bugs. The problem wasn't AI. It was unverified AI output submitted for bounties.

Mitchell Hashimoto (Ghostty): Zero tolerance. "This is not an anti-AI stance. This is an anti-idiot stance."

Steve Ruiz (tldraw): Auto-closes ALL external PRs.

Xavier Portilla Edo (Genkit): Only 1 in 10 AI-generated PRs meets minimum quality.

GitHub shipped emergency PR restrictions on Feb 14: maintainers can now disable PRs entirely or restrict them to collaborators.

Blunt instruments for a nuanced problem.


The Tooling Gap Nobody's Talking About

The landscape is crowded with Day 1 tools (generate faster) and Day 1.5 tools (review what you generated).

The Day 2 layer barely exists.

CodeRabbit: Reviews 2M+ repos. Doesn't know if code is AI-generated. Treats all code the same.

Qodo: Raised $70M (March 30). Nvidia, Walmart, Red Hat as customers. Doesn't track provenance. Can tell you the code has a bug, but can't tell you it's AI-generated and touches your auth layer.

CodeScene: Measures code health (6x more accurate than SonarQube). Doesn't enforce policy. Measures. Doesn't govern.

The gap: nobody connects "where code came from" to "what review it requires."

Not another code reviewer. A governance layer.


Provenance Is Forming

Agent Trace - open spec from Cursor + Cognition AI. Backed by Cloudflare, Vercel, Google Jules. JSON format linking code ranges to models, conversations, contributors. Line-level attribution.

Git AI - open source git extension. Authorship logs as git notes. Survives rebases and squashes.

Plus the breadcrumbs: Cursor's Made-with: Cursor trailers, Claude Code's Co-Authored-By: Claude trailers.
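These trailers are plain text in the commit message, so a first-pass provenance check can be a few lines. A minimal sketch, assuming the trailer conventions above (the marker list is illustrative; names vary by tool):

```python
# Sketch: flag AI co-authorship from commit-message trailers.
# Assumes the "Co-Authored-By" / "Made-with" conventions mentioned above.
AI_TRAILER_MARKERS = ("claude", "cursor", "copilot")  # illustrative, not exhaustive

def is_ai_coauthored(commit_message: str) -> bool:
    """Return True if any trailer line credits a known AI tool."""
    for line in commit_message.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("co-authored-by", "made-with"):
            if any(marker in value.lower() for marker in AI_TRAILER_MARKERS):
                return True
    return False

msg = "Fix auth token refresh\n\nCo-Authored-By: Claude <noreply@anthropic.com>"
print(is_ai_coauthored(msg))  # True
```

Trailer-based detection only catches tools that self-report; line-level attribution is what specs like Agent Trace aim to add.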

The data capture problem is being solved.

But capture without governance is just metadata.

Knowing code was AI-generated is useless if you don't do anything different because of it.


"Code Is Cheap Now"

Simon Willison, Agentic Engineering Patterns (Feb 2026):

"Code is now inexpensive."

The bottleneck moved permanently from writing code to reviewing it, architecting it, and maintaining intent.

The job isn't writing code. It's understanding what the code does and why it's correct.

That's the Day 2 skill set. It requires Day 2 tooling.


What Governance Looks Like (A Framework)

Any team can adopt this, with or without tooling:

1. Know your provenance. Tag AI code at the commit level. You can't govern what you can't see.

2. Graduate review by risk. Three inputs:

  Input             Low risk           High risk
  File sensitivity  docs, examples     auth, crypto, payments
  Blast radius      10-line utility    500-line API rewrite
  Provenance        human-written      AI-generated, unreviewed

  • Low risk: normal review
  • Medium risk: +1 reviewer + attestation
  • High risk: CODEOWNER + security reviewer + test evidence
  • Critical: merge blocked until security lead approves

3. Start with observation. Warning mode for 30 days. Show what would happen. Build trust, then enforce.

4. Log everything. SOC 2 now includes AI governance criteria. Auditors will ask. Have evidence, not a policy doc nobody follows.
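The graduated-review step above can be sketched as a small scoring function. The weights, thresholds, path patterns, and tier names here are illustrative choices, not part of any standard:

```python
# Sketch: map the three risk inputs (sensitivity, blast radius, provenance)
# to a review tier. All thresholds and names are illustrative assumptions.
SENSITIVE_PATHS = ("auth", "crypto", "payments")  # assumed path-based sensitivity

def review_tier(paths: list[str], lines_changed: int, ai_generated: bool) -> str:
    score = 0
    if any(s in p for p in paths for s in SENSITIVE_PATHS):
        score += 2          # file sensitivity weighs heaviest
    if lines_changed > 200:
        score += 1          # blast radius
    if ai_generated:
        score += 1          # provenance
    tiers = ["normal", "plus-one-reviewer", "codeowner+security", "blocked"]
    return tiers[min(score, 3)]

print(review_tier(["docs/readme.md"], 10, False))       # normal
print(review_tier(["src/auth/session.py"], 500, True))  # blocked
```

The point is not the exact weights; it is that provenance becomes one input to review requirements instead of invisible metadata.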


What I'm Building

An open source governance layer for AI-generated code.

Product 1 - Free GitHub App for OSS maintainers. Provenance detection + graduated policy via YAML config. The tool cURL, Ghostty, and tldraw needed.
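A policy file for this kind of app might look like the sketch below. Every key name here is hypothetical; no published schema exists yet:

```yaml
# Hypothetical governance policy - illustrative only, not a real schema.
provenance:
  detect:
    - commit_trailers        # Co-Authored-By, Made-with
    - agent_trace            # Agent Trace JSON, if present
review:
  rules:
    - match: { paths: ["auth/**", "payments/**"], provenance: ai }
      require: [codeowner, security_reviewer, test_evidence]
    - match: { provenance: ai }
      require: [extra_reviewer, attestation]
mode: warn                   # observe for 30 days before enforcing
```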

Product 2 - Enterprise governance. Risk scoring, audit trails, 30-day warning mode, org-wide policy. For security teams who need evidence.

Core engine: AGPL. Open source. Inspectable. Auditable.

The engine that judges your code should be transparent.


The Wall Is Coming

Y Combinator: 25% of Winter 2025 batch had codebases 95% AI-generated. Those companies are approaching the 16-month mark right now.

The Day 2 problem will define the next two years of software engineering.

The question isn't whether you need governance for AI-generated code.

It's whether you build it before the wall.


Sources: CodeRabbit State of AI Code report; Codebridge; crackr.dev; Escape.tech; The Register / Bleeping Computer / The New Stack (cURL, Jan 2026); InfoQ (Feb 2026); tldraw #7695; GitHub Changelog (Feb 14, 2026); Agent Trace RFC (Jan 29, 2026); Git AI; Simon Willison (Feb 2026); CodeScene white paper (Jan 2026); TechCrunch (Qodo $70M, Mar 2026); DEV.to Developer AI Survey (Feb 2026); YC W2025 data.

