Open source GitHub App

Open source governance for AI-generated code.

A free GitHub App that helps maintainers manage AI-assisted pull requests with graduated, configurable policy. No ML detection, no false positives, no telemetry.

Free forever · AGPL-3.0 · Stateless · Zero telemetry
ai-code-confidence[bot]
commented on a pull request

AI Contribution Policy

This project uses AI Code Confidence to help manage AI-assisted contributions.

Provenance signals detected:

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Made-with: Cursor
Generated with Claude Code

AI involvement

HIGH CONFIDENCE

Tool: Claude Code | Model: Sonnet 4.5

AI Code Confidence by Clarvia. Configured per repo via .aicontrib.yml.

A real bot comment, exactly as it appears on a PR

The problem

Maintainers are drowning in AI-generated PRs.

The cURL project banned AI-generated security reports. Ghostty's maintainer publicly rejects AI PRs. tldraw posted frustrated threads about AI slop. GitHub itself introduced emergency rate limits during AI-spam waves.

The maintainer's day looks like this: a stranger opens a 200-line PR that looks reasonable. They have 15 minutes to triage. They suspect AI but cannot tell. Merge and own subtle bugs forever, or reject and look hostile.

Outright bans are the only tool most maintainers have. AI Code Confidence gives them a graduated, low-friction alternative.

How it works

Detect, decide, act. In under one second.

01

Detect

When a PR opens, the bot scans commits, branch names, and any Agent Trace files for explicit AI provenance signals. Eight detector types, each with documented confidence levels.
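
For the curious, here is a minimal sketch of that entry point, assuming a Probot-style app. The event wiring uses real Probot/Octokit APIs; detectSignals is a cut-down stand-in for the actual detectors, not the shipped implementation:

import { Probot } from "probot";

// Cut-down stand-in for the eight detectors (two more are sketched later on this page).
function detectSignals(messages: string[], branch: string): string[] {
  const signals: string[] = [];
  if (messages.some((m) => /^Co-Authored-By:\s/im.test(m))) signals.push("commit-trailer");
  if (branch.startsWith("ai-")) signals.push("branch-pattern");
  return signals;
}

export default (app: Probot) => {
  app.on(["pull_request.opened", "pull_request.synchronize"], async (context) => {
    const pr = context.payload.pull_request;

    // Fetch every commit on the PR so trailers and body markers can be scanned.
    const commits = await context.octokit.rest.pulls.listCommits(
      context.repo({ pull_number: pr.number })
    );

    const messages = commits.data.map((c) => c.commit.message);
    const signals = detectSignals(messages, pr.head.ref);
    // ...hand the signals to the decide step
  });
};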

02

Decide

Your repo's `.aicontrib.yml` config maps signals to actions. Different rules for sensitive paths, relaxed paths, and confidence levels. The bot stays in observe-only mode for 30 days on every new install.

03

Act

A single PR comment summarizes detected signals. Labels are applied. A check run reflects status. If your config requires it, attestation or disclosure is requested. Idempotent: re-pushes update the existing comment, never spam.
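
The idempotency follows a familiar bot pattern: hide a marker in the comment body, then look for it on later pushes. A minimal sketch against the GitHub REST client; the marker string here is illustrative, not the one the bot actually embeds:

import { Octokit } from "@octokit/rest";

const MARKER = "<!-- ai-code-confidence -->"; // illustrative marker

async function upsertComment(
  octokit: Octokit,
  owner: string,
  repo: string,
  issueNumber: number,
  body: string
): Promise<void> {
  const { data: comments } = await octokit.rest.issues.listComments({
    owner,
    repo,
    issue_number: issueNumber,
  });

  const existing = comments.find((c) => c.body?.includes(MARKER));
  const fullBody = `${MARKER}\n${body}`;

  if (existing) {
    // Re-push: update the bot's own comment in place instead of posting a new one.
    await octokit.rest.issues.updateComment({
      owner,
      repo,
      comment_id: existing.id,
      body: fullBody,
    });
  } else {
    await octokit.rest.issues.createComment({
      owner,
      repo,
      issue_number: issueNumber,
      body: fullBody,
    });
  }
}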

What we detect

Each detector carries a confidence tier: L1 signals are explicit provenance markers (high confidence), L2 signals are heuristics (medium confidence).

L1 · Commit trailers

Co-Authored-By: Claude, Made-with: Cursor, and equivalent trailers from aider and Windsurf

L1 · Commit body markers

"Generated with Claude Code", "Generated by Cursor"

L1 · Agent Trace JSON

The emerging open standard for AI provenance with line-level attribution

L2 · Branch name patterns

cursor-, claude-, codex-, windsurf-, aider-, devin-, ai-

L2 · Burst commits

3 or more commits within 10 seconds from the same author
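
To make the tiers concrete, here are sketches of three of the detectors. The patterns and thresholds mirror the descriptions above; the exact regexes are our illustration, not the shipped ones:

// L1: explicit commit trailers such as "Co-Authored-By: Claude ..."
const TRAILER_PATTERN = /^(Co-Authored-By|Made-with):\s*(Claude|Cursor|aider|Windsurf)/im;

// L2: branch name prefixes associated with agent tooling
const BRANCH_PREFIXES = ["cursor-", "claude-", "codex-", "windsurf-", "aider-", "devin-", "ai-"];

function branchLooksAgentGenerated(branch: string): boolean {
  return BRANCH_PREFIXES.some((prefix) => branch.startsWith(prefix));
}

// L2: burst commits -- three or more commits within 10 seconds from one author
function hasBurstCommits(timestamps: Date[]): boolean {
  const sorted = [...timestamps].sort((a, b) => a.getTime() - b.getTime());
  for (let i = 0; i + 2 < sorted.length; i++) {
    if (sorted[i + 2].getTime() - sorted[i].getTime() <= 10_000) return true;
  }
  return false;
}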

What we can do

label

Apply a GitHub label like ai-assisted or possibly-ai-assisted

comment

Post a single PR comment with the detected signals (idempotent: updates on re-push)

request-disclosure

Ask the contributor to describe their AI tool usage

require-attestation

Show a checklist the contributor must complete before merge

request-changes

Flag specific sensitive files that need additional review

auto-close

Close the PR (only on initial open, never on re-pushes, never under medium confidence)

Configuration

Your whole policy, codified in one YAML file.

Drop a .aicontrib.yml file at the root of your repo. The bot reads it on every PR and applies the matching policy.

No config? The bot uses sensible defaults: label and comment on AI-detected PRs, request disclosure on heuristic matches, never block.

Sensitive paths get stricter rules. Docs paths skip enforcement entirely. Confidence levels map to action sets you choose.

.aicontrib.yml
version: 1

# Brand new installs run in observe-only mode for 30 days.
# Set to 0 to enable enforcement immediately.
warning_mode:
  warning_period_days: 30

# What happens when AI provenance is detected on a normal path
on_ai_detected:
  confidence_high:
    - label: ai-assisted
    - comment
  confidence_medium:
    - label: possibly-ai-assisted
    - comment
    - request-disclosure

# Sensitive paths get stricter rules
sensitive_paths:
  - "src/auth/**"
  - "*.sql"
  - "db/migrations/**"

on_ai_detected_sensitive:
  confidence_high:
    - label: ai-assisted-sensitive
    - require-attestation
    - request-changes

# Docs are exempt from enforcement
relaxed_paths:
  - "docs/**"
  - "*.md"

Trust by design

Confidence, not quarantine.

30-day warning mode by default

New installs run in observe-only mode for the first 30 days. The bot shows what enforcement would do without actually blocking anyone.

Zero telemetry

No analytics, no tracking, no data collection beyond what GitHub itself sends via webhook. Your code never leaves GitHub.

Heuristics never auto-block

Medium-confidence signals (branch names, burst commits) cannot trigger auto-close or request-changes. Only explicit markers can.
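
In code terms, that guarantee amounts to a filter like this sketch; the type and constant names are illustrative:

type Action =
  | "label"
  | "comment"
  | "request-disclosure"
  | "require-attestation"
  | "request-changes"
  | "auto-close";

const BLOCKING_ACTIONS: Action[] = ["auto-close", "request-changes"];

// Strip blocking actions whenever the confidence came from heuristics alone.
function allowedActions(configured: Action[], confidence: "high" | "medium"): Action[] {
  if (confidence === "high") return configured;
  return configured.filter((a) => !BLOCKING_ACTIONS.includes(a));
}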

FAQ

Common questions.

Does this block AI contributions?

Not by default. The bot ships in 30-day warning mode for all new installs. Even after the warning period, blocking is opt-in and only applies to high-confidence (explicit marker) signals on sensitive paths you configure.

Does it detect AI code by analyzing the code itself?

No. The bot only reads explicit provenance signals (commit trailers, branch names, Agent Trace files). It does not run ML detection on code content, which would produce false positives and erode maintainer trust.

How is this different from CodeRabbit or Qodo?

CodeRabbit and Qodo review code quality. AI Code Confidence governs provenance and policy. They are complementary tools that solve different problems.

What data do you collect?

None. No telemetry, no analytics, no tracking in v1. The bot is stateless and reads only the PR data GitHub sends it via webhook.

Does it work with private repos?

Yes. The bot processes private repos identically to public ones. All your code stays within GitHub.

Why open source?

Maintainer-facing tools have to be auditable. Anyone can read every line of detection and policy logic, fork it, or contribute. The core engine is AGPL-3.0, ensuring derivative tools also remain open.

Install in under a minute. Try it on one repo.

Free forever. AGPL-3.0. 30-day warning mode for new installs means nothing blocks until you are ready.

Need governance for an enterprise team? Talk to Clarvia.
