Permission Gateway for AI Agents

Your agents are powerful.
Make sure you stay in control.

Veto intercepts every tool call your AI coding agents make — via a native Claude Code hook or as a LiteLLM proxy guardrail. Define rules, review actions in real time, and approve or deny before anything executes.

claude-code

$ claude "clean up the old deployment"

Claude wants to run: rm -rf /var/www/production/*

VETO HOOK: Evaluating against your rules...
DENIED: Recursive deletion outside project directory blocked.

$ _

Features

Everything you need to govern AI tool use

Veto gives your team centralized control over what AI agents can and cannot do — without slowing them down.

Rule-based policies
Define allow, deny, or ask rules by tool name, argument patterns, or regex. Rules are evaluated instantly on every tool call.
Native Claude Code hook
A lightweight hook script intercepts every tool call at the permission layer — before execution. No proxy needed, works with any API provider.
LiteLLM proxy guardrail
Or run as a LiteLLM guardrail that evaluates tool calls mid-stream. Streaming-aware — text passes through instantly, tool calls are buffered and checked.
Real-time audit log
Every tool call, every decision, every override — logged and searchable. Live SSE stream shows events as they happen.
Team-wide control
One policy applies to every developer on the team. No more trusting individual .claude/settings.json files.
AI-powered scoring
When no rule matches, an AI scorer evaluates the risk of the tool call and decides allow, ask, or deny — with two-level caching to minimize cost.

Two ways to integrate

Start with hooks. Add the proxy when you need more.

The hook gets you up and running in seconds. When your team needs tamper-proof enforcement, add the proxy layer — developers can't bypass what they don't connect to directly.

Start here
Hook-based

A lightweight Python script hooks into Claude Code's permission system. Every tool call is sent to the Veto API for evaluation before it executes. Set up in under a minute.

Install the plugin from the marketplace:

/plugin marketplace add damhau/veto-linux

/plugin install veto-linux

Windows? Use damhau/veto-windows instead.

Then run the setup command:

/veto:setup

  • Works with any API provider (Anthropic direct, AWS Bedrock, etc.)
  • No proxy layer — direct API connection, no added latency on LLM calls
  • Configurable fail policy: fail-open or fail-closed
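A hook of this kind could be sketched as follows. The endpoint URL, request payload, and response fields are illustrative assumptions, not Veto's actual API; the stdin/stdout shape follows Claude Code's PreToolUse hook protocol:

```python
#!/usr/bin/env python3
"""Sketch of a PreToolUse hook that defers to a policy API.
The endpoint, payload, and verdict fields are illustrative assumptions."""
import json
import sys
import urllib.request

VETO_URL = "https://api.vetoapp.io/evaluate"  # hypothetical endpoint

def decide(event: dict, url: str = VETO_URL) -> dict:
    payload = json.dumps({
        "tool_name": event.get("tool_name"),
        "tool_input": event.get("tool_input"),
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            verdict = json.load(resp)  # assumed shape: {"action": ..., "reason": ...}
    except OSError:
        verdict = {"action": "allow"}  # fail-open; swap in "deny" for fail-closed
    return {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": verdict.get("action", "allow"),
            "permissionDecisionReason": verdict.get("reason", ""),
        }
    }

if __name__ == "__main__":
    raw = "" if sys.stdin.isatty() else sys.stdin.read()
    if raw:  # Claude Code passes the tool-call event as JSON on stdin
        print(json.dumps(decide(json.loads(raw))))
```

The `except OSError` branch is where the configurable fail policy lives: fail-open keeps developers unblocked if the API is unreachable, fail-closed prioritizes safety.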
Maximum security
LiteLLM proxy

For teams that need tamper-proof enforcement. Tool calls are intercepted in the SSE stream before reaching the client — developers never connect to the API directly.

Set the environment variable:

export ANTHROPIC_BASE_URL=https://proxy.vetoapp.io

  • Impossible to bypass — enforcement happens server-side
  • Streaming-aware — text passes through, tool calls are buffered and checked
  • No client-side installation — just set an environment variable
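The buffering behavior can be sketched as a generator — the event shapes here are simplified stand-ins, not the real Anthropic SSE schema:

```python
from typing import Callable, Iterable, Iterator

def guard_stream(events: Iterable[dict],
                 check: Callable[[list[dict]], bool]) -> Iterator[dict]:
    """Sketch of a streaming-aware guardrail: text events are yielded
    immediately, tool-call deltas are buffered until the call is complete,
    then released only if `check` approves."""
    buffer: list[dict] = []
    for event in events:
        if event["type"] == "text":
            yield event              # text streams through untouched
        elif event["type"] == "tool_delta":
            buffer.append(event)     # hold the partial tool call
        elif event["type"] == "tool_end":
            if check(buffer):        # policy decision on the complete call
                yield from buffer
                yield event
            else:
                yield {"type": "blocked", "reason": "denied by policy"}
            buffer = []
```

This is why text latency is unaffected while tool calls pay a small buffering cost: only the tool-call portion of the stream is held back for evaluation.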

How it works

Three steps to safe AI tooling

01

Install the hook or proxy

Add the Veto hook to your Claude Code settings (one JSON entry) or point your LLM client at the Veto LiteLLM proxy. Either way — under two minutes.
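That one JSON entry lives under "hooks" in .claude/settings.json. A sketch along these lines — the hook command path is a placeholder, and (as I understand Claude Code's hooks schema) an empty matcher applies the hook to every tool:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "python3 ~/.veto/hook.py" }
        ]
      }
    ]
  }
}
```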

02

Define rules

Create policies in the dashboard: allow safe operations, deny dangerous ones, flag ambiguous calls for human review. Or let AI scoring handle the edge cases.
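The AI-scoring fallback is only cheap if verdicts are reused; a two-level cache could work along these lines (a sketch of the idea, not Veto's implementation): level 1 keys on the exact tool call, level 2 on a normalized form so calls differing only in literals share a verdict.

```python
import hashlib
import re

class ScoreCache:
    """Sketch of two-level caching for AI risk verdicts."""

    def __init__(self):
        self.exact = {}       # level 1: exact (tool, args) pairs
        self.normalized = {}  # level 2: normalized form of the call

    @staticmethod
    def _norm(tool: str, args: str) -> str:
        generic = re.sub(r"[\w./-]*/[\w./-]+", "<path>", args)  # collapse paths
        generic = re.sub(r"\d+", "<n>", generic)                # collapse numbers
        return hashlib.sha256(f"{tool}:{generic}".encode()).hexdigest()

    def get(self, tool: str, args: str):
        if (tool, args) in self.exact:
            return self.exact[(tool, args)]
        return self.normalized.get(self._norm(tool, args))

    def put(self, tool: str, args: str, verdict: str) -> None:
        self.exact[(tool, args)] = verdict
        self.normalized[self._norm(tool, args)] = verdict
```

With this shape, a verdict for `cat /etc/hosts` would also answer `cat /var/log/syslog` at level 2, and the scoring model is only called on genuinely novel patterns.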

03

Stay protected

Every tool call is evaluated against your rules in real time. Denied calls never execute. Approved calls proceed instantly. Everything is logged.

Pricing

Simple, transparent pricing

Start free. Upgrade when your team grows.

Free
$0 / forever

For individuals and small experiments.

  • 1 team member
  • 20 rules
  • 3,000 AI evals/month
  • 7-day audit log retention
  • Community support
Team
$29 / user / month

For teams shipping with AI agents daily.

  • Unlimited team members
  • Unlimited rules
  • 5,000 evals/user/month
  • 30-day audit log retention
  • AI scoring with Haiku + Sonnet
  • Priority support
Business
$99 / user / month

For organizations that need full control.

  • Unlimited team members
  • Unlimited rules
  • 20,000 evals/user/month
  • 90-day audit log retention
  • All AI models (Haiku + Sonnet + Opus)
  • Custom prompts
  • Dedicated support