Dashboard

The Veto dashboard gives you full visibility and control over what your AI agents are doing. This guide covers every section of the dashboard.

Rules

The Rules page is where you define what your agent can and cannot do. Veto ships with sensible defaults, but you can customize everything.

Each rule has:

  • Type — whitelist (allow) or blacklist (deny)
  • Tool pattern — a regex matching the tool name (e.g. Bash, Read|Glob|Grep)
  • Content pattern — an optional regex matching the tool input (e.g. rm\s+-rf)
  • Priority — lower numbers are evaluated first
  • Scope — all users or a specific user
  • Applies to — whether the rule applies to hooks, LiteLLM proxy, or both
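
Conceptually, evaluation walks the rules in priority order and returns the decision of the first match. A minimal Python sketch of that flow — the dict field names here are illustrative, not Veto's actual rule schema:

```python
import re

# Hypothetical rule set; field names mirror the dashboard fields above
# but are illustrative, not Veto's export format.
rules = [
    {"type": "blacklist", "tool": r"Bash", "content": r"rm\s+-rf", "priority": 1},
    {"type": "whitelist", "tool": r"Read|Glob|Grep", "content": None, "priority": 10},
]

def evaluate(tool_name, tool_input):
    """Return the decision of the first matching rule (lower priority first)."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if not re.fullmatch(rule["tool"], tool_name):
            continue  # tool pattern must match the tool name
        if rule["content"] and not re.search(rule["content"], tool_input):
            continue  # content pattern is optional; match the tool input
        return "deny" if rule["type"] == "blacklist" else "allow"
    return "no match"  # falls through to AI scoring or the fail policy

print(evaluate("Bash", "rm -rf /tmp/build"))  # deny
print(evaluate("Read", "/etc/hosts"))         # allow
```

The Test Rules button does essentially this simulation for you against your live rule set, without executing anything.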

You can drag and drop rules to reorder priorities, bulk-enable or disable rules, and use the Test Rules button to simulate a tool call against your rule set without actually running anything.

Import and export

Export your rules as JSON to back them up or share them across organizations. Import rules from a JSON file to apply a configuration from another environment.

Templates

Rule templates give you a starting point. Click Templates to browse pre-built rule sets (e.g. read-only mode, strict security, dev-friendly) and apply them to your organization.

See Writing Rules for the full rule syntax.

Audit Log

The Audit Log records every tool call your agent makes and the decision Veto made about it. This is the primary place to understand what your agent is doing and whether your rules are working as expected.

What each entry shows

  • Session ID — which Claude Code session made the call (click to filter)
  • Tool — the tool name (e.g. Bash, Write, Read)
  • Tool input — what the tool was called with (truncated, expandable)
  • Decision — Allow, Deny, or Ask, color-coded
  • Decision type — how the decision was made: whitelist match, blacklist match, AI review, or no match
  • AI score — risk score from the AI model (if scoring is enabled)
  • Reasoning — the AI model's explanation for its score
  • Latency — how long the evaluation took (ms)
  • Source — whether the call came from hooks or the LiteLLM proxy
  • API key — which API key was used
  • Cached — whether the scoring result was served from cache

Live mode

Toggle Live to see events streaming in real time. New events are highlighted as they arrive. This is useful when you're actively testing rules or monitoring an agent session.

Filtering

You can combine multiple filters to narrow down the log:

  • Date range — pick a start and end date
  • Search — free-text search across tools and commands
  • Decision — filter by Allow, Deny, or Ask
  • Decision type — filter by whitelist, blacklist, AI review, AI error, or no match
  • Source — filter by hooks or LiteLLM
  • Session — filter events from a specific session

Export

Export the audit log as CSV or JSON.
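
The JSON export is convenient for quick offline analysis. For example, assuming each entry carries `decision` and `tool` fields like the columns above (the exact keys in Veto's export may differ):

```python
import json
from collections import Counter

# Sample entries standing in for an exported audit log;
# the real export's field names may differ.
export = json.loads("""[
    {"tool": "Bash", "decision": "deny"},
    {"tool": "Read", "decision": "allow"},
    {"tool": "Bash", "decision": "allow"}
]""")

decisions = Counter(e["decision"] for e in export)
denied_tools = Counter(e["tool"] for e in export if e["decision"] == "deny")

print(decisions)     # Counter({'allow': 2, 'deny': 1})
print(denied_tools)  # Counter({'Bash': 1})
```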

Proxy Logs

If you're using the LiteLLM proxy integration, the Proxy Logs page shows every LLM request that passed through the proxy — not just tool calls, but all API traffic.

What each entry shows

  • Request ID — unique identifier for the request
  • Model — which model was called (e.g. claude-sonnet-4-20250514)
  • Provider — the LLM provider (Anthropic, OpenAI, etc.)
  • Status — success or error
  • Tokens — total, prompt, and completion token counts
  • Spend — cost in dollars
  • User — which team member made the request
  • Cache hit — whether the response was cached

Like the audit log, proxy logs support live mode for real-time streaming, date range filtering, and CSV/JSON export.

Analytics

The Analytics page visualizes trends across three tabs.

Audit analytics

  • Decisions over time — line chart showing allow vs deny vs ask decisions
  • Top blocked tools — which tools are being denied most often
  • Score distribution — histogram of AI risk scores
  • Latency over time — average evaluation latency
  • Top triggered rules — which rules are matching most frequently

Proxy analytics

  • Requests over time — success vs error rates
  • Spend over time — cumulative cost
  • Tokens over time — prompt vs completion token usage
  • Cost by model — compare spend across different models
  • Top users by spend — which team members are using the most resources

Scoring costs

  • Total cost — how much AI scoring has cost
  • Total evaluations — number of AI-scored decisions
  • Cache hit rate — percentage of evaluations served from cache
  • Cost by model and cost by source breakdowns

All analytics support date range filtering and interval selection (hour, day, or week).

Sessions

The Sessions page shows all Claude Code sessions connected to your Veto instance.

Each session displays:

  • Session ID — the full session identifier
  • Name — editable label (click to rename)
  • Status — active or ended
  • Started — when the session began
  • Last event — most recent tool call
  • Mode — permission mode (hooks or litellm)

Session actions

  • View Audit — jump to the audit log filtered to this session's events
  • Whitelist — temporarily allow all tool calls for this session (1h, 4h, 8h, or 24h). Useful when you trust a session and don't want to approve every action.
  • Revoke — remove an active whitelist
  • Delete — remove the session record

Use the Active only toggle to hide ended sessions.

Team

Manage who has access to your Veto organization.

Roles

  • Admin — full access: manage rules, settings, API keys, team members, and billing
  • Member — create and edit their own rules; view the audit log and analytics
  • Viewer — read-only access to rules, the audit log, and analytics

Inviting members

Admins can invite new members by email or create an open invite link that anyone can use. Each invite has a role and an expiration date. Pending invites can be revoked at any time.

Activity

The Activity page is an audit trail of changes made within the dashboard itself — not agent actions, but human actions. Only visible to admins.

It tracks events like:

  • Rule created, updated, or deleted
  • Scoring configuration changed
  • API key created or deleted
  • Member invited, role changed, or removed
  • Ownership transferred

Each entry shows who performed the action, when, and what changed.

Settings

Scoring configuration

Control how Veto evaluates tool calls that don't match any rule.

  • Enabled/Disabled — toggle AI scoring on or off. When disabled, unmatched tool calls fall through to the default fail policy.
  • Model — which AI model to use for scoring (e.g. Haiku for speed, Sonnet for accuracy)
  • Timeout — max seconds to wait for an evaluation (default: 30s)
  • Risk threshold — score above which the decision becomes "ask" instead of "allow" (0.0 to 1.0)
  • Fail policy — what to do if the evaluation fails or times out: allow, deny, or ask
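
Taken together, these settings behave roughly like the sketch below. The function and parameter names are illustrative, not Veto internals; the threshold values shown are examples, not defaults:

```python
def decide(score, threshold, fail_policy="ask"):
    """Map an AI risk score to a decision; fall back to the fail policy
    when the evaluation failed or timed out (score is None)."""
    if score is None:  # evaluation error or timeout
        return fail_policy
    return "ask" if score > threshold else "allow"

print(decide(0.9, threshold=0.6))                       # ask
print(decide(0.2, threshold=0.6))                       # allow
print(decide(None, threshold=0.6, fail_policy="deny"))  # deny
```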

Custom scoring prompt

Customize the system prompt used for AI scoring. Choose from presets:

  • Default — balanced security reviewer
  • Strict — zero-trust, deny-by-default posture
  • Permissive — dev-friendly, allow most actions

Or write your own prompt for full control.

Cache settings

Configure the scoring cache TTL (time-to-live). Cached evaluations are faster and cheaper. The dashboard shows your current cache hit rate.
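
One way to picture the cache: identical evaluations are keyed on the tool call and reused until the TTL expires. A minimal sketch — this is an illustration of the concept, not Veto's implementation:

```python
import time

class ScoreCache:
    """Tiny TTL cache keyed on (tool, input); a sketch, not Veto's code."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (score, expires_at)
        self.hits = 0
        self.misses = 0

    def get(self, tool, tool_input):
        entry = self.entries.get((tool, tool_input))
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]
        self.misses += 1  # expired or absent: a fresh evaluation is needed
        return None

    def put(self, tool, tool_input, score):
        self.entries[(tool, tool_input)] = (score, time.monotonic() + self.ttl)

cache = ScoreCache(ttl_seconds=300)
cache.put("Bash", "ls -la", 0.1)
print(cache.get("Bash", "ls -la"))    # 0.1  (hit: no AI call, no cost)
print(cache.get("Bash", "rm -rf /"))  # None (miss: triggers a fresh evaluation)
```

A longer TTL raises the hit rate (faster, cheaper) at the cost of reusing stale scores for longer.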

API keys

Create and manage API keys for the hook-based integration.

Each key has:

  • Name — a label to identify the key
  • Scope — full access or read-only
  • Expiration — optional expiry date
  • Assigned to — optionally tie the key to a specific team member

Keys are shown only once at creation time — copy them immediately.

LLM Proxy

Configure the built-in LLM proxy:

  • Enable/Disable the proxy
  • Mode — switch between two modes:
    • Passthrough — users keep their own LLM API keys; the proxy forwards their credentials to the upstream provider
    • BYOK — the organization provides LLM API keys (Anthropic, OpenAI, Google, Mistral); all users share the org's keys via a virtual key
  • Virtual key — the proxy key your agents use to authenticate. Can be rotated.
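
In BYOK mode, an agent authenticates to the proxy with the virtual key in a standard bearer header instead of a provider key. A sketch of the request shape using placeholder values — the actual proxy URL, path, and key format come from your Veto instance:

```python
import json
import urllib.request

PROXY_URL = "https://proxy.example.com/v1/chat/completions"  # placeholder URL
VIRTUAL_KEY = "sk-veto-..."  # placeholder: the proxy virtual key, not a provider key

payload = {
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "hello"}],
}
request = urllib.request.Request(
    PROXY_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it; the proxy swaps in the
# org's provider credentials upstream, so the agent never sees them.
print(request.get_header("Authorization").startswith("Bearer "))  # True
```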

Billing

View your current plan and usage from the Billing page. You can upgrade, change your payment method, or cancel your subscription through the Billing Portal.

Plan features

Feature                   Free      Team             Business
Members                   1         Unlimited        Unlimited
Rules                     20        Unlimited        Unlimited
AI evals/month            3,000     5,000/user       20,000/user
AI models                 Haiku     Haiku + Sonnet   All
Audit retention           7 days    30 days          90 days
Custom scoring prompts    No        No               Yes
Team management           No        Yes              Yes
Export                    No        Yes              Yes

Eval limits are enforced as a hard block — once reached, all tool calls are denied until the next billing cycle. Team and Business eval limits scale with seat count.
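
Because Team and Business quotas are per seat, a monthly allowance is simply the per-user quota times the seat count. For illustration, using the numbers from the plan table above:

```python
# Per-user monthly eval quotas from the plan table; Free is a flat cap.
PLAN_QUOTAS = {"free": 3_000, "team": 5_000, "business": 20_000}

def monthly_eval_limit(plan, seats):
    if plan == "free":
        return PLAN_QUOTAS["free"]    # flat cap, not per seat
    return PLAN_QUOTAS[plan] * seats  # scales with seat count

print(monthly_eval_limit("team", seats=4))      # 20000
print(monthly_eval_limit("business", seats=4))  # 80000
```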