Three-stage AI pipeline

Moderation that listens
before it acts

GuardBot investigates harmful messages through a structured conversation, weighs both sides, then decides. No snap judgements. No bans handed down over context your moderators never saw.

Active servers
Continuous uptime
Incidents resolved
3 AI pipeline stages

Built for communities that care about nuance

Every moderation decision is explained, logged, and reviewable. No black box.

Conversational moderation

Before acting, GuardBot replies to the flagged message, hears from the accused, then asks the victim whether they felt targeted. The decision reflects both sides.

Scam detection

Every discord.gg link is evaluated for cam-site bait, free Nitro scams, and hacked account patterns. Jokes and quotations are distinguished from real threats using confidence scoring.

Image analysis

Images are described by a vision model and fed into the same moderation pipeline as text. CSAM and doxxing images trigger immediate action regardless of the accompanying text.

Full incident audit log

Every decision is stored with the original message, AI reasoning, and the full conversation thread. Filter, search, and export to CSV directly from the dashboard.

Channel-level rules

Set individual channels to strict, normal, relaxed, or off. Add a custom note that gets sent to the AI as context. Banter channels get treated differently from help channels.
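The channel-level rules above can be pictured as a small per-channel configuration map. A minimal sketch, assuming a Python-style config; the field names ("level", "note") and the `should_moderate` helper are illustrative, not GuardBot's actual API:

```python
# Hypothetical per-channel rule map. Levels mirror the options described
# above: strict, normal, relaxed, or off. The "note" is the custom context
# sent to the AI alongside flagged messages.
CHANNEL_RULES = {
    "banter":  {"level": "relaxed", "note": "Friendly trash talk is normal here."},
    "help":    {"level": "strict",  "note": "Keep answers respectful; no mockery."},
    "general": {"level": "normal",  "note": ""},
    "archive": {"level": "off",     "note": ""},
}

def should_moderate(channel: str) -> bool:
    """Channels set to 'off' skip the pipeline; unknown channels default to normal."""
    return CHANNEL_RULES.get(channel, {"level": "normal"})["level"] != "off"
```

With a map like this, a banter channel and a help channel can receive the same message and get different verdicts, because the level and note travel with every flag.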

Extreme violation override

Explicit threats with personal information, targeted slurs, CSAM, and doxxing are handled immediately without conversation. The moderator is pinged with full context.

The three-stage pipeline

Every message that warrants attention goes through each stage in order.

1
OpenAI Moderation API

Free. Runs on every message. If content is not flagged, the pipeline stops immediately. No credits consumed. No AI cost. Most messages never leave Stage 1.

Always free
2
GPT-5.4-nano analysis

The flagged message, last 20 channel messages, user history, and your custom context prompt are sent to GPT-5.4-nano. It determines whether the flag is a true positive, identifies the victim, and classifies severity. False positives are logged but no credits are charged.

0.20 credits if true positive
3
Conversational resolution

The bot addresses the accused, hears their side, then contacts the victim. The final decision is made by the same model with full context. Every exchange is logged. The moderator is pinged when the bot is uncertain rather than acting unilaterally.

Full audit trail
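The three stages above amount to a simple gating flow: a free filter, a paid confirmation, then a conversation. A minimal sketch in Python; every function and field name here is a hypothetical stand-in for illustration, not GuardBot's internals:

```python
# Illustrative control flow for the three-stage pipeline. The stage
# functions below are toy stand-ins so the gating logic is runnable.

ANALYSIS_COST = 0.20  # credits charged only when Stage 2 confirms a true positive

def stage1_moderation_api(message: str) -> bool:
    """Stage 1 stand-in: the real (free) API flags suspect content."""
    return "scam" in message.lower()

def stage2_llm_analysis(message: str) -> dict:
    """Stage 2 stand-in: the real model weighs context and user history."""
    return {"true_positive": "discord.gg" in message, "severity": "medium"}

def handle_message(message: str, ledger: list) -> str:
    # Stage 1: free, runs on every message; most messages stop here.
    if not stage1_moderation_api(message):
        return "clean"
    # Stage 2: false positives are logged but never charged.
    verdict = stage2_llm_analysis(message)
    if not verdict["true_positive"]:
        return "false_positive"
    ledger.append(ANALYSIS_COST)  # 0.20 credits for a confirmed flag
    # Stage 3: conversational resolution with the accused and the victim.
    return "conversation"
```

The key property is that cost and scrutiny only escalate when the previous stage says they should: clean traffic never touches the model, and false positives never touch your credits.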

Straightforward pricing

Pay monthly based on server size. LTC payments receive a €0.50 discount on any fixed plan.

Starter
€5/mo
or €4.50 with LTC
Up to 1,000 members
  • Full moderation pipeline
  • Scam detection
  • Incident audit log
  • Channel-level rules
  • Image analysis (add-on)
Get started
Pay with LTC for €4.50/mo
Scale
€30/mo
or €29.50 with LTC
10,001 to 50,000 members
  • Everything in Growth
  • Image analysis included
  • Priority moderator pings
Get started
Pay with LTC for €29.50/mo
Pay-as-you-go
Credits
50,000+ members
Includes everything
  • Image analysis always included
  • Moderation run: 0.20 credits
  • Scam detection: 0.05 credits
  • Image analysis: 0.04 credits
  • Step 1 API: always free
Buy credits
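The pay-as-you-go rates above make monthly spend easy to estimate. A back-of-the-envelope sketch; the per-action rates come from the list above, while the traffic volumes in the example are invented for illustration:

```python
# Credit rates from the pay-as-you-go plan. Stage 1 API calls are
# always free, so they never appear in the total.
MODERATION_RUN = 0.20  # credits per confirmed moderation run
SCAM_CHECK = 0.05      # credits per discord.gg link evaluated
IMAGE_CHECK = 0.04     # credits per image analysed

def monthly_credits(mod_runs: int, scam_checks: int, image_checks: int) -> float:
    """Total credits for a month of the given (hypothetical) traffic."""
    return (mod_runs * MODERATION_RUN
            + scam_checks * SCAM_CHECK
            + image_checks * IMAGE_CHECK)

# Example: 300 confirmed runs, 1,000 link checks, 500 images
# 60 + 50 + 20 = 130 credits for the month
```

At the bundle prices below, 130 credits a month is covered comfortably by a 1,000-credit bundle lasting most of a year.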

Credit bundles

500 credits · €5.00
1,000 credits · €10.00
3,000 credits · €28.00 (save €2)
10,000 credits · €85.00 (save €15)
25,000 credits · €200.00 (save €50)

Credits never expire. Minimum bundle: 500 credits. LTC discount applies to fixed plans only, not credit bundles.

Ready to protect your community?

Add GuardBot to your server and set it up in minutes. No moderation happens until your plan is active.

Add GuardBot