Canon

Product · Agents

An agent per player. Inspectable, replayable, explainable.

Cohort campaigns blur the player. Canon spins up an agent per player, each with its own elasticity, lifecycle stage, and risk profile — and shows you exactly what it's doing, why, and what it would do next.

Agent fleet · live

  • Agents deciding: 182,491
  • Decisions / sec: 342
  • p95 latency: 42 ms
  • RG holds: 0.3%

What an agent sees

Every signal that produced every decision.

Open any player and Canon shows the full timeline — every event consumed, every signal extracted, every decision shipped. Hover a decision to see the policy that fired and the alternatives that were ranked beneath it.

  • Per-player timeline of consumed events
  • Signal trace: what the agent inferred from each event
  • Decision log with shipped action + ranked alternatives
  • RG check trace: which guardrails fired and why
  • Lifetime value and lift vs. control
Player #P-91823
Slots · Tier 2 · UK · 14-day lifetime
Active
Decision timeline · 14 days
  1. Day 14 · 14:32 · Mission
     Streak 3 · £2.50
  2. Day 14 · 09:14 · Bonus
     Lapse-prevention · £5.00
  3. Day 13 · 22:08 · Cooldown
     Loss streak detected
  4. Day 12 · 15:40 · F2P
     Session greeting
  5. Day 9 · 11:22 · Mission
     Activation streak · £1.50
  6. Day 7 · 19:12 · RG Hold
     Session length signal
  7. Day 5 · 14:08 · Bonus
     Re-engagement · £4.00
  8. Day 1 · 09:10 · Sign-up
     New player
  • Lifetime ARPU: £18.40
  • Decisions: 47
  • vs. control: +£23

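One entry in a decision log like the card above has to carry the shipped action, the signals behind it, the ranked alternatives, and the RG check trace. A minimal sketch of such a record, with field names that are purely illustrative (not Canon's schema):

```python
from dataclasses import dataclass, field

# Illustrative only -- one row of a per-player decision log.
@dataclass
class DecisionRecord:
    player_id: str
    day: int
    timestamp: str
    action: str                 # e.g. "mission", "bonus", "cooldown", "rg_hold"
    size_gbp: float             # 0.0 for non-monetary actions
    policy: str                 # the strategy that fired
    signals: dict               # what the agent inferred from the event
    alternatives: list          # ranked actions that were not shipped
    rg_checks: list = field(default_factory=list)  # (check, outcome) pairs

# A record matching the top timeline entry, with made-up policy/signal names:
rec = DecisionRecord(
    player_id="P-91823", day=14, timestamp="14:32",
    action="mission", size_gbp=2.50, policy="streak-3",
    signals={"streak": 3}, alternatives=["bonus", "none"],
    rg_checks=[("session", "pass"), ("loss_chasing", "pass")],
)
```

Keeping the alternatives and RG trace on the record itself is what makes hovering a decision, as described above, cheap: the explanation is stored with the decision, not reconstructed after the fact.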
What agents do

Six things every agent does, every day.

01 · Read the player

Pull the player's full event history, lifecycle stage, and risk profile from your PAM and CDP.

02 · Score the moment

Evaluate the current event against the active strategies, ranked by expected lift.

03 · Pick an action

Bonus, mission, cashback, cooldown, hold, or no action — sized to the player and the moment.

04 · Pass the guardrails

RG checks (loss-chasing, stake escalation, session, deposit decline) can downgrade or block the chosen action.

05 · Ship and log

Send the decision to your fulfilment system. Log every signal, score, and override that produced it.

06 · Learn

Update the per-player elasticity prior on the observed outcome. Carry it forward to the next decision.
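The six steps above fit in a short loop. A hedged sketch, self-contained and with every name (Strategy, Guardrail functions, decide, the elasticity field) invented for illustration rather than taken from Canon:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only -- a compressed version of the six-step agent loop.
@dataclass
class Strategy:
    action: str
    base_lift: float      # stand-in for a learned expected-lift model
    size: float = 0.0

@dataclass
class Decision:
    action: str
    size: float
    alternatives: list
    rg_note: Optional[str] = None

def decide(profile: dict, event: dict, strategies: list, guardrails: list) -> Decision:
    # 01 Read the player: here, `profile` stands in for PAM/CDP state.
    # 02 Score the moment: rank strategies by expected lift for this player.
    scored = sorted(strategies,
                    key=lambda s: s.base_lift * profile["elasticity"],
                    reverse=True)
    # 03 Pick an action, sized to the player and the moment.
    best = scored[0]
    decision = Decision(best.action, best.size, [s.action for s in scored[1:]])
    # 04 Pass the guardrails: each check may downgrade or block the action.
    for check in guardrails:
        decision = check(profile, event, decision)
    # 05 Ship and log would happen in the caller; 06 Learn updates the
    # elasticity prior once the outcome is observed.
    return decision

def session_length_check(profile, event, decision):
    # Example RG check: a long session forces a hold regardless of lift.
    if event.get("session_minutes", 0) > 120:
        return Decision("hold", 0.0, [decision.action] + decision.alternatives,
                        rg_note="session length")
    return decision

profile = {"elasticity": 1.2, "stage": "active"}
strategies = [Strategy("bonus", 0.8, 5.0), Strategy("mission", 0.5, 2.5),
              Strategy("none", 0.1)]
d = decide(profile, {"session_minutes": 30}, strategies, [session_length_check])
```

With a 30-minute session the top-lift action ships; replay the same call with `session_minutes=180` and the guardrail downgrades it to a hold, which is exactly the kind of override the decision log records.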

FAQ

Agents, in detail.

Do the agents run on an LLM?

No. The decisioning policy is a compact bandit-style model trained on operator data. LLMs are not in the decision path — latency and cost rule them out at scale.
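A bandit-style policy of this kind can be tiny, which is what makes millisecond latencies plausible. A sketch assuming a Beta-Bernoulli prior per action with Thompson sampling; the class and its fields are illustrative, not Canon's model:

```python
import random

# Illustrative only -- a per-player prior over "did this action produce the
# desired outcome", updated after each observed result (step 06, Learn).
class ElasticityPrior:
    def __init__(self, actions):
        # Beta(1, 1) = uniform prior for each action.
        self.params = {a: [1.0, 1.0] for a in actions}

    def choose(self, rng=random):
        # Thompson sampling: draw a plausible success rate per action,
        # then act on the argmax. Cheap enough for sub-millisecond decisions.
        samples = {a: rng.betavariate(ab[0], ab[1])
                   for a, ab in self.params.items()}
        return max(samples, key=samples.get)

    def learn(self, action, success):
        # Fold the observed outcome (1 or 0) into the prior and carry it
        # forward to the next decision.
        a, b = self.params[action]
        self.params[action] = [a + success, b + (1 - success)]

prior = ElasticityPrior(["bonus", "mission", "none"])
prior.learn("bonus", 1)
prior.learn("bonus", 1)
prior.learn("mission", 0)
```

The whole state is two floats per action per player, so the "compact" claim is structural: inference is a handful of Beta draws, not a forward pass.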

See an agent reason in real time.

A demo on a synthetic player walks through 14 days of decisions, signal by signal.