
User Scenarios

This document defines behavioral scenarios for each type of market participant. Every implementation task should map to a specific step in one of these scenarios.

pora does not replace human audits. It provides continuous coverage between human audits.

“Continuous private exploit triage with economic accountability” — GPT-5.4 (Codex)

“The cleanest pipeline for monetizing AI intelligence” — Gemini

  • Performer: “Audit. Earn. Forget.” — Connect your agent and earnings flow in automatically.
  • Requester: “Audit. Secure. Relax.” — Connect GitHub and every PR gets a security sweep.
GitHub connected → PR opened → Within minutes:
Vulnerability found, severity explained, patch suggested, confidence attached, performer auto-settled

This is the moment it shifts from “cool tech demo” to “I need this.” Not TEE, not decentralization — this experience is the core.


Scenario A: A performer connects their agent to the market


Kim is a developer with an Anthropic API key. He wants his agent to automatically perform security audits on the pora market and earn ROSE.

Step 1: Install pora CLI
$ pip install pora
Step 2: Browse the market + estimate earnings
$ pora status
$ pora bounty list
→ "3 bounties open. 2 ROSE for auditing lethe-market."
$ pora performer estimate --provider anthropic --model claude-sonnet-4-20250514
→ "Estimates based on 3 open bounties:"
→ " API cost: ~$0.15/audit (based on average code size)"
→ " Expected revenue: ~0.09 ROSE/audit (base fee) + bonus (if findings)"
→ " Current ROSE price: $0.08"
→ " Break-even: base fee alone runs at a loss without findings."
→ " Profitable condition: net positive if at least 1 valid finding in 3 audits."
→ ⚠ "Base fee alone is not profitable. A good analysis agent is essential."
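The estimate above can be reproduced with back-of-envelope arithmetic. A sketch using the transcript's numbers (the bonus size is an assumption for illustration; the real split depends on the bounty's PayoutPolicy):

```python
# Back-of-envelope break-even check behind `pora performer estimate`.
# Constants come from the transcript; BONUS_ROSE is an assumed value.

API_COST_PER_AUDIT_USD = 0.15   # average Anthropic spend per audit
BASE_FEE_ROSE = 0.09            # base fee per audit
ROSE_USD = 0.08                 # current ROSE price
BONUS_ROSE = 0.6                # hypothetical finding bonus (assumption)

def audit_pnl_usd(n_audits: int, n_with_findings: int) -> float:
    """Net USD profit over a batch of audits."""
    revenue = (n_audits * BASE_FEE_ROSE * ROSE_USD
               + n_with_findings * BONUS_ROSE * ROSE_USD)
    cost = n_audits * API_COST_PER_AUDIT_USD
    return revenue - cost

# Base fee alone is a loss: 0.09 ROSE * $0.08 = $0.0072 revenue vs $0.15 cost.
assert audit_pnl_usd(1, 0) < 0
```

This makes the warning concrete: without findings, each audit loses roughly $0.14, so profitability hinges entirely on the finding bonus.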
Step 3: Write performer config
$ cat > performer.json <<'EOF'
{
  "agent": "claude-code",
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "prompt": "You are a security auditor. Find real, exploitable vulnerabilities.",
  "max_cost_per_audit_usd": 0.50
}
EOF
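Before being injected as a ROFL secret, the config could be validated against the agent allowlist described later in this document. A hypothetical sketch, not the actual pora code:

```python
import json

# Sketch: validate a performer.json before injecting it as a ROFL secret.
# The allowlist matches the one listed in "What happens inside the TEE";
# the validation function itself is hypothetical.

AGENT_ALLOWLIST = {"claude-code", "opencode", "aider", "codex"}
REQUIRED_FIELDS = {"agent", "provider", "model", "prompt", "max_cost_per_audit_usd"}

def validate_performer_config(raw: str) -> dict:
    cfg = json.loads(raw)
    missing = REQUIRED_FIELDS - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if cfg["agent"] not in AGENT_ALLOWLIST:
        raise ValueError(f"agent {cfg['agent']!r} not in allowlist")
    if cfg["max_cost_per_audit_usd"] <= 0:
        raise ValueError("max_cost_per_audit_usd must be positive")
    return cfg
```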
Step 4: Register as performer
$ pora performer register \
    --config performer.json \
    --api-key $ANTHROPIC_API_KEY
→ config + API key encrypted and injected as ROFL secret
→ performer address registered on-chain
→ "Registered as performer 0x1234...abcd"
Step 5: Start autonomous agent loop
$ pora performer start
→ [local] Starts polling bounty list
→ [local] "Bounty #2 found: lethe-market, 2 ROSE, toolMode=3 allowed"
→ [local] "Claiming Bounty #2..."
→ [TEE] Machine boots → reads "agent": "claude-code" from performer.json
→ [TEE] claude-code already installed (base image) → injects Kim's API key
→ [TEE] Clone code (lethe-market)
→ [TEE] claude-code -p "Security audit this repo. Report real vulnerabilities only." --output-format json
→ [TEE] claude-code navigates files, reads code, uses tools to analyze
→ [TEE] Generates findings report
→ [TEE] Destroys code (NIST 800-88)
→ [TEE] Submits PoE + encrypted report on-chain
→ [local] "Bounty #2 audit complete. 3 findings. 0.08 ROSE received immediately."
→ [local] "Bonus 0.12 ROSE claimable after challenge window (24h)."
→ [local] Scanning for next bounty...
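The local side of `pora performer start` reduces to a poll, filter, claim loop. A hedged sketch, with the bounty fields and the selection policy assumed for illustration:

```python
# Hypothetical sketch of the bounty selection step inside the local
# polling loop. Field names are assumed; the real CLI would read these
# from the pora contracts before triggering the TEE audit.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Bounty:
    bounty_id: int
    repo: str
    amount_rose: float
    tool_mode: int
    open: bool

def pick_bounty(bounties: list[Bounty], allowed_tool_modes: set[int],
                min_amount_rose: float) -> Optional[Bounty]:
    """Return the highest-paying open bounty this performer can run."""
    candidates = [b for b in bounties
                  if b.open
                  and b.tool_mode in allowed_tool_modes
                  and b.amount_rose >= min_amount_rose]
    return max(candidates, key=lambda b: b.amount_rose, default=None)
```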

What happens inside the TEE (technical details)

1. ROFL machine boots
└─ Containers start per compose.yaml
2. Performer config loaded
└─ Reads performer.json + API key from ROFL secret
└─ Confirms "agent": "claude-code"
3. Agent harness prepared
└─ claude-code already installed in base image
└─ If agent not present: check allowlist → npm/pip install (once, at boot)
└─ Allowlist: claude-code, opencode, aider, codex (others rejected)
4. Code cloned
└─ Authenticated via GitHub App installation token
└─ git clone --depth=1 (shallow clone)
5. Agent executed ← this is the core
└─ subprocess: claude -p "<security audit prompt>" --output-format json
└─ or: opencode -p "<prompt>" (depending on agent)
└─ Agent autonomously:
├─ Explores file structure
├─ Finds suspicious patterns → reads related code
├─ Reasons through attack scenarios
├─ Structures findings
└─ Outputs report
This is different from sending code to an API via requests.post in one shot.
The agent iteratively reads files, uses tools, and reasons through the code.
It is the same process a human security auditor follows when navigating a codebase.
6. Results collected + delivered
└─ Parses agent output → list of Finding objects
└─ Generates encrypted report (X25519+AES-256-GCM)
└─ Uploads to gateway
7. Code destroyed
└─ NIST 800-88: 3-pass (0x00, 0xFF, 0x00)
└─ Generates destruction commitment hash
8. On-chain submission
└─ submitAuditResult(bountyId, commitHash, poeHash, submission)
└─ executionFee transferred to performer immediately
└─ bonus locked until challengeWindow
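Step 5 above can be sketched as a small subprocess harness. This is a hypothetical sketch of the planned `llm_agent.py` rewrite: the claude-code flags mirror the transcript, the opencode flags and the JSON report shape (a top-level `findings` array) are assumptions:

```python
import json

# Sketch of the subprocess-based harness that replaces the one-shot
# requests.post call in llm_agent.py. In the TEE the command would run
# via subprocess.run(cmd, capture_output=True) inside the cloned repo,
# with the performer's API key injected into the environment.

AGENT_COMMANDS = {
    "claude-code": ["claude", "-p", "{prompt}", "--output-format", "json"],
    "opencode": ["opencode", "-p", "{prompt}"],  # flags assumed
}

def build_audit_command(agent: str, prompt: str) -> list[str]:
    """Build the agent invocation; rejects agents outside the allowlist."""
    if agent not in AGENT_COMMANDS:
        raise ValueError(f"agent {agent!r} not in allowlist")
    return [arg.format(prompt=prompt) for arg in AGENT_COMMANDS[agent]]

def parse_findings(stdout: str) -> list[dict]:
    """Parse the agent's JSON output into a list of Finding dicts."""
    report = json.loads(stdout)
    return report.get("findings", [])
```

The point of the harness is what it does not do: it never sends the code anywhere itself; the agent process explores the checkout with its own tools inside the TEE.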

Implementation needed to realize this scenario

Component | Current state | What's needed
pora performer register | Not implemented | CLI command + path to inject config as ROFL secret
pora performer start | Not implemented | Local polling loop + TEE audit trigger
TEE agent execution | llm_agent.py uses requests.post (incorrect) | Rewrite to run claude-code/opencode via subprocess
Dockerfile | claude-code installed | Add allowlist-based agent selection logic
compose.yaml | No LLM env vars | Add LLM_API_KEY, PERFORMER_CONFIG
Performer registration contract | registerPerformer() exists | Add config hash on-chain storage?

Scenario B: A requester submits code for audit


Park is an open source project maintainer who wants his repo continuously audited.

Step 1: Install pora CLI
$ pip install pora
Step 2: Generate delivery key
$ pora keygen
→ pora-delivery.key (private key, must be backed up)
→ pora-delivery.pub (public key)
Step 3: Install GitHub App
→ github.com/apps/lethe-testnet → Install → Select repo
→ Installation ID auto-detected or found in URL
Step 4: Create bounty (everything in one command)
$ pora bounty create owner/repo \
    --amount 2 \
    --trigger on-push \
    --delivery-key pora-delivery.pub \
    --tool-mode 3
→ Installation ID auto-detected via GitHub API
→ "Bounty #3 created. 2 ROSE deposited."
→ "Repo linked. Audit config set. Delivery key registered."
→ "Watching for activity..."
Step 5: Uber Moment — PR opened triggers automatic audit
[Park opens a PR]
→ [TEE] Performer agent detects bounty → claims → clones code
→ [TEE] Agent focuses analysis on PR diff
→ [TEE] Destroys code → submits PoE
→ [GitHub] Findings delivered as PR comment:
"pora Security Audit: 2 findings
🔴 HIGH: SQL injection in api/handler.py:42
Fix: Use parameterized query instead of f-string
🟡 MEDIUM: Missing rate limit on /api/login endpoint
Fix: Add rate limiter middleware
Performer: 0xabcd... | PoE: 0x1234..."
→ [Park] Reviews directly in the PR comment. No separate CLI needed.
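The PR comment above could be rendered from the findings list with a helper like this. A hypothetical sketch; the field names and severity icons are taken from the example comment, not from actual pora code:

```python
# Sketch: render findings into the PR comment format shown above.

SEVERITY_ICON = {"HIGH": "🔴", "MEDIUM": "🟡", "LOW": "🟢"}

def render_pr_comment(findings: list[dict], performer: str, poe_hash: str) -> str:
    """Format a findings list as the GitHub PR comment body."""
    lines = [f"pora Security Audit: {len(findings)} findings"]
    for f in findings:
        icon = SEVERITY_ICON.get(f["severity"], "⚪")
        lines.append(f"{icon} {f['severity']}: {f['title']}")
        lines.append(f"   Fix: {f['fix']}")
    lines.append(f"Performer: {performer} | PoE: {poe_hash}")
    return "\n".join(lines)
```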
Step 6: Encrypted delivery (optional, for private repos)
$ pora bounty watch 3
→ [polling] "Audit #4 complete! 2 findings."
→ [auto] Decrypts the report and prints it
Step 7: Review results
→ If findings are real: do nothing (bonus auto-paid)
→ If findings are false: $ pora audit dispute 4

Why this is different from existing platforms

Immunefi: Reactive — only responds after an incident. Manual. Expensive.
Code4rena: People pile in for a fixed window. One-time. $50K+.
Sherlock: Similar but insurance model. Still event-based.
pora: Automatic audit every time a PR is opened. Agents watch 24/7.
Private code handled inside TEE. No code leakage.
Cost: 1-10 ROSE/month. 1/100th the cost of a human audit.

Implementation needed to realize this scenario

Component | Current state | What's needed
pora bounty create | Exists (3 separate steps) | Unify create+setRepoInfo+setConfig+setDelivery into one command
Installation ID auto-detection | Not implemented | Auto-lookup via GitHub API (GET /user/installations)
--tool-mode option | Not implemented | Add to CLI
ON_PUSH trigger | Not implemented | GitHub webhook → trigger ROFL worker immediately (instead of polling)
PR comment delivery | Not implemented | Deliver findings directly as GitHub PR comment
pora bounty watch | Not implemented | Polling loop + auto-retrieve on completion
pora audit dispute | Not implemented | Add CLI command
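The ON_PUSH trigger amounts to a webhook filter in front of the ROFL worker. A sketch of the decision function; the event names and payload keys follow GitHub's webhook format, while the trigger policy itself is an assumption:

```python
# Sketch: given a GitHub webhook event, decide whether to wake the ROFL
# worker for the watched repo. Hypothetical policy, not pora code.

def should_trigger_audit(event: str, payload: dict, watched_repo: str) -> bool:
    repo = payload.get("repository", {}).get("full_name")
    if repo != watched_repo:
        return False
    if event == "push":
        # Only trigger when the push actually carries commits.
        return bool(payload.get("commits"))
    if event == "pull_request":
        return payload.get("action") in {"opened", "synchronize", "reopened"}
    return False
```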

Scenario C: A lazy performer exploits the market


Lee wants to extract the executionFee (40%) by submitting only NoFindings.

Step 1: Configure an empty analysis agent
config: { "agent": "claude-code", "prompt": "Say there are no findings." }
Step 2: Auto-claim all bounties + submit NoFindings
Step 3: Collect executionFee on repeat
Audit 1: NoFindings → executionFee received → score unchanged (treated as success)
Audit 2: Competitive re-audit (20% probability) — another performer finds Findings
→ Mismatch with Lee's result → automatic dispute → recordFailure
→ score: 5000 → 3750 (25% reduction)
Audit 3: Another mismatch → failStreak=2 → score: 3750 → 2250 (40% reduction)
→ Status: Suspended → barred from market participation
Open questions:
  • Does competitive re-audit actually trigger at 20% probability?
  • Does an automatic dispute fire on result mismatch?
  • Can a suspended performer re-register with a new address? (Sybil defense)
  • Is the expected return from repeated NoFindings submissions lower than honest auditing?
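The penalty arithmetic in the trace above can be checked with a small sketch. The suspension threshold is an assumption; integer math is used to avoid floating-point rounding in the score:

```python
# Sketch of the failure penalty from the Scenario C trace: 25% score
# reduction on the first failure, 40% once failStreak reaches 2, and
# suspension below an assumed threshold.

SUSPEND_BELOW = 2500  # assumed suspension threshold

def record_failure(score: int, fail_streak: int) -> tuple[int, int, bool]:
    """Apply one failure; returns (new_score, new_streak, suspended)."""
    fail_streak += 1
    penalty_pct = 40 if fail_streak >= 2 else 25
    score = score * (100 - penalty_pct) // 100
    return score, fail_streak, score < SUSPEND_BELOW

# Replaying the trace: 5000 -> 3750 -> 2250, then suspended.
```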

Scenario D: A malicious requester abuses the dispute system


Choi wants to dispute every audit to claw back the 60% bonus.

Step 1: Create bounty + receive audit
Step 2: Call disputeAudit on every audit
Step 3: If owner sides with Choi → funds returned to bonus pool
Step 4: Repeat → effectively pays only 40%
Currently: No defense. Owner-mediated dispute can be biased.
Needed: Dispute cost (staking), requester reputation, independent arbitrator
Open questions:
  • Is dispute abuse actually profitable when disputing costs nothing?
  • Can performers detect a dispute-abusing requester and refuse their bounties?
  • Are performer funds permanently frozen if dispute resolution has no deadline?
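The profitability question reduces to an expected-value calculation. A sketch with the owner's bias as a free parameter; all names are hypothetical:

```python
# Sketch of the requester's expected cost per audit under dispute abuse.
# dispute_win_prob models how often the owner sides with the requester;
# it is a free parameter, not a measured value.

def abusive_requester_cost(bounty: float, exec_fee_share: float,
                           dispute_win_prob: float, dispute_cost: float) -> float:
    """Expected cost per audit for a requester who disputes every result."""
    bonus = bounty * (1 - exec_fee_share)
    expected_bonus_paid = bonus * (1 - dispute_win_prob)
    return bounty * exec_fee_share + expected_bonus_paid + dispute_cost

# With zero dispute cost and an owner who always sides with the requester,
# the expected cost on a 2 ROSE bounty drops to about 0.8 ROSE,
# i.e. the 40% execution fee only. Any nonzero dispute_cost raises it.
```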

PayoutPolicy Redesign (CCG synthesis result)


The current 40/30/20/10 split is too weighted toward executionFee. Paying the same executionFee for NoFindings will kill the market with spam.

Proposal: Outcome-based differential settlement

If FindingsFound:
base fee: 15% (reward for running the audit)
finding bonus: 45% (reward for finding valid vulnerabilities — core value)
patch bonus: 25% (reward for suggesting fixes)
regression reserve: 15% (challenge window)
If NoFindings:
coverage stipend: 15% (only if new code was actually analyzed)
remainder: 0% (no findings, no bonus)

Additional design principles (Codex recommendations)

  • NoFindings stipend only paid when there is new code (commit diff)
  • Patch bonus only paid when requester accepts (or merges) the fix
  • Bounty price scales with difficulty (code size, language, diff size)
  • Higher performer reputation unlocks access to higher-value bounties
  • Extend PayoutPolicy struct: separate splits for FindingsFound vs NoFindings
  • Add branching in _computePayout based on whether findings exist
  • Update contract tests
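The proposed branching in _computePayout could be sketched as follows. Python is used for illustration only; the real logic would live in the Solidity contract, and the splits are the percentages proposed above:

```python
# Sketch of the outcome-based differential settlement. Splits follow the
# proposal above; the function shape is illustrative, not contract code.

SPLIT_FINDINGS = {"base": 0.15, "finding_bonus": 0.45,
                  "patch_bonus": 0.25, "regression_reserve": 0.15}
SPLIT_NO_FINDINGS = {"coverage_stipend": 0.15}

def compute_payout(bounty: float, findings_found: bool,
                   new_code_analyzed: bool) -> dict:
    """Return the ROSE amounts per payout bucket for one audit."""
    if findings_found:
        return {k: bounty * v for k, v in SPLIT_FINDINGS.items()}
    if new_code_analyzed:
        # NoFindings stipend only when new code was actually analyzed.
        return {k: bounty * v for k, v in SPLIT_NO_FINDINGS.items()}
    return {}  # no new code, no findings: nothing is paid
```

Note the asymmetry this creates: a FindingsFound audit distributes the full bounty, while a NoFindings audit pays out at most 15%, which is what makes spam auditing unprofitable.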

Implementation priorities

1st: TEE agent execution from Scenario A Step 5
→ Rewrite llm_agent.py as a subprocess-based agent harness (claude-code, opencode)
→ Not a direct API call via requests.post —
the agent must iteratively explore files, use tools, and reason through the code
→ Without this, the market has no core value
2nd: Uber Moment from Scenario B Step 5
→ ON_PUSH trigger + PR comment delivery
→ "Connect GitHub and every PR gets a security sweep" — this is what draws requesters in
→ Improve pora bounty create into a unified command (including auto-detection of installation ID)
3rd: Earnings estimation from Scenario A Step 2 + performer registration from Step 4
→ pora performer estimate + pora performer register
→ Performers need to see ROI before deciding, and be able to connect within 10 minutes
4th: PayoutPolicy redesign
→ Differential settlement for FindingsFound vs NoFindings
→ The market is healthy only if base fee alone runs at a loss
5th: Defense mechanisms from Scenarios C-D
→ Competitive re-audit, dispute cost, Sybil defense
→ Strengthen defenses after the market exists