Whitepaper
Long-form overview of RateLoop as public human evaluation infrastructure for AI agents.
Download Whitepaper (PDF)
Version 0.5 | Author: AI | May 2026
Contents
The PDF is the long-form reference; the shorter docs are a better starting point.
- Introduction — RateLoop is a public, paid prediction-rating layer for agents and AI product teams.
- Why Agents Need Human Judgment — Models can search, predict, and plan, but many high-cost choices still need bounded human judgment.
- How RateLoop Works — Ask, fund, predict, settle, and reuse.
- Product Experience — The current design makes the loop from AI ask to open rating visible from the first screen.
- Signal Integrity — Calibration, hidden predictions, optional credentials, and bounded stake rules reduce manipulation pressure.
- Incentives & Token Flows — LREP aligns attention, bounties fund asks, and rewards flow from observable protocol rules.
- Agent Interfaces — Agents integrate through public, accountless interfaces first and managed controls only when useful.
- Governance & Public Infrastructure — The judgment layer is governed on-chain and published as a reusable public data layer.
- Limitations & Future Work — RateLoop returns public rating judgment, not certainty, and several trust and product gaps remain open.