API Reference
Complete reference for the Cournot API. This page covers two categories of endpoints: the Public Data API for querying stored market data (no authentication required), and the Proof-of-Reasoning Pipeline API for resolving prediction markets through AI-driven evidence collection and cryptographic verification.
Public Data API
These endpoints serve stored market data directly from the Cournot platform. They require no authentication and are designed for third-party integrations — partners can use them to fetch market details, track resolution status, and display market information in their own applications.
/markets/id?id={market_id}
Returns full details for a given market stored on the Cournot platform, including current status, classification, external data, and AI resolution results (if resolved). This is a direct data endpoint — it reads from Cournot's database, not the AI Oracle.
Request Parameters
| Field | Type | Description |
|---|---|---|
id (required) | integer | The market ID to look up |
Request Body
# No request body — pass id as a query parameter
GET https://interface.cournot.ai/play/polymarket/markets/id?id=185557
cURL Example
curl "https://interface.cournot.ai/play/polymarket/markets/id?id=185557"
Response Fields
| Field | Type | Description |
|---|---|---|
code | integer | Response status code. 0 = success. |
msg | string | Status message, e.g. "Success" |
data.market | Market | Full market object with all fields |
data.market.id | integer | Unique market identifier |
data.market.title | string | Market question title |
data.market.description | string | Resolution criteria (may contain HTML) |
data.market.platform_url | string | Link to the market on the source platform |
data.market.source | string | Source platform: "polymarket", "kalshi", "limitless", "myriad", "predictfun" |
data.market.status | string | Market status: "monitoring", "pending_verification", or "resolved" |
data.market.market_timing_type | string | "event_based" or "time_based" |
data.market.start_time | string | Market start time (ISO 8601) |
data.market.end_time | string | Market end time (ISO 8601) |
data.market.ai_prompt | object | Compiled AI prompt specification used for resolution |
data.market.ai_result | string | AI resolution result text (empty if not yet resolved) |
data.market.ai_outcome | string | AI outcome: "YES", "NO", "INVALID", or empty |
data.external_data | ExternalData[] | Array of external data sources collected for this market |
data.classification | Classification | Market category, subcategory, and competition |
Response Example
{
"code": 0,
"msg": "Success",
"data": {
"market": {
"id": 185557,
"title": "Will Denmark use a 4-3-3 starting lineup?",
"description": "<resolution criteria...>",
"platform_url": "https://polymarket.com/event/...",
"source": "polymarket",
"status": "monitoring",
"market_timing_type": "time_based",
"start_time": "2025-03-20T00:00:00Z",
"end_time": "2026-03-26T21:00:00Z",
"ai_prompt": { "..." : "..." },
"ai_result": "",
"ai_outcome": ""
},
"external_data": [
{
"collector": "espn_soccer",
"data": {
"event": "Denmark vs North Macedonia",
"status": "pre",
"venue": "Parken Stadium"
}
}
],
"classification": {
"category": "sports",
"subcategory": "football",
"competition": "FIFA World Cup Qualifiers"
}
}
}
- No authentication required — this is a public endpoint for third-party integrations.
- This endpoint returns stored data from the Cournot platform, not live AI Oracle results.
- The ai_result and ai_outcome fields are empty strings until the market is resolved.
- The description field may contain HTML markup for resolution criteria.
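Since this endpoint is unauthenticated, a client only needs a GET request and some response parsing. A minimal sketch; `fetch_market` and `resolution` are hypothetical helper names, and the status check follows the documented response shape (ai_outcome stays empty until status is "resolved"):

```python
import json
from urllib.request import urlopen

MARKET_URL = "https://interface.cournot.ai/play/polymarket/markets/id?id={}"

def fetch_market(market_id):
    """Fetch a stored market record from the public data endpoint."""
    with urlopen(MARKET_URL.format(market_id)) as resp:
        return json.loads(resp.read())

def resolution(payload):
    """Return the AI outcome if the market is resolved, else None."""
    market = payload.get("data", {}).get("market", {})
    if payload.get("code") != 0 or market.get("status") != "resolved":
        return None
    return market.get("ai_outcome") or None

# Works on the documented response shape without a network call:
sample = {"code": 0, "data": {"market": {"status": "monitoring", "ai_outcome": ""}}}
print(resolution(sample))  # None: ai_result/ai_outcome are empty until resolved
```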
/markets/disputes?code={code}&market_id={market_id}&dispute_type=manual&page_num=1&page_size=10
Returns the dispute history for a given market. Disputes are challenges submitted by admins against a market's AI resolution, each containing the original and proposed outcomes with reasoning.
Request Parameters
| Field | Type | Description |
|---|---|---|
code (required) | string | Admin access code for authentication |
market_id (required) | integer | The market ID to fetch disputes for |
dispute_type (required) | string | Type of dispute to fetch. Currently only "manual" is supported |
page_num | integer | Page number for pagination (default: 1) |
page_size | integer | Number of disputes per page (default: 10) |
Request Body
# Pass parameters as query strings
GET https://interface.cournot.ai/play/polymarket/markets/disputes?code={code}&market_id=185557&dispute_type=manual&page_num=1&page_size=10
cURL Example
curl "https://interface.cournot.ai/play/polymarket/markets/disputes?code={code}&market_id=185557&dispute_type=manual&page_num=1&page_size=10"
Response Fields
| Field | Type | Description |
|---|---|---|
code | integer | Response status code. 0 = success. |
msg | string | Status message, e.g. "Success" |
data.disputes | Dispute[] | Array of dispute objects for this market |
data.disputes[].id | integer | Unique dispute identifier |
data.disputes[].market_id | integer | Associated market ID |
data.disputes[].dispute_type | string | Dispute type, e.g. "manual" |
data.disputes[].previous_ai_result | string | The AI result before the dispute |
data.disputes[].previous_ai_outcome | string | The AI outcome before the dispute (YES/NO/INVALID) |
data.disputes[].proposed_ai_result | string | The proposed replacement AI result |
data.disputes[].proposed_ai_outcome | string | The proposed replacement outcome |
data.disputes[].reason | string | Reason provided for the dispute |
data.disputes[].submitted_by | string | Name of the admin who submitted the dispute |
data.disputes[].is_accepted | boolean | Whether the dispute has been accepted |
data.disputes[].accepted_by | string | Name of the admin who accepted the dispute (empty if not accepted) |
data.disputes[].accepted_time | string | When the dispute was accepted (ISO 8601, empty if not accepted) |
data.disputes[].created_time | string | When the dispute was created (ISO 8601) |
data.total | integer | Total number of disputes matching the query |
Response Example
{
"code": 0,
"msg": "Success",
"data": {
"disputes": [
{
"id": 42,
"market_id": 185557,
"dispute_type": "manual",
"previous_ai_result": "{\"outcome\":\"YES\",\"confidence\":0.92,...}",
"previous_ai_outcome": "YES",
"proposed_ai_result": "{\"outcome\":\"NO\",\"confidence\":0.85,...}",
"proposed_ai_outcome": "NO",
"reason": "New evidence shows the event was cancelled",
"submitted_by": "admin_1",
"is_accepted": true,
"accepted_by": "admin_2",
"accepted_time": "2025-06-15T14:30:00Z",
"created_time": "2025-06-15T12:00:00Z"
}
],
"total": 1
}
}
- Requires a valid admin access code.
- Only dispute_type "manual" is currently supported.
- Disputes are returned in reverse chronological order (newest first).
- The previous_ai_result and proposed_ai_result fields contain JSON-encoded AI result objects.
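Because all five parameters travel as query strings, it is easy to get the encoding wrong by hand. A small sketch of building the paginated dispute URL with the standard library; `disputes_url` is a hypothetical helper name:

```python
from urllib.parse import urlencode

DISPUTES_BASE = "https://interface.cournot.ai/play/polymarket/markets/disputes"

def disputes_url(code, market_id, page_num=1, page_size=10):
    """Build the dispute-history query URL; only "manual" is supported today."""
    params = {"code": code, "market_id": market_id,
              "dispute_type": "manual",
              "page_num": page_num, "page_size": page_size}
    return f"{DISPUTES_BASE}?{urlencode(params)}"

print(disputes_url("ADMIN_CODE", 185557))
```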
PoR Pipeline API
The Proof-of-Reasoning pipeline resolves prediction market questions in five sequential steps, producing a cryptographically verifiable PoR bundle. These endpoints call the AI Oracle directly and require authentication via an access code. All requests use a gateway envelope format.
All API requests are made to a single gateway endpoint. The gateway accepts a standardized envelope and routes to internal pipeline paths. Responses are also wrapped in a gateway envelope.
Base Endpoint
https://interface.cournot.ai/play/polymarket/ai_data
Request Envelope
Every request must be wrapped in this envelope. The actual payload goes inside post_data as a JSON-stringified string.
{
"code": "<YOUR_ACCESS_CODE>",
"post_data": "<STRINGIFIED_JSON_PAYLOAD>",
"path": "<INTERNAL_PATH>",
"method": "<HTTP_METHOD>"
}
Envelope Fields
| Field | Type | Description |
|---|---|---|
code (required) | string | Your API access code for authentication |
post_data (required) | string | JSON-stringified request payload. Use "{}" for GET requests with no body. |
path (required) | string | Internal pipeline path, e.g. "/step/prompt" or "/capabilities" |
method (required) | string | HTTP method for the internal route: "POST" or "GET" |
Gateway Response Envelope
All responses are wrapped in this envelope. A successful call returns code: 0. The step-level response is JSON-stringified inside data.result and must be parsed separately.
{
"code": 0,
"msg": "Success",
"data": {
"result": "<STRINGIFIED_STEP_RESPONSE>"
}
}
All requests require a valid access code passed in the code field of the gateway envelope. If the code is invalid or missing, the gateway returns error code 4100.
// Valid request
{
"code": "YOUR_ACCESS_CODE", // Required in every request
"post_data": "{}",
"path": "/capabilities",
"method": "GET"
}
// Error response for invalid code
{
"code": 4100,
"msg": "Invalid access code",
"data": null
}
The PoR pipeline consists of five sequential steps. Each step produces artifacts consumed by subsequent steps. Steps must be called in order.
| Method | Path | Description |
|---|---|---|
POST | /step/prompt | Compile question into prompt spec + tool plan |
POST | /step/collect | Gather evidence from external sources |
POST | /step/quality_check | Evaluate evidence quality, produce retry hints |
POST | /step/audit | Produce reasoning trace from evidence |
POST | /step/judge | Determine outcome and confidence |
POST | /step/bundle | Build cryptographic PoR bundle |
POST | /step/resolve | Run full pipeline in a single call |
POST | /validate | Validate and compile a market query |
POST | /dispute | Structured dispute-driven rerun |
POST | /dispute/llm | LLM-assisted dispute from 3 user inputs |
GET | /capabilities | List available providers, collectors, and steps |
For GET requests, set post_data to "{}" in the envelope.
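The two fiddly parts of the gateway are the double encoding: post_data must be a JSON string inside the envelope, and data.result in the response is itself a JSON string that needs a second parse. A minimal sketch of both halves; `build_envelope` and `unwrap` are hypothetical helper names:

```python
import json

def build_envelope(code, path, payload=None, method="POST"):
    """Wrap a step payload in the gateway envelope.

    post_data must be a JSON *string*; "{}" for bodiless GETs."""
    return {"code": code,
            "post_data": json.dumps(payload) if payload else "{}",
            "path": path,
            "method": method}

def unwrap(gateway_resp):
    """Parse the step-level response out of a gateway envelope.

    code 0 means success; data.result is JSON-stringified and must be
    parsed separately."""
    if gateway_resp.get("code") != 0:
        raise RuntimeError(gateway_resp.get("msg", "gateway error"))
    return json.loads(gateway_resp["data"]["result"])

env = build_envelope("YOUR_ACCESS_CODE", "/capabilities", method="GET")
print(env["post_data"])  # "{}"
```

POST the envelope as JSON to the base endpoint with whatever HTTP client you prefer; the actual step path and method go in the envelope, not on the request itself.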
/step/prompt
Compiles a natural-language market question into a structured prompt specification and tool plan. This is the entry point of the pipeline. The prompt spec defines the resolution rules, allowed sources, and prediction semantics. The tool plan specifies which data requirements need to be fulfilled and from which sources. When the query involves a scheduled event, the LLM compiler auto-detects a temporal constraint and includes it in prompt_spec.extra.temporal_constraint — extract and pass this to /step/audit and /step/judge.
Request Parameters
| Field | Type | Description |
|---|---|---|
user_input (required) | string | The natural-language market question to resolve |
strict_mode (optional) | boolean | When true, only official sources are allowed. Defaults to false. |
llm_provider (optional) | string | LLM provider override (e.g. "openai", "anthropic", "google") |
llm_model (optional) | string | LLM model override (e.g. "gpt-4o") |
Request Body
{
"user_input": "Will Bitcoin exceed 100k by March 2025?",
"strict_mode": false
}
cURL Example
curl -X POST 'https://interface.cournot.ai/play/polymarket/ai_data' \
-H 'Content-Type: application/json' \
--max-time 300 \
-d '{
"code": "YOUR_CODE",
"post_data": "{\"user_input\": \"Will Bitcoin exceed 100k by March 2025?\", \"strict_mode\": false}",
"path": "/step/prompt",
"method": "POST"
}'Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether the step succeeded |
market_id | string | Generated market identifier, e.g. "mk_3f5b9c7e" |
prompt_spec | PromptSpec | Structured specification including market definition, resolution rules, and data requirements |
prompt_spec.extra.temporal_constraint | object \| null | Auto-detected temporal constraint with { enabled, event_time, reason }. Extract and pass to /step/audit and /step/judge. |
tool_plan | ToolPlan | Execution plan referencing requirements and sources to query |
metadata | object | Compiler info, strict_mode flag, requirement count |
error | string \| null | Error message if compilation failed |
Response Example
{
"ok": true,
"market_id": "mk_3f5b9c7e",
"prompt_spec": {
"schema_version": "v1",
"task_type": "prediction_resolution",
"market": {
"market_id": "mk_3f5b9c7e",
"question": "Will Bitcoin exceed 100k by March 2025?",
"market_type": "binary",
"possible_outcomes": ["YES", "NO"],
"resolution_rules": {
"rules": [
{ "rule_id": "r1", "description": "...", "priority": 1 }
]
}
},
"prediction_semantics": {
"target_entity": "Bitcoin",
"predicate": "exceeds",
"threshold": "100000 USD",
"timeframe": "by March 2025"
},
"data_requirements": [
{
"requirement_id": "req_01",
"description": "Current BTC price data",
"source_targets": [...]
}
]
},
"tool_plan": {
"plan_id": "plan_abc123",
"requirements": ["req_01"],
"sources": ["source_coinmarketcap"],
"min_provenance_tier": 1,
"allow_fallbacks": true
},
"metadata": {
"compiler": "prompt-compiler-v2",
"strict_mode": false,
"requirement_count": 3
},
"error": null
}
Schema Detail
PromptSpec {
schema_version "v1"
task_type "prediction_resolution"
market {
market_id, question, event_definition, timezone
resolution_deadline, resolution_window { start, end }
resolution_rules.rules[] { rule_id, description, priority }
allowed_sources[] { source_id, kind, allow, min_provenance_tier }
market_type "binary"
possible_outcomes ["YES", "NO"]
}
prediction_semantics { target_entity, predicate, threshold, timeframe }
data_requirements[] {
requirement_id, description
source_targets[] { source_id, uri, method, expected_content_type }
selection_policy { strategy, min_sources, max_sources, quorum }
}
extra {
strict_mode, compiler, assumptions[]
confidence_policy { min_confidence_for_yesno, default_confidence }
temporal_constraint? { // auto-detected for scheduled events
enabled true
event_time ISO 8601 UTC
reason string
}
}
}
ToolPlan {
plan_id, requirements[], sources[]
min_provenance_tier, allow_fallbacks
}
- This step has no dependencies and can be called directly with just user_input.
- The generated prompt_spec and tool_plan are passed to all subsequent steps.
- strict_mode constrains the pipeline to only use official/authoritative sources.
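Since the temporal constraint must be extracted from prompt_spec.extra and forwarded to /step/audit and /step/judge, it is worth having one helper for it. A minimal sketch; `temporal_constraint` is a hypothetical helper name operating on the documented response shape:

```python
def temporal_constraint(prompt_step):
    """Pull the auto-detected constraint out of a /step/prompt response.

    Returns None when the compiler found no scheduled event; otherwise the
    {enabled, event_time, reason} object to forward to /step/audit and
    /step/judge."""
    tc = (prompt_step.get("prompt_spec", {})
                     .get("extra", {})
                     .get("temporal_constraint"))
    return tc if tc and tc.get("enabled") else None

resp = {"ok": True, "prompt_spec": {"extra": {"temporal_constraint": {
        "enabled": True, "event_time": "2027-05-31T00:00:00Z",
        "reason": "Champions League final"}}}}
print(temporal_constraint(resp)["event_time"])  # 2027-05-31T00:00:00Z
```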
/step/collect
Runs the configured collector agents to gather evidence bundles from external sources. Each collector queries different source types (web search, APIs, databases) and returns structured evidence items with provenance metadata. Multiple collectors can run in parallel. When retrying after a quality check failure, pass the quality_feedback field with retry hints to adjust search strategy.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Structured prompt specification from Step 1 |
tool_plan (required) | ToolPlan | Execution plan from Step 1 |
collectors (optional) | string[] | List of collector names to run. See /capabilities for available collectors. |
include_raw_content (optional) | boolean | Whether to include raw fetched content in the response. Defaults to false. |
quality_feedback (optional) | object | Retry hints from /step/quality_check scorecard.retry_hints. Adjusts search queries, domains, and focus areas. |
llm_provider (optional) | string | LLM provider override |
llm_model (optional) | string | LLM model override |
Request Body
{
"prompt_spec": { ... },
"tool_plan": { ... },
"collectors": ["CollectorWebPageReader", "CollectorOpenSearch"],
"include_raw_content": false
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether collection completed successfully |
collectors_used | string[] | Names of collectors that actually ran |
evidence_bundles | EvidenceBundle[] | Array of evidence bundles, one per collector |
execution_logs | ExecutionLog[] | Per-call timing, tool name, input/output, and errors |
errors | string[] | Non-fatal errors encountered during collection |
Response Example
{
"ok": true,
"collectors_used": ["CollectorWebPageReader", "CollectorOpenSearch"],
"evidence_bundles": [
{
"bundle_id": "eb_a1b2c3",
"market_id": "mk_3f5b9c7e",
"collector_name": "CollectorWebPageReader",
"weight": 1.0,
"items": [
{
"evidence_id": "ev_abc123",
"source_uri": "https://...",
"source_name": "CoinMarketCap",
"tier": 1,
"fetched_at": "2025-01-15T10:30:00Z",
"content_hash": "sha256:a1b2c3...",
"parsed_excerpt": "Bitcoin is currently trading at ...",
"success": true,
"extracted_fields": {
"confidence": 0.85,
"resolution_status": "OPEN"
}
}
],
"collected_at": "2025-01-15T10:30:05Z",
"execution_time_ms": 4523
}
],
"execution_logs": [
{
"plan_id": "plan_abc123",
"calls": [
{
"tool": "CollectorWebPageReader",
"started_at": "...",
"ended_at": "...",
"error": null
}
]
}
],
"errors": []
}
Schema Detail
EvidenceBundle {
bundle_id, market_id, plan_id, collector_name
weight number (default 1.0)
items[] {
evidence_id hex hash
requirement_id links back to data_requirements
provenance {
source_id, source_uri, tier, fetched_at
content_hash SHA-256 of raw content
cache_hit boolean
}
content_type, parsed_value, success, error
extracted_fields { ... }
}
collected_at, execution_time_ms
requirements_fulfilled[], requirements_unfulfilled[]
}
ExecutionLog {
plan_id
calls[] { tool, input, output, started_at, ended_at, error }
started_at, ended_at
}
- If collectors is omitted, the pipeline uses a default set based on the tool plan.
- Collectors run in priority order (highest first). Higher-priority collectors are considered more reliable.
- Evidence items include provenance metadata (source_uri, tier, content_hash) for auditability.
- Set include_raw_content: true to receive the full fetched content (increases response size significantly).
/step/quality_check
Evaluates collected evidence quality before proceeding to audit. Returns a scorecard with quality signals and machine-readable retry hints. If quality is below threshold, retry /step/collect with the retry_hints as quality_feedback. This step is optional but recommended — if you skip it, audit and judge still work.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Compiled prompt specification from Step 1 |
evidence_bundles (required) | EvidenceBundle[] | Evidence bundles from Step 2 |
Request Body
{
"prompt_spec": { ... },
"evidence_bundles": [ ... ]
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether the quality check ran successfully |
scorecard | QualityScorecard \| null | Quality scorecard with metrics, flags, and retry hints |
scorecard.source_match | "FULL" \| "PARTIAL" \| "NONE" | How well evidence sources match required domains |
scorecard.data_type_match | boolean | Whether evidence data types match what was requested |
scorecard.collector_agreement | "AGREE" \| "DISAGREE" \| "SINGLE" | Whether multiple collectors agree on the outcome |
scorecard.requirements_coverage | number (0-1) | Fraction of data requirements covered by evidence |
scorecard.quality_level | "HIGH" \| "MEDIUM" \| "LOW" | Overall quality assessment |
scorecard.quality_flags | string[] | Issue flags like "source_mismatch", "requirements_gap" |
scorecard.meets_threshold | boolean | true = proceed to audit, false = consider retrying |
scorecard.recommendations | string[] | Human-readable improvement suggestions |
scorecard.retry_hints | object | Machine-readable hints to pass as quality_feedback to /step/collect |
meets_threshold | boolean | Top-level convenience duplicate of scorecard.meets_threshold |
errors | string[] | Non-fatal errors |
Response Example
{
"ok": true,
"scorecard": {
"source_match": "PARTIAL",
"data_type_match": true,
"collector_agreement": "AGREE",
"requirements_coverage": 0.65,
"quality_level": "MEDIUM",
"quality_flags": ["source_mismatch"],
"meets_threshold": false,
"recommendations": [
"Try broader search terms for requirement req_001"
],
"retry_hints": {
"search_queries": ["bitcoin price 2025 prediction"],
"required_domains": ["coinmarketcap.com"],
"skip_domains": [],
"data_type_hint": null,
"focus_requirements": ["req_001"],
"collector_guidance": "Focus on price data sources"
}
},
"meets_threshold": false,
"errors": []
}
- Call after /step/collect. If meets_threshold is false and retry_hints is non-empty, retry /step/collect with quality_feedback set to retry_hints.
- Retry up to 2 times, merging new evidence bundles with the existing ones.
- Pass the scorecard to /step/audit and /step/judge as quality_scorecard so they are aware of evidence quality issues.
- This step is optional — audit and judge work without it; you just lose the quality feedback loop.
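The collect/quality-check retry loop described in these notes can be sketched as follows. `call_step(path, payload)` is a placeholder for your gateway round trip (envelope in, parsed step response out), and the retry cap follows the "retry up to 2 times" guidance:

```python
MAX_QUALITY_RETRIES = 2  # the notes above suggest retrying at most twice

def collect_with_quality_loop(call_step, prompt_spec, tool_plan):
    """Retry /step/collect using retry_hints until the scorecard passes.

    Returns (evidence_bundles, scorecard); bundles from retries are merged
    with earlier ones rather than replacing them."""
    feedback = None
    bundles = []
    scorecard = {}
    for _ in range(MAX_QUALITY_RETRIES + 1):
        req = {"prompt_spec": prompt_spec, "tool_plan": tool_plan}
        if feedback:
            req["quality_feedback"] = feedback  # hints from the last scorecard
        collected = call_step("/step/collect", req)
        bundles.extend(collected.get("evidence_bundles", []))  # merge, don't replace
        qc = call_step("/step/quality_check",
                       {"prompt_spec": prompt_spec, "evidence_bundles": bundles})
        scorecard = qc.get("scorecard") or {}
        if qc.get("meets_threshold") or not scorecard.get("retry_hints"):
            return bundles, scorecard
        feedback = scorecard["retry_hints"]
    return bundles, scorecard  # give up after the retry budget; caller decides
```

Pass the returned scorecard on to /step/audit and /step/judge as quality_scorecard.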
/step/audit
Analyzes collected evidence against the prompt specification to produce a structured reasoning trace. The audit step evaluates each piece of evidence, identifies conflicts, builds reasoning chains, and produces a preliminary outcome assessment with confidence score. When temporal_constraint is provided, the auditor computes temporal status (FUTURE/ACTIVE/PAST) and may force INVALID for future events.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Structured prompt specification from Step 1 |
evidence_bundles (required) | EvidenceBundle[] | Evidence bundles from Step 2 |
quality_scorecard (optional) | object \| null | Quality scorecard from /step/quality_check. Informs auditor about evidence quality issues. |
temporal_constraint (optional) | object \| null | From prompt_spec.extra.temporal_constraint. Enables temporal guard (FUTURE/ACTIVE/PAST status). |
llm_provider (optional) | string | LLM provider override |
llm_model (optional) | string | LLM model override |
Request Body
{
"prompt_spec": { ... },
"evidence_bundles": [ ... ],
"quality_scorecard": { ... },
"temporal_constraint": {
"enabled": true,
"event_time": "2027-05-31T00:00:00Z",
"reason": "Champions League final"
}
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether audit succeeded |
reasoning_trace | ReasoningTrace | Structured trace with reasoning steps, conflict detection, and preliminary outcome |
errors | string[] | Errors encountered during reasoning |
Response Example
{
"ok": true,
"reasoning_trace": {
"trace_id": "tr_xyz789",
"market_id": "mk_3f5b9c7e",
"steps": [
{
"step_id": "s1",
"step_type": "evidence_evaluation",
"description": "Evaluate BTC price data from CoinMarketCap",
"evidence_refs": [
{
"evidence_id": "ev_abc123",
"source_id": "source_coinmarketcap",
"field_used": "current_price",
"value_at_reference": "97500"
}
],
"conclusion": "BTC is currently at $97,500, close to but below the $100k threshold",
"confidence_delta": 0.1
}
],
"conflicts": [],
"evidence_summary": "Multiple sources confirm BTC near $97.5k ...",
"reasoning_summary": "Based on current price trajectory ...",
"preliminary_outcome": "YES",
"preliminary_confidence": 0.72,
"recommended_rule_id": "r1"
},
"errors": []
}
Schema Detail
ReasoningTrace {
trace_id, market_id, bundle_id
steps[] {
step_id, step_type, description
evidence_refs[] {
evidence_id, requirement_id, source_id
field_used, value_at_reference
}
rule_id, input_summary, output_summary
conclusion, confidence_delta
depends_on[], metadata
}
conflicts[]
evidence_summary human-readable text
reasoning_summary human-readable text
preliminary_outcome "YES" | "NO" | "INVALID"
preliminary_confidence number (0-1)
recommended_rule_id
}
- The preliminary_outcome here is advisory. The final outcome is determined by the Judge step.
- The reasoning trace captures step-by-step logic that can be independently verified.
- Conflicts between evidence sources are explicitly identified and recorded.
/step/judge
Applies resolution rules to the evidence and reasoning trace to produce a final verdict. The judge evaluates the reasoning validity, applies confidence adjustments, and determines the definitive outcome. Includes an independent LLM review of the reasoning process. When temporal_constraint is provided, computes temporal status and may force INVALID for future or active events.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Structured prompt specification from Step 1 |
evidence_bundles (required) | EvidenceBundle[] | Evidence bundles from Step 2 |
reasoning_trace (required) | ReasoningTrace | Reasoning trace from Step 3 |
quality_scorecard (optional) | object \| null | Quality scorecard from /step/quality_check. Informs judge about evidence quality issues. |
temporal_constraint (optional) | object \| null | From prompt_spec.extra.temporal_constraint. Enables temporal guard (FUTURE/ACTIVE/PAST status). |
llm_provider (optional) | string | LLM provider override |
llm_model (optional) | string | LLM model override |
Request Body
{
"prompt_spec": { ... },
"evidence_bundles": [ ... ],
"reasoning_trace": { ... },
"quality_scorecard": { ... },
"temporal_constraint": {
"enabled": true,
"event_time": "2027-05-31T00:00:00Z",
"reason": "Champions League final"
}
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether judgment succeeded |
verdict | Verdict | Full verdict object with cryptographic hashes and LLM review |
outcome | string | Final outcome: "YES", "NO", or "INVALID" |
confidence | number | Final confidence score between 0 and 1 |
errors | string[] | Errors encountered during judgment |
Response Example
{
"ok": true,
"outcome": "YES",
"confidence": 0.78,
"verdict": {
"market_id": "mk_3f5b9c7e",
"outcome": "YES",
"confidence": 0.78,
"resolution_time": "2025-01-15T10:31:00Z",
"resolution_rule_id": "r1",
"prompt_spec_hash": "0xabc123...",
"evidence_root": "0xdef456...",
"reasoning_root": "0x789ghi...",
"justification_hash": "0xjkl012...",
"selected_leaf_refs": ["ev_abc123", "ev_def456"],
"metadata": {
"strict_mode": false,
"trace_id": "tr_xyz789",
"justification": "Evidence strongly suggests BTC will exceed $100k ...",
"llm_review": {
"outcome": "YES",
"confidence": 0.78,
"reasoning_valid": true,
"reasoning_issues": [],
"confidence_adjustments": [],
"final_justification": "The reasoning chain is sound ..."
}
}
},
"errors": []
}
Schema Detail
Verdict {
market_id, outcome, confidence
resolution_time, resolution_rule_id
prompt_spec_hash hex hash
evidence_root hex Merkle root
reasoning_root hex Merkle root
justification_hash hex hash
selected_leaf_refs[] evidence_id references
metadata {
strict_mode, trace_id, bundle_id
bundle_count, step_count, conflict_count
justification human-readable string
llm_review {
outcome, confidence, resolution_rule_id
reasoning_valid boolean
reasoning_issues[], confidence_adjustments[]
final_justification
}
}
}
- The outcome is one of: "YES", "NO", or "INVALID" (when evidence is insufficient or the temporal guard triggers).
- Confidence ranges from 0 to 1. Values below the configured min_confidence_for_yesno threshold result in "INVALID".
- The verdict includes Merkle root hashes for cryptographic verification of the full reasoning chain.
- The llm_review provides an independent assessment of whether the reasoning is sound.
- Temporal status: FUTURE (event_time > now) → INVALID; ACTIVE (now - event_time < 24h) → INVALID unless concluded; PAST (≥24h) → normal resolution.
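The temporal-status rule in the last note can be expressed directly in code. A minimal sketch of the classification only (the "unless concluded" exception for ACTIVE events depends on evidence and is not modeled here); `temporal_status` is a hypothetical helper name:

```python
from datetime import datetime, timedelta, timezone

def temporal_status(event_time_iso, now=None):
    """Classify an event per the temporal guard described above:
    FUTURE -> event has not started (forced INVALID)
    ACTIVE -> started less than 24h ago (INVALID unless concluded)
    PAST   -> started 24h or more ago (normal resolution)"""
    now = now or datetime.now(timezone.utc)
    event = datetime.fromisoformat(event_time_iso.replace("Z", "+00:00"))
    if event > now:
        return "FUTURE"
    if now - event < timedelta(hours=24):
        return "ACTIVE"
    return "PAST"

ref = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(temporal_status("2027-05-31T00:00:00Z", now=ref))  # FUTURE
```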
/step/bundle
Hashes all pipeline artifacts into a cryptographic Proof-of-Reasoning (PoR) bundle with Merkle roots. This is the final step that produces a tamper-evident, verifiable record of the entire resolution process. The PoR root hash can be used to independently verify that no artifacts were modified after resolution.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Structured prompt specification from Step 1 |
evidence_bundles (required) | EvidenceBundle[] | Evidence bundles from Step 2 |
reasoning_trace (required) | ReasoningTrace | Reasoning trace from Step 3 |
verdict (required) | Verdict | Verdict object from Step 4 |
Request Body
{
"prompt_spec": { ... },
"evidence_bundles": [ ... ],
"reasoning_trace": { ... },
"verdict": { ... }
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether bundling succeeded |
por_bundle | PoRBundle | Complete Proof-of-Reasoning bundle with all hashes and the full verdict |
por_root | string | Top-level Merkle root hash, e.g. "0xba9ec9c2..." |
roots | object | Individual root hashes for each pipeline artifact |
errors | string[] | Errors encountered during bundling |
Response Example
{
"ok": true,
"por_root": "0xba9ec9c2d4e6f8a0b1c3d5e7f9a2b4c6d8e0f1a3",
"roots": {
"prompt_spec_hash": "0xabc123...",
"evidence_root": "0xdef456...",
"reasoning_root": "0x789ghi...",
"por_root": "0xba9ec9c2..."
},
"por_bundle": {
"schema_version": "1.0",
"protocol_version": "1.0",
"market_id": "mk_3f5b9c7e",
"prompt_spec_hash": "0xabc123...",
"evidence_root": "0xdef456...",
"reasoning_root": "0x789ghi...",
"verdict_hash": "0xjkl012...",
"por_root": "0xba9ec9c2...",
"verdict": { ... },
"tee_attestation": null,
"signatures": {},
"created_at": "2025-01-15T10:31:05Z",
"metadata": {
"pipeline_version": "2.1.0",
"mode": "live"
}
},
"errors": []
}
Schema Detail
PoRBundle {
schema_version, protocol_version, market_id
prompt_spec_hash hex
evidence_root hex
reasoning_root hex
verdict_hash hex
por_root hex (master Merkle root)
verdict full Verdict object
tee_attestation null | object
signatures {}
created_at, metadata { pipeline_version, mode }
}
roots {
prompt_spec_hash hex
evidence_root hex
reasoning_root hex
por_root hex
}
- The por_root is the master Merkle root that covers all other roots. Use it for single-hash verification.
- Individual roots (prompt_spec_hash, evidence_root, reasoning_root) enable partial verification of specific pipeline stages.
- tee_attestation and signatures are reserved for future TEE (Trusted Execution Environment) support.
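Since the response carries the roots both at the top level and inside the bundle, a cheap local sanity check is to confirm the two copies agree. This does not recompute the Merkle hashes from the artifacts (the hashing scheme is not specified here), it only checks internal consistency; `roots_consistent` is a hypothetical helper name:

```python
def roots_consistent(step_response):
    """Check that the top-level roots match the fields embedded in the PoR
    bundle. A cheap local check; full verification would recompute the
    hashes from the artifacts themselves."""
    roots = step_response.get("roots", {})
    bundle = step_response.get("por_bundle", {})
    keys = ("prompt_spec_hash", "evidence_root", "reasoning_root", "por_root")
    return all(roots.get(k) == bundle.get(k) for k in keys)

resp = {"roots": {"prompt_spec_hash": "0xa", "evidence_root": "0xb",
                  "reasoning_root": "0xc", "por_root": "0xd"},
        "por_bundle": {"prompt_spec_hash": "0xa", "evidence_root": "0xb",
                       "reasoning_root": "0xc", "por_root": "0xd"}}
print(roots_consistent(resp))  # True
```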
/step/resolve
Runs the full resolution pipeline (collect → quality check → audit → judge → PoR bundle) in a single call. Quality check and temporal constraint are handled automatically — temporal_constraint is extracted from prompt_spec.extra and quality check runs with a retry loop by default.
Request Parameters
| Field | Type | Description |
|---|---|---|
prompt_spec (required) | PromptSpec | Compiled prompt specification from /step/prompt |
tool_plan (required) | ToolPlan | Tool execution plan from /step/prompt |
collectors (optional) | string[] | Which collectors to use (default: ["CollectorWebPageReader"]) |
execution_mode (optional) | string | "production", "development" (default), or "test" |
enable_quality_check (optional) | boolean | Run quality check with retry loop after collection (default: true) |
max_quality_retries (optional) | integer | Max quality check retry iterations, 0–5 (default: 2) |
llm_provider (optional) | string | LLM provider override |
llm_model (optional) | string | LLM model override |
Request Body
{
"prompt_spec": { ... },
"tool_plan": { ... },
"collectors": ["CollectorOpenSearch"],
"execution_mode": "development",
"enable_quality_check": true,
"max_quality_retries": 2
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether the full pipeline succeeded |
outcome | string | "YES", "NO", or "INVALID" |
confidence | number | Final confidence score 0–1 |
por_root | string | PoR Merkle root hash |
artifacts | object | All pipeline artifacts (evidence_bundles, reasoning_trace, verdict, por_bundle) |
errors | string[] | Non-fatal errors |
Response Example
{
"ok": true,
"outcome": "YES",
"confidence": 0.85,
"por_root": "0xba9ec9c2...",
"artifacts": {
"evidence_bundles": [ ... ],
"reasoning_trace": { ... },
"verdict": { ... },
"por_bundle": { ... }
},
"errors": []
}
- No extra fields are needed for quality check or temporal constraint — both are fully automatic.
- Set enable_quality_check: false to skip the quality check loop and go directly to audit.
- This is equivalent to calling collect → quality_check → audit → judge → bundle individually (after compiling with /step/prompt).
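For most integrations the whole resolution is therefore a two-call flow: compile with /step/prompt, then hand the result to /step/resolve. A minimal sketch; `call_step(path, payload)` and `resolve_question` are hypothetical names standing in for your gateway round trip:

```python
def resolve_question(call_step, question, strict_mode=False):
    """Two-call flow: compile with /step/prompt, then let /step/resolve run
    the rest of the pipeline (quality loop and temporal guard are handled
    automatically by /step/resolve)."""
    compiled = call_step("/step/prompt",
                         {"user_input": question, "strict_mode": strict_mode})
    if not compiled.get("ok"):
        raise RuntimeError(compiled.get("error") or "prompt compilation failed")
    return call_step("/step/resolve",
                     {"prompt_spec": compiled["prompt_spec"],
                      "tool_plan": compiled["tool_plan"],
                      "enable_quality_check": True})
```

The returned object carries outcome, confidence, por_root, and the full artifacts for archival or later verification.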
/validate
Validates and compiles a market query in a single call. Runs LLM validation (classify market type, validate fields, assess resolvability), prompt compilation (PromptSpec + ToolPlan), and source reachability probes in parallel. The response includes both the validation result and the compiled prompt_spec/tool_plan ready for /step/collect.
Request Parameters
| Field | Type | Description |
|---|---|---|
user_input (required) | string | The prediction market query to validate and compile (1–8000 chars) |
strict_mode (optional) | boolean | Enable strict mode for deterministic hashing (default: true) |
llm_provider (optional) | string | LLM provider override |
llm_model (optional) | string | LLM model override |
Request Body
{
"user_input": "Highest temperature in Buenos Aires on March 1?",
"strict_mode": true
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether validation and compilation succeeded |
classification | object | Market type classification with confidence and rationale |
classification.market_type | string | Detected type: FINANCIAL_PRICE, TEMPERATURE, SPORTS_MATCH, BINARY_EVENT, etc. |
validation | object | Checks passed/failed with severity and suggestions |
resolvability | object | Score (0–100), level (LOW/MEDIUM/HIGH/VERY_HIGH), risk factors |
source_reachability | array | URL probe results — reachable, status_code, errors |
prompt_spec | PromptSpec | Compiled prompt specification (pass to /step/collect) |
tool_plan | ToolPlan | Tool execution plan (pass to /step/collect) |
errors | string[] | Non-fatal errors |
Response Example
{
"ok": true,
"classification": {
"market_type": "TEMPERATURE",
"confidence": 0.95,
"detection_rationale": "Contains 'temperature', city name, and date"
},
"validation": {
"checks_passed": ["U-02", "U-03", "TEMP-01"],
"checks_failed": [
{
"check_id": "TEMP-04",
"severity": "warning",
"message": "No fallback data source specified.",
"suggestion": "Add an alternative source if Wunderground is unavailable."
}
]
},
"resolvability": {
"score": 35,
"level": "MEDIUM",
"risk_factors": [
{ "factor": "Single data source with no fallback", "points": 30 }
]
},
"source_reachability": [
{ "url": "https://www.wunderground.com", "reachable": true, "status_code": 200, "error": null }
],
"prompt_spec": { "..." : "..." },
"tool_plan": { "..." : "..." },
"errors": []
}
- Risk levels: LOW (0–15) = auto-resolution OK, MEDIUM (16–35) = may have difficulty, HIGH (36–55) = high failure risk, VERY_HIGH (56+) = unlikely to resolve.
- The prompt_spec and tool_plan can be passed directly to /step/collect or /step/resolve.
- Source reachability detects Cloudflare blocks, paywalls, and timeouts on data source URLs.
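The score-to-level mapping above can be made explicit in client code. The thresholds below mirror the documented risk levels; `should_auto_resolve` is a hypothetical gating policy, not something the API prescribes:

```python
def resolvability_level(score: int) -> str:
    """Map a /validate resolvability score (0-100) to its documented risk level."""
    if score <= 15:
        return "LOW"        # auto-resolution OK
    if score <= 35:
        return "MEDIUM"     # may have difficulty
    if score <= 55:
        return "HIGH"       # high failure risk
    return "VERY_HIGH"      # unlikely to resolve

def should_auto_resolve(validation: dict) -> bool:
    """Example policy: only auto-resolve LOW/MEDIUM-risk markets (assumption)."""
    level = validation["resolvability"]["level"]
    return bool(validation["ok"]) and level in ("LOW", "MEDIUM")
```

For the example response above (score 35, level MEDIUM), this policy would still allow auto-resolution but flag the single-source risk factor for review.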
/dispute
Stateless dispute-driven rerun of audit/judge steps. Provide all context artifacts and a structured dispute request. In reasoning_only mode, reruns audit and judge with existing evidence. In full_rerun mode, re-collects evidence first. Returns updated artifacts with a before/after diff.
Request Parameters
| Field | Type | Description |
|---|---|---|
mode (optional) | "reasoning_only" \| "full_rerun" | reasoning_only reruns audit/judge only. full_rerun re-collects evidence first. |
reason_code (required) | enum | REASONING_ERROR, LOGIC_GAP, EVIDENCE_MISREAD, EVIDENCE_INSUFFICIENT, OTHER |
message (required) | string | Dispute message describing the issue (1–8000 chars) |
target (optional) | object | { artifact: "evidence_bundle" \| "reasoning_trace" \| "verdict" \| "prompt_spec", leaf_path?: string } |
prompt_spec (required) | PromptSpec | Full PromptSpec from the previous run |
evidence_bundle (optional) | object | EvidenceBundle from the previous run (for reasoning_only mode) |
reasoning_trace (optional) | object | ReasoningTrace from the previous run |
tool_plan (optional) | object | Required for full_rerun mode |
collectors (optional) | string[] | Required for full_rerun mode |
patch (optional) | object | { evidence_items_append?: array, prompt_spec_override?: object } |
Request Body
{
"mode": "reasoning_only",
"reason_code": "EVIDENCE_MISREAD",
"message": "The evidence was misinterpreted",
"target": {
"artifact": "evidence_bundle",
"leaf_path": "items[0].extracted_fields.outcome"
},
"prompt_spec": { ... },
"evidence_bundle": { ... },
"reasoning_trace": { ... }
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether the dispute rerun succeeded |
case_id | string \| null | Optional correlation ID |
rerun_plan | string[] | Steps that were rerun, e.g. ["audit", "judge"] |
artifacts | object | Updated artifacts: prompt_spec, evidence_bundle, evidence_bundles, reasoning_trace, verdict |
diff | object | { steps_rerun, verdict_changed } |
Response Example
{
"ok": true,
"case_id": null,
"rerun_plan": ["audit", "judge"],
"artifacts": {
"prompt_spec": { ... },
"evidence_bundle": { ... },
"evidence_bundles": [ ... ],
"reasoning_trace": { ... },
"verdict": { ... }
},
"diff": {
"steps_rerun": ["audit", "judge"],
"verdict_changed": null
}
}
/dispute/llm
Simplified dispute endpoint that accepts 3 user inputs (reason, message, optional URLs) and uses an LLM to translate them into a structured DisputeRequest, then delegates to the existing dispute logic. Returns the same response as POST /dispute. Context artifacts (prompt_spec, evidence_bundle, reasoning_trace) are attached automatically from the current case.
Request Parameters
| Field | Type | Description |
|---|---|---|
reason_code (required) | enum | EVIDENCE_MISREAD, EVIDENCE_INSUFFICIENT, REASONING_ERROR, LOGIC_GAP, OTHER |
message (required) | string | Free-text dispute message (1–4000 chars) |
evidence_urls (optional) | string[] | Up to 5 URLs to fetch as supporting evidence |
prompt_spec (required) | PromptSpec | Full PromptSpec from the previous run (auto-attached by frontend) |
evidence_bundle (optional) | object | EvidenceBundle from the previous run (auto-attached) |
reasoning_trace (optional) | object | ReasoningTrace from the previous run (auto-attached) |
tool_plan (optional) | object | Only needed if LLM decides full_rerun (auto-attached) |
collectors (optional) | string[] | Only needed if LLM decides full_rerun (auto-attached) |
Request Body
{
"reason_code": "EVIDENCE_MISREAD",
"message": "Wikipedia shows PM Shmyhal announced a preliminary agreement on Feb 25 2025",
"evidence_urls": [
"https://en.wikipedia.org/wiki/Ukraine_US_Mineral_Agreement"
],
"prompt_spec": { ... },
"evidence_bundle": { ... },
"reasoning_trace": { ... }
}
Response Fields
| Field | Type | Description |
|---|---|---|
ok | boolean | Whether the dispute rerun succeeded |
case_id | string \| null | Optional correlation ID |
rerun_plan | string[] | Steps that were rerun |
artifacts | object | Updated artifacts with new verdict |
diff | object | Before/after comparison |
Response Example
{
"ok": true,
"case_id": null,
"rerun_plan": ["audit", "judge"],
"artifacts": {
"prompt_spec": { ... },
"evidence_bundles": [ ... ],
"reasoning_trace": { ... },
"verdict": { "outcome": "YES", "confidence": 0.92, "..." : "..." }
},
"diff": {
"steps_rerun": ["audit", "judge"],
"verdict_changed": null
}
}
- The user only provides reason_code, message, and optional evidence_urls. All other context is auto-attached.
- The LLM decides whether to run reasoning_only or full_rerun based on the dispute content.
- Returns the same response shape as POST /dispute.
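Since only three fields are user-supplied, a client can validate them before submitting. This sketch enforces the documented constraints (reason_code enum, message 1–4000 chars, at most 5 URLs); the helper name `build_llm_dispute` is illustrative:

```python
VALID_REASONS = {"EVIDENCE_MISREAD", "EVIDENCE_INSUFFICIENT",
                 "REASONING_ERROR", "LOGIC_GAP", "OTHER"}

def build_llm_dispute(reason_code: str, message: str, evidence_urls=None) -> dict:
    """Assemble the user-supplied portion of a /dispute/llm request.

    Context artifacts (prompt_spec, evidence_bundle, reasoning_trace) are
    attached separately, e.g. by the frontend from the current case.
    """
    if reason_code not in VALID_REASONS:
        raise ValueError(f"unknown reason_code: {reason_code}")
    if not 1 <= len(message) <= 4000:
        raise ValueError("message must be 1-4000 chars")
    urls = list(evidence_urls or [])
    if len(urls) > 5:
        raise ValueError("at most 5 evidence_urls allowed")
    payload = {"reason_code": reason_code, "message": message}
    if urls:
        payload["evidence_urls"] = urls
    return payload
```

The returned dict is then merged with the auto-attached context artifacts and sent as the POST body.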
/capabilities
Returns available LLM providers, collector agents, and pipeline step definitions. Use this endpoint to discover which collectors and models are available before running the pipeline. No request body is required.
Request
curl -X POST 'https://interface.cournot.ai/play/polymarket/ai_data' \
-H 'Content-Type: application/json' \
-d '{
"code": "YOUR_CODE",
"post_data": "{}",
"path": "/capabilities",
"method": "GET"
}'
Response Fields
| Field | Type | Description |
|---|---|---|
providers | Provider[] | Array of available LLM backends with their default models |
steps | StepDef[] | Array of pipeline step definitions with available agents and capabilities |
Response Example
{
"providers": [
{ "provider": "openai", "default_model": "gpt-4o" },
{ "provider": "google", "default_model": "gemini-2.5-flash" },
{ "provider": "grok", "default_model": "grok-4-fast" }
],
"steps": [
{
"name": "prompt",
"agents": ["prompt-compiler-v2"],
"description": "Compile market question into structured specification"
},
...
]
}
Collectors are evidence-gathering agents that can be specified in the collectors parameter of /step/collect. They run in priority order (highest first).
Available Collectors
| Name | Priority | Description |
|---|---|---|
CollectorOpenSearch | 200 | Open web search collector |
CollectorCRP | 195 | CRP (Contextual Retrieval Pipeline) collector |
CollectorHyDE | 190 | HyDE (Hypothetical Document Embeddings) collector |
CollectorWebPageReader | 180 | Fetches and reads specific web pages for evidence |
CollectorSitePinned | 175 | Targets pinned/known source URLs from the prompt spec |
CollectorPAN | 170 | PAN (Parallel Augmented Navigation) collector |
CollectorAgenticRAG | 160 | Agentic RAG collector with autonomous retrieval |
CollectorGraphRAG | 150 | Graph-based RAG collector |
CollectorHTTP | 100 | Direct HTTP fetch collector |
Available LLM Providers
| Provider | Default Model |
|---|---|
openai | gpt-4o |
google | gemini-2.5-flash |
grok | grok-4-fast |
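The server already runs collectors in priority order, so client-side sorting is purely a convenience when choosing which names to pass in the collectors parameter. A small sketch, with the priority table above hard-coded (in practice, prefer fetching it live from /capabilities):

```python
# Collector priorities as listed in the Available Collectors table.
COLLECTOR_PRIORITIES = {
    "CollectorOpenSearch": 200,
    "CollectorCRP": 195,
    "CollectorHyDE": 190,
    "CollectorWebPageReader": 180,
    "CollectorSitePinned": 175,
    "CollectorPAN": 170,
    "CollectorAgenticRAG": 160,
    "CollectorGraphRAG": 150,
    "CollectorHTTP": 100,
}

def top_collectors(n: int) -> list:
    """Return the n highest-priority collector names, in execution order."""
    ranked = sorted(COLLECTOR_PRIORITIES,
                    key=COLLECTOR_PRIORITIES.get, reverse=True)
    return ranked[:n]
```

For example, `top_collectors(2)` selects CollectorOpenSearch and CollectorCRP, a reasonable default for open-web questions.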
Errors can occur at two levels: the gateway envelope and the step response. Always check both.
Gateway Error Codes
| Code | Meaning | Action |
|---|---|---|
0 | Success | Parse data.result as JSON |
4100 | Invalid access code | Verify your access code is correct and active |
!= 0 | Other API error | Read msg for error details |
Step-Level Errors
Each step response contains an ok boolean and an errors[] array. Even when the gateway returns code: 0, the step itself may have partially failed.
// Gateway succeeds but step reports errors
{
"code": 0,
"msg": "Success",
"data": {
"result": "{
\"ok\": false,
\"errors\": [\"Timeout fetching source_coinmarketcap\"],
\"evidence_bundles\": []
}"
}
}
// Recommended error handling pattern:
const gateway = await response.json();
if (gateway.code !== 0) {
throw new Error(`Gateway error: ${gateway.msg}`);
}
const step = JSON.parse(gateway.data.result);
if (!step.ok) {
console.warn("Step errors:", step.errors);
}
HTTP-Level Errors (Proxy)
When using the dashboard proxy (/api/proxy/...), additional HTTP errors may occur.
| HTTP Status | Meaning |
|---|---|
502 | Upstream request failed (connection error) |
504 | Upstream request timed out (GET: 30s, POST: 300s) |
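Since 502 and 504 indicate transient upstream failures, they are reasonable candidates for retry with backoff. A minimal sketch, assuming the caller wraps the actual HTTP request in a zero-argument callable returning (status_code, body); the helper name is illustrative:

```python
import time

def call_with_retries(do_request, retries=3, backoff=2.0,
                      retry_statuses=(502, 504)):
    """Retry a request on transient proxy errors (502/504) with exponential backoff.

    `do_request` is any zero-argument callable returning (status_code, body).
    Returns the last (status_code, body) observed.
    """
    delay = backoff
    status, body = None, None
    for attempt in range(retries):
        status, body = do_request()
        if status not in retry_statuses:
            return status, body
        if attempt < retries - 1:
            time.sleep(delay)
            delay *= 2
    return status, body
```

Note that POST pipeline steps can legitimately run for minutes (the proxy allows 300s), so retries should only wrap the 502/504 failure cases, not slow-but-successful responses.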
Run the full pipeline sequentially (prompt, collect, quality check, audit, judge, bundle) to produce a complete Proof-of-Reasoning bundle. Copy this script and replace YOUR_CODE with your access code.
import json, requests
BASE = "https://interface.cournot.ai/play/polymarket/ai_data"
CODE = "YOUR_CODE"
def call(path: str, payload: dict, method: str = "POST") -> dict:
"""Wrap payload in the gateway envelope and call the API."""
resp = requests.post(BASE, json={
"code": CODE,
"post_data": json.dumps(payload),
"path": path,
"method": method,
}, timeout=300)
resp.raise_for_status()
body = resp.json()
if body["code"] != 0:
raise RuntimeError(body.get("msg", "API error"))
return json.loads(body["data"]["result"])
# ── Step 1: Prompt ──────────────────────────────────────────
prompt = call("/step/prompt", {
"user_input": "Will Bitcoin exceed 100k by March 2025?",
"strict_mode": False,
})
spec = prompt["prompt_spec"]
plan = prompt["tool_plan"]
temporal = (spec.get("extra") or {}).get("temporal_constraint")
print(f"[1/6] Prompt compiled: {prompt['market_id']}")
if temporal:
print(f" Temporal guard: {temporal['reason']}")
# ── Step 2: Collect ─────────────────────────────────────────
collect = call("/step/collect", {
"prompt_spec": spec,
"tool_plan": plan,
"collectors": ["CollectorWebPageReader"],
"include_raw_content": False,
})
bundles = collect["evidence_bundles"]
print(f"[2/6] Collected {len(bundles)} evidence bundle(s)")
# ── Step 2.5: Quality Check + Retry ─────────────────────────
quality_scorecard = None
MAX_RETRIES = 2
for i in range(MAX_RETRIES):
qc = call("/step/quality_check", {
"prompt_spec": spec,
"evidence_bundles": bundles,
})
if not qc.get("ok"):
break
quality_scorecard = qc.get("scorecard")
if qc.get("meets_threshold"):
break
hints = (quality_scorecard or {}).get("retry_hints", {})
if not hints:
break
retry = call("/step/collect", {
"prompt_spec": spec,
"tool_plan": plan,
"collectors": ["CollectorOpenSearch"],
"quality_feedback": hints,
})
if retry.get("ok") is not False:
bundles += retry.get("evidence_bundles", [])
level = (quality_scorecard or {}).get("quality_level", "N/A")
print(f"[2.5/6] Quality: {level}")
# ── Step 3: Audit ───────────────────────────────────────────
audit_payload = {
"prompt_spec": spec,
"evidence_bundles": bundles,
}
if quality_scorecard:
audit_payload["quality_scorecard"] = quality_scorecard
if temporal:
audit_payload["temporal_constraint"] = temporal
audit = call("/step/audit", audit_payload)
trace = audit["reasoning_trace"]
print(f"[3/6] Audit: {trace['preliminary_outcome']} "
f"({trace['preliminary_confidence']:.0%})")
# ── Step 4: Judge ───────────────────────────────────────────
judge_payload = {
"prompt_spec": spec,
"evidence_bundles": bundles,
"reasoning_trace": trace,
}
if quality_scorecard:
judge_payload["quality_scorecard"] = quality_scorecard
if temporal:
judge_payload["temporal_constraint"] = temporal
judge = call("/step/judge", judge_payload)
verdict = judge["verdict"]
print(f"[4/6] Verdict: {judge['outcome']} ({judge['confidence']:.0%})")
# ── Step 5: Bundle ──────────────────────────────────────────
bundle = call("/step/bundle", {
"prompt_spec": spec,
"evidence_bundles": bundles,
"reasoning_trace": trace,
"verdict": verdict,
})
print(f"[5/6] PoR Root: {bundle['por_root']}")
# ── Summary ─────────────────────────────────────────────────
print(f"\nOutcome: {judge['outcome']}")
print(f"Confidence: {judge['confidence']:.0%}")
print(f"PoR Root: {bundle['por_root']}")
