Cournot

API Reference

Complete reference for the Cournot API. This page covers two categories of endpoints: the Public Data API for querying stored market data (no authentication required), and the Proof-of-Reasoning Pipeline API for resolving prediction markets through AI-driven evidence collection and cryptographic verification.

Public Data API

These endpoints serve stored market data directly from the Cournot platform. They require no authentication and are designed for third-party integrations — partners can use them to fetch market details, track resolution status, and display market information in their own applications.

Market Info
GET
/markets/id?id={market_id}

Returns full details for a given market stored on the Cournot platform, including current status, classification, external data, and AI resolution results (if resolved). This is a direct data endpoint — it reads from Cournot's database, not the AI Oracle.

Request Parameters

• id (integer, required): The market ID to look up

Request

http
# No request body — pass id as a query parameter
GET https://interface.cournot.ai/play/polymarket/markets/id?id=185557

cURL Example

bash
curl "https://interface.cournot.ai/play/polymarket/markets/id?id=185557"

Response Fields

• code (integer): Response status code. 0 = success.
• msg (string): Status message, e.g. "Success"
• data.market (Market): Full market object with all fields
• data.market.id (integer): Unique market identifier
• data.market.title (string): Market question title
• data.market.description (string): Resolution criteria (may contain HTML)
• data.market.platform_url (string): Link to the market on the source platform
• data.market.source (string): Source platform: "polymarket", "kalshi", "limitless", "myriad", "predictfun"
• data.market.status (string): Market status: "monitoring", "pending_verification", or "resolved"
• data.market.market_timing_type (string): "event_based" or "time_based"
• data.market.start_time (string): Market start time (ISO 8601)
• data.market.end_time (string): Market end time (ISO 8601)
• data.market.ai_prompt (object): Compiled AI prompt specification used for resolution
• data.market.ai_result (string): AI resolution result text (empty if not yet resolved)
• data.market.ai_outcome (string): AI outcome: "YES", "NO", "INVALID", or empty
• data.external_data (ExternalData[]): Array of external data sources collected for this market
• data.classification (Classification): Market category, subcategory, and competition

Response Example

json
{
  "code": 0,
  "msg": "Success",
  "data": {
    "market": {
      "id": 185557,
      "title": "Will Denmark use a 4-3-3 starting lineup?",
      "description": "<resolution criteria...>",
      "platform_url": "https://polymarket.com/event/...",
      "source": "polymarket",
      "status": "monitoring",
      "market_timing_type": "time_based",
      "start_time": "2025-03-20T00:00:00Z",
      "end_time": "2026-03-26T21:00:00Z",
      "ai_prompt": { "..." : "..." },
      "ai_result": "",
      "ai_outcome": ""
    },
    "external_data": [
      {
        "collector": "espn_soccer",
        "data": {
          "event": "Denmark vs North Macedonia",
          "status": "pre",
          "venue": "Parken Stadium"
        }
      }
    ],
    "classification": {
      "category": "sports",
      "subcategory": "football",
      "competition": "FIFA World Cup Qualifiers"
    }
  }
}
Notes
  • No authentication required — this is a public endpoint for third-party integrations.
  • This endpoint returns stored data from the Cournot platform, not live AI Oracle results.
  • The ai_result and ai_outcome fields are empty strings until the market is resolved.
  • The description field may contain HTML markup for resolution criteria.
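Since the endpoint is public, a client is just an HTTP GET plus unwrapping the code/msg/data envelope. A minimal Python sketch (the helper names are illustrative, not part of any official SDK):

```python
import json
import urllib.request

BASE_URL = "https://interface.cournot.ai/play/polymarket"

def market_url(market_id):
    """Build the Market Info URL (the id goes in the query string)."""
    return f"{BASE_URL}/markets/id?id={market_id}"

def unwrap(body):
    """Raise on a non-zero code; otherwise return the data payload."""
    if body.get("code") != 0:
        raise RuntimeError(f"API error {body.get('code')}: {body.get('msg')}")
    return body["data"]

def get_market(market_id):
    """Fetch a stored market and return its data payload."""
    with urllib.request.urlopen(market_url(market_id), timeout=30) as resp:
        return unwrap(json.load(resp))

# Usage (makes a network call):
# market = get_market(185557)["market"]
# resolved = market["ai_outcome"] != ""  # empty string until resolved
```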
Market Disputes
GET
/markets/disputes?code={code}&market_id={market_id}&dispute_type=manual&page_num=1&page_size=10

Returns the dispute history for a given market. Disputes are challenges submitted by admins against a market's AI resolution, each containing the original and proposed outcomes with reasoning.

Request Parameters

• code (string, required): Admin access code for authentication
• market_id (integer, required): The market ID to fetch disputes for
• dispute_type (string, required): Type of dispute to fetch. Currently only "manual" is supported
• page_num (integer, optional): Page number for pagination (default: 1)
• page_size (integer, optional): Number of disputes per page (default: 10)

Request

http
# Pass parameters as query strings
GET https://interface.cournot.ai/play/polymarket/markets/disputes?code={code}&market_id=185557&dispute_type=manual&page_num=1&page_size=10

cURL Example

bash
curl "https://interface.cournot.ai/play/polymarket/markets/disputes?code={code}&market_id=185557&dispute_type=manual&page_num=1&page_size=10"

Response Fields

• code (integer): Response status code. 0 = success.
• msg (string): Status message, e.g. "Success"
• data.disputes (Dispute[]): Array of dispute objects for this market
• data.disputes[].id (integer): Unique dispute identifier
• data.disputes[].market_id (integer): Associated market ID
• data.disputes[].dispute_type (string): Dispute type, e.g. "manual"
• data.disputes[].previous_ai_result (string): The AI result before the dispute
• data.disputes[].previous_ai_outcome (string): The AI outcome before the dispute (YES/NO/INVALID)
• data.disputes[].proposed_ai_result (string): The proposed replacement AI result
• data.disputes[].proposed_ai_outcome (string): The proposed replacement outcome
• data.disputes[].reason (string): Reason provided for the dispute
• data.disputes[].submitted_by (string): Name of the admin who submitted the dispute
• data.disputes[].is_accepted (boolean): Whether the dispute has been accepted
• data.disputes[].accepted_by (string): Name of the admin who accepted the dispute (empty if not accepted)
• data.disputes[].accepted_time (string): When the dispute was accepted (ISO 8601, empty if not accepted)
• data.disputes[].created_time (string): When the dispute was created (ISO 8601)
• data.total (integer): Total number of disputes matching the query

Response Example

json
{
  "code": 0,
  "msg": "Success",
  "data": {
    "disputes": [
      {
        "id": 42,
        "market_id": 185557,
        "dispute_type": "manual",
        "previous_ai_result": "{\"outcome\":\"YES\",\"confidence\":0.92,...}",
        "previous_ai_outcome": "YES",
        "proposed_ai_result": "{\"outcome\":\"NO\",\"confidence\":0.85,...}",
        "proposed_ai_outcome": "NO",
        "reason": "New evidence shows the event was cancelled",
        "submitted_by": "admin_1",
        "is_accepted": true,
        "accepted_by": "admin_2",
        "accepted_time": "2025-06-15T14:30:00Z",
        "created_time": "2025-06-15T12:00:00Z"
      }
    ],
    "total": 1
  }
}
Notes
  • Requires a valid admin access code.
  • Only dispute_type "manual" is currently supported.
  • Disputes are returned in reverse chronological order (newest first).
  • The previous_ai_result and proposed_ai_result fields contain JSON-encoded AI result objects.
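Because every parameter travels in the query string, URL construction is the only client-side work. A Python sketch (the helper name and the placeholder admin code are illustrative):

```python
from urllib.parse import urlencode

BASE_URL = "https://interface.cournot.ai/play/polymarket"

def disputes_url(code, market_id, page_num=1, page_size=10):
    """Build the disputes query; only "manual" disputes are supported today."""
    params = {
        "code": code,            # admin access code
        "market_id": market_id,
        "dispute_type": "manual",
        "page_num": page_num,
        "page_size": page_size,
    }
    return f"{BASE_URL}/markets/disputes?{urlencode(params)}"
```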

PoR Pipeline API

The Proof-of-Reasoning pipeline resolves prediction market questions in five sequential steps, producing a cryptographically verifiable PoR bundle. These endpoints call the AI Oracle directly and require authentication via an access code. All requests use a gateway envelope format.

Backend Gateway

All API requests are made to a single gateway endpoint. The gateway accepts a standardized envelope and routes to internal pipeline paths. Responses are also wrapped in a gateway envelope.

Base Endpoint

POST
https://interface.cournot.ai/play/polymarket/ai_data

Request Envelope

Every request must be wrapped in this envelope. The actual payload goes inside post_data as a JSON-stringified string.

json
{
  "code":      "<YOUR_ACCESS_CODE>",
  "post_data": "<STRINGIFIED_JSON_PAYLOAD>",
  "path":      "<INTERNAL_PATH>",
  "method":    "<HTTP_METHOD>"
}

Envelope Fields

• code (string, required): Your API access code for authentication
• post_data (string, required): JSON-stringified request payload. Use "{}" for GET requests with no body.
• path (string, required): Internal pipeline path, e.g. "/step/prompt" or "/capabilities"
• method (string, required): HTTP method for the internal route: "POST" or "GET"

Gateway Response Envelope

All responses are wrapped in this envelope. A successful call returns code: 0. The step-level response is JSON-stringified inside data.result and must be parsed separately.

json
{
  "code": 0,
  "msg":  "Success",
  "data": {
    "result": "<STRINGIFIED_STEP_RESPONSE>"
  }
}
Authentication

All requests require a valid access code passed in the code field of the gateway envelope. If the code is invalid or missing, the gateway returns error code 4100.

json
// Valid request
{
  "code": "YOUR_ACCESS_CODE",  // Required in every request
  "post_data": "{}",
  "path": "/capabilities",
  "method": "GET"
}

// Error response for invalid code
{
  "code": 4100,
  "msg": "Invalid access code",
  "data": null
}
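Both directions of the envelope involve double JSON encoding: the payload is stringified into post_data on the way in, and the step response comes back stringified inside data.result on the way out. A Python sketch of the round-trip (helper names are illustrative):

```python
import json

def make_envelope(access_code, path, payload=None, method="POST"):
    """Wrap a step payload in the gateway request envelope."""
    return {
        "code": access_code,
        "post_data": json.dumps(payload) if payload is not None else "{}",
        "path": path,
        "method": method,
    }

def parse_gateway_response(body):
    """Unwrap the gateway envelope, then decode the stringified step response."""
    if body["code"] != 0:
        raise RuntimeError(f"Gateway error {body['code']}: {body['msg']}")
    return json.loads(body["data"]["result"])  # second decode: step-level JSON

# Usage:
# env = make_envelope("YOUR_ACCESS_CODE", "/capabilities", method="GET")
# POST env to https://interface.cournot.ai/play/polymarket/ai_data, then
# parse_gateway_response(response_json) yields the step-level dict.
```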
Pipeline Overview

The PoR pipeline consists of five sequential steps. Each step produces artifacts consumed by subsequent steps. Steps must be called in order.

Method  Path                  Description
POST    /step/prompt          Compile question into prompt spec + tool plan
POST    /step/collect         Gather evidence from external sources
POST    /step/quality_check   Evaluate evidence quality, produce retry hints
POST    /step/audit           Produce reasoning trace from evidence
POST    /step/judge           Determine outcome and confidence
POST    /step/bundle          Build cryptographic PoR bundle
POST    /step/resolve         Run full pipeline in a single call
POST    /validate             Validate and compile a market query
POST    /dispute              Structured dispute-driven rerun
POST    /dispute/llm          LLM-assisted dispute from 3 user inputs
GET     /capabilities         List available providers, collectors, and steps

For GET requests, set post_data to "{}" in the envelope.

1
Prompt Compilation
POST
/step/prompt

Compiles a natural-language market question into a structured prompt specification and tool plan. This is the entry point of the pipeline. The prompt spec defines the resolution rules, allowed sources, and prediction semantics. The tool plan specifies which data requirements need to be fulfilled and from which sources. When the query involves a scheduled event, the LLM compiler auto-detects a temporal constraint and includes it in prompt_spec.extra.temporal_constraint — extract and pass this to /step/audit and /step/judge.

Request Parameters

• user_input (string, required): The natural-language market question to resolve
• strict_mode (boolean, optional): When true, only official sources are allowed. Defaults to false.
• llm_provider (string, optional): LLM provider override (e.g. "openai", "anthropic", "google")
• llm_model (string, optional): LLM model override (e.g. "gpt-4o")

Request Body

json
{
  "user_input": "Will Bitcoin exceed 100k by March 2025?",
  "strict_mode": false
}

cURL Example

bash
curl -X POST 'https://interface.cournot.ai/play/polymarket/ai_data' \
  -H 'Content-Type: application/json' \
  --max-time 300 \
  -d '{
    "code": "YOUR_CODE",
    "post_data": "{\"user_input\": \"Will Bitcoin exceed 100k by March 2025?\", \"strict_mode\": false}",
    "path": "/step/prompt",
    "method": "POST"
  }'

Response Fields

• ok (boolean): Whether the step succeeded
• market_id (string): Generated market identifier, e.g. "mk_3f5b9c7e"
• prompt_spec (PromptSpec): Structured specification including market definition, resolution rules, and data requirements
• prompt_spec.extra.temporal_constraint (object | null): Auto-detected temporal constraint with { enabled, event_time, reason }. Extract and pass to /step/audit and /step/judge.
• tool_plan (ToolPlan): Execution plan referencing requirements and sources to query
• metadata (object): Compiler info, strict_mode flag, requirement count
• error (string | null): Error message if compilation failed

Response Example

json
{
  "ok": true,
  "market_id": "mk_3f5b9c7e",
  "prompt_spec": {
    "schema_version": "v1",
    "task_type": "prediction_resolution",
    "market": {
      "market_id": "mk_3f5b9c7e",
      "question": "Will Bitcoin exceed 100k by March 2025?",
      "market_type": "binary",
      "possible_outcomes": ["YES", "NO"],
      "resolution_rules": {
        "rules": [
          { "rule_id": "r1", "description": "...", "priority": 1 }
        ]
      }
    },
    "prediction_semantics": {
      "target_entity": "Bitcoin",
      "predicate": "exceeds",
      "threshold": "100000 USD",
      "timeframe": "by March 2025"
    },
    "data_requirements": [
      {
        "requirement_id": "req_01",
        "description": "Current BTC price data",
        "source_targets": [...]
      }
    ]
  },
  "tool_plan": {
    "plan_id": "plan_abc123",
    "requirements": ["req_01"],
    "sources": ["source_coinmarketcap"],
    "min_provenance_tier": 1,
    "allow_fallbacks": true
  },
  "metadata": {
    "compiler": "prompt-compiler-v2",
    "strict_mode": false,
    "requirement_count": 3
  },
  "error": null
}

Schema Detail

PromptSpec {
  schema_version      "v1"
  task_type           "prediction_resolution"
  market {
    market_id, question, event_definition, timezone
    resolution_deadline, resolution_window { start, end }
    resolution_rules.rules[] { rule_id, description, priority }
    allowed_sources[]  { source_id, kind, allow, min_provenance_tier }
    market_type        "binary"
    possible_outcomes  ["YES", "NO"]
  }
  prediction_semantics { target_entity, predicate, threshold, timeframe }
  data_requirements[] {
    requirement_id, description
    source_targets[] { source_id, uri, method, expected_content_type }
    selection_policy  { strategy, min_sources, max_sources, quorum }
  }
  extra {
    strict_mode, compiler, assumptions[]
    confidence_policy { min_confidence_for_yesno, default_confidence }
    temporal_constraint? {       // auto-detected for scheduled events
      enabled        true
      event_time     ISO 8601 UTC
      reason         string
    }
  }
}

ToolPlan {
  plan_id, requirements[], sources[]
  min_provenance_tier, allow_fallbacks
}
Notes
  • This step has no dependencies and can be called directly with just user_input.
  • The generated prompt_spec and tool_plan are passed to all subsequent steps.
  • strict_mode constrains the pipeline to only use official/authoritative sources.
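The extract-and-forward handling of the temporal constraint can be sketched as a small helper (assuming the prompt_spec shape shown above; the function name is illustrative):

```python
def extract_temporal_constraint(prompt_spec):
    """Return prompt_spec.extra.temporal_constraint if present and enabled."""
    constraint = (prompt_spec.get("extra") or {}).get("temporal_constraint")
    if constraint and constraint.get("enabled"):
        return constraint
    return None

# Usage: include the returned object (if any) as "temporal_constraint"
# in the /step/audit and /step/judge request payloads.
```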
2
Evidence Collection
POST
/step/collect

Runs the configured collector agents to gather evidence bundles from external sources. Each collector queries different source types (web search, APIs, databases) and returns structured evidence items with provenance metadata. Multiple collectors can run in parallel. When retrying after a quality check failure, pass the quality_feedback field with retry hints to adjust search strategy.

Request Parameters

• prompt_spec (PromptSpec, required): Structured prompt specification from Step 1
• tool_plan (ToolPlan, required): Execution plan from Step 1
• collectors (string[], optional): List of collector names to run. See /capabilities for available collectors.
• include_raw_content (boolean, optional): Whether to include raw fetched content in the response. Defaults to false.
• quality_feedback (object, optional): Retry hints from /step/quality_check scorecard.retry_hints. Adjusts search queries, domains, and focus areas.
• llm_provider (string, optional): LLM provider override
• llm_model (string, optional): LLM model override

Request Body

json
{
  "prompt_spec": { ... },
  "tool_plan":   { ... },
  "collectors":  ["CollectorWebPageReader", "CollectorOpenSearch"],
  "include_raw_content": false
}

Response Fields

• ok (boolean): Whether collection completed successfully
• collectors_used (string[]): Names of collectors that actually ran
• evidence_bundles (EvidenceBundle[]): Array of evidence bundles, one per collector
• execution_logs (ExecutionLog[]): Per-call timing, tool name, input/output, and errors
• errors (string[]): Non-fatal errors encountered during collection

Response Example

json
{
  "ok": true,
  "collectors_used": ["CollectorWebPageReader", "CollectorOpenSearch"],
  "evidence_bundles": [
    {
      "bundle_id": "eb_a1b2c3",
      "market_id": "mk_3f5b9c7e",
      "collector_name": "CollectorWebPageReader",
      "weight": 1.0,
      "items": [
        {
          "evidence_id": "ev_abc123",
          "source_uri": "https://...",
          "source_name": "CoinMarketCap",
          "tier": 1,
          "fetched_at": "2025-01-15T10:30:00Z",
          "content_hash": "sha256:a1b2c3...",
          "parsed_excerpt": "Bitcoin is currently trading at ...",
          "success": true,
          "extracted_fields": {
            "confidence": 0.85,
            "resolution_status": "OPEN"
          }
        }
      ],
      "collected_at": "2025-01-15T10:30:05Z",
      "execution_time_ms": 4523
    }
  ],
  "execution_logs": [
    {
      "plan_id": "plan_abc123",
      "calls": [
        {
          "tool": "CollectorWebPageReader",
          "started_at": "...",
          "ended_at": "...",
          "error": null
        }
      ]
    }
  ],
  "errors": []
}

Schema Detail

EvidenceBundle {
  bundle_id, market_id, plan_id, collector_name
  weight             number (default 1.0)
  items[] {
    evidence_id      hex hash
    requirement_id   links back to data_requirements
    provenance {
      source_id, source_uri, tier, fetched_at
      content_hash   SHA-256 of raw content
      cache_hit      boolean
    }
    content_type, parsed_value, success, error
    extracted_fields { ... }
  }
  collected_at, execution_time_ms
  requirements_fulfilled[], requirements_unfulfilled[]
}

ExecutionLog {
  plan_id
  calls[] { tool, input, output, started_at, ended_at, error }
  started_at, ended_at
}
Notes
  • If collectors is omitted, the pipeline uses a default set based on the tool plan.
  • Collectors run in priority order (highest first). Higher priority collectors are considered more reliable.
  • Evidence items include provenance metadata (source_uri, tier, content_hash) for auditability.
  • Set include_raw_content: true to receive the full fetched content (increases response size significantly).
Evidence Quality Check
POST
/step/quality_check

Evaluates collected evidence quality before proceeding to audit. Returns a scorecard with quality signals and machine-readable retry hints. If quality is below threshold, retry /step/collect with the retry_hints as quality_feedback. This step is optional but recommended — if you skip it, audit and judge still work.

Request Parameters

• prompt_spec (PromptSpec, required): Compiled prompt specification from Step 1
• evidence_bundles (EvidenceBundle[], required): Evidence bundles from Step 2

Request Body

json
{
  "prompt_spec":      { ... },
  "evidence_bundles": [ ... ]
}

Response Fields

• ok (boolean): Whether the quality check ran successfully
• scorecard (QualityScorecard | null): Quality scorecard with metrics, flags, and retry hints
• scorecard.source_match ("FULL" | "PARTIAL" | "NONE"): How well evidence sources match required domains
• scorecard.data_type_match (boolean): Whether evidence data types match what was requested
• scorecard.collector_agreement ("AGREE" | "DISAGREE" | "SINGLE"): Whether multiple collectors agree on the outcome
• scorecard.requirements_coverage (number, 0-1): Fraction of data requirements covered by evidence
• scorecard.quality_level ("HIGH" | "MEDIUM" | "LOW"): Overall quality assessment
• scorecard.quality_flags (string[]): Issue flags like "source_mismatch", "requirements_gap"
• scorecard.meets_threshold (boolean): true = proceed to audit, false = consider retrying
• scorecard.recommendations (string[]): Human-readable improvement suggestions
• scorecard.retry_hints (object): Machine-readable hints to pass as quality_feedback to /step/collect
• meets_threshold (boolean): Top-level convenience duplicate of scorecard.meets_threshold
• errors (string[]): Non-fatal errors

Response Example

json
{
  "ok": true,
  "scorecard": {
    "source_match": "PARTIAL",
    "data_type_match": true,
    "collector_agreement": "AGREE",
    "requirements_coverage": 0.65,
    "quality_level": "MEDIUM",
    "quality_flags": ["source_mismatch"],
    "meets_threshold": false,
    "recommendations": [
      "Try broader search terms for requirement req_001"
    ],
    "retry_hints": {
      "search_queries": ["bitcoin price 2025 prediction"],
      "required_domains": ["coinmarketcap.com"],
      "skip_domains": [],
      "data_type_hint": null,
      "focus_requirements": ["req_001"],
      "collector_guidance": "Focus on price data sources"
    }
  },
  "meets_threshold": false,
  "errors": []
}
Notes
  • Call after /step/collect. If meets_threshold is false and retry_hints is non-empty, retry /step/collect with quality_feedback set to retry_hints.
  • Retry up to 2 times. Merge new evidence bundles with existing ones.
  • Pass the scorecard to /step/audit and /step/judge as quality_scorecard so they are aware of evidence quality issues.
  • This step is optional — audit and judge work without it; you just lose the quality feedback loop.
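The retry loop described in these notes can be sketched as follows. Here collect and quality_check stand in for gateway calls to /step/collect and /step/quality_check, so their Python signatures are assumptions made for illustration:

```python
def collect_with_quality_loop(prompt_spec, tool_plan, collect, quality_check,
                              max_retries=2):
    """Collect evidence, re-collecting with retry hints until quality passes."""
    bundles = collect(prompt_spec, tool_plan, feedback=None)
    scorecard = {}
    for attempt in range(max_retries + 1):
        result = quality_check(prompt_spec, bundles)
        scorecard = result.get("scorecard") or {}
        hints = scorecard.get("retry_hints")
        if result.get("meets_threshold") or not hints or attempt == max_retries:
            break
        # Retry with hints as quality_feedback; merge new bundles with old.
        bundles = bundles + collect(prompt_spec, tool_plan, feedback=hints)
    return bundles, scorecard
```

Pass the returned scorecard on to /step/audit and /step/judge as quality_scorecard.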
3
Evidence Audit
POST
/step/audit

Analyzes collected evidence against the prompt specification to produce a structured reasoning trace. The audit step evaluates each piece of evidence, identifies conflicts, builds reasoning chains, and produces a preliminary outcome assessment with confidence score. When temporal_constraint is provided, the auditor computes temporal status (FUTURE/ACTIVE/PAST) and may force INVALID for future events.

Request Parameters

• prompt_spec (PromptSpec, required): Structured prompt specification from Step 1
• evidence_bundles (EvidenceBundle[], required): Evidence bundles from Step 2
• quality_scorecard (object | null, optional): Quality scorecard from /step/quality_check. Informs auditor about evidence quality issues.
• temporal_constraint (object | null, optional): From prompt_spec.extra.temporal_constraint. Enables temporal guard (FUTURE/ACTIVE/PAST status).
• llm_provider (string, optional): LLM provider override
• llm_model (string, optional): LLM model override

Request Body

json
{
  "prompt_spec":          { ... },
  "evidence_bundles":     [ ... ],
  "quality_scorecard":    { ... },
  "temporal_constraint":  {
    "enabled": true,
    "event_time": "2027-05-31T00:00:00Z",
    "reason": "Champions League final"
  }
}

Response Fields

• ok (boolean): Whether audit succeeded
• reasoning_trace (ReasoningTrace): Structured trace with reasoning steps, conflict detection, and preliminary outcome
• errors (string[]): Errors encountered during reasoning

Response Example

json
{
  "ok": true,
  "reasoning_trace": {
    "trace_id": "tr_xyz789",
    "market_id": "mk_3f5b9c7e",
    "steps": [
      {
        "step_id": "s1",
        "step_type": "evidence_evaluation",
        "description": "Evaluate BTC price data from CoinMarketCap",
        "evidence_refs": [
          {
            "evidence_id": "ev_abc123",
            "source_id": "source_coinmarketcap",
            "field_used": "current_price",
            "value_at_reference": "97500"
          }
        ],
        "conclusion": "BTC is currently at $97,500, close to but below the $100k threshold",
        "confidence_delta": 0.1
      }
    ],
    "conflicts": [],
    "evidence_summary": "Multiple sources confirm BTC near $97.5k ...",
    "reasoning_summary": "Based on current price trajectory ...",
    "preliminary_outcome": "YES",
    "preliminary_confidence": 0.72,
    "recommended_rule_id": "r1"
  },
  "errors": []
}

Schema Detail

ReasoningTrace {
  trace_id, market_id, bundle_id
  steps[] {
    step_id, step_type, description
    evidence_refs[] {
      evidence_id, requirement_id, source_id
      field_used, value_at_reference
    }
    rule_id, input_summary, output_summary
    conclusion, confidence_delta
    depends_on[], metadata
  }
  conflicts[]
  evidence_summary       human-readable text
  reasoning_summary      human-readable text
  preliminary_outcome    "YES" | "NO" | "INVALID"
  preliminary_confidence number (0-1)
  recommended_rule_id
}
Notes
  • The preliminary_outcome here is advisory. The final outcome is determined by the Judge step.
  • The reasoning trace captures step-by-step logic that can be independently verified.
  • Conflicts between evidence sources are explicitly identified and recorded.
4
Judgment
POST
/step/judge

Applies resolution rules to the evidence and reasoning trace to produce a final verdict. The judge evaluates the reasoning validity, applies confidence adjustments, and determines the definitive outcome. Includes an independent LLM review of the reasoning process. When temporal_constraint is provided, computes temporal status and may force INVALID for future or active events.

Request Parameters

• prompt_spec (PromptSpec, required): Structured prompt specification from Step 1
• evidence_bundles (EvidenceBundle[], required): Evidence bundles from Step 2
• reasoning_trace (ReasoningTrace, required): Reasoning trace from Step 3
• quality_scorecard (object | null, optional): Quality scorecard from /step/quality_check. Informs judge about evidence quality issues.
• temporal_constraint (object | null, optional): From prompt_spec.extra.temporal_constraint. Enables temporal guard (FUTURE/ACTIVE/PAST status).
• llm_provider (string, optional): LLM provider override
• llm_model (string, optional): LLM model override

Request Body

json
{
  "prompt_spec":         { ... },
  "evidence_bundles":    [ ... ],
  "reasoning_trace":     { ... },
  "quality_scorecard":   { ... },
  "temporal_constraint": {
    "enabled": true,
    "event_time": "2027-05-31T00:00:00Z",
    "reason": "Champions League final"
  }
}

Response Fields

• ok (boolean): Whether judgment succeeded
• verdict (Verdict): Full verdict object with cryptographic hashes and LLM review
• outcome (string): Final outcome: "YES", "NO", or "INVALID"
• confidence (number): Final confidence score between 0 and 1
• errors (string[]): Errors encountered during judgment

Response Example

json
{
  "ok": true,
  "outcome": "YES",
  "confidence": 0.78,
  "verdict": {
    "market_id": "mk_3f5b9c7e",
    "outcome": "YES",
    "confidence": 0.78,
    "resolution_time": "2025-01-15T10:31:00Z",
    "resolution_rule_id": "r1",
    "prompt_spec_hash": "0xabc123...",
    "evidence_root": "0xdef456...",
    "reasoning_root": "0x789ghi...",
    "justification_hash": "0xjkl012...",
    "selected_leaf_refs": ["ev_abc123", "ev_def456"],
    "metadata": {
      "strict_mode": false,
      "trace_id": "tr_xyz789",
      "justification": "Evidence strongly suggests BTC will exceed $100k ...",
      "llm_review": {
        "outcome": "YES",
        "confidence": 0.78,
        "reasoning_valid": true,
        "reasoning_issues": [],
        "confidence_adjustments": [],
        "final_justification": "The reasoning chain is sound ..."
      }
    }
  },
  "errors": []
}

Schema Detail

Verdict {
  market_id, outcome, confidence
  resolution_time, resolution_rule_id
  prompt_spec_hash     hex hash
  evidence_root        hex Merkle root
  reasoning_root       hex Merkle root
  justification_hash   hex hash
  selected_leaf_refs[] evidence_id references
  metadata {
    strict_mode, trace_id, bundle_id
    bundle_count, step_count, conflict_count
    justification        human-readable string
    llm_review {
      outcome, confidence, resolution_rule_id
      reasoning_valid    boolean
      reasoning_issues[], confidence_adjustments[]
      final_justification
    }
  }
}
Notes
  • The outcome is one of: "YES", "NO", or "INVALID" (when evidence is insufficient or temporal guard triggers).
  • Confidence ranges from 0 to 1. Values below the configured min_confidence_for_yesno threshold result in "INVALID".
  • The verdict includes Merkle root hashes for cryptographic verification of the full reasoning chain.
  • The llm_review provides an independent assessment of whether the reasoning is sound.
  • Temporal status: FUTURE (event_time > now) → INVALID; ACTIVE (now - event_time < 24h) → INVALID unless concluded; PAST (≥24h) → normal resolution.
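The temporal-status rule in the last note can be sketched as a pure function (the 24-hour ACTIVE window follows the note; the exact server-side semantics are an assumption):

```python
from datetime import datetime, timedelta, timezone

def temporal_status(event_time_iso, now=None):
    """Classify an event time as FUTURE, ACTIVE, or PAST relative to now."""
    event_time = datetime.fromisoformat(event_time_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    if event_time > now:
        return "FUTURE"  # judge forces INVALID
    if now - event_time < timedelta(hours=24):
        return "ACTIVE"  # INVALID unless the event has concluded
    return "PAST"        # normal resolution
```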
5
Proof-of-Reasoning Bundle
POST
/step/bundle

Hashes all pipeline artifacts into a cryptographic Proof-of-Reasoning (PoR) bundle with Merkle roots. This is the final step that produces a tamper-evident, verifiable record of the entire resolution process. The PoR root hash can be used to independently verify that no artifacts were modified after resolution.

Request Parameters

• prompt_spec (PromptSpec, required): Structured prompt specification from Step 1
• evidence_bundles (EvidenceBundle[], required): Evidence bundles from Step 2
• reasoning_trace (ReasoningTrace, required): Reasoning trace from Step 3
• verdict (Verdict, required): Verdict object from Step 4

Request Body

json
{
  "prompt_spec":      { ... },
  "evidence_bundles": [ ... ],
  "reasoning_trace":  { ... },
  "verdict":          { ... }
}

Response Fields

• ok (boolean): Whether bundling succeeded
• por_bundle (PoRBundle): Complete Proof-of-Reasoning bundle with all hashes and the full verdict
• por_root (string): Top-level Merkle root hash, e.g. "0xba9ec9c2..."
• roots (object): Individual root hashes for each pipeline artifact
• errors (string[]): Errors encountered during bundling

Response Example

json
{
  "ok": true,
  "por_root": "0xba9ec9c2d4e6f8a0b1c3d5e7f9a2b4c6d8e0f1a3",
  "roots": {
    "prompt_spec_hash": "0xabc123...",
    "evidence_root":    "0xdef456...",
    "reasoning_root":   "0x789ghi...",
    "por_root":         "0xba9ec9c2..."
  },
  "por_bundle": {
    "schema_version": "1.0",
    "protocol_version": "1.0",
    "market_id": "mk_3f5b9c7e",
    "prompt_spec_hash": "0xabc123...",
    "evidence_root": "0xdef456...",
    "reasoning_root": "0x789ghi...",
    "verdict_hash": "0xjkl012...",
    "por_root": "0xba9ec9c2...",
    "verdict": { ... },
    "tee_attestation": null,
    "signatures": {},
    "created_at": "2025-01-15T10:31:05Z",
    "metadata": {
      "pipeline_version": "2.1.0",
      "mode": "live"
    }
  },
  "errors": []
}

Schema Detail

PoRBundle {
  schema_version, protocol_version, market_id
  prompt_spec_hash   hex
  evidence_root      hex
  reasoning_root     hex
  verdict_hash       hex
  por_root           hex (master Merkle root)
  verdict            full Verdict object
  tee_attestation    null | object
  signatures         {}
  created_at, metadata { pipeline_version, mode }
}

roots {
  prompt_spec_hash   hex
  evidence_root      hex
  reasoning_root     hex
  por_root           hex
}
Notes
  • The por_root is the master Merkle root that covers all other roots. Use it for single-hash verification.
  • Individual roots (prompt_spec_hash, evidence_root, reasoning_root) enable partial verification of specific pipeline stages.
  • tee_attestation and signatures are reserved for future TEE (Trusted Execution Environment) support.
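Full Merkle verification needs the leaf-hashing rules, which this page does not specify. A shallow client-side sanity check is still possible: the bundle echoes the verdict's hashes at its top level, so the two can be compared field by field (a sketch with an illustrative helper name, not a substitute for cryptographic verification):

```python
def check_bundle_consistency(por_bundle):
    """Check that the bundle's roots agree with the hashes in its verdict."""
    verdict = por_bundle.get("verdict") or {}
    fields = ("market_id", "prompt_spec_hash", "evidence_root", "reasoning_root")
    return all(por_bundle.get(f) == verdict.get(f) for f in fields)
```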
Resolve (All-in-One)
POST
/step/resolve

Run the full resolution pipeline (collect → quality check → audit → judge → PoR bundle) in a single call. Quality check and temporal constraint are handled automatically — temporal_constraint is extracted from prompt_spec.extra and quality check runs with a retry loop by default.

Request Parameters

• prompt_spec (PromptSpec, required): Compiled prompt specification from /step/prompt
• tool_plan (ToolPlan, required): Tool execution plan from /step/prompt
• collectors (string[], optional): Which collectors to use (default: ["CollectorWebPageReader"])
• execution_mode (string, optional): "production", "development" (default), or "test"
• enable_quality_check (boolean, optional): Run quality check with retry loop after collection (default: true)
• max_quality_retries (integer, optional): Max quality check retry iterations, 0–5 (default: 2)
• llm_provider (string, optional): LLM provider override
• llm_model (string, optional): LLM model override

Request Body

json
{
  "prompt_spec": { ... },
  "tool_plan":   { ... },
  "collectors":  ["CollectorOpenSearch"],
  "execution_mode": "development",
  "enable_quality_check": true,
  "max_quality_retries": 2
}

Response Fields

• ok (boolean): Whether the full pipeline succeeded
• outcome (string): "YES", "NO", or "INVALID"
• confidence (number): Final confidence score 0–1
• por_root (string): PoR Merkle root hash
• artifacts (object): All pipeline artifacts (evidence_bundles, reasoning_trace, verdict, por_bundle)
• errors (string[]): Non-fatal errors

Response Example

json
{
  "ok": true,
  "outcome": "YES",
  "confidence": 0.85,
  "por_root": "0xba9ec9c2...",
  "artifacts": {
    "evidence_bundles": [ ... ],
    "reasoning_trace": { ... },
    "verdict": { ... },
    "por_bundle": { ... }
  },
  "errors": []
}
Notes
  • No extra fields needed for quality check or temporal constraint — both are fully automatic.
  • Set enable_quality_check: false to skip the quality check loop and go directly to audit.
  • This is equivalent to calling collect → quality_check → audit → judge → bundle individually. Prompt compilation still happens separately via /step/prompt, which supplies the required prompt_spec and tool_plan.
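Wrapped in the gateway envelope used throughout this reference (code / post_data / path / method), a /step/resolve call can be assembled like this. `build_resolve_call` is an illustrative helper, not an SDK function:

```python
import json

def build_resolve_call(code: str, prompt_spec: dict, tool_plan: dict,
                       max_quality_retries: int = 2) -> dict:
    """Return the envelope to POST to the ai_data gateway."""
    if not 0 <= max_quality_retries <= 5:
        raise ValueError("max_quality_retries must be 0-5")
    payload = {
        "prompt_spec": prompt_spec,
        "tool_plan": tool_plan,
        "enable_quality_check": True,
        "max_quality_retries": max_quality_retries,
    }
    return {
        "code": code,
        "post_data": json.dumps(payload),  # inner payload is double-encoded
        "path": "/step/resolve",
        "method": "POST",
    }
```

POST the returned dict to https://interface.cournot.ai/play/polymarket/ai_data and parse data.result from the response, as in the Quick Start script below.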
Validate Market
POST
/validate

Validate and compile a market query in a single call. Runs LLM validation (classify market type, validate fields, assess resolvability), prompt compilation (PromptSpec + ToolPlan), and source reachability probes in parallel. The response includes both the validation result and the compiled prompt_spec/tool_plan ready for /step/collect.

Request Parameters

FieldTypeDescription
user_input
required
stringThe prediction market query to validate and compile (1–8000 chars)
strict_mode
optional
booleanEnable strict mode for deterministic hashing (default: true)
llm_provider
optional
stringLLM provider override
llm_model
optional
stringLLM model override

Request Body

json
{
  "user_input": "Highest temperature in Buenos Aires on March 1?",
  "strict_mode": true
}

Response Fields

FieldTypeDescription
okbooleanWhether validation and compilation succeeded
classificationobjectMarket type classification with confidence and rationale
classification.market_typestringDetected type: FINANCIAL_PRICE, TEMPERATURE, SPORTS_MATCH, BINARY_EVENT, etc.
validationobjectChecks passed/failed with severity and suggestions
resolvabilityobjectScore (0–100), level (LOW/MEDIUM/HIGH/VERY_HIGH), risk factors
source_reachabilityarrayURL probe results — reachable, status_code, errors
prompt_specPromptSpecCompiled prompt specification (pass to /step/collect)
tool_planToolPlanTool execution plan (pass to /step/collect)
errorsstring[]Non-fatal errors

Response Example

json
{
  "ok": true,
  "classification": {
    "market_type": "TEMPERATURE",
    "confidence": 0.95,
    "detection_rationale": "Contains 'temperature', city name, and date"
  },
  "validation": {
    "checks_passed": ["U-02", "U-03", "TEMP-01"],
    "checks_failed": [
      {
        "check_id": "TEMP-04",
        "severity": "warning",
        "message": "No fallback data source specified.",
        "suggestion": "Add an alternative source if Wunderground is unavailable."
      }
    ]
  },
  "resolvability": {
    "score": 35,
    "level": "MEDIUM",
    "risk_factors": [
      { "factor": "Single data source with no fallback", "points": 30 }
    ]
  },
  "source_reachability": [
    { "url": "https://www.wunderground.com", "reachable": true, "status_code": 200, "error": null }
  ],
  "prompt_spec": { "..." : "..." },
  "tool_plan": { "..." : "..." },
  "errors": []
}
Notes
  • Risk levels: LOW (0–15) = auto-resolution OK, MEDIUM (16–35) = may have difficulty, HIGH (36–55) = high failure risk, VERY_HIGH (56+) = unlikely to resolve.
  • The prompt_spec and tool_plan can be passed directly to /step/collect or /step/resolve.
  • Source reachability detects Cloudflare blocks, paywalls, and timeouts on data source URLs.
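The documented score thresholds can be applied client-side to gate automatic resolution. A sketch using only the ranges listed above; the helper names are illustrative:

```python
def risk_level(score: int) -> str:
    """Map a resolvability score to its documented risk level."""
    if score <= 15:
        return "LOW"
    if score <= 35:
        return "MEDIUM"
    if score <= 55:
        return "HIGH"
    return "VERY_HIGH"

def auto_resolution_ok(validation_response: dict) -> bool:
    """Per the note above, only LOW-risk markets are safe to auto-resolve."""
    return risk_level(validation_response["resolvability"]["score"]) == "LOW"
```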
Dispute (Structured)
POST
/dispute

Stateless dispute-driven rerun of audit/judge steps. Provide all context artifacts and a structured dispute request. In reasoning_only mode, reruns audit and judge with existing evidence. In full_rerun mode, re-collects evidence first. Returns updated artifacts with a before/after diff.

Request Parameters

FieldTypeDescription
mode
optional
"reasoning_only" | "full_rerun"reasoning_only reruns audit/judge only. full_rerun re-collects evidence first.
reason_code
required
enumREASONING_ERROR, LOGIC_GAP, EVIDENCE_MISREAD, EVIDENCE_INSUFFICIENT, OTHER
message
required
stringDispute message describing the issue (1–8000 chars)
target
optional
object{ artifact: "evidence_bundle" | "reasoning_trace" | "verdict" | "prompt_spec", leaf_path?: string }
prompt_spec
required
PromptSpecFull PromptSpec from the previous run
evidence_bundle
optional
objectEvidenceBundle from the previous run (for reasoning_only mode)
reasoning_trace
optional
objectReasoningTrace from the previous run
tool_plan
optional
objectRequired for full_rerun mode
collectors
optional
string[]Required for full_rerun mode
patch
optional
object{ evidence_items_append?: array, prompt_spec_override?: object }

Request Body

json
{
  "mode": "reasoning_only",
  "reason_code": "EVIDENCE_MISREAD",
  "message": "The evidence was misinterpreted",
  "target": {
    "artifact": "evidence_bundle",
    "leaf_path": "items[0].extracted_fields.outcome"
  },
  "prompt_spec": { ... },
  "evidence_bundle": { ... },
  "reasoning_trace": { ... }
}

Response Fields

FieldTypeDescription
okbooleanWhether the dispute rerun succeeded
case_idstring | nullOptional correlation ID
rerun_planstring[]Steps that were rerun, e.g. ["audit", "judge"]
artifactsobjectUpdated artifacts: prompt_spec, evidence_bundle, evidence_bundles, reasoning_trace, verdict
diffobject{ steps_rerun, verdict_changed }

Response Example

json
{
  "ok": true,
  "case_id": null,
  "rerun_plan": ["audit", "judge"],
  "artifacts": {
    "prompt_spec": { ... },
    "evidence_bundle": { ... },
    "evidence_bundles": [ ... ],
    "reasoning_trace": { ... },
    "verdict": { ... }
  },
  "diff": {
    "steps_rerun": ["audit", "judge"],
    "verdict_changed": null
  }
}
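A reasoning_only dispute payload can be assembled and validated against the field constraints documented above. This is a sketch; the artifact dicts are passed through unchanged, and `build_dispute` is an illustrative helper:

```python
VALID_REASONS = {"REASONING_ERROR", "LOGIC_GAP", "EVIDENCE_MISREAD",
                 "EVIDENCE_INSUFFICIENT", "OTHER"}

def build_dispute(reason_code: str, message: str, prompt_spec: dict,
                  evidence_bundle: dict, reasoning_trace: dict) -> dict:
    """Assemble a reasoning_only DisputeRequest for POST /dispute."""
    if reason_code not in VALID_REASONS:
        raise ValueError(f"Unknown reason_code: {reason_code}")
    if not 1 <= len(message) <= 8000:
        raise ValueError("message must be 1-8000 chars")
    return {
        "mode": "reasoning_only",
        "reason_code": reason_code,
        "message": message,
        "prompt_spec": prompt_spec,
        "evidence_bundle": evidence_bundle,  # needed for reasoning_only mode
        "reasoning_trace": reasoning_trace,
    }
```

For full_rerun mode you would instead include tool_plan and collectors and may omit evidence_bundle, since evidence is re-collected.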
Dispute (LLM-Assisted)
POST
/dispute/llm

Simplified dispute endpoint that accepts three user inputs (reason, message, optional URLs) and uses an LLM to translate them into a structured DisputeRequest, then delegates to the existing dispute logic. Returns the same response as POST /dispute. Context artifacts (prompt_spec, evidence_bundle, reasoning_trace) are attached automatically from the current case.

Request Parameters

FieldTypeDescription
reason_code
required
enumEVIDENCE_MISREAD, EVIDENCE_INSUFFICIENT, REASONING_ERROR, LOGIC_GAP, OTHER
message
required
stringFree-text dispute message (1–4000 chars)
evidence_urls
optional
string[]Up to 5 URLs to fetch as supporting evidence
prompt_spec
required
PromptSpecFull PromptSpec from the previous run (auto-attached by frontend)
evidence_bundle
optional
objectEvidenceBundle from the previous run (auto-attached)
reasoning_trace
optional
objectReasoningTrace from the previous run (auto-attached)
tool_plan
optional
objectOnly needed if LLM decides full_rerun (auto-attached)
collectors
optional
string[]Only needed if LLM decides full_rerun (auto-attached)

Request Body

json
{
  "reason_code": "EVIDENCE_MISREAD",
  "message": "Wikipedia shows PM Shmyhal announced a preliminary agreement on Feb 25 2025",
  "evidence_urls": [
    "https://en.wikipedia.org/wiki/Ukraine_US_Mineral_Agreement"
  ],
  "prompt_spec": { ... },
  "evidence_bundle": { ... },
  "reasoning_trace": { ... }
}

Response Fields

FieldTypeDescription
okbooleanWhether the dispute rerun succeeded
case_idstring | nullOptional correlation ID
rerun_planstring[]Steps that were rerun
artifactsobjectUpdated artifacts with new verdict
diffobjectBefore/after comparison

Response Example

json
{
  "ok": true,
  "case_id": null,
  "rerun_plan": ["audit", "judge"],
  "artifacts": {
    "prompt_spec": { ... },
    "evidence_bundles": [ ... ],
    "reasoning_trace": { ... },
    "verdict": { "outcome": "YES", "confidence": 0.92, "..." : "..." }
  },
  "diff": {
    "steps_rerun": ["audit", "judge"],
    "verdict_changed": null
  }
}
Notes
  • The user only provides reason_code, message, and optional evidence_urls. All other context is auto-attached.
  • The LLM decides whether to run reasoning_only or full_rerun based on the dispute content.
  • Returns the same response shape as POST /dispute.
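The three user-supplied fields have tighter limits than the structured endpoint (message capped at 4000 characters rather than 8000, and at most 5 evidence URLs), so it can help to validate them before sending. An illustrative sketch:

```python
def llm_dispute_inputs(reason_code: str, message: str,
                       evidence_urls=None) -> dict:
    """Validate and package the user-supplied fields for POST /dispute/llm."""
    if not 1 <= len(message) <= 4000:
        raise ValueError("message must be 1-4000 chars")
    urls = list(evidence_urls or [])
    if len(urls) > 5:
        raise ValueError("at most 5 evidence_urls are accepted")
    return {
        "reason_code": reason_code,
        "message": message,
        "evidence_urls": urls,
    }
```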
Capabilities Discovery
GET
/capabilities

Returns available LLM providers, collector agents, and pipeline step definitions. Use this endpoint to discover which collectors and models are available before running the pipeline. No request body is required.

Request

bash
curl -X POST 'https://interface.cournot.ai/play/polymarket/ai_data' \
  -H 'Content-Type: application/json' \
  -d '{
    "code": "YOUR_CODE",
    "post_data": "{}",
    "path": "/capabilities",
    "method": "GET"
  }'

Response Fields

FieldTypeDescription
providersProvider[]Array of available LLM backends with their default models
stepsStepDef[]Array of pipeline step definitions with available agents and capabilities

Response Example

json
{
  "providers": [
    { "provider": "openai", "default_model": "gpt-4o" },
    { "provider": "google", "default_model": "gemini-2.5-flash" },
    { "provider": "grok", "default_model": "grok-4-fast" }
  ],
  "steps": [
    {
      "name": "prompt",
      "agents": ["prompt-compiler-v2"],
      "description": "Compile market question into structured specification"
    },
    ...
  ]
}
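The providers array can drive the llm_provider/llm_model overrides accepted by the step endpoints. A sketch that prefers one backend and falls back to the first listed, assuming the response shape above:

```python
def pick_provider(capabilities: dict, preferred: str = "openai") -> dict:
    """Return llm_provider/llm_model overrides from a /capabilities response."""
    providers = capabilities["providers"]
    for p in providers:
        if p["provider"] == preferred:
            break
    else:
        p = providers[0]  # preferred backend unavailable; take the first
    return {"llm_provider": p["provider"], "llm_model": p["default_model"]}
```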
Collectors & Providers

Collectors are evidence-gathering agents that can be specified in the collectors parameter of /step/collect. They run in priority order (highest first).

Available Collectors

NamePriorityDescription
CollectorOpenSearch200Open web search collector
CollectorCRP195CRP (Contextual Retrieval Pipeline) collector
CollectorHyDE190HyDE (Hypothetical Document Embeddings) collector
CollectorWebPageReader180Fetches and reads specific web pages for evidence
CollectorSitePinned175Targets pinned/known source URLs from the prompt spec
CollectorPAN170PAN (Parallel Augmented Navigation) collector
CollectorAgenticRAG160Agentic RAG collector with autonomous retrieval
CollectorGraphRAG150Graph-based RAG collector
CollectorHTTP100Direct HTTP fetch collector
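The priorities in the table determine execution order, so the sequence in which a `collectors` list will actually run can be computed client-side. A sketch using the values above:

```python
# Priorities as documented in the collectors table (highest runs first).
PRIORITIES = {
    "CollectorOpenSearch": 200, "CollectorCRP": 195, "CollectorHyDE": 190,
    "CollectorWebPageReader": 180, "CollectorSitePinned": 175,
    "CollectorPAN": 170, "CollectorAgenticRAG": 160,
    "CollectorGraphRAG": 150, "CollectorHTTP": 100,
}

def execution_order(collectors: list) -> list:
    """Sort a chosen collector list into its highest-first run order."""
    return sorted(collectors, key=lambda c: PRIORITIES.get(c, 0), reverse=True)
```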

Available LLM Providers

ProviderDefault Model
openaigpt-4o
googlegemini-2.5-flash
grokgrok-4-fast
Error Handling

Errors can occur at two levels: the gateway envelope and the step response. Always check both.

Gateway Error Codes

CodeMeaningAction
0SuccessParse data.result as JSON
4100Invalid access codeVerify your access code is correct and active
!= 0Other API errorRead msg for error details

Step-Level Errors

Each step response contains an ok boolean and an errors[] array. Even when the gateway returns code: 0, the step itself may have partially failed.

json
// Gateway succeeds but step reports errors
{
  "code": 0,
  "msg": "Success",
  "data": {
    "result": "{\"ok\": false, \"errors\": [\"Timeout fetching source_coinmarketcap\"], \"evidence_bundles\": []}"
  }
}

javascript
// Recommended error handling pattern:
const gateway = await response.json();
if (gateway.code !== 0) {
  throw new Error(`Gateway error: ${gateway.msg}`);
}
const step = JSON.parse(gateway.data.result);
if (!step.ok) {
  console.warn("Step errors:", step.errors);
}

HTTP-Level Errors (Proxy)

When using the dashboard proxy (/api/proxy/...), additional HTTP errors may occur.

HTTP StatusMeaning
502Upstream request failed (connection error)
504Upstream request timed out (GET: 30s, POST: 300s)
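Since 502 and 504 indicate transient upstream failures, proxy calls are a reasonable candidate for retry with backoff. A sketch of one possible policy; the request itself is passed in as a callable so the retry logic stands alone, and none of this is built into the API:

```python
import time

RETRYABLE = {502, 504}  # upstream connection failure / timeout

def call_with_retry(do_request, max_attempts: int = 3, base_delay: float = 1.0):
    """do_request() -> (status_code, body). Retries on 502/504 with
    exponential backoff (base_delay, 2*base_delay, ...)."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)
    return status, body
```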
Quick Start — Full Pipeline

Run the full pipeline (prompt, collect, quality check, audit, judge, bundle) sequentially to produce a complete Proof-of-Reasoning bundle. Copy this script and replace YOUR_CODE with your access code.

python
import json, requests

BASE = "https://interface.cournot.ai/play/polymarket/ai_data"
CODE = "YOUR_CODE"

def call(path: str, payload: dict, method: str = "POST") -> dict:
    """Wrap payload in the gateway envelope and call the API."""
    resp = requests.post(BASE, json={
        "code":      CODE,
        "post_data": json.dumps(payload),
        "path":      path,
        "method":    method,
    }, timeout=300)
    resp.raise_for_status()
    body = resp.json()
    if body["code"] != 0:
        raise RuntimeError(body.get("msg", "API error"))
    return json.loads(body["data"]["result"])

# ── Step 1: Prompt ──────────────────────────────────────────
prompt = call("/step/prompt", {
    "user_input":  "Will Bitcoin exceed 100k by March 2025?",
    "strict_mode": False,
})
spec = prompt["prompt_spec"]
plan = prompt["tool_plan"]
temporal = (spec.get("extra") or {}).get("temporal_constraint")
print(f"[1/6] Prompt compiled: {prompt['market_id']}")
if temporal:
    print(f"       Temporal guard: {temporal['reason']}")

# ── Step 2: Collect ─────────────────────────────────────────
collect = call("/step/collect", {
    "prompt_spec": spec,
    "tool_plan":   plan,
    "collectors":  ["CollectorWebPageReader"],
    "include_raw_content": False,
})
bundles = collect["evidence_bundles"]
print(f"[2/6] Collected {len(bundles)} evidence bundle(s)")

# ── Step 2.5: Quality Check + Retry ─────────────────────────
quality_scorecard = None
MAX_RETRIES = 2
for _ in range(MAX_RETRIES):
    qc = call("/step/quality_check", {
        "prompt_spec":      spec,
        "evidence_bundles": bundles,
    })
    if not qc.get("ok"):
        break
    quality_scorecard = qc.get("scorecard")
    if qc.get("meets_threshold"):
        break
    hints = (quality_scorecard or {}).get("retry_hints", {})
    if not hints:
        break
    retry = call("/step/collect", {
        "prompt_spec":      spec,
        "tool_plan":        plan,
        "collectors":       ["CollectorOpenSearch"],
        "quality_feedback": hints,
    })
    if retry.get("ok") is not False:
        bundles += retry.get("evidence_bundles", [])
level = (quality_scorecard or {}).get("quality_level", "N/A")
print(f"[2.5/6] Quality: {level}")

# ── Step 3: Audit ───────────────────────────────────────────
audit_payload = {
    "prompt_spec":      spec,
    "evidence_bundles": bundles,
}
if quality_scorecard:
    audit_payload["quality_scorecard"] = quality_scorecard
if temporal:
    audit_payload["temporal_constraint"] = temporal

audit = call("/step/audit", audit_payload)
trace = audit["reasoning_trace"]
print(f"[3/6] Audit: {trace['preliminary_outcome']} "
      f"({trace['preliminary_confidence']:.0%})")

# ── Step 4: Judge ───────────────────────────────────────────
judge_payload = {
    "prompt_spec":      spec,
    "evidence_bundles": bundles,
    "reasoning_trace":  trace,
}
if quality_scorecard:
    judge_payload["quality_scorecard"] = quality_scorecard
if temporal:
    judge_payload["temporal_constraint"] = temporal

judge = call("/step/judge", judge_payload)
verdict = judge["verdict"]
print(f"[4/6] Verdict: {judge['outcome']} ({judge['confidence']:.0%})")

# ── Step 5: Bundle ──────────────────────────────────────────
bundle = call("/step/bundle", {
    "prompt_spec":      spec,
    "evidence_bundles": bundles,
    "reasoning_trace":  trace,
    "verdict":          verdict,
})
print(f"[5/6] PoR Root: {bundle['por_root']}")

# ── Summary ─────────────────────────────────────────────────
print(f"\nOutcome:    {judge['outcome']}")
print(f"Confidence: {judge['confidence']:.0%}")
print(f"PoR Root:   {bundle['por_root']}")