Overview

The AI explain endpoint uses Vertex AI to generate natural-language analysis of your residual portfolio. It’s designed for embedding in partner-facing dashboards or reports.

Making a request

import requests

# BASE and HEADERS (the API base URL and auth headers) are assumed to be
# defined elsewhere in your setup.
resp = requests.post(
    f"{BASE}/residuals/explain",
    headers=HEADERS,
    json={
        "year": 2026,
        "month": 1,
        "question": "Why did my payout decrease this month?"
    },
).json()

print(resp["data"]["explanation"])

Response

{
  "data": {
    "explanation": "Your January payout decreased by $2,340 compared to December...",
    "generated_at": "2026-02-09T14:30:00Z",
    "model": "gemini-2.0-flash"
  },
  "meta": {
    "request_id": "req_abc123",
    "api_version": "v2"
  },
  "errors": []
}
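Because the envelope always carries an errors array alongside data, it is worth checking it before reading the explanation. A minimal sketch; the helper name and the choice to raise RuntimeError are illustrative, not part of the API:

```python
def extract_explanation(resp):
    """Return the explanation text, or raise if the envelope reports errors.

    `resp` is the parsed JSON envelope shown above (data / meta / errors).
    """
    if resp.get("errors"):
        # Surface the raw error entries; their exact schema isn't
        # documented here, so we don't assume specific fields.
        raise RuntimeError(f"API returned errors: {resp['errors']}")
    return resp["data"]["explanation"]
```
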

Rate limits

The AI explain endpoint has a stricter rate limit than other endpoints:
Limit        Value
AI explain   5 requests per minute per API key
Global       120 requests per minute per API key
If you exceed the AI limit, you’ll receive a 429 RATE_LIMIT_EXCEEDED response whose body includes retry_after_seconds.
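The retry_after_seconds value can drive a simple backoff loop. A minimal sketch, assuming retry_after_seconds appears in the first entry of the errors array (the exact location in the 429 body isn’t shown above) and that BASE/HEADERS-style arguments are passed in:

```python
import time

import requests


def post_with_retry(url, headers, payload, max_retries=3):
    """POST, sleeping for retry_after_seconds on a 429 before retrying."""
    resp = None
    for _ in range(max_retries + 1):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            return resp
        # Assumption: the 429 body carries retry_after_seconds inside the
        # first errors entry; fall back to a short wait if it's absent.
        errors = resp.json().get("errors") or [{}]
        time.sleep(errors[0].get("retry_after_seconds", 1))
    return resp  # still rate-limited after all retries
```
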

Example questions

Question                                  What you get
"Summarize my portfolio performance"      High-level volume, payout, and merchant count trends
"Why did my payout decrease?"             Analysis of merchant attrition, volume changes, and fee adjustments
"Which merchants should I focus on?"      Identifies high-growth or at-risk merchants
"Compare this month to last month"        Month-over-month delta analysis

Tips

  • Cache responses — AI insights are cached for 1 hour server-side. Repeated identical requests within that window are fast and don’t count against your AI rate limit.
  • Be specific — More specific questions produce more actionable responses.
  • Use with residual data — Pair AI insights with the Reconcile Residuals workflow for a complete picture.
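Since identical requests are served from the 1-hour server-side cache, a matching client-side cache skips the round trip entirely. A minimal sketch; the TTL value and JSON-keying scheme here are illustrative choices, not part of the API:

```python
import json
import time

_cache = {}
CACHE_TTL = 3600  # seconds, mirroring the documented 1-hour server-side window


def cached_explain(payload, fetch, now=time.time):
    """Return a cached result for identical payloads within the TTL.

    `fetch` is any callable that performs the actual API request;
    `payload` is the JSON body sent to /residuals/explain.
    """
    key = json.dumps(payload, sort_keys=True)  # order-insensitive cache key
    hit = _cache.get(key)
    if hit and now() - hit[0] < CACHE_TTL:
        return hit[1]
    result = fetch(payload)
    _cache[key] = (now(), result)
    return result
```
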