
Overview

For large datasets (like per-merchant residuals), the API supports async exports. You create an export job, poll for completion, then download the file via a signed URL.

Workflow

Step 1: Create the export

import requests

# BASE (API root URL) and HEADERS (auth headers) are assumed to be configured
resp = requests.post(
    f"{BASE}/residuals/reports/2026/1/merchants/export",
    headers=HEADERS,
).json()

export_id = resp["data"]["export_id"]
print(f"Export created: {export_id}")

The API returns 202 Accepted with an export ID.
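For reference, the creation response might be shaped like this (illustrative only — export_id is the documented field; anything else here is an assumption):

```python
# Illustrative shape of the 202 creation response; field values are made up,
# and any field other than "export_id" is an assumption, not a documented guarantee
example_response = {
    "data": {
        "export_id": "exp_9f3a1c",  # hypothetical export ID
        "status": "pending",        # job starts out queued
    }
}

export_id = example_response["data"]["export_id"]
```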

Step 2: Poll for completion

import time

deadline = time.monotonic() + 600  # give up after 10 minutes (tune as needed)

while True:
    status = requests.get(
        f"{BASE}/residuals/exports/{export_id}",
        headers=HEADERS,
    ).json()["data"]

    if status["status"] == "completed":
        print("Export ready")
        break
    elif status["status"] == "failed":
        raise RuntimeError(f"Export failed: {status.get('error')}")
    elif time.monotonic() > deadline:
        raise TimeoutError("Export did not complete in time")

    time.sleep(5)

Export status values:

  Status       Meaning
  pending      Job queued
  processing   Generating CSV
  completed    Ready for download
  failed       Generation failed (see error)

Step 3: Download

download = requests.get(
    f"{BASE}/residuals/exports/{export_id}/download",
    headers=HEADERS
).json()["data"]

# download_url is a signed URL, valid until expires_at
csv_resp = requests.get(download["download_url"])
with open("residuals_2026_01.csv", "wb") as f:
    f.write(csv_resp.content)
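
Since exports exist for large datasets, loading the whole CSV into memory with .content may be wasteful. A streaming variant (a sketch using requests' standard stream=True / iter_content options; the function name is illustrative) writes the file in chunks:

```python
import requests

def download_export_csv(download_url: str, dest_path: str,
                        chunk_size: int = 1 << 20) -> None:
    """Stream the signed-URL CSV to disk without holding it all in memory."""
    with requests.get(download_url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(dest_path, "wb") as f:
            # iter_content yields the body in chunk_size pieces (1 MiB here)
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```

Call it with the signed URL from the download response, e.g. download_export_csv(download["download_url"], "residuals_2026_01.csv").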

Tips

  • Signed download URLs expire (check expires_at); call the download endpoint again to get a fresh one
  • Export jobs are rate-limited like any other endpoint
  • For monthly reconciliation, combine exports with the Reconcile Residuals workflow
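
Putting the three steps together, a minimal end-to-end helper might look like the sketch below (the function name and parameters are illustrative; it assumes the same base-URL and auth-header conventions used throughout):

```python
import time
import requests

def export_merchant_residuals(base: str, headers: dict, year: int, month: int,
                              dest_path: str, poll_interval: float = 5.0,
                              timeout: float = 600.0) -> None:
    """Create an export job, wait for it to finish, and save the CSV."""
    # 1. Create the export job (API returns 202 Accepted with an export ID)
    export_id = requests.post(
        f"{base}/residuals/reports/{year}/{month}/merchants/export",
        headers=headers,
    ).json()["data"]["export_id"]

    # 2. Poll until the job completes, fails, or the deadline passes
    deadline = time.monotonic() + timeout
    while True:
        status = requests.get(
            f"{base}/residuals/exports/{export_id}", headers=headers
        ).json()["data"]
        if status["status"] == "completed":
            break
        if status["status"] == "failed":
            raise RuntimeError(f"Export failed: {status.get('error')}")
        if time.monotonic() > deadline:
            raise TimeoutError("Export did not complete in time")
        time.sleep(poll_interval)

    # 3. Fetch the signed URL, then download the file
    download = requests.get(
        f"{base}/residuals/exports/{export_id}/download", headers=headers
    ).json()["data"]
    csv_resp = requests.get(download["download_url"])
    with open(dest_path, "wb") as f:
        f.write(csv_resp.content)
```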