API Reference

All endpoints require authentication via a Supabase session. Responses use { data } on success and { error: "message" } on failure.

Authentication & conventions

  • Auth: All routes require an authenticated Supabase session (cookie-based). Returns 401 if missing.
  • Workspace scoping: All resources are scoped to the user's workspace. Team members access the owner's workspace.
  • Rate limiting: Per-user limits. 429 response includes Retry-After header.
  • UUID validation: All [id] params validated against UUID regex. Returns 400 on invalid format.
  • BYOK: Users can configure their own LLM API keys in Settings. Keys are encrypted AES-256-GCM before storage.
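
The UUID check on [id] params can be sketched like this. The exact server-side regex is not published, so this helper is an assumption; any RFC 4122-shaped check behaves the same way:

```typescript
// Sketch of the [id] validation described above (assumed regex; the
// real server-side pattern may differ in strictness).
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function isValidUuid(id: string): boolean {
  return UUID_RE.test(id);
}

// A route handler would return 400 before touching the database:
// if (!isValidUuid(params.id)) return json({ error: "Invalid id" }, 400);
```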

Chat

POST /api/chat
Rate limit: 30/min per user

Send a message and receive an AI-powered streaming response with RAG context.

Request

{
  "message": "string (required, max 10,000 chars)",
  "conversationId": "uuid (optional)",
  "model": "'openai' | 'anthropic' | 'groq' (optional)",
  "attachments": [{ "id": "string", "name": "string", "extractedText": "string" }]
}

Response

Server-Sent Events (SSE) stream:
  event: start    → { conversationId: "uuid" }
  event: token    → { token: "string" }
  event: tool_start → { name: "string" }
  event: tool_result → { name: "string", result: "string" }
  event: replace  → { content: "string" } (PII de-anonymized)
  event: done     → { messageId, sources, tokensUsed, cost }
  event: error    → { error: "string" }

Requires at least one configured LLM API key (BYOK or environment). Returns 503 if no backend is available.
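
A minimal client for this stream, assuming fetch and a small event-block parser. The function and type names here are illustrative, not part of the API; only the event names come from the list above:

```typescript
// Minimal SSE consumer for POST /api/chat. `parseSseBlock` is a plain
// helper (assumed, not provided by the API) that splits one "event:/data:"
// block into an event name and parsed JSON payload.
type SseEvent = { event: string; data: unknown };

function parseSseBlock(block: string): SseEvent | null {
  let event = "message";
  const dataLines: string[] = [];
  for (const line of block.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  if (dataLines.length === 0) return null;
  return { event, data: JSON.parse(dataLines.join("\n")) };
}

async function streamChat(message: string, onToken: (t: string) => void) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (res.status === 503) throw new Error("No LLM backend available");
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let sep: number;
    // SSE blocks are separated by a blank line.
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const ev = parseSseBlock(buffer.slice(0, sep));
      buffer = buffer.slice(sep + 2);
      if (ev?.event === "token") onToken((ev.data as { token: string }).token);
      if (ev?.event === "error") throw new Error((ev.data as { error: string }).error);
    }
  }
}
```

The `replace` and `done` events would be handled the same way inside the loop; only `token` and `error` are shown to keep the sketch short.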

Documents

GET /api/documents
Rate limit: 30/min per user

List all documents in the workspace with filtering, sorting, and pagination.

Request

Query params:
  folder: string (e.g. "/contracts")
  search: string (filename search)
  sort: "created_at" | "original_name" | "file_size"
  order: "asc" | "desc"
  page: number (default: 1; 100 items per page)

Response

{
  "documents": [{
    "id": "uuid",
    "original_name": "string",
    "file_type": "string",
    "file_size": number,
    "folder": "string",
    "status": "processing" | "indexed" | "error",
    "created_at": "ISO timestamp"
  }],
  "page": number,
  "limit": 100
}
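
A query-string builder for the params above (illustrative helper, not part of the API; only documented params are included):

```typescript
// Builds the URL for GET /api/documents from the documented query params.
type DocQuery = {
  folder?: string;
  search?: string;
  sort?: "created_at" | "original_name" | "file_size";
  order?: "asc" | "desc";
  page?: number;
};

function documentsUrl(q: DocQuery): string {
  const params = new URLSearchParams();
  if (q.folder) params.set("folder", q.folder);
  if (q.search) params.set("search", q.search);
  if (q.sort) params.set("sort", q.sort);
  if (q.order) params.set("order", q.order);
  if (q.page) params.set("page", String(q.page));
  const qs = params.toString();
  return "/api/documents" + (qs ? `?${qs}` : "");
}

// const res = await fetch(documentsUrl({ folder: "/contracts", page: 2 }));
```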

POST /api/documents/upload
Rate limit: 20/min per user

Upload a file (PDF, DOCX, or TXT). Uploaded files are automatically indexed into pgvector for RAG.

Request

Content-Type: multipart/form-data
  file: File (max 50 MB, validated by magic bytes)

Response

201 Created
{
  "document": {
    "id": "uuid",
    "original_name": "string",
    "file_type": "string",
    "file_size": number,
    "status": "processing"
  }
}

Indexing happens asynchronously. Status transitions: processing → indexed | error.
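
An upload sketch using FormData. The 50 MB cap is from the docs above; the pre-check is client-side convenience only, since the server also validates magic bytes, which cannot be replicated from the filename:

```typescript
// Client-side sketch for POST /api/documents/upload.
const MAX_BYTES = 50 * 1024 * 1024; // documented 50 MB limit

function exceedsSizeLimit(sizeBytes: number): boolean {
  return sizeBytes > MAX_BYTES;
}

async function uploadDocument(file: File) {
  if (exceedsSizeLimit(file.size)) throw new Error("File exceeds 50 MB");
  const form = new FormData();
  form.append("file", file);
  const res = await fetch("/api/documents/upload", { method: "POST", body: form });
  const body = await res.json();
  if (res.status !== 201) throw new Error(body.error);
  // body.document.status is "processing"; poll GET /api/documents until
  // it transitions to "indexed" or "error".
  return body.document;
}
```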

PATCH /api/documents/[id]
Rate limit: 20/min per user

Update document metadata (rename, move to folder).

Request

{
  "original_name": "string (optional)",
  "folder": "string (optional)"
}

Response

{ "message": "Document updated." }

DELETE /api/documents/[id]
Rate limit: 20/min per user

Permanently delete a document and its indexed chunks.

Response

{ "message": "Document deleted." }

Agents

GET /api/agents
Rate limit: 30/min per user

List all agents (custom + predefined) in the workspace.

Response

{
  "agents": [{
    "id": "uuid",
    "name": "string",
    "description": "string | null",
    "system_prompt": "string | null",
    "icon_type": "string",
    "is_predefined": boolean,
    "is_active": boolean,
    "linked_folders": ["string"]
  }]
}

POST /api/agents
Rate limit: 10/min per user

Create a custom AI agent with system prompt and configuration.

Request

{
  "name": "string (1-50 chars, required)",
  "description": "string (max 200 chars)",
  "system_prompt": "string (max 2000 chars)",
  "icon_type": "bot | briefcase | document | shield | ...",
  "is_active": boolean,
  "linked_folders": ["string"]
}

Response

201 Created — full agent object

Owner/admin only. HTML stripped from all text fields (XSS protection).

POST /api/agents/pipeline
Rate limit: 10/min per user

Chain two to five agents sequentially or in parallel. In sequential mode, each agent receives the previous agent's output as context.

Request

{
  "message": "string (required)",
  "agentIds": ["uuid", "uuid", ...],
  "documentIds": ["uuid", ...] (optional),
  "mode": "sequential" | "parallel"
}

Response

{
  "data": {
    "finalOutput": "string",
    "steps": [{
      "agentId": "uuid",
      "agentName": "string",
      "output": "string",
      "tokensUsed": number
    }],
    "totalTokens": number,
    "mode": "sequential" | "parallel"
  }
}

Parallel mode: the first n-1 agents run concurrently and the last agent synthesizes their outputs.
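
A call sketch that enforces the two-to-five agent bound before hitting the endpoint (helper and function names are illustrative):

```typescript
// Sketch of calling POST /api/agents/pipeline. The 2-5 agent bound comes
// from the endpoint description above.
function validAgentCount(agentIds: string[]): boolean {
  return agentIds.length >= 2 && agentIds.length <= 5;
}

async function runPipeline(
  message: string,
  agentIds: string[],
  mode: "sequential" | "parallel" = "sequential",
) {
  if (!validAgentCount(agentIds)) {
    throw new Error("Pipelines take between 2 and 5 agents");
  }
  const res = await fetch("/api/agents/pipeline", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, agentIds, mode }),
  });
  const body = await res.json();
  if (!res.ok) throw new Error(body.error);
  return body.data; // { finalOutput, steps, totalTokens, mode }
}
```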

Conversations

GET /api/conversations
Rate limit: 30/min per user

List all chat conversations (most recent first, max 50).

Response

{
  "conversations": [{
    "id": "uuid",
    "title": "string",
    "createdAt": "ISO timestamp",
    "updatedAt": "ISO timestamp",
    "messageCount": number
  }]
}

GET /api/conversations/[id]/messages
Rate limit: 30/min per user

Fetch all messages in a conversation with sources and feedback.

Response

{
  "messages": [{
    "id": "uuid",
    "role": "user" | "assistant",
    "content": "string",
    "sources": [{
      "document_id": "uuid",
      "content": "string (excerpt)",
      "similarity": number
    }],
    "feedback": "thumbs_up" | "thumbs_down" | null,
    "timestamp": "HH:MM"
  }]
}

Settings

GET /api/settings

Get user profile and workspace settings (LLM config, privacy, data region).

Response

{
  "name": "string",
  "email": "string",
  "plan": "free" | "starter" | "pro" | "business",
  "profileSettings": {
    "llmKeys": { "openai": "••••", "anthropic": "••••" },
    "llmProvider": "string",
    "emailNotifs": boolean
  },
  "workspaceSettings": {
    "dataRegion": "global" | "eu_only" | "local_only",
    "piiAnonymization": boolean,
    "enabledProviders": ["string"],
    "defaultProvider": "string"
  }
}

PUT /api/settings
Rate limit: 20/min per user

Update profile name, LLM keys (encrypted AES-256-GCM), privacy settings.

Request

{
  "name": "string",
  "apiKeys": { "openai": "sk-...", "anthropic": "sk-ant-..." },
  "dataRegion": "global" | "eu_only" | "local_only",
  "piiAnonymization": boolean,
  "enabledProviders": ["string"],
  "defaultProvider": "string"
}

Response

{ "success": true }

LLM keys encrypted before storage. Workspace settings: owner only.
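
A settings-update sketch, plus a masking helper mirroring the "••••" values the GET endpoint returns for stored keys (both names are illustrative, not part of the API):

```typescript
// Sketch of PUT /api/settings. Only include the fields you want to change;
// undefined properties are omitted by JSON.stringify.
function maskKey(key: string): string {
  return key.length <= 4 ? "••••" : "••••" + key.slice(-4);
}

async function updateSettings(update: {
  name?: string;
  apiKeys?: Record<string, string>;
  dataRegion?: "global" | "eu_only" | "local_only";
  piiAnonymization?: boolean;
  enabledProviders?: string[];
  defaultProvider?: string;
}) {
  const res = await fetch("/api/settings", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(update),
  });
  if (!res.ok) throw new Error((await res.json()).error);
  return res.json(); // { "success": true }
}

// Never log raw keys client-side; display them masked instead:
// maskKey("sk-abc12345") → "••••2345"
```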

Need help integrating? Contact us at support@ai-deskflow.com