Port AI API Interaction

Port AI can be accessed programmatically through Port's API, enabling integration into custom applications and workflows. This provides the most flexible way to incorporate Port AI capabilities into your existing tools and processes.

API Endpoints

Port AI provides streaming API endpoints for real-time interaction:

  • Port AI Assistant: /v1/ai/invoke - General-purpose AI interactions.
  • AI Agents: /v1/agent/<AGENT_IDENTIFIER>/invoke - Domain-specific agent interactions.

All interactions use streaming responses as Server-Sent Events (SSE) to provide real-time updates during execution. The response will be in text/event-stream format.

Interaction Process

  1. Invoke Port AI
  2. The API will start sending Server-Sent Events
  3. Your client should process these events as they arrive, with each event providing information about the AI's progress or final response

Basic API Examples

Port AI Assistant:

curl 'https://api.port.io/v1/ai/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{"prompt":"What services are failing health checks?"}'

AI Agents:

curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{"prompt":"Analyze the health of our production services"}'

With metadata labels:

curl 'https://api.port.io/v1/ai/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "prompt": "What services are failing health checks?",
  "tools": ["^(list|search|describe)_.*"],
  "labels": {
    "source": "monitoring_system",
    "environment": "production",
    "triggered_by": "automated_check"
  }
}'

Streaming Response Format

The API responds with Content-Type: text/event-stream; charset=utf-8.

Each event in the stream has the following format:

event: <event_name>
data: <json_payload_or_string>

Note the blank line after data: ... which separates events.
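The event framing above can be handled with a few lines of client code. Below is a minimal sketch, not a production client: it assumes the full stream body is already available as one string, whereas a real client would buffer partial trailing blocks as chunks arrive.

```javascript
// Minimal sketch: parse a text/event-stream body into { event, data } pairs.
// Events are separated by a blank line; each has "event:" and "data:" lines.
function parseSSE(raw) {
  const events = [];
  for (const block of raw.split("\n\n")) {
    let event = "message"; // SSE default when no event: line is present
    const dataLines = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
    }
    if (dataLines.length > 0) events.push({ event, data: dataLines.join("\n") });
  }
  return events;
}

// Example: two events in the format shown above.
const sample =
  "event: execution\ndata: I found 15 services.\n\n" +
  'event: done\ndata: {"rateLimitUsage":{"remainingRequests":193}}\n\n';
const parsed = parseSSE(sample);
// parsed[0] → { event: "execution", data: "I found 15 services." }
```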

Example Event Sequence

event: tool_call
data: { "id": "call_0", "name": "list_entities", "arguments": "{\"blueprintIdentifier\":\"service\"}" }

event: tool_result
data: { "id": "call_0", "content": "Found 15 services in your catalog..." }

event: tool_call
data: { "id": "call_1", "name": "run_action", "arguments": "{\"actionIdentifier\":\"create_incident\"}" }

event: tool_result
data: { "id": "call_1", "content": "Action run created successfully with ID: run_12345" }

event: execution
data: I found 15 services in your catalog and created an incident report as requested.

event: done
data: {
  "rateLimitUsage": {
    "maxRequests": 200,
    "remainingRequests": 193,
    "maxTokens": 200000,
    "remainingTokens": 179910,
    "remainingTimeMs": 903
  },
  "monthlyQuotaUsage": {
    "monthlyLimit": 50,
    "remainingQuota": 49,
    "month": "2025-09",
    "remainingTimeMs": 1766899073
  }
}

Event Types

tool_call

Indicates that Port AI is about to execute a tool. This event provides details about the tool being called and its arguments. For large arguments, the data may be sent in multiple chunks.

{
  "id": "call_0",
  "name": "list_entities",
  "arguments": "{\"blueprintIdentifier\":\"service\",\"limit\":10}",
  "lastChunk": true
}

Fields:

  • id: Unique identifier for this tool call.
  • name: Name of the tool being executed (only included in the first chunk).
  • arguments: JSON string containing the tool arguments (may be chunked for large payloads).
  • lastChunk: Boolean indicating if this is the final chunk for this tool call (optional, only present on the last chunk).

tool_result

Contains the result of a tool execution. For large results, the data may be sent in multiple chunks.

{
  "id": "call_0",
  "content": "Found 15 services in your catalog: api-gateway, user-service, payment-service...",
  "lastChunk": true
}

Fields:

  • id: Unique identifier matching the corresponding tool call.
  • content: The result content from the tool execution (may be chunked for large responses).
  • lastChunk: Boolean indicating if this is the final chunk for this tool result (optional, only present on the last chunk).
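Chunked tool_call and tool_result payloads can be reassembled by buffering on id until lastChunk arrives. A minimal sketch (the handler and completion callback names are illustrative, not part of the API):

```javascript
// Sketch: reassemble chunked tool_call / tool_result events by id.
// Chunks for one call share the same id; lastChunk marks the final piece.
function createChunkAssembler(onComplete) {
  const buffers = new Map(); // id → { name, parts }
  return function handleChunk(chunk) {
    const entry = buffers.get(chunk.id) ?? { name: undefined, parts: [] };
    if (chunk.name) entry.name = chunk.name; // name only appears on the first chunk
    entry.parts.push(chunk.arguments ?? chunk.content ?? "");
    buffers.set(chunk.id, entry);
    if (chunk.lastChunk) {
      buffers.delete(chunk.id);
      onComplete({ id: chunk.id, name: entry.name, payload: entry.parts.join("") });
    }
  };
}

// Example: one tool_call split across two chunks.
const completed = [];
const handle = createChunkAssembler((call) => completed.push(call));
handle({ id: "call_0", name: "list_entities", arguments: '{"blueprintIdent' });
handle({ id: "call_0", arguments: 'ifier":"service"}', lastChunk: true });
// completed[0].payload → '{"blueprintIdentifier":"service"}'
```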

execution

The final textual answer or a chunk of the answer from Port AI. For longer responses, multiple execution events might be sent.

done

Signals that Port AI has finished processing and the response stream is complete. This event also includes quota usage information for managing your API limits.

{
  "rateLimitUsage": {
    "maxRequests": 200,
    "remainingRequests": 193,
    "maxTokens": 200000,
    "remainingTokens": 179910,
    "remainingTimeMs": 903
  },
  "monthlyQuotaUsage": {
    "monthlyLimit": 50,
    "remainingQuota": 49,
    "month": "2025-09",
    "remainingTimeMs": 1766899073
  }
}

Quota Usage Fields:

  • maxRequests: Maximum number of LLM calls allowed in the current rolling window.
  • remainingRequests: Number of LLM calls remaining in the current window.
  • maxTokens: Maximum number of tokens allowed in the current rolling window.
  • remainingTokens: Number of tokens remaining in the current window.
  • remainingTimeMs: Time in milliseconds until the rolling window resets.

Processing Quota Information

Managing quota usage

Use the quota information in the done event to implement client-side rate limiting and avoid hitting API limits. When remainingRequests (remaining LLM calls) or remainingTokens are low, consider adding delays between requests or queuing them for later execution.

JavaScript Example: Processing Quota Information

When processing the streaming response, you'll receive quota usage information in the final done event. Here's a JavaScript example of how to handle this:

// Note: EventSource only issues GET requests. For the POST invoke endpoints,
// read the response stream from fetch and parse the events the same way.
const eventSource = new EventSource(apiUrl);

eventSource.addEventListener("done", (event) => {
  const data = JSON.parse(event.data);

  if (data.rateLimitUsage) {
    const { remainingRequests, remainingTokens, remainingTimeMs } =
      data.rateLimitUsage;

    // Check if quota is running low (LLM calls or tokens)
    if (remainingRequests < 10 || remainingTokens < 10000) {
      console.warn("Quota running low, consider rate limiting");
      // Implement rate limiting logic
    }

    // Schedule next request after quota reset if needed
    if (remainingRequests === 0) {
      setTimeout(() => {
        // Safe to make next request
      }, remainingTimeMs);
    }
  }

  eventSource.close();
});

Rate Limits and Quotas

Port AI operates with specific limits to ensure optimal performance for all users:

LLM Provider Limits

These limits apply when using Port's managed AI infrastructure. When you configure your own LLM provider, these Port-specific limits no longer apply, and usage will be governed by your provider's own limits and pricing.

Port acts as a bridge to leading LLM providers and doesn't host LLM models internally.

Rate Limits (Per Minute)

  • LLM call limit: 200 LLM calls per minute.
  • Token usage limit: 500,000 tokens per minute.
  • These limits reset every minute.

Monthly Quota

  • Default quota: 500 AI invocations per month.
  • Each interaction with Port AI counts as one request against your quota.
  • Quota resets monthly.

Usage limits

Usage limits may change without prior notice. Once a limit is reached, you will need to wait until it resets.
If you attempt to interact with Port AI after reaching a limit, you will receive an error message indicating that the limit has been exceeded. The query limit is estimated and depends on the actual token usage.

Monitor your usage

You can monitor your current usage in several ways:

Rate limits

  • Check the final done event in streaming responses for remaining LLM calls, tokens, and reset time.

Monthly quota

You can monitor your current monthly quota usage using the Get monthly AI invocations quota usage API endpoint.

Proactive quota monitoring

Check your monthly quota before making multiple Port AI requests to avoid hitting limits. When remainingQuota is low, consider implementing rate limiting or queuing requests until the monthly quota resets. Note that you may also encounter per-minute rate limits, which are separate from this monthly quota.

Structured output

Port AI supports structured output generation, allowing you to specify a JSON Schema that the AI response must conform to. This is useful when you need to parse the AI response programmatically.

How it works

Include an outputSchema parameter in your API request with a valid JSON Schema. The AI will generate a structured JSON object matching the schema instead of free-form text.

curl 'https://api.port.io/v1/ai/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "prompt": "Analyze the health of our production services",
  "tools": ["^(list|search|describe)_.*"],
  "outputSchema": {
    "type": "object",
    "properties": {
      "summary": { "type": "string" },
      "healthyServices": { "type": "number" },
      "unhealthyServices": { "type": "number" },
      "recommendations": {
        "type": "array",
        "items": { "type": "string" }
      }
    },
    "required": ["summary", "healthyServices", "unhealthyServices"]
  }
}'

The same parameter works with AI agents:

curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "prompt": "Analyze service dependencies",
  "outputSchema": {
    "type": "object",
    "properties": {
      "serviceName": { "type": "string" },
      "dependencies": {
        "type": "array",
        "items": { "type": "string" }
      },
      "riskLevel": { "type": "string" }
    },
    "required": ["serviceName", "dependencies"]
  }
}'

Schema requirements

The outputSchema must be a valid JSON Schema with type: "object" at the root level. You can define:

  • properties: The fields the AI should generate.
  • required: Fields that must be present in the response.
  • Nested objects and arrays for complex structures.

Structured output behavior

When outputSchema is provided, the AI must generate a valid JSON object matching the schema. If the AI fails to generate valid output conforming to the schema, the request will fail with an error. The response will contain the structured JSON object as the final execution event data.
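On the client side, the structured result can be parsed from the final execution event and checked against the schema's required list. A minimal sketch (full JSON Schema validation would need a validator library; this only checks required top-level fields, and the function name is illustrative):

```javascript
// Sketch: parse the final execution event data and verify the fields
// listed as "required" in the outputSchema are present.
function parseStructuredOutput(executionData, requiredFields) {
  const result = JSON.parse(executionData); // throws if not valid JSON
  const missing = requiredFields.filter((f) => !(f in result));
  if (missing.length > 0) {
    throw new Error(`Structured output missing required fields: ${missing.join(", ")}`);
  }
  return result;
}

// Example payload matching the health-analysis schema above.
const output = parseStructuredOutput(
  '{"summary":"2 services degraded","healthyServices":13,"unhealthyServices":2}',
  ["summary", "healthyServices", "unhealthyServices"],
);
// output.unhealthyServices → 2
```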

Selecting a Model

Port AI allows you to specify which LLM provider and model to use for specific API requests, giving you fine-grained control over AI processing on a per-request basis.

How LLM Providers Work

Port AI supports multiple LLM providers and models. You can either use Port's managed AI infrastructure (default) or configure your own LLM providers for additional control over data privacy, costs, and compliance.

Learn more about LLM Provider Management and see the supported models and providers.

Specifying Provider and Model

When making API requests, you can include provider and model parameters (if none specified, your organization's default will be used). See the Invoke an agent API reference for detailed example.

Default Behavior

If no provider is specified in your API request, the system uses your organization's configured defaults, or falls back to Port's system defaults if none are configured.

Tool Selection

Port AI allows you to control which specific tools from the Port MCP server are available for each API interaction. This provides fine-grained control over what actions Port AI can perform, enabling you to create secure, purpose-specific AI interactions.

You can also control whether each tool runs automatically or requires manual approval before execution. For a full explanation of how tool availability and approval modes are resolved, see tool availability and approval.

Permission-Based Tool Filtering

Your regex patterns select candidate tools, but any tool outside your permission scope is always excluded. This means:

  • If you request an action to create a Jira ticket but this action is not available to you as a user, it won't be available to Port AI.
  • Members trying to use builder tools like upsert_blueprint will not have access to these tools through Port AI if they lack the necessary permissions.
  • Tool availability is determined by the intersection of your regex selection AND your user permissions.

Port AI respects your individual user permissions and cannot access tools or perform actions that you don't have permission to use.

How Tool Selection Works

Include a tools parameter in your API request with an array of regex patterns. Port AI will only use tools whose names match at least one of these patterns.

Basic format:

{
  "prompt": "Your question or request",
  "tools": ["regex_pattern_1", "regex_pattern_2"]
}

Port tools vs. MCP connector tools

When you attach MCP connectors, patterns in tools fall into two groups:

  • Connector patterns: to enable tools from a connector, an entry in tools must begin with that connector's identifier in Port, followed by an underscore, then the rest of your regex (for example notion_.* for the notion connector). Nothing may precede that prefix: ^notion_.* is not a connector pattern and will not enable connector tools.
  • Port patterns: every other entry applies only to Port MCP tools (catalog queries, run_* actions, and the rest of the built-in tool surface), including broad regex such as .* or ^run_.*. These never enable connector tools on their own, even when the regex text could match a connector tool name.

Add an explicit pattern that begins with {identifier}_ whenever you need connector tools.
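The rule above can be expressed as a small helper: a pattern enables connector tools only if the pattern string literally begins with `{identifier}_`; everything else applies to Port tools only. A sketch (the function name is illustrative, not part of the API):

```javascript
// Sketch: split a tools array into connector patterns and Port patterns.
// A connector pattern must literally start with "<identifier>_"; a leading
// ^ (or any other character) before that prefix disqualifies it.
function classifyPatterns(tools, connectorIdentifiers) {
  const connector = [];
  const port = [];
  for (const pattern of tools) {
    const isConnector = connectorIdentifiers.some((id) =>
      pattern.startsWith(`${id}_`),
    );
    (isConnector ? connector : port).push(pattern);
  }
  return { connector, port };
}

const { connector, port } = classifyPatterns(
  ["notion_.*", "^notion_.*", ".*", "^run_.*"],
  ["notion"],
);
// connector → ["notion_.*"]; port → ["^notion_.*", ".*", "^run_.*"]
```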

Common Tool Selection Patterns

Read-only Operations

Perfect for monitoring dashboards and reporting systems where no modifications should be made.

["^(list|search|track|describe)_.*"]

What this matches:

  • list_entities, list_blueprints, list_scorecards.
  • list_actions, list_integrations.
  • describe_user_details.
  • search_port_knowledge_sources.

Action Execution Only

Allows only action execution tools while blocking data query operations.

["^run_.*"]

What this matches:

  • run_action (the underlying tool that executes all self-service actions).
  • run_create_service, run_deploy_to_production.
  • run_github_create_issue, run_jira_create_ticket.
  • run_slack_notify_team.

run_action must be matched

Self-service actions are executed through the internal run_action tool. Your tools regex must match run_action — either explicitly or via a pattern like "^run_.*". If you only list specific action identifiers (e.g. ["run_deploy_to_production"]) without a pattern that also matches run_action, the agent will not be able to execute those actions.

Specific Integration Actions

Target specific third-party service integrations.

["run_.*github.*", "run_.*jira.*", "run_.*zendesk.*"]

What this matches:

  • run_github_create_issue, run_github_merge_pr.
  • run_jira_create_ticket, run_jira_update_status.
  • run_zendesk_create_ticket.

Safe Entity Operations

Enables entity operations while preventing accidental deletions.

["(?!delete_)\\w+_entity$", "list_.*"]

What this matches:

  • list_entities, upsert_entity.
  • Excludes: delete_entity.

Documentation and Help Tools

Focus on documentation search and help functionality.

[".*docs.*", "search_.*", "describe_.*"]

What this matches:

  • search_port_knowledge_sources.
  • describe_user_details.

Blueprint and Scorecard Analysis

Focus on catalog structure and quality metrics without action execution.

[".*blueprint.*", ".*scorecard.*", "^list_.*"]

What this matches:

  • list_blueprints, upsert_blueprint.
  • list_scorecards, upsert_scorecard.
  • All list operations.

MCP connector tools

Tools from MCP connectors use names that start with the connector identifier in Port, followed by an underscore. A tools entry applies to connector tools only if the pattern string itself begins with that {identifier}_ text (for example notion_.*). A leading regex anchor breaks that rule, so ^notion_.* does not work for connector tools. Patterns meant for Port tools (list_*, run_*, .*, ^run_.*, and so on) never unlock connector tools by themselves; see Port tools vs. MCP connector tools above.

["notion_.*"]

What this matches:

  • Any published tool for that connector, such as notion_notion-search.

Use the same identifier you pass in mcpServers[].identifier in place of notion.

Interactive Tool Matcher

Test your regex patterns to see which MCP tools would be available to Port AI. Enter your patterns in JSON array format (e.g., ["^(list|get)_.*", "run_.*github.*"]) and see the matching tools in real-time.

Enter an array of regex patterns in JSON format. Patterns match from the beginning of tool names (a leading ^ is added automatically).

The built-in Port MCP tools (31 in total):

list_blueprints, upsert_blueprint, delete_blueprint, trigger_auto_discovery, list_entities, simulate_blueprint_permissions, upsert_entity, delete_entity, list_scorecards, upsert_scorecard, delete_scorecard, list_workflows, trigger_workflow_run, get_workflow_run, upsert_workflow, delete_workflow, list_actions, upsert_action, delete_action, track_action_run, run_action, get_action_permissions, update_action_permissions, list_integrations, test_integration_mapping, get_integration_sync_metrics, get_integration_event_logs, get_integration_kinds_with_examples, search_port_knowledge_sources, describe_user_details, load_skill
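The matching behavior can also be reproduced locally. A minimal sketch, assuming the implicit start-of-name anchoring described above:

```javascript
// Sketch: which tool names do a set of patterns enable?
// Patterns are implicitly anchored at the start of the tool name
// (a leading ^ is added if absent).
function matchTools(patterns, toolNames) {
  const regexes = patterns.map(
    (p) => new RegExp(p.startsWith("^") ? p : `^${p}`),
  );
  return toolNames.filter((name) => regexes.some((re) => re.test(name)));
}

const tools = ["list_entities", "upsert_entity", "delete_entity", "run_action"];
const matched = matchTools(["^(list|search|describe)_.*"], tools);
// matched → ["list_entities"]
```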

Action Tools Note

Action tools (starting with run_*) depend on your Port configuration. The examples shown represent common action patterns, but your actual available actions may differ based on the self-service actions configured in your Port instance.

Best Practices

Security and Control
  • Principle of least privilege: Only include tools necessary for the specific use case.
  • Test patterns: Use the interactive matcher above to verify your regex patterns.
  • Automated systems: Use highly restrictive patterns for automated workflows.
  • User-facing interfaces: Consider broader patterns for interactive use cases.

Tool Approval in API Requests

In addition to controlling which tools are available, you can control whether each tool requires manual approval or runs automatically. There are two ways to manage this:

Per-Invocation Overrides

Pass the toolApprovalOverrides field in your request body. Each key is a tool name (regex pattern), and each value is either approval or automatic:

{
  "userPrompt": "Deploy service X to production",
  "tools": [".*"],
  "toolApprovalOverrides": {
    "run_action": "approval",
    "list_.*": "automatic"
  }
}

Per-invocation overrides take the highest priority and apply only to that single request.

Persistent Preferences

Use the tool approval preferences API to save default approval modes for your user. These preferences apply to all future invocations unless overridden by toolApprovalOverrides in the request body.

# Get current preferences
curl 'https://api.port.io/v1/ai/tool-approval-preferences' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>'

# Update preferences
curl -X PUT 'https://api.port.io/v1/ai/tool-approval-preferences' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "toolOverrides": {
    "run_action": "approval",
    "list_entities": "automatic"
  },
  "disabledTools": ["delete_entity"]
}'

You can also disable specific tools through disabledTools in the preferences API. Disabled tools are excluded from all invocations unless the request includes an explicit tools array, which takes full control of tool availability.
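The resolution priority described in this section (per-invocation overrides, then saved preferences, then a default) can be sketched as follows. This is an illustration rather than the exact server-side algorithm: the "automatic" fallback and the anchored regex-key matching are assumptions.

```javascript
// Sketch of the stated priority: toolApprovalOverrides in the request win,
// then saved user preferences, then a fallback. Keys are regex patterns
// matched against the tool name, anchored at the start (assumption).
function resolveApprovalMode(toolName, requestOverrides, savedPreferences, fallback = "automatic") {
  const lookup = (overrides) => {
    for (const [pattern, mode] of Object.entries(overrides ?? {})) {
      if (new RegExp(pattern.startsWith("^") ? pattern : `^${pattern}`).test(toolName)) {
        return mode;
      }
    }
    return undefined;
  };
  return lookup(requestOverrides) ?? lookup(savedPreferences) ?? fallback;
}

const mode = resolveApprovalMode(
  "run_action",
  { "run_action": "approval", "list_.*": "automatic" }, // per-invocation
  { "run_action": "automatic" },                        // saved preferences
);
// mode → "approval" (the per-invocation override wins)
```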

For the complete approval resolution order and details, see tool availability and approval.

MCP Servers in API Requests

When using the /v1/ai/invoke endpoint, you can attach up to five configured MCP connectors per request. This allows Port AI to use tools from those servers in addition to your Port MCP tools, filtered by your tools patterns.

AI agent invocations

The mcpServers parameter applies to general-purpose AI interactions (/v1/ai/invoke) only. It is not part of the invoke an agent request body.

Prerequisites

  • Your organization has MCP connectors set up (admins add servers under Data sources and publish allowed tools).
  • The API token represents a user who is allowed to use those connectors. For connectors that use per-user OAuth, authenticate that user in Port before relying on connector tools (for example, from MCP Servers in the avatar menu or from the Port AI chat + menu) so that OAuth tokens are available for tool calls.

Limitations

When the API is invoked by an org user (for example, from an automation trigger), MCP servers that use OAuth authentication cannot be used. OAuth requires a real user session to complete the authorization flow, which org users do not have.

Only MCP servers configured with token-based authentication are supported in this context.

Request body

Add an mcpServers array. Each item must include the connector identifier in Port (the _mcp_server entity identifier), for example the value you see on the connector in the catalog or in Data sources.

"mcpServers": [
  { "identifier": "notion" }
]

Your existing tools array still controls which tool names may run. Port-side patterns (any pattern string that does not begin with {identifier}_, including .* and ^notion_.*) affect only Port MCP tools; when the identifier is notion, use notion_.* (not ^notion_.*) to enable its tools. See MCP connector tools pattern and Port tools vs. MCP connector tools. Native Port tools and MCP connector tools are evaluated together against these patterns.

Example

curl 'https://api.port.io/v1/ai/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "userPrompt": "Search our Notion space for the onboarding checklist and summarize the steps.",
  "tools": ["notion_notion-search", "^list_.*", "^run_.*"],
  "mcpServers": [
    { "identifier": "notion" }
  ]
}'

For all request fields (including userPrompt, tools, and optional mcpServers), see the General-purpose AI interactions API reference.

Integration Patterns

Direct API Calls

Integrate Port AI directly into your applications using HTTP requests:

# Basic Port AI request
curl 'https://api.port.io/v1/ai/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "prompt": "What services are failing health checks?",
  "tools": ["^(list|search|describe)_.*"],
  "labels": {
    "source": "monitoring_system",
    "check_type": "health_analysis"
  }
}'

# AI Agent request
curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke' \
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \
-H 'Content-Type: application/json' \
--data-raw '{
  "prompt": "Analyze the health of our production services",
  "labels": {
    "source": "monitoring_dashboard",
    "environment": "production"
  }
}'

Application Integration Example

// Example: Monitoring dashboard integration
async function checkServiceHealth(serviceName) {
const response = await fetch("/api/port-ai/check-service", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
prompt: `Analyze the health of service ${serviceName}`,
tools: ["^(list|search|describe)_.*"],
labels: {
source: "monitoring_dashboard",
service: serviceName,
check_type: "health_analysis",
},
}),
});

// Process streaming response
const reader = response.body.getReader();
// Handle SSE parsing...
}

Error Handling

Common error scenarios and handling strategies:

Rate Limit Exceeded

{
  "error": "Rate limit exceeded",
  "type": "RATE_LIMIT_ERROR",
  "retryAfter": 3600
}

Quota Exceeded

{
  "error": "Monthly quota exceeded",
  "type": "QUOTA_ERROR",
  "resetDate": "2025-10-01T00:00:00Z"
}

Implementation Example: Error Handling

async function handlePortAIRequest(prompt) {
  try {
    const response = await invokePortAI(prompt);
    return response;
  } catch (error) {
    if (error.type === "RATE_LIMIT_ERROR") {
      // Wait and retry
      await new Promise((resolve) =>
        setTimeout(resolve, error.retryAfter * 1000),
      );
      return handlePortAIRequest(prompt);
    } else if (error.type === "QUOTA_ERROR") {
      // Queue for next month or upgrade plan
      console.log("Monthly quota exceeded, queuing request");
      return null;
    }
    throw error;
  }
}

Security Considerations

When integrating Port AI via API:

  • Authentication: Always use secure API token storage and rotation.
  • Data privacy: Port AI respects your organization's RBAC and data access policies.
  • Audit trail: All API interactions are logged and trackable.
  • Rate limiting: Implement client-side rate limiting to avoid hitting API limits.

For comprehensive security information, see AI Security and Data Controls.
