Integrations

Connect Refario to the stack you already run.

Bring provider telemetry, SDK events, MCP tools, and internal service data into one operational layer without rebuilding your runtime.

Connected data path
Sources in, shared operational view out.

Provider telemetry, SDK events, and MCP tool activity land in one runtime layer that teams can actually operate from.

Sources
  • OpenAI and Anthropic
  • LangChain and app SDKs
  • MCP tools and internal APIs

Refario: runs, spans, cost, tools, and guardrails in one graph
  • Run timeline
  • Provider attribution
  • Tool diagnostics
  • Incident context

Teams
  • Engineering root cause
  • Finance spend review
  • Ops policy response
Integration model

Capture the runtime edges where AI behavior actually happens.

Providers, SDKs, MCP tools, and internal services all contribute to one execution path. Refario is built to correlate those signals into a single operational view.

Ingestion surfaces
Refario connects providers, application events, tools, and policies into the same run graph.
01 · Provider telemetry
Bring model usage, token counts, errors, latency, and cost signals from providers into the same timeline.
02 · SDK and workflow events
Capture run lifecycle events, environment tags, customer metadata, and workflow identifiers from your application.
03 · MCP tools and internal APIs
Add tool execution, retries, transport health, and downstream service behavior without splitting the trace.
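The three surfaces above can be sketched as one run record that accepts spans from any source. This is a minimal illustration in Python; the class names, field names, and event kinds are hypothetical, not Refario's actual SDK.

```python
from dataclasses import dataclass, field

# Hypothetical event shapes for illustration only.
@dataclass
class SpanEvent:
    kind: str                 # e.g. "model_call" or "tool_call"
    name: str                 # model or tool name
    latency_ms: float
    metadata: dict = field(default_factory=dict)

@dataclass
class Run:
    run_id: str
    workflow: str
    spans: list = field(default_factory=list)

    def record(self, event: SpanEvent) -> None:
        """Append a span so provider and tool activity share one timeline."""
        self.spans.append(event)

run = Run(run_id="run_123", workflow="support-triage")
# Provider telemetry and MCP tool activity land on the same run.
run.record(SpanEvent("model_call", "gpt-4o", 812.5,
                     {"provider": "openai", "tokens": 1042}))
run.record(SpanEvent("tool_call", "search_tickets", 95.0,
                     {"transport": "mcp", "retries": 0}))
```

The point of the shared record is that a model call and the tool call it triggered stay in one structure instead of two separate logs.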
Incremental adoption
Start with the providers and workflows you already run. Expand coverage without rewriting the architecture around your agents.
One operational schema
Runs, spans, models, tools, guardrails, and cost attribution land in one system instead of separate diagnostic silos.
Outputs teams can use
Engineering, finance, and ops consume the same connected data through dashboards, traces, alerts, and reports.
Data flow

Send runtime signals in. Get operational answers back.

The point is not collecting more logs. It is turning runtime events into decisions about reliability, spend, and control.

What you send
Structured runtime events from the systems already in the path.

Refario is most useful when it receives the same context your runtime already depends on: run identity, workflow identity, model activity, tool execution, and policy events.

  • Run and span lifecycle events
  • Model provider, token, latency, and error metadata
  • MCP tool calls, retries, and transport diagnostics
  • Project, environment, customer, and workflow tags
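A structured event covering the categories above might look like the following. The wire format and every field name here are illustrative assumptions, not Refario's published schema.

```python
import json

# Hypothetical event payload: run identity, model metadata, and tags together.
event = {
    "type": "span.finished",
    "run_id": "run_8f2a",
    "span_id": "span_01",
    "workflow": "invoice-extraction",
    "model": {
        "provider": "anthropic",     # model provider metadata
        "input_tokens": 930,
        "output_tokens": 210,
        "latency_ms": 1140,
        "error": None,
    },
    "tags": {                        # project/environment/customer tags
        "project": "billing",
        "environment": "production",
        "customer": "acme-co",
    },
}
payload = json.dumps(event)
```

Because the tags travel with the span itself, later cost and reliability views can be sliced by project, environment, or customer without a join against a second system.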
What teams get back
A shared command surface instead of fragmented telemetry.

Once correlated, the data becomes usable across multiple roles without losing the execution details that explain what actually happened.

  • Run timelines and workflow-level diagnostics
  • Provider and model cost attribution
  • Guardrail monitoring and incident context
  • Dashboards, alerts, and scheduled reporting
Supported patterns

Designed for modern AI infrastructure, not just one SDK path.

Use Refario with direct model providers, orchestration frameworks, MCP registries, and internal platform services in the same deployment.

OpenAI · Anthropic · LangChain · MCP Tools · Internal APIs · Custom SDKs · Slack · Notion
Providers
Track model-level usage, latency, fallback behavior, and spend across the providers your workflows already call.
MCP tools
Observe tool success, retry rate, latency, and transport failures with the affected workflow still attached.
Custom SDK events
Ingest project, environment, customer, and release metadata so traces remain useful in real operating reviews.
Finance-ready cost data
Convert tokens and provider usage into cost views that map directly to workflows, teams, and budgets.
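The token-to-cost conversion is simple arithmetic once per-model rates are known. A minimal sketch, with an illustrative price table; real provider rates change and should live in configuration, not code.

```python
# Illustrative USD rates per 1,000 tokens: (input, output). Not authoritative.
PRICE_PER_1K = {
    "example-model": (0.0025, 0.0100),
}

def span_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Convert token counts into dollar cost for workflow attribution."""
    in_rate, out_rate = PRICE_PER_1K[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

# A workflow that consumed 20,000 input and 4,000 output tokens:
# 20 * 0.0025 + 4 * 0.0100 = 0.05 + 0.04 = 0.09 USD
cost = span_cost("example-model", 20_000, 4_000)
```

Summing `span_cost` over spans grouped by workflow or team tag yields the budget-level views the text describes.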
Provider integrations

Connect OpenAI and Anthropic model telemetry for consistent run, latency, and spend visibility.

  • Model call metadata
  • Token and cost correlation
  • Provider-level error tracking
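One common way to capture the metadata above is to wrap the provider call so latency and errors are recorded on every invocation. This is a generic decorator sketch, assuming an in-memory sink; the names `track_provider` and `captured` are hypothetical stand-ins, not part of any real SDK.

```python
import time
from functools import wraps

captured = []  # stand-in sink; a real integration would ship events onward

def track_provider(provider: str, model: str):
    """Wrap a model call so latency and errors land in the same timeline."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            error = None
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                error = type(exc).__name__  # provider-level error tracking
                raise
            finally:
                captured.append({
                    "provider": provider,
                    "model": model,
                    "latency_ms": (time.perf_counter() - start) * 1000,
                    "error": error,
                })
        return wrapper
    return decorator

@track_provider("openai", "gpt-4o")
def complete(prompt: str) -> str:
    return f"echo: {prompt}"  # placeholder for the real provider API call

complete("hello")
```

Because the wrapper records in `finally`, failed calls are attributed alongside successful ones, which is what makes provider-level error rates meaningful.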
MCP and tool integrations

Monitor MCP tools and custom actions with transport-level and operation-level diagnostics.

  • Tool success and latency
  • Transport and retry visibility
  • Workflow impact context
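Retry and transport visibility can be sketched as a retry loop that keeps a diagnostics record alongside the result. Everything here is illustrative: the helper name, the diagnostics fields, and the use of `ConnectionError` as a stand-in for a transport failure.

```python
def call_tool_with_retries(tool, max_attempts=3):
    """Run a tool, recording attempts and transport errors per call."""
    diagnostics = {"tool": tool.__name__, "attempts": 0, "transport_errors": []}
    for attempt in range(1, max_attempts + 1):
        diagnostics["attempts"] = attempt
        try:
            result = tool()
            diagnostics["ok"] = True
            return result, diagnostics
        except ConnectionError as exc:  # stand-in for a transport failure
            diagnostics["transport_errors"].append(str(exc))
    diagnostics["ok"] = False
    return None, diagnostics

# A tool that fails once on transport, then succeeds.
state = {"calls": 0}
def flaky_tool():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transport reset")
    return "ok"

result, diag = call_tool_with_retries(flaky_tool)
```

Returning the diagnostics with the result (rather than logging them elsewhere) is what keeps the affected workflow attached to the transport failure.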
SDK + internal APIs

Ingest custom run events from your own stack to correlate app behavior with AI execution.

  • Custom run metadata
  • Project and environment tagging
  • Unified analytics dashboards
Ready to operate production AI

Get end-to-end visibility across runs, spend, and guardrails.

Start free to connect your first project, then book a demo for rollout planning with your stack.