See and control your entire AI system
Refario gives you full visibility into runs, costs, tools, and failures across your AI workflows.
- Trace every request end-to-end
- Understand cost across models and workflows
- Catch failures before they reach users
Explore a real AI system in motion
Click any step to inspect runs, tools, and decisions
Tool execution
The connected tool completed on its first attempt and returned structured data to the agent.
From system visibility to clear outcomes
Translate what you see in the system map into operational value for engineering, product, and finance.
Find exactly where requests fail
Track spend across models and workflows
Catch failures and guardrail violations
See how agents and tools interact
Depth without complexity
Three focused layers, each tied back to the same system map.
Every request, fully visible
Timeline view of spans, tool calls, and outputs connected to one request path.
Know exactly where your spend goes
See spend split by model and workflow without leaving the operating context.
Catch issues before users do
Failures and violation signals surface early with clear ownership and status.
Get started in minutes
Remove friction and go from instrumentation to operational insight quickly.
1. Send your AI runs via SDK
2. Instantly see traces, costs, and tools
3. Optimize performance and reliability
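As a rough sketch of what step 1 could look like, the snippet below posts one run record over plain HTTP. Every name here is an assumption for illustration: the endpoint URL, the payload shape, the `REFARIO_API_KEY` variable, and the helper functions are hypothetical, not the actual Refario SDK.

```python
# Hypothetical sketch of run ingestion. The endpoint, payload fields, and
# env-var name are illustrative assumptions, not the real Refario SDK/API.
import json
import os
import time
import urllib.request

INGEST_URL = "https://api.example.com/v1/runs"  # placeholder endpoint


def build_run_payload(workflow, model, input_tokens, output_tokens,
                      cost_usd, tools=()):
    """Assemble one run record in a plausible ingestion shape."""
    return {
        "workflow": workflow,
        "model": model,
        "usage": {"input_tokens": input_tokens,
                  "output_tokens": output_tokens},
        "cost_usd": cost_usd,
        "tools": list(tools),
        "timestamp": time.time(),
    }


def send_run(payload, api_key):
    """POST a run record; returns the HTTP status code."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Only send when a key is configured, so the sketch is safe to run as-is.
if __name__ == "__main__" and os.environ.get("REFARIO_API_KEY"):
    run = build_run_payload("support-agent", "gpt-5.2-codex",
                            input_tokens=1200, output_tokens=300,
                            cost_usd=0.004, tools=["search_kb"])
    send_run(run, os.environ["REFARIO_API_KEY"])
```

Once records like this land, per-model cost and tool-call views follow directly from the `model`, `usage`, and `tools` fields.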
Built for real AI systems
Designed for teams running production AI, not isolated demos.
Works across LLM apps, agents, and MCP tools
Combines observability, cost, and operations
Built for engineering, product, and finance teams
Start free. Get full visibility from day one.
Low-friction entry with a clear free plan and no credit card required.
- 10k runs / month
- 1 project
- 1 user
- 7-day retention
- Runs, traces, dashboards, workflow views, and SDK ingestion
- 100k runs / month
- 3 projects
- 2 users
- 30-day retention
- Anomaly detection, guardrails, alerts, and workflow analytics
- 1M runs / month
- 10 projects
- 10 users
- 90-day retention
- Finance dashboard, cost attribution, scheduled reports, and Slack/webhook alerts
- Higher negotiated limits
- SSO, audit logs, and SLA
- Custom retention
- Dedicated onboarding
- Custom deployment options
- Commercial terms for larger rollouts
Answers to common questions
Current capabilities teams ask about most.
Provider options in-app include OpenAI (API key), Anthropic Claude (API key), Google Gemini (OAuth or API key), Slack AI (OAuth, API key, or MCP), Notion AI (OAuth or MCP), Cursor (API key), and custom MCP servers. Live usage sync is currently active for OpenAI, Anthropic, Cursor, and Slack.
Yes. Model breakdown views show provider/model rows with calls, error rate, average latency, token totals, and cost. Typical examples include GPT-5.3-Codex, GPT-5.2-Codex, claude-sonnet-4-6, and gemini-3.1-pro-preview.
Yes. The current web app includes a Tools page for MCP registry, tool-call volume, success rate, and latency monitoring, with alerts for newly detected tools and reliability drops, plus a Guardrails page for rule coverage, violations logs, and timeseries.
Yes. Budget Guardrails support shared thresholds by scope, Spend Forecast projects monthly cost, and Scheduled Reports can be delivered by email or webhook.
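One way to picture how a spend forecast and budget threshold interact is a simple month-to-date extrapolation. The function below is an illustrative assumption about how such a projection could work, not Refario's actual forecasting model.

```python
# Illustrative month-to-date linear projection; an assumption about how a
# spend forecast could behave, not Refario's actual algorithm.
def project_monthly_spend(spend_so_far, day_of_month, days_in_month):
    """Extrapolate month-to-date spend to a full-month estimate."""
    if day_of_month <= 0:
        raise ValueError("day_of_month must be >= 1")
    daily_rate = spend_so_far / day_of_month
    return daily_rate * days_in_month


def over_budget(projected, budget_threshold):
    """Flag when a projection crosses a shared budget threshold."""
    return projected > budget_threshold
```

For example, $120 spent by day 10 of a 30-day month projects to $360 for the month, which would trip a $300 threshold.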
Choose Refario when the hard production question is no longer just what happened inside one model call, but which workflow, tool, or guardrail changed outcomes and what that shift cost.
Start understanding your AI system today
See runs, costs, and performance in minutes.
