Documentation Index Fetch the complete documentation index at: https://docs.orq.ai/llms.txt
Use this file to discover all available pages before exploring further.
AI Router Route your LLM calls through the AI Router with a single base URL change. Zero vendor lock-in: always run on the best model at the lowest cost for your use case.
Observability Instrument your code with OpenTelemetry to capture traces, logs, and metrics for every LLM call, agent step, and tool use.
AI Router
Overview
Agno is a Python framework for building production-ready AI agents with structured workflows, tool integration, and multi-agent orchestration. By connecting Agno to Orq.ai’s AI Router, you get access to 300+ models for your agents with a single configuration change.
Key Benefits
Orq.ai’s AI Router enhances your Agno applications with:
Complete Observability: Track every agent interaction, tool use, and model call with detailed traces.
Built-in Reliability: Automatic fallbacks, retries, and load balancing for production resilience.
Cost Optimization: Real-time cost tracking and spend management across all your AI operations.
Multi-Provider Access: Access 300+ LLMs from 20+ providers through a single, unified integration.
Prerequisites
Before integrating Agno with Orq.ai, ensure you have:
An Orq.ai account and API Key
Python 3.10 or higher
Installation
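Install Agno before configuring it. A typical setup looks like the following (assuming the standard PyPI package names `agno` and `openai`; `OpenAILike` uses the OpenAI SDK under the hood):

```shell
pip install agno openai
```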
Configuration
Configure Agno to use Orq.ai’s AI Router via OpenAILike with a custom base_url:
```python
import os

from agno.models.openai.like import OpenAILike

llm = OpenAILike(
    id="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)
```
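Because the router speaks the OpenAI-compatible API, you can sanity-check the base URL from the shell before wiring it into Agno. This sketch assumes the router exposes the standard `/chat/completions` path under the base URL and that `ORQ_API_KEY` is set in your environment:

```shell
# Send a minimal chat completion request through the router
curl https://api.orq.ai/v3/router/chat/completions \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```

A JSON response with a `choices` array confirms the key and base URL are correct.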
Basic Agent Example
```python
import os

from agno.agent import Agent
from agno.models.openai.like import OpenAILike

llm = OpenAILike(
    id="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

agent = Agent(
    model=llm,
    instructions="You are a helpful assistant. Keep responses concise.",
    markdown=True,
)

agent.print_response("Explain machine learning in two sentences.")
```
Add tools to your agent for real-world tasks:
```python
import os

from agno.agent import Agent
from agno.models.openai.like import OpenAILike
from agno.tools.yfinance import YFinanceTools

llm = OpenAILike(
    id="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

agent = Agent(
    model=llm,
    tools=[YFinanceTools(stock_price=True)],
    instructions="Use tables to display data. Don't include any other text.",
    markdown=True,
)

agent.print_response("What is the stock price of Apple?", stream=True)
```
Define your own Python functions as tools:
```python
import os

from agno.agent import Agent
from agno.models.openai.like import OpenAILike


def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {city}: Sunny, 22°C."


def convert_currency(amount: float, from_currency: str, to_currency: str) -> str:
    """Convert an amount between currencies."""
    return f"{amount} {from_currency} = {amount * 1.1:.2f} {to_currency} (approximate)"


llm = OpenAILike(
    id="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

agent = Agent(
    model=llm,
    tools=[get_weather, convert_currency],
    instructions="Use the available tools to answer questions accurately.",
    markdown=True,
)

agent.print_response("What's the weather in Tokyo and convert 100 USD to EUR?")
```
Multi-Agent Team
Orchestrate multiple specialized agents:
```python
import os

from agno.agent import Agent
from agno.models.openai.like import OpenAILike
from agno.team import Team

llm = OpenAILike(
    id="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

researcher = Agent(
    name="Researcher",
    model=llm,
    instructions="You research topics and provide accurate, concise facts.",
    markdown=True,
)

writer = Agent(
    name="Writer",
    model=llm,
    instructions="You take research and turn it into clear, engaging content.",
    markdown=True,
)

team = Team(
    members=[researcher, writer],
    model=llm,
    instructions="Collaborate to produce well-researched, well-written content.",
    markdown=True,
)

team.print_response("Write a short paragraph about renewable energy benefits.")
```
Model Selection
With Orq.ai, you can use any supported model from 20+ providers:
```python
import os

from agno.models.openai.like import OpenAILike

# Use Claude
claude_llm = OpenAILike(
    id="anthropic/claude-sonnet-4-5-20250929",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Use Gemini
gemini_llm = OpenAILike(
    id="google/gemini-2.5-flash",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Use Groq
groq_llm = OpenAILike(
    id="groq/llama-3.3-70b-versatile",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)
```
Observability
Installation
```shell
pip install opentelemetry-api \
    opentelemetry-sdk \
    opentelemetry-exporter-otlp-proto-http \
    openinference-instrumentation-agno
```
Configuring Orq.ai Observability
Set up the tracer provider and instrument Agno before creating any agents:
```python
import os

from opentelemetry import trace as trace_api
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)

tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace_api.set_tracer_provider(tracer_provider)

from openinference.instrumentation.agno import AgnoInstrumentor

AgnoInstrumentor().instrument()
```
The OTEL setup and AgnoInstrumentor().instrument() call must happen before any agent instantiation.
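As an alternative to configuring the exporter in code, the standard OpenTelemetry environment variables can point a default-constructed OTLP/HTTP exporter at the same endpoint. This is a sketch based on the OTel SDK environment-variable specification; note that the space in `Bearer <key>` must be percent-encoded as `%20` in the headers variable:

```shell
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.orq.ai/v2/otel/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="Authorization=Bearer%20${ORQ_API_KEY}"
```

With these set, `OTLPSpanExporter()` can be constructed with no arguments and will pick up the endpoint and headers from the environment.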
Basic Example
```python
import os

from opentelemetry import trace as trace_api
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Set up tracing and instrument Agno before creating any agents.
exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace_api.set_tracer_provider(tracer_provider)

from openinference.instrumentation.agno import AgnoInstrumentor

AgnoInstrumentor().instrument()

from agno.agent import Agent
from agno.models.openai.like import OpenAILike

agent = Agent(
    name="orq-demo",
    model=OpenAILike(
        id="openai/gpt-4o-mini",
        api_key=os.getenv("ORQ_API_KEY"),
        base_url="https://api.orq.ai/v3/router",
    ),
    markdown=True,
)

agent.print_response("Summarize the latest AI news.")
```
Evaluations & Experiments
Once your agents are running, use Evaluatorq to score outputs across a dataset and Experiments to compare configurations side-by-side.
Run Evaluations with Evaluatorq: Run parallel evaluations across your agents and compare results.
Run Experiments via the API: Compare agent configurations and view results in the AI Studio.