Observability
Instrument your code with OpenTelemetry to capture traces, logs, and metrics for every LLM call, agent step, and tool use.
Overview
BeeAI is IBM’s open-source agent framework for building production-ready multi-agent systems. This integration uses the openinference-instrumentation-beeai library to export traces to Orq.ai via OpenTelemetry.
Prerequisites
An Orq.ai account and API Key
Python 3.10+
An OpenAI API key (OPENAI_API_KEY) — the examples use OpenAI models
Installation
pip install beeai-framework \
opentelemetry-api \
opentelemetry-sdk \
"opentelemetry-exporter-otlp-proto-http" \
openinference-instrumentation-beeai
Configuring Orq.ai Observability
import os

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.beeai import BeeAIInstrumentor

# Export spans to Orq.ai over OTLP/HTTP, authenticated with your API key.
exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)

tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Instrument BeeAI so every agent step, LLM call, and tool use is traced.
BeeAIInstrumentor().instrument(tracer_provider=tracer_provider)
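Note that `os.environ['ORQ_API_KEY']` raises a bare `KeyError` when the variable is unset. A small guard can fail with a clearer message instead; this is an illustrative helper (not part of any SDK) you could pass as the `headers` argument above:

```python
import os

def orq_auth_headers() -> dict:
    """Build the Authorization header for the Orq.ai OTLP endpoint."""
    api_key = os.environ.get("ORQ_API_KEY")
    if not api_key:
        # Fail fast with an actionable message instead of a bare KeyError.
        raise RuntimeError(
            "ORQ_API_KEY is not set; create an API key in Orq.ai and "
            "export it before starting the application."
        )
    return {"Authorization": f"Bearer {api_key}"}
```

With this in place, the exporter setup becomes `OTLPSpanExporter(endpoint=..., headers=orq_auth_headers())`.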
Basic Example
import os
import asyncio

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.beeai import BeeAIInstrumentor

# Set up instrumentation before creating any agents.
exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
BeeAIInstrumentor().instrument(tracer_provider=tracer_provider)

from beeai_framework.agents.react import ReActAgent
from beeai_framework.adapters.openai import OpenAIChatModel
from beeai_framework.memory import UnconstrainedMemory

async def main():
    # A minimal ReAct agent with no tools; each step is exported to Orq.ai.
    agent = ReActAgent(
        llm=OpenAIChatModel("gpt-4o-mini"),
        memory=UnconstrainedMemory(),
        tools=[],
    )
    result = await agent.run("What is 2 + 2?")
    print(result.output.text)

asyncio.run(main())
Evaluations & Experiments
Once your agents are running, use Evaluatorq to score outputs across a dataset and Experiments to compare configurations side-by-side.
Run Evaluations with Evaluatorq: run parallel evaluations across your agents and compare results.
Run Experiments via the API: compare agent configurations and view results in the AI Studio.