Everything you need to integrate CSAT tracking, monitor escalations, and prove ROI from your Customer Support AI chatbot
pip install agentmonitor

Sign up at Customer Support AI Monitor and get your API key from the dashboard.
export AGENT_MONITOR_API_KEY="your-api-key-here"

Add the @monitor.track() decorator to any function:
from agentmonitor import monitor
import openai

@monitor.track()
def chat_with_ai(user_message: str):
    """AI chat function with automatic monitoring"""
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content

# That's it! Your function is now monitored
result = chat_with_ai("Hello, how are you?")

Your AI calls are now being monitored. Check your dashboard to see real-time metrics, costs, and performance data.
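Since this quickstart targets a customer support chatbot, you will probably want the event named and tagged so it is easy to find in the dashboard. A minimal sketch using the documented name and tags options; the tag keys here are illustrative, not required fields:

from agentmonitor import monitor
import openai

# Same pattern as above, now with a descriptive event name and tags
# ("channel" and "team" are example tag keys, not a fixed schema)
@monitor.track(name="support_chatbot_reply", tags={"channel": "web", "team": "support"})
def answer_ticket(user_message: str):
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content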
# Required
AGENT_MONITOR_API_KEY=your-api-key-here
# Optional
AGENT_MONITOR_ENDPOINT=https://api.agentmonitor.ai # Custom endpoint
AGENT_MONITOR_ENVIRONMENT=production # Environment tag
AGENT_MONITOR_LOG_LEVEL=INFO # Logging level
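If you keep these variables in a .env file, load them into the process environment before the SDK is configured. A minimal sketch using python-dotenv, which is an optional helper and not an agentmonitor requirement:

import os
from dotenv import load_dotenv  # pip install python-dotenv

# Read AGENT_MONITOR_* variables from a local .env file into the
# process environment before the SDK is configured.
load_dotenv()
assert os.getenv("AGENT_MONITOR_API_KEY"), "AGENT_MONITOR_API_KEY is not set"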
from agentmonitor import monitor

# Configure monitoring settings
monitor.configure(
    api_key="your-api-key",
    environment="production",
    tags={"service": "chatbot", "version": "2.0"},
    sample_rate=1.0,  # Monitor 100% of calls
    timeout=5000,     # 5 second timeout
    batch_size=100,   # Batch events for efficiency
)
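A common pattern is to call configure() once at process startup, before any tracked function runs. A hedged sketch; the Flask app factory here is just an example host application, not something agentmonitor requires:

import os
from agentmonitor import monitor
from flask import Flask  # example host app only

def create_app():
    # Configure monitoring once, before any @monitor.track() function is called
    monitor.configure(
        api_key=os.environ["AGENT_MONITOR_API_KEY"],
        environment=os.getenv("AGENT_MONITOR_ENVIRONMENT", "development"),
        tags={"service": "chatbot"},
    )
    return Flask(__name__)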
Decorator to automatically monitor function calls.

@monitor.track(
    name: str = None,             # Custom name for the event
    tags: dict = None,            # Additional tags
    capture_input: bool = True,   # Capture function inputs
    capture_output: bool = True,  # Capture function outputs
    capture_errors: bool = True   # Capture exceptions
)
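These options can be combined on a single function. A short sketch; the function body itself is just a placeholder:

# Named, tagged, and with raw inputs excluded from capture
@monitor.track(
    name="summarize_ticket",
    tags={"feature": "summarization"},
    capture_input=False,   # don't store the raw ticket text
    capture_output=True,
    capture_errors=True,
)
def summarize_ticket(ticket_text: str) -> str:
    return ticket_text[:200]  # placeholder summarization logic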
Manually track custom events.

from datetime import datetime

monitor.track_event(
    name="custom_event",
    properties={
        "user_id": "123",
        "action": "button_click",
        "metadata": {"page": "dashboard"}
    },
    timestamp=datetime.now()
)
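Custom events are also a natural fit for recording CSAT signals from your support chatbot. A hedged sketch reusing the same track_event call; the property names are illustrative, not a fixed schema:

from datetime import datetime
from agentmonitor import monitor

# Record a post-conversation CSAT score as a custom event
monitor.track_event(
    name="csat_submitted",
    properties={
        "conversation_id": "conv-4821",  # example identifier
        "score": 4,                      # 1-5 CSAT rating
        "escalated_to_human": False,
    },
    timestamp=datetime.now()
)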
Track code blocks with context managers.

with monitor.track_context("data_processing"):
    # Your code here
    process_data()
    transform_results()
    # All operations are tracked together
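Contexts and decorated functions can also be combined to group several AI calls under one logical operation. A hedged sketch, assuming tracked calls made inside a context are associated with it; the functions are placeholders:

from agentmonitor import monitor

@monitor.track(name="classify_ticket")
def classify_ticket(text: str) -> str:
    return "billing"  # placeholder classifier

@monitor.track(name="draft_reply")
def draft_reply(text: str) -> str:
    return "Thanks for reaching out..."  # placeholder reply

# Group the whole ticket-handling flow under a single context
with monitor.track_context("handle_ticket"):
    category = classify_ticket("I was charged twice")
    reply = draft_reply("I was charged twice")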
from agentmonitor import monitor
from openai import OpenAI

client = OpenAI()

@monitor.track(name="openai_completion")
def generate_completion(prompt: str, model: str = "gpt-4"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=500
    )
    return response.choices[0].message.content

# Use it
result = generate_completion("Write a haiku about AI")
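Failed calls are worth monitoring too: with capture_errors enabled (the default), exceptions raised inside a tracked function are recorded. A hedged sketch, assuming the decorator re-raises the exception after recording it:

import openai
from openai import OpenAI
from agentmonitor import monitor

client = OpenAI()

@monitor.track(name="openai_completion_with_errors", capture_errors=True)
def risky_completion(prompt: str):
    # Any exception raised here (rate limits, timeouts, etc.) is captured
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

try:
    risky_completion("Summarize our refund policy")
except openai.APIError:
    # The exception has already been recorded; handle it as you normally would
    print("OpenAI call failed")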
from agentmonitor import monitor
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Example prompt template; define one that matches your use case
prompt_template = PromptTemplate(
    input_variables=["input"],
    template="You are a data analyst. {input}"
)

@monitor.track(name="langchain_agent")
def run_agent_chain(user_input: str):
    llm = OpenAI(temperature=0.9)
    chain = LLMChain(llm=llm, prompt=prompt_template)
    return chain.run(user_input)

# Automatically tracks the entire chain execution
result = run_agent_chain("Analyze this data")
from agentmonitor import monitor
import anthropic

client = anthropic.Anthropic()

@monitor.track(name="claude_completion")
def chat_with_claude(message: str):
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": message}]
    )
    return response.content[0].text

result = chat_with_claude("Explain quantum computing")
from agentmonitor import monitor
from openai import AsyncOpenAI
import asyncio

async_openai_client = AsyncOpenAI()

@monitor.track(name="async_ai_call")
async def async_ai_function(prompt: str):
    response = await async_openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Works seamlessly with async/await (await this from an async context)
result = await async_ai_function("Generate ideas")
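In a regular script, outside a notebook or an existing event loop, wrap the coroutine above with asyncio.run:

import asyncio

# Run the tracked coroutine from synchronous code
result = asyncio.run(async_ai_function("Generate ideas"))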
@monitor.track(name="customer_support_chatbot")
def handle_support_query(query: str): ...Use tags to categorize and filter your AI calls in the dashboard.
@monitor.track(tags={"customer_id": "123", "tier": "premium"})
def personalized_ai_response(prompt: str): ...

Control what data is captured to ensure privacy and compliance.
@monitor.track(capture_input=False, capture_output=False)
def process_sensitive_data(user_data: dict): ...
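If you still need some visibility into these calls, you can record a redacted custom event alongside the disabled capture. A sketch using track_event with non-sensitive metadata only; the property names are illustrative:

from datetime import datetime
from agentmonitor import monitor

@monitor.track(name="process_sensitive_data", capture_input=False, capture_output=False)
def process_sensitive_data(user_data: dict):
    # ... handle the sensitive payload; it is never captured ...
    monitor.track_event(
        name="sensitive_data_processed",
        properties={"field_count": len(user_data)},  # metadata only, no payload
        timestamp=datetime.now()
    )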
Use environment variables to control monitoring in different environments.

import os

monitor.configure(
    environment=os.getenv("ENV", "development"),
    sample_rate=1.0 if os.getenv("ENV") == "production" else 0.1
)

All data is encrypted in transit (TLS 1.3) and at rest (AES-256). We never store your API keys for third-party AI services.
We are SOC 2 Type II certified, GDPR compliant, and HIPAA ready. Enterprise plans include a Business Associate Agreement (BAA).
Configure data retention policies from 7 days to unlimited. Delete data at any time through the dashboard or API.
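As an illustration only, a deletion request over the API might look like the sketch below; the endpoint path and query parameters here are hypothetical, so consult the API reference for the actual route:

import os
import requests

# Hypothetical endpoint and filter -- check the API reference for the real route
response = requests.delete(
    "https://api.agentmonitor.ai/v1/events",  # base URL from the configuration section above
    headers={"Authorization": f"Bearer {os.environ['AGENT_MONITOR_API_KEY']}"},
    params={"before": "2024-01-01"},          # hypothetical date filter
)
response.raise_for_status()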
Our team is here to help you get the most out of Customer Support AI Monitor.