# Integrations
Connect EvalGuard to your existing tools for notifications, issue tracking, CI/CD, and AI framework instrumentation.
## Notifications & Alerting

### Slack
Get notified in Slack when evals pass/fail, security scans complete, or monitoring alerts fire.
```json
{
  "type": "slack",
  "webhookUrl": "https://hooks.slack.com/services/T00.../B00.../xxx",
  "channel": "#ai-evals",
  "events": ["eval.completed", "scan.completed", "monitoring.alert"]
}
```

### Discord
Send eval and scan results to a Discord channel via webhook.
```json
{
  "type": "discord",
  "webhookUrl": "https://discord.com/api/webhooks/...",
  "events": ["eval.completed", "scan.completed"]
}
```

### Microsoft Teams
Post results to a Teams channel via an incoming webhook connector.
```json
{
  "type": "teams",
  "webhookUrl": "https://outlook.office.com/webhook/...",
  "events": ["eval.completed", "scan.failed"]
}
```

### PagerDuty
Trigger PagerDuty incidents when security scans fail or monitoring thresholds are breached.
```json
{
  "type": "pagerduty",
  "routingKey": "your-pagerduty-integration-key",
  "severity": "critical",
  "events": ["scan.failed", "monitoring.critical"]
}
```

## Issue Tracking
### Jira
Automatically create Jira issues for failed eval cases or security findings.
```json
{
  "type": "jira",
  "baseUrl": "https://your-org.atlassian.net",
  "email": "you@company.com",
  "apiToken": "your-jira-api-token",
  "projectKey": "AI",
  "issueType": "Bug",
  "events": ["eval.failed", "scan.finding"]
}
```

### Linear
Create Linear issues from eval failures and security findings.
```json
{
  "type": "linear",
  "apiKey": "lin_api_...",
  "teamId": "your-team-id",
  "events": ["eval.failed", "scan.finding"]
}
```

## AI Framework Integrations
### LangChain
Instrument LangChain chains and agents to send traces to EvalGuard.
```python
from langchain.chains import LLMChain

from evalguard.integrations import EvalGuardCallbackHandler

handler = EvalGuardCallbackHandler(
    api_key="eg_sk_...",
    project_id="proj_abc123",
)

# Add to any LangChain chain or agent
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
result = chain.run("What is AI?")
```

### LlamaIndex
Stream LlamaIndex events to EvalGuard by registering a callback handler.
```python
from llama_index.core import Settings

from evalguard.integrations import EvalGuardLlamaIndexCallback

callback = EvalGuardLlamaIndexCallback(
    api_key="eg_sk_...",
    project_id="proj_abc123",
)

# Attach to the global LlamaIndex callback manager
Settings.callback_manager.add_handler(callback)
```

### CrewAI
Trace CrewAI agents and tasks by passing a callback to the crew.
```python
from crewai import Crew

from evalguard.integrations import EvalGuardCrewAICallback

callback = EvalGuardCrewAICallback(api_key="eg_sk_...")

crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    callbacks=[callback],
)
```

### AutoGen
Capture AutoGen multi-agent conversations by registering a callback on the group chat.
```python
import autogen

from evalguard.integrations import EvalGuardAutoGenCallback

callback = EvalGuardAutoGenCallback(api_key="eg_sk_...")

# Attach to an AutoGen group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, assistant],
    messages=[],
    max_round=10,
)
groupchat.register_callback(callback)
```

### Haystack
Trace Haystack pipelines by attaching an EvalGuard tracer.
```python
from haystack import Pipeline
from haystack.components.generators.chat import OpenAIChatGenerator

from evalguard.integrations import EvalGuardHaystackTracer

tracer = EvalGuardHaystackTracer(api_key="eg_sk_...")

pipeline = Pipeline()
pipeline.add_component("llm", OpenAIChatGenerator())
pipeline.tracing = tracer
```

### Other Frameworks
EvalGuard also provides integrations for DSPy, Instructor, Marvin, Guidance, Semantic Kernel, PromptFlow, Flowise, n8n, OpenAI SDK, and Vercel AI SDK. See the integration modules in the core package for details.
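These callback integrations all do the same job: intercept each model call, capture its inputs, output, and latency, and forward a trace. For a framework without a prebuilt integration, that pattern can be hand-rolled. The sketch below is illustrative only — `send_trace` and the in-memory `TRACES` list are stand-ins for EvalGuard's real ingestion call, not part of the SDK:

```python
import functools
import time

# Stand-in for EvalGuard's real ingestion call (hypothetical): collect
# traces in memory so the full pattern is visible end to end.
TRACES: list[dict] = []

def send_trace(trace: dict) -> None:
    TRACES.append(trace)

def traced(name: str):
    """Decorator that records inputs, output, and latency for one call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            output = fn(*args, **kwargs)
            send_trace({
                "name": name,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": output,
                "latency_ms": (time.perf_counter() - start) * 1000,
            })
            return output
        return wrapper
    return decorator

@traced("summarize")
def summarize(text: str) -> str:
    # Placeholder for a model call in the framework being instrumented.
    return text[:20]
```

Each prebuilt callback above is essentially this wrapper registered through the framework's own hook system.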
## CI/CD Integrations

### GitHub Actions
Run evals, security scans, and config validation on every pull request.
```yaml
name: EvalGuard CI

on:
  pull_request:
    branches: [main]

jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install EvalGuard CLI
        run: npm install -g @evalguard/cli
      - name: Run Evals
        run: evalguard eval:local evals/regression.json --output json
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      - name: Run Security Scan
        run: evalguard scan:local scans/security.json --verbose
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      - name: Validate Configs
        run: |
          evalguard validate evals/regression.json
          evalguard validate scans/security.json
```

### GitLab CI
```yaml
evalguard:
  image: node:20
  stage: test
  script:
    - npm install -g @evalguard/cli
    - evalguard eval:local evals/regression.json --output json
    - evalguard scan:local scans/security.json --verbose
  variables:
    OPENAI_API_KEY: $OPENAI_API_KEY
```

### Jenkins
```groovy
pipeline {
    agent { docker { image 'node:20' } }
    environment {
        OPENAI_API_KEY = credentials('openai-api-key')
    }
    stages {
        stage('Install') {
            steps {
                sh 'npm install -g @evalguard/cli'
            }
        }
        stage('Eval') {
            steps {
                sh 'evalguard eval:local evals/regression.json --output json'
            }
        }
        stage('Security Scan') {
            steps {
                sh 'evalguard scan:local scans/security.json --verbose'
            }
        }
    }
}
```

### Azure Pipelines
```yaml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '20.x'
  - script: npm install -g @evalguard/cli
    displayName: 'Install EvalGuard CLI'
  - script: evalguard eval:local evals/regression.json --output json
    displayName: 'Run Evals'
    env:
      OPENAI_API_KEY: $(OPENAI_API_KEY)
```
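The same CLI commands can gate work outside hosted CI, for example in a local pre-push hook. A minimal sketch, assuming only that the `evalguard` CLI is on `PATH` and exits non-zero on failure — the `run_checks` helper and `CHECKS` list below are illustrative, not part of the CLI:

```python
import subprocess

def run_checks(commands: list[list[str]]) -> bool:
    """Run each command in order; stop and report on the first failure."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}\n{result.stderr}")
            return False
    return True

# Mirror the CI steps above: validate, then eval, then scan.
CHECKS = [
    ["evalguard", "validate", "evals/regression.json"],
    ["evalguard", "eval:local", "evals/regression.json", "--output", "json"],
    ["evalguard", "scan:local", "scans/security.json", "--verbose"],
]
```

Wiring `run_checks(CHECKS)` into a Git hook or a `make` target gives the same pass/fail gate as the pipelines above.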