AI Agents

AI Agents in Status200 automatically fix errors, performance issues, and slow database queries in your code. Powered by OpenTelemetry observability data, AI Agents create pull requests with fixes, not just alerts.

What Can AI Agents Do?

AI Agents analyze your observability data (traces, logs, and metrics) to detect and automatically fix issues in your codebase:

  • Fix Errors Automatically: When an AI Agent notices exceptions in your traces or logs, it automatically fixes the issue and creates a pull request.
  • Fix Performance Issues: Analyzes traces that take the longest to execute and creates pull requests with performance optimizations.
  • Fix Database Queries: Identifies slow or inefficient database queries and optimizes them with proper indexing and query rewrites.
  • Fix Frontend Issues: Addresses frontend-specific performance problems, rendering issues, and JavaScript errors automatically.
  • Add Telemetry Automatically: Adds tracing, metrics, and logs to your codebase with a single click; no manual instrumentation needed.
  • GitHub & GitLab Integration: Seamlessly integrates with your existing repositories. PRs are created directly in your workflow.
  • CI/CD Integration: Integrates with your existing CI/CD pipelines. Fixes are tested and validated before PR creation.
  • Terraform Support: Fixes infrastructure issues automatically. Supports Terraform and OpenTofu for infrastructure-as-code.
  • Issue Tracker Integration: Connects with Jira, Linear, and other issue trackers. Automatically links fixes to relevant issues.

How It Works

  1. Collect Data: OpenTelemetry collects traces, logs, and metrics from your application
  2. Detect Issues: AI identifies errors, performance bottlenecks, and slow queries
  3. Generate Fix: AI analyzes your codebase and creates the fix automatically
  4. Create PR: Pull request with fix and detailed report ready for review
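The Collect Data step assumes telemetry is already flowing to Status200. If you route telemetry through an OpenTelemetry Collector, a minimal pipeline might look like the sketch below; the exporter endpoint is illustrative, so check your Status200 ingestion settings for the real one:

```
# Minimal OpenTelemetry Collector sketch: receive OTLP and forward it.
# The endpoint below is an assumption, not the documented ingest URL.
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  otlphttp:
    endpoint: https://status200.ru/otlp
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```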

LLM Provider Flexibility

Status200 works with any LLM provider. You can use:

  • OpenAI GPT models
  • Anthropic Claude models
  • Meta Llama (via Ollama or other providers)
  • Custom self-hosted models

Self-host your AI model and keep your code completely private.
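For example, to serve a local Llama model next to your stack, one option is Ollama via Docker Compose. This is a sketch: the service name and model are illustrative, and how the agent discovers the endpoint depends on your LLM provider settings in Status200.

```
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"   # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models
    restart: always

volumes:
  ollama-data:
```

After the container starts, download a model with, e.g., docker compose exec ollama ollama pull llama3.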

Privacy

Regardless of your plan, Status200 never sees, stores, or trains on your code:

  • No Code Access: Your code stays on your infrastructure
  • No Data Storage: Zero data retention policy
  • No Training: Your code is never used for AI training

Global AI Agents vs Self-Hosted AI Agents

Global AI Agents

If you are using Status200 SaaS (the cloud-hosted version), Global AI Agents are provided and managed by Status200: they come pre-configured, ready to use, and require no additional setup.

Global AI Agents are automatically available to all projects unless disabled in your project settings.

Self-Hosted AI Agents

For organizations that need to run AI agents within their own infrastructure (e.g., for security, compliance, or network access requirements), Status200 supports self-hosted AI agents.

Self-hosted AI agents:

  • Run within your private network
  • Can access internal resources and systems
  • Give you full control over the agent's environment
  • Can be customized for your specific needs

Setting Up a Self-Hosted AI Agent

Step 1: Create an AI Agent in Status200

  1. Log in to your Status200 dashboard
  2. Go to Project Settings > AI Agents
  3. Click Create AI Agent to add a new agent
  4. Fill in the required fields:
    • Name: A friendly name for your AI agent
    • Description (optional): A description of the agent's purpose
  5. Once created, you will receive an AI_AGENT_ID and AI_AGENT_KEY

Important: Save your AI_AGENT_KEY securely. It will only be shown once and cannot be retrieved later.

Step 2: Deploy the AI Agent

Docker

Make sure Docker is installed, then start the agent with:

docker run --name status200-ai-agent --network host \
  -e AI_AGENT_KEY=<ai-agent-key> \
  -e AI_AGENT_ID=<ai-agent-id> \
  -e S200_URL=https://status200.ru \
  -d status200/ai-agent:release

If you are self-hosting Status200, change S200_URL to your custom self-hosted instance URL.

Docker Compose

You can also run the AI agent with Docker Compose. Create a docker-compose.yml file:

services:
  status200-ai-agent:
    image: status200/ai-agent:release
    container_name: status200-ai-agent
    environment:
      - AI_AGENT_KEY=<ai-agent-key>
      - AI_AGENT_ID=<ai-agent-id>
      - S200_URL=https://status200.ru
    network_mode: host
    restart: always

Then run:

docker compose up -d

Kubernetes

Create a status200-ai-agent.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: status200-ai-agent
spec:
  selector:
    matchLabels:
      app: status200-ai-agent
  template:
    metadata:
      labels:
        app: status200-ai-agent
    spec:
      containers:
      - name: status200-ai-agent
        image: status200/ai-agent:release
        env:
          - name: AI_AGENT_KEY
            value: "<ai-agent-key>"
          - name: AI_AGENT_ID
            value: "<ai-agent-id>"
          - name: S200_URL
            value: "https://status200.ru"

Apply the configuration:

kubectl apply -f status200-ai-agent.yaml
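The Deployment above embeds the agent key in plain text. If that is a concern, one option is to store it in a Kubernetes Secret and reference it via secretKeyRef; this is a sketch using illustrative names:

```
apiVersion: v1
kind: Secret
metadata:
  name: status200-ai-agent-credentials
type: Opaque
stringData:
  AI_AGENT_KEY: <ai-agent-key>
---
# In the Deployment's container spec, reference the Secret instead of
# embedding the value directly:
#
# env:
#   - name: AI_AGENT_KEY
#     valueFrom:
#       secretKeyRef:
#         name: status200-ai-agent-credentials
#         key: AI_AGENT_KEY
```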

Environment Variables

The AI agent supports the following environment variables:

Required Variables

Variable        Description
--------        -----------
AI_AGENT_KEY    The AI agent key from your Status200 dashboard
AI_AGENT_ID     The AI agent ID from your Status200 dashboard
S200_URL        The URL of your Status200 instance (default: https://status200.ru)
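With Docker, the same variables can be kept in an env file instead of being passed on the command line, using Docker's --env-file flag. A sketch (the filename agent.env is illustrative):

```
# agent.env -- keep this file out of version control
AI_AGENT_KEY=<ai-agent-key>
AI_AGENT_ID=<ai-agent-id>
S200_URL=https://status200.ru
```

Then start the agent with docker run --env-file agent.env and the remaining options from the Docker section above.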

Verifying Your AI Agent

After deploying your AI agent:

  1. Go to Project Settings > AI Agents in your Status200 dashboard
  2. Your agent should show as Connected within a few minutes
  3. If the status shows Disconnected, check the container logs for errors

To view container logs:

# Docker
docker logs status200-ai-agent

# Kubernetes
kubectl logs deployment/status200-ai-agent

Troubleshooting

Agent Not Connecting

  1. Verify credentials: Ensure AI_AGENT_KEY and AI_AGENT_ID are correct
  2. Check network: Ensure the agent can reach your Status200 instance
  3. Review logs: Check container logs for error messages
  4. Firewall rules: Ensure outbound HTTPS (port 443) is allowed
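The credential check can be scripted before (re)starting the agent. A minimal sketch; the curl command in the trailing comment assumes S200_URL is set in your environment:

```shell
# Pre-flight check before (re)starting a self-hosted agent.
# check_creds prints "ok" when both credentials are set and
# non-empty; otherwise it names the missing variable(s).
check_creds() {
  missing=""
  [ -n "${AI_AGENT_ID:-}" ] || missing="$missing AI_AGENT_ID"
  [ -n "${AI_AGENT_KEY:-}" ] || missing="$missing AI_AGENT_KEY"
  if [ -z "$missing" ]; then
    echo "ok"
  else
    echo "missing:$missing"
  fi
}

check_creds

# To confirm outbound HTTPS to your Status200 instance, e.g.:
#   curl -sS -o /dev/null -w '%{http_code}\n' "${S200_URL:-https://status200.ru}"
```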

Agent Keeps Disconnecting

  1. Check resource limits: Ensure the container has sufficient memory and CPU
  2. Network stability: Verify network connectivity is stable
  3. Review logs: Look for timeout or connection errors in the logs

Need Help?

If you encounter issues with your AI agent:

  1. Check the Status200 GitHub Issues for known problems
  2. Create a new issue if your problem isn't already reported
  3. Contact support if you're on an enterprise plan