Skip to content

Preparing Archive

Core
4d 1h ago
Reviewed

agent-evaluation

Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring\u2014where even top agents achieve less than 50% on re...

.agents/skills/agent-evaluation TypeScript
TY
MA
2+ layers Tracked stack
Capabilities
5
Signals
1
Related
3
5
Capabilities
Actionable behaviors documented in the skill body.
0
Phases
Operational steps available for guided execution.
0
References
Support files available for deeper usage and onboarding.
0
Scripts
Runnable or reusable automation artifacts discovered locally.

Cognitive Capabilities

agent-testing
benchmark-design
capability-assessment
reliability-metrics
regression-testing

Architectural Overview

Skill Reading

"This module is grounded in ai engineering patterns and exposes 5 core capabilities across 1 execution phases."

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing

Requirements

  • testing-fundamentals
  • llm-fundamentals

Patterns

Statistical Test Evaluation

Run tests multiple times and analyze result distributions

Behavioral Contract Testing

Define and test agent behavioral invariants

Adversarial Testing

Actively try to break agent behavior

Anti-Patterns

❌ Single-Run Testing

❌ Only Happy Path Tests

❌ Output String Matching

⚠️ Sharp Edges

Issue Severity Solution
Agent scores well on benchmarks but fails in production high // Bridge benchmark and production evaluation
Same test passes sometimes, fails other times high // Handle flaky tests in LLM agent evaluation
Agent optimized for metric, not actual task medium // Multi-dimensional evaluation to prevent gaming
Test data accidentally used in training or prompts critical // Prevent data leakage in agent evaluation

Related Skills

Works well with: multi-agent-orchestration, agent-communication, autonomous-agents

When to Use

This skill is applicable to execute the workflow or actions described in the overview.

Validation Signals

Observed

5 documented capabilities

Primary Stack

TypeScript

Tooling Surface

Guide only

Workspace Path

.agents/skills/agent-evaluation

Operational Ecosystem

The complete hardware and software toolchain required.

This skill is mostly documentation-driven and does not expose extra scripts, references, examples, or templates.

Module Topology

Skill File
Parsed metadata
Skills UI
Launch context
Chat Session
Antigravity Core

Antigravity Core

Principal Engineering Agent

A high-performance agentic architecture developed by Deepmind for autonomous coding tasks.
175 Installs
4.2 Reliability
1 Workspace Files
4.2
Workspace Reliability Avg
5
68%
4
22%
3
10%
2
0%
1
0%

Validation signal

4d 1h ago

Observed

5 documented capabilities

Recommended for this workflow

Adjacent modules that complement this skill surface

Loading content
Loading content
Cart