
Testing

Create and run automated tests for your AI assistants.

What’s in Testing

  • Personas: Define simulated user profiles for testing
  • Tests: Create test scenarios with steps and assertions
  • Suites: Group related tests for organized execution
  • Setups: Define pre-run configurations and data setup
  • Results: View and analyze test outcomes
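The building blocks above can be sketched as a small data model. These class and field names are illustrative only, not the product's actual schema; Setups and Results would be modeled the same way:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    description: str  # simulated user profile, e.g. "impatient first-time buyer"

@dataclass
class Test:
    name: str
    steps: list[str]       # user turns in the simulated conversation
    assertions: list[str]  # evaluation criteria checked after the run

@dataclass
class Suite:
    name: str
    tests: list[Test] = field(default_factory=list)

# Example: one test grouped into a suite.
refund_test = Test(
    name="refund request",
    steps=["I want a refund for order 1234", "Yes, it arrived damaged"],
    assertions=["assistant offers a refund or escalation"],
)
suite = Suite(name="Support journeys", tests=[refund_test])
```

Grouping tests under a suite is what lets a whole journey run (and report) as one unit.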

Testing Workflow

  1. Create Personas that represent your user types
  2. Write Tests with conversation flows and evaluation criteria
  3. Organize Tests into Suites by feature or journey
  4. Configure Setups for any prerequisite data or state
  5. Run Suites and review results
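A minimal sketch of steps 2 and 5, assuming a stand-in assistant and a simple keyword check in place of real evaluation (every name here is illustrative, not part of the product's API):

```python
def fake_assistant(message: str) -> str:
    # Stand-in for the assistant under test.
    if "refund" in message.lower():
        return "I'm sorry to hear that. I can start a refund for you."
    return "How can I help you today?"

def run_test(steps: list[str], must_mention: list[str]) -> bool:
    """Drive the conversation turn by turn, then check the final reply."""
    reply = ""
    for user_turn in steps:
        reply = fake_assistant(user_turn)
    return all(word in reply.lower() for word in must_mention)

passed = run_test(
    steps=["Hi", "I want a refund for order 1234"],
    must_mention=["refund"],
)
```

A real run would replace the keyword check with richer assertions and record the full transcript in the results.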

How Testing Works

  • Persona → defines simulated user behavior
  • Test → specifies conversation and assertions
  • Suite → groups tests + selects assistant + persona
  • Setup → prepares data before a run
  • Results → shows pass/fail, transcripts, scores

Key Features

  • LLM-as-a-Judge: Evaluate responses using AI-powered scoring
  • Multi-turn conversations: Test complex dialogue flows
  • Request verification: Confirm side-effects via HTTP calls
  • CI/CD integration: Run tests automatically on every deploy
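The LLM-as-a-Judge pattern can be approximated as below. The `judge` function is a toy keyword heuristic standing in for a real model call, and all names are illustrative; an actual judge would prompt a model with the criterion and the response and parse a numeric score from its answer:

```python
def judge(criterion: str, response: str) -> float:
    """Stand-in for an LLM judge: scores 1.0 if any significant word
    from the criterion appears in the response, else 0.0."""
    keywords = [w for w in criterion.lower().split() if len(w) > 4]
    return 1.0 if any(k in response.lower() for k in keywords) else 0.0

response = "I can offer you a full refund or a replacement."
score = judge("assistant offers a refund", response)
passed = score >= 0.5  # threshold turns a score into a pass/fail result
```

Thresholding the judge's score is what lets subjective criteria ("is the tone polite?") feed into the same pass/fail reporting as exact assertions.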