FAQ
What is Lamdis?
Lamdis is a testing and assurance platform for AI assistants. It lets you define personas and scenarios, run conversations against your chatbots or agents, evaluate responses using LLM-as-a-judge (powered by AWS Bedrock), and confirm side-effects via HTTP requests.
How do I create a test?
Go to Testing → Tests and click “New Test”. Define your test steps, including user messages, assistant checks with rubrics, and optional HTTP request confirmations. Then add the test to a suite to run it.
What are personas?
Personas represent simulated users with specific traits, communication styles, or intents. They help generate realistic multi-turn conversations during testing.
How does the judge work?
The judge uses AWS Bedrock (Claude) to evaluate assistant responses against your rubric. Set a threshold (e.g., 0.75) and the test passes if the judge score meets or exceeds it. The judge analyzes whether the assistant’s response meets your criteria and provides a score with reasoning.
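The pass/fail decision reduces to a simple comparison. A minimal sketch (the `score` and `reasoning` field names here are illustrative, not Lamdis's actual response schema):

```python
# Hypothetical judge output -- field names are illustrative only.
judge_result = {
    "score": 0.82,
    "reasoning": "Response states the refund policy accurately and politely.",
}

THRESHOLD = 0.75  # the test passes when the judge score meets or exceeds this


def step_passes(result: dict, threshold: float) -> bool:
    """Pass/fail decision: judge score must meet or exceed the threshold."""
    return result["score"] >= threshold


print(step_passes(judge_result, THRESHOLD))  # True: 0.82 >= 0.75
```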
Can I run tests in CI/CD?
Yes! Use the Lamdis API to trigger suite runs programmatically. You can block merges or fail builds when tests don’t meet thresholds.
- Create an API key with `runs:execute` and `runs:read` permissions
- Configure your CI pipeline (see the CI/CD Integration guide)
- Trigger runs and poll for results
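The trigger-and-poll step can be sketched as follows. The status values and response fields below are assumptions for illustration; consult the CI/CD Integration guide for the actual endpoints and schema:

```python
import time

# Sketch of CI polling logic. The "status"/"passed" fields are hypothetical;
# swap in the real Lamdis API response shape from the CI/CD Integration guide.


def wait_for_run(fetch_status, poll_interval=0.0, max_polls=60):
    """Poll a run until it leaves the 'running' state, then return the result.

    fetch_status: callable returning a dict like {"status": ..., "passed": ...}
    """
    for _ in range(max_polls):
        result = fetch_status()
        if result["status"] != "running":
            return result
        time.sleep(poll_interval)
    raise TimeoutError("run did not finish within the polling budget")


# Simulated run that completes on the third poll:
responses = iter([
    {"status": "running"},
    {"status": "running"},
    {"status": "completed", "passed": True},
])
final = wait_for_run(lambda: next(responses))
print(final["passed"])  # True -- in CI, exit non-zero here if this is False
```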
What are Setups and Environments?
- Environments store secrets and configuration (API keys, base URLs) for different targets (dev, staging, prod).
- Setups combine an environment with an assistant connection to define where tests run.
How do I test against my assistant?
- Create a Connection with your assistant’s endpoint (HTTP chat or Bedrock)
- Create an Environment with any required secrets
- Create a Setup linking the connection and environment
- Add the setup to your suite and run tests
What are HTTP request steps?
Request steps let you call external APIs during tests. They are useful for setup (creating test accounts), verification (confirming a booking was made), or teardown (cleaning up data). Define reusable requests in Actions.
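A verification request step boils down to calling an API and asserting on its response. A minimal sketch, assuming a hypothetical booking API whose response includes a `status` field:

```python
import json

# Illustrative side-effect check for an HTTP request step. The response shape
# ({"id": ..., "status": ...}) is a hypothetical example, not a Lamdis schema.


def verify_booking(response_body: str) -> bool:
    """Confirm the side-effect: the booking exists and is 'confirmed'."""
    data = json.loads(response_body)
    return data.get("status") == "confirmed"


print(verify_booking('{"id": "BK-1042", "status": "confirmed"}'))  # True
```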
How do I extract data from conversations?
Use extract steps to pull values from assistant responses into variables. These can then be used in subsequent steps via variable interpolation (`{{variableName}}`).
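Interpolation substitutes each `{{variableName}}` placeholder with its extracted value. A minimal sketch of that behavior (an illustration, not Lamdis's actual engine):

```python
import re

# Minimal sketch of {{variableName}} interpolation; unknown placeholders are
# left untouched rather than raising, which is an assumption of this sketch.


def interpolate(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its extracted value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )


vars_ = {"bookingId": "BK-1042"}
print(interpolate("Confirm booking {{bookingId}} exists", vars_))
# -> Confirm booking BK-1042 exists
```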
What artifacts does Lamdis produce?
Each run generates:
- Conversation transcripts
- Judge scores and reasoning
- Request/response logs
- Timing data
View and export these from Testing → Results.
Does Lamdis store my data?
Test results, transcripts, and configurations are stored in your org’s workspace. Secrets in Variables are encrypted at rest. All access is logged in the Audit Log.
How do I view test results?
Go to Testing → Results to see run history, pass/fail status, judge scores, and detailed step-by-step transcripts.
What’s the difference between Testing and Assurance?
| Testing | Assurance |
|---|---|
| Development-time validation | Production monitoring |
| Catch issues before deployment | Continuously verify live systems |
| Pass/fail test results | Evidence collection for compliance |
Explore Assurance →
How do I get support?
- Check the Troubleshooting guide for common issues
- Email support@lamdis.ai for direct support
- Use the AI Assistant in the dashboard for quick help