What are Libraries?
Libraries are reusable components — prompts or scripts — that you can define once and reference from multiple automation workflows. Each library item can have test cases so you can validate its behavior before deploying it.

Accessing Libraries
In the Automations section of your organization, click Libraries in the left sidebar. Library items are displayed in a grid, with each card showing the item name, a preview of its content, and the number of test cases.

Managing Library Items
Each card has a ⋮ (more actions) menu in the top-right corner with the following options:

| Action | Description |
|---|---|
| Rename | Opens a dialog to give the item a new name |
| Delete | Permanently removes the library item |
Library Detail View
Opening a library item displays the detail view, which is split into two panels:

- Left panel — the prompt or script content
- Right panel — tabs for Test Data and Test Runs
Test Data
The Test Data tab lets you define test cases for a library item. Each test case has a name, a set of input variables, and an expected output.

Adding a Test Case
Click Add Test to create a new test case. Each test case appears as a collapsible card.

Naming a Test
Each test case has an editable name shown in its header. Click the pencil icon on a test card to edit the name inline. Press Enter to save or Escape to cancel.

Variables
Variables are the inputs used to resolve {{template}} placeholders in your prompt before execution. Each variable has a key — the template path (e.g. trigger.contract.name) — and a value.
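To make this concrete, here is a minimal sketch of how dotted variable keys could be substituted into {{template}} placeholders. This is an illustration of the behavior described above, not the product's actual implementation:

```python
import re

def resolve_prompt(prompt, variables):
    """Replace each {{dotted.path}} placeholder with the value stored
    under that exact key in the variables dict (illustrative only)."""
    def substitute(match):
        key = match.group(1).strip()
        if key not in variables:
            # A missing variable means the placeholder cannot be resolved.
            raise KeyError(f"No test variable defined for {{{{{key}}}}}")
        return str(variables[key])
    return re.sub(r"\{\{([^}]+)\}\}", substitute, prompt)

prompt = "Summarize the contract named {{trigger.contract.name}}."
variables = {"trigger.contract.name": "Master Services Agreement"}
print(resolve_prompt(prompt, variables))
# → Summarize the contract named Master Services Agreement.
```

Note that the variable key is the full dotted path as a single string; it is matched against the placeholder text exactly, as described below.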
Open the Variables Editor
In the test case card, click Manage Variables to open the variables editor.
Add a Variable
Click Add Variable and enter the key (template path) and its value. Values are JSON — objects, arrays, strings, or numbers.
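As a rough illustration of the value types mentioned above, each of the following strings parses as valid JSON and would therefore be accepted as a variable value (the examples themselves are hypothetical):

```python
import json

# Each string below is a valid JSON value of a different type.
valid_values = [
    '"ACME Corp"',                       # string
    '42',                                # number
    '{"name": "ACME", "active": true}',  # object
    '["a", "b", "c"]',                   # array
]

for raw in valid_values:
    parsed = json.loads(raw)  # raises ValueError if the text is not valid JSON
    print(type(parsed).__name__)
```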
Edit a Variable Value
Click the pencil icon next to any variable to open an edit dialog. Enter valid JSON and click Save.
Variable keys must match the {{template}} paths used in your prompt exactly. For example, a variable with key trigger.contract.name will fill in {{trigger.contract.name}} in the prompt text.

Expected Output
Each test case has an Expected Output field containing the JSON value you expect the library item to return. Click the pencil icon next to Expected Output (or the Set expected output button if none is set) to open an edit dialog. Enter valid JSON and click Save.

Adding Variables from a Run
Instead of typing test variables by hand, you can save a step’s real output directly from the execution log:

- Go to the Runs page and expand a run that has already executed.
- Expand a step that produced output.
- Click Save Output to Library, select this library prompt and the target test case, and confirm the variable path.
- The output is added as a variable on that test and is available immediately.
Test Runs
The Test Runs tab lets you execute the library item’s test cases and review the results.

Running Tests
Add Test Data
Switch to the Test Data tab and add at least one test case. Define any variables needed to resolve your prompt’s template placeholders, and set an expected output.
The Run button is disabled until you have added test data. Add test cases in the Test Data tab first.
Test Result Details
When you expand a test run result, you can see:

| Section | Description |
|---|---|
| Variables | The variable values used as inputs for this test run |
| Resolved Prompt | The prompt text after all {{template}} placeholders were filled with the variable values |
| Expected Output | The JSON output you expected the library item to return |
| Actual Output | The JSON output actually returned by the library item |
| Error | Any error message if the test failed to execute |
Test Result Statuses
| Status | Meaning |
|---|---|
| Running | The test is currently executing |
| Success | The actual output matched the expected output |
| Error | The test encountered an error during execution |
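The Success status is based on the actual output matching the expected output. A reasonable way to think about this is as a structural comparison of the two JSON values, in which key order and formatting do not matter. The sketch below illustrates that idea; the product's exact matching rules are an assumption here:

```python
import json

# Parse both sides, then compare the resulting values. Structural
# comparison means key order and whitespace differences do not
# cause a mismatch (an assumption about how matching works).
expected = json.loads('{"status": "approved", "score": 0.9}')
actual   = json.loads('{ "score": 0.9, "status": "approved" }')
print(expected == actual)  # → True
```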
Deleting Results
You can remove test run results in two ways:

- Per-run delete — Hover over any test run row to reveal a trash icon, then click it to delete that individual run.
- Clear All — Click the Clear All button in the header to remove all test run results at once. This button is always visible but disabled when there are no runs.
Related Documentation
Variables
Store reusable JSON values for use in automation steps.
Automations Overview
An introduction to automations and how they work.