
What are Libraries?

Libraries are reusable components — prompts or scripts — that you can define once and reference from multiple automation workflows. Each library item can have test cases so you can validate its behavior before deploying it.

Accessing Libraries

In the Automations section of your organization, click Libraries in the left sidebar. Library items are displayed in a grid, with each card showing the item name, a preview of its content, and the number of test cases.

Managing Library Items

Each card has a more actions menu in the top-right corner with the following options:
  • Rename: Opens a dialog to give the item a new name
  • Delete: Permanently removes the library item
To open a library item, click the Open button at the bottom of its card.

Library Detail View

Opening a library item displays the detail view, which is split into two panels:
  • Left panel — the prompt or script content
  • Right panel — tabs for Test Data and Test Runs

Test Data

The Test Data tab lets you define test cases for a library item. Each test case has a name, a set of input variables, and an expected output.
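To make the structure concrete, here is a rough sketch of what a test case bundles together, written as a TypeScript object. The field names and values are illustrative assumptions, not the product's actual schema.

```typescript
// Illustrative shape of a library test case. Field names are assumptions
// for explanation, not the product's internal schema.
interface LibraryTestCase {
  name: string;                        // the editable name shown in the card header
  variables: Record<string, unknown>;  // template path -> JSON value
  expectedOutput: unknown;             // the JSON you expect the item to return
}

// A hypothetical test case for a contract-summary prompt.
const contractSummaryTest: LibraryTestCase = {
  name: "Summarize a signed contract",
  variables: {
    "trigger.contract.name": "Acme Master Services Agreement",
    "trigger.contract.value": 120000,
  },
  expectedOutput: { summary: "Acme MSA, total value $120,000" },
};
```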

Adding a Test Case

Click Add Test to create a new test case. Each test case appears as a collapsible card.

Naming a Test

Each test case has an editable name shown in its header. Click the pencil icon on a test card to edit the name inline. Press Enter to save or Escape to cancel.

Variables

Variables are the inputs used to resolve {{template}} placeholders in your prompt before execution. Each variable has a key — the template path (e.g. trigger.contract.name) — and a value.
  1. Open the Variables Editor: In the test case card, click Manage Variables to open the variables editor.
  2. Add a Variable: Click Add Variable and enter the key (template path) and its value. Values are JSON — objects, arrays, strings, or numbers.
  3. Edit a Variable Value: Click the pencil icon next to any variable to open an edit dialog. Enter valid JSON and click Save.
  4. Remove a Variable: Click the trash icon next to any variable to delete it.
Variable keys must match the {{template}} paths used in your prompt exactly. For example, a variable with key trigger.contract.name will fill in {{trigger.contract.name}} in the prompt text.
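The sketch below illustrates how variable keys could map onto placeholders during resolution. It is an assumption for explanation only, not the product's actual resolution logic, and the prompt and variable values are made up.

```typescript
// Minimal sketch of how {{template}} placeholders might be resolved from
// variable keys (assumed behavior, not the product's implementation).
function resolvePrompt(prompt: string, variables: Record<string, unknown>): string {
  return prompt.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const value = variables[path];
    if (value === undefined) return match; // leave unresolved placeholders as-is
    return typeof value === "string" ? value : JSON.stringify(value);
  });
}

// Hypothetical prompt and variables.
const prompt = "Summarize the contract {{trigger.contract.name}} in one sentence.";
const variables = { "trigger.contract.name": "Acme Master Services Agreement" };

console.log(resolvePrompt(prompt, variables));
// -> "Summarize the contract Acme Master Services Agreement in one sentence."
```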

Expected Output

Each test case has an Expected Output field containing the JSON value you expect the library item to return. Click the pencil icon next to Expected Output (or the Set expected output button if none is set) to open an edit dialog. Enter valid JSON and click Save.

Adding Variables from a Run

Instead of typing test variables by hand, you can save a step’s real output directly from the execution log:
  1. Go to the Runs page and expand a run that has already executed.
  2. Expand a step that produced output.
  3. Click Save Output to Library, select this library prompt and the target test case, and confirm the variable path.
  4. The output is added as a variable on that test and is available immediately.
This is the fastest way to build realistic test data — capture actual outputs from real or test runs and feed them straight into your library tests.

Test Runs

The Test Runs tab lets you execute the library item’s test cases and review the results.

Running Tests

  1. Open a Library Item: Click the Open button on a library card to open the detail view.
  2. Add Test Data: Switch to the Test Data tab and add at least one test case. Define any variables needed to resolve your prompt’s template placeholders, and set an expected output.
  3. Switch to Test Runs: Click the Test Runs tab in the right panel.
  4. Run the Tests: Click the Run button to execute all test cases against the current library content.
  5. Review Results: Each test case shows its status, completion date, duration, and the actual output alongside the expected output. Expand any test result to see the full input, expected output, actual output, and any error messages.
The Run button is disabled until you have added test data. Add test cases in the Test Data tab first.

Test Result Details

When you expand a test run result, you can see:
  • Variables: The variable values used as inputs for this test run
  • Resolved Prompt: The prompt text after all {{template}} placeholders were filled with the variable values
  • Expected Output: The JSON output you expected the library item to return
  • Actual Output: The JSON output actually returned by the library item
  • Error: Any error message if the test failed to execute

Test Result Statuses

  • Running: The test is currently executing
  • Success: The actual output matched the expected output
  • Error: The test encountered an error during execution
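A reasonable mental model for the Success status is a structural comparison of the expected and actual JSON outputs, as in the sketch below. The product's exact matching rules (for example, how key order or whitespace is treated) are not documented here, so treat this as an assumption.

```typescript
// Assumed comparison behind the Success status: structural (deep) equality
// of the expected and actual JSON values. A mental model only.
function deepEqual(a: unknown, b: unknown): boolean {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) return false;
  if (Array.isArray(a) !== Array.isArray(b)) return false;
  const aKeys = Object.keys(a as object);
  const bKeys = Object.keys(b as object);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((key) =>
    deepEqual((a as Record<string, unknown>)[key], (b as Record<string, unknown>)[key])
  );
}

deepEqual({ summary: "Acme MSA" }, { summary: "Acme MSA" }); // true  -> would count as a match
deepEqual({ summary: "Acme MSA" }, { summary: "Acme NDA" }); // false -> would not match
```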

Deleting Results

You can remove test run results in two ways:
  • Per-run delete — Hover over any test run row to reveal a trash icon, then click it to delete that individual run.
  • Clear All — Click the Clear All button in the header to remove all test run results at once. This button is always visible but disabled when there are no runs.
Neither action affects the library content or test data.
