Autoevals Plugin

The Autoevals plugin provides a node that can automatically evaluate the performance of an LLM response using a battle-tested set of prompts.

For more information on autoevals, see its documentation.
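Under the hood, the node calls the autoevals library. As a rough illustration of what it does, here is an equivalent direct call to the Factuality evaluator using the autoevals npm package (a minimal sketch based on the package's documented usage; the node makes this call for you, so no code is needed in Rivet):

```typescript
import { Factuality } from "autoevals";

// Factuality is an LLM-as-a-judge evaluator: it prompts a grading model with
// a battle-tested prompt and returns a score from 0 to 1.
// Assumes OPENAI_API_KEY is set in the environment.
const result = await Factuality({
  input: "Which country has the highest population?", // the original question
  output: "People's Republic of China",               // the LLM response being evaluated
  expected: "China",                                   // the reference answer
});

console.log(result.score);    // e.g. 1 (0 = complete failure, 1 = complete pass)
console.log(result.metadata); // evaluator-specific details, including the rationale
```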

Nodes

Autoevals Node

Inputs

The inputs to the Autoevals node depend on the configured evaluator. Every evaluator has an output port, which receives the LLM response to be evaluated. Any additional input ports depend on the evaluator selected in the Editor Settings; the context each evaluator requires is described in the autoevals documentation.
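For example, the Humor evaluator needs nothing beyond the response itself, so selecting it leaves the node with only the output port, while Factuality (shown above) adds input and expected ports for the question and reference answer. A minimal sketch against the autoevals npm package, assuming Humor's only required field is output:

```typescript
import { Humor } from "autoevals";

// Humor grades how funny the response is, using only the response itself.
// With this evaluator selected, the node exposes just the `output` port.
const result = await Humor({
  output: "Why did the LLM cross the road? To maximize reward on the other side.",
});

console.log(result.score); // 0 to 1
```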

Outputs

| Title | Data Type | Description | Notes |
| --------- | --------- | ----------- | ----- |
| Score | number | The score given to the response, from 0 to 1. A 0 indicates complete failure, and a 1 indicates a complete pass. | |
| Rationale | string | The rationale for the score given to the response. | |
| Metadata | object | The complete metadata associated with the autoevals evaluation, including the rationale and any other information specific to the selected evaluator. | |
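Taken together, these ports mirror the result object returned by autoevals evaluators. Roughly, the shape looks like this (a sketch for orientation; the authoritative definition is the Score type in the autoevals package):

```typescript
// Approximate shape of an autoevals result; the node splits it across ports.
interface AutoevalsResult {
  name: string;                        // which evaluator produced the score
  score: number | null;                // Score port: 0 (failure) to 1 (pass)
  metadata?: Record<string, unknown>;  // Metadata port: the rationale plus any
                                       // evaluator-specific details
}
```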

Editor Settings

| Setting | Description | Default Value | Use Input Toggle | Input Data Type |
| --------- | ----------- | ------------- | ---------------- | --------------- |
| Evaluator | The evaluation that will be performed on the input. | Factuality | No | N/A |

Evaluations

See the autoevals documentation for the full list of supported evaluators and the inputs each one requires.