The Autoevals plugin provides a node that can automatically evaluate the performance of an LLM response using a battle-tested set of prompts.
For more information on autoevals, see its documentation.
The inputs to the Autoevals node depend on which evaluation is configured. Every evaluator type has an `output` input port, which receives the LLM response to be evaluated. For the other ports, see the table below.
| Title | Description |
| ----- | ----------- |
| Score | The score given to the response, from 0 to 1. A 0 indicates complete failure, and a 1 indicates a complete pass. |
| Rationale | The rationale for the score given to the response. |
| Metadata | The complete metadata associated with the autoevals evaluation, including the rationale and any other information specific to the selected evaluator type. |
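To make the output ports concrete, here is a minimal sketch of a result object mirroring the Score, Rationale, and Metadata ports. The class name and validation are assumptions for illustration, not part of the plugin's API.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AutoevalsResult:
    # Hypothetical mirror of the node's output ports (names assumed).
    score: float                 # 0.0 = complete failure, 1.0 = complete pass
    rationale: str               # explanation for the score
    metadata: dict[str, Any] = field(default_factory=dict)  # evaluator-specific details

    def __post_init__(self) -> None:
        # Scores outside the documented 0-1 range are rejected.
        if not 0.0 <= self.score <= 1.0:
            raise ValueError("score must be between 0 and 1")
```

A passing evaluation would carry a score near 1.0 alongside a rationale string; the metadata dict holds whatever extra fields the chosen evaluator emits.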
| Setting | Description | Default Value | Use Input Toggle | Input Data Type |
| ------- | ----------- | ------------- | ---------------- | --------------- |
| Evaluator | The evaluation that will be performed on the input. | Factuality | No | N/A |
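Conceptually, the Evaluator setting selects which scorer runs against the input. The sketch below illustrates that dispatch with a stand-in scorer; the registry, function names, and signatures are assumptions for illustration, and the stub does a trivial string comparison rather than the LLM-backed scoring the real autoevals library performs.

```python
from typing import Callable

def factuality_stub(output: str, expected: str) -> float:
    # Stand-in for a real Factuality evaluator, which would call an LLM.
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

# Hypothetical registry mapping the Evaluator setting to a scorer.
EVALUATORS: dict[str, Callable[[str, str], float]] = {
    "Factuality": factuality_stub,
}

def evaluate(evaluator: str, output: str, expected: str) -> float:
    # Look up the configured evaluator and score the response (0 to 1).
    if evaluator not in EVALUATORS:
        raise ValueError(f"unknown evaluator: {evaluator}")
    return EVALUATORS[evaluator](output, expected)
```

In the actual node, the selected evaluator also determines which input ports appear and what metadata the evaluation returns.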
See the autoevals documentation for details on the available evaluators.