Chat Node

Overview

The Chat Node sends one or more messages to an LLM (OpenAI's GPT, or any API compatible with the OpenAI API) and returns the LLM's response.

You can use the Chat Node for local LLMs, as long as their API is compatible with the OpenAI API. For example, you can use the Chat Node with LM Studio.

If you are looking for other language models that do not support the OpenAI API format, see the Plugins page for a list of available plugins that implement other language model nodes.

Chat Node Screenshot

Inputs

System Prompt
  Data Type: string or chat-message
  Description: A convenience input that allows a system prompt to be prepended to the main prompt message or messages.
  Default Value: (None)
  Notes: If not connected, no system prompt is prepended. You can always include a system prompt in the main prompt input instead, if you like, using an Assemble Prompt node.

Prompt
  Data Type: string / string[] / chat-message / chat-message[]
  Description: The main prompt to send to the language model. Can be one or more strings or chat-messages.
  Default Value: (Empty list)
  Notes: Strings are converted into chat messages of type user, with no name.

Functions
  Data Type: gpt-function or gpt-function[]
  Description: Defines the available functions that GPT is allowed to call during its response.
  Default Value: (Required)
  Notes: Only enabled if the Enable Function Use setting is enabled.
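The gpt-function data type corresponds to OpenAI's function-calling format, in which a function's parameters are described with JSON Schema. A sketch of one such definition (the function name and fields here are illustrative examples, not anything the node requires):

```python
# Example function definition in OpenAI's function-calling format.
# The name, description, and parameter fields are illustrative.
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}
```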

Example 1: Simple Response

  1. Add a Chat node to your graph.
  2. Add a text node and place your message to GPT inside the text node by opening its editor and replacing {{input}} with your message.
  3. Connect the output of the text node to the Prompt input of the Chat node.
  4. Run your graph. You will see the output of the Chat node at the bottom of the node.
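Running this graph amounts to sending a single chat completion request. As a rough sketch (the helper function, model name, and temperature below are illustrative, not part of the node's API), the node assembles a payload along these lines:

```python
# Hypothetical sketch of the request body a Chat node assembles from a
# plain string prompt. Model name and temperature are placeholder values.

def build_chat_request(messages, model="gpt-4", temperature=0.5):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": messages,
    }

# A string prompt becomes a single user-role chat message with no name.
payload = build_chat_request([{"role": "user", "content": "Hello!"}])
```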

Simple Response Example

Example 2: Connecting to LM Studio

  1. Add a Chat node to your graph.
  2. Add a text node and place your message to GPT inside the text node by opening its editor and replacing {{input}} with your message.
  3. Connect the output of the text node to the Prompt input of the Chat node.
  4. Set the Endpoint setting to http://localhost:1234/v1/chat/completions.
  5. Load your desired model into LM Studio.
  6. Enable CORS in LM Studio Server Options.
  7. Run your graph. You will see the output of the Chat node at the bottom of the node.
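The steps above can be sketched as a plain HTTP request against the LM Studio endpoint from step 4. This is an illustration, not the node's internal code; the model field is a placeholder, since LM Studio serves whichever model is currently loaded:

```python
import json
import urllib.request

# Endpoint from step 4 above.
ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_lm_studio_request(prompt):
    """Build (but do not send) an OpenAI-style request to LM Studio."""
    body = json.dumps({
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_lm_studio_request("Hello!")
# urllib.request.urlopen(req) would send it once the LM Studio server is running.
```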

Error Handling

If nothing is connected to the Prompt input, the Chat node will error.

If the request to OpenAI fails due to rate-limiting, the Chat node will retry the request using a jittered exponential backoff algorithm. This retry will happen for up to 5 minutes. If the request still fails after 5 minutes, the Chat node will error.
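A jittered exponential backoff can be sketched as follows; the base delay, per-attempt cap, and "full jitter" strategy here are illustrative assumptions, with only the 5-minute (300-second) total window taken from the behavior described above:

```python
import random

def backoff_delays(max_total=300.0, base=1.0, cap=60.0):
    """Yield jittered retry delays until the cumulative wait would
    exceed max_total seconds (the 5-minute retry window)."""
    total, attempt = 0.0, 0
    while True:
        exp = min(cap, base * (2 ** attempt))   # exponential growth, capped
        delay = random.uniform(0, exp)          # "full jitter"
        if total + delay > max_total:
            return                              # give up: caller raises an error
        total += delay
        attempt += 1
        yield delay

delays = list(backoff_delays())
```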

caution

Be careful when splitting a Chat node too many times, as you may run into rate-limiting issues.

If OpenAI returns a 500-level error (due to overload, downtime, and so on), the Chat node will retry in a similar manner.

FAQ

Q: What if I connect a different data type to the prompt or system prompt input?

A: The node will attempt to convert the value into a string, which then becomes a user-type chat message. For example, the number 5 becomes a user message "5". If the value cannot be converted to a string, it is omitted from the list of prompt messages.
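A sketch of this coercion (the helper name is hypothetical, not part of the Chat node's API):

```python
# Hypothetical helper mirroring the behavior described above: stringify
# the value and wrap it as a user message; drop unconvertible values.

def coerce_to_user_message(value):
    try:
        content = str(value)
    except Exception:
        return None  # unconvertible values are ignored
    return {"role": "user", "content": content}

msg = coerce_to_user_message(5)
```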

Q: What if an input is toggled on, but not connected?

A: The value configured in the UI will be used instead.

Q: What if the system prompt is connected, but the prompt is not?

A: The Chat Node will error. The prompt input is required. To send only a system prompt, you can use a Prompt node to create a system-type prompt, and connect it to the Prompt input.

Q: What if the system prompt is connected, and the prompt also contains a system prompt?

A: Both system prompts will be sent. A system prompt that is not the first message in the chain is undefined behavior for GPT: it may work, it may act strangely, and it may follow one or both of the system prompts.
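For illustration, the message list in this situation would contain two system messages, the second of which is no longer the first message in the conversation (the contents below are placeholders):

```python
# Message list produced when both the System Prompt input and the main
# Prompt contribute a system message. Contents are illustrative.
messages = [
    {"role": "system", "content": "From the System Prompt input."},
    {"role": "system", "content": "From a system message in the main Prompt."},
    {"role": "user", "content": "Hello!"},
]

system_count = sum(1 for m in messages if m["role"] == "system")
```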

See Also