Generate text & Chat

Environment setup

The examples on this page use the Neo4j movie recommendations dataset, focusing on the plot and title properties of Movie nodes. There are 9083 Movie nodes with a plot and title property.
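After importing, you can sanity-check the dataset with a simple count query (this is only a verification step, not part of the examples below):

```cypher
MATCH (m:Movie)
WHERE m.plot IS NOT NULL AND m.title IS NOT NULL
RETURN count(m) AS movies  // expected: 9083
```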

To recreate the graph, download and import this dump file into an empty Neo4j database. Dump files can be imported for both Aura and self-managed instances.

Generate text from prompt

The function ai.text.completion can be used to generate text based on a textual input prompt. It is similar to submitting a prompt to an LLM.

Use CALL ai.text.completion.providers() (see reference) to see supported providers and their configuration options.
Signature for ai.text.completion

Syntax

ai.text.completion(prompt, provider, configuration) :: STRING

Description

Generate text output based on the provided prompt.

Inputs

prompt (STRING)
  Textual prompt.

provider (STRING)
  Name of the third-party AI provider, see Providers.

configuration (MAP)
  Provider-specific configuration, see Providers.

Returns

STRING
  Generated text based on the provided prompt.
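A minimal call looks like this (OpenAI shown as an example; substitute the provider identifier and configuration of your choice, see Providers):

```cypher
RETURN ai.text.completion(
  'Name one movie released in 1999.',
  'openai',
  { token: $openaiToken, model: 'gpt-5-nano' }
) AS result
```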

The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.completion(
  'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies,  (5)
  'openai',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → OpenAI.
5 Prompt for the AI model.
result

"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)."

The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    token: $azureOpenaiToken,
    resource: '<azure-openai-resource>',
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.completion(
  'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies,  (5)
  'azure-openai',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → Azure OpenAI.
5 Prompt for the AI model.
result

"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)."

The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    token: $vertexaiToken,
    model: 'gemini-2.5-flash-lite',
    publisher: 'google',
    project: '<google-cloud-project>',
    region: '<gcp-region>',
    vendorOptions: {
      systemInstruction: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.completion(
  'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies,  (5)
  'vertexai',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → VertexAI.
5 Prompt for the AI model.
result

"The most child-friendly movie is Cosmos.

It is an educational documentary about space, lacking the violence and mature themes of the other options.

Would you like to know the recommended age range for Cosmos?"

The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.nova-micro-v1:0',
    region: '<region>',
    vendorOptions: {
      system: [{ text: 'Be short' }]
    }
  } AS config  (4)
RETURN ai.text.completion(
  'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies,  (5)
  'bedrock-nova',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → Amazon Bedrock Nova.
5 Prompt for the AI model.
result

"Cosmos is the most child-friendly. It’s an educational series about space and science, perfect for young minds."

Chat with context

The function ai.text.chat allows you to exchange several messages with an LLM, as part of a single thread.

Chats are only supported with the OpenAI and Azure OpenAI providers. Use CALL ai.text.chat.providers() (see reference) to see supported providers and their configuration options.
Signature for ai.text.chat

Syntax

ai.text.chat(prompt, chatId, provider, configuration = {}) :: MAP

Description

Chat based on the specified prompt, optionally continuing a previous interaction.

Inputs

prompt (STRING)
  The user message to send.

chatId (STRING)
  Chat ID of a previous response, used to continue that conversation. Set it to null for the first message of a new conversation.

provider (STRING)
  The identifier of the provider: 'azure-openai' or 'openai'. See Providers.

configuration (MAP)
  Provider-specific options. See Providers.

Returns

MAP
  Contains message, with the response text, and chatId for follow-up messages.
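Since the return value is a MAP, the two keys can be unpacked directly. A minimal sketch using OpenAI:

```cypher
WITH ai.text.chat(
  'Name one movie released in 1999.',
  null,
  'openai',
  { token: $openaiToken, model: 'gpt-5-nano' }
) AS result
RETURN result.message AS message, result.chatId AS chatId
```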

Start a new chat

The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick a good documentary.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.chat(
  'Here is a list of movies with their titles and plots. I like space documentaries. Any recommendations?\n\n' + movies,  (5)
  null,  (6)
  'openai',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → OpenAI.
5 Prompt for the AI model.
6 Set chatId to null to start a new conversation.
result
{
  message: "If you want space-only docs from your list:
- Cosmos (Carl Sagan) — classic, big-picture tour of the universe.
- From the Earth to the Moon — docudrama about the Apollo program.

Want more beyond this list? I can add newer options like Apollo 11 (2019) or The Farthest (Voyager missions).",
  chatId: "resp_091b98b67dee7acb006964c14aded4819490dc26975ceb91ce"
}
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50  (1)
WITH n.title + ': ' + n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies,  (3)
  {
    token: $azureOpenaiToken,
    resource: '<azure-openai-resource>',
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.chat(
  'Here is a list of movies with their titles and plots. I like space documentaries. Any recommendations?\n\n' + movies, (5)
  null,  (6)
  'azure-openai',
  config
) AS result
1 Pick the 50 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider-specific configuration, see Providers → Azure OpenAI.
5 Prompt for the AI model.
6 Set chatId to null to start a new conversation.
result
{
  message: "If you want space-only docs from your list:
- Cosmos (Carl Sagan) — classic, big-picture tour of the universe.
- From the Earth to the Moon — docudrama about the Apollo program.

Want more beyond this list? I can add newer options like Apollo 11 (2019) or The Farthest (Voyager missions).",
  chatId: "resp_0c3bb8d81c23d3e5006964bec5ea68819593d9d55ee89357e2"
}

Continue a chat using an existing chat ID

The example below continues the chat from the previous example, asking for a specific category of documentaries.

The returned chat ID is different for every call, so pass the most recent one with each request if you want the model to keep the context of the whole chat. Passing an earlier chat ID instead forks the conversation, spawning a new thread from the context provided up to that point.
Although chat IDs are scoped to the API token that generated them, treat them as sensitive information: further messages may ask the model to reveal previously provided information (e.g. What was my initial request?).
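Chaining can also be illustrated within a single query, by feeding the chatId of the first response into a second call (a sketch, using OpenAI):

```cypher
WITH { token: $openaiToken, model: 'gpt-5-nano' } AS config
WITH config, ai.text.chat('Name a space documentary.', null, 'openai', config) AS first
// The second call continues the thread started by the first one
RETURN ai.text.chat('Name another one.', first.chatId, 'openai', config).message AS followUp
```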
WITH
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config
RETURN ai.text.chat(
  'Could you suggest something more animal oriented?',  (1)
  $chatId,  (2)
  'openai',
  config
) AS result
1 The next prompt to send.
2 Reuse the chat ID from a previous response to continue the conversation. For example, pass it via a parameter.
result
{
  message: "Here are animal-oriented picks:

- Planet Earth (2006) – Stunning wildlife across habitats; classic nature documentary.
- Blue Planet II (2017) – Incredible marine life and ocean stories.
- My Octopus Teacher (2020) – Intimate, poetic look at one octopus and its world.
- The Cove (2009) – Investigative look at dolphin hunting; timely and provocative.
- Our Planet (2019) – Global nature series with beautiful visuals and conservation context.

Want streaming options or region help?",
  chatId: "resp_091b98b67dee7acb006964c4e9a6808194b6de0b1b00b307ec"
}
WITH
  {
    token: $azureOpenaiToken,
    resource: '<azure-openai-resource>',
    model: 'gpt-5-nano',
    vendorOptions: {
      instructions: 'Be short.'
    }
  } AS config
RETURN ai.text.chat(
  'Could you suggest something more animal oriented?',  (1)
  $chatId,  (2)
  'azure-openai',
  config
) AS result
1 The next prompt to send.
2 Reuse the chat ID from a previous response to continue the conversation. For example, pass it via a parameter.
result
{
  message: "Here are animal-oriented picks:

- Planet Earth (2006) – Stunning wildlife across habitats; classic nature documentary.
- Blue Planet II (2017) – Incredible marine life and ocean stories.
- My Octopus Teacher (2020) – Intimate, poetic look at one octopus and its world.
- The Cove (2009) – Investigative look at dolphin hunting; timely and provocative.
- Our Planet (2019) – Global nature series with beautiful visuals and conservation context.

Want streaming options or region help?",
  chatId: "resp_091b98b67dee7acb006964c6476ee08194a3f6e5f2bad4ddb8"
}

Run multiple requests in parallel

To run text completion for several prompts in parallel, you have two options:

  1. Use CALL {…​} IN CONCURRENT TRANSACTIONS, with 1 row per transaction:

    WITH ['Name a movie', 'Name a book'] AS prompts
    UNWIND prompts AS prompt
    CALL(prompt) {
      WITH
      {
        token: $openaiToken,
        model: 'gpt-5-nano'
      } AS config
      RETURN ai.text.completion(prompt, 'openai', config) AS response
    } IN CONCURRENT TRANSACTIONS OF 1 ROW
    RETURN response
  2. Use Cypher’s parallel runtime (Enterprise Edition only):

    CYPHER runtime=parallel
    WITH
      {
        token: $openaiToken,
        model: 'gpt-5-nano'
      } AS config,
      ['Name a movie', 'Name a book'] AS prompts
    UNWIND prompts AS prompt
    RETURN ai.text.completion(prompt, 'openai', config) AS result

Providers

You can generate text via the following providers:

  • OpenAI (openai)

  • Azure OpenAI (azure-openai)

  • Google Vertex AI (vertexai)

  • Amazon Bedrock Nova Models (bedrock-nova)

  • Amazon Bedrock Titan Models (see below)

OpenAI

OpenAI parameters
model (STRING, default: -)
  Model ID (see OpenAI → Models).

token (STRING, default: -)
  OpenAI API key (see OpenAI → API Keys).

chatHistory (LIST<ANY>, default: [])
  Optional conversation history to provide context to the model. Pass a list of MAP values with structure:
    • role: either user or assistant
    • content: a STRING representing a message

vendorOptions (MAP, default: {})
  Optional vendor options that will be passed on as-is in the request to OpenAI (see OpenAI → Create a model response).

Usage example
WITH
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    },
    chatHistory: [
      {
        role: "user",
        content: "My favorite movies are in the Fantasy or Sci-fi genre."
      },
      {
        role: "assistant",
        content: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?"
      }
    ]
  } AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'openai', conf) AS result
You can change OpenAI’s base URL (default: https://api.openai.com) via the genai.openai.baseurl setting. The change applies to all ai.text.* calls that use OpenAI, including ai.text.embed, ai.text.embedBatch and ai.text.completion. See Configuration Options → genai.openai.baseurl.
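For example, to route all OpenAI calls through an OpenAI-compatible endpoint (the URL below is a placeholder):

```
# neo4j.conf
genai.openai.baseurl=https://my-openai-proxy.example.com
```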

Azure OpenAI

Azure OpenAI parameters
model (STRING, default: -)
  Model ID (see Azure → Azure OpenAI in Foundry Models).

resource (STRING, default: -)
  Azure resource name.

token (STRING, default: -)
  Azure OAuth2 bearer token.

chatHistory (LIST<ANY>, default: [])
  Optional conversation history to provide context to the model. Pass a list of MAP values with structure:
    • role: either user or assistant
    • content: a STRING representing a message

vendorOptions (MAP, default: {})
  Optional vendor options that will be passed on as-is in the request to Azure.

Usage example
WITH
  {
    token: $azureToken,
    resource: 'my-azure-openai-resource',
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    },
    chatHistory: [
      {
        role: "user",
        content: "My favorite movies are in the Fantasy or Sci-fi genre."
      },
      {
        role: "assistant",
        content: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?"
      }
    ]
  } AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'azure-openai', conf) AS result

Google VertexAI

VertexAI parameters
model (STRING, default: -)
  Model resource name (see Vertex AI → Model Garden).

project (STRING, default: -)
  Google Cloud project ID.

region (STRING, default: -)
  Google Cloud region (see Vertex AI → Locations).

publisher (STRING, default: 'google')
  Model publisher.

apiKey (STRING, default: -)
  Vertex AI API key.

token (STRING, default: -)
  Vertex AI access token.

chatHistory (LIST<ANY>, default: [])
  Optional conversation history to provide context to the model. Pass a list of MAP values with structure:
    • role: either user or model
    • parts: a LIST containing a MAP with the key text and a value representing a message

vendorOptions (MAP, default: {})
  Optional vendor options that will be passed on as-is in the request to VertexAI (see Vertex AI → Method: models.generateContent).

Exactly one of apiKey or token must be provided.
Usage example
WITH
  {
    token: $vertexaiApiAccessKey,
    model: 'gemini-2.5-flash-lite',
    publisher: 'google',
    project: 'my-google-cloud-project',
    region: 'asia-northeast1',
    vendorOptions: {
      systemInstruction: 'Be short.'
    },
    chatHistory: [
      {
        role: "user",
        parts: [
          { text: "My favorite movies are in the Fantasy or Sci-fi genre." }
        ]
      },
      {
        role: "model",
        parts: [
          { text: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?" }
        ]
      }
    ]
  } AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'vertexai', conf) AS result

Amazon Bedrock Nova Models

This provider supports all models that use the same request parameters and response fields as the Nova text models.

Amazon Bedrock Nova parameters
model (STRING, default: -)
  Model ID or its ARN.

region (STRING, default: -)
  Amazon region (see Amazon Bedrock → Model Support).

accessKeyId (STRING, default: -)
  Amazon access key ID.

secretAccessKey (STRING, default: -)
  Amazon secret access key.

chatHistory (LIST<ANY>, default: [])
  Optional conversation history to provide context to the model. Pass a list of MAP values with structure:
    • role: either user or assistant
    • content: a LIST containing a MAP with the key text and a value representing a message

vendorOptions (MAP, default: {})
  Optional vendor options that will be passed on as-is in the request to Bedrock (see Amazon Bedrock → Inference request parameters and response fields).

Usage example
WITH
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.nova-micro-v1:0',
    region: 'eu-west-2',
    vendorOptions: {
      system: [{ text: 'Be short' }],
      inferenceConfig: { maxTokens: 1024 }
    },
    chatHistory: [
       {
         role: "user",
         content: [
           { text: "My favorite movies are in the Fantasy or Sci-fi genre." }
         ]
       },
       {
         role: "assistant",
         content: [
           { text: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?" }
         ]
       }
    ]
  } AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'bedrock-nova', conf) AS result

Amazon Bedrock Titan Models

This provider supports all models that use the same request parameters and response fields as the Titan text models. Configuration and usage are similar to Bedrock Nova Models; however, chatHistory is not supported.

Titan models have different vendorOptions, see Amazon Bedrock → Titan Text Models.
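As an illustration only: the provider identifier and model ID below are assumptions, so verify them with CALL ai.text.completion.providers() and your Bedrock console before use. A Titan configuration might look like:

```cypher
WITH
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.titan-text-express-v1',  // assumed model ID, check your Bedrock console
    region: 'eu-west-2',
    vendorOptions: {
      // Titan-style options; note there is no chatHistory support
      textGenerationConfig: { maxTokenCount: 1024 }
    }
  } AS conf
// 'bedrock-titan' is an assumed identifier, verify via ai.text.completion.providers()
RETURN ai.text.completion('Recommend a movie', 'bedrock-titan', conf) AS result
```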