Generate text & Chat
Environment setup
The examples on this page use the Neo4j movie recommendations dataset, focusing on the plot and title properties of Movie nodes.
There are 9083 Movie nodes with a plot and title property.
To recreate the graph, download and import this dump file into an empty Neo4j database. Dump files can be imported for both Aura and self-managed instances.
Generate text from prompt
Introduced in 2025.11
The function ai.text.completion can be used to generate text based on a textual input prompt.
It is similar to submitting a prompt to an LLM.
Use CALL ai.text.completion.providers() (see reference) to see supported providers and their configuration options.
Syntax

ai.text.completion(prompt, provider, configuration)

| Description |
|---|
| Generate text output based on the provided prompt. |

Inputs

| Name | Type | Description |
|---|---|---|
| prompt | STRING | Textual prompt. |
| provider | STRING | Name of the third-party AI provider, see Providers. |
| configuration | MAP | Provider-specific configuration, see Providers. |

Returns

| Type | Description |
|---|---|
| STRING | Generated text based on the provided prompt. |
The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies, (5)
'openai',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → OpenAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)." |
The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
token: $azureOpenaiToken,
resource: '<azure-openai-resource>',
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies, (5)
'azure-openai',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → Azure OpenAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)." |
The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
token: $vertexaiToken,
model: 'gemini-2.5-flash-lite',
publisher: 'google',
project: '<google-cloud-project>',
region: '<gcp-region>',
vendorOptions: {
systemInstruction: 'Be short.'
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies, (5)
'vertexai',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → VertexAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"The most child-friendly movie is Cosmos. It is an educational documentary about space, lacking the violence and mature themes of the other options. Would you like to know the recommended age range for Cosmos?" |
The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
accessKeyId: $awsAccessKeyId,
secretAccessKey: $secretAccessKey,
model: 'amazon.nova-micro-v1:0',
region: '<region>',
vendorOptions: {
system: [{ text: 'Be short' }]
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' + movies, (5)
'bedrock-nova',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → Amazon Bedrock Nova. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos is the most child-friendly. It’s an educational series about space and science, perfect for young minds." |
Chat with context
Introduced in 2025.12
The function ai.text.chat allows you to exchange several messages with an LLM, as part of a single thread.
Chats are only supported with the OpenAI and Azure OpenAI providers.
Use CALL ai.text.chat.providers() (see reference) to see supported providers and their configuration options.
Syntax

ai.text.chat(prompt, chatId, provider, configuration)

| Description |
|---|
| Chat based on the specified prompt, optionally continuing a previous interaction. |

Inputs

| Name | Type | Description |
|---|---|---|
| prompt | STRING | The user message to send. |
| chatId | STRING | Previous chat ID to continue the conversation. If this is the first message in the conversation, set it to null. |
| provider | STRING | The identifier of the provider: openai or azure-openai. |
| configuration | MAP | Provider-specific options. See Providers. |

Returns

| Description |
|---|
| Contains the model's reply and the chat ID to pass to follow-up calls. |
Start a new chat
The example below selects the 50 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick a good documentary.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.chat(
'Here is a list of movies with their titles and plots. I like space documentaries. Any recommendations?\n\n' + movies, (5)
null, (6)
'openai',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → OpenAI. |
| 5 | Prompt for the AI model. |
| 6 | Set chatId to null to start a new conversation. |
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 50 (1)
WITH n.title + ': ' + n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc + item + '\n') AS movies, (3)
{
token: $azureOpenaiToken,
resource: '<azure-openai-resource>',
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.chat(
'Here is a list of movies with their titles and plots. I like space documentaries. Any recommendations?\n\n' + movies, (5)
null, (6)
'azure-openai',
config
) AS result
| 1 | Pick the 50 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider specific configuration, see Providers → Azure OpenAI. |
| 5 | Prompt for the AI model. |
| 6 | Set chatId to null to start a new conversation. |
Continue a chat using an existing chat ID
The example below continues the chat from the previous example, asking for a specific category of documentaries.
| The returned chat ID is different for every call, so you must pass the most recently returned chat ID with each request if you want the model to keep the context of the whole chat. Passing an older chat ID instead forks the conversation into a new thread that shares the context provided up to that point. |
| Even though chat IDs are scoped to the API token that generated them, treat them as sensitive information: further messages might ask the model to reveal previously provided information (e.g. 'What was my initial request?'). |
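Forking can be sketched as two follow-up calls that reuse the same chat ID (assuming $chatId holds the ID returned by an earlier ai.text.chat response): both continue from the same point in the conversation, and each response carries its own new chat ID, so the two threads evolve independently from there on.

```cypher
WITH
{
  token: $openaiToken,
  model: 'gpt-5-nano'
} AS config
// Both calls fork the conversation at the same point;
// each returned result contains its own new, distinct chat ID.
RETURN
  ai.text.chat('Could you suggest a documentary instead?', $chatId, 'openai', config) AS threadA,
  ai.text.chat('Could you suggest a comedy instead?', $chatId, 'openai', config) AS threadB
```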
WITH
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config
RETURN ai.text.chat(
'Could you suggest something more animal oriented?', (1)
$chatId, (2)
'openai',
config
) AS result
| 1 | The next prompt to send. |
| 2 | Reuse the chat ID from a previous response to continue the conversation. For example, pass it via a parameter. |
WITH
{
token: $azureOpenaiToken,
resource: '<azure-openai-resource>',
model: 'gpt-5-nano',
vendorOptions: {
instructions: 'Be short.'
}
} AS config
RETURN ai.text.chat(
'Could you suggest something more animal oriented?', (1)
$chatId, (2)
'azure-openai',
config
) AS result
| 1 | The next prompt to send. |
| 2 | Reuse the chat ID from a previous response to continue the conversation. For example, pass it via a parameter. |
Run multiple requests in parallel
To run text completion for several prompts in parallel, you have two options:
-
Use CALL {…} IN CONCURRENT TRANSACTIONS, with 1 row per transaction:

WITH ['Name a movie', 'Name a book'] AS prompts
UNWIND prompts AS prompt
CALL (prompt) {
  WITH { token: $openaiToken, model: 'gpt-5-nano' } AS config
  RETURN ai.text.completion(prompt, 'openai', config) AS response
} IN CONCURRENT TRANSACTIONS OF 1 ROW
RETURN response

-
Use Cypher’s parallel runtime (Enterprise Edition):

CYPHER runtime=parallel
WITH
  { token: $openaiToken, model: 'gpt-5-nano' } AS config,
  ['Name a movie', 'Name a book'] AS prompts
UNWIND prompts AS prompt
RETURN ai.text.completion(prompt, 'openai', config) AS result
Providers
You can generate text via the following providers:
-
OpenAI (openai)
-
Azure OpenAI (azure-openai)
-
Google Vertex AI (vertexai)
-
Amazon Bedrock Nova Models (bedrock-nova)
OpenAI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID (see OpenAI → Models). |
| token | STRING | - | OpenAI API key (see OpenAI → API Keys). |
| chatHistory | LIST<MAP> | - | Optional conversation history to provide context to the model. Pass a list of messages with role and content fields, as in the example below. |
| vendorOptions | MAP | - | Optional vendor options that will be passed on as-is in the request to OpenAI (see OpenAI → Create a model response). |
WITH
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
},
chatHistory: [
{
role: "user",
content: "My favorite movies are in the Fantasy or Sci-fi genre."
},
{
role: "assistant",
content: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?"
}
]
} AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'openai', conf) AS result
You can change OpenAI’s base URL (default: https://api.openai.com) via the genai.openai.baseurl setting.
The change applies to all ai.text.* calls that use OpenAI, including ai.text.embed, ai.text.embedBatch and ai.text.completion.
See Configuration Options → genai.openai.baseurl.
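As an example, to route all OpenAI-bound calls through an OpenAI-compatible gateway (the URL below is a placeholder), you could set the following in neo4j.conf:

```
genai.openai.baseurl=https://my-openai-gateway.example.com
```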
Azure OpenAI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID (see Azure → Azure OpenAI in Foundry Models). |
| resource | STRING | - | Azure resource name. |
| token | STRING | - | Azure OAuth2 bearer token. |
| chatHistory | LIST<MAP> | - | Optional conversation history to provide context to the model. Pass a list of messages with role and content fields, as in the example below. |
| vendorOptions | MAP | - | Optional vendor options that will be passed on as-is in the request to Azure. |
WITH
{
token: $azureToken,
resource: 'my-azure-openai-resource',
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
},
chatHistory: [
{
role: "user",
content: "My favorite movies are in the Fantasy or Sci-fi genre."
},
{
role: "assistant",
content: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?"
}
]
} AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'azure-openai', conf) AS result
Google VertexAI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model resource name (see Vertex AI → Model Garden). |
| project | STRING | - | Google Cloud project ID. |
| region | STRING | - | Google Cloud region (see Vertex AI → Locations). |
| publisher | STRING | 'google' | Model publisher. |
| apiKey | STRING | - | Vertex AI API key. |
| token | STRING | - | Vertex AI access token. |
| chatHistory | LIST<MAP> | - | Optional conversation history to provide context to the model. Pass a list of messages with role and parts fields, as in the example below. |
| vendorOptions | MAP | - | Optional vendor options that will be passed on as-is in the request to Vertex AI (see Vertex AI → Method: models.generateContent). |
| Exactly one of apiKey or token must be provided. |
WITH
{
token: $vertexaiApiAccessKey,
model: 'gemini-2.5-flash-lite',
publisher: 'google',
project: 'my-google-cloud-project',
region: 'asia-northeast1',
vendorOptions: {
systemInstruction: 'Be short.'
},
chatHistory: [
{
role: "user",
parts: [
{ text: "My favorite movies are in the Fantasy or Sci-fi genre." }
]
},
{
role: "model",
parts: [
{ text: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?" }
]
}
]
} AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'vertexai', conf) AS result
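Since exactly one of apiKey and token may be set, the same request can also authenticate with an API key instead of an access token. A minimal sketch (project and region are placeholders):

```cypher
WITH
{
  apiKey: $vertexaiApiKey,  // API key instead of an OAuth2 access token
  model: 'gemini-2.5-flash-lite',
  project: 'my-google-cloud-project',
  region: 'asia-northeast1'
} AS conf
RETURN ai.text.completion('Recommend a movie to me', 'vertexai', conf) AS result
```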
Amazon Bedrock Nova Models
This provider supports all models that use the same request parameters and response fields as the Nova text models.
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID or its ARN. |
| region | STRING | - | Amazon region (see Amazon Bedrock → Model Support). |
| accessKeyId | STRING | - | Amazon access key ID. |
| secretAccessKey | STRING | - | Amazon secret access key. |
| chatHistory | LIST<MAP> | - | Optional conversation history to provide context to the model. Pass a list of messages with role and content fields, as in the example below. |
| vendorOptions | MAP | - | Optional vendor options that will be passed on as-is in the request to Bedrock (see Amazon Bedrock → Inference request parameters and response fields). |
WITH
{
accessKeyId: $awsAccessKeyId,
secretAccessKey: $secretAccessKey,
model: 'amazon.nova-micro-v1:0',
region: 'eu-west-2',
vendorOptions: {
system: [{ text: 'Be short' }],
inferenceConfig: { maxTokens: 1024 }
},
chatHistory: [
{
role: "user",
content: [
{ text: "My favorite movies are in the Fantasy or Sci-fi genre." }
]
},
{
role: "assistant",
content: [
{ text: "Nice! fantasy and sci-fi have great vibes. Want tailored recs?" }
]
}
]
} AS conf
RETURN ai.text.completion('Yes, please recommend a movie to me', 'bedrock-nova', conf) AS result
Amazon Bedrock Titan Models
This provider supports all models that use the same request parameters and response fields as the Titan text models.
Configuration and usage are similar to the Bedrock Nova models; however, chatHistory is not supported.
Titan models have different vendorOptions, see Amazon Bedrock → Titan Text Models.
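A minimal Titan call might look as follows. The provider identifier 'bedrock-titan' is an assumption here; verify the exact name with CALL ai.text.completion.providers(). The vendorOptions follow Titan's textGenerationConfig request schema instead of Nova's.

```cypher
WITH
{
  accessKeyId: $awsAccessKeyId,
  secretAccessKey: $secretAccessKey,
  model: 'amazon.titan-text-express-v1',
  region: 'eu-west-2',
  vendorOptions: {
    // Titan-specific generation parameters
    textGenerationConfig: { maxTokenCount: 512, temperature: 0.5 }
  }
} AS conf
// 'bedrock-titan' is assumed -- check ai.text.completion.providers()
RETURN ai.text.completion('Recommend a movie to me', 'bedrock-titan', conf) AS result
```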