Generate text
Introduced in 2025.11
The function ai.text.completion can be used to generate text based on a textual input prompt.
It is similar to submitting a prompt to an LLM.
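As a minimal sketch, a single call takes a prompt, a provider name, and a provider-specific configuration map; the example below assumes an OpenAI API key passed via the $openaiToken parameter, matching the OpenAI example later on this page:

```cypher
// Minimal call: prompt, provider name, provider-specific config map
RETURN ai.text.completion(
  'Name a movie',
  'openai',
  { token: $openaiToken, model: 'gpt-5-nano' }
) AS result
```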
Syntax

ai.text.completion(prompt, provider, config)

Description

Generate text output based on the provided prompt.

Inputs

| Name | Type | Description |
|---|---|---|
| prompt | STRING | Textual prompt. |
| provider | STRING | Name of the third-party AI provider, see Providers. |
| config | MAP | Provider-specific configuration, see Providers. |

Returns

Generated text based on the provided prompt.
Example
The examples on this page use the Neo4j movie recommendations dataset, focusing on the plot and title properties of Movie nodes.
There are 9083 Movie nodes with a plot and title property.
To recreate the graph, download and import this dump file into an empty Neo4j database. Dump files can be imported for both Aura and self-managed instances.
The example below selects the 5 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks the external AI provider to pick the most child-friendly one.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5 (1)
WITH n.title || ': ' || n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies, (3)
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies, (5)
'openai',
config
) AS result
| 1 | Pick the 5 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider-specific configuration, see Providers → OpenAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)." |
The example below does the same using Azure OpenAI.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5 (1)
WITH n.title || ': ' || n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies, (3)
{
token: $azureOpenaiToken,
resource: '<azure-openai-resource>',
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies, (5)
'azure-openai',
config
) AS result
| 1 | Pick the 5 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider-specific configuration, see Providers → Azure OpenAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)." |
The example below does the same using Google Vertex AI.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5 (1)
WITH n.title || ': ' || n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies, (3)
{
token: $vertexaiToken,
model: 'gemini-2.5-flash-lite',
publisher: 'google',
project: '<google-cloud-project>',
region: '<gcp-region>',
vendorOptions: {
systemInstruction: 'Be short.',
generationConfig: {
maxOutputTokens: 1024
}
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies, (5)
'vertexai',
config
) AS result
| 1 | Pick the 5 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider-specific configuration, see Providers → VertexAI. |
| 5 | Prompt for the AI model. |
| result |
|---|
"The most child-friendly movie is Cosmos. It is an educational documentary about space, lacking the violence and mature themes of the other options. Would you like to know the recommended age range for Cosmos?" |
The example below does the same using Amazon Bedrock Nova.
MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5 (1)
WITH n.title || ': ' || n.plot AS movie (2)
WITH
reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies, (3)
{
accessKeyId: $awsAccessKeyId,
secretAccessKey: $secretAccessKey,
model: 'amazon.nova-micro-v1:0',
region: '<region>',
vendorOptions: {
system: [{ text: 'Be short' }],
inferenceConfig: { maxTokens: 1024 }
}
} AS config (4)
RETURN ai.text.completion(
'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies, (5)
'bedrock-nova',
config
) AS result
| 1 | Pick the 5 top-rated movies. |
| 2 | Join title and plot as <title>: <plot>. |
| 3 | Join all movies into a single, newline-separated string. |
| 4 | Provider-specific configuration, see Providers → Amazon Bedrock Nova. |
| 5 | Prompt for the AI model. |
| result |
|---|
"Cosmos is the most child-friendly. It’s an educational series about space and science, perfect for young minds." |
Run multiple requests in parallel
To run text completion for several prompts in parallel, you have two options:

- Use CALL {…} IN CONCURRENT TRANSACTIONS, with 1 row per transaction:

WITH ['Name a movie', 'Name a book'] AS prompts
UNWIND prompts AS prompt
CALL(prompt) {
  WITH { token: $openaiToken, model: 'gpt-5-nano' } AS config
  RETURN ai.text.completion(prompt, 'openai', config) AS response
} IN CONCURRENT TRANSACTIONS OF 1 ROW
RETURN response

- Use Cypher's parallel runtime (Enterprise Edition):

CYPHER runtime=parallel
WITH { token: $openaiToken, model: 'gpt-5-nano' } AS config,
  ['Name a movie', 'Name a book'] AS prompts
UNWIND prompts AS prompt
RETURN ai.text.completion(prompt, 'openai', config) AS result
Providers
You can generate text via the following providers:
- OpenAI (openai)
- Azure OpenAI (azure-openai)
- Google Vertex AI (vertexai)
- Amazon Bedrock Nova Models (bedrock-nova)
The query CALL ai.text.completion.providers() (see reference) shows the list of supported providers in the installed version of the plugin.
OpenAI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID (see OpenAI → Models). |
| token | STRING | - | OpenAI API key (see OpenAI → API Keys). |
| vendorOptions | MAP | | Optional vendor options that will be passed on as-is in the request to OpenAI (see OpenAI → Create a model response). |
WITH
{
token: $openaiToken,
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
}
} AS conf
RETURN ai.text.completion('Name a movie', 'openai', conf) AS result
Azure OpenAI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID (see Azure → Azure OpenAI in Foundry Models). |
| resource | STRING | - | Azure resource name. |
| token | STRING | - | Azure OAuth2 bearer token. |
| vendorOptions | MAP | | Optional vendor options that will be passed on as-is in the request to Azure. |
WITH
{
token: $azureToken,
resource: 'my-azure-openai-resource',
model: 'gpt-5-nano',
vendorOptions: {
max_output_tokens: 1024,
instructions: 'Be short.'
}
} AS conf
RETURN ai.text.completion('Name a movie', 'azure-openai', conf) AS result
Google Vertex AI
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model resource name (see Vertex AI → Model Garden). |
| project | STRING | - | Google Cloud project ID. |
| region | STRING | - | Google Cloud region (see Vertex AI → Locations). |
| publisher | STRING | 'google' | Model publisher. |
| token | STRING | - | Vertex API access token. |
| vendorOptions | MAP | | Optional vendor options that will be passed on as-is in the request to Vertex AI (see Vertex AI → Method: models.generateContent). |
WITH
{
token: $vertexaiApiAccessKey,
model: 'gemini-2.5-flash-lite',
publisher: 'google',
project: 'my-google-cloud-project',
region: 'asia-northeast1',
vendorOptions: {
systemInstruction: 'Be short.'
}
} AS conf
RETURN ai.text.completion('Name a movie', 'vertexai', conf) AS result
Amazon Bedrock Nova Models
This provider supports all models that use the same request parameters and response fields as the Nova text models.
| Name | Type | Default | Description |
|---|---|---|---|
| model | STRING | - | Model ID or its ARN. |
| region | STRING | - | Amazon region (see Amazon Bedrock → Model Support). |
| accessKeyId | STRING | - | Amazon access key ID. |
| secretAccessKey | STRING | - | Amazon secret access key. |
| vendorOptions | MAP | | Optional vendor options that will be passed on as-is in the request to Bedrock (see Amazon Bedrock → Inference request parameters and response fields). |
WITH
{
accessKeyId: $awsAccessKeyId,
secretAccessKey: $secretAccessKey,
model: 'amazon.nova-micro-v1:0',
region: 'eu-west-2',
vendorOptions: {
system: [{ text: 'Be short' }],
inferenceConfig: { maxTokens: 1024 }
}
} AS conf
RETURN ai.text.completion('Name a movie', 'bedrock-nova', conf) AS result
Amazon Bedrock Titan Models
This provider supports all models that use the same request parameters and response fields as the Titan text models. Configuration and usage are similar to Bedrock Nova Models.
Titan models have different vendorOptions, see Amazon Bedrock → Titan Text Models.
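As an illustrative sketch only: the provider identifier below (bedrock-titan) is an assumption made by analogy with bedrock-nova and is not stated on this page, and the vendorOptions shape follows Amazon's published Titan text request schema (textGenerationConfig with fields such as maxTokenCount). Check the provider list returned by CALL ai.text.completion.providers() for the actual identifier in your installation.

```cypher
// Hypothetical Titan sketch: provider name assumed, vendorOptions per
// Amazon Bedrock's Titan text request schema (textGenerationConfig)
WITH
{
  accessKeyId: $awsAccessKeyId,
  secretAccessKey: $secretAccessKey,
  model: 'amazon.titan-text-express-v1',
  region: 'eu-west-2',
  vendorOptions: {
    textGenerationConfig: { maxTokenCount: 1024 }
  }
} AS config
RETURN ai.text.completion('Name a movie', 'bedrock-titan', config) AS result
```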