Generate text

The function ai.text.completion can be used to generate text based on a textual input prompt. It is similar to submitting a prompt to an LLM.

Signature for ai.text.completion

Syntax

ai.text.completion(prompt, provider, configuration) :: STRING

Description

Generate text output based on the provided prompt.

Inputs

Name          | Type   | Description
prompt        | STRING | Textual prompt.
provider      | STRING | Name of the third-party AI provider, see Providers.
configuration | MAP    | Provider-specific configuration, see Providers.

Returns

Generated text based on the provided prompt.
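As a minimal sketch, the function can be called with an inline configuration map (this assumes an OpenAI API key is available in the $openaiToken parameter; see Providers → OpenAI for the full set of configuration options):

RETURN ai.text.completion(
  'Name a movie',
  'openai',
  { token: $openaiToken, model: 'gpt-5-nano' }
) AS result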

Example

The examples on this page use the Neo4j movie recommendations dataset, focusing on the plot and title properties of Movie nodes. There are 9083 Movie nodes with a plot and title property.

To recreate the graph, download and import this dump file into an empty Neo4j database. Dump files can be imported for both Aura and self-managed instances.

The example below selects the 5 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks OpenAI to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5  (1)
WITH n.title || ': ' || n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies,  (3)
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.completion(
    'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies,  (5)
    'openai',
    config
) AS result
1 Pick the 5 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider specific configuration, see Providers → OpenAI.
5 Prompt for the AI model.
result

"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)."

The example below selects the 5 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks Azure OpenAI to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5  (1)
WITH n.title || ': ' || n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies,  (3)
  {
    token: $azureOpenaiToken,
    resource: '<azure-openai-resource>',
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    }
  } AS config  (4)
RETURN ai.text.completion(
    'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies,  (5)
    'azure-openai',
    config
) AS result
1 Pick the 5 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider specific configuration, see Providers → Azure OpenAI.
5 Prompt for the AI model.
result

"Cosmos. It’s an educational, non-graphic science documentary about the universe, making it the most child-friendly option here (parental guidance for younger kids)."

The example below selects the 5 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks Google Vertex AI to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5  (1)
WITH n.title || ': ' || n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies,  (3)
  {
    token: $vertexaiToken,
    model: 'gemini-2.5-flash-lite',
    publisher: 'google',
    project: '<google-cloud-project>',
    region: '<gcp-region>',
    vendorOptions: {
      systemInstruction: 'Be short.',
      generationConfig: {
        maxOutputTokens: 1024
      }
    }
  } AS config  (4)
RETURN ai.text.completion(
    'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies,  (5)
    'vertexai',
    config
) AS result
1 Pick the 5 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider specific configuration, see Providers → VertexAI.
5 Prompt for the AI model.
result

"The most child-friendly movie is Cosmos.

It is an educational documentary about space, lacking the violence and mature themes of the other options.

Would you like to know the recommended age range for Cosmos?"

The example below selects the 5 top-rated movies, joins their titles and plots into a single, newline-separated string, and asks Amazon Bedrock (Nova) to pick the most child-friendly one.

MATCH (n:Movie) WHERE n.imdbRating IS NOT NULL
ORDER BY n.imdbRating DESC LIMIT 5  (1)
WITH n.title || ': ' || n.plot AS movie  (2)
WITH
  reduce(acc = '', item IN collect(movie) | acc || item || '\n') AS movies,  (3)
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.nova-micro-v1:0',
    region: '<region>',
    vendorOptions: {
      system: [{ text: 'Be short' }],
      inferenceConfig: { maxTokens: 1024 }
    }
  } AS config  (4)
RETURN ai.text.completion(
    'Here is a list of movies with their titles and plots. Recommend the most child-friendly one.\n\n' || movies,  (5)
    'bedrock-nova',
    config
) AS result
1 Pick the 5 top-rated movies.
2 Join title and plot as <title>: <plot>.
3 Join all movies into a single, newline-separated string.
4 Provider specific configuration, see Providers → Amazon Bedrock Nova.
5 Prompt for the AI model.
result

"Cosmos is the most child-friendly. It’s an educational series about space and science, perfect for young minds."

Run multiple requests in parallel

To run text completion for several prompts in parallel, you have two options:

  1. Use CALL {…} IN CONCURRENT TRANSACTIONS, with 1 row per transaction:

    WITH ['Name a movie', 'Name a book'] AS prompts
    UNWIND prompts AS prompt
    CALL(prompt) {
      WITH
      {
        token: $openaiToken,
        model: 'gpt-5-nano'
      } AS config
      RETURN ai.text.completion(prompt, 'openai', config) AS response
    } IN CONCURRENT TRANSACTIONS OF 1 ROW
    RETURN response
  2. Use Cypher’s parallel runtime (Enterprise Edition only):

    CYPHER runtime=parallel
    WITH
      {
        token: $openaiToken,
        model: 'gpt-5-nano'
      } AS config,
      ['Name a movie', 'Name a book'] AS prompts
    UNWIND prompts AS prompt
    RETURN ai.text.completion(prompt, 'openai', config) AS result

Providers

You can generate text via the following providers:

  • OpenAI (openai)

  • Azure OpenAI (azure-openai)

  • Google Vertex AI (vertexai)

  • Amazon Bedrock Nova Models (bedrock-nova)

The query CALL ai.text.completion.providers() (see reference) lists the providers supported by the installed version of the plugin.
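To check which provider identifiers your installation supports, run the procedure directly; each returned row names one supported provider:

CALL ai.text.completion.providers()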

OpenAI

OpenAI parameters
Name          | Type   | Default | Description
model         | STRING | -       | Model ID (see OpenAI → Models).
token         | STRING | -       | OpenAI API key (see OpenAI → API Keys).
vendorOptions | MAP    | {}      | Optional vendor options that will be passed on as-is in the request to OpenAI (see OpenAI → Create a model response).

Usage example
WITH
  {
    token: $openaiToken,
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    }
  } AS conf
RETURN ai.text.completion('Name a movie', 'openai', conf) AS result

Azure OpenAI

Azure OpenAI parameters
Name          | Type   | Default | Description
model         | STRING | -       | Model ID (see Azure → Azure OpenAI in Foundry Models).
resource      | STRING | -       | Azure resource name.
token         | STRING | -       | Azure OAuth2 bearer token.
vendorOptions | MAP    | {}      | Optional vendor options that will be passed on as-is in the request to Azure.

Usage example
WITH
  {
    token: $azureToken,
    resource: 'my-azure-openai-resource',
    model: 'gpt-5-nano',
    vendorOptions: {
      max_output_tokens: 1024,
      instructions: 'Be short.'
    }
  } AS conf
RETURN ai.text.completion('Name a movie', 'azure-openai', conf) AS result

Google VertexAI

VertexAI parameters
Name          | Type     | Default  | Description
model         | STRING   | -        | Model resource name (see Vertex AI → Model Garden).
project       | STRING   | -        | Google Cloud project ID.
region        | STRING   | -        | Google Cloud region (see Vertex AI → Locations).
publisher     | STRING   | 'google' | Model publisher.
token         | STRING   | -        | Vertex API access token.
vendorOptions | MAP      | {}       | Optional vendor options that will be passed on as-is in the request to VertexAI (see Vertex AI → Method: models.generateContent).

Usage example
WITH
  {
    token: $vertexaiApiAccessKey,
    model: 'gemini-2.5-flash-lite',
    publisher: 'google',
    project: 'my-google-cloud-project',
    region: 'asia-northeast1',
    vendorOptions: {
      systemInstruction: 'Be short.'
    }
  } AS conf
RETURN ai.text.completion('Name a movie', 'vertexai', conf) AS result

Amazon Bedrock Nova Models

This provider supports all models that use the same request parameters and response fields as the Nova text models.

Amazon Bedrock Nova parameters
Name            | Type   | Default | Description
model           | STRING | -       | Model ID or its ARN.
region          | STRING | -       | Amazon region (see Amazon Bedrock → Model Support).
accessKeyId     | STRING | -       | Amazon access key ID.
secretAccessKey | STRING | -       | Amazon secret access key.
vendorOptions   | MAP    | {}      | Optional vendor options that will be passed on as-is in the request to Bedrock (see Amazon Bedrock → Inference request parameters and response fields).

Usage example
WITH
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.nova-micro-v1:0',
    region: 'eu-west-2',
    vendorOptions: {
      system: [{ text: 'Be short' }],
      inferenceConfig: { maxTokens: 1024 }
    }
  } AS conf
RETURN ai.text.completion('Name a movie', 'bedrock-nova', conf) AS result

Amazon Bedrock Titan Models

This provider supports all models that use the same request parameters and response fields as the Titan text models. Configuration and usage are similar to Bedrock Nova Models.

Titan models accept different vendorOptions; see Amazon Bedrock → Titan Text Models.
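As a sketch, a Titan configuration could look like the following. The provider identifier ('bedrock-titan'), the model ID, and the textGenerationConfig vendor option are assumptions based on the Titan text model request format and are not confirmed by this page:

WITH
  {
    accessKeyId: $awsAccessKeyId,
    secretAccessKey: $secretAccessKey,
    model: 'amazon.titan-text-express-v1',
    region: 'eu-west-2',
    vendorOptions: {
      textGenerationConfig: { maxTokenCount: 1024 }
    }
  } AS conf
RETURN ai.text.completion('Name a movie', 'bedrock-titan', conf) AS result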