OpenAI API Access

You need to acquire an OpenAI API key to use these procedures. Using them will incur costs on your OpenAI account. You can set the API key globally by defining the apoc.openai.key configuration in apoc.conf.

These procedures can also call OpenAI-compatible APIs, which have their own API keys (or may not require an API key at all). See the OpenAI-compatible provider section below.

All the following procedures support the APOC configuration below, set either in apoc.conf or via Docker environment variables.

Apoc configuration
key | description | default
apoc.ml.openai.type | "AZURE", "HUGGINGFACE", "OPENAI"; indicates whether the API is Azure, HuggingFace, or OpenAI | "OPENAI"
apoc.ml.openai.url | the OpenAI endpoint base URL | https://api.openai.com/v1 (or empty string if apoc.ml.openai.type=<AZURE OR HUGGINGFACE>)

Moreover, the procedures accept the following configuration keys in the map passed as the last parameter. If present, they take precedence over the analogous APOC configs.

Table 1. Common configuration parameters
key | description
apiType | analogous to the apoc.ml.openai.type APOC config
endpoint | analogous to the apoc.ml.openai.url APOC config
apiVersion | analogous to the apoc.ml.azure.api.version APOC config
path | customizes the URL portion appended to the base URL (defined by the endpoint config). The default is /embeddings, /completions, and /chat/completions for the apoc.ml.openai.embedding, apoc.ml.openai.completion, and apoc.ml.openai.chat procedures respectively.
jsonPath | customizes the JSONPath applied to the response. The default is $ for the apoc.ml.openai.chat and apoc.ml.openai.completion procedures, and $.data for the apoc.ml.openai.embedding procedure.
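For instance, a sketch of an embedding call against a hypothetical OpenAI-compatible server whose route and response shape differ from the OpenAI defaults (the host, path, and jsonPath values below are placeholders, not a real service):

CALL apoc.ml.openai.embedding(['Some Text'], $apiKey,
  {endpoint: 'http://localhost:8000/v1', path: '/embed', jsonPath: '$.data'})
YIELD index, text, embedding
RETURN index, text, embedding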

Therefore, we can use the following procedures with the OpenAI Services provided by Azure, pointing to the correct endpoints as explained in the documentation.

For example, to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/embeddings?api-version=my-api-version, we can pass the following configuration map:

    {
        endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
        apiVersion: "my-api-version",
        apiType: "AZURE"
    }

The /embeddings portion will be added under the hood. Similarly, with apoc.ml.openai.completion, to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/completions?api-version=my-api-version, we can use the same configuration as above and the /completions portion will be added.

When using apoc.ml.openai.chat with the same configuration, the URL portion /chat/completions will be added.

Alternatively, we can define the following in apoc.conf:

apoc.ml.openai.url=https://my-resource.openai.azure.com/openai/deployments/my-deployment-id
apoc.ml.azure.api.version=my-api-version
apoc.ml.openai.type=AZURE
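With that configuration in place, the calls no longer need endpoint-related keys in the configuration map; a minimal sketch (assuming an Azure OpenAI embedding deployment and its API key in $apiKey):

CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {})
YIELD index, text, embedding
RETURN index, text, embedding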

Generate Embeddings API

This procedure apoc.ml.openai.embedding can take a list of text strings, and will return one row per string, with the embedding data as a 1536-element vector. It uses the /embeddings/create API which is documented here.

Additional configuration is passed to the API; the default model used is text-embedding-ada-002.

Generate Embeddings Call
CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {}) yield index, text, embedding;
Table 2. Generate Embeddings Response
index | text | embedding
0 | "Some Text" | [-0.0065358975, -7.9563365E-4, …, -0.010693862, -0.005087272]

Table 3. Parameters
name | description
texts | List of text strings
apiKey | OpenAI API key
configuration | optional map for entries like model and other request parameters

We can also pass a custom endpoint entry in the configuration (it takes precedence over the apoc.ml.openai.url config). Its value can be the complete endpoint (e.g. with Azure: https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/chat/completions?api-version=my-api-version), or contain a %s placeholder (e.g. with Azure: https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/%s?api-version=my-api-version), which will be replaced with embeddings, chat/completions, or completions by the apoc.ml.openai.embedding, apoc.ml.openai.chat, and apoc.ml.openai.completion procedures respectively.

We can also pass an authType entry: authType: "BEARER" (the default) passes the apiKey via an Authorization: Bearer $apiKey header, while authType: "API_KEY" passes the apiKey via an api-key: $apiKey header entry.
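For example, a sketch of an embedding call combining the %s placeholder with the api-key header style (the Azure resource, deployment, and API version below are placeholders):

CALL apoc.ml.openai.embedding(['Some Text'], $azureApiKey,
  {endpoint: 'https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/%s?api-version=my-api-version',
   authType: 'API_KEY'})
YIELD index, text, embedding
RETURN index, text, embedding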

Table 4. Results
name | description
index | index entry in original list
text | line of text from original list
embedding | 1536-element floating point embedding vector for the ada-002 model

Text Completion API

This procedure apoc.ml.openai.completion can continue/complete a given text.

It uses the /completions/create API which is documented here.

Additional configuration is passed to the API; the default model used is text-davinci-003.

Text Completion Call
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey, {}) yield value;
Text Completion Response
{ created=1684248202, model="text-davinci-003", id="cmpl-7GqBWwX49yMJljdmnLkWxYettZoOy",
  usage={completion_tokens=2, prompt_tokens=12, total_tokens=14},
  choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}], object="text_completion"}
Table 5. Parameters
name | description
prompt | Text to complete
apiKey | OpenAI API key
configuration | optional map for entries like model, temperature, and other request parameters

Table 6. Results
name | description
value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(text, index, finish_reason))
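Additional entries in the configuration map are forwarded to the API as request parameters; for instance, a sketch of a call that pins the sampling temperature and extracts the completed text (the temperature and max_tokens values are only illustrative):

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey,
  {temperature: 0, max_tokens: 5})
YIELD value
RETURN value.choices[0].text AS completion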

OpenLM API

We can also call the Completion API of HuggingFace and Cohere, similarly to the OpenLM library, as shown below.

For the HuggingFace API, we have to define the config apiType: 'HUGGINGFACE', since the request body has to be transformed.

For example:

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $huggingFaceApiKey,
{endpoint: 'https://api-inference.huggingface.co/models/gpt2', apiType: 'HUGGINGFACE', model: 'gpt2', path: ''})

Or also, by using the Cohere API, where we have to define path: '' so that the /completions suffix is not added to the URL:

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $cohereApiKey,
{endpoint: 'https://api.cohere.ai/v1/generate', path: '', model: 'command'})

Chat Completion API

This procedure apoc.ml.openai.chat takes a list of maps of chat exchanges between assistant and user (with optional system message), and will return the next message in the flow.

It uses the /chat/create API which is documented here.

Additional configuration is passed to the API; the default model used is gpt-3.5-turbo.

Chat Completion Call
CALL apoc.ml.openai.chat([
{role:"system", content:"Only answer with a single word"},
{role:"user", content:"What planet do humans live on?"}
],  $apiKey) yield value
Chat Completion Response
{created=1684248203, id="chatcmpl-7GqBXZr94avd4fluYDi2fWEz7DIHL",
object="chat.completion", model="gpt-3.5-turbo-0301",
usage={completion_tokens=2, prompt_tokens=26, total_tokens=28},
choices=[{finish_reason="stop", index=0, message={role="assistant", content="Earth."}}]}
Table 7. Parameters
name | description
messages | List of maps of instructions with {role:"assistant|user|system", content:"text"}
apiKey | OpenAI API key
configuration | optional map for entries like model, temperature, and other request parameters

Table 8. Results
name | description
value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(message, index, finish_reason))
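Since the whole response map is returned, the assistant message can be extracted directly in Cypher; a minimal sketch:

CALL apoc.ml.openai.chat([
  {role:"system", content:"Only answer with a single word"},
  {role:"user", content:"What planet do humans live on?"}
], $apiKey) YIELD value
RETURN value.choices[0].message.content AS answer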

OpenAI-compatible provider

We can also use these procedures to call OpenAI-compatible APIs, by defining the endpoint config, and possibly the model, path and jsonPath configs.

For example, we can call the Anyscale Endpoints:

CALL apoc.ml.openai.embedding(['Some Text'], $anyScaleApiKey,
{endpoint: 'https://api.endpoints.anyscale.com/v1', model: 'thenlper/gte-large'})

Or via the LocalAI API (note that LocalAI does not require an API key by default, so a placeholder value is passed):

CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:8080/v1', model: 'text-embedding-ada-002'})

Or also, by using the LLMatic library:

CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:3000/v1', model: 'thenlper/gte-large'})

Furthermore, we can use the Groq API, e.g.:

CALL apoc.ml.openai.chat([{"role": "user", "content": "Explain the importance of low latency LLMs"}],
    '<apiKey>',
    {endpoint: 'https://api.groq.com/openai/v1', model: 'mixtral-8x7b-32768'})

Query with natural language

This procedure apoc.ml.query takes a question in natural language, translates it into a Cypher query, and returns the results of executing that query.

It uses the chat/completions API which is documented here.

Query call
CALL apoc.ml.query("What movies did Tom Hanks play in?") yield value, query
RETURN *
Example response
+------------------------------------------------------------------------------------------------------------------------------+
| value                                 | query                                                                                |
+------------------------------------------------------------------------------------------------------------------------------+
| {m.title -> "You've Got Mail"}        | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Apollo 13"}              | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Joe Versus the Volcano"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "That Thing You Do"}      | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Cloud Atlas"}            | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Da Vinci Code"}      | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Sleepless in Seattle"}   | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "A League of Their Own"}  | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Green Mile"}         | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Charlie Wilson's War"}   | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Cast Away"}              | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Polar Express"}      | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
+------------------------------------------------------------------------------------------------------------------------------+
12 rows
Table 9. Input Parameters
name | description
question | The question in natural language
conf | An optional configuration map (see the next section)

Table 10. Configuration map
name | description | mandatory
retries | The number of retries in case of API call failures | no, default 3
retryWithError | If true, when the generated statement throws an error, retry the API call adding the following messages to the request body: {"role":"user", "content": "The previous Cypher Statement throws the following error, consider it to return the correct statement: `<errorMessage>`"}, {"role":"assistant", "content":"Cypher Statement (in backticks):"} | no, default false
apiKey | OpenAI API key | in case apoc.openai.key is not defined
model | The OpenAI model | no, default gpt-3.5-turbo
sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node. It is passed as a parameter to the apoc.meta.data procedure that computes the schema | no, default is a random number

Table 11. Results
name | description
value | the result of the query
query | the Cypher query used to compute the result
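For instance, a sketch of the same call with an explicit configuration map (the retry values are only illustrative):

CALL apoc.ml.query("What movies did Tom Hanks play in?",
  {apiKey: $apiKey, retries: 5, retryWithError: true})
YIELD value, query
RETURN *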

Describe the graph model with natural language

This procedure apoc.ml.schema returns a description, in natural language, of the underlying dataset.

It uses the chat/completions API which is documented here.

Query call
CALL apoc.ml.schema() yield value
RETURN *
Example response
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "The graph database schema represents a system where users can follow other users and review movies. Users (:Person) can either follow other users (:Person) or review movies (:Movie). The relationships allow users to express their preferences and opinions about movies. This schema can be compared to social media platforms where users can follow each other and leave reviews or ratings for movies they have watched. It can also be related to movie recommendation systems where user preferences and reviews play a crucial role in generating personalized recommendations." |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row
Table 12. Input Parameters
name | description
conf | An optional configuration map (see the next section)

Table 13. Configuration map
name | description | mandatory
apiKey | OpenAI API key | in case apoc.openai.key is not defined
model | The OpenAI model | no, default gpt-3.5-turbo
sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node. It is passed as a parameter to the apoc.meta.data procedure that computes the schema | no, default is a random number

Table 14. Results
name | description
value | the description of the dataset
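For instance, a sketch of a call with an explicit API key and sample size (the sample value is only illustrative):

CALL apoc.ml.schema({apiKey: $apiKey, sample: 100})
YIELD value
RETURN value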

Create Cypher queries from a natural language query

This procedure apoc.ml.cypher takes a natural language question and transforms it into the requested number of Cypher queries.

It uses the chat/completions API which is documented here.

Query call
CALL apoc.ml.cypher("Who are the actors which also directed a movie?", {count: 4}) yield cypher
RETURN *
Example response
+----------------------------------------------------------------------------------------------------------------+
| query                                                                                                          |
+----------------------------------------------------------------------------------------------------------------+
| "
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Person)
RETURN a.name as actor, d.name as director
" |
| "cypher
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(a)
RETURN a.name
"                               |
| "
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Person)
RETURN a.name
"                              |
| "cypher
MATCH (a:Person)-[:ACTED_IN]->(:Movie)<-[:DIRECTED]-(a)
RETURN DISTINCT a.name
"                       |
+----------------------------------------------------------------------------------------------------------------+
4 rows
Table 15. Input Parameters
name | description | mandatory
question | The question in natural language | yes
conf | An optional configuration map (see the next section) | no

Table 16. Configuration map
name | description | mandatory
count | The number of queries to retrieve | no, default 1
apiKey | OpenAI API key | in case apoc.openai.key is not defined
model | The OpenAI model | no, default gpt-3.5-turbo
sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node. It is passed as a parameter to the apoc.meta.data procedure that computes the schema | no, default is a random number

Table 17. Results
name | description
cypher | the generated Cypher query

Create a natural language query explanation from a Cypher query

This procedure apoc.ml.fromCypher takes a Cypher query and transforms it into a natural language explanation.

It uses the chat/completions API which is documented here.

Query call
CALL apoc.ml.fromCypher('MATCH (p:Person {name: "Tom Hanks"})-[:ACTED_IN]->(m:Movie) RETURN m', {}) yield value
RETURN *
Table 18. Example response
value

this database schema represents a simplified version of a common movie database model. the movie node represents a movie entity with attributes such as the year it was released, a tagline, and the movie title. the person node represents a person involved in the movie industry, with attributes for the person’s year of birth and name. the relationship directed connects a person node to a movie node, indicating that the person directed the movie. in terms of domains, this schema can be related to the entertainment industry, specifically the movie industry. movies and people involved in creating those movies are fundamental entities in this domain. the directed relationship captures the directed-by relationship between a person and a movie. this type of model can be extended to include other relationships like acted_in, produced, wrote, etc., to capture more complex connections within the movie industry. overall, this graph database schema provides a simple yet powerful representation of entities and relationships in the movie domain, allowing for querying and analysis of connections within the industry.

Table 19. Input Parameters
name | description | mandatory
cypher | The Cypher query to explain | yes
conf | An optional configuration map (see the next section) | no

Table 20. Configuration map
name | description | mandatory
retries | The number of retries in case of API call failures | no, default 3
apiKey | OpenAI API key | in case apoc.openai.key is not defined
model | The OpenAI model | no, default gpt-3.5-turbo
sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node. It is passed as a parameter to the apoc.meta.data procedure that computes the schema | no, default is a random number

Table 21. Results
name | description
value | the natural language explanation of the given query

Create an explanation of the subgraph from a set of queries

This procedure apoc.ml.fromQueries returns an explanation, in natural language, of the given set of queries.

It uses the chat/completions API which is documented here.

Query call
CALL apoc.ml.fromQueries(['MATCH (n:Movie) RETURN n', 'MATCH (n:Person) RETURN n'],
    {apiKey: $apiKey})
YIELD value
RETURN *
Example response
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "The database represents movies and people, like in a movie database or social network.
    There are no defined relationships between nodes, allowing flexibility for future connections.
    The Movie node includes properties like title, tagline, and release year." |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row
Query call with path
CALL apoc.ml.fromQueries(['MATCH (n:Movie) RETURN n', 'MATCH p=(n:Movie)--() RETURN p'],
    {apiKey: $apiKey})
YIELD value
RETURN *
Example response
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "models relationships in the movie industry, connecting :Person nodes to :Movie nodes.
    It represents actors, directors, writers, producers, and reviewers connected to movies they are involved with.
    Similar to a social network graph but specialized for the entertainment industry.
    Each relationship type corresponds to common roles in movie production and reviewing.
    Allows for querying and analyzing connections and collaborations within the movie business." |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row
Table 22. Input Parameters
name | description
queries | The list of queries
conf | An optional configuration map (see the next section)

Table 23. Configuration map
name | description | mandatory
apiKey | OpenAI API key | in case apoc.openai.key is not defined
model | The OpenAI model | no, default gpt-3.5-turbo
sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node. It is passed as a parameter to the apoc.meta.data procedure that computes the schema | no, default is a random number

Table 24. Results
name | description
value | the natural language explanation of the given queries

Query with Retrieval-augmented generation (RAG) technique

This procedure apoc.ml.rag takes a list of paths or a vector index name, the relevant attributes, and a natural language question, and creates a prompt implementing a Retrieval-augmented generation (RAG) technique.

See here for more info about the RAG process.

It uses the chat/completions API which is documented here.

Table 25. Input Parameters
name | description | mandatory
paths | the list of paths used to retrieve data and augment the prompt; it can also be a query returning paths/relationships/nodes, or a vector index name | yes
attributes | the relevant attributes used to retrieve data and augment the prompt | yes
question | the user question | yes
conf | An optional configuration map (see the next section) | no

Table 26. Configuration map
name | description | mandatory
getLabelTypes | add the label / relationship type names to the information used to augment the prompt | no, default true
embeddings | search for similar embeddings stored in a node vector index (with embeddings: "NODE") or a relationship vector index (with embeddings: "REL") | no, default "FALSE"
topK | number of nearest neighbors to return for each node (with embeddings: "NODE") or relationship (with embeddings: "REL") | no, default 40
apiKey | OpenAI API key | in case apoc.openai.key is not defined
prompt | the base prompt to be augmented with the context | no, the default is: "You are a customer service agent that helps a customer with answering questions about a service. Use the following context to answer the user question at the end. Make sure not to make any changes to the context if possible when prepare answers to provide accurate responses. If you don’t know the answer, just say `Sorry, I don’t know`, don’t try to make up an answer."

Using the apoc.ml.rag procedure we can reduce AI hallucinations (i.e. false or misleading responses) by providing relevant and up-to-date information to the procedure via the first parameter.

For example, by executing the following procedure (with the gpt-3.5-turbo model, whose training data was last updated in January 2022), we get a hallucination:

Query call
CALL apoc.ml.openai.chat([
    {role:"user", content: "Which athletes won the gold medal in mixed doubles's curling  at the 2022 Winter Olympics?"}
], $apiKey)
Table 27. Example response
value

The gold medal in curling at the 2022 Winter Olympics was won by the Swedish men’s team and the Russian women’s team.

So, we can use the RAG technique to provide real results. For example, with the following dataset (with data taken from this Wikipedia page):

wikipedia dataset
CREATE (mixed2022:Discipline {title:"Mixed doubles's curling", year: 2022})
WITH mixed2022
CREATE (:Athlete {name: 'Stefania Constantini', country: 'Italy', irrelevant: 'asdasd'})-[:HAS_MEDAL {medal: 'Gold', irrelevant2: 'asdasd'}]->(mixed2022)
CREATE (:Athlete {name: 'Amos Mosaner', country: 'Italy', irrelevant: 'qweqwe'})-[:HAS_MEDAL {medal: 'Gold', irrelevant2: 'rwerew'}]->(mixed2022)
CREATE (:Athlete {name: 'Kristin Skaslien', country: 'Norway', irrelevant: 'dfgdfg'})-[:HAS_MEDAL {medal: 'Silver', irrelevant2: 'gdfg'}]->(mixed2022)
CREATE (:Athlete {name: 'Magnus Nedregotten', country: 'Norway', irrelevant: 'xcvxcv'})-[:HAS_MEDAL {medal: 'Silver', irrelevant2: 'asdasd'}]->(mixed2022)
CREATE (:Athlete {name: 'Almida de Val', country: 'Sweden', irrelevant: 'rtyrty'})-[:HAS_MEDAL {medal: 'Bronze', irrelevant2: 'bfbfb'}]->(mixed2022)
CREATE (:Athlete {name: 'Oskar Eriksson', country: 'Sweden', irrelevant: 'qwresdc'})-[:HAS_MEDAL {medal: 'Bronze', irrelevant2: 'juju'}]->(mixed2022)

we can execute:

Query call
MATCH path=(:Athlete)-[:HAS_MEDAL]->(:Discipline)
WITH collect(path) AS paths
CALL apoc.ml.rag(paths,
  ["name", "country", "medal", "title", "year"],
  "Which athletes won the gold medal in mixed doubles's curling  at the 2022 Winter Olympics?",
  {apiKey: $apiKey}
) YIELD value
RETURN value
Table 28. Example response
value

The gold medal in curling at the 2022 Winter Olympics was won by Stefania Constantini and Amos Mosaner from Italy.

or:

Query call
MATCH path=(:Athlete)-[:HAS_MEDAL]->(:Discipline)
WITH collect(path) AS paths
CALL apoc.ml.rag(paths,
  ["name", "country", "medal", "title", "year"],
  "Which athletes won the silver medal in mixed doubles's curling  at the 2022 Winter Olympics?",
  {apiKey: $apiKey}
) YIELD value
RETURN value
Table 29. Example response
value

The silver medal in curling at the 2022 Winter Olympics was won by Kristin Skaslien and Magnus Nedregotten from Norway.
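The base prompt can also be overridden via the prompt configuration entry; a sketch using the same dataset (the custom prompt text below is only illustrative, not the library default):

MATCH path=(:Athlete)-[:HAS_MEDAL]->(:Discipline)
WITH collect(path) AS paths
CALL apoc.ml.rag(paths,
  ["name", "country", "medal", "title", "year"],
  "Which athletes won the gold medal in mixed doubles's curling at the 2022 Winter Olympics?",
  {apiKey: $apiKey, prompt: "Answer the question in a single sentence, using only the provided context."}
) YIELD value
RETURN value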

We can also pass a string query returning paths/relationships/nodes, for example:

CALL apoc.ml.rag("MATCH path=(:Athlete)-[:HAS_MEDAL]->(:Discipline) RETURN path",
  ["name", "country", "medal", "title", "year"],
  "Which athletes won the gold medal in mixed doubles's curling  at the 2022 Winter Olympics?",
  {apiKey: $apiKey}
) YIELD value
RETURN value
Table 30. Example response
value

The gold medal in curling at the 2022 Winter Olympics was won by Stefania Constantini and Amos Mosaner from Italy.

or we can pass a vector index name as the first parameter, in case we have stored useful information in embedding nodes. For example, given this node vector index:

CREATE VECTOR INDEX `rag-embeddings`
FOR (n:RagEmbedding) ON (n.embedding)
OPTIONS {indexConfig: {
 `vector.dimensions`: 1536,
 `vector.similarity_function`: 'cosine'
}}

and some (:RagEmbedding) nodes with a text property, we can execute:

CALL apoc.ml.rag("rag-embeddings",
  ["text"],
  "Which athletes won the gold medal in mixed doubles's curling  at the 2022 Winter Olympics?",
  {apiKey: $apiKey, embeddings: "NODE", topK: 20}
) YIELD value
RETURN value
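The (:RagEmbedding) nodes could be populated, for instance, by combining apoc.ml.openai.embedding with db.create.setNodeVectorProperty; a minimal sketch (assuming a Neo4j 5 version where that procedure is available, and using a placeholder source text):

// hypothetical example: create one RagEmbedding node per source text
WITH ["Stefania Constantini and Amos Mosaner, from Italy, won the gold medal in mixed doubles curling at the 2022 Winter Olympics."] AS texts
CALL apoc.ml.openai.embedding(texts, $apiKey, {}) YIELD text, embedding
CREATE (n:RagEmbedding {text: text})
WITH n, embedding
CALL db.create.setNodeVectorProperty(n, "embedding", embedding)
RETURN count(*)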

or, with a relationship vector index:

CREATE VECTOR INDEX `rag-rel-embeddings`
FOR ()-[r:RAG_EMBEDDING]-() ON (r.embedding)
OPTIONS {indexConfig: {
 `vector.dimensions`: 1536,
 `vector.similarity_function`: 'cosine'
}}

and some [:RAG_EMBEDDING] relationships with a text property, we can execute:

CALL apoc.ml.rag("rag-rel-embeddings",
  ["text"],
  "Which athletes won the gold medal in mixed doubles's curling  at the 2022 Winter Olympics?",
  {apiKey: $apiKey, embeddings: "REL", topK: 20}
) YIELD value
RETURN value