OpenAI API Access
You need to acquire an OpenAI API key to use these procedures. Using them will incur costs on your OpenAI account. You can set the API key globally by defining the apoc.openai.key configuration in apoc.conf.
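For example, the global key could be set in apoc.conf like this (the value below is just a placeholder for your own key):

apoc.openai.key=<your-openai-api-key>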
Generate Embeddings API
This procedure apoc.ml.openai.embedding can take a list of text strings, and will return one row per string, with the embedding data as a 1536-element vector.
It uses the /embeddings/create API which is documented here.
Additional configuration is passed to the API; the default model used is text-embedding-ada-002.
CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {}) yield index, text, embedding;
| index | text | embedding |
|---|---|---|
| 0 | "Some Text" | [-0.0065358975, -7.9563365E-4, …, -0.010693862, -0.005087272] |
| name | description |
|---|---|
| texts | List of text strings |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model and other request parameters |
| name | description |
|---|---|
| index | index entry in original list |
| text | line of text from original list |
| embedding | 1536-element floating point embedding vector for the ada-002 model |
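As a usage sketch (assuming hypothetical :Document nodes with a text property, and relying on the returned index matching the position in the input list), the embeddings can be written back onto the nodes:

MATCH (d:Document)
WITH collect(d) AS docs
// one row per input text; index points back into the docs list
CALL apoc.ml.openai.embedding([doc IN docs | doc.text], $apiKey, {}) yield index, embedding
WITH docs[index] AS doc, embedding
SET doc.embedding = embedding;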
Text Completion API
This procedure apoc.ml.openai.completion can continue/complete a given text.
It uses the /completions/create API which is documented here.
Additional configuration is passed to the API; the default model used is text-davinci-003.
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey, {config}) yield value;
{ created=1684248202, model="text-davinci-003", id="cmpl-7GqBWwX49yMJljdmnLkWxYettZoOy", usage={completion_tokens=2, prompt_tokens=12, total_tokens=14}, choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}], object="text_completion"}
| name | description |
|---|---|
| prompt | Text to complete |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model, temperature, and other request parameters |
| name | description |
|---|---|
| value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(text, index, finish_reason, logprobs)) |
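A sketch that passes request parameters via the configuration map and extracts just the completed text from the returned map (field names follow the sample response shown above):

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey,
  {model: 'text-davinci-003', temperature: 0})
yield value
// pick out only the generated text from the first choice
RETURN value.choices[0].text AS completion;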
Chat Completion API
This procedure apoc.ml.openai.chat takes a list of maps of chat exchanges between assistant and user (with optional system message), and will return the next message in the flow.
It uses the /chat/create API which is documented here.
Additional configuration is passed to the API; the default model used is gpt-3.5-turbo.
CALL apoc.ml.openai.chat([
{role:"system", content:"Only answer with a single word"},
{role:"user", content:"What planet do humans live on?"}
], $apiKey) yield value
{created=1684248203, id="chatcmpl-7GqBXZr94avd4fluYDi2fWEz7DIHL", object="chat.completion", model="gpt-3.5-turbo-0301", usage={completion_tokens=2, prompt_tokens=26, total_tokens=28}, choices=[{finish_reason="stop", index=0, message={role="assistant", content="Earth."}}]}
| name | description |
|---|---|
| messages | List of maps of instructions, each with a `role` (one of "assistant", "user", "system") and a `content` string |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model, temperature, and other request parameters |
| name | description |
|---|---|
| value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(message, index, finish_reason)) |
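A sketch that extracts only the assistant's reply from the returned map (again following the response structure shown above):

CALL apoc.ml.openai.chat([
  {role:"system", content:"Only answer with a single word"},
  {role:"user", content:"What planet do humans live on?"}
], $apiKey, {temperature: 0}) yield value
// the next message in the conversation sits in the first choice
RETURN value.choices[0].message.content AS answer;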
Query with natural language
This procedure apoc.ml.query takes a question in natural language, generates a Cypher query from it, and returns the results of that query.
It uses the chat/completions API which is documented here.
CALL apoc.ml.query("What movies did Tom Hanks play in?") yield value, query
RETURN *
+------------------------------------------------------------------------------------------------------------------------------+
| value | query |
+------------------------------------------------------------------------------------------------------------------------------+
| {m.title -> "You've Got Mail"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Apollo 13"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Joe Versus the Volcano"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "That Thing You Do"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Cloud Atlas"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Da Vinci Code"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Sleepless in Seattle"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "A League of Their Own"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Green Mile"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Charlie Wilson's War"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "Cast Away"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
| {m.title -> "The Polar Express"} | "cypher
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person {name: 'Tom Hanks'})
RETURN m.title
" |
+------------------------------------------------------------------------------------------------------------------------------+
12 rows
| name | description |
|---|---|
| question | The question in natural language |
| conf | An optional configuration map, please check the next section |
| name | description | mandatory |
|---|---|---|
| retries | The number of retries in case of API call failures | no, has a default |
| apiKey | OpenAI API key | only if the apoc.openai.key configuration is not set |
| model | The OpenAI model | no, has a default |
| sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node; used when sampling the graph | no, default is a random number |
| name | description |
|---|---|
| value | the result of the query |
| query | the Cypher query used to compute the result |
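A sketch that passes the configuration map explicitly (keys as listed above; apiKey is only needed when apoc.openai.key is not set globally) and collects the result rows per generated query:

CALL apoc.ml.query("What movies did Tom Hanks play in?",
  {apiKey: $apiKey, retries: 2, sample: 1000})
yield value, query
// group the result rows under the query that produced them
RETURN query, collect(value) AS rows;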
Describe the graph model with natural language
This procedure apoc.ml.schema returns a description, in natural language, of the underlying dataset.
It uses the chat/completions API which is documented here.
CALL apoc.ml.schema() yield value
RETURN *
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "The graph database schema represents a system where users can follow other users and review movies. Users (:Person) can either follow other users (:Person) or review movies (:Movie). The relationships allow users to express their preferences and opinions about movies. This schema can be compared to social media platforms where users can follow each other and leave reviews or ratings for movies they have watched. It can also be related to movie recommendation systems where user preferences and reviews play a crucial role in generating personalized recommendations." |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row
| name | description |
|---|---|
| conf | An optional configuration map, please check the next section |
| name | description | mandatory |
|---|---|---|
| apiKey | OpenAI API key | only if the apoc.openai.key configuration is not set |
| model | The OpenAI model | no, has a default |
| sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node; used when sampling the graph | no, default is a random number |
| name | description |
|---|---|
| value | the description of the dataset |
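A sketch with an explicit configuration map (keys as listed above):

CALL apoc.ml.schema({apiKey: $apiKey, sample: 1000}) yield value
RETURN value;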
Create Cypher queries from a natural language query
This procedure apoc.ml.cypher takes a natural language question and transforms it into a number of requested Cypher queries.
It uses the chat/completions API which is documented here.
CALL apoc.ml.cypher("Who are the actors which also directed a movie?", 4) yield cypher
RETURN *
+----------------------------------------------------------------------------------------------------------------+
| query |
+----------------------------------------------------------------------------------------------------------------+
| "
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Person)
RETURN a.name as actor, d.name as director
" |
| "cypher
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(a)
RETURN a.name
" |
| "
MATCH (a:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Person)
RETURN a.name
" |
| "cypher
MATCH (a:Person)-[:ACTED_IN]->(:Movie)<-[:DIRECTED]-(a)
RETURN DISTINCT a.name
" |
+----------------------------------------------------------------------------------------------------------------+
4 rows
| name | description | mandatory |
|---|---|---|
| question | The question in natural language | yes |
| name | description | mandatory |
|---|---|---|
| count | The number of queries to retrieve | no, has a default |
| apiKey | OpenAI API key | only if the apoc.openai.key configuration is not set |
| model | The OpenAI model | no, has a default |
| sample | The number of nodes to skip, e.g. a sample of 1000 will read every 1000th node; used when sampling the graph | no, default is a random number |
| name | description |
|---|---|
| cypher | the generated Cypher query |
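A sketch that requests a single query and trims the surrounding whitespace from the generated statement:

CALL apoc.ml.cypher("Who are the actors which also directed a movie?", 1) yield cypher
// the generated statement can contain leading/trailing whitespace, as in the output above
RETURN trim(cypher) AS query;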