OpenAI API Access
You need to acquire an OpenAI API key to use these procedures, and using them will incur costs on your OpenAI account. You can set the API key globally by defining the apoc.openai.key configuration setting. You can also use these procedures to call OpenAI-compatible APIs, which therefore have their own API key (or even no API key at all). See the section OpenAI-compatible provider below.
All the following procedures can have the following APOC config, i.e. in apoc.conf or via a Docker environment variable:
Apoc configuration

key | description | default
---|---|---
apoc.ml.openai.type | "AZURE", "HUGGINGFACE", "OPENAI", indicates whether the API is Azure, HuggingFace or another one | "OPENAI"
apoc.ml.openai.url | the OpenAI endpoint base URL | https://api.openai.com/v1 (or an empty string if apoc.ml.openai.type is "AZURE")
apoc.ml.azure.api.version | in case of apoc.ml.openai.type=AZURE, the api-version appended to the URL | (empty)
Moreover, they accept the following configuration keys in the map passed as the last parameter. If present, these take precedence over the analogous APOC configs.
key | description
---|---
apiType | analogous to apoc.ml.openai.type
endpoint | analogous to apoc.ml.openai.url
apiVersion | analogous to apoc.ml.azure.api.version
path | To customize the URL portion added to the base URL (defined by the endpoint config). By default, it is /embeddings, /completions and /chat/completions for the apoc.ml.openai.embedding, apoc.ml.openai.completion and apoc.ml.openai.chat procedures, respectively.
jsonPath | To customize the JSONPath of the response. The default is $ for the apoc.ml.openai.chat and apoc.ml.openai.completion procedures, and $.data for the apoc.ml.openai.embedding procedure.
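For instance, these keys can be overridden per call; a minimal sketch, where the endpoint URL is a hypothetical placeholder (path and jsonPath are shown with their embedding defaults):

// hypothetical OpenAI-compatible endpoint
CALL apoc.ml.openai.embedding(['Some Text'], $apiKey,
  {endpoint: 'https://my-provider.example.com/v1', path: '/embeddings', jsonPath: '$.data'})
yield index, text, embedding;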
Therefore, we can use the following procedures with the OpenAI Services provided by Azure, pointing to the correct endpoints as explained in the documentation.
That is, if we want to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/embeddings?api-version=my-api-version, we can do so by passing the following configuration parameter:
{endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
 apiVersion: "my-api-version",
 apiType: "AZURE"
}
The /embeddings portion will be added under the hood.
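Putting it together, a minimal sketch of such an Azure embedding call (the resource, deployment id, and API version are placeholders):

CALL apoc.ml.openai.embedding(['Some Text'], $azureApiKey,
  {endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
   apiVersion: "my-api-version",
   apiType: "AZURE"})
yield index, text, embedding;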
Similarly, if we use apoc.ml.openai.completion and want to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/completions?api-version=my-api-version, we can pass the same configuration parameter as above, and the /completions portion will be added.
Likewise, when using apoc.ml.openai.chat with the same configuration, the URL portion /chat/completions will be added.
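For example, a minimal sketch of a chat call against the same hypothetical Azure deployment:

CALL apoc.ml.openai.chat([{role:"user", content:"What planet do humans live on?"}], $azureApiKey,
  {endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
   apiVersion: "my-api-version",
   apiType: "AZURE"})
yield value;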
Alternatively, we can write this apoc.conf:
apoc.ml.openai.url=https://my-resource.openai.azure.com/openai/deployments/my-deployment-id
apoc.ml.azure.api.version=my-api-version
apoc.ml.openai.type=AZURE
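With that configuration in place, the endpoint details no longer need to be repeated per call; a minimal sketch:

CALL apoc.ml.openai.embedding(['Some Text'], $azureApiKey, {})
yield index, text, embedding;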
Generate Embeddings API
This procedure apoc.ml.openai.embedding can take a list of text strings, and will return one row per string, with the embedding data as a 1536-element vector.
It uses the /embeddings/create API, which is documented here.
Additional configuration is passed to the API; the default model used is text-embedding-ada-002.
CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {}) yield index, text, embedding;
index | text | embedding
---|---|---
0 | "Some Text" | [-0.0065358975, -7.9563365E-4, …, -0.010693862, -0.005087272]
name | description
---|---
texts | List of text strings
apiKey | OpenAI API key
configuration | optional map for entries like model and other request parameters. We can also pass a custom endpoint entry, or an apiType / apiVersion entry, as described above.
name | description
---|---
index | index entry in original list
text | line of text from original list
embedding | 1536-element floating point embedding vector for the text-embedding-ada-002 model
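A typical use is to compute embeddings for node text and store them back as node properties; a minimal sketch, assuming hypothetical Document nodes with a text property:

// assumes hypothetical (:Document {text: ...}) nodes
MATCH (d:Document)
WITH collect(d) AS docs
CALL apoc.ml.openai.embedding([doc IN docs | doc.text], $apiKey, {})
yield index, embedding
WITH docs[index] AS doc, embedding
SET doc.embedding = embedding;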
Text Completion API
This procedure apoc.ml.openai.completion can continue/complete a given text.
It uses the /completions/create API, which is documented here.
Additional configuration is passed to the API; the default model used is text-davinci-003.
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey, {config}) yield value;
{ created=1684248202, model="text-davinci-003", id="cmpl-7GqBWwX49yMJljdmnLkWxYettZoOy", usage={completion_tokens=2, prompt_tokens=12, total_tokens=14}, choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}], object="text_completion"}
name | description
---|---
prompt | Text to complete
apiKey | OpenAI API key
configuration | optional map for entries like model, temperature, and other request parameters
name | description
---|---
value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(text, index, finish_reason))
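To pull just the completed text out of the returned map, we can index into the choices list; a minimal sketch based on the response shape shown above:

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey, {temperature: 0})
yield value
RETURN value.choices[0].text AS answer;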
OpenLM API
We can also call the Completion API of HuggingFace and Cohere, similarly to the OpenLM library, as shown below.
For the HuggingFace API, we have to define the config apiType: 'HUGGINGFACE', since the request body has to be transformed.
For example:
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $huggingFaceApiKey,
{endpoint: 'https://api-inference.huggingface.co/models/gpt2', apiType: 'HUGGINGFACE', model: 'gpt2', path: ''})
Or also, by using the Cohere API, where we have to define path: '' so that the /completions suffix is not added to the URL:
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $cohereApiKey,
{endpoint: 'https://api.cohere.ai/v1/generate', path: '', model: 'command'})
Chat Completion API
This procedure apoc.ml.openai.chat takes a list of maps of chat exchanges between assistant and user (with an optional system message), and will return the next message in the flow.
It uses the /chat/create API, which is documented here.
Additional configuration is passed to the API; the default model used is gpt-3.5-turbo.
CALL apoc.ml.openai.chat([
{role:"system", content:"Only answer with a single word"},
{role:"user", content:"What planet do humans live on?"}
], $apiKey) yield value
{created=1684248203, id="chatcmpl-7GqBXZr94avd4fluYDi2fWEz7DIHL", object="chat.completion", model="gpt-3.5-turbo-0301", usage={completion_tokens=2, prompt_tokens=26, total_tokens=28}, choices=[{finish_reason="stop", index=0, message={role="assistant", content="Earth."}}]}
name | description
---|---
messages | List of maps of instructions, each with a role ("assistant", "user" or "system") and a content (the message text)
apiKey | OpenAI API key
configuration | optional map for entries like model, temperature, and other request parameters
name | description
---|---
value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(message, index, finish_reason))
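As with the completion procedure, the assistant's reply can be extracted directly from the returned map; a minimal sketch based on the response shape shown above:

CALL apoc.ml.openai.chat([
  {role:"system", content:"Only answer with a single word"},
  {role:"user", content:"What planet do humans live on?"}
], $apiKey, {temperature: 0}) yield value
RETURN value.choices[0].message.content AS answer;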
OpenAI-compatible provider
We can also use these procedures to call OpenAI-compatible APIs, by defining the endpoint config, and possibly the model, path and jsonPath configs.
For example, we can call the Anyscale Endpoints:
CALL apoc.ml.openai.embedding(['Some Text'], $anyScaleApiKey,
{endpoint: 'https://api.endpoints.anyscale.com/v1', model: 'thenlper/gte-large'})
Or via the LocalAI API (note that the apiKey is null by default, so a placeholder value like "ignored" can be passed):
CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:8080/v1', model: 'text-embedding-ada-002'})
Or also, by using the LLMatic library:
CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:3000/v1', model: 'thenlper/gte-large'})
Furthermore, we can use the Groq API, e.g.:
CALL apoc.ml.openai.chat([{"role": "user", "content": "Explain the importance of low latency LLMs"}],
'<apiKey>',
{endpoint: 'https://api.groq.com/openai/v1', model: 'mixtral-8x7b-32768'})