Google Cloud Vertex.AI API Access
You need to create a Google Cloud project in your account and enable the Vertex.AI services. As an access token you can use the output of gcloud auth print-access-token. Using these services will incur costs on your Google Cloud account.
All the following procedures can accept the following APOC config, i.e. in apoc.conf or via docker env variable

.Apoc configuration
key | description | default |
---|---|---|
apoc.ml.vertexai.url | the Vertex AI endpoint base url | https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/publishers/google/models/{model}:{resource} |
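For example, the base url could be overridden in apoc.conf as follows. This is only a sketch: myVertexAIProject is a placeholder project id, and the {model} and {resource} placeholders are assumed to be kept so that each procedure can still fill them in.

```properties
# apoc.conf - pin the Vertex AI endpoint to a fixed region and project (placeholder values)
apoc.ml.vertexai.url=https://us-central1-aiplatform.googleapis.com/v1/projects/myVertexAIProject/locations/us-central1/publishers/google/models/{model}:{resource}
```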
Moreover, they can accept the following configuration keys, as the last parameter.

key | description | default |
---|---|---|
endpoint | analogous to the apoc.ml.vertexai.url config | |
headers | to add or edit the HTTP default headers | |
model | The Vertex AI model | depends on the procedure |
region | The Vertex AI region | us-central1 |
resource | The Vertex AI resource (see below) | depends on the procedure |
temperature, maxOutputTokens, maxDecodeSteps, topP, topK | Optional parameters which can be passed into the HTTP request. They depend on the API used | {temperature: 0.3, maxOutputTokens: 256, maxDecodeSteps: 200, topP: 0.8, topK: 40} |
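For instance, the request parameters above can be overridden via the last configuration map. The following is an illustrative sketch (the prompt and the parameter values are arbitrary):

```cypher
CALL apoc.ml.vertexai.completion('What color is the sky? Answer in one word: ', $accessToken, $project,
  {temperature: 0.1, maxOutputTokens: 64, topP: 0.9, topK: 20})
```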
We can define the endpoint configuration as a full URL, e.g. https://us-central1-aiplatform.googleapis.com/v1/projects/myVertexAIProject/locations/us-central1/publishers/google/models/gemini-pro-vision:streamGenerateContent, or define it via placeholders that will then be replaced by the other configurations.
For example, if we define no endpoint configuration, the default one https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/publishers/google/models/{model}:{resource} will be used, where:
- {model} will be replaced by the model configuration
- {region} will be replaced by the region configuration
- {project} will be replaced by the 3rd parameter (project)
- {resource} will be replaced by the resource configuration
Alternatively, we can define an endpoint as https://us-central1-aiplatform.googleapis.com/v1/projects/{project}/locations/us-central1/publishers/google/models/gemini-pro-vision:streamGenerateContent, and in this case only {project} is substituted with the 3rd parameter.
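Putting this together, the following sketch (the model and resource values are illustrative) would make the default template resolve to the full streamGenerateContent URL shown above:

```cypher
CALL apoc.ml.vertexai.stream([{role: "user", parts: [{text: "hello"}]}], $accessToken, $project,
  {region: 'us-central1', model: 'gemini-pro-vision', resource: 'streamGenerateContent'})
```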
Let’s see some examples.
Generate Embeddings API
This procedure apoc.ml.vertexai.embedding
can take a list of text strings, and will return one row per string, with the embedding data as a 768 element vector.
It uses the embedding endpoint which is documented here.
API quotas are per project and per region; you can override the default us-central1 region in the configuration map, e.g. {region:'us-east4'}. GCP regions can be found here: https://cloud.google.com/about/locations
Additional configuration is passed to the API; the default model used is textembedding-gecko.
CALL apoc.ml.vertexai.embedding(['Some Text'], $accessToken, $project, {region:'<region>'}) yield index, text, embedding;
index | text | embedding |
---|---|---|
0 |
"Some Text" |
[-0.0065358975, -7.9563365E-4, …. -0.010693862, -0.005087272] |
name | description |
---|---|
texts |
List of text strings |
accessToken |
Vertex.AI API access token |
project |
Google Cloud project |
configuration |
optional map for entries like model and other request parameters |
name | description |
---|---|
index |
index entry in original list |
text |
line of text from original list |
embedding |
768 element floating point embedding vector for the textembedding-gecko model |
Text Completion API
This procedure apoc.ml.vertexai.completion
can continue/complete a given text.
It uses the completion model API which is documented here.
Additional configuration is passed to the API; the default model used is text-bison.
CALL apoc.ml.vertexai.completion('What color is the sky? Answer in one word: ', $apiKey, $project, {})
{value={safetyAttributes={blocked=false, scores=[0.1], categories=[Sexual]}, recitationResult={recitations=[], recitationAction=NO_ACTION}, content=blue}}
name | description |
---|---|
prompt |
Text to complete |
accessToken |
Vertex.AI API access token |
project |
Google Cloud project |
configuration |
optional map for entries like model, region, temperature, topK, topP, maxOutputTokens, and other request parameters |
name | description |
---|---|
value |
result entry from Vertex.AI (content, safetyAttributes(blocked, categories, scores), recitationResult(recitationAction, recitations)) |
Chat Completion API
This procedure apoc.ml.vertexai.chat
takes a list of maps of chat exchanges between assistant and user (with optional system context), and will return the next message in the flow.
It uses the chat model API which is documented here.
Additional configuration is passed to the API; the default model used is chat-bison.
CALL apoc.ml.vertexai.chat(
/*messages*/
[{author:"user", content:"What planet do timelords live on?"}],
$apiKey, $project,
{temperature:0},
/*context*/ "Fictional universe of Doctor Who. Only answer with a single word!",
/*examples*/ [{input:{content:"What planet do humans live on?"}, output:{content:"Earth"}}])
yield value
{value={candidates=[{author=1, content=Gallifrey.}], safetyAttributes={blocked=false, scores=[0.1, 0.1, 0.1], categories=[Religion & Belief, Sexual, Toxic]}, recitationResults=[{recitations=[], recitationAction=NO_ACTION}]}}
name | description |
---|---|
messages |
List of maps of instructions with `{author:"bot|user", content:"text"}` |
accessToken |
Vertex.AI API access token |
project |
Google Cloud project |
configuration |
optional map for entries like region, model, temperature, topK, topP, maxOutputTokens and other parameters |
context |
optional context and system prompt for the completion |
examples |
optional list of example input/output message pairs to prime the chat (see the call above) |
name | description |
---|---|
value |
result entry from Vertex.AI (containing candidates(author, content), safetyAttributes(categories, scores, blocked), recitationResults(recitationAction, recitations)) |
Streaming API
This procedure apoc.ml.vertexai.stream takes a list of content maps exchanged between user and assistant (with an optional system context), and will return the next message in the flow.
By default, it uses the Gemini AI APIs.
CALL apoc.ml.vertexai.stream([{role: "user", parts: [{text: "translate book in italian"}]}], '<accessToken>', '<projectID>')
value |
---|
|
We can adjust the parameters, for example temperature:
CALL apoc.ml.vertexai.stream([{role: "user", parts: [{text: "translate book in italian"}]}], '<accessToken>', '<projectID>',
{temperature: 0})
which corresponds to the following HTTP request body, where maxOutputTokens, topP and topK have the default values specified above (see the common configuration parameters):
{
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "translate book in italian" }
      ]
    }
  ],
  "generation_config": {
    "temperature": 0,
    "maxOutputTokens": 256,
    "topP": 0.8,
    "topK": 40
  }
}
Custom API
Using this procedure we can potentially invoke any API available with Vertex AI.
To permit maximum flexibility, in this case the first parameter is not manipulated and is used verbatim as the body of the HTTP request,
and the return type is ANY.
CALL apoc.ml.vertexai.custom({
contents: [
{
role: "user",
parts: [
{text: "What is this?"},
{inlineData: {
mimeType: "image/png",
data: '<base64Image>'}
}
]
}
]
},
"<accessToken>",
"<projectId>",
{model: 'gemini-pro-vision'}
)
value |
---|
|
CALL apoc.ml.vertexai.custom({contents: {role: "user", parts: [{text: "translate book in italian"}]}},
"<accessToken>",
"<projectId>",
{endpoint: "https://us-central1-aiplatform.googleapis.com/v1/projects/{project}/locations/us-central1/publishers/google/models/gemini-pro-vision:streamGenerateContent"}
)
value |
---|
|
CALL apoc.ml.vertexai.custom({contents: {role: "user", parts: [{text: "translate book in italian"}]}},
"<accessToken>",
null,
{endpoint: "https://us-central1-aiplatform.googleapis.com/v1/projects/vertex-project-413513/locations/us-central1/publishers/google/models/gemini-pro-vision:streamGenerateContent"}
)
value |
---|
|
CALL apoc.ml.vertexai.custom({
"contents": [
{ "parts": [
{
"text": "translate the word 'book' in italian"
}],
"role": "user"
}]
},
"<accessToken>",
"<projectId>",
{model: 'gemini-1.5-flash-001'}
)
value |
---|
|
Moreover, we can use it with other Google APIs whose endpoints don’t start with https://<region>-aiplatform.googleapis.com, for example the Text-to-Speech API:
CALL apoc.ml.vertexai.custom(
{
input:{
text:'just a test'
},
voice:{
languageCode:'en-US',
name:'en-US-Studio-O'
},
audioConfig:{
audioEncoding:'LINEAR16',
speakingRate:1
}
},
"<accessToken>",
"<projectId>",
{endpoint: "https://texttospeech.googleapis.com/v1/text:synthesize"})