vertex_ai


Vertex AI API Client #




Unofficial Dart client for the Vertex AI API.
Features #

Generative AI

Text models
Text chat models
Text embeddings models


Matching Engine

Create and manage indexes and index endpoints
Query indexes



Generative AI #
Generative AI support on Vertex AI (also known as genai) gives you access to Google's large generative AI models so you can use them in your AI-powered applications.
Authentication #
The VertexAIGenAIClient delegates authentication to the
googleapis_auth package.
To create an instance of VertexAIGenAIClient you need to provide an
AuthClient
instance.
There are several ways to obtain an AuthClient depending on your use case.
Check out the googleapis_auth
package documentation for more details.
Example using a service account JSON:
final serviceAccountCredentials = ServiceAccountCredentials.fromJson(
  json.decode(serviceAccountJson),
);
final authClient = await clientViaServiceAccount(
  serviceAccountCredentials,
  [VertexAIGenAIClient.cloudPlatformScope],
);
final vertexAi = VertexAIGenAIClient(
  authHttpClient: authClient,
  project: 'your-project-id',
);
The service account should have the following
permission:

aiplatform.endpoints.predict

The required OAuth2 scope is:

https://www.googleapis.com/auth/cloud-platform (you can use the constant
VertexAIGenAIClient.cloudPlatformScope)
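Alternatively, if your environment already has Application Default Credentials configured (for example when running on Google Cloud, or locally after running gcloud auth application-default login), you can obtain an AuthClient without a service account JSON. A minimal sketch using the clientViaApplicationDefaultCredentials helper from googleapis_auth:

import 'package:googleapis_auth/auth_io.dart';

// Uses the Application Default Credentials found in the environment.
final authClient = await clientViaApplicationDefaultCredentials(
  scopes: [VertexAIGenAIClient.cloudPlatformScope],
);
final vertexAi = VertexAIGenAIClient(
  authHttpClient: authClient,
  project: 'your-project-id',
);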

Text models #
PaLM API for text
is fine-tuned for language tasks such as classification, summarization, and
entity extraction.
final res = await vertexAi.text.predict(
  prompt: 'What is the purpose of life?',
);
Chat models #
PaLM API for chat is fine-tuned for multi-turn chat, where the model keeps track of previous messages in the chat and uses them as context for generating new responses.
final res = await vertexAi.chat.predict(
  context: 'I want you to act as Socrates.',
  messages: const [
    VertexAIChatModelMessage(
      author: 'USER',
      content: 'Is justice necessary in a society?',
    ),
  ],
);
Text embeddings #
The Text Embedding API
generates vector embeddings for input text. You can use embeddings for tasks
like semantic search, recommendation, classification, and outlier detection.
final res = await vertexAi.textEmbeddings.predict(
  content: [
    const VertexAITextEmbeddingsModelContent(
      taskType: VertexAITextEmbeddingsModelTaskType.retrievalDocument,
      title: 'The Paradox of Wisdom',
      content: 'The only true wisdom is in knowing you know nothing',
    ),
  ],
);
Matching Engine #
Vertex AI Matching Engine
provides the industry's leading high-scale, low-latency vector database. These vector databases are commonly referred to as vector similarity-matching or approximate nearest neighbor (ANN) services.
Matching Engine provides tooling to build use cases that match semantically
similar items. More specifically, given a query item, Matching Engine finds the
most semantically similar items to it from a large corpus of candidate items.
Authentication #
The VertexAIMatchingEngineClient delegates authentication to the
googleapis_auth package.
To create an instance of VertexAIMatchingEngineClient you need to provide
an AuthClient
instance.
There are several ways to obtain an AuthClient depending on your use case.
Check out the googleapis_auth
package documentation for more details.
Example using a service account JSON:
final serviceAccountCredentials = ServiceAccountCredentials.fromJson(
  json.decode(serviceAccountJson),
);
final authClient = await clientViaServiceAccount(
  serviceAccountCredentials,
  [VertexAIMatchingEngineClient.cloudPlatformScope],
);
final matchingEngine = VertexAIMatchingEngineClient(
  authHttpClient: authClient,
  project: 'your-project-id',
  location: 'europe-west1',
);
To be able to create and manage indexes and index endpoints, the service
account should have the following permissions:

aiplatform.indexes.create
aiplatform.indexes.get
aiplatform.indexes.list
aiplatform.indexes.update
aiplatform.indexes.delete
aiplatform.indexEndpoints.create
aiplatform.indexEndpoints.get
aiplatform.indexEndpoints.list
aiplatform.indexEndpoints.update
aiplatform.indexEndpoints.delete
aiplatform.indexEndpoints.deploy
aiplatform.indexEndpoints.undeploy

If you just want to query an index endpoint, the service account only needs:

aiplatform.indexEndpoints.queryVectors

The required OAuth2 scope is:

https://www.googleapis.com/auth/cloud-platform (you can use the constant
VertexAIMatchingEngineClient.cloudPlatformScope)
See: https://cloud.google.com/vertex-ai/docs/generative-ai/access-control

Create an index #

1. Generate embeddings for your data and save them to a file (see supported formats here); a minimal sketch of the expected file contents follows the create call below.
2. Create a Cloud Storage bucket and upload the embeddings file.
3. Create the index:

final operation = await matchingEngine.indexes.create(
  displayName: 'test-index',
  description: 'This is a test index',
  metadata: const VertexAINearestNeighborSearch(
    contentsDeltaUri: 'gs://bucket-name/path-to-index-dir',
    config: VertexAINearestNeighborSearchConfig(
      dimensions: 768,
      algorithmConfig: VertexAITreeAhAlgorithmConfig(),
    ),
  ),
);
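For step 1, each record in the embeddings file pairs a datapoint id with its embedding vector (see the supported formats link above for the exact schema and any optional fields). A minimal sketch of writing such a file as JSON Lines with core Dart, using placeholder ids and vectors:

import 'dart:convert';
import 'dart:io';

// Placeholder datapoints; in practice the vectors come from an embeddings model
// (e.g. the text embeddings model shown in the Generative AI section).
final datapoints = [
  {'id': 'doc-1', 'embedding': [0.1, 0.2, 0.3]},
  {'id': 'doc-2', 'embedding': [0.4, 0.5, 0.6]},
];
// One JSON object per line.
await File('embeddings.json').writeAsString(
  datapoints.map(jsonEncode).join('\n'),
);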
To check the status of the operation:
final op = await matchingEngine.indexes.operations.get(
  name: operation.name,
);
print(op.done);
Get index information #
final index = await matchingEngine.indexes.get(id: '5086059315115065344');
You can also list all indexes:
final indexes = await matchingEngine.indexes.list();
Update an index #
final res = await matchingEngine.indexes.update(
  id: '5086059315115065344',
  metadata: const VertexAIIndexRequestMetadata(
    contentsDeltaUri: 'gs://bucket-name/path-to-index-dir',
    isCompleteOverwrite: true,
  ),
);
Create an index endpoint #
final operation = await matchingEngine.indexEndpoints.create(
  displayName: 'test-index-endpoint',
  description: 'This is a test index endpoint',
  publicEndpointEnabled: true,
);
To check the status of the operation:
final op = await matchingEngine.indexEndpoints.operations.get(
  name: operation.name,
);
print(op.done);
Deploy an index to an index endpoint #
final operation = await matchingEngine.indexEndpoints.deployIndex(
  indexId: '5086059315115065344',
  indexEndpointId: '8572232454792807200',
  deployedIndexId: 'deployment1',
  deployedIndexDisplayName: 'test-deployed-index',
);
You can check the status of the operation as shown above.
If you want to enable autoscaling:
final operation = await matchingEngine.indexEndpoints.deployIndex(
  indexId: '5086059315115065344',
  indexEndpointId: '8572232454792807200',
  deployedIndexId: 'deployment1',
  deployedIndexDisplayName: 'test-deployed-index',
  automaticResources: const VertexAIAutomaticResources(
    minReplicaCount: 2,
    maxReplicaCount: 10,
  ),
);
Get index endpoint information #
final ie = await matchingEngine.indexEndpoints.get(id: '8572232454792807200');
You can also list all index endpoints:
final indexEndpoints = await matchingEngine.indexEndpoints.list();
Mutate index endpoint #
final operation = await matchingEngine.indexEndpoints.mutateDeployedIndex(
  indexEndpointId: '8572232454792807200',
  deployedIndexId: 'deployment1',
  automaticResources: const VertexAIAutomaticResources(
    minReplicaCount: 2,
    maxReplicaCount: 20,
  ),
);
Undeploy an index from an index endpoint #
final operation = await matchingEngine.indexEndpoints.undeployIndex(
  indexEndpointId: '8572232454792807200',
  deployedIndexId: 'deployment1',
);
Delete an index endpoint #
final operation = await matchingEngine.indexEndpoints.delete(
  id: '8572232454792807200',
);
Delete an index #
final operation = await matchingEngine.indexes.delete(
  id: '5086059315115065344',
);
Query an index using the index endpoint #
Once you've created the index, you can run queries to get its nearest neighbors.
Note that you will need a different VertexAIMatchingEngineClient for calling this method, as the public query endpoint has a different rootUrl than the rest of the API (e.g. https://xxxxxxxxxx.europe-west1-xxxxxxxxxxxx.vdb.vertexai.goog).
Check the VertexAIIndexEndpoint.publicEndpointDomainName of your index endpoint by calling VertexAIMatchingEngineClient.indexEndpoints.get, then create a new client with VertexAIMatchingEngineClient.rootUrl set to that value (note that you need to prepend https:// to the domain name).
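For example, the root URL can be derived from the index endpoint details returned by the get call shown earlier (a short sketch reusing the sample endpoint id from above):

// Look up the public endpoint domain name and build the query root URL from it.
final ie = await matchingEngine.indexEndpoints.get(id: '8572232454792807200');
final rootUrl = 'https://${ie.publicEndpointDomainName}/';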
final matchingEngineQuery = VertexAIMatchingEngineClient(
  authHttpClient: authClient,
  project: Platform.environment['VERTEX_AI_PROJECT_ID']!,
  rootUrl: 'https://1451028333.europe-west1-706285145444.vdb.vertexai.goog/',
);
final res = await matchingEngineQuery.indexEndpoints.findNeighbors(
  indexEndpointId: '8572232454792807200',
  deployedIndexId: 'deployment1',
  queries: const [
    VertexAIFindNeighborsRequestQuery(
      datapoint: VertexAIIndexDatapoint(
        datapointId: 'your-datapoint-id',
        featureVector: [-0.0024800552055239677, 0.011974085122346878, ...],
      ),
      neighborCount: 3,
    ),
  ],
);
Docs: https://cloud.google.com/vertex-ai/docs/matching-engine/query-index-public-endpoint
License #
Vertex AI API Client is licensed under the MIT License.
