Asked by: 小点点

Building a Vertex AI pipeline with a custom training container and a custom model serving container


I want to be able to train a model using a training application container that I've built and pushed to my Artifact Registry. I want to deploy the model with a Flask app and a /predict route that can handle some logic, not necessarily just run prediction on the input JSON. As I understand it, the container also needs a /healthz route. So basically I want a pipeline that runs a training job on the training container I made, and then deploys the model using the Flask app inside the model serving container I made. Looking around Stack Overflow, I wondered whether the pipeline from this question has the layout I'll eventually want. So, something like this:

import kfp
from kfp.v2 import compiler
from kfp.v2.dsl import component
from kfp.v2.google import experimental
from google.cloud import aiplatform
from google_cloud_pipeline_components import aiplatform as gcc_aip

@kfp.dsl.pipeline(name=pipeline_name, pipeline_root=pipeline_root_path)
def pipeline():
        training_job_run_op = gcc_aip.CustomPythonPackageTrainingJobRunOp(
            project=project_id,
            display_name=training_job_name,
            model_display_name=model_display_name,
            python_package_gcs_uri=python_package_gcs_uri,
            python_module=python_module,
            container_uri=container_uri,
            staging_bucket=staging_bucket,
            model_serving_container_image_uri=model_serving_container_image_uri)

        # Upload model
        model_upload_op = gcc_aip.ModelUploadOp(
            project=project_id,
            display_name=model_display_name,
            artifact_uri=output_dir,
            serving_container_image_uri=model_serving_container_image_uri,
        )
        model_upload_op.after(training_job_run_op)

        # Deploy model
        model_deploy_op = gcc_aip.ModelDeployOp(
            project=project_id,
            model=model_upload_op.outputs["model"],
            endpoint=aiplatform.Endpoint(
                endpoint_name='0000000000').resource_name,
            deployed_model_display_name=model_display_name,
            machine_type="n1-standard-2",
            traffic_percentage=100)

compiler.Compiler().compile(pipeline_func=pipeline,
                            package_path=pipeline_spec_path)

I intend for model_serving_container_image_uri and serving_container_image_uri to both refer to the URI of the model serving container I'm going to make. I've already built a training container that trains a model and saves saved_model.pb to Google Cloud Storage. Besides having a Flask app that handles the prediction and health-check routes, and a Dockerfile that exposes a port for the Flask app, what else do I need to do to make sure the model serving container works in this pipeline? Where in the code do I load the model from GCS? In the Dockerfile? How is the model serving container supposed to work so that everything goes smoothly as the pipeline runs? I'm having trouble finding any tutorials or examples of exactly what I'm trying to do anywhere, even though this seems like a very common scenario.
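On the question of where the model gets loaded: when Vertex AI runs a custom serving container, it passes the model artifact directory to the container through the AIP_STORAGE_URI environment variable (and the serving port through AIP_HTTP_PORT), so the loading belongs in the Flask app's startup code, not in the Dockerfile. A minimal sketch, with the model loading and prediction logic left as hypothetical placeholders:

```python
# Minimal sketch of the serving container's Flask app. Vertex AI sets
# AIP_STORAGE_URI to the GCS directory holding the model artifacts and
# expects the predict/health routes declared at model upload time.
import os

from flask import Flask, jsonify, request

app = Flask(__name__)

# Set by Vertex AI at deploy time; the default here is a placeholder.
MODEL_DIR = os.environ.get("AIP_STORAGE_URI", "gs://my-bucket/model")

# Placeholder for the real loader, e.g. tf.saved_model.load(MODEL_DIR)
# for the saved_model.pb produced by the training container.
model = None


@app.route("/healthz")
def healthz():
    # Vertex AI polls this route; return 200 once the model is ready.
    return "ok", 200


@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json(force=True)["instances"]
    # Custom pre/post-processing logic goes here before/after calling
    # the model; this placeholder just echoes the instances back.
    predictions = [{"echo": inst} for inst in instances]
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("AIP_HTTP_PORT", 8080)))
```

The Dockerfile then only needs to install Flask plus the model framework, copy the app in, and expose the port; the actual model weights come from GCS at container startup, not from the image.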

To that end, I tried using the following pipeline:

import kfp
from kfp.v2 import compiler
from kfp.v2.dsl import component
from kfp.v2.google import experimental
from google.cloud import aiplatform
from google_cloud_pipeline_components import aiplatform as gcc_aip

@kfp.dsl.pipeline(name=pipeline_name, pipeline_root=pipeline_root_path)
def pipeline(
        project: str = [redacted project ID],
        display_name: str = "custom-pipe",
        model_display_name: str = "test_model",
        training_container_uri: str = "us-central1-docker.pkg.dev/[redacted project ID]/custom-training-test",
        model_serving_container_image_uri: str = "us-central1-docker.pkg.dev/[redacted project ID]/custom-model-serving-test",
        model_serving_container_predict_route: str = "/predict",
        model_serving_container_health_route: str = "/healthz",
        model_serving_container_ports: str = "8080"
):
        training_job_run_op = gcc_aip.CustomContainerTrainingJobRunOp(
            display_name = display_name,
            container_uri=training_container_uri,
            model_serving_container_image_uri=model_serving_container_image_uri,
            model_serving_container_predict_route = model_serving_container_predict_route,
            model_serving_container_health_route = model_serving_container_health_route,
            model_serving_container_ports = model_serving_container_ports)

        # Upload model
        model_upload_op = gcc_aip.ModelUploadOp(
            project=project,
            display_name=model_display_name,
            serving_container_image_uri=model_serving_container_image_uri,
        )
        model_upload_op.after(training_job_run_op)

        # Deploy model
#        model_deploy_op = gcc_aip.ModelDeployOp(
#            project=project,
#            model=model_upload_op.outputs["model"],
#            endpoint=aiplatform.Endpoint(
#                endpoint_name='0000000000').resource_name,
#            deployed_model_display_name=model_display_name,
#            machine_type="n1-standard-2",
#            traffic_percentage=100)

This fails with:

google.api_core.exceptions.PermissionDenied: 403 Permission 'aiplatform.trainingPipelines.create' denied on resource '//aiplatform.googleapis.com/projects/u15c36a5b7a72fabfp-tp/locations/us-central1' (or it may not exist).

even though my service account has the Viewer and Kubernetes Engine Admin roles needed for AI Platform pipeline jobs. My training container uploads my model to Google Cloud Storage, and I've pulled my model serving container and used it to serve predictions locally at /predict.


1 Answer

Anonymous user

Based on the 403 error, please make sure that:

  • You have enabled the AI Platform APIs: https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com
  • Your service account is configured correctly: https://cloud.google.com/vertex-ai/docs/general/custom-service-account
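On the second point, the aiplatform.trainingPipelines.create permission in the 403 is covered by the Vertex AI User role, which the Viewer and Kubernetes Engine Admin roles do not include. A hypothetical grant (the project ID and service-account e-mail below are placeholders):

```shell
# Grant the Vertex AI User role to the service account the pipeline runs as.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-pipeline-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"
```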

This Google-authored sample may be a good point of comparison: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_model_upload_predict_evaluate.ipynb
