Asked by: 小点点

How can I use kfp Artifacts with sklearn?


I am trying to build a custom pipeline with Kubeflow Pipelines (kfp) components on Vertex AI (Google Cloud Platform). The pipeline steps are:

  1. Read data from a BigQuery table
  2. Create a pandas DataFrame
  3. Train a K-Means model on the DataFrame
  4. Deploy the model to an endpoint

Here is the code for step 2. I had to use Output[Artifact] as the output type, because the pd.DataFrame type I found suggested here did not work.

@component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery","pandas","pyarrow"])
def create_dataframe(
    project: str,
    region: str,
    destination_dataset: str,
    destination_table_name: str,
    df: Output[Artifact],
):
    
    from google.cloud import bigquery
    
    client = bigquery.Client(project=project, location=region)
    dataset_ref = bigquery.DatasetReference(project, destination_dataset)
    table_ref = dataset_ref.table(destination_table_name)
    table = client.get_table(table_ref)

    df = client.list_rows(table).to_dataframe()

Here is the code for step 3:

@component(base_image="python:3.9", packages_to_install=['sklearn'])
def kmeans_training(
        dataset: Input[Artifact],
        model: Output[Model],
        num_clusters: int,
):
    from sklearn.cluster import KMeans
    model = KMeans(num_clusters, random_state=220417)
    model.fit(dataset)

The pipeline run stops with the following error:

TypeError: float() argument must be a string or a number, not 'Artifact'

Is it possible to convert the Artifact into a numpy array or a DataFrame?
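The TypeError happens because `KMeans.fit` expects numeric, 2-D array-like input (a numpy array or a DataFrame), while `dataset` here is the kfp Artifact wrapper object itself, not the data it points to. A minimal sketch of the kind of input `fit` does accept (toy data, unrelated to the pipeline):

```python
import numpy as np
from sklearn.cluster import KMeans

# KMeans.fit wants a numeric 2-D array-like; an Artifact object is neither
X = np.array([[1.0], [1.2], [9.0], [9.3]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```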


1 Answer

Anonymous user

I found a solution with the code below. Now I can use the output of step 2 to train the model in step 3.

Step 2:

# Assumes the kfp v2 DSL imports at module level:
# from kfp.v2.dsl import component, Dataset, Output
@component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery","pandas","pyarrow"])
def create_dataframe(
    project: str,
    region: str,
    destination_dataset: str,
    destination_table_name: str,
    df: Output[Dataset],
):
    
    from google.cloud import bigquery
    
    client = bigquery.Client(project=project, location=region)
    dataset_ref = bigquery.DatasetReference(project, destination_dataset)
    table_ref = dataset_ref.table(destination_table_name)
    table = client.get_table(table_ref)

    train = client.list_rows(table).to_dataframe()
    
    train.to_csv(df.path)
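What makes this work is that the two components share data through the artifact's path: step 2 writes a CSV to `df.path`, and step 3 reads it back from `dataset.path`. A quick local sketch of the same round trip, with a temp file standing in for the artifact path:

```python
import os
import tempfile
import pandas as pd

train = pd.DataFrame({"x": [1.0, 2.0, 10.0], "y": [0.5, 0.7, 9.9]})

# df.path would be supplied by the pipeline runtime; a temp file stands in here
path = os.path.join(tempfile.mkdtemp(), "data.csv")
train.to_csv(path, index=False)  # index=False avoids an extra "Unnamed: 0" column

restored = pd.read_csv(path)
```

Note that the answer's code calls `to_csv(df.path)` without `index=False`, so the written row index comes back as an extra feature column in step 3 unless it is suppressed on write or dropped on read (e.g. `pd.read_csv(dataset.path, index_col=0)`).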

Step 3:

# Assumes the kfp v2 DSL imports at module level:
# from kfp.v2.dsl import component, Dataset, Input, Model, Output
@component(base_image="python:3.9", packages_to_install=['scikit-learn','pandas','joblib'])
def kmeans_training(
        dataset: Input[Dataset],
        model_artifact: Output[Model],
        num_clusters: int,
):
    from sklearn.cluster import KMeans
    import pandas as pd
    from joblib import dump
    
    data = pd.read_csv(dataset.path)
    
    model = KMeans(num_clusters, random_state=220417)
    model.fit(data)
    
    dump(model, model_artifact.path)
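For the deployment step (step 4) to work, the model dumped to `model_artifact.path` has to be reloadable. A small sketch of the joblib dump/load round trip, again with a temp file standing in for the artifact path:

```python
import os
import tempfile
import numpy as np
from joblib import dump, load
from sklearn.cluster import KMeans

X = np.array([[1.0], [1.1], [9.0], [9.2]])
model = KMeans(n_clusters=2, n_init=10, random_state=220417).fit(X)

# model_artifact.path would come from the Output[Model] parameter; a temp file stands in
path = os.path.join(tempfile.mkdtemp(), "model.joblib")
dump(model, path)

restored = load(path)
labels = restored.predict(X)
```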