Description
Hi, I'm running Kubeflow (v0.5) on AWS. Using Pipelines, I want to use the Tensorboard button in the pipeline's metadata result.
I saved my pipeline result into a private S3 bucket with the following code:
import json

metadata = {
    'outputs': [{
        'type': 'tensorboard',
        'source': 's3://' + bucket_name + '/' + destination,
    }]
}

with open('/mlpipeline-ui-metadata.json', 'w') as f:
    json.dump(metadata, f)
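For reference, here is a self-contained version of the snippet above with placeholder values for the bucket and destination, plus a sanity check that the emitted file is valid JSON pointing at the intended S3 prefix (note: Kubeflow Pipelines expects the file at /mlpipeline-ui-metadata.json inside the step's container; this sketch writes to the working directory so it can run anywhere):

```python
import json

bucket_name = "my-bucket"        # placeholder
destination = "logs/1564392073"  # placeholder run prefix

# Kubeflow Pipelines UI metadata describing a Tensorboard artifact.
metadata = {
    "outputs": [{
        "type": "tensorboard",
        "source": "s3://" + bucket_name + "/" + destination,
    }]
}

with open("mlpipeline-ui-metadata.json", "w") as f:
    json.dump(metadata, f)

# Sanity check: the file round-trips and the source is the S3 prefix.
with open("mlpipeline-ui-metadata.json") as f:
    loaded = json.load(f)
print(loaded["outputs"][0]["source"])
```

Running this prints `s3://my-bucket/logs/1564392073`, matching the response shown below.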
But when I try opening the Artifacts result of that operation, I only get the spinning loader and nothing happens. If I look at the network requests, I find a POST request with the following response:
{"outputs": [{"type": "tensorboard", "source": "s3://BUCKET_NAME/FOLDER1/FOLDER2/1564392073"}]}
I tried using GCS instead to save the file, and in that case the Tensorboard button appears and I can start the TB instance. (It fails because I don't have any GCP credentials: MountVolume.SetUp failed for volume "gcp-credentials" : secrets "user-gcp-sa" not found.)
How should I save my summary writers to be able to start the TB instance from the pipeline artifacts on AWS/S3?
Thank you for your help.
EDIT:
I found issue kubeflow/pipelines#337 in the Kubeflow Pipelines project. Can someone confirm that Tensorboard through Pipelines on AWS, with data stored in S3, is currently not supported?