feat: implement inference server by using vllm #205

Triggered via: pull request, October 10, 2024 10:15
Status: Cancelled
Total duration: 4d 12h 19m 52s
Artifacts: none
Workflow file: preset-image-build-1ES.yml
on: pull_request

Jobs:
  determine-models (0s)
  Matrix: build-models
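
The job graph implies a two-stage workflow: determine-models computes the set of model images the PR affects, and build-models fans out as a matrix over that set. Below is a minimal sketch of that shape. The trigger, job names, and the workflow name (taken from the concurrency group in the error below) come from this page; the step contents, the matrix output name, and the presets/models.json path are illustrative assumptions, not the repository's actual file.

name: Build and Push Preset Models 1ES
on: pull_request

jobs:
  determine-models:
    runs-on: ubuntu-latest
    outputs:
      # Assumed output: a JSON list of model names to build
      matrix: ${{ steps.models.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
      # Hypothetical step that decides which models this PR touches
      - id: models
        run: echo "matrix=$(cat presets/models.json)" >> "$GITHUB_OUTPUT"

  build-models:
    needs: determine-models
    strategy:
      matrix:
        model: ${{ fromJson(needs.determine-models.outputs.matrix) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical per-model image build
      - run: docker build -t preset-models/${{ matrix.model }} .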

Annotations

1 error
determine-models: Canceling since a higher priority waiting request for 'Build and Push Preset Models 1ES-zhuangqh/support-vllm' exists
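
This is the message GitHub Actions emits when a newer run in the same concurrency group supersedes one that is queued or in progress; here, a later push to the zhuangqh/support-vllm branch displaced this run. The group name in the message suggests a key built from the workflow name plus the head branch. A sketch of the kind of concurrency block that produces this behavior follows; the exact key expression is an assumption.

concurrency:
  # Assumed key; for this run it would expand to
  # 'Build and Push Preset Models 1ES-zhuangqh/support-vllm', matching the message
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true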