Actions: ywang96/vllm

Showing runs from all workflows
236 workflow runs

[BugFix] Prevent LLM.encode for non-generation Models (#5184)
clang-format #3: Commit 044793d pushed by ywang96
June 2, 2024 01:02 · 24s · main

[BugFix] Prevent LLM.encode for non-generation Models (#5184)
yapf #11: Commit 044793d pushed by ywang96
June 2, 2024 01:02 · 1m 12s · main

[Feature][Kernel] Support bitsandbytes quantization and QLoRA (#4776)
clang-format #2: Commit b9c0605 pushed by ywang96
June 1, 2024 22:19 · 25s · main

[Core][Bugfix]: fix prefix caching for blockv2 (#4764)
clang-format #1: Commit e64fde4 pushed by ywang96
May 24, 2024 22:39 · 22s · main

[Core][Bugfix]: fix prefix caching for blockv2 (#4764)
mypy #9: Commit e64fde4 pushed by ywang96
May 24, 2024 22:39 · 29s · main

[Core][Bugfix]: fix prefix caching for blockv2 (#4764)
ruff #9: Commit e64fde4 pushed by ywang96
May 24, 2024 22:39 · 22s · main

[Core][Bugfix]: fix prefix caching for blockv2 (#4764)
yapf #9: Commit e64fde4 pushed by ywang96
May 24, 2024 22:39 · 1m 15s · main

[Core] Add MultiprocessingGPUExecutor (#4539)
ruff #8: Commit 676a999 pushed by ywang96
May 14, 2024 18:36 · 23s · main

[Core] Add MultiprocessingGPUExecutor (#4539)
mypy #8: Commit 676a999 pushed by ywang96
May 14, 2024 18:36 · 34s · main

[Core] Add MultiprocessingGPUExecutor (#4539)
yapf #8: Commit 676a999 pushed by ywang96
May 14, 2024 18:36 · 1m 10s · main

[Misc] Use vllm-flash-attn instead of flash-attn (#4686)
ruff #7: Commit 89579a2 pushed by ywang96
May 8, 2024 20:17 · 21s · main

[Misc] Use vllm-flash-attn instead of flash-attn (#4686)
mypy #7: Commit 89579a2 pushed by ywang96
May 8, 2024 20:17 · 33s · main

[Misc] Use vllm-flash-attn instead of flash-attn (#4686)
yapf #7: Commit 89579a2 pushed by ywang96
May 8, 2024 20:17 · 1m 17s · main

[Core][Distributed] use absolute path for library file (#4271)
yapf #6: Commit c1b4e41 pushed by ywang96
April 23, 2024 00:30 · 1m 9s · main

[Kernel] Add punica dimension for Swallow-MS-7B LoRA (#4134)
mypy #5: Commit a532225 pushed by ywang96
April 17, 2024 23:26 · 30s · main

[Kernel] Add punica dimension for Swallow-MS-7B LoRA (#4134)
ruff #5: Commit a532225 pushed by ywang96
April 17, 2024 23:26 · 23s · main

[Kernel] Add punica dimension for Swallow-MS-7B LoRA (#4134)
yapf #5: Commit a532225 pushed by ywang96
April 17, 2024 23:26 · 59s · main

LlavaNext
ruff #4: Pull request #1 synchronize by ywang96
April 15, 2024 06:05 · 26s · llavanext