Actions: Mozilla-Ocho/llamafile

288 workflow runs
Include ngl parameter in README.md (#267)
CI #132: Commit 6d8790b pushed by jart
February 20, 2024 15:03 · 5m 49s · main

Have less checks in check_args (#241)
CI #129: Commit 84490a7 pushed by jart
February 19, 2024 21:56 · 5m 9s · main

Add sandboxing for the server on Apple Silicon Macs (#261)
CI #128: Commit d8e58ed pushed by jart
February 19, 2024 21:55 · 5m 36s · main

Always enable --embedding mode on server
CI #127: Commit 97eca06 pushed by jart
February 19, 2024 20:20 · 6m 1s · main

Add sandboxing for the server on Apple Silicon Macs
CI #126: Pull request #261 opened by hafta
February 16, 2024 21:57 · 6m 43s · hafta:main

Improve prompt eval time on x86 CPUs
CI #125: Commit 9f5002a pushed by jart
February 12, 2024 21:40 · 8m 48s · main

Remove -fopenmp flag
CI #124: Commit 176f089 pushed by jart
February 12, 2024 19:41 · 6m 59s · main

have less checks in check_args
CI #123: Pull request #241 opened by ahgamut
February 6, 2024 05:43 · 2m 40s · ahgamut:less-checks

Make clickable HTTP server links easier to see
CI #122: Commit 7a8d5ee pushed by jart
February 5, 2024 18:38 · 5m 0s · main

Make AVX mandatory and support VNNI
CI #121: Commit cdd7458 pushed by jart
February 1, 2024 17:42 · 7m 25s · main

Fix issue with recent llamafile-convert change
CI #120: Commit 4fd603e pushed by jart
January 30, 2024 19:34 · 6m 39s · main

Improve llamafile-convert command
CI #119: Commit 703e03a pushed by jart
January 30, 2024 02:01 · 7m 34s · main

Add missing build rule
CI #118: Commit fb150ca pushed by jart
January 27, 2024 21:38 · 7m 4s · main

Release llamafile v0.6.2
CI #117: Commit d4c602d pushed by jart
January 27, 2024 21:19 · 5m 21s · main

Synchronize with llama.cpp 2024-01-27
CI #116: Commit dfd3335 pushed by jart
January 27, 2024 20:40 · 6m 46s · main

Synchronize with llama.cpp 2024-01-26
CI #115: Commit c008e43 pushed by jart
January 27, 2024 19:32 · 5m 53s · main

Synchronize with llama.cpp 2024-01-26
CI #114: Commit b5f245a pushed by jart
January 27, 2024 19:31 · 9m 29s · main

Make GPU auto configuration more resilient
CI #113: Commit e34b35c pushed by jart
January 24, 2024 00:32 · 9m 29s · main

Sanitize -ngl flag on Apple Metal
CI #112: Commit 79b88f8 pushed by jart
January 23, 2024 09:45 · 7m 1s · main

Release llamafile v0.6.1
CI #111: Commit 389c389 pushed by jart
January 20, 2024 08:00 · 4m 56s · main

Fix typo in OpenAI API
CI #110: Commit eb4989a pushed by jart
January 20, 2024 07:03 · 4m 40s · main

Use thread-local register file for matmul speedups (#205)
CI #108: Commit df0b3ff pushed by jart
January 18, 2024 19:50 · 6m 47s · main

use thread-local register file for matmul speedups
CI #107: Pull request #205 synchronize by ahgamut
January 18, 2024 15:25 · 4m 55s · ahgamut:thread-local

use thread-local register file for matmul speedups
CI #106: Pull request #205 opened by ahgamut
January 16, 2024 01:12 · 4m 16s · ahgamut:thread-local

Change BM/BN/BK to template parameters (#203)
CI #104: Commit 4892494 pushed by jart
January 15, 2024 02:17 · 7m 38s · main