Actions: Mozilla-Ocho/llamafile

Showing runs from all workflows
288 workflow runs

github: add mention of strace and ftrace
Pull Request Labeler #2: Pull request #449 synchronize by mofosyne
May 26, 2024 11:55 13s
github: add mention of strace and ftrace
Pull Request Labeler #1: Pull request #449 opened by mofosyne
May 26, 2024 11:54 18s
Clarify 'man' as manual in README (#376)
CI #248: Commit 564d9fb pushed by jart
May 8, 2024 11:57 46m 8s main
Sync with upstream llama.cpp project
CI #247: Commit 94d0940 pushed by jart
May 8, 2024 05:31 47m 46s main
Fix issues with server send_embeddings function
CI #246: Commit 0e2845a pushed by jart
May 8, 2024 04:48 45m 10s main
Fix server multimodal statistics (#392)
CI #243: Commit a2d159e pushed by jart
May 7, 2024 10:55 49m 59s main
Revert moondream vision language model support
CI #242: Commit aa8c01a pushed by jart
May 7, 2024 06:22 2m 26s main
Faster AVX2 prompt processing for k-quants and IQ4_XS (#394)
CI #241: Commit e6532f7 pushed by jart
May 7, 2024 02:58 2m 18s main
Fix vim modelines (#351)
CI #239: Commit 911d58f pushed by jart
May 6, 2024 18:29 2m 33s main
More conservative strong/em markdown matcher (#352)
CI #237: Commit eecbf89 pushed by jart
May 6, 2024 18:26 3m 1s main
Add special tokens to server llama_decode() inputs
CI #236: Commit 7900294 pushed by jart
May 4, 2024 03:06 2m 28s main
Add --precise and --fast flags
CI #235: Commit bbae0f6 pushed by jart
May 3, 2024 15:53 2m 35s main
Add --precise and --fast flags
CI #234: Commit 62381a7 pushed by jart
May 3, 2024 15:51 3m 40s main
Speed up prediction on CPUs with many cores
CI #232: Commit 89c189e pushed by jart
May 3, 2024 05:23 2m 58s main
Disable an unintended integrity check
CI #230: Commit 2b4da98 pushed by jart
May 2, 2024 07:13 3m 25s main
Make GGML vector ops go faster across hardware
CI #229: Commit c9d7393 pushed by jart
May 2, 2024 05:50 2m 47s main
Upgrade to latest llama.cpp code
CI #228: Commit 0bdea60 pushed by jart
May 1, 2024 05:08 2m 31s main
Add Kahan summation to tinyBLAS Q8/Q4 on ARM
CI #227: Commit 2af3b88 pushed by jart
April 30, 2024 21:32 2m 37s main
Synchronize after every op
CI #226: Commit 9cf7363 pushed by jart
April 30, 2024 03:35 2m 29s main
Synchronize after every op
CI #225: Commit 622924c pushed by jart
April 30, 2024 03:32 2m 30s main
Synchronize after every op
CI #224: Commit cbb94e5 pushed by jart
April 30, 2024 03:31 3m 9s main
Use 4.7x fewer synchronization barriers in GGML
CI #223: Commit bd8c0de pushed by jart
April 29, 2024 05:43 2m 45s main
Rewrite synchronization primitives
CI #222: Commit 6162004 pushed by jart
April 28, 2024 16:29 1m 57s main