843b64a644 | llm: use xiomo model | 2025-04-30 11:22:52 -04:00
b00ad9a33a | deepcoder 14b | 2025-04-14 13:11:40 -04:00
db78740db3 | change llm model | 2025-04-10 11:15:18 -04:00
fe85083810 | llm: model stuff | 2025-04-08 00:22:12 -04:00
3653e06c7d | create single function to optimize for system | 2025-04-07 14:33:34 -04:00
b764d2de45 | move optimizeWithFlags | 2025-04-07 14:31:56 -04:00
a688d9e264 | fmt | 2025-04-02 23:19:06 -04:00
11164f0859 | llm: use finetuned model | 2025-04-02 10:11:41 -04:00
06feb4e1e2 | gemma-3 27b | 2025-03-31 21:52:14 -04:00
2d47c441fe | llm: use Q4_0 quants (faster) | 2025-03-31 18:33:24 -04:00
c31635bdd7 | format | 2025-03-31 17:04:41 -04:00
1482429a00 | llm: enable AVX2 | 2025-03-31 12:02:38 -04:00
6cc3d96362 | llama-cpp: compiler optimizations | 2025-03-31 11:17:56 -04:00
d5ac5c8cd8 | gemma-3 12b | 2025-03-31 10:31:29 -04:00
d774568e01 | auth for llm | 2025-03-31 10:29:36 -04:00
d34793c18f | add llama-server | 2025-03-31 03:59:54 -04:00