Releases · ggerganov/llama.cpp
b3943
llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745)

* refactor llama_batch_get_one
* adapt all examples
* fix simple.cpp
* fix llama_bench
* fix context shifting
* free batch before return
* use common_batch_add, reuse llama_batch in loop
* null-terminated seq_id list
* fix save-load-state example
* fix perplexity
* correct token pos in llama_batch_allocr
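For downstream code, the practical change is that positions and sequence ids must now be set explicitly per token. A minimal sketch, assuming the `common_batch_*` helpers from `common/common.h`:

```cpp
#include <vector>
#include "llama.h"
#include "common.h"

// Decode a prompt by filling one reusable llama_batch explicitly,
// instead of relying on the removed all_pos_0/all_pos_1/all_seq_id fields.
static bool decode_prompt(llama_context * ctx, const std::vector<llama_token> & tokens) {
    llama_batch batch = llama_batch_init((int32_t) tokens.size(), /*embd=*/0, /*n_seq_max=*/1);
    common_batch_clear(batch);
    for (size_t i = 0; i < tokens.size(); ++i) {
        const bool want_logits = i == tokens.size() - 1; // logits only for the last token
        common_batch_add(batch, tokens[i], (llama_pos) i, { 0 }, want_logits);
    }
    const bool ok = llama_decode(ctx, batch) == 0;
    llama_batch_free(batch);
    return ok;
}
```

In a generation loop, the same batch can be cleared and refilled each iteration rather than allocated anew, which is the reuse pattern the release note refers to.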
b3942
rpc : backend refactoring (#9912)

* rpc : refactor backend
  Use structs for RPC request/response messages
* rpc : refactor server
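The shape of the struct-based protocol can be illustrated with a hypothetical request/response pair (the struct and field names below are assumptions for illustration, not the actual definitions in the RPC backend):

```cpp
#include <cstdint>

// Hypothetical fixed-layout RPC messages: instead of hand-serializing each
// field, client and server exchange plain structs with explicit widths.
struct rpc_alloc_buffer_request {
    uint64_t size;          // requested buffer size in bytes
};

struct rpc_alloc_buffer_response {
    uint64_t remote_ptr;    // opaque handle to the buffer on the server
    uint64_t remote_size;   // size actually allocated
};
```

Fixed-layout message structs make the wire format self-documenting and keep request and response parsing from drifting apart between client and server.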
b3941
[SYCL] Add SYCL Backend registry, device and Event Interfaces (#9705)

* implemented missing SYCL event APIs
* sycl : added device and backend reg interfaces
* restructured ggml-sycl.cpp
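With backends moving behind the common registry, devices can be discovered generically rather than through backend-specific calls. A sketch, assuming the `ggml_backend_reg_*` / `ggml_backend_dev_*` registry API in `ggml-backend.h`:

```cpp
#include <cstdio>
#include "ggml-backend.h"

// Enumerate every registered backend and the devices it exposes.
int main() {
    for (size_t i = 0; i < ggml_backend_reg_count(); ++i) {
        ggml_backend_reg_t reg = ggml_backend_reg_get(i);
        printf("backend: %s\n", ggml_backend_reg_name(reg));
        for (size_t j = 0; j < ggml_backend_reg_dev_count(reg); ++j) {
            ggml_backend_dev_t dev = ggml_backend_reg_dev_get(reg, j);
            printf("  device: %s (%s)\n",
                   ggml_backend_dev_name(dev), ggml_backend_dev_description(dev));
        }
    }
    return 0;
}
```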
b3940
add amx kernel for gemm (#8998)

* add intel amx isa detection
* add vnni kernel for gemv cases
* add vnni and amx kernel support for block_q8_0
* fix packing B issue
* enable openmp
* fine tune amx kernel; switch to aten parallel pattern
* add error message for nested parallelism
* add f16 support in ggml-amx
* add amx kernels for QK_K quant formats: Q4_K, Q5_K, Q6_K and IQ4_XS
* fix compiler warning when amx is not enabled
* move ggml_amx_init from ggml.c to ggml-amx/mmq.cpp
* update CMakeLists with -mamx-tile, -mamx-int8 and -mamx-bf16, and add march dependency
* add amx as a ggml-backend; the old include path for immintrin.h has changed to ggml-cpu-impl.h
* apply weight prepacking in the set_tensor method in ggml-backend
* change ggml_backend_buffer_is_host to return false for the amx backend
* fix supports_op; use device reg for the AMX backend
* set .buffer_from_host_ptr to false for the AMX backend
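Runtime ISA detection is what lets a single binary fall back to other kernels when AMX is absent. A minimal sketch of one way to do it via CPUID leaf 7 on x86-64 (the actual `ggml_amx_init` in ggml-amx/mmq.cpp may differ, and on Linux the kernel additionally has to grant tile-data permission):

```cpp
#include <cpuid.h>

// x86-64 only. CPUID.(EAX=7,ECX=0):EDX carries the AMX feature bits:
// bit 22 = AMX-BF16, bit 24 = AMX-TILE, bit 25 = AMX-INT8.
static bool cpu_supports_amx_int8() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        return false; // CPUID leaf 7 not available
    }
    const bool amx_tile = (edx >> 24) & 1;
    const bool amx_int8 = (edx >> 25) & 1;
    return amx_tile && amx_int8;
}
```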
b3939
server : add n_indent parameter for line indentation requirement (#9929)
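A hedged illustration of how the parameter might be passed to the server's `/completion` endpoint, assuming `n_indent` ends generation once a line's leading indentation drops below the given width (the exact semantics are defined in the PR):

```cpp
// Illustrative /completion request body for llama-server; useful for code
// completion, where leaving the current indentation level means the
// function body is done.
const char * request_body = R"({
    "prompt": "def quicksort(arr):\n",
    "n_predict": 256,
    "n_indent": 4
})";
```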
b3938
llama : rename batch_all to batch (#8881)

This commit addresses the TODO in the code to rename the `batch_all` parameter to `batch` in `llama_decode_internal`.
b3936
llama : change warning to debug log
b3935
llama : infill sampling handle very long tokens (#9924)

* llama : infill sampling handle very long tokens
* cont : better indices
b3933
vulkan : add backend registry / device interfaces (#9721)

* vulkan : add backend registry / device interfaces
* llama : print devices used on model load
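Once a backend registers its devices, callers can pick one without any Vulkan-specific code. A sketch, assuming `ggml_backend_dev_by_type` and `ggml_backend_dev_init` from `ggml-backend.h` (the parameter conventions are an assumption):

```cpp
#include <cstdio>
#include "ggml-backend.h"

// Initialize the first available GPU device through the generic interface.
int main() {
    ggml_backend_dev_t dev = ggml_backend_dev_by_type(GGML_BACKEND_DEVICE_TYPE_GPU);
    if (dev == nullptr) {
        fprintf(stderr, "no GPU device registered\n");
        return 1;
    }
    ggml_backend_t backend = ggml_backend_dev_init(dev, /*params=*/nullptr);
    if (backend == nullptr) {
        fprintf(stderr, "failed to initialize device\n");
        return 1;
    }
    printf("using device: %s (%s)\n",
           ggml_backend_dev_name(dev), ggml_backend_dev_description(dev));
    ggml_backend_free(backend);
    return 0;
}
```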
b3932
fix: allocating CPU buffer with size `0` (#9917)
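The pitfall here is that `malloc(0)` may legitimately return NULL, which is indistinguishable from an allocation failure. A minimal sketch of the guard pattern (not the actual patch):

```cpp
#include <cstdlib>

// Treat a zero-size allocation as a valid empty buffer rather than calling
// malloc(0), whose NULL return would otherwise be reported as an error.
static void * alloc_cpu_buffer(size_t size, bool & ok) {
    ok = true;
    if (size == 0) {
        return nullptr; // valid empty buffer, not a failure
    }
    void * data = malloc(size);
    ok = data != nullptr;
    return data;
}
```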