Package Details: llama.cpp b5233-1
Git Clone URL: http://aur.archlinux.org/llama.cpp.git (read-only)
Package Base: llama.cpp
Description: Port of Facebook's LLaMA model in C/C++
Upstream URL: http://github.com/ggerganov/llama.cpp
Licenses: MIT
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 7
Popularity: 1.19
First Submitted: 2024-10-26 15:38 (UTC)
Last Updated: 2025-04-30 16:21 (UTC)
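For reference, a minimal sketch of the usual workflow for building this package from the clone URL above (assuming a standard makepkg setup; an AUR helper such as paru or yay works just as well):

git clone https://aur.archlinux.org/llama.cpp.git
cd llama.cpp
makepkg -si    # review the PKGBUILD first, then build and install with the dependencies listed below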
Dependencies (10)
- curl (curl-gitAUR, curl-c-aresAUR)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-linux4AUR, glibc-eacAUR)
- python (python37AUR, python311AUR, python310AUR)
- python-numpy (python-numpy-gitAUR, python-numpy1AUR, python-numpy-mkl-binAUR, python-numpy-mkl-tbbAUR, python-numpy-mklAUR)
- python-sentencepieceAUR (python-sentencepiece-gitAUR)
- cmake (cmake-gitAUR, cmake3AUR) (make)
- git (git-gitAUR, git-glAUR) (make)
- openmp (make)
- python-pytorch (python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional)
Required by (0)
Sources (4)
Latest Comments
txtsd commented on 2024-10-26 20:14 (UTC) (edited on 2024-12-06 14:14 (UTC) by txtsd)
txtsd commented on 2024-10-26 15:25 (UTC)
I'm merging this package into llama.cpp since that's the upstream name, and . is allowed in Arch package names. llama.cpp-* packages will be separate packages. I don't think anyone wants to install the 20GB+ dependencies and compile all variants just to get one part of the split package.
abitrolly commented on 2024-10-26 09:28 (UTC)
@txtsd there is also a more recent http://aur.archlinux.org/pkgbase/llama.cpp-git that also fails, but at least most of the dependencies seem to be in place.
txtsd commented on 2024-10-25 16:53 (UTC)
Okay. I'm overhauling the PKGBUILD. It's a complete mess at the moment. I've got basic llama.cpp building. I'll add CUDA, OpenCL, and Vulkan, and then push the next version.
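For context, a rough sketch of how those backend variants are typically selected at configure time in recent llama.cpp; the flag names are taken from upstream's current GGML_* CMake options and are an assumption about what the final PKGBUILD will use:

cmake -B build-cuda   -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON
cmake -B build-vulkan -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_SHARED_LIBS=ON -DGGML_VULKAN=ON
cmake --build build-cuda
cmake --build build-vulkan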
txtsd commented on 2024-10-25 16:06 (UTC) (edited on 2024-10-25 16:08 (UTC) by txtsd)
@heikkiyp I'm unable to get it to build with your PKGBUILD
See: http://bpa.st/Y56Q
heikkiyp commented on 2024-08-31 00:41 (UTC) (edited on 2024-08-31 00:45 (UTC) by heikkiyp)
With the following changes I managed to get the build to work:
1) The renaming of the main and server binaries was removed, as those are obsolete references.
2) Building package_llama-cpp-cuda no longer supports LLAMA_CUBLAS; it has been replaced with GGML_CUDA.
3) When building the main package, the directory name was changed to match the tar filename (it no longer has the master part).
4) The source URL was changed to use the tags URL instead of the archive URL.
5) pkgver updated.
6) sha256sum calculated for the new pkgver.
Source: http://github.com/ggerganov/llama.cpp/archive/refs/tags/b3647.tar.gz
PKGBUILD:
#!/usr/bin/env -S sh -c 'nvchecker -cnvchecker.toml --logger=json | jq -r '\''.version | sub("^v"; "") | split("-") | .[-1]'\'' | xargs -i{} sed -i "s/^\\(pkgver=\\).*/\\1{}/" $0'
# shellcheck shell=bash disable=SC2034,SC2154
# ex: nowrap
# Maintainer: Wu Zhenyu <wuzhenyu@ustc.edu>
_pkgname=llama.cpp
pkgbase=llama-cpp
pkgname=("$pkgbase" "$pkgbase-cuda" "$pkgbase-opencl")
pkgver=b3647
pkgrel=1
pkgdesc="Port of Facebook's LLaMA model in C/C++"
arch=(x86 x86_64 arm aarch64)
url=http://github.com/ggerganov/llama.cpp
depends=(openmpi python-numpy python-sentencepiece)
makedepends=(cmake intel-oneapi-dpcpp-cpp cuda intel-oneapi-mkl clblast)
license=(GPL3)
source=("$url/archive/refs/tags/$pkgver.tar.gz")
sha256sums=('03514a396f1366e5f03e181f5f3b9b3a4a595a715b1c505fb6af7674a177a4ed')
_build() {
    cd "$_pkgname-$pkgver" || return 1
    # http://github.com/ggerganov/llama.cpp/pull/2277
    sed -i 's/NOT DepBLAS/NOT DepBLAS_FOUND/' CMakeLists.txt
    cmake -B$1 -DCMAKE_INSTALL_PREFIX=/usr -DLLAMA_MPI=ON -DBUILD_SHARED_LIBS=ON \
        ${*:2:$#}
    cmake --build $1
}

_package() {
    DESTDIR="$pkgdir" cmake --install $1
}

package_llama-cpp() {
    local _arch data_type_model
    _arch="$(uname -m)"
    # Pick a BLAS backend: OpenBLAS on non-x86, Intel MKL (LP64 or 32-bit) on x86
    if [[ "$_arch" != x86* ]]; then
        depends+=(openblas)
        _build build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
    else
        if [[ "$_arch" == x86_64 ]]; then
            data_type_model=64lp
        else
            data_type_model=32
        fi
        depends+=(intel-oneapi-mkl)
        _build build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_"$data_type_model" -DCMAKE_C_COMPILER=/opt/intel/oneapi/compiler/2024.1/bin/icx -DCMAKE_CXX_COMPILER=/opt/intel/oneapi/compiler/2024.1/bin/icpx
    fi
    _package build
}

package_llama-cpp-cuda() {
    pkgdesc="${pkgdesc} (with CUDA)"
    depends+=(cuda)
    provides=(llama-cpp)
    conflicts=(llama-cpp)
    _build build-cuda -DGGML_CUDA=ON
    _package build-cuda
}

package_llama-cpp-opencl() {
    pkgdesc="${pkgdesc} (with OpenCL)"
    depends+=(clblast)
    provides=(llama-cpp)
    conflicts=(llama-cpp)
    _build build-opencl -DLLAMA_CLBLAST=ON
    _package build-opencl
}
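As a side note on points 5 and 6 above, the checksums don't need to be computed by hand; a typical approach (assuming pacman-contrib is installed for updpkgsums):

updpkgsums    # rewrites the sha256sums=() array in the PKGBUILD in place
# or, equivalently:
makepkg -g    # prints the integrity checks to paste into the PKGBUILD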
lmat commented on 2024-04-03 13:02 (UTC)
I just tried to build this and got:
curl: (56) The requested URL returned error: 404 ERROR: Failure while downloading http://github.com/ggerganov/llama.cpp/archive/master-c3e53b4.tar.gz
I changed the source to http://github.com/ggerganov/llama.cpp/archive/refs/tags/b2586.tar.gz, and I'm hoping for the best.
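For anyone making the same change in the PKGBUILD, a minimal sketch of the edit (the 'SKIP' checksum is only a placeholder; regenerate the real hash with updpkgsums or makepkg -g before building):

pkgver=b2586
source=("$url/archive/refs/tags/$pkgver.tar.gz")
sha256sums=('SKIP')    # placeholder; replace with the real checksum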
dront78 commented on 2023-09-08 07:51 (UTC) (edited on 2023-09-08 07:52 (UTC) by dront78)
b1198 PKGBUILD
#!/usr/bin/env -S sh -c 'nvchecker -cnvchecker.toml --logger=json | jq -r '\''.version | sub("^v"; "") | split("-") | .[-1]'\'' | xargs -i{} sed -i "s/^\\(pkgver=\\).*/\\1{}/" $0'
# shellcheck shell=bash disable=SC2034,SC2154
# ex: nowrap
# Maintainer: Wu Zhenyu <wuzhenyu@ustc.edu>
_pkgname=llama.cpp
pkgbase=llama-cpp
pkgname=("$pkgbase" "$pkgbase-cuda" "$pkgbase-opencl")
pkgver=b1198
pkgrel=1
pkgdesc="Port of Facebook's LLaMA model in C/C++"
arch=(x86 x86_64 arm aarch64)
url=http://github.com/ggerganov/llama.cpp
depends=(openmpi python-numpy python-sentencepiece)
makedepends=(cmake intel-oneapi-dpcpp-cpp cuda intel-oneapi-mkl clblast)
license=(GPL3)
source=("$url/archive/refs/tags/$pkgver.tar.gz")
sha256sums=('1c9494b2d98f6f32942f5b5ee1b59260384ab9fcc0a12867b23544e08f64bd1b')
_build() {
    cd "$_pkgname-$pkgver" || return 1
    # http://github.com/ggerganov/llama.cpp/pull/2277
    sed -i 's/NOT DepBLAS/NOT DepBLAS_FOUND/' CMakeLists.txt
    cmake -B$1 -DCMAKE_INSTALL_PREFIX=/usr -DLLAMA_MPI=ON -DBUILD_SHARED_LIBS=ON \
        ${*:2:$#}
    cmake --build $1
}

_package() {
    DESTDIR="$pkgdir" cmake --install $1
    mv $pkgdir/usr/bin/main $pkgdir/usr/bin/llama
    mv $pkgdir/usr/bin/server $pkgdir/usr/bin/llama-server
}

package_llama-cpp() {
    local _arch data_type_model
    _arch="$(uname -m)"
    if [[ "$_arch" != x86* ]]; then
        depends+=(openblas)
        _build build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
    else
        if [[ "$_arch" == x86_64 ]]; then
            data_type_model=64lp
        else
            data_type_model=32
        fi
        depends+=(intel-oneapi-mkl)
        _build build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_"$data_type_model" -DCMAKE_C_COMPILER=/opt/intel/oneapi/compiler/latest/linux/bin/icx -DCMAKE_CXX_COMPILER=/opt/intel/oneapi/compiler/latest/linux/bin/icpx
    fi
    _package build
}

package_llama-cpp-cuda() {
    pkgdesc="${pkgdesc} (with CUDA)"
    depends+=(cuda)
    provides=(llama-cpp)
    conflicts=(llama-cpp)
    _build build-cuda -DLLAMA_CUBLAS=ON
    _package build-cuda
}

package_llama-cpp-opencl() {
    pkgdesc="${pkgdesc} (with OpenCL)"
    depends+=(clblast)
    provides=(llama-cpp)
    conflicts=(llama-cpp)
    _build build-opencl -DLLAMA_CLBLAST=ON
    _package build-opencl
}
colobas commented on 2023-09-01 18:09 (UTC)
I used the following patch to get this to build. Using release tags as pkgver.
From 6e47ffdc7baf6fa60fad2d9b3f9b8dc29b3d3ee1 Mon Sep 17 00:00:00 2001
From: Guilherme Pires <guilherme.pires@alleninstitute.org>
Date: Fri, 1 Sep 2023 11:01:50 -0700
Subject: [PATCH] use tags for versioning
---
PKGBUILD | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/PKGBUILD b/PKGBUILD
index fd6d00f..8d867ec 100755
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -5,7 +5,7 @@
 _pkgname=llama.cpp
 pkgbase=llama-cpp
 pkgname=("$pkgbase" "$pkgbase-cuda" "$pkgbase-opencl")
-pkgver=c3e53b4
+pkgver=b1147
 pkgrel=1
 pkgdesc="Port of Facebook's LLaMA model in C/C++"
 arch=(x86 x86_64 arm aarch64)
@@ -13,11 +13,11 @@ url=http://github.com/ggerganov/llama.cpp
 depends=(openmpi python-numpy python-sentencepiece)
 makedepends=(cmake intel-oneapi-dpcpp-cpp cuda intel-oneapi-mkl clblast)
 license=(GPL3)
-source=("$url/archive/master-$pkgver.tar.gz")
-sha256sums=('7bf8a74bd3393b2c96abca17099487dccdd114c6bb5bb59b70daf02efe437606')
+source=("$url/archive/refs/tags/$pkgver.tar.gz")
+sha256sums=('d6e0fbd1e21ca27aef90e71ad62d45ae16696483c4183fa1cfad9deb0da5abec')
 _build() {
- cd "$_pkgname-master-$pkgver" || return 1
+ cd "$_pkgname-$pkgver" || return 1
 # http://github.com/ggerganov/llama.cpp/pull/2277
 sed -i 's/NOT DepBLAS/NOT DepBLAS_FOUND/' CMakeLists.txt
--
2.42.0
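To try this against a local checkout of the packaging repo, one way to apply the patch (the filename here is hypothetical; use whatever you saved the mail as):

git am 0001-use-tags-for-versioning.patch
# or, ignoring the mail headers:
patch -p1 < 0001-use-tags-for-versioning.patch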
sunng commented on 2023-08-07 03:35 (UTC)
Is CUDA required for this OpenCL package?
Pinned Comments
txtsd commented on 2024-10-26 20:14 (UTC) (edited on 2024-12-06 14:14 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip
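Each of these is its own AUR package, so a single variant can be installed on its own; for example, for the Vulkan build (assuming an AUR helper such as paru; a plain git clone plus makepkg -si works too):

paru -S llama.cpp-vulkan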