Swift Package Index

Build Information

Failed to build SwiftLlama, reference main (792181), with Swift 6.2 (beta) for Wasm on 26 Aug 2025 09:40:54 UTC.

Build Command

bash -c docker run --pull=always --rm -v "checkouts-4609320-2":/host -w "$PWD" -e JAVA_HOME="/root/.sdkman/candidates/java/current" -e SPI_BUILD="1" -e SPI_PROCESSING="1" registry.gitlab.com/finestructure/spi-images:wasm-6.2-latest swift build --swift-sdk wasm32-unknown-wasi 2>&1

Build Log

========================================
RunAll
========================================
Builder version: 4.67.1
Interrupt handler set up.
========================================
Checkout
========================================
Clone URL: https://github.com/ShenghaiWang/SwiftLlama.git
Reference: main
Initialized empty Git repository in /host/spi-builder-workspace/.git/
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: 	git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: 	git branch -m <name>
From https://github.com/ShenghaiWang/SwiftLlama
 * branch            main       -> FETCH_HEAD
 * [new branch]      main       -> origin/main
HEAD is now at 7921814 Merge pull request #22 from gpotari/main
Cloned https://github.com/ShenghaiWang/SwiftLlama.git
Revision (git rev-parse @):
792181492beff9edba52314a040032cefb19edd6
SUCCESS checkout https://github.com/ShenghaiWang/SwiftLlama.git at main
========================================
Build
========================================
Selected platform:         wasm
Swift version:             6.2
Building package at path:  $PWD
https://github.com/ShenghaiWang/SwiftLlama.git
Running build ...
bash -c docker run --pull=always --rm -v "checkouts-4609320-2":/host -w "$PWD" -e JAVA_HOME="/root/.sdkman/candidates/java/current" -e SPI_BUILD="1" -e SPI_PROCESSING="1" registry.gitlab.com/finestructure/spi-images:wasm-6.2-latest swift build --swift-sdk wasm32-unknown-wasi 2>&1
wasm-6.2-latest: Pulling from finestructure/spi-images
Digest: sha256:3160178686d03086db4c1712d78c1980537bb37521128c64baade7f466b6b4aa
Status: Image is up to date for registry.gitlab.com/finestructure/spi-images:wasm-6.2-latest
Fetching https://github.com/ggerganov/llama.cpp.git
[1/218368] Fetching llama.cpp
Fetched https://github.com/ggerganov/llama.cpp.git from cache (38.10s)
Creating working copy for https://github.com/ggerganov/llama.cpp.git
Working copy of https://github.com/ggerganov/llama.cpp.git resolved at b6d6c5289f1c9c677657c380591201ddb210b649
Downloading binary artifact https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip
[1371/74944281] Downloading https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip
Downloaded https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip (6.84s)
Building for debugging...
[0/13] Write swift-version-24593BA9C3E375BF.txt
[1/13] Compiling ggml-alloc.c
[2/13] Compiling ggml-backend.cpp
[3/13] Compiling llama-grammar.cpp
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5044:26: error: no member named 'future' in namespace 'std'
 5044 |         std::vector<std::future<std::pair<ggml_tensor *, bool>>> validation_result;
      |                     ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5044:63: error: expected '(' for function-style cast or type construction
 5044 |         std::vector<std::future<std::pair<ggml_tensor *, bool>>> validation_result;
      |                                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
<scratch space>:40:1: note: expanded from here
   40 | >
      | ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5044:64: error: expected unqualified-id
 5044 |         std::vector<std::future<std::pair<ggml_tensor *, bool>>> validation_result;
      |                                                                ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5159:21: error: use of undeclared identifier 'validation_result'
 5159 |                     validation_result.emplace_back(std::async(std::launch::async, [cur, data, n_size] {
      |                     ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5159:68: error: no member named 'launch' in namespace 'std'
 5159 |                     validation_result.emplace_back(std::async(std::launch::async, [cur, data, n_size] {
      |                                                               ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5159:52: error: no member named 'async' in namespace 'std'; did you mean 'fsync'?
 5159 |                     validation_result.emplace_back(std::async(std::launch::async, [cur, data, n_size] {
      |                                                    ^~~~~~~~~~
      |                                                    fsync
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/wasm32-wasi/unistd.h:88:5: note: 'fsync' declared here
   88 | int fsync(int);
      |     ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5185:25: error: use of undeclared identifier 'validation_result'
 5185 |                         validation_result.emplace_back(std::async(std::launch::async, [cur, n_size] {
      |                         ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5185:72: error: no member named 'launch' in namespace 'std'
 5185 |                         validation_result.emplace_back(std::async(std::launch::async, [cur, n_size] {
      |                                                                   ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5185:56: error: no member named 'async' in namespace 'std'; did you mean 'fsync'?
 5185 |                         validation_result.emplace_back(std::async(std::launch::async, [cur, n_size] {
      |                                                        ^~~~~~~~~~
      |                                                        fsync
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/wasm32-wasi/unistd.h:88:5: note: 'fsync' declared here
   88 | int fsync(int);
      |     ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5235:30: error: use of undeclared identifier 'validation_result'; did you mean 'validation_failed'?
 5235 |         for (auto & future : validation_result) {
      |                              ^~~~~~~~~~~~~~~~~
      |                              validation_failed
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5234:14: note: 'validation_failed' declared here
 5234 |         bool validation_failed = false;
      |              ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:5235:28: error: invalid range expression of type 'bool'; no viable 'begin' function available
 5235 |         for (auto & future : validation_result) {
      |                            ^ ~~~~~~~~~~~~~~~~~
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:17889:89: error: no member named 'thread' in namespace 'std'
 17889 |     struct ggml_tensor * tensor, std::vector<no_init<float>> & output, std::vector<std::thread> & workers,
       |                                                                                    ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18254:212: error: no member named 'thread' in namespace 'std'
 18254 | static size_t llama_tensor_quantize_internal(enum ggml_type new_type, const float * f32_data, void * new_data, const int64_t chunk_size, int64_t nrows, int64_t n_per_row, const float * imatrix, std::vector<std::thread> & workers, const int nthread) {
       |                                                                                                                                                                                                               ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18264:10: error: no type named 'mutex' in namespace 'std'
 18264 |     std::mutex mutex;
       |     ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18273:18: error: no template named 'unique_lock' in namespace 'std'; did you mean 'unique_copy'?
 18273 |             std::unique_lock<std::mutex> lock(mutex);
       |             ~~~~~^~~~~~~~~~~
       |                  unique_copy
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/c++/v1/__algorithm/unique_copy.h:102:1: note: 'unique_copy' declared here
  102 | unique_copy(_InputIterator __first, _InputIterator __last, _OutputIterator __result, _BinaryPredicate __pred) {
      | ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18273:35: error: no member named 'mutex' in namespace 'std'
 18273 |             std::unique_lock<std::mutex> lock(mutex);
       |                              ~~~~~^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18281:13: error: use of undeclared identifier 'lock'; did you mean 'clock'?
 18281 |             lock.unlock();
       |             ^~~~
       |             clock
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/wasm32-wasi/time.h:71:9: note: 'clock' declared here
   71 | clock_t clock (void);
      |         ^
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18281:17: error: member reference base type 'clock_t ()' (aka 'long long ()') is not a structure or union
 18281 |             lock.unlock();
       |             ~~~~^~~~~~~
/host/spi-builder-workspace/.build/checkouts/llama.cpp/src/llama.cpp:18290:22: error: no template named 'unique_lock' in namespace 'std'; did you mean 'unique_copy'?
 18290 |                 std::unique_lock<std::mutex> lock(mutex);
       |                 ~~~~~^~~~~~~~~~~
       |                      unique_copy
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/c++/v1/__algorithm/unique_copy.h:102:1: note: 'unique_copy' declared here
  102 | unique_copy(_InputIterator __first, _InputIterator __last, _OutputIterator __result, _BinaryPredicate __pred) {
      | ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[4/13] Compiling llama.cpp
In file included from /host/spi-builder-workspace/.build/checkouts/llama.cpp/ggml/src/ggml.c:29:
/root/.swiftpm/swift-sdks/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm.artifactbundle/swift-6.2-DEVELOPMENT-SNAPSHOT-2025-05-30-a_wasm/wasm32-unknown-wasi/WASI.sdk/include/wasm32-wasi/signal.h:2:2: error: "wasm lacks signal support; to enable minimal signal emulation, compile with -D_WASI_EMULATED_SIGNAL and link with -lwasi-emulated-signal"
    2 | #error "wasm lacks signal support; to enable minimal signal emulation, \
      |  ^
1 error generated.
[4/13] Compiling ggml.c
[4/13] Compiling ggml-aarch64.c
[4/13] Compiling ggml-quants.c
[4/13] Compiling unicode-data.cpp
[4/13] Compiling llama-sampling.cpp
[4/13] Compiling llama-vocab.cpp
[4/13] Compiling unicode.cpp
BUILD FAILURE 6.2 wasm
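For the separate `signal.h` error in ggml.c, the diagnostic itself names the remedy: define `_WASI_EMULATED_SIGNAL` and link `wasi-emulated-signal`. With SwiftPM those could plausibly be passed through as unsafe flags, e.g. (a sketch only; this addresses the signal error but not the missing-threading errors above):

```
swift build --swift-sdk wasm32-unknown-wasi \
    -Xcc -D_WASI_EMULATED_SIGNAL \
    -Xlinker -lwasi-emulated-signal
```

Even with signal emulation enabled, the build would still fail on the `std::thread`/`std::future` errors unless the package pins a llama.cpp revision that supports single-threaded WASI builds.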