nf-core/llamacpppython/run @ 0.0.0-3a0ac08
Summary
Python wrapper for running a locally hosted LLM with llama.cpp
Get started
Add the following snippet to your workflow script to include this module.
include { LLAMACPPPYTHON_RUN } from 'nf-core/llamacpppython/run'
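As a minimal sketch, the module could then be invoked from a workflow like the one below. The meta map keys and file paths are illustrative assumptions, not part of the module definition:

```groovy
include { LLAMACPPPYTHON_RUN } from 'nf-core/llamacpppython/run'

workflow {
    // Hypothetical input tuple: [ val(meta), path(prompt_file), path(gguf_model) ]
    ch_input = Channel.of(
        [ [ id: 'sample1' ], file('prompts/prompt.txt'), file('models/model.gguf') ]
    )
    LLAMACPPPYTHON_RUN(ch_input)
}
```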
License
MIT License
Process

| Name |
|---|
| LLAMACPPPYTHON_RUN |
Input
1 channel

#1 (tuple)

| Name | Type | Description |
|---|---|---|
| meta | map | Groovy Map containing sample information |
| prompt_file | file | Prompt file. Structure: [ val(meta), path(prompt_file) ] |
| gguf_model | file | GGUF model. Structure: [ val(meta), path(gguf_model) ] |
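The single input channel above can be built from literal values or a samplesheet; a minimal sketch, where the meta keys and file paths are placeholders:

```groovy
// Build the input channel as [ val(meta), path(prompt_file), path(gguf_model) ].
// 'id' and the paths below are hypothetical examples.
ch_input = Channel.of(
    [ [ id: 'test_llm' ], file('data/question.txt'), file('models/tinyllama.gguf') ]
)
```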
Output
2 channels

#1 output (tuple)

| Name | Type | Description |
|---|---|---|
| meta | map | Groovy Map containing sample information |
| ${prefix}.txt | file | File with the output of the LLM inference request |

#2 versions_llama_cpp_python

| Name | Type | Description |
|---|---|---|
| versions.yml | file | File containing software versions |
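Assuming the emit names match the channel names listed above (`output` and `versions_llama_cpp_python`), the results could be consumed like this:

```groovy
LLAMACPPPYTHON_RUN(ch_input)

// Inspect the [ meta, ${prefix}.txt ] tuples from the inference output.
LLAMACPPPYTHON_RUN.out.output.view { meta, txt -> "${meta.id}: ${txt}" }

// Collect version information for pipeline-level reporting.
ch_versions = LLAMACPPPYTHON_RUN.out.versions_llama_cpp_python
```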
| Tool | Description | Homepage |
|---|---|---|
| llama-cpp-python | Python wrapper for llama.cpp LLM inference tool | https://llama-cpp-python.readthedocs.io/en/latest/ |
| Version | 0.0.0-3a0ac08 |
|---|---|
| Commit ID | 4c44634073c8b785e34735a3bd1b3c89123bc34b |
| Release Date | 07 May 2026 15:01:00 (UTC) |
| Download URL | https://registry.nextflow.io/api/v1/modules/nf-core%2Fllamacpppython%2Frun/0.0.0-3a0ac08/download |
| OCI Store URL | https://public.cr.seqera.io/v2/nextflow/plugin/modules/nf-core/llamacpppython/run/blobs/sha256:1306f80794844bac31fd7865e71fb388bfdaf972d7d5c30e74bad0c0604eca31 |
| Size | 5.0 KB |
| Checksum | sha256:1306f80794844bac31fd7865e71fb388bfdaf972d7d5c30e74bad0c0604eca31 |
| Downloads | 1 |
| Version | Date | Status | Downloads | Size | Diff |
|---|---|---|---|---|---|
| 0.0.0-4c44634 | 08 May 2026 15:00:41 (UTC) | | 3 | 5.0 KB | ↔ |
| 0.0.0-3a0ac08 | 07 May 2026 15:01:00 (UTC) | | 1 | 5.0 KB | - |