Nextflow Modules

Showing module(s) with keyword "llama"

Module: nf-core/llamacpppython/run
Keywords: inference, llama, llm, local-inference, offline-llm
Description: Python wrapper for running locally-hosted LLM with llama.cpp
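A module like this is typically pulled into a pipeline with the nf-core tools CLI and then included in a workflow. A minimal sketch, assuming the standard nf-core module layout and naming convention (the process name `LLAMACPPPYTHON_RUN`, the install path, and the input channel `ch_input` are assumptions, not taken from this listing):

```nextflow
// Hypothetical DSL2 snippet: include the installed module and call it.
// Install first with the nf-core tools CLI, e.g.:
//   nf-core modules install llamacpppython/run
include { LLAMACPPPYTHON_RUN } from './modules/nf-core/llamacpppython/run/main'

workflow {
    // ch_input is a placeholder channel of inputs expected by the module
    LLAMACPPPYTHON_RUN(ch_input)
}
```

The exact process inputs and outputs are defined in the module's `main.nf` and `meta.yml`; consult those files after installation rather than relying on this sketch.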