Source: llama.vim
Section: editors
Priority: optional
Maintainer: Debian Deep Learning Team <debian-ai@lists.debian.org>
Uploaders: Christian Kastner <ckk@debian.org>,
           Mo Zhou <lumin@debian.org>
Standards-Version: 4.7.2
Vcs-Browser: https://salsa.debian.org/deeplearning-team/llama.vim
Vcs-Git: https://salsa.debian.org/deeplearning-team/llama.vim.git
Homepage: https://github.com/ggml-org/llama.vim
Build-Depends: debhelper-compat (= 13), dh-sequence-vim-addon
Rules-Requires-Root: no

Package: vim-llama.cpp
Architecture: all
Depends: llama.cpp-tools,
         vim (>= 9.1~),
         ${misc:Depends},
         ${vim-addon:Depends},
Description: Local LLM-assisted text completion for Vim
 llama.vim is a Vim plugin for LLM-assisted text completion using
 llama.cpp.
 .
 To be useful, the plugin requires a model: either provide your own,
 or download an LLM from https://huggingface.co.
 .
 Features:
 .
  * Auto-suggest on cursor movement in Insert mode
  * Toggle the suggestion manually by pressing Ctrl+F
  * Accept a suggestion with Tab
  * Accept the first line of a suggestion with Shift+Tab
  * Control max text generation time
  * Configure scope of context around the cursor
  * Ring context with chunks from open and edited files and yanked text
  * Supports very large contexts even on low-end hardware via smart context
    reuse
  * Speculative FIM support
  * Speculative Decoding support
  * Display performance stats
