
Plugins


Why Plugins

LSP-AI is meant to be used through editor-specific plugins.

By default it provides textDocument/completion capabilities, which is enough for most editors with language server support to get decent LLM-powered auto-completion.
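
For editors without a dedicated plugin, completion is configured entirely through the server's initialization options. Here is a minimal sketch, assuming the same memory/models layout used in the examples below plus a top-level completion section; the model choice and parameter values are illustrative, not defaults:

{
  "memory": {
    "file_store": {}
  },
  "models": {
    "model1": {
      "type": "open_ai",
      "chat_endpoint": "https://api.openai.com/v1/chat/completions",
      "model": "gpt-4o",
      "auth_token_env_var_name": "OPENAI_API_KEY"
    }
  },
  "completion": {
    "model": "model1",
    "parameters": {
      "max_tokens": 64,
      "max_context": 1024
    }
  }
}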

However, to replicate inline completion like VS Code's Copilot plugin, an editor-specific plugin must typically be written.

Note that the goal behind LSP-AI is to provide shared functionality that editor-specific plugins can take advantage of.

Plugins

VS Code

The official VS Code LSP-AI plugin is available on the Visual Studio Code Marketplace.

The LSP-AI project maintains an official VS Code plugin. Functionality is currently limited, but it supports inline completion and a custom command, lsp-ai.generation.
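
Assuming lsp-ai.generation is registered as an ordinary VS Code command, you could bind it to a key in keybindings.json; the key chord below is only an example:

[
  {
    "key": "ctrl+alt+g",
    "command": "lsp-ai.generation"
  }
]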

By default this plugin uses OpenAI with gpt-4o as the backend for completions. For this to work, set the OPENAI_API_KEY environment variable, or use a different configuration as outlined below.

You do not need to configure completion for this plugin, as it does not use the language server's completion capabilities at all. Instead, the plugin uses the custom textDocument/generation request.

Make sure to set the Quick Suggestions: Other option to inline, as depicted below. You may also want to adjust the Quick Suggestions Delay option.

[Screenshot: the Quick Suggestions settings to edit]
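
In settings.json these correspond to the following entries (the delay value here is only an example):

{
  "editor.quickSuggestions": {
    "other": "inline"
  },
  "editor.quickSuggestionsDelay": 10
}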

Configuration Examples

By default the VS Code plugin uses OpenAI and gpt-4o for completions. This is completely customizable. Here is an example of changing the default prompts used with gpt-4o; the {CODE} placeholder in the user message is replaced with code from the current document.

{
  "lsp-ai.serverConfiguration": {
    "memory": {
      "file_store": {}
    },
    "models": {
      "model1": {
        "type": "open_ai",
        "chat_endpoint": "https://api.openai.com/v1/chat/completions",
        "model": "gpt-4o",
        "auth_token_env_var_name": "OPENAI_API_KEY"
      }
    }
  },
  "lsp-ai.generationConfiguration": {
    "model": "model1",
    "parameters": {
      "max_tokens": 128,
      "max_context": 1024,
      "messages": [
        {
          "role": "system",
          "content": "SOME CUSTOM SYSTEM MESSAGE"
        },
        {
          "role": "user",
          "content": "SOME CUSTOM USER MESSAGE WITH THE {CODE}"
        }
      ]
    }
  },
  "lsp-ai.inlineCompletionConfiguration": {
    "maxCompletionsPerSecond": 1
  }
}

Here is an example using llama.cpp as the backend, with a fill-in-the-middle (FIM) prompt template for stable-code-3b.

{
  "lsp-ai.serverConfiguration": {
    "memory": {
      "file_store": {}
    },
    "models": {
      "model1": {
        "type": "llama_cpp",
        "repository": "stabilityai/stable-code-3b",
        "name": "stable-code-3b-Q5_K_M.gguf",
        "n_ctx": 2048
      }
    }
  },
  "lsp-ai.generationConfiguration": {
    "model": "model1",
    "parameters": {
      "fim": {
        "start": "<fim_prefix>",
        "middle": "<fim_suffix>",
        "end": "<fim_middle>"
      },
      "max_context": 2000,
      "max_new_tokens": 32
    }
  },
  "lsp-ai.inlineCompletionConfiguration": {
    "maxCompletionsPerSecond": 1
  }
}

Here is an example using Mistral AI's FIM endpoint with Codestral. Note that it expects the MISTRAL_API_KEY environment variable to be set.

{
  "lsp-ai.serverConfiguration": {
    "memory": {
      "file_store": {}
    },
    "models": {
      "model1": {
        "type": "mistral_fim",
        "fim_endpoint": "https://api.mistral.ai/v1/fim/completions",
        "model": "codestral-latest",
        "auth_token_env_var_name": "MISTRAL_API_KEY"
      }
    }
  },
  "lsp-ai.generationConfiguration": {
    "model": "model1",
    "parameters": {
      "max_tokens": 32
    }
  },
  "lsp-ai.inlineCompletionConfiguration": {
    "maxCompletionsPerSecond": 1
  }
}