feat: improve logging (#34)
* feat: improve logging

* feat: bump to `0.4.0`

* docs: add features section in README
McPatate authored Oct 18, 2023
1 parent fdd55dd commit 4aacd70
Showing 6 changed files with 200 additions and 75 deletions.
15 changes: 14 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

31 changes: 30 additions & 1 deletion README.md
@@ -3,7 +3,27 @@
> [!IMPORTANT]
> This is currently a work in progress, expect things to be broken!
-**llm-ls** is a LSP server leveraging LLMs for code completion (and more?).
+**llm-ls** is an LSP server leveraging LLMs to make your development experience smoother and more efficient.

The goal of llm-ls is to provide a common platform for IDE extensions to be built on. llm-ls takes care of the heavy lifting of interacting with LLMs so that extension code can be as lightweight as possible.

## Features

### Prompt

Uses the current file as context to generate the prompt. Can use "fill in the middle" or not, depending on your needs.

It also tokenizes the prompt to make sure the request stays within the model's context window.
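
As an illustration of that check, here is a minimal sketch using the `tokenizers` crate that **llm-ls** already depends on; the function and its exact logic are assumptions, not the actual implementation:

```rust
// A minimal sketch, not llm-ls's actual code: count the prompt's tokens
// and compare against the model's context window.
use tokenizers::Tokenizer;

fn fits_context_window(tokenizer: &Tokenizer, prompt: &str, context_window: usize) -> bool {
    tokenizer
        .encode(prompt, false) // no special tokens, just the raw prompt
        .map(|encoding| encoding.get_ids().len() <= context_window)
        .unwrap_or(false)
}
```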

### Telemetry

Gathers information about requests and completions that can be used for model retraining.

Note that **llm-ls** does not export any data anywhere (other than setting a user agent when querying the model API); everything is stored in a log file if you set the log level to `info`.
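
As a sketch of the kind of record such a log line could hold (the field names below are assumptions, not the actual schema):

```rust
// Hypothetical record shape for one completion event; llm-ls's real
// schema may differ.
use serde::Serialize;

#[derive(Serialize)]
struct CompletionEvent<'a> {
    request_id: &'a str,
    prompt: &'a str,
    completion: &'a str,
}

// Serializing one JSON object per line keeps the log easy to parse later,
// e.g. when assembling a retraining dataset.
fn to_log_line(event: &CompletionEvent) -> serde_json::Result<String> {
    serde_json::to_string(event)
}
```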

### Completion

**llm-ls** parses the AST of the code to determine whether completions should be multi-line, single-line, or empty (no completion).
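
A rough sketch of such a heuristic with `tree-sitter` (the node-kind rules below are illustrative assumptions, not the actual logic; grammars name their nodes differently):

```rust
use tree_sitter::{Point, Tree};

enum CompletionKind {
    MultiLine,
    SingleLine,
    Empty,
}

fn completion_kind(tree: &Tree, cursor: Point) -> CompletionKind {
    match tree.root_node().descendant_for_point_range(cursor, cursor) {
        // No code should be suggested inside a comment.
        Some(node) if node.kind().contains("comment") => CompletionKind::Empty,
        // An empty block leaves room for a multi-line suggestion.
        Some(node) if node.kind() == "block" && node.named_child_count() == 0 => {
            CompletionKind::MultiLine
        }
        // Otherwise, complete the current line only.
        Some(_) => CompletionKind::SingleLine,
        None => CompletionKind::Empty,
    }
}
```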

## Compatible extensions

@@ -12,3 +12,32 @@
- [x] [llm-intellij](https://github.com/huggingface/llm-intellij)
- [ ] [jupytercoder](https://github.com/bigcode-project/jupytercoder)

## Roadmap

- support getting context from multiple files in the workspace
- add a `suffix_percent` setting that determines the ratio of tokens allocated to the prefix vs. the suffix of the prompt
- add a context window fill percentage, or change `context_window` to `max_tokens`
- filter out bad suggestions (repetitive, identical to the code below, etc.)
- support for ollama
- support for llama.cpp
- OTLP traces?
10 changes: 9 additions & 1 deletion crates/llm-ls/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "llm-ls"
version = "0.3.0"
version = "0.4.0"
edition = "2021"

[[bin]]
@@ -11,6 +11,7 @@ home = "0.5"
ropey = "1.6"
reqwest = { version = "0.11", default-features = false, features = ["json", "rustls-tls"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokenizers = { version = "0.13", default-features = false, features = ["onig"] }
tokio = { version = "1", features = ["fs", "io-std", "io-util", "macros", "rt-multi-thread"] }
tower-lsp = "0.20"
@@ -40,3 +41,10 @@ tree-sitter-scala = "0.20"
tree-sitter-swift = "0.3"
tree-sitter-typescript = "0.20"

[dependencies.uuid]
version = "1.4"
features = [
"v4",
"fast-rng",
"serde",
]
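
For context, a hypothetical use of those features: `v4` enables random ids, `fast-rng` speeds up their generation, and `serde` lets the id serialize straight into JSON log records. The struct below is illustrative, not llm-ls's code.

```rust
use serde::Serialize;
use uuid::Uuid;

#[derive(Serialize)]
struct RequestRecord {
    // Uuid implements Serialize thanks to the "serde" feature.
    request_id: Uuid,
}

fn tag_request() -> RequestRecord {
    RequestRecord {
        request_id: Uuid::new_v4(), // "v4" feature; "fast-rng" makes this faster
    }
}
```
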
3 changes: 1 addition & 2 deletions crates/llm-ls/src/document.rs
@@ -166,8 +166,7 @@ fn get_parser(language_id: LanguageId) -> Result<Parser> {
}

pub(crate) struct Document {
-#[allow(dead_code)]
-language_id: LanguageId,
+pub(crate) language_id: LanguageId,
pub(crate) text: Rope,
parser: Parser,
pub(crate) tree: Option<Tree>,
3 changes: 2 additions & 1 deletion crates/llm-ls/src/language_id.rs
@@ -1,6 +1,7 @@
use serde::{Deserialize, Serialize};
use std::fmt;

-#[derive(Clone, Copy)]
+#[derive(Clone, Copy, Serialize, Deserialize)]
pub(crate) enum LanguageId {
Bash,
C,
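
With the new derives, a fieldless enum like `LanguageId` round-trips through JSON as its variant name, which is what lets it appear in serialized log records. A standalone sketch of that behavior (assuming `serde_json`, which this commit adds):

```rust
// Standalone illustration, not the crate's code: serde serializes a
// fieldless enum variant as its name.
use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
enum LanguageId {
    Bash,
    C,
    Rust,
}

fn main() {
    let json = serde_json::to_string(&LanguageId::Rust).unwrap();
    assert_eq!(json, "\"Rust\"");
    let back: LanguageId = serde_json::from_str(&json).unwrap();
    assert_eq!(back, LanguageId::Rust);
}
```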
