feat(godcontext): added godcontext to gracefully end all contexts before quitting
MohammadBnei committed Mar 13, 2024
1 parent 660fe6a commit e2903a1
Showing 32 changed files with 611 additions and 270 deletions.
71 changes: 38 additions & 33 deletions README.md
@@ -1,71 +1,76 @@
# go-ai-cli

`go-ai-cli` is a versatile command-line interface that enables users to interact with various AI models for text generation, speech-to-text conversion, image generation, and web scraping. It is designed to be extensible with any AI service, and works well with Ollama. This tool is ideal for developers, content creators, and anyone interested in leveraging AI capabilities directly from their terminal.

## Features

- **Text Generation**: Utilize OpenAI's GPT models for generating text based on prompts.
- **Speech to Text**: Convert spoken language into text.
- **Image Generation**: Create images from textual descriptions.
- **Modular Design**: Easily extendable to incorporate additional AI models and services.

## Installation

### Using Go

```sh
go install github.com/MohammadBnei/go-ai-cli@latest
```

With portaudio support (recommended):

```sh
go install -tags portaudio github.com/MohammadBnei/go-ai-cli@latest
```

### Pre-compiled Binaries

Download the binary matching your operating system from the [releases page](https://github.com/MohammadBnei/go-ai-cli/releases/).

## Configuration

Before using `go-ai-cli`, configure it with your OpenAI API key:

```sh
go-ai-cli config --OPENAI_KEY=<YOUR_API_KEY>
```

You can also specify the AI model to use:

```sh
go-ai-cli config --model=<MODEL_NAME>
```

To list available models:

```sh
go-ai-cli config -l
```

## Usage

To start the interactive prompt:

```sh
go-ai-cli prompt
```

Within the prompt, you have several commands at your disposal:

- `ctrl+d`: Quit the application.
- `ctrl+h`: Display help information.
- `ctrl+g`: Open the options page.
- `ctrl+f`: Add a file to the messages. The file content won't be sent to the model until you submit a prompt.

## Advanced Configuration

The configuration file is located at `$HOME/.go-ai-cli.yaml`. You can customize various settings there, including the default AI model and API keys for different services.

## Contributing

Contributions are welcome! Please fork the repository, make your changes, and submit a pull request. Ensure your code follows the [Go style guide](https://golang.org/doc/effective_go.html).

## License

`go-ai-cli` is open-source software licensed under the [MIT License](https://opensource.org/licenses/MIT).
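The advanced-configuration note above never shows what `$HOME/.go-ai-cli.yaml` looks like. A hypothetical sketch, assuming the keys mirror the `config` flags; the real schema may differ:

```yaml
# Hypothetical ~/.go-ai-cli.yaml — key names mirror the CLI flags and
# are assumptions, not the documented schema.
OPENAI_KEY: <YOUR_API_KEY>
model: <MODEL_NAME>
```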
7 changes: 4 additions & 3 deletions api/agent/agent_test.go
@@ -4,12 +4,13 @@ import (
	"context"
	"testing"

	"github.com/spf13/viper"
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/openai"

	"github.com/MohammadBnei/go-ai-cli/api"
	"github.com/MohammadBnei/go-ai-cli/api/agent"
	"github.com/MohammadBnei/go-ai-cli/config"
)

func TestWebSearchAgent(t *testing.T) {
102 changes: 102 additions & 0 deletions api/agent/system_generator.go
@@ -0,0 +1,102 @@
package agent

import (
	"context"

	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/prompts"
	"github.com/tmc/langchaingo/tools"
	"github.com/tmc/langchaingo/tools/wikipedia"

	"github.com/MohammadBnei/go-ai-cli/api"
)

// UserExchangeChans carries the agent's questions (In) and the user's
// answers (Out) between the executor and the caller.
type UserExchangeChans struct {
	Out chan string
	In  chan string
}

func NewSystemGeneratorExecutor(sgc *UserExchangeChans) (*agents.OpenAIFunctionsAgent, error) {
	llm, err := api.GetLlmModel()
	if err != nil {
		return nil, err
	}

	wikiTool := wikipedia.New(RandomUserAgent())

	t := []tools.Tool{
		wikiTool,
	}

	if sgc != nil {
		// The caller owns the channels and must close them once the
		// executor is done; closing them here with defer would make
		// ExchangeWithUser.Call panic on a closed channel.
		t = append(t, NewExchangeWithUser(sgc))
	}

	promptTemplate := prompts.NewPromptTemplate(SystemGeneratorPrompt, []string{
		"input",
	})

	executor := agents.NewOpenAIFunctionsAgent(llm, t,
		agents.WithPrompt(promptTemplate),
		agents.WithReturnIntermediateSteps(),
	)
	return executor, nil
}

type ExchangeWithUser struct {
	exchangeChannels *UserExchangeChans
}

func NewExchangeWithUser(sgc *UserExchangeChans) *ExchangeWithUser {
	return &ExchangeWithUser{
		exchangeChannels: sgc,
	}
}

// Call forwards the model's question to the user and blocks until an
// answer comes back on the Out channel.
func (e *ExchangeWithUser) Call(ctx context.Context, input string) (string, error) {
	e.exchangeChannels.In <- input
	return <-e.exchangeChannels.Out, nil
}

func (e *ExchangeWithUser) Name() string {
	return "Exchange With User"
}

func (e *ExchangeWithUser) Description() string {
	return "Exchange With User is a tool that lets the model ask the user a question or request a clarification and receive their response"
}
var SystemGeneratorPrompt = `
Your task is to assist users in crafting detailed and effective system prompts that leverage the full capabilities of large language models like GPT. Follow these guidelines meticulously to ensure each generated prompt is tailored, insightful, and maximizes user engagement:
1. Interpret User Input with Detail: Begin by analyzing the user's request. Pay close attention to the details provided to ensure a deep understanding of their needs. Encourage users to include specific details in their queries to get more relevant answers.
2. Persona Adoption: Based on the user's request, adopt a suitable persona for responding. This could range from a scholarly persona for academic inquiries to a more casual tone for creative brainstorming sessions.
3. Use of Delimiters: In your generated prompts, instruct users on the use of delimiters (like triple quotes or XML tags) to clearly separate different parts of their input. This helps in maintaining clarity, especially in complex requests.
4. Step-by-Step Instructions: Break down tasks into clear, actionable steps. Provide users with a structured approach to completing their tasks, ensuring each step is concise and directly contributes to the overall goal.
5. Incorporate Examples: Wherever possible, include examples in your prompts. This could be examples of how to structure their request, or examples of similar queries and their outcomes.
6. Reference Text Usage: Instruct users to provide reference texts when their queries relate to specific information or topics. Guide them on how to ask the model to use these texts to construct answers, ensuring responses are grounded in relevant content.
7. Citations from Reference Texts: Encourage users to request citations from reference texts for answers that require factual accuracy. This enhances the reliability of the information provided.
8. Intent Classification: Utilize intent classification to identify the most relevant instructions or responses to a user's query. This ensures that the generated prompts are highly targeted and effective.
9. Dialogue Summarization: For long conversations or documents, instruct users on how to ask for summaries or filtered dialogue. This helps in maintaining focus and relevance over extended interactions.
10. Recursive Summarization: Teach users to request piecewise summarization for long documents, constructing a full summary recursively. This method is particularly useful for digesting large volumes of text.
11. Solution Development: Encourage users to ask the model to 'think aloud' or work out its own solution before providing a final answer. This process helps in revealing the model's reasoning and ensures more accurate outcomes.
12. Inner Monologue: Instruct users on how to request the model to use an inner monologue or a sequence of queries for complex problem-solving. This hides the model's reasoning process from the user, making the final response more concise.
13. Review for Omissions: Finally, remind users they can ask the model if it missed anything on previous passes. This ensures comprehensive coverage of the topic at hand.
By following these guidelines, you will generate system prompts that are not only highly effective but also enhance the user's ability to engage with the model meaningfully. Remember, the goal is to empower users to craft queries that are detailed, structured, and yield the most insightful responses possible.
When finished, respond only with the system prompt and nothing else.
`
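The channel-based tool above only works if some goroutine is servicing `In` and `Out`; otherwise `Call` blocks forever. A minimal, self-contained sketch of that round trip (the `roundTrip` helper and the fixed reply are illustrative, not part of the package):

```go
package main

import "fmt"

// UserExchangeChans mirrors the struct above: the agent writes its
// question to In and blocks until the user's answer arrives on Out.
type UserExchangeChans struct {
	Out chan string
	In  chan string
}

// roundTrip performs one exchange, the same dance ExchangeWithUser.Call does.
func roundTrip(ex *UserExchangeChans, question string) string {
	ex.In <- question
	return <-ex.Out
}

func main() {
	ex := &UserExchangeChans{In: make(chan string), Out: make(chan string)}

	// The caller must service the channels in another goroutine, or
	// roundTrip deadlocks. Here every question gets a fixed reply.
	go func() {
		for question := range ex.In {
			fmt.Println("agent asks:", question)
			ex.Out <- "target Go"
		}
	}()

	fmt.Println("agent received:", roundTrip(ex, "Which language should the prompt target?"))
	close(ex.In)
}
```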
52 changes: 52 additions & 0 deletions api/agent/system_generator_test.go
@@ -0,0 +1,52 @@
package agent_test

import (
	"context"
	"fmt"
	"testing"

	"github.com/spf13/viper"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/schema"

	"github.com/MohammadBnei/go-ai-cli/api"
	"github.com/MohammadBnei/go-ai-cli/api/agent"
	"github.com/MohammadBnei/go-ai-cli/config"
)

func TestSystemGenerator(t *testing.T) {
	viper.Set(config.AI_API_TYPE, api.API_OPENAI)
	viper.Set(config.AI_MODEL_NAME, "gpt-4-turbo-preview")
	viper.BindEnv(config.AI_OPENAI_KEY, "OPENAI_API_KEY")

	scg := &agent.UserExchangeChans{
		In:  make(chan string),
		Out: make(chan string),
	}

	go func() {
		for input := range scg.In {
			res := ""
			t.Log("Input: " + input)
			// Scanln needs a pointer; passing res by value would never
			// fill in the user's answer.
			fmt.Scanln(&res)
			scg.Out <- res
		}
	}()

	t.Log("TestSystemGenerator")
	executor, err := agent.NewSystemGeneratorExecutor(scg)
	if err != nil {
		t.Fatal(err)
	}
	t.Log("Created executor")

	result, err := executor.LLM.GenerateContent(context.Background(), []llms.MessageContent{
		llms.TextParts(schema.ChatMessageTypeSystem, agent.SystemGeneratorPrompt),
		llms.TextParts(schema.ChatMessageTypeHuman, "Create a system prompt for golang code generation."),
	})
	if err != nil {
		t.Fatal(err)
	}

	t.Logf("Result: %s", result.Choices[0].Content)
}
