# llmq

`llmq` is a command-line utility that sends piped input to the OpenAI inference service and returns intelligent responses. It can be used to analyze logs, inspect system processes, or answer general queries using AI models such as GPT-4o-mini.

## Features
- AI-Powered Command Line: Send your command-line output directly to an AI model for analysis and feedback.
- Flexible Input Handling: Works with piped input from any command-line tool.
- Customizable: Configure default models and API keys through a YAML configuration file.
## Prerequisites

- Go: Make sure you have Go installed on your machine (you can verify with the command below).
- API Key: You'll need an API key for the OpenAI service.
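To confirm that Go is installed and on your `PATH`:

```bash
go version
```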
## Installation

```bash
git clone https://github.com/phildougherty/llmq.git
cd llmq
go build
```
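The build produces an `llmq` binary in the repository directory. If you'd rather run it as `llmq` from anywhere instead of `./llmq`, one option is to copy it onto your `PATH` (the destination below is a common convention, not a project requirement):

```bash
sudo install -m 755 llmq /usr/local/bin/llmq
```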
## Configuration

Create a configuration file at `~/.config/llmq.yaml`:

```yaml
default_model: "gpt-4o-mini"
api_key: "YOUR_API_KEY"
log_level: "info"
```

Replace `YOUR_API_KEY` with your actual OpenAI API key.
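Since this file contains your API key, it's worth restricting its permissions so only your user can read it:

```bash
chmod 600 ~/.config/llmq.yaml
```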
## Usage

You can use `llmq` by piping input into it and providing a query:

```bash
echo "toast" | ./llmq --query "What is the best topping for this?"
```
## Examples

Here are some creative and useful ways to integrate `llmq` into your Linux command-line environment:
Quickly scan logs for issues:

```bash
cat /var/log/syslog | ./llmq --query "Does anything look broken in here?"
```

Get a high-level summary of recent git commits:

```bash
git log --oneline | ./llmq --query "Summarize the recent changes in this project"
```

Summarize a large text file to get the key points:

```bash
cat large_text_file.txt | ./llmq --query "What are the key points in this text?"
```

Get suggestions on how to improve or optimize your shell script:

```bash
cat my_script.sh | ./llmq --query "How can I improve this shell script?"
```

Find processes that are consuming a lot of resources:

```bash
ps aux | ./llmq --query "Which processes are consuming the most resources?"
```

Convert a meeting transcript into actionable items:

```bash
cat meeting_transcript.txt | ./llmq --query "Generate a to-do list from this transcript"
```
Get a summary of pending package upgrades (note that `apt-get update` only refreshes package indexes; `apt list --upgradable` lists the packages that can actually be upgraded):

```bash
apt list --upgradable | ./llmq --query "Summarize the pending package updates"
```
## Troubleshooting

If you experience timeout issues, you can increase the `Timeout` value in the `ai/client.go` file:

```go
client := &http.Client{
	// Raise this value (e.g., to 60 * time.Second) if API requests time out.
	Timeout: 30 * time.Second,
}
```
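After changing the timeout, rebuild the binary so the change takes effect:

```bash
go build
```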
## Limitations

Currently, `llmq` only supports OpenAI as the inference service. Ensure you have a valid API key from OpenAI to use this tool.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. If you find a bug or have a feature request, please create an issue on GitHub.
## License

This project is licensed under the MIT License. See the LICENSE file for details.