Installation
Get Chat Linux Client up and running on your Linux system.
System Requirements
- Python 3.8 or higher
- Linux (Ubuntu 20.04+, Fedora 35+, Arch Linux)
- 4GB RAM minimum (8GB recommended)
- 500MB free storage
- Ollama (optional, for local models)
Quick Installation
```shell
git clone https://github.com/AutoBotSolutions/AI-Chat-Linux-Client.git
cd AI-Chat-Linux-Client
./scripts/install.sh
./scripts/run.sh
```
Manual Installation
```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python main.py
```
Installing Ollama (Optional)
For offline AI support with local models:
```shell
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.2:1b
ollama pull mistral:7b
```
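Ollama serves a local HTTP API on port 11434; querying its `/api/tags` endpoint is a quick way to confirm the server is running and see which models have been pulled. A minimal sketch using only the standard library (the `list_ollama_models` helper is illustrative, not part of the application):

```python
import json
import urllib.error
import urllib.request

def list_ollama_models(base_url="http://localhost:11434"):
    """Return names of locally pulled Ollama models, or None if the
    Ollama server is not reachable at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = list_ollama_models()
if models is None:
    print("Ollama is not running -- start it with `ollama serve`")
else:
    print("Pulled models:", models)
```

Returning `None` instead of raising lets callers distinguish "server down" from "server up with no models".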
Configuration
Configure API keys, local models, and application settings.
API Keys
Add API keys through the application settings or environment variables:
```shell
GROQ_API_KEY=gsk_your_key_here
HUGGINGFACE_API_KEY=hf_your_key_here
OPENROUTER_API_KEY=sk-or-your_key_here
OPENAI_API_KEY=sk-your_key_here
```
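Keys set in the environment can be read with Python's standard `os.environ`. A hedged sketch of a lookup that prefers the environment and falls back to stored settings (`resolve_api_key` and the settings dict are illustrative, not the application's actual key handler):

```python
import os

def resolve_api_key(env_var, settings=None):
    """Prefer an environment variable; fall back to a settings mapping."""
    key = os.environ.get(env_var)
    if key:
        return key
    return (settings or {}).get(env_var)

os.environ["GROQ_API_KEY"] = "gsk_example"  # illustrative value
print(resolve_api_key("GROQ_API_KEY"))
print(resolve_api_key("DEMO_API_KEY", {"DEMO_API_KEY": "from-settings"}))
```

Checking the environment first lets users override a stored key per-session without editing the config file.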
Configuration File
Configuration is stored at ~/.config/chat-linux-client/config.json:
```json
{
  "providers": {
    "groq": {
      "enabled": true,
      "api_key": "your_api_key",
      "base_url": "https://api.groq.com/openai/v1"
    },
    "ollama": {
      "enabled": true,
      "base_url": "http://localhost:11434"
    }
  },
  "chat": {
    "temperature": 0.7,
    "max_tokens": null,
    "routing_strategy": "offline_first"
  }
}
```
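Reading this file from Python is a plain `json.load`; the sketch below layers user values over defaults so missing keys keep sensible values (the `DEFAULTS` dict and `load_config` helper are illustrative, not the application's actual loader):

```python
import json
from pathlib import Path

# Illustrative defaults mirroring the "chat" section above.
DEFAULTS = {
    "chat": {"temperature": 0.7, "max_tokens": None,
             "routing_strategy": "offline_first"},
}

def load_config(path):
    """Merge a user's config.json over defaults, one nesting level deep."""
    config = {section: dict(values) for section, values in DEFAULTS.items()}
    p = Path(path).expanduser()
    if p.exists():
        user = json.loads(p.read_text())
        for section, values in user.items():
            config.setdefault(section, {}).update(values)
    return config

cfg = load_config("~/.config/chat-linux-client/config.json")
print(cfg["chat"]["routing_strategy"])
```

Merging over defaults means a user can set only the keys they care about; everything else stays at its shipped value.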
Usage
Learn how to use Chat Linux Client effectively.
Basic Chat
- Launch the application
- Select a model from the dropdown
- Type your message
- Press Enter or click Send
Model Selection
Choose models based on your needs:
- For speed: Groq or lightweight Ollama models
- For quality: GPT-4 or capable Ollama models
- For privacy: Ollama local models
- For cost: Free tier providers or local models
Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| Enter | Send message |
| Ctrl+N | New chat |
| Ctrl+H | Open history |
| Ctrl+, | Open settings |
API Providers
Learn about supported AI providers and how to use them.
Ollama (Local)
Run AI models locally with complete privacy. No API costs, works offline.
Available Models: Llama3.2, Qwen2.5, Phi3.5, Mistral, CodeLlama
Local • Free • Private
Groq
Ultra-low latency inference using LPU technology. Generous free tier.
Available Models: Llama3-8B, Mixtral-8x7B, Gemma-7B
Cloud • Fast • Free Tier
HuggingFace
Access thousands of open-source models. Free tier available.
Available Models: Qwen2.5, Mistral-7B, DeepSeek-R1, DialoGPT
Cloud • Open Source • Free Tier
OpenRouter
Access multiple models from various providers through one API.
Available Models: GPT-4, Claude-3, GPT-3.5
Cloud • Multi-Model • Pay-per-use
OpenAI
State-of-the-art GPT models with excellent quality.
Available Models: GPT-4o, GPT-4-Turbo, GPT-3.5
Cloud • Premium • Pay-per-use
Development Setup
Set up your development environment for contributing.
Prerequisites
- Python 3.8 or higher
- Git
- Virtual environment
Setup
```shell
git clone https://github.com/AutoBotSolutions/AI-Chat-Linux-Client.git
cd AI-Chat-Linux-Client
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install pytest pytest-qt black flake8 mypy
```
Running Tests
```shell
pytest tests/           # run the full test suite
pytest --cov=. tests/   # run with coverage reporting
pytest -v tests/        # run with verbose output
```
Testing
Comprehensive testing guide for the project.
Test Structure
Tests are organized by module in the tests/ directory:
- test_api_client.py - API client tests
- test_provider_router.py - Routing tests
- test_key_handler.py - Key storage tests
- test_main_window.py - UI tests
Writing Tests
Use pytest with fixtures for clean test code:
```python
import pytest

from core.groq_client import GroqClient

@pytest.fixture
def client():
    return GroqClient(api_key="test_key")

def test_initialization(client):
    assert client.api_key == "test_key"
```
Architecture
System architecture and design patterns.
Layered Architecture
- UI Layer: PyQt6 desktop interface
- Routing Layer: Intelligent model selection
- Provider Layer: AI provider implementations
- Storage Layer: Configuration and history
- Utility Layer: Helper functions
Key Components
- core/ - AI provider logic
- ui/ - User interface
- storage/ - Data persistence
- utils/ - Utility modules
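The routing layer's `offline_first` strategy can be pictured as: prefer an enabled local provider, otherwise fall back to the first enabled cloud provider. A simplified sketch (the `pick_provider` helper is illustrative; the real router lives in core/):

```python
# Providers that run on the local machine rather than in the cloud.
LOCAL_PROVIDERS = {"ollama"}

def pick_provider(enabled, strategy="offline_first"):
    """Pick a provider name from an ordered list of enabled providers."""
    if strategy == "offline_first":
        for name in enabled:
            if name in LOCAL_PROVIDERS:
                return name  # a local provider is available: use it
    return enabled[0] if enabled else None  # otherwise first enabled one

print(pick_provider(["groq", "ollama"]))  # local provider wins
print(pick_provider(["groq", "openai"]))  # no local provider: first cloud
```

Keeping the strategy a plain function over an ordered list makes it easy to add strategies (e.g. cheapest-first) without touching provider code.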
Contributing
How to contribute to Chat Linux Client.
Contribution Guidelines
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Ensure all tests pass
- Submit a pull request
Code Style
We follow standard Python conventions: code is formatted with Black, linted with flake8, and type-checked with mypy (all three are installed in the development setup).
Security
Security features and best practices.
Security Features
- No telemetry or analytics
- Encrypted API key storage
- Optional chat encryption
- HTTPS-only communications
- Local-first data storage
Reporting Vulnerabilities
For security issues, email: security@example.com
Do NOT open public issues for security vulnerabilities.
Troubleshooting
Common issues and solutions.
Application Won't Start
Run system checks:
```shell
python main.py --check-system
```
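The idea behind a `--check-system` flag is to surface missing prerequisites before the UI starts; its checks can be approximated with a few lines of standard-library Python (the specific checks below are illustrative, not the application's actual diagnostics):

```python
import shutil
import sys
from pathlib import Path

def system_checks():
    """Return (description, passed) pairs for a few basic prerequisites."""
    return [
        ("Python >= 3.8", sys.version_info >= (3, 8)),
        ("config dir exists",
         Path("~/.config/chat-linux-client").expanduser().is_dir()),
        ("ollama binary on PATH", shutil.which("ollama") is not None),
    ]

for description, passed in system_checks():
    print(f"{'OK  ' if passed else 'FAIL'} {description}")
```

A failed "ollama binary on PATH" check is expected if you only use cloud providers.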
API Key Not Working
- Verify the key is correct
- Check provider account is active
- Ensure key has proper permissions
- Try regenerating the key
Models Not Showing
- Verify provider is enabled
- Check API key is configured
- Ensure Ollama is running (for local models)
FAQ
Frequently asked questions.
Is Chat Linux Client free?
Yes! It's open source and free. Local models are completely free. Some cloud providers have free tiers.
Can I use it offline?
Yes! With Ollama installed, you can use local models entirely offline.
Which provider should I use?
- For privacy: Ollama (local)
- For speed: Groq
- For quality: OpenAI
- For cost: Ollama or free tiers