# Privacy-First Command-Line AI for Linux

![AI_ENV](logo.webp)

Unlock the power of AI, right from your Linux terminal. This project delivers a fully local AI environment, running open-source language models directly on your machine. No cloud. No GAFAM. Just full privacy, control, and the freedom to manipulate commands in your shell.

## How it works

* [Ollama](https://ollama.com/) runs language models on the local machine.
* [openedai-speech](https://github.com/matatonic/openedai-speech) provides text-to-speech capability.
* [speaches-ai](https://github.com/speaches-ai/speaches) provides transcription, translation, and speech generation.
* [nginx](https://nginx.org/en/) adds authentication to the API.
* [AIChat](https://github.com/sigoden/aichat) is the LLM CLI tool, featuring a Shell Assistant, Chat-REPL, RAG, and AI Tools & Agents.

Everything is free, open source, and automated with Docker Compose and shell scripts.

## Requirements

To run this project efficiently, a powerful computer with a recent NVIDIA GPU is required. As an example, I achieved good performance with an Intel(R) Core(TM) i7-14700HX, a GeForce RTX 4050, and 32 GB of RAM using the [qwen2.5:7b](https://ollama.com/library/qwen2.5) model. The [qwen2.5-coder:32b](https://ollama.com/library/qwen2.5-coder:32b) model is usable but slow with this configuration. Note that more modest models like [llama3.2:3b](https://ollama.com/library/llama3.2) require far fewer resources and still let you do a lot.

On GNU/Linux, you must install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). It is probably possible to run the project on other GPUs or on modern MacBooks, but that is outside the scope of this project.

## How to launch the server

Choose the models you wish to use in the `docker-compose.yaml` file:

```
environment:
  - MODELS=....
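  # Hypothetical sketch (not copied from this repo's compose file): with the
  # NVIDIA Container Toolkit installed, a service typically requests the GPU
  # through a Compose device reservation such as:
  #
  # deploy:
  #   resources:
  #     reservations:
  #       devices:
  #         - driver: nvidia
  #           capabilities: [gpu]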
```

Add an API key to secure server access by creating a `.env` file like this:

```
LLM_API_KEY=1234567890
```

Next, start the servers and their configuration with Docker Compose:

```bash
docker compose up --build -d
```

## How to use

The `setup_desktop.sh` script copies a compiled static build of [AIChat](https://github.com/sigoden/aichat) from a container to your host and configures the tool.

### AIChat essentials

A request to populate a demo database:

```bash
aichat "10 fictitious identities with username, firstname, lastname and email then display in json format. The data must be realistic, especially from known email domains."
```

Request a code snippet:

```bash
aichat -m ollama:qwen2.5-coder:32b -c "if a docker image exist in bash"
```

Launch a chatbot while maintaining context:

```bash
aichat -s
```

Pipe a command and interpret the result:

```bash
ps aux | aichat 'which process use the most memory'
```

Using roles:

```bash
aichat -r short "tcp port of mysql"
./tools/speech.sh synthesize --play --lang en --voice bryce "$(aichat -r english-translator "Bienvenue dans le monde de l'AI et de la ligne de commande.")"
```

Go to the [AIChat](https://github.com/sigoden/aichat) website for other possible use cases.

### Text To Speech & Speech To Text

For these two features, use the `speech.sh` script like this:

```bash
./speech.sh synthesize --play --lang fr --voice pierre "Bonjour, aujourd'hui nous sommes le $(date +%A\ %d\ %B\ %Y)."
./speech.sh transcript --lang fr --filename speech.wav
```

## How to Use Remotely

The API authentication via nginx allows you to expose the API on the internet and use it remotely. By adding a reverse proxy like Caddy in front of it, you can also add TLS encryption. This way, you can securely use this environment remotely.

To use the script tools in a remote context, set the environment variables `TTS_API_HOST` and `STT_API_HOST`, and adjust the AIChat config (`~/.config/aichat/config.yaml`).
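As a sketch of the AIChat side, a remote setup can declare the nginx-protected endpoint as an OpenAI-compatible client. The client name, model, and key below are placeholders; check AIChat's documentation for the exact fields:

```yaml
# ~/.config/aichat/config.yaml — hypothetical sketch for remote use
model: ollama:qwen2.5:7b
clients:
  - type: openai-compatible
    name: ollama
    # Point api_base at the remote, nginx-protected endpoint instead of localhost.
    api_base: https://your-remote-domain/v1
    api_key: 1234567890   # the LLM_API_KEY from your .env
```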
Example:

```bash
export TTS_API_HOST="https://your-remote-domain"
export STT_API_HOST="https://your-remote-domain"
./tools/speech.sh ...
```
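The override pattern behind these variables can be sketched with POSIX parameter expansion; the local fallback URL here is an assumption for illustration, not taken from `speech.sh`:

```shell
#!/bin/sh
# Hypothetical sketch: use TTS_API_HOST when exported, otherwise fall back
# to a local server (the default URL below is illustrative only).
TTS_API_HOST="${TTS_API_HOST:-http://localhost:8000}"
echo "Using TTS endpoint: $TTS_API_HOST"
```

With `TTS_API_HOST` unset, the script talks to the local server; after the exports above, the same script targets the remote domain without any code change.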