cli-llm-chat: The Command-Line Rebel’s Answer to AI Madness

Forget the flashy buttons and over-engineered UIs — real hackers live and die by the command line. Welcome to cli-llm-chat, where the chaos of AI meets the simplicity (and brutality) of a terminal window. Whether you’re here to automate conversations, troll your friends on Telegram, or just dive deep into the world of large language models (LLMs), this project throws away the gloss and gives you raw, unfiltered power.

What makes it stand out? No more fluff, no unnecessary graphical interfaces, and no hand-holding. Just pure code-fueled mayhem with your terminal as the only battleground. And with Telegram integration? Yeah, you can spread the chaos far beyond your local machine.

So grab your keyboard, your coffee (or something stronger), and let’s hack the planet—one command at a time.

🚀 Features: What Makes cli-llm-chat the Ultimate Command-Line Beast

No bloated interfaces or complicated setups — cli-llm-chat is built for those who live by the terminal and aren’t afraid to get their hands dirty. Here’s what this stripped-down AI juggernaut brings to the table:

  • No GUI Nonsense: No windows, no icons, no distractions. Just you, your terminal, and the power to bend an AI to your will.
  • AI on Overdrive: Built to work with large language models, this thing runs like a chaotic dream, giving you responses that range from helpful to borderline reality-breaking.
  • Fully Customizable: Want to tweak the AI until it thinks it’s a cyberpunk street poet? Go for it. The system messages and settings are yours to shape however you want.
  • System Message Shenanigans: Turn the AI into anything — an old-school hacker, a sci-fi villain, or your personal assistant who’s had one too many energy drinks. Your creativity sets the mood.
  • Conversation History: Even in the middle of digital anarchy, you might want to keep track of the madness. cli-llm-chat keeps a log, so you don’t lose those epic conversations.
  • Telegram Integration: Bring the chaos to your Telegram chats! Use the bot and let loose on unsuspecting friends, or just have an AI buddy on standby.
  • Command Control: Both CLI and Telegram versions come with a set of commands that let you fine-tune responses, adjust settings on the fly, and more. It’s like having cheat codes for an AI.

🛠 Installation & Environment Variables: Setting Up the Madness

Ready to unleash cli-llm-chat? Buckle up, ’cause it’s time to get your hands dirty. Whether you’re running it locally or bringing it to Telegram, the setup is a breeze (if you’re not afraid of a little terminal work). Here’s how you can get this beast up and running:

Installation

  1. Clone the repo: Get the source code by running:

     git clone https://github.com/psyb0t/cli-llm-chat.git

  2. Enter the project folder: Welcome to the madhouse:

     cd cli-llm-chat

  3. Install dependencies: Time to grab the digital essentials:

     pip install -r requirements.txt
  4. Optional but encouraged: Sacrifice a rubber duck to the coding gods for smooth sailing.

Environment Variables: Tweaking the Beast

cli-llm-chat gives you full control over how the AI behaves. To customize it, you’ll need to set a few environment variables. These options are your keys to unleashing the chaos:

  • HF_TOKEN: Your Hugging Face API key (grab one from your account settings at https://huggingface.co/settings/tokens).
  • MODEL_NAME: Choose the AI model you want to use (default: mistralai/Mistral-7B-Instruct-v0.3). You can point to a local directory too if you have models stored.
  • MODEL_LOAD_IN_4BIT: Squeeze that model into 4 bits of precision for low-end hardware (default: false).
  • MODEL_LOAD_IN_8BIT: If 4 bits won’t cut it, try 8 bits (default: false).
  • TOKENIZER_NAME: Set this if you want a different tokenizer; defaults to MODEL_NAME.
  • CHAT_TEMPLATE: Use a custom chat template if you’re feeling creative.
  • LORA_WEIGHTS: Fine-tune with LoRA weights for more advanced setups.
  • ASSISTANT_NAME: Name your AI assistant (default: “AI”). Make it personal!
  • SYSTEM_MESSAGE: Set the AI’s vibe—whether you want it to be a pirate, a hacker, or an over-caffeinated intern.
  • TEMPERATURE: Control how creative (or crazy) the AI gets (default: 0.7).
  • MAX_NEW_TOKENS: Set a limit on how much the AI talks back (default: 256 tokens).
  • TOP_P: Adjust the randomness of responses (default: 0.95).
  • TOP_K: More sampling tweaks (default: 40).
  • REPETITION_PENALTY: Prevent the AI from looping the same phrases (default: 1.1).
  • HISTORY_LENGTH: How many past exchanges the AI keeps in context before older ones are forgotten (default: 10).
  • DEBUG: Want to see what’s happening under the hood? Set this to true (default: false).
  • DEVICE: Whether you’re rolling with CUDA or CPU, choose your weapon.
  • ENABLE_SKELETON_KEY_JAILBREAK: For the daredevils, use this to jailbreak models (default: false).
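Under the hood, all of these knobs presumably just get read from the environment with the defaults listed above. Here’s a hedged sketch of what that might look like — the helper functions and the `config` dict are illustrative, not the project’s actual code:

```python
import os

def env_str(name, default=None):
    """Read a string env var, falling back to a default."""
    return os.environ.get(name, default)

def env_bool(name, default=False):
    """Treat 'true'/'1'/'yes' (any case) as True."""
    return os.environ.get(name, str(default)).strip().lower() in ("true", "1", "yes")

def env_float(name, default):
    return float(os.environ.get(name, default))

def env_int(name, default):
    return int(os.environ.get(name, default))

# Defaults mirror the list above.
config = {
    "model_name": env_str("MODEL_NAME", "mistralai/Mistral-7B-Instruct-v0.3"),
    "load_in_4bit": env_bool("MODEL_LOAD_IN_4BIT", False),
    "temperature": env_float("TEMPERATURE", 0.7),
    "max_new_tokens": env_int("MAX_NEW_TOKENS", 256),
    "top_p": env_float("TOP_P", 0.95),
    "top_k": env_int("TOP_K", 40),
    "repetition_penalty": env_float("REPETITION_PENALTY", 1.1),
    "history_length": env_int("HISTORY_LENGTH", 10),
    "debug": env_bool("DEBUG", False),
}
```

The nice part of this pattern: anything you don’t export just falls back to a sane default, so a bare `python chat.py` still works.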

Additional Variables for Telegram:

  • TELEGRAM_BOT_TOKEN: Get your bot token from @BotFather.
  • TELEGRAM_BOT_USER_DATA_FILE: Path to where you want to store user data.
  • TELEGRAM_BOT_SUPERUSER_CHAT_ID: Set the superuser ID to grant yourself ultimate control (optional but recommended).
  • TELEGRAM_BOT_SPLIT_RESPONSE_NEWLINES: Whether to spam responses by splitting them at newlines (default: false).
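The TELEGRAM_BOT_SPLIT_RESPONSE_NEWLINES flag presumably decides whether one AI reply lands as one Telegram message or several. A minimal sketch of what that split might look like (the function name is mine, not the project’s):

```python
def split_response(text: str, split_newlines: bool) -> list[str]:
    """Return the list of messages to send for one AI response.

    With splitting enabled, each non-empty line becomes its own
    Telegram message; otherwise the whole response goes out as one.
    """
    if not split_newlines:
        return [text]
    return [line for line in text.splitlines() if line.strip()]
```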

Example of setting up your environment:

export MODEL_NAME="NousResearch/Nous-Hermes-Llama2-13b"
export TEMPERATURE=0.9
export MAX_NEW_TOKENS=512
export DEBUG=true
export TELEGRAM_BOT_TOKEN="your_telegram_bot_token_here"
export TELEGRAM_BOT_USER_DATA_FILE="/path/to/user_data.json"
export TELEGRAM_BOT_SUPERUSER_CHAT_ID="your_chat_id_here"

Set these variables before running the script, and you’re ready to dive into AI madness.

🚀 Usage: Controlling the AI Mayhem

Once you’ve got cli-llm-chat installed and your environment variables set, it’s time to dive headfirst into the chaos. Whether you’re using the CLI version or integrating it with Telegram, this beast gives you full control over how the AI responds and behaves. And don’t worry, it comes with a handy set of commands to help you steer the madness.

CLI Version

To run the CLI version and start chatting with the AI, just fire up the script:

python chat.py

From there, it’s simple:

  • Type your message: Spill your thoughts, ask questions, or just keysmash for fun.
  • Get a response: The AI will respond based on your input and the settings you’ve configured. Be ready for anything, from mind-bending insights to digital absurdity.

Want to mess around with the AI’s settings mid-chat? You’ve got a set of commands to play with:

CLI Commands:

  • /temp <value>: Adjust the AI’s creativity. Higher values make it more unpredictable.
  • /max_tokens <value>: Control how long the AI’s response will be.
  • /top_p <value>: Fine-tune the randomness of its responses.
  • /top_k <value>: Another way to adjust the sampling — tweak this to influence how diverse the responses are.
  • /repetition_penalty <value>: Make sure the AI doesn’t repeat itself too much (or maybe you want it to?).
  • /system <message>: Rewrite the AI’s personality on the fly. Make it think it’s a pirate, a hacker, or whatever you want.
  • /debug true|false: Peek into the engine room and see what’s happening behind the scenes.
  • /clear: Wipe the chat history for a clean slate.
  • /history: Relive the madness by viewing past interactions.
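If you’re curious how slash commands like these can be wired up, here’s one plausible shape for the parser — illustrative only, not the project’s actual implementation. A slash-prefixed line mutates the settings; anything else is a normal chat message:

```python
def handle_command(line: str, settings: dict) -> bool:
    """Apply a /command to the settings dict.

    Returns True if the line was a command, False if it's a normal
    chat message. Names and casting mirror the command list above.
    """
    if not line.startswith("/"):
        return False
    parts = line.split(maxsplit=1)
    cmd, arg = parts[0][1:], parts[1] if len(parts) > 1 else ""
    numeric = {
        "temp": ("temperature", float),
        "max_tokens": ("max_new_tokens", int),
        "top_p": ("top_p", float),
        "top_k": ("top_k", int),
        "repetition_penalty": ("repetition_penalty", float),
    }
    if cmd in numeric:
        key, cast = numeric[cmd]
        settings[key] = cast(arg)
    elif cmd == "system":
        settings["system_message"] = arg
    elif cmd == "debug":
        settings["debug"] = arg.strip().lower() == "true"
    elif cmd == "clear":
        settings["history"] = []
    return True
```

The boolean return lets the main loop decide whether to forward the line to the model or swallow it as a command.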

Telegram Version

Want to take the chaos to Telegram? Easy. Set up your bot with @BotFather, configure the necessary environment variables, and you’re ready to go.

Run the script:

python telegram_chatbot.py

Now, find your bot on Telegram and start chatting. The AI will respond to your messages, and you can hit it with commands to shape the conversation however you like.

Telegram Commands:

Regular User Commands:

  • /start: Get the party started, or if you’re lost, get a reminder on how to proceed.
  • /clear: Wipe the current chat and start fresh.
  • /history: Review the madness of your previous chats.
  • /help or /?: Feeling lost? Get a list of commands to guide you.

Superuser Commands (for those with ultimate power):

  • /temp <value>: Adjust how creative (or wild) the AI gets.
  • /max_tokens <value>: Set the max number of tokens the AI will respond with.
  • /top_p <value>: Tweak how much randomness you want in the response.
  • /top_k <value>: Another way to influence randomness — experiment with it to see how the AI reacts.
  • /repetition_penalty <value>: Force the AI to keep things fresh or, you know, let it loop if that’s your thing.
  • /system <message>: Reprogram the AI on the fly — make it anything you want.
  • /debug true|false: Peek into the backend and see what’s really going on.
  • /users: View a list of users interacting with the bot.

Pro tip: Superuser commands are only available if you’ve set the TELEGRAM_BOT_SUPERUSER_CHAT_ID variable. It’s worth it for the ultimate control.
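Gating like this usually boils down to comparing the incoming chat ID against the configured one. A hedged sketch of how that check might work (the function and command set here are illustrative; the real bot’s logic may differ):

```python
import os

# Commands reserved for the superuser, per the list above.
SUPERUSER_COMMANDS = {"temp", "max_tokens", "top_p", "top_k",
                      "repetition_penalty", "system", "debug", "users"}

def is_allowed(command: str, chat_id: str) -> bool:
    """Allow superuser commands only for the configured chat ID."""
    if command not in SUPERUSER_COMMANDS:
        return True  # regular commands are open to everyone
    superuser = os.environ.get("TELEGRAM_BOT_SUPERUSER_CHAT_ID")
    return superuser is not None and chat_id == superuser
```

Note that with the env var unset, the superuser commands are simply locked for everyone, which is a safe default.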

And that’s about it! Check it out here: https://github.com/psyb0t/cli-llm-chat