Frea is an interactive terminal-based chat application powered by Google's generative AI, designed to provide seamless user interactions with advanced natural language processing capabilities. This application offers a variety of features, including multi-line input, special commands, and a customizable loading animation.
- Interactive Chat Interface: Engage in dynamic conversations with a generative AI model.
- Customization: Configure API keys, loading styles, and instruction files via `config.ini`.
- Special Commands: Use commands like `exit`, `clear`, `reset`, `print`, `reconfigure`, and `help`.
- Multi-Line Input: Easily handle multi-line user inputs.
- Loading Animations: Enjoy visually appealing loading animations while waiting for responses.
- Safety Settings: Ensure content safety with predefined thresholds for harmful content categories.
- Conversation Log: Save conversation logs to a file.
- Model Switching: Easily switch between different AI models.
- OpenAI API key
- Gemini API key
- Python 3.8 or later
- Required Python packages listed in `requirements.txt`
- Clone the repository:

  ```shell
  git clone https://github.com/1999AZZAR/frea.git
  cd frea
  ```

- Create a virtual environment:

  ```shell
  python3 -m venv .venv
  ```

- Activate the virtual environment:

  ```shell
  source .venv/bin/activate
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Run the code:

  ```shell
  cd code
  python main.py
  ```

- Configuration:
  - On the first run, the application will prompt for the API key, loading style, and instruction file path.
  - These settings will be saved in a `config.ini` file for future use.
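A saved `config.ini` might look like the following sketch; the section and key names here are illustrative assumptions, since the actual keys are defined by the application:

```ini
[DEFAULT]
api_key = your_api_key_here
loading_style = L1
instruction_file = /path/to/instruction_file.txt
```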
To start the Frea application, run:

```shell
cd src
python main.py
```
- `exit`: Exit the application.
- `clear`: Clear the terminal screen.
- `reset`: Reset the chat session.
- `print`: Save the conversation log to a file.
- `reconfigure`: Reconfigure the settings.
- `help`: Display help information.
- `model`: Switch between models and services.
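A simple way to route commands like these is a dictionary dispatch; the sketch below is illustrative only, since the real handling lives inside Frea's `GeminiChatConfig` class and its names may differ:

```python
# Illustrative command dispatcher; handler names and messages are
# assumptions, not Frea's actual implementation.
def handle_command(command: str) -> str:
    handlers = {
        "exit": lambda: "Exiting the application.",
        "clear": lambda: "Clearing the terminal screen.",
        "reset": lambda: "Resetting the chat session.",
        "print": lambda: "Saving the conversation log.",
        "reconfigure": lambda: "Reconfiguring settings.",
        "help": lambda: "Displaying help information.",
        "model": lambda: "Switching models.",
    }
    handler = handlers.get(command)
    return handler() if handler else ""  # empty string: not a command

print(handle_command("help"))  # -> Displaying help information.
```

Input that matches no key falls through and is treated as a normal chat message.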
Upon running the application, you will see a prompt to enter your message:
```
╭─ User
╰─> Hello, how are you?
```
The application will respond after processing your input, showing a loading animation in the meantime.
To enter multi-line messages, end each line with a backslash (`\`):
```
╭─ User
╰─> This is a multi-line \
    input example.
```
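Backslash continuation can be implemented by accumulating lines until one no longer ends with `\`; this is a minimal sketch of the idea, not Frea's exact code:

```python
def read_multiline(readline=input) -> str:
    """Collect lines while each ends with a backslash, then join them.
    `readline` is injectable for testing; Frea's own input handling
    may differ."""
    parts = []
    while True:
        line = readline()
        if line.endswith("\\"):
            parts.append(line[:-1])  # drop the continuation marker
        else:
            parts.append(line)
            break
    return " ".join(part.strip() for part in parts)
```

Feeding it the two lines from the example above yields the single message `This is a multi-line input example.`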
You can run system commands by prefixing them with `run`:
```
╭─ User
╰─> run ls -la
```
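A `run` command like this typically shells out through Python's standard `subprocess` module; the sketch below shows one plausible shape for `run_subprocess(command)` (the real function may print output directly rather than return it):

```python
import subprocess

def run_subprocess(command: str) -> str:
    """Execute a shell command and return its output.
    Sketch only; error handling in Frea may differ."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    # Return stdout on success, stderr otherwise.
    return result.stdout if result.returncode == 0 else result.stderr
```

`shell=True` lets the user pass full command lines such as `ls -la`, at the usual cost of shell-injection risk for untrusted input.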
On the first run, the application will guide you through creating a `config.ini` file:
```
╭─ Frea
╰─> No Configuration found. Creating configuration file.
Enter the API key: your_api_key_here
Enter the loading style (e.g., L1): L1
Enter the path to the instruction file: /path/to/instruction_file.txt
Configuration saved successfully!
```
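Persisting those answers is a natural fit for the standard-library `configparser` module; the section and key names below are assumptions for illustration, not necessarily the ones Frea uses:

```python
import configparser

def save_config(path, api_key, loading_style, instruction_file):
    """Write the three first-run settings to an INI file (sketch)."""
    config = configparser.ConfigParser()
    config["DEFAULT"] = {
        "api_key": api_key,
        "loading_style": loading_style,
        "instruction_file": instruction_file,
    }
    with open(path, "w") as f:
        config.write(f)

save_config("config.ini", "your_api_key_here", "L1",
            "/path/to/instruction_file.txt")
```

On later runs the same module reads the file back with `ConfigParser.read(path)`, which is how the saved settings can be reused without re-prompting.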
You can get your own Gemini API key from Google AI Studio.
To update the configuration at any time, use the `reconfigure` command within the application.
- `Color`: Contains ANSI escape codes for terminal colors.
- `GeminiChatConfig`: Handles configuration, API initialization, and command processing.
- `GeminiChat`: Main class that runs the chat application.
- `cursor_hide()`: Hides the terminal cursor.
- `cursor_show()`: Shows the terminal cursor.
- `remove_emojis(text)`: Removes emojis from text.
- `run_subprocess(command)`: Executes a system command.
- `generate_chat()`: Main loop that handles user input and generates AI responses.
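Two of these helpers are small enough to sketch: cursor visibility is controlled by ANSI escape sequences, and emoji stripping is commonly done with a Unicode-range regex. The ranges below are a common approximation, not necessarily the exact ones Frea uses:

```python
import re
import sys

def cursor_hide():
    sys.stdout.write("\033[?25l")  # ANSI escape: hide cursor

def cursor_show():
    sys.stdout.write("\033[?25h")  # ANSI escape: show cursor

# Covers emoji/pictograph blocks, misc symbols, and mahjong/domino tiles;
# an approximation rather than an exhaustive emoji definition.
EMOJI_PATTERN = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F000-\U0001F02F]",
    flags=re.UNICODE,
)

def remove_emojis(text: str) -> str:
    return EMOJI_PATTERN.sub("", text)
```

Stripping emojis before printing avoids rendering glitches in terminals with incomplete Unicode support.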
The application includes predefined safety settings to block harmful content categories:
- Harassment
- Hate Speech
- Sexually Explicit
- Dangerous Content
These settings can be adjusted in the `gemini_safety_settings` method.
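With the Gemini API, such thresholds are conventionally expressed as a list of category/threshold pairs. The sketch below shows one plausible shape for `gemini_safety_settings`; the category names follow the Gemini API, while the specific thresholds are an assumption:

```python
def gemini_safety_settings():
    """Sketch: block medium-and-above content in the four categories
    the README lists. Thresholds here are illustrative, not Frea's
    confirmed values."""
    return [
        {"category": "HARM_CATEGORY_HARASSMENT",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_HATE_SPEECH",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ]
```

A list in this form is typically passed as the `safety_settings` argument when creating the model, so loosening or tightening a category means editing one threshold string.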
```mermaid
flowchart TD
    A[Start] --> B[Initialize AIChat Class]
    B --> C{User Input?}
    C -- "Help Command" --> D[Display Help Information]
    C -- "Exit Command" --> E[Exit Application]
    C -- "Clear Command" --> F[Clear Terminal Screen]
    C -- "Reset Command" --> G[Reset Chat Session]
    C -- "Print Command" --> H[Save Conversation Log to File]
    C -- "Model Command" --> I[Change AI Model]
    C -- "Reconfigure Command" --> J[Reconfigure Settings]
    C -- "Run Command" --> K[Run Subprocess Command]
    C -- "Other Input" --> L[Process Input with AI Model]
    L --> M{API Key Initialized?}
    M -- "Yes" --> N[Generate Response with Gemini]
    M -- "No" --> O[Generate Response with OpenAI GPT]
    N --> P[Sanitize Response]
    O --> P
    P --> Q[Print Response]
    Q --> R[Log Conversation]
    Q --> C
    E --> S[Goodbye Message]
    S --> T[End]
    subgraph Error Handling
        P --> U{Error Occurred?}
        U -- "Yes" --> V[Display Error Message]
        U -- "No" --> W[Continue]
    end
    W --> Q
    subgraph Generate Response
        L --> X[Add User Input to Chat History]
        X --> Y[Determine AI Model to Use]
        Y --> M
    end
```
Note: you can also incorporate Frea into your Bash terminal.
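One common way to do that is a shell function in `~/.bashrc`; the path below is a placeholder for wherever you cloned the repository, not a path the project prescribes:

```shell
# Hypothetical ~/.bashrc snippet; adjust FREA_DIR to your clone location.
FREA_DIR="$HOME/frea"

frea() {
    # Activate the project's virtualenv, run the app, then clean up.
    source "$FREA_DIR/.venv/bin/activate"
    python "$FREA_DIR/code/main.py"
    deactivate
}
```

After reloading your shell (`source ~/.bashrc`), typing `frea` launches the application from any directory.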
By following this README, you should be able to set up, configure, and run the Frea application seamlessly. Enjoy interacting with your AI chat assistant!