
Synochatgpt

Inspired by synochat, chatgpt, and ollama.

The goal is to run an LLM 100% locally and integrate it as a chatbot with Synology Chat.

Usage

Install Ollama and pull llama3:8b on your Mac:

ollama pull llama3:8b
ollama serve
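Once `ollama serve` is running, the model is reachable over Ollama's local HTTP API (default port 11434). A minimal sketch of querying it from Python via the `/api/generate` endpoint with streaming disabled (endpoint and field names per Ollama's API; `ask_ollama` is an illustrative helper, not part of this repo):

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust if you changed OLLAMA_HOST.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt, model="llama3:8b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt):
    """Send a prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False`, Ollama returns a single JSON object whose `response` field holds the full completion, which keeps the client code trivial.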

The app also needs your Synology Chat bot's token and incoming URL (host); set them as environment variables before running it:

export SYNOLOGY_TOKEN='...'
export SYNOLOGY_INCOMING_URL='...'
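The app can then read both variables at startup. A small sketch that fails fast when either one is missing (`load_synology_config` is a hypothetical helper for illustration):

```python
import os

def load_synology_config():
    """Read the bot token and incoming URL from the environment,
    exiting with a clear message if either variable is unset."""
    missing = [name for name in ("SYNOLOGY_TOKEN", "SYNOLOGY_INCOMING_URL")
               if name not in os.environ]
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    return {
        "token": os.environ["SYNOLOGY_TOKEN"],
        "incoming_url": os.environ["SYNOLOGY_INCOMING_URL"],
    }
```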

If needed, disable the proxy for localhost HTTP access (NO_PROXY takes a host, not a URL):

export NO_PROXY=127.0.0.1

Run

pip install -r requirements.txt

python synochatgpt.py
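For reference, replies travel back to the chat through the incoming webhook: Synology Chat expects a form-encoded `payload` field whose value is a JSON object with a `text` key. A hedged sketch of that half of the flow (payload shape per Synology Chat's incoming-webhook convention; `send_to_synology` is illustrative, not this repo's code):

```python
import json
import os
import urllib.parse
import urllib.request

def build_payload(text):
    """Form-encode a message the way Synology Chat's incoming webhook
    expects: a 'payload' field holding JSON with a 'text' key."""
    return urllib.parse.urlencode({"payload": json.dumps({"text": text})}).encode()

def send_to_synology(text):
    """POST a message to the Synology Chat incoming webhook URL."""
    url = os.environ["SYNOLOGY_INCOMING_URL"]
    req = urllib.request.Request(url, data=build_payload(text))
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```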

TODO

  • Fine tune
  • Docker
  • RAG

About

Synology Chat + Ollama + ChatGPT => synochatgpt
