If you want to host it behind your own UI or a third-party UI, you can launch the OpenAI-compatible server, expose it with a tunnelling service such as Tunnelmole or ngrok, and then enter the credentials appropriately.
You can find suitable UIs from third party repos:
Please note that some third-party providers only offer the standard `gpt-3.5-turbo`, `gpt-4`, etc. models, so you will have to add your own custom model inside the code. Here is an example of how to create a UI with any custom model name.
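Whichever UI you use, the requests it sends follow the OpenAI format. A minimal sketch of calling the server directly with a custom model name (this assumes the server listens on `localhost:7860` and exposes the standard chat-completions endpoint; `my-custom-model` is a placeholder for whatever name your server registers):

```shell
# Build an OpenAI-style chat request with a custom model name.
# "my-custom-model" and port 7860 are placeholders; substitute your own.
PAYLOAD='{
  "model": "my-custom-model",
  "messages": [{"role": "user", "content": "Hello!"}]
}'

# Send it to the locally running OpenAI-compatible server.
curl -s http://localhost:7860/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed (is the server running on port 7860?)"
```

The same payload works against the tunnel URL once it is set up; only the host changes.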
Tunnelmole is an open source tunnelling tool. You can find its source code on GitHub. Here's how you can use Tunnelmole:
- Install Tunnelmole with `curl -O https://install.tunnelmole.com/9Wtxu/install && sudo bash install`. (On Windows, download `tmole.exe`.) Head over to the README for other installation methods, such as `npm` or building from source.
- Run `tmole 7860` (replace `7860` with your listening port if it is different). The output will display two URLs: one HTTP and one HTTPS. It's best to use the HTTPS URL for better privacy and security.
```
➜  ~ tmole 7860
http://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860
https://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860
```
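Once the tunnel is up, any OpenAI-compatible client can be pointed at the HTTPS URL instead of `api.openai.com`. A minimal sketch reusing the example URL from the output above (recent OpenAI SDKs read the `OPENAI_BASE_URL` environment variable; substitute the URL printed in your own terminal, and whatever API key your server expects):

```shell
# Point OpenAI-compatible clients at the tunnel instead of api.openai.com.
# Replace the URL with the HTTPS URL from your own tmole output.
export OPENAI_BASE_URL="https://bvdo5f-ip-49-183-170-144.tunnelmole.net/v1"
export OPENAI_API_KEY="dummy-key"   # many local servers accept any key

# Quick smoke test: list the models the server exposes.
curl -s "$OPENAI_BASE_URL/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY" || true
```

These are the "credentials" to enter in a third-party UI: the tunnel URL as the API base and the key your server is configured with.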
ngrok is a popular closed-source tunnelling tool. First download and install it from ngrok.com. Here's how to use it to expose port 7860:

```
ngrok http 7860
```
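While ngrok is running, it also serves a local inspection API on port 4040, which is a convenient way to grab the public URL from a script rather than copying it from the terminal. A sketch, assuming ngrok's default web-interface address:

```shell
# Ask the running ngrok agent for its public URL via the local
# inspection API (default address 127.0.0.1:4040). The JSON response
# contains a "tunnels" array; extract the first https URL with grep.
curl -s http://127.0.0.1:4040/api/tunnels \
  | grep -o 'https://[^"]*' \
  | head -n 1
```

As with Tunnelmole, use the resulting HTTPS URL as the API base in your UI.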