Add Chainlit Studio #1728

Merged 2 commits on Sep 16, 2024
17 changes: 15 additions & 2 deletions tutorials/deploy.md

This document shows how you can serve a LitGPT model for deployment.


 
## Serve an LLM with LitServe

This section illustrates how to set up a minimal, highly scalable inference server for a phi-2 LLM using `litgpt serve`.
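To preview what interacting with such a server looks like, here is a client-side sketch. It is illustrative only: the `/predict` endpoint path and the `prompt` field name are assumptions about the server's request schema, not details taken from this diff, so adjust them to match your running server.

```python
import json
from urllib import request


def build_request(prompt: str, url: str = "http://127.0.0.1:8000/predict") -> request.Request:
    # Package the prompt as a JSON POST body. The endpoint path and the
    # "prompt" field name are assumptions -- check your server's schema.
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(url, data=payload, headers={"Content-Type": "application/json"})


# With a server running, send the request and print the raw JSON response:
# with request.urlopen(build_request("Example input")) as resp:
#     print(json.load(resp))
```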


 
### Optional: Use the streaming mode

The two-step procedure described above returns the complete response all at once. If you want to stream the response token by token instead, start the server with the streaming option enabled.
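With streaming enabled, the response arrives as a sequence of byte chunks rather than one JSON document. The sketch below shows one way to reassemble newline-delimited JSON chunks on the client side; the `output` field name is an assumption about the per-token payload, so adapt it to the server's actual schema.

```python
import json
from typing import Iterable, Iterator


def iter_tokens(chunks: Iterable[bytes]) -> Iterator[str]:
    # Accumulate raw chunks in a buffer, since a single JSON line may be
    # split across chunk boundaries, then yield one token per complete line.
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            if line.strip():
                # The "output" key is an assumed field name.
                yield json.loads(line)["output"]
```

For example, a line split across two chunks is still decoded as a single token once the closing newline arrives.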


 
## Serve an LLM UI with Chainlit

If you are interested in developing a simple ChatGPT-like UI prototype, see the Chainlit tutorial in the following Studio:

<a target="_blank" href="https://lightning.ai/lightning-ai/studios/chatgpt-like-llm-uis-via-chainlit">
<img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
</a>


